Chapter 2. Eclipse Temurin features
Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 17 release of Eclipse Temurin includes, see OpenJDK 17.0.11 Released.

New features and enhancements

Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 17.0.11 release:

XML Security for Java updated to Apache Santuario 3.0.3

In OpenJDK 17.0.11, the XML signature implementation is based on Apache Santuario 3.0.3. This enhancement introduces the following four SHA-3-based RSA-MGF1 SignatureMethod algorithms: SHA3_224_RSA_MGF1, SHA3_256_RSA_MGF1, SHA3_384_RSA_MGF1, and SHA3_512_RSA_MGF1.

Because the javax.xml.crypto.dsig.SignatureMethod API cannot be modified in update releases to provide constant values for the new algorithms, use the following equivalent string literal values for these algorithms (a code sketch appears at the end of this chapter):

http://www.w3.org/2007/05/xmldsig-more#sha3-224-rsa-MGF1
http://www.w3.org/2007/05/xmldsig-more#sha3-256-rsa-MGF1
http://www.w3.org/2007/05/xmldsig-more#sha3-384-rsa-MGF1
http://www.w3.org/2007/05/xmldsig-more#sha3-512-rsa-MGF1

This enhancement also introduces support for the ED25519 and ED448 elliptic curve algorithms, which are both Edwards-curve Digital Signature Algorithm (EdDSA) signature schemes.

Note: In contrast to the upstream community version of Apache Santuario 3.0.3, the JDK still supports the here() function. However, future support for the here() function is not guaranteed. You should avoid using here() in new XML signatures. You should also update any XML signatures that currently use here() to stop using this function. The here() function is enabled by default. To disable the here() function, ensure that the jdk.xml.dsig.hereFunctionSupported system property is set to false.

See JDK-8319124 (JDK Bug System).

Fixed indefinite hanging of jspawnhelper

In earlier releases, if the parent JVM process failed before successful completion of the handshake between the JVM and a jspawnhelper process, the jspawnhelper process could remain unresponsive indefinitely. In OpenJDK 17.0.11, if the parent process fails prematurely, the jspawnhelper process receives an end-of-file (EOF) signal from the communication pipe. This enhancement helps to ensure that the jspawnhelper process shuts down correctly.

See JDK-8307990 (JDK Bug System).

SystemTray.isSupported() method returns false on most Linux desktops

In OpenJDK 17.0.11, the java.awt.SystemTray.isSupported() method returns false on systems that do not support the SystemTray API correctly. This enhancement is in accordance with the SystemTray API specification. The SystemTray API is used to interact with the taskbar in the system desktop to provide notifications. SystemTray might also include an icon representing an application.

Due to an underlying platform issue, GNOME desktop support for taskbar icons has not worked correctly for several years. This platform issue affects the JDK's ability to provide SystemTray support on GNOME desktops. This issue typically affects systems that use GNOME Shell 44 or earlier.

Note: Because the lack of correct SystemTray support is a long-standing issue on some systems, this API enhancement to return false on affected systems is likely to have a minimal impact on users.

See JDK-8322750 (JDK Bug System).
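The following is a minimal sketch, not taken from the release notes, of how an application can guard tray usage against this behavior change by checking SystemTray.isSupported() before adding a TrayIcon. The icon path and application name are placeholders.

import java.awt.Image;
import java.awt.SystemTray;
import java.awt.Toolkit;
import java.awt.TrayIcon;

public class TrayCheck {
    public static void main(String[] args) throws Exception {
        // On desktops without working tray support (for example, affected GNOME Shell
        // versions), isSupported() now returns false and the application should fall
        // back to another notification mechanism instead of adding a tray icon.
        if (!SystemTray.isSupported()) {
            System.out.println("System tray not supported; skipping tray icon.");
            return;
        }
        Image icon = Toolkit.getDefaultToolkit().getImage("icon.png"); // placeholder path
        TrayIcon trayIcon = new TrayIcon(icon, "My application");      // placeholder name
        SystemTray.getSystemTray().add(trayIcon);
        trayIcon.displayMessage("My application", "Tray icon installed", TrayIcon.MessageType.INFO);
    }
}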
Certainly R1 and E1 root certificates added

In OpenJDK 17.0.11, the cacerts truststore includes two Certainly root certificates:

Certificate 1
Name: Certainly
Alias name: certainlyrootr1
Distinguished name: CN=Certainly Root R1, O=Certainly, C=US

Certificate 2
Name: Certainly
Alias name: certainlyroote1
Distinguished name: CN=Certainly Root E1, O=Certainly, C=US

See JDK-8321408 (JDK Bug System).

Revised on 2024-04-26 14:11:06 UTC
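For the XML signature enhancement described earlier in these notes, the following is a minimal sketch, not part of the release notes, of requesting one of the new SHA-3-based RSA-MGF1 algorithms through the standard javax.xml.crypto.dsig API by passing the algorithm URI as a string literal. It assumes a JDK that includes this enhancement; the chosen URI is just one of the four listed above.

import javax.xml.crypto.dsig.SignatureMethod;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.spec.SignatureMethodParameterSpec;

public class Sha3RsaMgf1Example {
    public static void main(String[] args) throws Exception {
        XMLSignatureFactory factory = XMLSignatureFactory.getInstance("DOM");
        // The SignatureMethod API has no constant for the SHA-3 RSA-MGF1 algorithms,
        // so the algorithm URI is passed as a string literal.
        SignatureMethod method = factory.newSignatureMethod(
                "http://www.w3.org/2007/05/xmldsig-more#sha3-256-rsa-MGF1",
                (SignatureMethodParameterSpec) null);
        System.out.println("Signature method algorithm: " + method.getAlgorithm());
    }
}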
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.11/openjdk-temurin-features-17-0-11_openjdk
Chapter 258. OptaPlanner Component
Available as of Camel version 2.13

The optaplanner: component solves the planning problem contained in a message with OptaPlanner. For example: feed it an unsolved Vehicle Routing problem and it solves it. The component supports a consumer, which acts as a BestSolutionChangedEvent listener, and a producer for processing Solution and ProblemFactChange messages.

Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-optaplanner</artifactId>
    <version>x.x.x</version><!-- use the same version as your Camel core version -->
</dependency>

258.1. URI format

optaplanner:solverConfig[?options]

The solverConfig is the classpath-local URI of the solver configuration, for example /org/foo/barSolverConfig.xml. You can append query options to the URI in the following format: ?option=value&option=value&...

258.2. OptaPlanner Options

The OptaPlanner component has no options.

The OptaPlanner endpoint is configured using URI syntax:

optaplanner:configFile

with the following path and query parameters:

258.2.1. Path Parameters (1 parameter):

configFile (Required): Specifies the location of the solver configuration file. Type: String.

258.2.2. Query Parameters (7 parameters):

solverId (common): Specifies the solverId to use as the key for the solver instance. Default: DEFAULT_SOLVER. Type: String.
bridgeErrorHandler (consumer): Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. Default: false. Type: boolean.
exceptionHandler (consumer): Lets the consumer use a custom ExceptionHandler. Note that if the bridgeErrorHandler option is enabled, this option is not in use. By default, the consumer deals with exceptions, which are logged at WARN or ERROR level and ignored. Type: ExceptionHandler.
exchangePattern (consumer): Sets the exchange pattern when the consumer creates an exchange. Type: ExchangePattern.
async (producer): Specifies whether to perform operations in async mode. Default: false. Type: boolean.
threadPoolSize (producer): Specifies the thread pool size to use when async is true. Default: 10. Type: int.
synchronous (advanced): Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). Default: false. Type: boolean.

258.3. Spring Boot Auto-Configuration

The component supports 2 options, which are listed below.

camel.component.optaplanner.enabled: Enables the optaplanner component. Default: true. Type: Boolean.
camel.component.optaplanner.resolve-property-placeholders: Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. Default: true. Type: Boolean.

258.4. Message Headers

CamelOptaPlannerSolverId (Shared, String, default: null): Specifies the solverId to use.
CamelOptaPlannerIsAsync (Producer, String, default: PUT): Specifies whether to use another thread for submitting Solution instances rather than blocking the current thread.

258.5. Message Body

Camel takes the planning problem from the IN body, solves it, and returns it on the OUT body.
(since v 2.16) The IN body object supports the following use cases:

If the body is an instance of Solution, it is solved using the solver identified by solverId, either synchronously or asynchronously.
If the body is an instance of ProblemFactChange, it triggers addProblemFactChange. If the processing is asynchronous, it waits until isEveryProblemFactChangeProcessed returns true before returning the result.
If the body is none of the above types, the producer returns the best result from the solver identified by solverId.

258.6. Termination

The solving will take as long as specified in the solverConfig.

<solver>
  ...
  <termination>
    <!-- Terminate after 10 seconds, unless it's not feasible by then yet -->
    <terminationCompositionStyle>AND</terminationCompositionStyle>
    <secondsSpentLimit>10</secondsSpentLimit>
    <bestScoreLimit>-1hard/0soft</bestScoreLimit>
  </termination>
  ...
</solver>

258.6.1. Samples

Solve a planning problem that's on the ActiveMQ queue with OptaPlanner:

from("activemq:My.Queue")
    .to("optaplanner:/org/foo/barSolverConfig.xml");

Expose OptaPlanner as a REST service:

from("cxfrs:bean:rsServer?bindingStyle=SimpleConsumer")
    .to("optaplanner:/org/foo/barSolverConfig.xml");

A plain-Java producer sketch that submits a problem without defining a route follows the See Also list below.

258.7. See Also

Configuring Camel
Component
Endpoint
Getting Started
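The following is a minimal sketch, not part of the original component documentation, of submitting an unsolved problem to the producer with a ProducerTemplate and selecting a solver with the CamelOptaPlannerSolverId header. It assumes a Camel 2.x application; the unsolved planning problem object and the solver ID value are placeholders.

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;

public class OptaPlannerProducerExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();
        ProducerTemplate template = context.createProducerTemplate();

        // Placeholder: build or load an unsolved OptaPlanner Solution instance here.
        Object unsolvedProblem = buildUnsolvedProblem();

        // The producer solves the Solution in the IN body and returns the result,
        // using the solver selected by the CamelOptaPlannerSolverId header.
        Object bestSolution = template.requestBodyAndHeader(
                "optaplanner:/org/foo/barSolverConfig.xml",
                unsolvedProblem,
                "CamelOptaPlannerSolverId", "DEFAULT_SOLVER");

        System.out.println("Best solution: " + bestSolution);
        context.stop();
    }

    private static Object buildUnsolvedProblem() {
        // Hypothetical helper; return your domain-specific planning problem.
        return null;
    }
}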
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-optaplanner</artifactId> <version>x.x.x</version><!-- use the same version as your Camel core version --> </dependency>", "optaplanner:solverConfig[?options]", "optaplanner:configFile", "<solver> <termination> <!-- Terminate after 10 seconds, unless it's not feasible by then yet --> <terminationCompositionStyle>AND</terminationCompositionStyle> <secondsSpentLimit>10</secondsSpentLimit> <bestScoreLimit>-1hard/0soft</bestScoreLimit> </termination> <solver>", "from(\"activemq:My.Queue\"). .to(\"optaplanner:/org/foo/barSolverConfig.xml\");", "from(\"cxfrs:bean:rsServer?bindingStyle=SimpleConsumer\") .to(\"optaplanner:/org/foo/barSolverConfig.xml\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/optaplanner-component
Chapter 157. Ignite Compute Component
Available as of Camel version 2.17

The Ignite Compute endpoint is one of the camel-ignite endpoints, which allows you to run compute operations on the cluster by passing in an IgniteCallable, an IgniteRunnable, an IgniteClosure, or collections of them, along with their parameters if necessary. This endpoint only supports producers.

The host part of the endpoint URI is a symbolic endpoint ID; it is not used for any purpose. The endpoint tries to run the object passed in the body of the IN message as the compute job. It expects different payload types depending on the execution type.

157.1. Options

The Ignite Compute component supports 4 options, which are listed below.

ignite (producer): Sets the Ignite instance. Type: Ignite.
configurationResource (producer): Sets the resource from where to load the configuration. It can be a URI, a String (URI), or an InputStream. Type: Object.
igniteConfiguration (producer): Allows the user to set a programmatic IgniteConfiguration. Type: IgniteConfiguration.
resolvePropertyPlaceholders (advanced): Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. Default: true. Type: boolean.

The Ignite Compute endpoint is configured using URI syntax:

ignite-compute:endpointId

with the following path and query parameters:

157.1.1. Path Parameters (1 parameter):

endpointId (Required): The endpoint ID (not used). Type: String.

157.1.2. Query Parameters (8 parameters):

clusterGroupExpression (producer): An expression that returns the Cluster Group for the IgniteCompute instance. Type: ClusterGroupExpression.
computeName (producer): The name of the compute job, which will be set via IgniteCompute#withName(String). Type: String.
executionType (producer, Required): The compute operation to perform. Possible values: CALL, BROADCAST, APPLY, EXECUTE, RUN, AFFINITY_CALL, AFFINITY_RUN. The component expects different payload types depending on the operation. Type: IgniteComputeExecutionType.
propagateIncomingBodyIfNoReturnValue (producer): Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void. Default: true. Type: boolean.
taskName (producer): The task name, only applicable if using the IgniteComputeExecutionType#EXECUTE execution type. Type: String.
timeoutMillis (producer): The timeout interval for triggered jobs, in milliseconds, which will be set via IgniteCompute#withTimeout(long). Type: Long.
treatCollectionsAsCacheObjects (producer): Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc. Default: false. Type: boolean.
synchronous (advanced): Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). Default: false. Type: boolean.

157.2. Spring Boot Auto-Configuration

The component supports 5 options, which are listed below.

camel.component.ignite-compute.configuration-resource: Sets the resource from where to load the configuration. It can be a URI, a String (URI), or an InputStream. The option is a java.lang.Object type. Type: String.
camel.component.ignite-compute.enabled: Enables the ignite-compute component. Default: true. Type: Boolean.
camel.component.ignite-compute.ignite: Sets the Ignite instance. The option is an org.apache.ignite.Ignite type. Type: String.
camel.component.ignite-compute.ignite-configuration: Allows the user to set a programmatic IgniteConfiguration. The option is an org.apache.ignite.configuration.IgniteConfiguration type. Type: String.
camel.component.ignite-compute.resolve-property-placeholders: Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. Default: true. Type: Boolean.

157.2.1. Expected payload types

Each operation expects the indicated types:

CALL: A Collection of IgniteCallable, or a single IgniteCallable.
BROADCAST: IgniteCallable, IgniteRunnable, IgniteClosure.
APPLY: IgniteClosure.
EXECUTE: ComputeTask, Class<? extends ComputeTask>, or an object representing parameters if the taskName option is not null.
RUN: A Collection of IgniteRunnables, or a single IgniteRunnable.
AFFINITY_CALL: IgniteCallable.
AFFINITY_RUN: IgniteRunnable.

157.2.2. Headers used

This endpoint uses the following headers:

CamelIgniteComputeExecutionType (constant: IgniteConstants.IGNITE_COMPUTE_EXECUTION_TYPE, type: IgniteComputeExecutionType enum): Allows you to dynamically change the compute operation to perform.
CamelIgniteComputeParameters (constant: IgniteConstants.IGNITE_COMPUTE_PARAMS, type: any object or Collection of objects): Parameters for the APPLY, BROADCAST, and EXECUTE operations.
CamelIgniteComputeReducer (constant: IgniteConstants.IGNITE_COMPUTE_REDUCER, type: IgniteReducer): Reducer for the APPLY and CALL operations.
CamelIgniteComputeAffinityCacheName (constant: IgniteConstants.IGNITE_COMPUTE_AFFINITY_CACHE_NAME, type: String): Affinity cache name for the AFFINITY_CALL and AFFINITY_RUN operations.
CamelIgniteComputeAffinityKey (constant: IgniteConstants.IGNITE_COMPUTE_AFFINITY_KEY, type: Object): Affinity key for the AFFINITY_CALL and AFFINITY_RUN operations.
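The following is a minimal sketch, not part of the original component documentation, of a route that runs an IgniteCallable with the CALL execution type. It assumes the camel-ignite component has already been configured with a started Ignite instance; the endpoint ID "cluster" is only a symbolic placeholder.

import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.ignite.lang.IgniteCallable;

public class IgniteComputeExample extends RouteBuilder {

    @Override
    public void configure() {
        // The host part ("cluster") is only a symbolic endpoint ID and is not used.
        from("direct:compute")
            .to("ignite-compute:cluster?executionType=CALL");
    }

    // Usage sketch: send an IgniteCallable as the message body and read the result.
    public static String runJob(ProducerTemplate template) {
        IgniteCallable<String> job = () -> "hello from the cluster";
        return template.requestBody("direct:compute", job, String.class);
    }
}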
[ "ignite-compute:endpointId" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ignite-compute-component
Chapter 2. Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE
In OpenShift Container Platform version 4.12, you can install a cluster on IBM Z or IBM(R) LinuxONE infrastructure that you provision.

Note: While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE.

Important: Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster.

2.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process.
You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.

Note: Be sure to also review this site list if you are configuring a proxy.

2.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to:

Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.

Important: If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

2.3. Requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure.

2.3.1. Required machines for cluster installation

The smallest OpenShift Container Platform clusters require the following hosts:

Table 2.1. Minimum required hosts

One temporary bootstrap machine: The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
Three control plane machines: The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane.
At least two compute machines, which are also known as worker machines: The workloads requested by OpenShift Container Platform users run on the compute machines.
Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.12 on the following IBM hardware: IBM z16 (all models), IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s IBM(R) LinuxONE Emperor 4, IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II, IBM(R) LinuxONE Emperor, and IBM(R) LinuxONE Rockhopper Note Support for RHCOS functionality for IBM z13 all models, IBM(R) LinuxONE Emperor, and IBM(R) LinuxONE Rockhopper is deprecated. These hardware models remain fully supported in OpenShift Container Platform 4.12. However, Red Hat recommends that you use later hardware models. Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One instance of z/VM 7.2 or later On your z/VM instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. 
To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.2 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE in IBM Documentation. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM Documentation. See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. 
See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z & IBM(R) LinuxONE environments 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. The machines are configured with static IP addresses. No DHCP server is required. Ensure that the machines have persistent IP addresses and hostnames. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 2.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . 
Example DNS A record configuration for a user-provisioned cluster

The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster.

Example 2.1. Sample DNS zone database

$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700 ; serial
   3H         ; refresh (3 hours)
   30M        ; retry (30 minutes)
   2W         ; expiry (2 weeks)
   1W )       ; minimum (1 week)
 IN NS ns1.example.com.
 IN MX 10 smtp.example.com.
;
;
ns1.example.com.                 IN A 192.168.1.5
smtp.example.com.                IN A 192.168.1.5
;
helper.example.com.              IN A 192.168.1.5
helper.ocp4.example.com.         IN A 192.168.1.5
;
api.ocp4.example.com.            IN A 192.168.1.5 1
api-int.ocp4.example.com.        IN A 192.168.1.5 2
;
*.apps.ocp4.example.com.         IN A 192.168.1.5 3
;
bootstrap.ocp4.example.com.      IN A 192.168.1.96 4
;
control-plane0.ocp4.example.com. IN A 192.168.1.97 5
control-plane1.ocp4.example.com. IN A 192.168.1.98 6
control-plane2.ocp4.example.com. IN A 192.168.1.99 7
;
compute0.ocp4.example.com.       IN A 192.168.1.11 8
compute1.ocp4.example.com.       IN A 192.168.1.7 9
;
;EOF

1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

Note: In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

4 Provides name resolution for the bootstrap machine.
5 6 7 Provides name resolution for the control plane machines.
8 9 Provides name resolution for the compute machines.

Example DNS PTR record configuration for a user-provisioned cluster

The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster.

Example 2.2. Sample DNS zone database for reverse records

$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700 ; serial
   3H         ; refresh (3 hours)
   30M        ; retry (30 minutes)
   2W         ; expiry (2 weeks)
   1W )       ; minimum (1 week)
 IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa.  IN PTR api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa.  IN PTR api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa.  IN PTR compute1.ocp4.example.com. 8
;
;EOF

1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
3 Provides reverse DNS resolution for the bootstrap machine.
4 5 6 Provides reverse DNS resolution for the control plane machines.
7 8 Provides reverse DNS resolution for the compute machines.
Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. 
X X HTTP traffic

Note: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.

2.3.8.1. Example load balancer configuration for user-provisioned clusters

This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

Note: If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1.

Example 2.3. Sample API and application Ingress load balancer configuration

global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443 1
  bind *:6443
  mode tcp
  option  httpchk GET /readyz HTTP/1.0
  option  log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 3
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
  bind *:443
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:443 check inter 1s
  server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
  bind *:80
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:80 check inter 1s
  server worker1 worker1.ocp4.example.com:80 check inter 1s

1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete.
3 Port 22623 handles the machine config server traffic and points to the control plane machines.
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements.

Note: Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized.

2.5. Validating DNS resolution for user-provisioned infrastructure

You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure.

Important: The validation steps detailed in this section must succeed before you install your cluster.

Prerequisites

You have configured the required DNS records for your user-provisioned infrastructure.

Procedure

From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components.

Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer:

$ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1

1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name.

Example output

api.ocp4.example.com. 604800 IN A 192.168.1.5

Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer:

$ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>

Example output

api-int.ocp4.example.com. 604800 IN A 192.168.1.5

Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer:

$ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>

Example output

random.apps.ocp4.example.com. 604800 IN A 192.168.1.5

Note: In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console:

$ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>

Example output

console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5

Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node:

$ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>

Example output

bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96

Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node.

From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components.

Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API:

$ dig +noall +answer @<nameserver_ip> -x 192.168.1.5

Example output

5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1
5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2

1 Provides the record name for the Kubernetes internal API.
2 Provides the record name for the Kubernetes API.

Note: A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer.

Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node:

$ dig +noall +answer @<nameserver_ip> -x 192.168.1.96

Example output

96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.

Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node.

2.6. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important: Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Procedure

If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

2.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on your provisioning machine.

Prerequisites

You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space.

Procedure

Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

Select your infrastructure provider.

Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -xvf openshift-install-linux.tar.gz

Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

2.8. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important: If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.
Unpack the archive:

$ tar xvf <file>

Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH

Verification

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z 2.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. 
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. 
This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. 
In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. 
defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . 
genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 2.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In OpenShift Container Platform 4.12, egress IP is only assigned to the primary interface. Consequently, setting routingViaHost to true will not work for egress IP in OpenShift Container Platform 4.12. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature.
If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 2.15. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. 
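For a three-node cluster, the control plane must remain schedulable. The following is a hedged sketch of how you might inspect, and if necessary change, the value after generating the manifests; the sed command assumes that the generated manifest currently contains mastersSchedulable: false, and <installation_directory> is the directory that you passed to openshift-install:
# Inspect the current scheduler setting in the generated manifests.
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
# For a three-node cluster only: switch the value to true so that application
# workloads can run on the control plane nodes. Review the file afterwards.
sed -i 's/mastersSchedulable: false/mastersSchedulable: true/' \
  <installation_directory>/manifests/cluster-scheduler-02-config.yml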
Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. 
Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Post-installation machine configuration tasks . The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=sda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 2.12.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. 
The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.12.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set option fail_over_mac=1 in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). 
Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
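The procedure below approves CSRs manually. Because user-provisioned infrastructure is not integrated with the Machine API, you eventually need some form of automated approval for kubelet serving CSRs. The following is only a minimal polling sketch assembled from the commands shown later in this procedure; it is an assumption-laden example, not a production approver, and it does not verify that each CSR was submitted by an expected node, which the Note in the procedure requires you to do:
# Poll every 60 seconds and approve any CSR that does not yet have a status.
# Stop this loop (Ctrl+C) once all nodes report Ready; do not leave it running unattended.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done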
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.16.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. 
You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The registry storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images: in the configs.imageregistry.operator.openshift.io resource that you opened with oc edit , change the managementState field to Managed . 2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
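If you prefer a scripted check to watching the interactive output in the procedure that follows, the following sketch waits until every cluster Operator reports an Available status; the 30-minute timeout is an arbitrary example value, not a documented requirement:
# Block until all ClusterOperators report Available=True, or fail after 30 minutes.
oc wait clusteroperator --all --for=condition=Available=True --timeout=30m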
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. How to generate SOSREPORT within OpenShift4 nodes without SSH . 2.19. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting .
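Before you proceed with these next steps, you can optionally confirm that the Cluster Version Operator reports the installation as complete, and note the cluster ID that Telemetry uses when it registers the cluster with OpenShift Cluster Manager. The following commands are a sketch that assumes the default ClusterVersion object, which is named version : oc get clusterversion version To print only the cluster ID, you can use a jsonpath query such as oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}' In the first command, the AVAILABLE column reads True after the installation finishes.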
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", 
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False 
False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z
Chapter 9. Revision History
Chapter 9. Revision History 0.1-8 Tue Sep 29 2020, Jaroslav Klech ( [email protected] ) Document version for 7.9 GA publication. 0.1-7 Tue Mar 31 2020, Jaroslav Klech ( [email protected] ) Document version for 7.8 GA publication. 0.1-6 Tue Aug 6 2019, Jaroslav Klech ( [email protected] ) Document version for 7.7 GA publication. 0.1-5 Fri Oct 19 2018, Jaroslav Klech ( [email protected] ) Document version for 7.6 GA publication. 0.1-4 Mon Mar 26 2018, Marie Dolezelova ( [email protected] ) Document version for 7.5 GA publication. 0.1-3 Mon Jan 5 2018, Mark Flitter ( [email protected] ) Document version for 7.5 Beta publication. 0.1-2 Mon Jul 31 2017, Mark Flitter ( [email protected] ) Document version for 7.4 GA publication. 0.1-0 Thu Apr 20 2017, Mark Flitter ( [email protected] ) Initial build for review
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/kernel_administration_guide/revision_history
Chapter 35. System and Subscription Management
Chapter 35. System and Subscription Management Undercloud no longer fails on a system with no configured repositories Previously, when the user tried to install the OpenStack Undercloud on a system with no configured repositories, the yum package manager required installation of MySQL dependencies which were already installed. As a consequence, the Undercloud install script failed. With this update, yum correctly detects already installed MySQL dependencies. As a result, the Undercloud install script no longer fails on a system with no configured repositories. (BZ#1352585) The yum commands provided by the yum-plugin-verify plug-in now set the exit status to 1 if any mismatches are found Previously, the yum commands provided by the yum-plugin-verify plug-in returned exit code 0 for any discrepancies found in a package. The bug has been fixed, and the exit status is now set to 1 in case any mismatches are found. (BZ#1406891)
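As a quick way to observe the corrected behavior, you can run one of the verification commands provided by the plug-in and branch on its exit status. The following one-line example is a sketch; the package name coreutils is chosen arbitrarily for illustration: yum verify coreutils || echo "yum verify reported mismatches" With the fix in place, the echo branch runs only when yum verify actually finds discrepancies, which makes the exit status usable in scripts.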
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug_fixes_system_and_subscription_management
Chapter 2. OpenShift CLI (oc)
Chapter 2. OpenShift CLI (oc) 2.1. Getting started with the OpenShift CLI 2.1.1. About the OpenShift CLI With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Container Platform projects from a terminal. The OpenShift CLI is ideal in the following situations: Working directly with project source code Scripting OpenShift Container Platform operations Managing projects while restricted by bandwidth resources and the web console is unavailable 2.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) either by downloading the binary or by using an RPM. 2.1.2.1. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.2. Installing the OpenShift CLI by using the web console You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a web console. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . 2.1.2.2.1. 
Installing the OpenShift CLI on Linux using the web console You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select appropriate oc binary for your Linux platform, and then click Download oc for Linux . Save the file. Unpack the archive. USD tar xvf <file> Move the oc binary to a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for Windows platform, and then click Download oc for Windows for x86_64 . Save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for macOS platform, and then click Download oc for Mac for x86_64 . Note For macOS arm64, click Download oc for Mac for ARM 64 . Save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account. Important You must install oc for RHEL 9 by downloading the binary. Installing oc by using an RPM package is not supported on Red Hat Enterprise Linux (RHEL) 9. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.17. # subscription-manager repos --enable="rhocp-4.17-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.4. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. Procedure Install the openshift-cli package by running the following command: USD brew install openshift-cli Verification Verify your installation by using an oc command: USD oc <command> 2.1.3. 
Logging in to the OpenShift CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. The OpenShift CLI ( oc ) is installed. Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy. Authentication headers are sent only when using HTTPS transport. Procedure Enter the oc login command and pass in a user name: USD oc login -u user1 When prompted, enter the required information: Example output Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started. 1 Enter the OpenShift Container Platform server URL. 2 Enter whether to use insecure connections. 3 Enter the user's password. Note If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift Container Platform CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console. You can now create a project or issue other commands for managing your cluster. 2.1.4. Logging in to the OpenShift CLI using a web browser You can log in to the OpenShift CLI ( oc ) with the help of a web browser to access and manage your cluster. This allows users to avoid inserting their access token into the command line. Warning Logging in to the CLI through the web browser runs a server on localhost with HTTP, not HTTPS; use with caution on multi-user workstations. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You must have a browser installed. Procedure Enter the oc login command with the --web flag: USD oc login <cluster_url> --web 1 1 Optionally, you can specify the server URL and callback port. For example, oc login <cluster_url> --web --callback-port 8280 localhost:8443 . The web browser opens automatically. If it does not, click the link in the command output. If you do not specify the OpenShift Container Platform server oc tries to open the web console of the cluster specified in the current oc configuration file. If no oc configuration exists, oc prompts interactively for the server URL. Example output Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session. If more than one identity provider is available, select your choice from the options provided. Enter your username and password into the corresponding browser fields. After you are logged in, the browser displays the text access token received successfully; please return to your terminal . Check the CLI for a login confirmation. Example output Login successful. You don't have any projects. 
You can try to create a new project, by running oc new-project <projectname> Note The web console defaults to the profile used in the session. To switch between Administrator and Developer profiles, log out of the OpenShift Container Platform web console and clear the cache. You can now create a project or issue other commands for managing your cluster. 2.1.5. Using the OpenShift CLI Review the following sections to learn how to complete common tasks using the CLI. 2.1.5.1. Creating a project Use the oc new-project command to create a new project. USD oc new-project my-project Example output Now using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.2. Creating a new app Use the oc new-app command to create a new application. USD oc new-app https://github.com/sclorg/cakephp-ex Example output --> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php" ... Run 'oc status' to view your app. 2.1.5.3. Viewing pods Use the oc get pods command to view the pods for the current project. Note When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none> 2.1.5.4. Viewing pod logs Use the oc logs command to view logs for a particular pod. USD oc logs cakephp-ex-1-deploy Example output --> Scaling cakephp-ex-1 to 1 --> Success 2.1.5.5. Viewing the current project Use the oc project command to view the current project. USD oc project Example output Using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.6. Viewing the status for the current project Use the oc status command to view information about the current project, such as services, deployments, and build configs. USD oc status Example output In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details. 2.1.5.7. Listing supported API resources Use the oc api-resources command to view the list of supported API resources on the server. USD oc api-resources Example output NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap ... 2.1.6. Getting help You can get help with CLI commands and OpenShift Container Platform resources in the following ways: Use oc help to get a list and description of all available CLI commands: Example: Get general help for the CLI USD oc help Example output OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application ... 
Use the --help flag to get help about a specific CLI command: Example: Get help for the oc create command USD oc create --help Example output Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags] ... Use the oc explain command to view the description and fields for a particular resource: Example: View documentation for the Pod resource USD oc explain pods Example output KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources ... 2.1.7. Logging out of the OpenShift CLI You can log out the OpenShift CLI to end your current session. Use the oc logout command. USD oc logout Example output Logged "user1" out on "https://openshift.example.com" This deletes the saved authentication token from the server and removes it from your configuration file. 2.2. Configuring the OpenShift CLI 2.2.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 2.2.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 2.2.1.2. Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. 2.3. Usage of oc and kubectl commands The Kubernetes command-line interface (CLI), kubectl , can be used to run commands against a Kubernetes cluster. Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Container Platform , or you can gain extended functionality by using the oc binary. 2.3.1. The oc binary The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional OpenShift Container Platform features, including: Full support for OpenShift Container Platform resources Resources such as DeploymentConfig , BuildConfig , Route , ImageStream , and ImageStreamTag objects are specific to OpenShift Container Platform distributions, and build upon standard Kubernetes primitives. 
Authentication The oc binary offers a built-in login command for authentication and lets you work with projects, which map Kubernetes namespaces to authenticated users. Read Understanding authentication for more information. Additional commands The additional command oc new-app , for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default. Important If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in OpenShift Container Platform 4.17 . If you want the latest features, you must download and install the latest version of the oc binary corresponding to your OpenShift Container Platform server version. Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server. Table 2.1. Compatibility Matrix X.Y ( oc Client) X.Y+N ( oc Client) X.Y (Server) X.Y+N (Server) Fully compatible. oc client might not be able to access server features. oc client might provide options and features that might not be compatible with the accessed server. In this table, N is a number greater than or equal to 1. 2.3.2. The kubectl binary The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Container Platform users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Container Platform cluster. You can install the supported kubectl binary by following the steps to Install the OpenShift CLI . The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM. For more information, see the kubectl documentation . 2.4. Managing CLI profiles A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools overview . A context consists of user authentication and OpenShift Container Platform server information associated with a nickname. 2.4.1. About switches between CLI profiles Contexts allow you to easily switch between multiple users across multiple OpenShift Container Platform servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After a user logs in with the oc CLI for the first time, OpenShift Container Platform creates a ~/.kube/config file if one does not already exist.
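At any point, you can check which context the CLI is currently using with a quick command such as the following sketch: oc config current-context The oc config subcommands that manage these configuration entries are described later in this chapter.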
As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file: CLI config file apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k 1 The clusters section defines connection details for OpenShift Container Platform clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443 . 2 This contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice , using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice , using the joe-project project, openshift1.example.com:8443 cluster and alice user. 3 The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster. 4 The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token. The CLI can support multiple configuration files which are loaded at runtime and merged together along with any override options specified from the command line. After you are logged in, you can use the oc status or oc project command to verify your current working environment: Verify the current working environment USD oc status Example output oc status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example. List the current project USD oc project Example output Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443". You can run the oc login command again and supply the required information during the interactive process, to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist. 
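For example, the following sketch logs in to the second cluster from the sample configuration above with a different, hypothetical user named bob and then lists the resulting contexts (the interactive password prompt is omitted): oc login https://openshift2.example.com:8443 -u bob oc config get-contexts The oc config get-contexts output shows the newly constructed context alongside the existing entries, and the new context becomes the current context.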
If you are already logged in and want to switch to another project the current user already has access to, use the oc project command and enter the name of the project: USD oc project alice-project Example output Now using project "alice-project" on server "https://openshift1.example.com:8443". At any time, you can use the oc config view command to view your current CLI configuration, as seen in the output. Additional CLI configuration commands are also available for more advanced usage. Note If you have access to administrator credentials but are no longer logged in as the default system user system:admin , you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project: USD oc login -u system:admin -n default 2.4.2. Manual configuration of CLI profiles Note This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects. If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful sub-commands for this purpose: Table 2.2. CLI configuration subcommands Subcommand Usage set-cluster Sets a cluster entry in the CLI config file. If the referenced cluster nickname already exists, the specified information is merged in. USD oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true] set-context Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in. USD oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>] use-context Sets the current context using the specified context nickname. USD oc config use-context <context_nickname> set Sets an individual value in the CLI config file. USD oc config set <property_name> <property_value> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set. unset Unsets individual values in the CLI config file. USD oc config unset <property_name> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. view Displays the merged CLI configuration currently in use. USD oc config view Displays the result of the specified CLI config file. USD oc config view --config=<specific_filename> Example usage Log in as a user that uses an access token. 
This token is used by the alice user: USD oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 View the cluster entry automatically created: USD oc config view Example output apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 Update the current context to have users log in to the desired namespace: USD oc config set-context `oc config current-context` --namespace=<project_name> Examine the current context to confirm that the changes are implemented: USD oc whoami -c All subsequent CLI operations use the new context, unless otherwise specified by overriding CLI options or until the context is switched. 2.4.3. Load and merge rules The following rules describe how CLI configuration files are loaded and merged when you issue CLI operations: CLI config files are retrieved from your workstation, using the following hierarchy and merge rules: If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place. If the USDKUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. Otherwise, the ~/.kube/config file is used and no merging takes place. The context to use is determined based on the first match in the following flow: The value of the --context option. The current-context value from the CLI config file. An empty value is allowed at this stage. The user and cluster to use are determined. At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster: The value of the --user for user name and --cluster option for cluster name. If the --context option is present, then use the context's value. An empty value is allowed at this stage. The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow: The values of any of the following command line options: --server , --api-version , --certificate-authority , --insecure-skip-tls-verify If cluster information and a value for the attribute is present, then use it. If you do not have a server location, then there is an error. The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command line options take precedence over config file values. Valid command line options are: --auth-path , --client-certificate , --client-key , --token For any information that is still missing, default values are used and prompts are given for additional information. 2.5.
Extending the OpenShift CLI with plugins You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift Container Platform CLI. 2.5.1. Writing CLI plugins You can write a plugin for the OpenShift Container Platform CLI in any programming language or script that allows you to write command-line commands. Note that you can not use a plugin to overwrite an existing oc command. Procedure This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued. Create a file called oc-foo . When naming your plugin file, keep the following in mind: The file must begin with oc- or kubectl- to be recognized as a plugin. The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by a command of oc foo bar . You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by a command of oc foo-bar . Add the following contents to the file. #!/bin/bash # optional argument handling if [[ "USD1" == "version" ]] then echo "1.0.0" exit 0 fi # optional argument handling if [[ "USD1" == "config" ]] then echo USDKUBECONFIG exit 0 fi echo "I am a plugin named kubectl-foo" After you install this plugin for the OpenShift Container Platform CLI, it can be invoked using the oc foo command. Additional resources Review the Sample plugin repository for an example of a plugin written in Go. Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go. 2.5.2. Installing and using CLI plugins After you write a custom plugin for the OpenShift Container Platform CLI, you must install the plugin before use. Prerequisites You must have the oc CLI tool installed. You must have a CLI plugin file that begins with oc- or kubectl- . Procedure If necessary, update the plugin file to be executable. USD chmod +x <plugin_file> Place the file anywhere in your PATH , such as /usr/local/bin/ . USD sudo mv <plugin_file> /usr/local/bin/. Run oc plugin list to make sure that the plugin is listed. USD oc plugin list Example output The following compatible plugins are available: /usr/local/bin/<plugin_file> If your plugin is not listed here, verify that the file begins with oc- or kubectl- , is executable, and is on your PATH . Invoke the new command or option introduced by the plugin. For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository , you can use the following command to view the current namespace. USD oc ns Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name of oc-foo-bar is invoked by the oc foo bar command. 2.6. OpenShift CLI developer command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) developer commands. For administrator commands, see the OpenShift CLI administrator command reference . Run oc help to list all commands or run oc <command> --help to get additional details for a specific command. 2.6.1. OpenShift CLI (oc) developer commands 2.6.1.1. 
oc annotate Update the annotations on a resource Example usage # Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in "pod.json" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description- 2.6.1.2. oc api-resources Print the supported API resources on the server Example usage # Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io 2.6.1.3. oc api-versions Print the supported API versions on the server, in the form of "group/version" Example usage # Print the supported API versions oc api-versions 2.6.1.4. oc apply Apply a configuration to a resource by file name or stdin Example usage # Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap 2.6.1.5. oc apply edit-last-applied Edit latest last-applied-configuration annotations of a resource/object Example usage # Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json 2.6.1.6. oc apply set-last-applied Set the last-applied-configuration annotation on a live object to match the contents of a file Example usage # Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true 2.6.1.7. 
oc apply view-last-applied View the latest last-applied-configuration annotations of a resource/object Example usage # View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json 2.6.1.8. oc attach Attach to a running container Example usage # Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx 2.6.1.9. oc auth can-i Check whether an action is allowed Example usage # Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account "foo" of namespace "dev" can list pods # in the namespace "prod". # You must be allowed to use impersonation for the global option "--as". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace ("*" means all) oc auth can-i '*' '*' # Check to see if I can get the job named "bar" in namespace "foo" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace "foo" oc auth can-i --list --namespace=foo 2.6.1.10. oc auth reconcile Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects Example usage # Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml 2.6.1.11. oc auth whoami Experimental: Check self subject attributes Example usage # Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. oc auth whoami -o json 2.6.1.12. oc autoscale Autoscale a deployment config, deployment, replica set, stateful set, or replication controller Example usage # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80 2.6.1.13. oc cancel-build Cancel running, pending, or new builds Example usage # Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new 2.6.1.14. oc cluster-info Display cluster information Example usage # Print the address of the control plane and cluster services oc cluster-info 2.6.1.15. 
oc cluster-info dump Dump relevant information for debugging and diagnosis Example usage # Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state 2.6.1.16. oc completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Example usage # Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need to add the completion to your completion directory oc completion bash > $(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf " # oc shell completion source '$HOME/.kube/completion.bash.inc' " >> $HOME/.bash_profile source $HOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > "${fpath[1]}/_oc" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > $HOME\.kube\completion.ps1 Add-Content $PROFILE "$HOME\.kube\completion.ps1" ## Execute completion code in the profile Add-Content $PROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }" ## Add completion code directly to the $PROFILE script oc completion powershell >> $PROFILE 2.6.1.17. oc config current-context Display the current-context Example usage # Display the current-context oc config current-context 2.6.1.18. oc config delete-cluster Delete the specified cluster from the kubeconfig Example usage # Delete the minikube cluster oc config delete-cluster minikube 2.6.1.19. oc config delete-context Delete the specified context from the kubeconfig Example usage # Delete the context for the minikube cluster oc config delete-context minikube 2.6.1.20. oc config delete-user Delete the specified user from the kubeconfig Example usage # Delete the minikube user oc config delete-user minikube 2.6.1.21. oc config get-clusters Display clusters defined in the kubeconfig Example usage # List the clusters that oc knows about oc config get-clusters 2.6.1.22.
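The context-related config commands compose naturally when you need to run a few commands against another cluster and then return to where you were. A minimal sketch; the context names are hypothetical, and it uses oc config use-context as documented later in this reference.
# Remember the current context, switch away temporarily, then switch back
previous_context=$(oc config current-context)
oc config use-context staging-admin
oc cluster-info
oc config use-context "$previous_context"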
oc config get-contexts Describe one or many contexts Example usage # List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context 2.6.1.23. oc config get-users Display users defined in the kubeconfig Example usage # List the users that oc knows about oc config get-users 2.6.1.24. oc config new-admin-kubeconfig Generate, make the server trust, and display a new admin.kubeconfig Example usage # Generate a new admin kubeconfig oc config new-admin-kubeconfig 2.6.1.25. oc config new-kubelet-bootstrap-kubeconfig Generate, make the server trust, and display a new kubelet /etc/kubernetes/kubeconfig Example usage # Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig 2.6.1.26. oc config refresh-ca-bundle Update the OpenShift CA bundle by contacting the API server Example usage # Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's API server oc config refresh-ca-bundle --dry-run 2.6.1.27. oc config rename-context Rename a context from the kubeconfig file Example usage # Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name 2.6.1.28. oc config set Set an individual value in a kubeconfig file Example usage # Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data $(echo "cert_data_here" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true 2.6.1.29. oc config set-cluster Set a cluster entry in kubeconfig Example usage # Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4 2.6.1.30. oc config set-context Set a context entry in kubeconfig Example usage # Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin 2.6.1.31.
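A typical use of oc config set-cluster and oc config set-context is to build a kubeconfig entry from scratch and then select it. The following sketch is illustrative only; the server URL and the cluster, context, namespace, and user names are hypothetical, and it assumes set-context accepts the --cluster, --namespace, and --user flags.
# Define a cluster entry, point a context at it, and make that context current
oc config set-cluster dev-cluster --server=https://api.dev.example.com:6443
oc config set-context dev --cluster=dev-cluster --namespace=web-team --user=developer
oc config use-context dev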
oc config set-credentials Set a user entry in kubeconfig Example usage # Set only the "client-key" field on the "cluster-admin" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the "cluster-admin" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the "cluster-admin" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Enable new exec auth plugin for the "cluster-admin" entry with interactive mode oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never # Define new exec auth plugin arguments for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=var-to-remove- 2.6.1.32. oc config unset Unset an individual value in a kubeconfig file Example usage # Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace 2.6.1.33. oc config use-context Set the current-context in a kubeconfig file Example usage # Use the context for the minikube cluster oc config use-context minikube 2.6.1.34. oc config view Display merged kubeconfig settings or a specified kubeconfig file Example usage # Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' 2.6.1.35. oc cp Copy files and directories to and from containers Example usage # !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. 
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar 2.6.1.36. oc create Create a resource from a file or from stdin Example usage # Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json 2.6.1.37. oc create build Create a new build Example usage # Create a new build oc create build myapp 2.6.1.38. oc create clusterresourcequota Create a cluster resource quota Example usage # Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10 2.6.1.39. oc create clusterrole Create a cluster role Example usage # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named "pod-reader" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named "foo" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named "foo" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name "foo" with NonResourceURL specified oc create clusterrole "foo" --verb=get --non-resource-url=/logs/* # Create a cluster role name "monitoring" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" 2.6.1.40. oc create clusterrolebinding Create a cluster role binding for a particular cluster role Example usage # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1 2.6.1.41. 
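When copying into or out of a multi-container pod, the -c flag shown above selects the container, and the image must still provide the tar binary. A minimal sketch with hypothetical namespace, pod, container, and path names:
# Copy a configuration directory out of a specific container, edit it locally, then copy it back
oc cp web-team/frontend-abc123:/etc/nginx ./nginx-conf -c nginx
oc cp ./nginx-conf web-team/frontend-abc123:/etc/nginx -c nginx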
oc create configmap Create a config map from a local file, directory or literal value Example usage # Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.6.1.42. oc create cronjob Create a cron job with the specified name Example usage # Create a cron job oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date 2.6.1.43. oc create deployment Create a deployment with the specified name Example usage # Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 # Create a deployment named my-dep that runs multiple containers oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx 2.6.1.44. oc create deploymentconfig Create a deployment config with default options that uses a given image Example usage # Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx 2.6.1.45. oc create identity Manually create an identity (only needed if automatic creation is disabled) Example usage # Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones" oc create identity acme_ldap:adamjones 2.6.1.46. oc create imagestream Create a new empty image stream Example usage # Create a new image stream oc create imagestream mysql 2.6.1.47. oc create imagestreamtag Create a new image stream tag Example usage # Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0 2.6.1.48. 
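A common pattern is to generate a manifest with oc create and a client-side dry run, keep it under version control, and apply it later. This sketch is illustrative only; the config map name, key, and file name are hypothetical, and it assumes the --dry-run=client and -o yaml options behave as in kubectl.
# Write the generated config map to a file instead of creating it immediately
oc create configmap app-config --from-literal=LOG_LEVEL=debug --dry-run=client -o yaml > app-config.yaml
oc apply -f app-config.yaml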
oc create ingress Create an ingress with the specified name Example usage # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret "my-cert" oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert" # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress" oc create ingress catch-all --class=otheringress --rule="/path=svc:port" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \ --annotation ingress.annotation1=foo \ --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default \ --rule="foo.com/=svc:port" \ --rule="foo.com/admin/=svcadmin:portadmin" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default \ --rule="foo.com/path*=svc:8080" \ --rule="bar.com/admin*=svc2:http" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default \ --rule="foo.com/=svc:https,tls" \ --rule="foo.com/path/subpath*=othersvc:8080" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default \ --rule="foo.com/*=svc:8080,tls=secret1" # Create an ingress with a default backend oc create ingress ingdefault --class=default \ --default-backend=defaultsvc:http \ --rule="foo.com/*=svc:8080,tls=secret1" 2.6.1.49. oc create job Create a job with the specified name Example usage # Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named "a-cronjob" oc create job test-job --from=cronjob/a-cronjob 2.6.1.50. oc create namespace Create a namespace with the specified name Example usage # Create a new namespace named my-namespace oc create namespace my-namespace 2.6.1.51. oc create poddisruptionbudget Create a pod disruption budget with the specified name Example usage # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50% 2.6.1.52. oc create priorityclass Create a priority class with the specified name Example usage # Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description="high priority" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never" 2.6.1.53. 
oc create quota Create a quota with the specified name Example usage # Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort 2.6.1.54. oc create role Create a role with single rule Example usage # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named "pod-reader" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named "foo" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named "foo" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status 2.6.1.55. oc create rolebinding Create a role binding for a particular role or cluster role Example usage # Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev 2.6.1.56. oc create route edge Create a route that uses edge TLS termination Example usage # Create an edge route named "my-route" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets 2.6.1.57. oc create route passthrough Create a route that uses passthrough TLS termination Example usage # Create a passthrough route named "my-route" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com 2.6.1.58. oc create route reencrypt Create a route that uses reencrypt TLS termination Example usage # Create a route named "my-route" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend 2.6.1.59. oc create secret docker-registry Create a secret for use with a Docker registry Example usage # If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json 2.6.1.60. 
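oc create role and oc create rolebinding are usually paired: first define the rule, then grant it to a subject. The sketch below is illustrative only; the namespace, role, and service account names are hypothetical.
# Create a namespaced role and bind it to a service account
oc create role configmap-reader --verb=get,list,watch --resource=configmaps -n web-team
oc create rolebinding configmap-reader-binding --role=configmap-reader --serviceaccount=web-team:app-sa -n web-team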
oc create secret generic Create a secret from a local file, directory, or literal value Example usage # Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.6.1.61. oc create secret tls Create a TLS secret Example usage # Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key 2.6.1.62. oc create service clusterip Create a ClusterIP service Example usage # Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip="None" 2.6.1.63. oc create service externalname Create an ExternalName service Example usage # Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com 2.6.1.64. oc create service loadbalancer Create a LoadBalancer service Example usage # Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080 2.6.1.65. oc create service nodeport Create a NodePort service Example usage # Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080 2.6.1.66. oc create serviceaccount Create a service account with the specified name Example usage # Create a new service account named my-service-account oc create serviceaccount my-service-account 2.6.1.67. oc create token Request a service account token Example usage # Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc 2.6.1.68. oc create user Manually create a user (only needed if automatic creation is disabled) Example usage # Create a user with the username "ajones" and the display name "Adam Jones" oc create user ajones --full-name="Adam Jones" 2.6.1.69. oc create useridentitymapping Manually map an identity to a user Example usage # Map the identity "acme_ldap:adamjones" to the user "ajones" oc create useridentitymapping acme_ldap:adamjones ajones 2.6.1.70. 
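oc create serviceaccount and oc create token are often used together to mint short-lived credentials for automation. A minimal sketch; the service account name, namespace, and duration are hypothetical.
# Create a service account and request a 30-minute token for it
oc create serviceaccount ci-bot -n web-team
oc create token ci-bot -n web-team --duration 30m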
oc debug Launch a new instance of a pod for debugging Example usage # Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows node # Note: the chosen image must match the Windows Server version (2019, 2022) of the node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns 2.6.1.71. oc delete Delete resources by file names, stdin, resources and names, or by resources and label selector Example usage # Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names "baz" and "foo" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all 2.6.1.72. oc describe Show details of a specific resource or group of resources Example usage # Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in "pod.json" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend 2.6.1.73. oc diff Diff the live version against a would-be applied version Example usage # Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f - 2.6.1.74. oc edit Edit a resource on the server Example usage # Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR="nano" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status' 2.6.1.75. 
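oc diff pairs well with oc apply for reviewing a change before it reaches the cluster. The following sketch is illustrative only; the file name is hypothetical.
# Review the pending changes, then apply them if the diff looks correct
oc diff -f ./deployment.yaml
oc apply -f ./deployment.yaml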
oc events List events Example usage # List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal 2.6.1.76. oc exec Execute a command in a container Example usage # Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date 2.6.1.77. oc explain Get documentation for a resource Example usage # Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2 2.6.1.78. oc expose Expose a replicated application as a service or route Example usage # Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx 2.6.1.79. oc extract Extract secrets or config maps to disk Example usage # Extract the secret "test" to the current directory oc extract secret/test # Extract the config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map "nginx" to STDOUT oc extract configmap/nginx --to=- # Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf 2.6.1.80. 
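The events and exec commands above complement each other when debugging a pod: check for Warning events first, then inspect the pod from the inside. This sketch reuses the pod name from the examples above purely for illustration.
# Look for warnings on the pod, then check its DNS configuration from inside the first container
oc events --for pod/web-pod-13je7 --types=Warning
oc exec web-pod-13je7 -- cat /etc/resolv.conf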
oc get Display one or many resources Example usage # List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the "v1" version of the "apps" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in "pod.yaml" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status 2.6.1.81. oc get-token Experimental: Get token from external OIDC issuer as credentials exec plugin Example usage # Starts an auth code flow to the issuer URL with the client ID and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer URL with a different callback address oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343 2.6.1.82. oc idle Idle scalable resources Example usage # Idle the scalable controllers associated with the services listed in to-idle.txt $ oc idle --resource-names-file to-idle.txt 2.6.1.83.
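Beyond the template and custom-columns examples above, JSONPath output is useful for scripting. A minimal sketch, assuming the -o jsonpath output format behaves as in kubectl; the chosen columns are arbitrary.
# Print each pod name together with the node it is scheduled on
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'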
oc image append Add layers to images and push them to a registry Example usage # Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in $(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in $(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk ($(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that are specified by the filter, while preserving the manifest list oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz 2.6.1.84. oc image extract Copy files from an image to the file system Example usage # Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:.
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on the local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:] 2.6.1.85. oc image info Display information about an image Example usage # Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64 2.6.1.86. oc image mirror Mirror images from one repository to another Example usage # Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \ docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch # Copy all os/arch manifests
of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all exist # You must use a registry with sparse registry support enabled oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=linux/386 \ --keep-manifest-list=true 2.6.1.87. oc import-image Import images from a container image registry Example usage # Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm 2.6.1.88. oc kustomize Build a kustomization target from a directory or URL Example usage # Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6 2.6.1.89. oc label Update the labels on a resource Example usage # Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in "pod.json" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar- 2.6.1.90. oc login Log in to a server Example usage # Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 # Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080 oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080 2.6.1.91. 
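oc kustomize only renders a kustomization; to create the result you can pipe the rendered manifests to oc apply, which achieves the same effect as the oc apply -k form shown earlier in this reference. The directory name below is hypothetical.
# Render a kustomization and apply the rendered manifests
oc kustomize ./overlays/production | oc apply -f -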
oc logout End the current server session Example usage # Log out oc logout 2.6.1.92. oc logs Print the logs for a container in a pod Example usage # Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container 2.6.1.93. oc new-app Create a new application Example usage # List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match "ruby" oc new-app --search ruby # Search for "ruby", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for "ruby" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml 2.6.1.94. oc new-build Create a new build configuration Example usage # Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D $'FROM centos:7\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp 2.6.1.95. oc new-project Request a new project Example usage # Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team." 2.6.1.96. oc observe Observe changes to resources and react to them (experimental) Example usage # Observe changes to services oc observe services # Observe changes to services, including the clusterIP, and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh 2.6.1.97. oc patch Update fields of a resource Example usage # Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p $'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch oc patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' 2.6.1.98.
oc plugin list List all visible plugin executables on a user's PATH Example usage # List all available plugins oc plugin list 2.6.1.99. oc policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1 2.6.1.100. oc policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml 2.6.1.101. oc policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml 2.6.1.102. oc port-forward Forward one or more local ports to a pod Example usage # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000 2.6.1.103. 
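Port forwarding is typically combined with a local client such as curl to exercise a service without exposing it. This sketch is illustrative only; the service name, port name, and URL path are hypothetical, and the two commands would normally run in separate terminals.
# Forward local port 8080 to the service's named http port, then probe it locally
oc port-forward service/frontend 8080:http
curl http://localhost:8080/healthz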
oc process Process a template into a list of resources Example usage # Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in a different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f - 2.6.1.104. oc project Switch to another project Example usage # Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project 2.6.1.105. oc projects Display existing projects Example usage # List all projects oc projects 2.6.1.106. oc proxy Run a proxy to the Kubernetes API server Example usage # To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api 2.6.1.107. oc registry login Log in to the integrated registry Example usage # Log in to the integrated registry oc registry login # Log in to a different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS 2.6.1.108. oc replace Replace a resource by file name or stdin Example usage # Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json 2.6.1.109. oc rollback Revert part of an application back to a deployment Example usage # Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json 2.6.1.110. oc rollout cancel Cancel the in-progress deployment Example usage # Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx 2.6.1.111.
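Before reverting a deployment config, the --dry-run flag shown above lets you preview the rollback; the same invocation without --dry-run then performs it. The sketch below is illustrative only; the deployment config name is hypothetical.
# Preview a rollback to revision 3, then perform it
oc rollback frontend --to-version=3 --dry-run
oc rollback frontend --to-version=3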
oc rollout history View rollout history Example usage # View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3 2.6.1.112. oc rollout latest Start a new rollout for a deployment config with the latest state from its triggers Example usage # Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json 2.6.1.113. oc rollout pause Mark the provided resource as paused Example usage # Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx 2.6.1.114. oc rollout restart Restart a resource Example usage # Restart all deployments in test-namespace namespace oc rollout restart deployment -n test-namespace # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx 2.6.1.115. oc rollout resume Resume a paused resource Example usage # Resume an already paused deployment oc rollout resume dc/nginx 2.6.1.116. oc rollout retry Retry the latest failed rollout Example usage # Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend 2.6.1.117. oc rollout status Show the status of the rollout Example usage # Watch the status of the latest rollout oc rollout status dc/nginx 2.6.1.118. oc rollout undo Undo a rollout Example usage # Roll back to the deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3 2.6.1.119. oc rsh Start a shell session in a container Example usage # Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled 2.6.1.120. oc rsync Copy files between a local file system and a pod Example usage # Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir 2.6.1.121. 
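The rollout subcommands above can be combined so that several changes are rolled out as one revision. The following sketch pauses a deployment config, applies two changes with oc set commands (covered later in this reference), and then resumes; the variable name LOG_LEVEL and the image tag are illustrative assumptions.
# Pause the deployment config so multiple changes roll out together
oc rollout pause dc/nginx
# Apply several changes while paused (values are illustrative placeholders)
oc set env dc/nginx LOG_LEVEL=debug
oc set image dc/nginx nginx=nginx:1.9.1
# Resume and watch the single combined rollout
oc rollout resume dc/nginx
oc rollout status dc/nginx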
oc run Run a particular image on the cluster Example usage # Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default" # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN> 2.6.1.122. oc scale Set a new size for a deployment, replica set, or replication controller Example usage # Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web 2.6.1.123. oc secrets link Link secrets to a service account Example usage # Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount 2.6.1.124. oc secrets unlink Detach secrets from a service account Example usage # Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name ... 2.6.1.125. oc set build-hook Update a build hook on a build config Example usage # Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh" 2.6.1.126. oc set build-secret Update a build secret on a build config Example usage # Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret 2.6.1.127. 
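The oc secrets link --for=pull example above assumes that a pull secret already exists. The following sketch creates one and links it to the default service account; the secret name, registry host, and credentials are illustrative placeholders.
# Create a registry pull secret (registry host and credentials are illustrative placeholders)
oc create secret docker-registry my-pull-secret \
  --docker-server=registry.example.com \
  --docker-username=builder \
  --docker-password=REDACTED
# Link it to the default service account so new pods can pull from that registry
oc secrets link default my-pull-secret --for=pull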
oc set data Update the data within a config map or secret Example usage # Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir 2.6.1.128. oc set deployment-hook Update a deployment hook on a deployment config Example usage # Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh 2.6.1.129. oc set env Update environment variables on a pod template Example usage # Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers="c1" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server env | grep RAILS_ | oc set env -e - dc/myapp 2.6.1.130. oc set image Update the image of a pod template Example usage # Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml 2.6.1.131.
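The --from and --prefix options for oc set env shown above copy keys from an existing object into the pod template. The following sketch creates a config map, imports it with a prefix, and verifies the result; the config map keys and values are illustrative placeholders.
# Create a config map with a couple of keys (values are illustrative placeholders)
oc create configmap myconfigmap --from-literal=HOST=db.example.com --from-literal=PORT=3306
# Import it into the deployment config with a prefix
oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp
# Confirm the resulting variables (expected: MYSQL_HOST and MYSQL_PORT)
oc set env dc/myapp --list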
oc set image-lookup Change how images are resolved when deploying applications Example usage # Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all 2.6.1.132. oc set probe Update a probe on a pod template Example usage # Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30 2.6.1.133. oc set resources Update resource requests/limits on objects with pod templates Example usage # Set a deployments nginx container CPU limits to "200m and memory to 512Mi" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml 2.6.1.134. oc set route-backends Update the backends for a route Example usage # Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero 2.6.1.135. oc set selector Set the selector on a resource Example usage # Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f - 2.6.1.136. 
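The weights passed to oc set route-backends above are proportional, not absolute percentages. As a brief illustration, with a=2 and b=1 the route sends roughly 2/(2+1) ≈ 67% of traffic to 'a' and 1/(2+1) ≈ 33% to 'b'.
# Set proportional weights, then print the computed split
oc set route-backends web a=2 b=1
oc set route-backends web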
oc set serviceaccount Update the service account of a resource Example usage # Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml 2.6.1.137. oc set subject Update the user, group, or service account in a role binding or cluster role binding Example usage # Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml 2.6.1.138. oc set triggers Update the triggers on one or more objects Example usage # Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main 2.6.1.139. oc set volumes Update volumes on a pod template Example usage # List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount "v1" from container "c1" # (and by removing the volume "v1" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string> 2.6.1.140. 
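In addition to the persistent volume claim and emptyDir examples for oc set volumes above, a config map can be mounted the same way. The following sketch assumes the configmap volume type supported by oc set volume; the config map name and mount path are illustrative placeholders.
# Mount config map 'myconfig' into the deployment config (names and path are illustrative placeholders)
oc set volume dc/myapp --add --name=app-config \
  --type=configmap --configmap-name=myconfig \
  --mount-path=/etc/app-config
# Confirm the volume and its mount point
oc set volume dc/myapp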
oc start-build Start a new build Example usage # Starts build from build config "hello-world" oc start-build hello-world # Starts build from a build "hello-world-1" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config "hello-world" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config "hello-world" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait 2.6.1.141. oc status Show an overview of the current project Example usage # See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest 2.6.1.142. oc tag Tag existing images into image streams Example usage # Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby' with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d 2.6.1.143. oc version Print the client and server version information Example usage # Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format oc version --output json # Print the OpenShift client version information for the current context oc version --client 2.6.1.144. oc wait Experimental: Wait for a specific condition on one or many resources Example usage # Wait for the pod "busybox1" to contain the status condition of type "Ready" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod "busybox1" to contain the status phase to be "Running" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for pod "busybox1" to be Ready oc wait --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' pod/busybox1 # Wait for the service "loadbalancer" to have ingress.
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s 2.6.1.145. oc whoami Return information about the current session Example usage # Display the currently authenticated user oc whoami 2.6.2. Additional resources OpenShift CLI administrator command reference 2.7. OpenShift CLI administrator command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) administrator commands. You must have cluster-admin or equivalent permissions to use these commands. For developer commands, see the OpenShift CLI developer command reference . Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command. 2.7.1. OpenShift CLI (oc) administrator commands 2.7.1.1. oc adm build-chain Output the inputs and dependencies of your builds Example usage # Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all 2.7.1.2. oc adm catalog mirror Mirror an operator-registry catalog Example usage # Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with "oc image mirror" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true 2.7.1.3. oc adm certificate approve Approve a certificate signing request Example usage # Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp 2.7.1.4. oc adm certificate deny Deny a certificate signing request Example usage # Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp 2.7.1.5. oc adm copy-to-node Copy specified files to the node Example usage # Copy a new bootstrap kubeconfig file to node-0 oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0 2.7.1.6. oc adm cordon Mark node as unschedulable Example usage # Mark node "foo" as unschedulable oc adm cordon foo 2.7.1.7. oc adm create-bootstrap-project-template Create a bootstrap project template Example usage # Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml 2.7.1.8. oc adm create-error-template Create an error page template Example usage # Output a template for the error page to stdout oc adm create-error-template 2.7.1.9. 
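The section introduction above notes that administrator commands require cluster-admin or equivalent permissions. A quick self-check before running oc adm commands is sketched below, reusing oc whoami and oc auth can-i from the developer reference.
# Confirm the identity in use for the current context
oc whoami
# Check whether that identity can perform any action on any resource in any namespace
oc auth can-i '*' '*' --all-namespaces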
oc adm create-login-template Create a login template Example usage # Output a template for the login page to stdout oc adm create-login-template 2.7.1.10. oc adm create-provider-selection-template Create a provider selection template Example usage # Output a template for the provider selection page to stdout oc adm create-provider-selection-template 2.7.1.11. oc adm drain Drain node in preparation for maintenance Example usage # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900 2.7.1.12. oc adm groups add-users Add users to a group Example usage # Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2 2.7.1.13. oc adm groups new Create a new group Example usage # Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name 2.7.1.14. oc adm groups prune Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.7.1.15. oc adm groups remove-users Remove users from a group Example usage # Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2 2.7.1.16. oc adm groups sync Sync OpenShift groups with records from an external provider Example usage # Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm 2.7.1.17. 
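oc adm drain, shown above, is normally bracketed by oc adm cordon and oc adm uncordon (both covered elsewhere in this reference). The sketch below shows that sequence for node "foo"; the --ignore-daemonsets and --delete-emptydir-data flags are assumptions based on recent oc releases and may differ in older versions.
# Stop new pods from scheduling, evict existing workloads, then restore scheduling after maintenance
oc adm cordon foo
oc adm drain foo --ignore-daemonsets --delete-emptydir-data
# ... perform maintenance on node "foo" ...
oc adm uncordon foo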
oc adm inspect Collect debugging data for a given resource Example usage # Collect debugging data for the "openshift-apiserver" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions 2.7.1.18. oc adm migrate icsp Update imagecontentsourcepolicy file(s) to imagedigestmirrorset file(s) Example usage # Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir 2.7.1.19. oc adm migrate template-instances Update template instances to point to the latest group-version-kinds Example usage # Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm 2.7.1.20. oc adm must-gather Launch a new instance of a pod for gathering debug information Example usage # Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh 2.7.1.21. oc adm new-project Create a new project Example usage # Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east' 2.7.1.22. oc adm node-image create Create an ISO image for booting the nodes to be added to the target cluster Example usage # Create the ISO image and download it in the current folder oc adm node-image create # Use a different assets folder oc adm node-image create --dir=/tmp/assets # Specify a custom image name oc adm node-image create -o=my-node.iso # Create an ISO to add a single node without using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb # Create an ISO to add a single node with a root device hint and without # using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda 2.7.1.23. oc adm node-image monitor Monitor new nodes being added to an OpenShift cluster Example usage # Monitor a single node being added to a cluster oc adm node-image monitor --ip-addresses 192.168.111.83 # Monitor multiple nodes being added to a cluster by separating each IP address with a comma oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84 2.7.1.24. 
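When the oc adm must-gather output shown above needs to be attached to a support case, it is typically written to a known directory and archived. The directory and archive names below are illustrative placeholders.
# Gather into a specific directory, then package the result for upload
oc adm must-gather --dest-dir=/tmp/must-gather
tar -czf must-gather.tar.gz -C /tmp must-gather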
oc adm node-logs Display and filter node logs Example usage # Show kubelet logs from all control plane nodes oc adm node-logs --role master -u kubelet # See what logs are available in control plane nodes in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all control plane nodes oc adm node-logs --role master --path=cron 2.7.1.25. oc adm ocp-certificates monitor-certificates Watch platform certificates Example usage # Watch platform certificates oc adm ocp-certificates monitor-certificates 2.7.1.26. oc adm ocp-certificates regenerate-leaf Regenerate client and serving certificates of an OpenShift cluster Example usage # Regenerate a leaf certificate contained in a particular secret oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key 2.7.1.27. oc adm ocp-certificates regenerate-machine-config-server-serving-cert Regenerate the machine config operator certificates in an OpenShift cluster Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server 2.7.1.28. oc adm ocp-certificates regenerate-top-level Regenerate the top level certificates in an OpenShift cluster Example usage # Regenerate the signing certificate contained in a particular secret oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key 2.7.1.29. oc adm ocp-certificates remove-old-trust Remove old CAs from ConfigMaps representing platform trust bundles in an OpenShift cluster Example usage # Remove a trust bundle contained in a particular config map oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z 2.7.1.30. oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server Update user-data secrets in an OpenShift cluster to use updated MCO certs Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server 2.7.1.31. oc adm pod-network isolate-projects Isolate project network Example usage # Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret' 2.7.1.32. oc adm pod-network join-projects Join project network Example usage # Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret' 2.7.1.33.
oc adm pod-network make-projects-global Make project network global Example usage # Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share' 2.7.1.34. oc adm policy add-cluster-role-to-group Add a role to groups for all projects in the cluster Example usage # Add the 'cluster-admin' cluster role to the 'cluster-admins' group oc adm policy add-cluster-role-to-group cluster-admin cluster-admins 2.7.1.35. oc adm policy add-cluster-role-to-user Add a role to users for all projects in the cluster Example usage # Add the 'system:build-strategy-docker' cluster role to the 'devuser' user oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser 2.7.1.36. oc adm policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1 2.7.1.37. oc adm policy add-scc-to-group Add a security context constraint to groups Example usage # Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2 2.7.1.38. oc adm policy add-scc-to-user Add a security context constraint to users or a service account Example usage # Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1 2.7.1.39. oc adm policy remove-cluster-role-from-group Remove a role from groups for all projects in the cluster Example usage # Remove the 'cluster-admin' cluster role from the 'cluster-admins' group oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins 2.7.1.40. oc adm policy remove-cluster-role-from-user Remove a role from users for all projects in the cluster Example usage # Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser 2.7.1.41. oc adm policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml 2.7.1.42. 
oc adm policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml 2.7.1.43. oc adm prune builds Remove old completed and failed builds Example usage # Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm 2.7.1.44. oc adm prune deployments Remove old completed and failed deployment configs Example usage # Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm 2.7.1.45. oc adm prune groups Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.7.1.46. oc adm prune images Remove unreferenced images Example usage # See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm 2.7.1.47. 
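Each oc adm prune subcommand above runs as a dry run until --confirm is appended, so a cleanup pass is usually done twice: once to review and once to confirm. A combined sketch, reusing only options shown above, follows.
# Review what each pruner would remove
oc adm prune builds --orphans
oc adm prune deployments --keep-complete=1
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m
# Re-run each command with --confirm once the dry-run output looks correct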
oc adm prune renderedmachineconfigs Prunes rendered MachineConfigs in an OpenShift cluster Example usage # See what the prune command would delete if run with no options oc adm prune renderedmachineconfigs # To actually perform the prune operation, the confirm flag must be appended oc adm prune renderedmachineconfigs --confirm # See what the prune command would delete if run on the worker MachineConfigPool oc adm prune renderedmachineconfigs --pool-name=worker # Prunes 10 oldest rendered MachineConfigs in the cluster oc adm prune renderedmachineconfigs --count=10 --confirm # Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm 2.7.1.48. oc adm prune renderedmachineconfigs list Lists rendered MachineConfigs in an OpenShift cluster Example usage # List all rendered MachineConfigs for the worker MachineConfigPool in the cluster oc adm prune renderedmachineconfigs list --pool-name=worker # List all rendered MachineConfigs in use by the cluster's MachineConfigPools oc adm prune renderedmachineconfigs list --in-use 2.7.1.49. oc adm reboot-machine-config-pool Initiate reboot of the specified MachineConfigPool Example usage # Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This includes all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master 2.7.1.50. oc adm release extract Extract the contents of an update payload to disk Example usage # Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.7.1.51. oc adm release info Display information about a release Example usage # Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.7.1.52.
oc adm release mirror Mirror a release to a different image registry location Example usage # Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \ --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \ --to=registry.example.com/your/repository --apply-release-image-signature 2.7.1.53. oc adm release new Create a new OpenShift release Example usage # Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \ -- 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \ cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 2.7.1.54. oc adm restart-kubelet Restart kubelet on the specified nodes Example usage # Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig 2.7.1.55. oc adm taint Update the taints on one or more nodes Example usage # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule 2.7.1.56. oc adm top images Show usage statistics for images Example usage # Show usage statistics for images oc adm top images 2.7.1.57. 
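A taint added with oc adm taint, shown above, repels pods unless they carry a matching toleration. The following sketch pairs the documented taint with a pod that tolerates it; the pod manifest is a hypothetical illustration and its name and image are placeholders.
# Taint the node, then create a pod whose toleration matches the taint (manifest is illustrative)
oc adm taint nodes foo dedicated=special-user:NoSchedule
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dedicated-workload
spec:
  containers:
  - name: app
    image: registry.example.com/myteam/app:1.0
  tolerations:
  - key: dedicated
    operator: Equal
    value: special-user
    effect: NoSchedule
EOF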
oc adm top imagestreams Show usage statistics for image streams Example usage # Show usage statistics for image streams oc adm top imagestreams 2.7.1.58. oc adm top node Display resource (CPU/memory) usage of nodes Example usage # Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME 2.7.1.59. oc adm top pod Display resource (CPU/memory) usage of pods Example usage # Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel 2.7.1.60. oc adm uncordon Mark node as schedulable Example usage # Mark node "foo" as schedulable oc adm uncordon foo 2.7.1.61. oc adm upgrade Upgrade a cluster or adjust the upgrade channel Example usage # View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true 2.7.1.62. oc adm verify-image-signature Verify the image identity contained in the image signature Example usage # Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 \ --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all 2.7.1.63. oc adm wait-for-node-reboot Wait for nodes to reboot after running oc adm reboot-machine-config-pool Example usage # Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4 2.7.1.64. oc adm wait-for-stable-cluster Wait for the platform operators to become stable Example usage # Wait for all cluster operators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m 2.7.2. Additional resources OpenShift CLI developer command reference
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\"", "yum install openshift-clients", "oc <command>", "brew install openshift-cli", "oc <command>", "oc login -u user1", "Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.", "oc login <cluster_url> --web 1", "Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session.", "Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname>", "oc new-project my-project", "Now using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc new-app https://github.com/sclorg/cakephp-ex", "--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>", "oc logs cakephp-ex-1-deploy", "--> Scaling cakephp-ex-1 to 1 --> Success", "oc project", "Using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc status", "In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.", "oc api-resources", "NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap", "oc help", "OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application", "oc create --help", "Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]", "oc explain pods", "KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. 
FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources", "oc logout", "Logged \"user1\" out on \"https://openshift.example.com\"", "oc completion bash > oc_bash_completion", "sudo cp oc_bash_completion /etc/bash_completion.d/", "cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF", "apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k", "oc status", "status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. 
You can use 'oc get all' to see lists of each of the types described in this example.", "oc project", "Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".", "oc project alice-project", "Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".", "oc login -u system:admin -n default", "oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]", "oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]", "oc config use-context <context_nickname>", "oc config set <property_name> <property_value>", "oc config unset <property_name>", "oc config view", "oc config view --config=<specific_filename>", "oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config view", "apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config set-context `oc config current-context` --namespace=<project_name>", "oc whoami -c", "#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"", "chmod +x <plugin_file>", "sudo mv <plugin_file> /usr/local/bin/.", "oc plugin list", "The following compatible plugins are available: /usr/local/bin/<plugin_file>", "oc ns", "Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-", "Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io", "Print the supported API versions oc api-versions", "Apply the configuration in pod.json to a pod oc apply 
-f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json", "Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true", "View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json", "Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx", "Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account \"foo\" of namespace \"dev\" can list pods # in the namespace \"prod\". # You must be allowed to use impersonation for the global option \"--as\". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo", "Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml", "Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. 
oc auth whoami -o json", "Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80", "Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new", "Print the address of the control plane and cluster services oc cluster-info", "Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need to add the completion to your completion directory oc completion bash > $(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
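## For example (an illustrative addition, not from the original reference; the exact package manager and package name can vary by distribution): ## Fedora / RHEL: sudo dnf install bash-completion ## Debian / Ubuntu: sudo apt-get install bash-completion 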
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # oc shell completion source '$HOME/.kube/completion.bash.inc' \" >> $HOME/.bash_profile source $HOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"${fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > $HOME\\.kube\\completion.ps1 Add-Content $PROFILE \"$HOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content $PROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the $PROFILE script oc completion powershell >> $PROFILE", "Display the current-context oc config current-context", "Delete the minikube cluster oc config delete-cluster minikube", "Delete the context for the minikube cluster oc config delete-context minikube", "Delete the minikube user oc config delete-user minikube", "List the clusters that oc knows about oc config get-clusters", "List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context", "List the users that oc knows about oc config get-users", "Generate a new admin kubeconfig oc config new-admin-kubeconfig", "Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig", "Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's API server oc config refresh-ca-bundle --dry-run", "Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name", "Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster 
entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4", "Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin", "Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Enable new exec auth plugin for the \"cluster-admin\" entry with interactive mode oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never # Define new exec auth plugin arguments for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-", "Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace", "Use the context for the minikube cluster oc config use-context minikube", "Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. 
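# (Illustrative addition, not part of the original examples: because 'oc cp' depends on 'tar' inside the target container, you can first confirm that 'tar' is available there) oc exec <some-pod> -- tar --version 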
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json", "Create a new build oc create build myapp", "Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10", "Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "Create a deployment named my-dep that runs the busybox image oc create deployment my-dep 
--image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 # Create a deployment named my-dep that runs multiple containers oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx", "Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx", "Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones", "Create a new image stream oc create imagestream mysql", "Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0", "Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"", "Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob", "Create a new namespace named my-namespace oc create namespace my-namespace", "Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%", "Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc 
create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort", "Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status", "Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets", "Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. 
If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com", "Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend", "If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key", "Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"", "Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com", "Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080", "Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080", "Create a new service account named my-service-account oc create serviceaccount my-service-account", "Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"", "Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping 
acme_ldap:adamjones ajones", "Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows node # Note: the chosen image must match the Windows Server version (2019, 2022) of the node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns", "Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all", "Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend", "Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -", "Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status'", "List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal", "Get output from running the 'date' command from pod mypod, using the first container by default oc 
exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date", "Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2", "Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx", "Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf", "List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status", "Starts an auth code flow to the issuer URL with the client ID and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer URL with a different callback address oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343", "Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt", "Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz", "Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # 
Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]", "Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64", "Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered 
manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all exist # You must use a registry with sparse registry support enabled oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=linux/386 --keep-manifest-list=true", "Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm", "Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-", "Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 # Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080 oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080", "Log out oc logout", "Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest 
deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container", "List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml", "Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp", "Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"", "Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh", "Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "List all available plugins oc plugin list", "Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z 
serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml", "Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000", "Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -", "Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project", "List all projects oc projects", "To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static 
content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api", "Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS", "Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json", "Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json", "Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx", "View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3", "Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json", "Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx", "Restart all deployments in test-namespace namespace oc rollout restart deployment -n test-namespace # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx", "Resume an already paused deployment oc rollout resume dc/nginx", "Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend", "Watch the status of the latest rollout oc rollout status dc/nginx", "Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. 
The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3", "Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled", "Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir", "Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web", "Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount", "Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name", "Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"", "Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret", "Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir", "Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh", "Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from 
a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp", "Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all", "Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30", "Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the 
weight to all backends to zero oc set route-backends web --zero", "Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -", "Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml", "Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml", "Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main", "List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) 
oc set volume dc/myapp --add -m /data --source=<json-string>", "Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait", "See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest", "Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d", "Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format oc version --output json # Print the OpenShift client version information for the current context oc version --client", "Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for pod \"busybox1\" to be Ready oc wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1 # Wait for the service \"loadbalancer\" to have ingress. 
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s", "Display the currently authenticated user oc whoami", "Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all", "Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true", "Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp", "Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp", "Copy a new bootstrap kubeconfig file to node-0 oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0", "Mark node \"foo\" as unschedulable oc adm cordon foo", "Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml", "Output a template for the error page to stdout oc adm create-error-template", "Output a template for the login page to stdout oc adm create-login-template", "Output a template for the provider selection page to stdout oc adm create-provider-selection-template", "Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900", "Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2", "Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name", "Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a 
list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2", "Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm", "Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions", "Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir", "Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm", "Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh", "Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'", "Create the ISO image and download it in the current folder oc adm node-image create # Use a different assets folder oc adm node-image create --dir=/tmp/assets # Specify a custom image name oc adm node-image create -o=my-node.iso # Create an ISO to add a single node without using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb # Create an ISO to add a single node with a root device hint and without # using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda", "Monitor a single node being added to a cluster oc adm node-image monitor --ip-addresses 192.168.111.83 # Monitor multiple nodes being added to a cluster by 
separating each IP address with a comma oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84", "Show kubelet logs from all control plane nodes oc adm node-logs --role master -u kubelet # See what logs are available in control plane nodes in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all control plane nodes oc adm node-logs --role master --path=cron", "Watch platform certificates oc adm ocp-certificates monitor-certificates", "Regenerate a leaf certificate contained in a particular secret oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key", "Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server", "Regenerate the signing certificate contained in a particular secret oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key", "Remove a trust bundled contained in a particular config map oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z", "Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server", "Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'", "Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'", "Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'", "Add the 'cluster-admin' cluster role to the 'cluster-admins' group oc adm policy add-cluster-role-to-group cluster-admin cluster-admins", "Add the 'system:build-strategy-docker' cluster role to the 'devuser' user oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1", "Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2", "Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1", "Remove the 'cluster-admin' cluster role from the 'cluster-admins' 
group oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins", "Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml", "Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm", "Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm", "Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images 
--registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm", "See what the prune command would delete if run with no options oc adm prune renderedmachineconfigs # To actually perform the prune operation, the confirm flag must be appended oc adm prune renderedmachineconfigs --confirm # See what the prune command would delete if run on the worker MachineConfigPool oc adm prune renderedmachineconfigs --pool-name=worker # Prunes 10 oldest rendered MachineConfigs in the cluster oc adm prune renderedmachineconfigs --count=10 --confirm # Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm", "List all rendered MachineConfigs for the worker MachineConfigPool in the cluster oc adm prune renderedmachineconfigs list --pool-name=worker # List all rendered MachineConfigs in use by the cluster's MachineConfigPools oc adm prune renderedmachineconfigs list --in-use", "Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This include all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master", "Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature", 
"Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11", "Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig", "Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule", "Show usage statistics for images oc adm top images", "Show usage statistics for image streams oc adm top imagestreams", "Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME", "Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel", "Mark node \"foo\" as schedulable oc adm uncordon foo", "View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true", "Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the 
image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all", "Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4", "Wait for all cluster operators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc
Chapter 15. Network policy
Chapter 15. Network policy 15.1. About network policy As a cluster administrator, you can define network policies that restrict traffic to pods in your cluster. 15.1.1. About network policy In a cluster using a Kubernetes Container Network Interface (CNI) plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.10, OpenShift SDN supports using network policy in its default network isolation mode. The OpenShift SDN cluster network provider now supports the egress network policy as specified by the egress field. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, and SCTP protocols. Other protocols are not affected. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. 
apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in the following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in the previous samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. This allows the pods with the label role=frontend to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 15.1.2. Optimizations for network policy Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. Note The guidelines for efficient use of network policy rules apply only to the OpenShift SDN cluster network provider. It is inefficient to apply NetworkPolicy objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a podSelector . For example, if the spec podSelector and the ingress podSelector within a NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. When designing your network policy, refer to the following guidelines: Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. NetworkPolicy objects that select a whole namespace, by using the namespaceSelector or an empty podSelector , generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. 15.1.3. Next
steps Creating a network policy Optional: Defining a default network policy 15.1.4. Additional resources Projects and namespaces Configuring multitenant network policy NetworkPolicy API 15.2. Logging network policy events As a cluster administrator, you can configure network policy audit logging for your cluster and enable logging for one or more namespaces. Note Audit logging of network policies is available for only the OVN-Kubernetes cluster network provider . 15.2.1. Network policy audit logging The OVN-Kubernetes cluster network provider uses Open Virtual Network (OVN) ACLs to manage network policy. Audit logging exposes allow and deny ACL events. You can configure the destination for network policy audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster. Network policy audit logging is enabled per namespace by annotating the namespace with the k8s.ovn.org/acl-logging key as in the following example: Example namespace annotation kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { "deny": "info", "allow": "info" } The logging format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0 . An example log entry might resemble the following: Example ACL deny log entry 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 The following table describes namespace annotation values: Table 15.1. Network policy audit logging namespace annotation Annotation Value k8s.ovn.org/acl-logging You must specify at least one of allow , deny , or both to enable network policy audit logging for a namespace. deny Optional: Specify alert , warning , notice , info , or debug . allow Optional: Specify alert , warning , notice , info , or debug . 15.2.2. Network policy audit configuration The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates default values for network policy audit logging feature. Audit logging configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 The following table describes the configuration fields for network policy audit logging. Table 15.2. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . 15.2.3. 
Configuring network policy auditing for a cluster As a cluster administrator, you can customize network policy audit logging for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To customize the network policy audit logging configuration, enter the following command: USD oc edit network.operator.openshift.io/cluster Tip You can alternatively customize and apply the following YAML to configure audit logging: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 Verification To create a namespace with network policies complete the following steps: Create a namespace for verification: USD cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }' EOF Example output namespace/verify-audit-logging created Enable audit logging: USD oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "alert" }' namespace/verify-audit-logging annotated Create network policies for the namespace: USD cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: namespace: verify-audit-logging EOF Example output networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created Create a pod for source traffic in the default namespace: USD cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF Create two pods in the verify-audit-logging namespace: USD for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF done Example output pod/client created pod/server created To generate traffic and produce network policy audit log entries, complete the following steps: Obtain the IP address for pod named server in the verify-audit-logging namespace: USD POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}') Ping the IP address from the command from the pod named client in the default namespace and confirm that all packets are dropped: USD oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed: USD oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 
64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_allow-from-same-namespace_0", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_allow-from-same-namespace_0", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 15.2.4. Enabling network policy audit logging for a namespace As a cluster administrator, you can enable network policy audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To enable network policy audit logging for a namespace, enter the following command: USD oc annotate namespace <namespace> \ k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }' where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to enable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { "deny": "alert", "allow": "notice" } Example output namespace/verify-audit-logging annotated Verification Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 15.2.5. Disabling network policy audit logging for a namespace As a cluster administrator, you can disable network policy audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). 
Log in to the cluster with a user with cluster-admin privileges. Procedure To disable network policy audit logging for a namespace, enter the following command: USD oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging- where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to disable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null Example output namespace/verify-audit-logging annotated 15.2.6. Additional resources About network policy 15.3. Creating a network policy As a user with the admin role, you can create a network policy for a namespace. 15.3.1. Creating a network policy To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: [] .Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 15.3.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 15.3.3. Additional resources Accessing the web console 15.4. Viewing a network policy As a user with the admin role, you can view a network policy for a namespace. 15.4.1. Viewing network policies You can examine the network policies in a namespace. 
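Where a cluster hosts many namespaces, it can be useful to take a quick cluster-wide inventory before inspecting an individual policy. The following is a minimal sketch that assumes you are permitted to list policies in all namespaces; it relies only on standard oc get options ( -A and custom-columns ), and the column expressions are illustrative:
# List every NetworkPolicy in the cluster with its namespace
$ oc get networkpolicy -A
# Show each policy next to the pod selector it applies to
$ oc get networkpolicy -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,POD-SELECTOR:.spec.podSelector
The second form makes it easier to spot namespaces that have no policies defined at all.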
Note If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command: USD oc get networkpolicy Optional: To examine a specific network policy, enter the following command: USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example: USD oc describe networkpolicy allow-same-namespace Output for oc describe command Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 15.4.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 15.5. Editing a network policy As a user with the admin role, you can edit an existing network policy for a namespace. 15.5.1. Editing a network policy You can edit a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure Optional: To list the network policy objects in a namespace, enter the following command: USD oc get networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the network policy object. If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. 
<policy_file> Specifies the name of the file containing the network policy. If you need to update the network policy object directly, enter the following command: USD oc edit networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the network policy object is updated. USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 15.5.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 15.5.3. Additional resources Creating a network policy 15.6. Deleting a network policy As a user with the admin role, you can delete a network policy from a namespace. 15.6.1. Deleting a network policy You can delete a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure To delete a network policy object, enter the following command: USD oc delete networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 15.7. Defining a default network policy for projects As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one. 15.7.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. 
To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestTemplate: name: <template_name> After you save your changes, create a new project to verify that your changes were successfully applied. 15.7.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. 
Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 15.8. Configuring multitenant isolation with network policy As a cluster administrator, you can configure your network policies to provide multitenant network isolation. Note If you are using the OpenShift SDN cluster network provider, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set. 15.8.1. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . 
Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 15.8.2. Next steps Defining a default network policy 15.8.3. Additional resources OpenShift SDN network isolation modes
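To apply the same multitenant isolation to several projects, you can keep the four NetworkPolicy definitions from this section in a single file and apply that file to each project. The following is a minimal sketch rather than part of the official procedure; the multitenant-isolation.yaml file name is an arbitrary example, and the policy definitions are the same ones created in the preceding steps:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  # Accept ingress traffic that arrives through the default Ingress Controller
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  # Accept ingress traffic from the cluster monitoring stack
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  # Accept ingress traffic from pods in the same namespace
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-kube-apiserver-operator
spec:
  # Accept ingress traffic from the kube-apiserver-operator webhook controller
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-kube-apiserver-operator
      podSelector:
        matchLabels:
          app: kube-apiserver-operator
  policyTypes:
  - Ingress
You can then run oc apply -f multitenant-isolation.yaml -n <project> for each project that requires isolation, instead of repeating the individual oc create commands.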
[ "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }", "2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "oc edit network.operator.openshift.io/cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF", "namespace/verify-audit-logging created", "oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"alert\" }'", "namespace/verify-audit-logging annotated", "cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: namespace: verify-audit-logging EOF", "networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created", "cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF", "for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done", "pod/client created pod/server created", "POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')", "oc exec -it client -n default -- 
/bin/ping -c 2 USDPOD_IP", "PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms", "oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }", "namespace/verify-audit-logging annotated", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null", "namespace/verify-audit-logging annotated", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", 
"networkpolicy.networking.k8s.io/default-deny created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy <policy_name> -n <namespace>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/default-deny deleted", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: 
allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/network-policy
Chapter 1. About the migration guide
Chapter 1. About the migration guide This guide details the changes in the Apache Camel components that you must consider when migrating your application. This guide provides information about the following changes. Supported Java versions Changes to Apache Camel components and deprecated components Changes to APIs and deprecated APIs Updates to EIP Updates to tracing and health checks
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/about-migration-guide
Chapter 1. Customizing nodes
Chapter 1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.15.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Review the configuration file and confirm that KMOD_CONTAINER_BUILD_FILE is set to Dockerfile.rhel : USD cat simple-kmod.conf Example configuration file KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of kmods-via-containers@simple-kmod.service for your kernel module, simple-kmod in this example: USD sudo make install Enable the kmods-via-containers@simple-kmod.service instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable kmods-via-containers@simple-kmod.service --now Review the service status: USD sudo systemctl status kmods-via-containers@simple-kmod.service Example output ● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/kmods-via-containers@simple-kmod.service; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when the OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages.
This must include new kernel packages as they are needed to match newly installed kernels. 1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the software: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmods-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.15.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: kmods-via-containers@simple-kmod.service enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the kmods-via-containers@simple-kmod.service service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2 This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor on the server. You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server. Tang Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible. Clevis is an automated decryption framework used to implement decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments For Assisted installer deployments: Each cluster can only have a single encryption method, Tang or TPM Encryption can be enabled on some or all nodes There is no Tang threshold; all servers must be valid and operational Encryption applies to the installation disks only, not to the workload disks Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. In the case of offline provisioning, the offline server is accessed using an included advertisement, and only uses that supplied advertisement if the number of online servers do not meet the set threshold. 
For example, the threshold value of 2 in the following configuration can be reached by accessing two Tang servers, with the offline server available as a backup, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.15.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" 4 threshold: 2 5 openshift: fips: true 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Optional: Include this field for offline provisioning. Ignition will provision the Tang server binding rather than fetching the advertisement from the server at runtime. This lets the server be unavailable at provisioning time. 5 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. Note If you require TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible to reach the threshold value by using a single encryption mode. For example, if you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available. 1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure provided one device remains available. Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a pristine, non-degraded state. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to offer convenient, short-hand syntax for writing and validating machine configs. For more information, see "Creating machine configs with Butane". You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. 
Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node. This is required on most Dell systems. Check the manual for your specific system. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang1.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang1.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command generates a thumbprint of the exchange key. No data passes to the encryption command during this step; /dev/null exists here as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Optional: For offline Tang provisioning: Obtain the advertisement from the server using the curl command. Replace http://tang2.example.com:7500 with the URL of your Tang server: USD curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws Expected output {"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"} Provide the advertisement file to Clevis for encryption: USD clevis-encrypt-tang '{"url":"http://tang2.example.com:7500","adv":"adv.jws"}' < /dev/null > /dev/null If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. 
Butane config example for a boot device variant: openshift version: 4.15.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: "{"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"}" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see "About disk encryption". 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Optional: Specify the advertisement for your offline Tang server in valid JSON format. 10 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information about this topic, see "Configuring an encryption threshold". 11 Include this section if you want to mirror the boot disk. For more details, see "About disk mirroring". 12 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 13 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. If you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. 
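The preceding example configures both disk encryption and mirroring for compute nodes. If your nodes use only TPM v2 encryption and you do not need Tang servers, mirroring, or FIPS mode, the Butane config can be much smaller. The following is a minimal sketch for control plane nodes under those assumptions; the master-storage.bu file name is an arbitrary example:
variant: openshift
version: 4.15.0
metadata:
  name: master-storage
  labels:
    machineconfiguration.openshift.io/role: master
boot_device:
  # Match the instruction set architecture of the nodes, for example aarch64 or ppc64le
  layout: x86_64
  luks:
    # Bind the root file system passphrase to the TPM v2 secure cryptoprocessor
    tpm2: true
As with the full example, you would generate the manifest with a command such as butane USDHOME/clusterconfig/master-storage.bu -o <installation_directory>/openshift/99-master-storage.yaml and keep the Butane config in case you need to update the manifest later.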
Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node, for example: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. 
The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices used by the software RAID device. List the file systems mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. 
Note OpenShift Container Platform 4.15 does not support software RAIDs on the installation drive. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.15.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the mirrored boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.15.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 1.4.5. Configuring an Intel(R) Virtual RAID on CPU (VROC) data volume Intel(R) VROC is a type of hybrid RAID, where some of the maintenance is offloaded to the hardware, but appears as software RAID to the operating system. Important Support for Intel(R) VROC is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following procedure configures an Intel(R) VROC-enabled RAID1. Prerequisites You have a system with Intel(R) Volume Management Device (VMD) enabled. Procedure Create the Intel(R) Matrix Storage Manager (IMSM) RAID container by running the following command: USD mdadm -CR /dev/md/imsm0 -e \ imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1 1 The RAID device names. In this example, there are two devices listed. If you provide more than two device names, you must adjust the -n flag. For example, listing three devices would use the flag -n3 . Create the RAID1 storage inside the container: Create a dummy RAID0 volume in front of the real RAID1 volume by running the following command: USD mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean Create the real RAID1 array by running the following command: USD mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0 Stop both RAID0 and RAID1 member arrays and delete the dummy RAID0 array with the following commands: USD mdadm -S /dev/md/dummy \ mdadm -S /dev/md/coreos \ mdadm --kill-subarray=0 /dev/md/imsm0 Restart the RAID1 arrays by running the following command: USD mdadm -A /dev/md/coreos /dev/md/imsm0 Install RHCOS on the RAID1 device: Get the UUID of the IMSM container by running the following command: USD mdadm --detail --export /dev/md/imsm0 Install RHCOS and include the rd.md.uuid kernel argument by running the following command: USD coreos-installer install /dev/md/coreos \ --append-karg rd.md.uuid=<md_UUID> 1 ... 1 The UUID of the IMSM container. Include any additional coreos-installer arguments you need to install RHCOS. 1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . 
Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography .
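The chrony steps in this section can also be scripted end to end. The following is a minimal sketch only, assuming the butane and oc binaries are installed, the 99-worker-chrony.bu file from the procedure exists in the current directory, and the cluster is already running:

#!/bin/bash
set -euo pipefail

# Transpile the Butane config into a MachineConfig manifest.
butane 99-worker-chrony.bu -o 99-worker-chrony.yaml

# Apply the manifest to the running cluster.
oc apply -f ./99-worker-chrony.yaml

# Review the applied machine config, including the converted file mode.
oc get mc 99-worker-chrony -o yaml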
[ "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane", "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane", "chmod +x butane", "echo USDPATH", "butane <butane_file>", "variant: openshift version: 4.15.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-custom.bu -o ./99-worker-custom.yaml", "oc create -f 99-worker-custom.yaml", "./openshift-install create manifests --dir <installation_directory>", "cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "cd kmods-via-containers/", "sudo make install", "sudo systemctl daemon-reload", "cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "cd kvc-simple-kmod", "cat simple-kmod.conf", "KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"", "sudo make install", "sudo kmods-via-containers build simple-kmod USD(uname -r)", "sudo systemctl enable [email protected] --now", "sudo systemctl status [email protected]", "● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "dmesg | grep 'Hello world'", "[ 6420.761332] Hello world from simple_kmod.", "sudo cat /proc/simple-procfs-kmod", "simple-procfs-kmod number = 0", "sudo spkut 44", "KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "FAKEROOT=USD(mktemp -d)", "cd kmods-via-containers", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd ../kvc-simple-kmod", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree", "variant: openshift version: 4.15.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true", "butane 99-simple-kmod.bu --files-dir . 
-o 99-simple-kmod.yaml", "oc create -f 99-simple-kmod.yaml", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "variant: openshift version: 4.15.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: \"{\\\"payload\\\": \\\"...\\\", \\\"protected\\\": \\\"...\\\", \\\"signature\\\": \\\"...\\\"}\" 4 threshold: 2 5 openshift: fips: true", "sudo yum install clevis", "clevis-encrypt-tang '{\"url\":\"http://tang1.example.com:7500\"}' < /dev/null > /dev/null 1", "The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1", "curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws", "{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}", "clevis-encrypt-tang '{\"url\":\"http://tang2.example.com:7500\",\"adv\":\"adv.jws\"}' < /dev/null > /dev/null", "./openshift-install create manifests --dir <installation_directory> 1", "variant: openshift version: 4.15.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: \"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}\" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13", "butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml", "oc debug node/compute-1", "chroot /host", "cryptsetup status root", "/dev/mapper/root is active and is in use. 
type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write", "clevis luks list -d /dev/sda4 1", "1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1", "cat /proc/mdstat", "Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>", "mdadm --detail /dev/md126", "/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8", "mount | grep /dev/md", "/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)", "variant: openshift version: 4.15.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true", "variant: openshift version: 4.15.0 metadata: name: raid1-alt-storage labels: 
machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true", "butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1", "mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1", "mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean", "mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0", "mdadm -S /dev/md/dummy mdadm -S /dev/md/coreos mdadm --kill-subarray=0 /dev/md/imsm0", "mdadm -A /dev/md/coreos /dev/md/imsm0", "mdadm --detail --export /dev/md/imsm0", "coreos-installer install /dev/md/coreos --append-karg rd.md.uuid=<md_UUID> 1", "variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installation_configuration/installing-customizing
function::user_uint64
function::user_uint64 Name function::user_uint64 - Retrieves an unsigned 64-bit integer value stored in user space Synopsis Arguments addr the user space address to retrieve the unsigned 64-bit integer from Description Returns the unsigned 64-bit integer value from a given user space address. Returns zero when user space data is not accessible.
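As a brief illustration of how this function is typically used (a sketch only, assuming the standard syscall tapset, which exposes the write() buffer address as buf_uaddr):

# Print the first unsigned 64-bit word of every buffer the target command
# passes to write(); user_uint64 returns zero if the address is not accessible.
stap -e 'probe syscall.write {
  if (pid() == target())
    printf("%s: first qword of buf = %d\n", execname(), user_uint64(buf_uaddr))
}' -c "echo hello"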
[ "user_uint64:long(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-uint64
Chapter 5. Using Jobs and DaemonSets
Chapter 5. Using Jobs and DaemonSets 5.1. Running background tasks on nodes automatically with daemon sets As an administrator, you can create and use daemon sets to run replicas of a pod on specific or all nodes in an OpenShift Container Platform cluster. A daemon set ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to the cluster. As nodes are removed from the cluster, those pods are removed through garbage collection. Deleting a daemon set will clean up the pods it created. You can use daemon sets to create shared storage, run a logging pod on every node in your cluster, or deploy a monitoring agent on every node. For security reasons, the cluster administrators and the project administrators can create daemon sets. For more information on daemon sets, see the Kubernetes documentation . Important Daemon set scheduling is incompatible with project's default node selector. If you fail to disable it, the daemon set gets restricted by merging with the default node selector. This results in frequent pod recreates on the nodes that got unselected by the merged node selector, which in turn puts unwanted load on the cluster. 5.1.1. Scheduled by default scheduler A daemon set ensures that all eligible nodes run a copy of a pod. Normally, the node that a pod runs on is selected by the Kubernetes scheduler. However, daemon set pods are created and scheduled by the daemon set controller. That introduces the following issues: Inconsistent pod behavior: Normal pods waiting to be scheduled are created and in Pending state, but daemon set pods are not created in Pending state. This is confusing to the user. Pod preemption is handled by default scheduler. When preemption is enabled, the daemon set controller will make scheduling decisions without considering pod priority and preemption. The ScheduleDaemonSetPods feature, enabled by default in OpenShift Container Platform, lets you schedule daemon sets using the default scheduler instead of the daemon set controller, by adding the NodeAffinity term to the daemon set pods, instead of the spec.nodeName term. The default scheduler is then used to bind the pod to the target host. If node affinity of the daemon set pod already exists, it is replaced. The daemon set controller only performs these operations when creating or modifying daemon set pods, and no changes are made to the spec.template of the daemon set. kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr #... spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #... In addition, a node.kubernetes.io/unschedulable:NoSchedule toleration is added automatically to daemon set pods. The default scheduler ignores unschedulable Nodes when scheduling daemon set pods. 5.1.2. Creating daemonsets When creating daemon sets, the nodeSelector field is used to indicate the nodes on which the daemon set should deploy replicas. 
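For example, a pod replica lands only on nodes that already carry the label named in the daemon set's nodeSelector field (see the sample daemon set in the procedure below, which selects role: worker). The following commands are an illustrative sketch; the node name is a placeholder:

# Label a node so it matches the daemon set's node selector.
oc label node compute-1 role=worker

# List the nodes that would receive a daemon set pod replica.
oc get nodes -l role=worker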
Prerequisites Before you start using daemon sets, disable the default project-wide node selector in your namespace, by setting the namespace annotation openshift.io/node-selector to an empty string: USD oc patch namespace myproject -p \ '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}' Tip You can alternatively apply the following YAML to disable the default project-wide node selector for a namespace: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #... If you are creating a new project, overwrite the default node selector: USD oc adm new-project <name> --node-selector="" Procedure To create a daemon set: Define the daemon set yaml file: apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #... 1 The label selector that determines which pods belong to the daemon set. 2 The pod template's label selector. Must match the label selector above. 3 The node selector that determines on which nodes pod replicas should be deployed. A matching label must be present on the node. Create the daemon set object: USD oc create -f daemonset.yaml To verify that the pods were created, and that each node has a pod replica: Find the daemonset pods: USD oc get pods Example output hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m View the pods to verify the pod has been placed onto the node: USD oc describe pod/hello-daemonset-cx6md|grep Node Example output Node: openshift-node01.hostname.com/10.14.20.134 USD oc describe pod/hello-daemonset-e3md9|grep Node Example output Node: openshift-node02.hostname.com/10.14.20.137 Important If you update a daemon set pod template, the existing pod replicas are not affected. If you delete a daemon set and then create a new daemon set with a different template but the same label selector, it recognizes any existing pod replicas as having matching labels and thus does not update them or create new replicas despite a mismatch in the pod template. If you change node labels, the daemon set adds pods to nodes that match the new labels and deletes pods from nodes that do not match the new labels. To update a daemon set, force new pod replicas to be created by deleting the old replicas or nodes. 5.2. Running tasks in pods using jobs A job executes a task in your OpenShift Container Platform cluster. A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job will clean up any pod replicas it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Sample Job specification apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 #... 1 The pod replicas a job should run in parallel. 2 Successful pod completions are needed to mark a job completed. 3 The maximum duration the job can run. 4 The number of retries for a job. 
5 The template for the pod the controller creates. 6 The restart policy of the pod. Additional resources Jobs in the Kubernetes documentation 5.2.1. Understanding jobs and cron jobs A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job cleans up any pods it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. There are two possible resource types that allow creating run-once objects in OpenShift Container Platform: Job A regular job is a run-once object that creates a task and ensures the job finishes. There are three main types of task suitable to run as a job: Non-parallel jobs: A job that starts only one pod, unless the pod fails. The job is complete as soon as its pod terminates successfully. Parallel jobs with a fixed completion count: a job that starts multiple pods. The job represents the overall task and is complete when there is one successful pod for each value in the range 1 to the completions value. Parallel jobs with a work queue: A job with multiple parallel worker processes in a given pod. OpenShift Container Platform coordinates pods to determine what each should work on or use an external queue service. Each pod is independently capable of determining whether or not all peer pods are complete and that the entire job is done. When any pod from the job terminates with success, no new pods are created. When at least one pod has terminated with success and all pods are terminated, the job is successfully completed. When any pod has exited with success, no other pod should be doing any work for this task or writing any output. Pods should all be in the process of exiting. For more information about how to make use of the different types of job, see Job Patterns in the Kubernetes documentation. Cron job A job can be scheduled to run multiple times, using a cron job. A cron job builds on a regular job by allowing you to specify how the job should be run. Cron jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Cron jobs are useful for creating periodic and recurring tasks, like running backups or sending emails. Cron jobs can also schedule individual tasks for a specific time, such as if you want to schedule a job for a low activity period. A cron job creates a Job object based on the timezone configured on the control plane node that runs the cronjob controller. Warning A cron job creates a Job object approximately once per execution time of its schedule, but there are circumstances in which it fails to create a job or two jobs might be created. Therefore, jobs must be idempotent and you must configure history limits. 5.2.1.1. Understanding how to create jobs Both resource types require a job configuration that consists of the following key parts: A pod template, which describes the pod that OpenShift Container Platform creates. The parallelism parameter, which specifies how many pods running in parallel at any point in time should execute a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . The completions parameter, specifying how many successful pod completions are needed to finish a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . For parallel jobs with a fixed completion count, specify a value. For parallel jobs with a work queue, leave unset. When unset defaults to the parallelism value. 5.2.1.2. 
Understanding how to set a maximum duration for jobs When defining a job, you can define its maximum duration by setting the activeDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when a first pod gets scheduled in the system, and defines how long a job can be active. It tracks overall time of an execution. After reaching the specified timeout, the job is terminated by OpenShift Container Platform. 5.2.1.3. Understanding how to set a job back off policy for pod failure A job can be considered failed, after a set amount of retries due to a logical error in configuration or other similar reasons. Failed pods associated with the job are recreated by the controller with an exponential back off delay ( 10s , 20s , 40s ...) capped at six minutes. The limit is reset if no new failed pods appear between controller checks. Use the spec.backoffLimit parameter to set the number of retries for a job. 5.2.1.4. Understanding how to configure a cron job to remove artifacts Cron jobs can leave behind artifact resources such as jobs or pods. As a user it is important to configure history limits so that old jobs and their pods are properly cleaned. There are two fields within cron job's spec responsible for that: .spec.successfulJobsHistoryLimit . The number of successful finished jobs to retain (defaults to 3). .spec.failedJobsHistoryLimit . The number of failed finished jobs to retain (defaults to 1). Tip Delete cron jobs that you no longer need: USD oc delete cronjob/<cron_job_name> Doing this prevents them from generating unnecessary artifacts. You can suspend further executions by setting the spec.suspend to true. All subsequent executions are suspended until you reset to false . 5.2.1.5. Known limitations The job specification restart policy only applies to the pods , and not the job controller . However, the job controller is hard-coded to keep retrying jobs to completion. As such, restartPolicy: Never or --restart=Never results in the same behavior as restartPolicy: OnFailure or --restart=OnFailure . That is, when a job fails it is restarted automatically until it succeeds (or is manually discarded). The policy only sets which subsystem performs the restart. With the Never policy, the job controller performs the restart. With each attempt, the job controller increments the number of failures in the job status and create new pods. This means that with each failed attempt, the number of pods increases. With the OnFailure policy, kubelet performs the restart. Each attempt does not increment the number of failures in the job status. In addition, kubelet will retry failed jobs starting pods on the same nodes. 5.2.2. Creating jobs You create a job in OpenShift Container Platform by creating a job object. Procedure To create a job: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 #... 1 Optional: Specify how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, defaults to 1 . 2 Optional: Specify how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, defaults to 1 . 
For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset, defaults to the parallelism value. 3 Optional: Specify the maximum duration the job can run. 4 Optional: Specify the number of retries for a job. This field defaults to six. 5 Specify the template for the pod the controller creates. 6 Specify the restart policy of the pod: Never . Do not restart the job. OnFailure . Restart the job only if it fails. Always . Always restart the job. For details on how OpenShift Container Platform uses restart policy with failed containers, see the Example States in the Kubernetes documentation. Create the job: USD oc create -f <file-name>.yaml Note You can also create and launch a job from a single command using oc create job . The following command creates and launches a job similar to the one specified in the example: USD oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)' 5.2.3. Creating cron jobs You create a cron job in OpenShift Container Platform by creating a job object. Procedure To create a cron job: Create a YAML file similar to the following: apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: "*/1 * * * *" 1 concurrencyPolicy: "Replace" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: "cronjobpi" spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 9 #... 1 Schedule for the job specified in cron format . In this example, the job will run every minute. 2 An optional concurrency policy, specifying how to treat concurrent jobs within a cron job. Only one of the following concurrent policies may be specified. If not specified, this defaults to allowing concurrent executions. Allow allows cron jobs to run concurrently. Forbid forbids concurrent runs, skipping the run if the previous run has not finished yet. Replace cancels the currently running job and replaces it with a new one. 3 An optional deadline (in seconds) for starting the job if it misses its scheduled time for any reason. Missed job executions will be counted as failed ones. If not specified, there is no deadline. 4 An optional flag allowing the suspension of a cron job. If set to true , all subsequent executions will be suspended. 5 The number of successful finished jobs to retain (defaults to 3). 6 The number of failed finished jobs to retain (defaults to 1). 7 Job template. This is similar to the job example. 8 Sets a label for jobs spawned by this cron job. 9 The restart policy of the pod. This does not apply to the job controller. Note The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. Create the cron job: USD oc create -f <file-name>.yaml Note You can also create and launch a cron job from a single command using oc create cronjob . The following command creates and launches a cron job similar to the one specified in the example: USD oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)' With oc create cronjob , the --schedule option accepts schedules in cron format .
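The cron job example can be exercised from the command line as follows; this is a sketch only, reusing the pi cron job from the example above, and the oc patch call simply flips the spec.suspend field described earlier:

# Create the example cron job, then watch the Job objects it spawns.
oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'
oc get cronjob pi
oc get jobs --watch

# Suspend further executions without deleting the cron job.
oc patch cronjob pi -p '{"spec":{"suspend":true}}'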
[ "kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr # spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #", "oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #", "oc adm new-project <name> --node-selector=\"\"", "apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #", "oc create -f daemonset.yaml", "oc get pods", "hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m", "oc describe pod/hello-daemonset-cx6md|grep Node", "Node: openshift-node01.hostname.com/10.14.20.134", "oc describe pod/hello-daemonset-e3md9|grep Node", "Node: openshift-node02.hostname.com/10.14.20.137", "apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #", "oc delete cronjob/<cron_job_name>", "apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #", "oc create -f <file-name>.yaml", "oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'", "apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 concurrencyPolicy: \"Replace\" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 9 #", "oc create -f <file-name>.yaml", "oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/nodes/using-jobs-and-daemonsets
Part I. Troubleshoot
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/troubleshoot
3.6. Suspending an XFS File System
3.6. Suspending an XFS File System To suspend or resume write activity to a file system, use the following command: Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. Note The xfs_freeze utility is provided by the xfsprogs package, which is only available on x86_64. To suspend (that is, freeze) an XFS file system, use: To unfreeze an XFS file system, use: When taking an LVM snapshot, it is not necessary to use xfs_freeze to suspend the file system first. Rather, the LVM management tools will automatically suspend the XFS file system before taking the snapshot. For more information about freezing and unfreezing an XFS file system, see man xfs_freeze .
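A typical freeze-and-snapshot sequence can be wrapped in a short script; this is a sketch only, and the mount point and the snapshot step are placeholders for your environment:

#!/bin/bash
set -euo pipefail
MOUNT_POINT=/mount/point

# Freeze the XFS file system so the device snapshot is consistent.
xfs_freeze -f "$MOUNT_POINT"

# Make sure the file system is unfrozen even if the snapshot step fails.
trap 'xfs_freeze -u "$MOUNT_POINT"' EXIT

# Replace this line with your storage array or hypervisor snapshot command.
echo "taking device snapshot of $MOUNT_POINT ..."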
[ "xfs_freeze mount-point", "xfs_freeze -f /mount/point", "xfs_freeze -u /mount/point" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/xfsfreeze
1.3. Pacemaker Overview
1.3. Pacemaker Overview The High Availability Add-On cluster infrastructure provides the basic functions for a group of computers (called nodes or members ) to work together as a cluster. Once a cluster is formed using the cluster infrastructure, you can use other components to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS2 file system or setting up service failover). The cluster infrastructure performs the following functions: Cluster management Lock management Fencing Cluster configuration management
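As a rough illustration only (not part of the original overview), these functions are driven from the pcs command line on Red Hat Enterprise Linux 7; the node names, fence agent, and credentials below are placeholders:

# Cluster management: authenticate the nodes, form a cluster, and start it.
pcs cluster auth node1.example.com node2.example.com
pcs cluster setup --name my_cluster node1.example.com node2.example.com
pcs cluster start --all

# Fencing: register a fence device for the cluster nodes.
pcs stonith create my_fence fence_ipmilan ipaddr=10.0.0.10 login=admin passwd=secret pcmk_host_list="node1.example.com node2.example.com"

# Cluster configuration management: review the resulting configuration.
pcs status
pcs config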
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/s1-Pacemakeroverview-HAAO
Chapter 4. Deprovisioning individual nodes or groups
Chapter 4. Deprovisioning individual nodes or groups You can deprovision automation mesh nodes and instance groups using the Ansible Automation Platform installer. The procedures in this section describe how to deprovision specific nodes or entire groups, with example inventory files for each procedure. 4.1. Deprovisioning individual nodes using the installer You can deprovision nodes from your automation mesh using the Ansible Automation Platform installer. Edit the inventory file to mark the nodes to deprovision, then run the installer. Running the installer also removes all configuration files and logs attached to the node. Note You can deprovision any of your inventory's hosts except for the first host specified in the [automationcontroller] group. Procedure Append node_state=deprovision to nodes in the installer file you want to deprovision. Example This example inventory file deprovisions two nodes from an automation mesh configuration. [automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control 121-addr.tatu.home ansible_host=192.168.111.121 node_type=hybrid routable_hostname=121-addr.tatu.home 115-addr.tatu.home ansible_host=192.168.111.115 node_type=hybrid node_state=deprovision [automationcontroller:vars] peers=connected_nodes [execution_nodes] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928 108-addr.tatu.home ansible_host=192.168.111.108 receptor_listener_port=29182 node_state=deprovision 100-addr.tatu.home ansible_host=192.168.111.100 peers=110-addr.tatu.home node_type=hop 4.1.1. Deprovisioning isolated nodes You have the option to manually remove any isolated nodes using the awx-manage deprovisioning utility. Warning Use the deprovisioning command to remove only isolated nodes that have not migrated to execution nodes. To deprovision execution nodes from your automation mesh architecture, use the installer method instead. Procedure Shut down the instance: USD automation-controller-service stop Run the deprovision command from another instance, replacing host_name with the name of the node as listed in the inventory file: USD awx-manage deprovision_instance --hostname= <host_name> 4.2. Deprovisioning groups using the installer You can deprovision entire groups from your automation mesh using the Ansible Automation Platform installer. Running the installer will remove all configuration files and logs attached to the nodes in the group. Note You can deprovision any hosts in your inventory except for the first host specified in the [automationcontroller] group. Procedure Add node_state=deprovision to the [group:vars] associated with the group you want to deprovision. Example 4.2.1. Deprovisioning isolated instance groups You have the option to manually remove any isolated instance groups using the awx-manage deprovisioning utility. Warning Use the deprovisioning command to only remove isolated instance groups. To deprovision instance groups from your automation mesh architecture, use the installer method instead. Procedure Run the following command, replacing <name> with the name of the instance group: USD awx-manage unregister_queue --queuename= <name>
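After the inventory file is edited, the deprovisioning itself is just another installer run. The following is a sketch only, assuming the bundled installer's setup.sh script; the bundle path is a placeholder, and awx-manage must be run on a controller node:

# Run the installer from the directory where the setup bundle was extracted,
# using the inventory file that marks nodes with node_state=deprovision.
cd /path/to/ansible-automation-platform-setup-bundle
./setup.sh -i inventory

# Afterwards, confirm that the deprovisioned nodes no longer appear.
awx-manage list_instances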
[ "[automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control 121-addr.tatu.home ansible_host=192.168.111.121 node_type=hybrid routable_hostname=121-addr.tatu.home 115-addr.tatu.home ansible_host=192.168.111.115 node_type=hybrid node_state=deprovision [automationcontroller:vars] peers=connected_nodes [execution_nodes] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928 108-addr.tatu.home ansible_host=192.168.111.108 receptor_listener_port=29182 node_state=deprovision 100-addr.tatu.home ansible_host=192.168.111.100 peers=110-addr.tatu.home node_type=hop", "automation-controller-service stop", "awx-manage deprovision_instance --hostname= <host_name>", "[execution_nodes] execution-node-1.example.com peers=execution-node-2.example.com execution-node-2.example.com peers=execution-node-3.example.com execution-node-3.example.com peers=execution-node-4.example.com execution-node-4.example.com peers=execution-node-5.example.com execution-node-5.example.com peers=execution-node-6.example.com execution-node-6.example.com peers=execution-node-7.example.com execution-node-7.example.com [execution_nodes:vars] node_state=deprovision", "awx-manage unregister_queue --queuename= <name>" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_vm_environments/assembly-deprovisioning-mesh
Chapter 4. Recovering from data loss with VM snapshots
Chapter 4. Recovering from data loss with VM snapshots If a data loss event occurs, you can restore a Virtual Machine (VM) snapshot of a Certificate Authority (CA) replica to repair the lost data, or deploy a new environment from it. 4.1. Recovering from only a VM snapshot If a disaster affects all IdM servers, and only a snapshot of an IdM CA replica virtual machine (VM) is left, you can recreate your deployment by removing all references to the lost servers and installing new replicas. Prerequisites You have prepared a VM snapshot of a CA replica VM. See Preparing for data loss with VM snapshots . Procedure Boot the desired snapshot of the CA replica VM. Remove replication agreements to any lost replicas. Install a second CA replica. See Installing an IdM replica . The VM CA replica is now the CA renewal server. Red Hat recommends promoting another CA replica in the environment to act as the CA renewal server. See Changing and resetting IdM CA renewal server . Recreate the desired replica topology by deploying additional replicas with the desired services (CA, DNS). See Installing an IdM replica Update DNS to reflect the new replica topology. If IdM DNS is used, DNS service records are updated automatically. Verify that IdM clients can reach the IdM servers. See Adjusting IdM Clients during recovery . Verification Test the Kerberos server on every replica by successfully retrieving a Kerberos ticket-granting ticket as an IdM user. Test the Directory Server and SSSD configuration on every replica by retrieving user information. Test the CA server on every CA replica with the ipa cert-show command. Additional resources Planning the replica topology 4.2. Recovering from a VM snapshot among a partially-working environment If a disaster affects some IdM servers while others are still operating properly, you may want to restore the deployment to the state captured in a Virtual Machine (VM) snapshot. For example, if all Certificate Authority (CA) Replicas are lost while other replicas are still in production, you will need to bring a CA Replica back into the environment. In this scenario, remove references to the lost replicas, restore the CA replica from the snapshot, verify replication, and deploy new replicas. Prerequisites You have prepared a VM snapshot of a CA replica VM. See Preparing for data loss with VM snapshots . Procedure Remove all replication agreements to the lost servers. See Uninstalling an IdM server . Boot the desired snapshot of the CA replica VM. Remove any replication agreements between the restored server and any lost servers. If the restored server does not have replication agreements to any of the servers still in production, connect the restored server with one of the other servers to update the restored server. Review Directory Server error logs at /var/log/dirsrv/slapd-YOUR-INSTANCE/errors to see if the CA replica from the snapshot correctly synchronizes with the remaining IdM servers. If replication on the restored server fails because its database is too outdated, reinitialize the restored server. If the database on the restored server is correctly synchronized, continue by deploying additional replicas with the desired services (CA, DNS) according to Installing an IdM replica . Verification Test the Kerberos server on every replica by successfully retrieving a Kerberos ticket-granting ticket as an IdM user. Test the Directory Server and SSSD configuration on every replica by retrieving user information. 
Test the CA server on every CA replica with the ipa cert-show command. Additional resources Recovering from a VM snapshot to establish a new IdM environment 4.3. Recovering from a VM snapshot to establish a new IdM environment If the Certificate Authority (CA) replica from a restored Virtual Machine (VM) snapshot is unable to replicate with other servers, create a new IdM environment from the VM snapshot. To establish a new IdM environment, isolate the VM server, create additional replicas from it, and switch IdM clients to the new environment. Prerequisites You have prepared a VM snapshot of a CA replica VM. See Preparing for data loss with VM snapshots . Procedure Boot the desired snapshot of the CA replica VM. Isolate the restored server from the rest of the current deployment by removing all of its replication topology segments. First, display all domain replication topology segments. Next, delete every domain topology segment involving the restored server. Finally, perform the same actions with any ca topology segments. Install a sufficient number of IdM replicas from the restored server to handle the deployment load. There are now two disconnected IdM deployments running in parallel. Switch the IdM clients to use the new deployment by hard-coding references to the new IdM replicas. See Adjusting IdM clients during recovery . Stop and uninstall IdM servers from the previous deployment. See Uninstalling an IdM server . Verification Test the Kerberos server on every new replica by successfully retrieving a Kerberos ticket-granting ticket as an IdM user. Test the Directory Server and SSSD configuration on every new replica by retrieving user information. Test the CA server on every new CA replica with the ipa cert-show command.
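The verification steps above can be collected into one short script that is run on each new replica; this is a sketch only, built from the commands shown in this chapter, and kinit prompts for the admin password:

#!/bin/bash
set -euo pipefail

kinit admin          # obtain a ticket-granting ticket as the admin user
klist                # confirm the ticket was issued
ipa user-show admin  # exercise the Directory Server and SSSD lookups
ipa cert-show 1      # confirm the CA responds (run on CA replicas)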
[ "ipa server-del lost-server1.example.com ipa server-del lost-server2.example.com", "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 15:51:08 11/01/2019 15:51:02 krbtgt/[email protected]", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: [email protected] UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True", "ipa cert-show 1 Issuing CA: ipa Certificate: MIIEgjCCAuqgAwIBAgIjoSIP Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Thu Oct 31 19:43:29 2019 UTC Not After: Mon Oct 31 19:43:29 2039 UTC Serial number: 1 Serial number (hex): 0x1 Revoked: False", "ipa server-del lost-server1.example.com ipa server-del lost-server2.example.com", "ipa topologysegment-add Suffix name: domain Left node: restored-CA-replica.example.com Right node: server3.example.com Segment name [restored-CA-replica.com-to-server3.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: restored-CA-replica.example.com Right node: server3.example.com Connectivity: both", "ipa-replica-manage re-initialize --from server2.example.com", "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 15:51:08 11/01/2019 15:51:02 krbtgt/[email protected]", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: [email protected] UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True", "ipa cert-show 1 Issuing CA: ipa Certificate: MIIEgjCCAuqgAwIBAgIjoSIP Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Thu Oct 31 19:43:29 2019 UTC Not After: Mon Oct 31 19:43:29 2039 UTC Serial number: 1 Serial number (hex): 0x1 Revoked: False", "ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: restored-CA-replica.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------", "ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------", "ipa topologysegment-find Suffix name: ca ------------------ 1 segments matched ------------------ Segment name: ca_segment Left node: restored-CA-replica.example.com Right node: server4.example.com Connectivity: both ---------------------------- Number of entries returned 1 ---------------------------- ipa topologysegment-del Suffix name: ca Segment name: ca_segment ----------------------------- Deleted segment \"ca_segment\" -----------------------------", "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 
15:51:08 11/01/2019 15:51:02 krbtgt/[email protected]", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: [email protected] UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True", "ipa cert-show 1 Issuing CA: ipa Certificate: MIIEgjCCAuqgAwIBAgIjoSIP Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Thu Oct 31 19:43:29 2019 UTC Not After: Mon Oct 31 19:43:29 2039 UTC Serial number: 1 Serial number (hex): 0x1 Revoked: False" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/performing_disaster_recovery_with_identity_management/recovering-from-data-loss-with-snapshots_performing-disaster-recovery
Chapter 3. Listing of kernel parameters and values
Chapter 3. Listing of kernel parameters and values 3.1. Kernel command-line parameters Kernel command-line parameters, also known as kernel arguments, are used to customize the behavior of Red Hat Enterprise Linux at boot time only. 3.1.1. Setting kernel command-line parameters This section explains how to change a kernel command-line parameter on AMD64 and Intel 64 systems and IBM Power Systems servers using the GRUB2 boot loader, and on IBM Z using zipl . Kernel command-line parameters are saved in the /boot/grub2/grub.cfg configuration file, which is generated by the GRUB2 boot loader. Do not edit this configuration file. Changes to this file are only made by configuration scripts. Changing kernel command-line parameters in GRUB2 for AMD64 and Intel 64 systems and IBM Power Systems Hardware. Open the /etc/default/grub configuration file as root using a plain text editor such as vim or Gedit . In this file, locate the line beginning with GRUB_CMDLINE_LINUX similar to the following: Change the value of the required kernel command-line parameter. Then, save the file and exit the editor. Regenerate the GRUB2 configuration using the edited default file. If your system uses BIOS firmware, execute the following command: On a system with UEFI firmware, execute the following instead: After finishing the procedure above, the boot loader is reconfigured, and the kernel command-line parameter that you have specified in its configuration file is applied after the reboot. Changing kernel command-line parameters in zipl for IBM Z Hardware Open the /etc/zipl.conf configuration file as root using a plain text editor such as vim or Gedit . In this file, locate the parameters= section, and edit the required parameter, or add it if not present. Then, save the file and exit the editor. Regenerate the zipl configuration: Note Executing only the zipl command with no additional options uses default values. See the zipl(8) man page for information about available options. After finishing the procedure above, the boot loader is reconfigured, and the kernel command-line parameter that you have specified in its configuration file is applied after the reboot. 3.1.2. What kernel command-line parameters can be controlled For a complete list of kernel command-line parameters, see https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt . 3.1.2.1. Hardware specific kernel command-line parameters pci=option[,option... ] Specify behavior of the PCI hardware subsystem Setting Effect earlydump [X86] Dump the PCI configuration space before the kernel changes anything off [X86] Do not probe for the PCI bus noaer [PCIE] If the PCIEAER kernel parameter is enabled, this kernel boot option can be used to disable the use of PCIE advanced error reporting. noacpi [X86] Do not use the Advanced Configuration and Power Interface (ACPI) for Interrupt Request (IRQ) routing or for PCI scanning. bfsort Sort PCI devices into breadth-first order. This sorting is done to get a device order compatible with older (<= 2.4) kernels. nobfsort Do not sort PCI devices into breadth-first order. Additional PCI options are documented in the on-disk documentation found in the kernel-doc-<version>.noarch package, where '<version>' needs to be replaced with the corresponding kernel version.
acpi=option Specify behavior of the Advanced Configuration and Power Interface Setting Effect acpi=off Disable ACPI acpi=ht Use ACPI boot table parsing, but do not enable ACPI interpreter This disables any ACPI functionality that is not required for Hyper Threading. acpi=force Require the ACPI subsystem to be enabled acpi=strict Make the ACPI layer be less tolerant of platforms that are not fully compliant with the ACPI specification. acpi_sci=<value> Set up ACPI SCI interrupt, where <value> is one of edge,level,high,low. acpi=noirq Do not use ACPI for IRQ routing acpi=nocmcff Disable firmware first (FF) mode for corrected errors. This disables parsing the HEST CMC error source to check if firmware has set the FF flag. This can result in duplicate corrected error reports.
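To illustrate the zipl procedure described above, the following sketch shows roughly what the [linux] section of /etc/zipl.conf might look like after the parameters= line has been edited; the kernel image, initramfs, and parameter values are placeholders only and will differ on your system:

[defaultboot]
default=linux
target=/boot
[linux]
    image=/boot/vmlinuz-<kernel_version>
    ramdisk=/boot/initramfs-<kernel_version>.img
    # Edited kernel command line; the values shown here are examples only
    parameters="root=/dev/mapper/rhel-root crashkernel=auto rd.dasd=0.0.0200 quiet"

After saving the file, run the zipl command as root to regenerate the boot loader configuration, as shown in the command listing that follows.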
[ "GRUB_CMDLINE_LINUX=\"rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "zipl" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/kernel_administration_guide/listing_of_kernel_parameters_and_values
Chapter 9. Red Hat Directory Server 11.2
Chapter 9. Red Hat Directory Server 11.2 9.1. Highlighted updates and new features This section documents new features and important updates in Directory Server 11.2. Directory Server rebased to version 1.4.3.8 The 389-ds-base packages have been upgraded to upstream version 1.4.3.8, which provides a number of bug fixes and enhancements over the previous version. For a complete list of notable changes, read the upstream release notes before updating: https://www.port389.org/docs/389ds/releases/release-1-4-3-8.html https://www.port389.org/docs/389ds/releases/release-1-4-3-7.html https://www.port389.org/docs/389ds/releases/release-1-4-3-6.html https://www.port389.org/docs/389ds/releases/release-1-4-3-5.html https://www.port389.org/docs/389ds/releases/release-1-4-3-4.html https://www.port389.org/docs/389ds/releases/release-1-4-3-3.html https://www.port389.org/docs/389ds/releases/release-1-4-3-2.html https://www.port389.org/docs/389ds/releases/release-1-4-3-1.html Highlighted updates and new features in the 389-ds-base packages Features in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.3 Release Notes: Directory Server exports the private key and certificate to a private name space when the service starts Directory Server now supports the pwdReset operation attribute Directory Server can now turn an instance to read-only mode if the disk monitoring threshold is reached Directory Server now logs the work and operation time in RESULT entries 9.2. Bug fixes This section describes bugs fixed in Directory Server 11.2 that have a significant impact on users. Bug fixes in the 389-ds-base packages Bug fixes in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.3 Release Notes: Directory Server no longer leaks memory when using indirect COS definitions 9.3. Known issues This section documents known problems and, if applicable, workarounds in Directory Server 11.2. Directory Server settings that are changed outside the web console's window are not automatically visible Because of the design of the Directory Server module in the Red Hat Enterprise Linux 8 web console, the web console does not automatically display the latest settings if a user changes the configuration outside of the console's window. For example, if you change the configuration using the command line while the web console is open, the new settings are not automatically updated in the web console. This also applies if you change the configuration using the web console on a different computer. To work around the problem, manually refresh the web console in the browser if the configuration has been changed outside the console's window. The Directory Server Web Console does not provide an LDAP browser The web console enables administrators to manage and configure Directory Server 11 instances. However, it does not provide an integrated LDAP browser. To manage users and groups in Directory Server, use the dsidm utility. To display and modify directory entries, use a third-party LDAP browser or the OpenLDAP client utilities provided by the openldap-clients package.
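Because the web console does not include an LDAP browser, directory entries are typically inspected from the command line with the OpenLDAP client utilities mentioned above. The following is a minimal sketch only; the host name, bind DN, suffix, and uid value are illustrative and must be replaced with the values used by your instance:

# Search for a single user entry with a simple bind; ldapsearch is provided by openldap-clients
ldapsearch -x -H ldap://server.example.com:389 \
    -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" \
    "(uid=jdoe)" cn mail memberOf

The -W option prompts for the bind password, and the trailing attribute list limits the output to the cn, mail, and memberOf attributes of the matching entry.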
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/release_notes/directory-server-11.2
Chapter 3. Distributed tracing installation
Chapter 3. Distributed tracing installation 3.1. Installing distributed tracing You can install Red Hat OpenShift distributed tracing on OpenShift Container Platform in either of two ways: You can install Red Hat OpenShift distributed tracing as part of Red Hat OpenShift Service Mesh. Distributed tracing is included by default in the Service Mesh installation. To install Red Hat OpenShift distributed tracing as part of a service mesh, follow the Red Hat Service Mesh Installation instructions. You must install Red Hat OpenShift distributed tracing in the same namespace as your service mesh, that is, the ServiceMeshControlPlane and the Red Hat OpenShift distributed tracing resources must be in the same namespace. If you do not want to install a service mesh, you can use the Red Hat OpenShift distributed tracing Operators to install distributed tracing by itself. To install Red Hat OpenShift distributed tracing without a service mesh, use the following instructions. 3.1.1. Prerequisites Before you can install Red Hat OpenShift distributed tracing, review the installation activities, and ensure that you meet the prerequisites: Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.9 overview . Install OpenShift Container Platform 4.9. Install OpenShift Container Platform 4.9 on AWS Install OpenShift Container Platform 4.9 on user-provisioned AWS Install OpenShift Container Platform 4.9 on bare metal Install OpenShift Container Platform 4.9 on vSphere Install the version of the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version and add it to your path. An account with the cluster-admin role. 3.1.2. Red Hat OpenShift distributed tracing installation overview The steps for installing Red Hat OpenShift distributed tracing are as follows: Review the documentation and determine your deployment strategy. If your deployment strategy requires persistent storage, install the OpenShift Elasticsearch Operator via the OperatorHub. Install the Red Hat OpenShift distributed tracing platform Operator via the OperatorHub. Modify the custom resource YAML file to support your deployment strategy. Deploy one or more instances of Red Hat OpenShift distributed tracing platform to your OpenShift Container Platform environment. 3.1.3. Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing, giving demonstrations, or using Red Hat OpenShift distributed tracing platform in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. 
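If you prefer the CLI to the web console procedure that follows, the OpenShift Elasticsearch Operator can also be installed by creating OLM resources directly. The manifest below is a minimal sketch only: the package name (elasticsearch-operator), channel (stable), and catalog source (redhat-operators) are the commonly documented values and should be confirmed with oc get packagemanifests -n openshift-marketplace before use.

# Sketch: Namespace, OperatorGroup, and Subscription for the OpenShift Elasticsearch Operator
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat          # required namespace for this Operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}                                    # empty spec selects all namespaces
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable                           # confirm the available channel in your catalog
  installPlanApproval: Automatic
  name: elasticsearch-operator              # package name from 'oc get packagemanifests'
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Apply the manifest with oc apply -f <file> and verify the installation with oc get csv -n openshift-operators-redhat.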
The Red Hat OpenShift distributed tracing platform Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait until you see that the OpenShift Elasticsearch Operator shows a status of "InstallSucceeded" before continuing. 3.1.4. Installing the Red Hat OpenShift distributed tracing platform Operator To install Red Hat OpenShift distributed tracing platform, you use the OperatorHub to install the Red Hat OpenShift distributed tracing platform Operator. By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Warning Do not install Community versions of the Operators. Community Operators are not supported. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type distributed tracing platform into the filter to locate the Red Hat OpenShift distributed tracing platform Operator. Click the Red Hat OpenShift distributed tracing platform Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . 
This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing platform Operator shows a status of "Succeeded" before continuing. 3.1.5. Installing the Red Hat OpenShift distributed tracing data collection Operator Important The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To install Red Hat OpenShift distributed tracing data collection, you use the OperatorHub to install the Red Hat OpenShift distributed tracing data collection Operator. By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type distributed tracing data collection into the filter to locate the Red Hat OpenShift distributed tracing data collection Operator. Click the Red Hat OpenShift distributed tracing data collection Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, accept the default stable Update channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. 
As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing data collection Operator shows a status of "Succeeded" before continuing. 3.2. Configuring and deploying distributed tracing The Red Hat OpenShift distributed tracing platform Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the distributed tracing platform resources. You can either install the default configuration or modify the file to better suit your business requirements. Red Hat OpenShift distributed tracing platform has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a distributed tracing platform instance the Operator uses this configuration file to create the objects necessary for the deployment. Jaeger custom resource file showing deployment strategy apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1 1 The Red Hat OpenShift distributed tracing platform Operator currently supports the following deployment strategies: allInOne (Default) - This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable which is configured, by default. to use in-memory storage. Note In-memory storage is not persistent, which means that if the distributed tracing platform instance shuts down, restarts, or is replaced, that your trace data will be lost. And in-memory storage cannot be scaled, since each pod has its own memory. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. production - The production strategy is intended for production environments, where long term storage of trace data is important, as well as a more scalable and highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type - currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. streaming - The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the Collector and the Elasticsearch backend storage. This provides the benefit of reducing the pressure on the backend storage, under high load situations, and enables other trace post-processing capabilities to tap into the real time span data directly from the streaming platform ( AMQ Streams / Kafka ). Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. Note The streaming deployment strategy is currently unsupported on IBM Z. Note There are two ways to install and use Red Hat OpenShift distributed tracing, as part of a service mesh or as a stand alone component. 
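When distributed tracing is installed as part of a service mesh, the ServiceMeshControlPlane can reference a separately configured Jaeger custom resource, as the next paragraph describes. The following sketch shows that reference using the Service Mesh 2.x field names; the resource names are illustrative, and the exact fields should be verified against the Service Mesh version you have installed:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  tracing:
    type: Jaeger                 # use the distributed tracing platform (Jaeger) for tracing
  addons:
    jaeger:
      name: jaeger-production    # name of an existing Jaeger custom resource in the same namespace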
If you have installed distributed tracing as part of Red Hat OpenShift Service Mesh, you can perform basic configuration as part of the ServiceMeshControlPlane but for completely control you should configure a Jaeger CR and then reference your distributed tracing configuration file in the ServiceMeshControlPlane . 3.2.1. Deploying the distributed tracing default strategy from the web console The custom resource definition (CRD) defines the configuration used when you deploy an instance of Red Hat OpenShift distributed tracing. The default CR is named jaeger-all-in-one-inmemory and it is configured with minimal resources to ensure that you can successfully install it on a default OpenShift Container Platform installation. You can use this default configuration to create a Red Hat OpenShift distributed tracing platform instance that uses the AllInOne deployment strategy, or you can define your own custom resource file. Note In-memory storage is not persistent. If the Jaeger pod shuts down, restarts, or is replaced, your trace data will be lost. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. Prerequisites The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Details tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, to install using the defaults, click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-all-in-one-inmemory . On the Jaeger Details page, click the Resources tab. Wait until the pod has a status of "Running" before continuing. 3.2.1.1. Deploying the distributed tracing default strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The Red Hat OpenShift distributed tracing platform Operator has been installed and verified. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system . 
USD oc new-project tracing-system Create a custom resource file named jaeger.yaml that contains the following text: Example jaeger-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory Run the following command to deploy distributed tracing platform: USD oc create -n tracing-system -f jaeger.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s 3.2.2. Deploying the distributed tracing production strategy from the web console The production deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. Prerequisites The OpenShift Elasticsearch Operator has been installed. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, replace the default all-in-one YAML text with your production YAML configuration, for example: Example jaeger-production.yaml file with Elasticsearch apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *' Click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-prod-elasticsearch . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 3.2.2.1. Deploying the distributed tracing production strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The OpenShift Elasticsearch Operator has been installed. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. 
Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system . USD oc new-project tracing-system Create a custom resource file named jaeger-production.yaml that contains the text of the example file in the procedure. Run the following command to deploy distributed tracing platform: USD oc create -n tracing-system -f jaeger-production.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s 3.2.3. Deploying the distributed tracing streaming strategy from the web console The streaming deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. The streaming strategy provides a streaming capability that sits between the Collector and the Elasticsearch storage. This reduces the pressure on the storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the Kafka streaming platform. Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. If you do not have an AMQ Streams subscription, contact your sales representative for more information. Note The streaming deployment strategy is currently unsupported on IBM Z. Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . 
On the Create Jaeger page, replace the default all-in-one YAML text with your streaming YAML configuration, for example: Example jaeger-streaming.yaml file apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans #Note: If brokers are not defined,AMQStreams 1.4.0+ will self-provision Kafka. brokers: my-cluster-kafka-brokers.kafka:9092 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 Click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-streaming . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 3.2.3.1. Deploying the distributed tracing streaming strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system . USD oc new-project tracing-system Create a custom resource file named jaeger-streaming.yaml that contains the text of the example file in the procedure. Run the following command to deploy Jaeger: USD oc create -n tracing-system -f jaeger-streaming.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s 3.2.4. Validating your deployment 3.2.4.1. Accessing the Jaeger console To access the Jaeger console you must have either Red Hat OpenShift Service Mesh or Red Hat OpenShift distributed tracing installed, and Red Hat OpenShift distributed tracing platform installed, configured, and deployed. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. 
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the control plane project, for example tracing-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, tracing-system is the control plane namespace. USD export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 3.2.5. Customizing your deployment 3.2.5.1. Deployment best practices Red Hat OpenShift distributed tracing instance names must be unique. If you want to have multiple Red Hat OpenShift distributed tracing platform instances and are using sidecar injected agents, then the Red Hat OpenShift distributed tracing platform instances should have unique names, and the injection annotation should explicitly specify the Red Hat OpenShift distributed tracing platform instance name the tracing data should be reported to. If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform instance to each tenant namespace. Agent as a daemonset is not supported for multitenant installations or Red Hat OpenShift Dedicated. Agent as a sidecar is the only supported configuration for these use cases. If you are installing distributed tracing as part of Red Hat OpenShift Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource. For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option. 3.2.5.2. Distributed tracing default configuration options The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform resources. You can modify these parameters to customize your distributed tracing platform implementation to your business needs. Jaeger generic YAML example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {} Table 3.1. Jaeger parameters Parameter Description Values Default value apiVersion: API version to use when creating the object. jaegertracing.io/v1 jaegertracing.io/v1 kind: Defines the kind of Kubernetes object to create. 
jaeger metadata: Data that helps uniquely identify the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. name: Name for the object. The name of your distributed tracing platform instance. jaeger-all-in-one-inmemory spec: Specification for the object to be created. Contains all of the configuration parameters for your distributed tracing platform instance. When a common definition for all Jaeger components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node. N/A strategy: Jaeger deployment strategy allInOne , production , or streaming allInOne allInOne: Because the allInOne image deploys the Agent, Collector, Query, Ingester, and Jaeger UI in a single pod, configuration for this deployment must nest component configuration under the allInOne parameter. agent: Configuration options that define the Agent. collector: Configuration options that define the Jaeger Collector. sampling: Configuration options that define the sampling strategies for tracing. storage: Configuration options that define the storage. All storage-related options must be placed under storage , rather than under the allInOne or other component options. query: Configuration options that define the Query service. ingester: Configuration options that define the Ingester service. The following example YAML is the minimum required to create a Red Hat OpenShift distributed tracing platform deployment using the default settings. Example minimum required dist-tracing-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory 3.2.5.3. Jaeger Collector configuration options The Jaeger Collector is the component responsible for receiving the spans that were captured by the tracer and writing them to persistent Elasticsearch storage when using the production strategy, or to AMQ Streams when using the streaming strategy. The Collectors are stateless and thus many instances of Jaeger Collector can be run in parallel. Collectors require almost no configuration, except for the location of the Elasticsearch cluster. Table 3.2. Parameters used by the Operator to define the Jaeger Collector Parameter Description Values Specifies the number of Collector replicas to create. Integer, for example, 5 Table 3.3. Configuration parameters passed to the Collector Parameter Description Values Configuration options that define the Jaeger Collector. The number of workers pulling from the queue. Integer, for example, 50 The size of the Collector queue. Integer, for example, 2000 The topic parameter identifies the Kafka configuration used by the Collector to produce the messages, and the Ingester to consume the messages. Label for the producer. Identifies the Kafka configuration used by the Collector to produce the messages. If brokers are not specified, and you have AMQ Streams 1.4.0+ installed, the Red Hat OpenShift distributed tracing platform Operator will self-provision Kafka. Logging level for the Collector. Possible values: debug , info , warn , error , fatal , panic . 3.2.5.4. Distributed tracing sampling configuration options The Red Hat OpenShift distributed tracing platform Operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler. 
While all traces are generated, only a few are sampled. Sampling a trace marks the trace for further processing and storage. Note This is not relevant if a trace was started by the Envoy proxy, as the sampling decision is made there. The Jaeger sampling decision is only relevant when the trace is started by an application using the client. When a service receives a request that contains no trace context, the client starts a new trace, assigns it a random trace ID, and makes a sampling decision based on the currently installed sampling strategy. The sampling decision propagates to all subsequent requests in the trace so that other services are not making the sampling decision again. distributed tracing platform libraries support the following samplers: Probabilistic - The sampler makes a random sampling decision with the probability of sampling equal to the value of the sampling.param property. For example, using sampling.param=0.1 samples approximately 1 in 10 traces. Rate Limiting - The sampler uses a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. For example, using sampling.param=2.0 samples requests with the rate of 2 traces per second. Table 3.4. Jaeger sampling options Parameter Description Values Default value Configuration options that define the sampling strategies for tracing. If you do not provide configuration, the Collectors will return the default probabilistic sampling policy with 0.001 (0.1%) probability for all services. Sampling strategy to use. See descriptions above. Valid values are probabilistic , and ratelimiting . probabilistic Parameters for the selected sampling strategy. Decimal and integer values (0, .1, 1, 10) 1 This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled. Probabilistic sampling example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5 If there are no user-supplied configurations, the distributed tracing platform uses the following settings: Default sampling spec: sampling: options: default_strategy: type: probabilistic param: 1 3.2.5.5. Distributed tracing storage configuration options You configure storage for the Collector, Ingester, and Query services under spec.storage . Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. Table 3.5. General storage parameters used by the Red Hat OpenShift distributed tracing platform Operator to define distributed tracing storage Parameter Description Values Default value Type of storage to use for the deployment. memory or elasticsearch . Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments as the data does not persist if the pod is shut down. For production environments distributed tracing platform supports Elasticsearch for persistent storage. memory Name of the secret, for example tracing-secret . N/A Configuration options that define the storage. Table 3.6. Elasticsearch index cleaner parameters Parameter Description Values Default value When using Elasticsearch storage, by default a job is created to clean old traces from the index. 
This parameter enables or disables the index cleaner job. true / false true Number of days to wait before deleting an index. Integer value 7 Defines the schedule for how often to clean the Elasticsearch index. Cron expression "55 23 * * *" 3.2.5.5.1. Auto-provisioning an Elasticsearch instance When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the storage section of the custom resource file. The Red Hat OpenShift distributed tracing platform Operator will provision Elasticsearch if the following configurations are set: spec.storage:type is set to elasticsearch spec.storage.elasticsearch.doNotProvision set to false spec.storage.options.es.server-urls is not defined, that is, there is no connection to an Elasticsearch instance that was not provisioned by the Red Hat Elasticsearch Operator. When provisioning Elasticsearch, the Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource. If you do not specify a value for spec.storage.elasticsearch.name , the Operator uses elasticsearch . Restrictions You can have only one distributed tracing platform with self-provisioned Elasticsearch instance per namespace. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance. There can be only one Elasticsearch per namespace. Note If you already have installed Elasticsearch as part of OpenShift Logging, the Red Hat OpenShift distributed tracing platform Operator can use the installed OpenShift Elasticsearch Operator to provision storage. The following configuration parameters are for a self-provisioned Elasticsearch instance, that is an instance created by the Red Hat OpenShift distributed tracing platform Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under spec:storage:elasticsearch in your configuration file. Table 3.7. Elasticsearch resource configuration parameters Parameter Description Values Default value Use to specify whether or not an Elasticsearch instance should be provisioned by the Red Hat OpenShift distributed tracing platform Operator. true / false true Name of the Elasticsearch instance. The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in this parameter to connect to Elasticsearch. string elasticsearch Number of Elasticsearch nodes. For high availability use at least 3 nodes. Do not use 2 nodes as "split brain" problem can happen. Integer value. For example, Proof of concept = 1, Minimum deployment =3 3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 1 Available memory for requests, based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* 16Gi Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. 
For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* Data replication policy defines how Elasticsearch shards are replicated across data nodes in the cluster. If not specified, the Red Hat OpenShift distributed tracing platform Operator automatically determines the most appropriate replication based on number of nodes. ZeroRedundancy (no replica shards), SingleRedundancy (one replica shard), MultipleRedundancy (each index is spread over half of the Data nodes), FullRedundancy (each index is fully replicated on every Data node in the cluster). Use to specify whether or not distributed tracing platform should use the certificate management feature of the Red Hat Elasticsearch Operator. This feature was added to logging subsystem for Red Hat OpenShift 5.2 in OpenShift Container Platform 4.7 and is the preferred setting for new Jaeger deployments. true / false true *Each Elasticsearch node can operate with a lower memory setting though this is NOT recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Production storage example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi Storage example with persistent storage: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy 1 Persistent storage configuration. In this case AWS gp2 with 5Gi size. When no value is specified, distributed tracing platform uses emptyDir . The OpenShift Elasticsearch Operator provisions PersistentVolumeClaim and PersistentVolume which are not removed with distributed tracing platform instance. You can mount the same volumes if you create a distributed tracing platform instance with the same name and namespace. 3.2.5.5.2. Connecting to an existing Elasticsearch instance You can use an existing Elasticsearch cluster for storage with distributed tracing. An existing Elasticsearch cluster, also known as an external Elasticsearch instance, is an instance that was not installed by the Red Hat OpenShift distributed tracing platform Operator or by the Red Hat Elasticsearch Operator. When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator will not provision Elasticsearch if the following configurations are set: spec.storage.elasticsearch.doNotProvision set to true spec.storage.options.es.server-urls has a value spec.storage.elasticsearch.name has a value, or if the Elasticsearch instance name is elasticsearch . The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in spec.storage.elasticsearch.name to connect to Elasticsearch. Restrictions You cannot share or reuse a OpenShift Container Platform logging Elasticsearch instance with distributed tracing platform. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance. Note Red Hat does not provide support for your external Elasticsearch instance. You can review the tested integrations matrix on the Customer Portal . 
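To make these conditions concrete, the following minimal sketch shows a Jaeger custom resource that points the Operator at an existing cluster rather than provisioning one; the server URL and secret name are illustrative, and the secret is assumed to contain the ES_USERNAME and ES_PASSWORD values described later in this section:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-external-es              # illustrative name
spec:
  strategy: production
  storage:
    type: elasticsearch
    secretName: tracing-secret          # assumed secret holding ES_USERNAME and ES_PASSWORD
    elasticsearch:
      doNotProvision: true              # do not let the Operator create its own Elasticsearch
    options:
      es:
        # Existing (external) cluster; setting server-urls also prevents self-provisioning
        server-urls: https://elasticsearch.example.com:9200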
The following configuration parameters are for an already existing Elasticsearch instance, also known as an external Elasticsearch instance. In this case, you specify configuration options for Elasticsearch under spec:storage:options:es in your custom resource file. Table 3.8. General ES configuration parameters Parameter Description Values Default value URL of the Elasticsearch instance. The fully-qualified domain name of the Elasticsearch server. http://elasticsearch.<namespace>.svc:9200 The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. If you set both es.max-doc-count and es.max-num-spans , Elasticsearch will use the smaller value of the two. 10000 [ Deprecated - Will be removed in a future release, use es.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. If you set both es.max-num-spans and es.max-doc-count , Elasticsearch will use the smaller value of the two. 10000 The maximum lookback for spans in Elasticsearch. 72h0m0s The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default true / false false Timeout used for queries. When set to zero there is no timeout. 0s The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es.password . The password required by Elasticsearch. See also, es.username . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Table 3.9. ES data replication parameters Parameter Description Values Default value The number of replicas per index in Elasticsearch. 1 The number of shards per index in Elasticsearch. 5 Table 3.10. ES index configuration parameters Parameter Description Values Default value Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false true Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". Table 3.11. ES bulk processor configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 1000 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 200ms The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 5000000 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 1 Table 3.12. ES TLS configuration parameters Parameter Description Values Default value Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. 
This flag also loads the Certification Authority (CA) file if it is specified. Table 3.13. ES archive configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 0 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 0s The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 0 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 0 Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false false Enable extra storage. true / false false Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. 0 [ Deprecated - Will be removed in a future release, use es-archive.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. 0 The maximum lookback for spans in Elasticsearch. 0s The number of replicas per index in Elasticsearch. 0 The number of shards per index in Elasticsearch. 0 The password required by Elasticsearch. See also, es.username . The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, http://localhost:9200 . The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Timeout used for queries. When set to zero there is no timeout. 0s Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es-archive.password . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Storage example with volume mounts apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public The following example shows a Jaeger CR using an external Elasticsearch cluster with TLS CA certificate mounted from a volume and user/password stored in a secret. 
External Elasticsearch example: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public 1 URL to Elasticsearch service running in default namespace. 2 TLS configuration. In this case only CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS. 3 Secret which defines environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic 4 Volume mounts and volumes which are mounted into all storage components. 3.2.5.6. Managing certificates with Elasticsearch You can create and manage certificates using the Red Hat Elasticsearch Operator. Managing certificates using the Red Hat Elasticsearch Operator also lets you use a single Elasticsearch cluster with multiple Jaeger Collectors. Important Managing certificates with Elasticsearch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Starting with version 2.4, the Red Hat OpenShift distributed tracing platform Operator delegates certificate creation to the Red Hat Elasticsearch Operator by using the following annotations in the Elasticsearch custom resource: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-<shared-es-node-name>: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-<shared-es-node-name>: "system.logging.curator" Where the <shared-es-node-name> is the name of the Elasticsearch node. For example, if you create an Elasticsearch node named custom-es , your custom resource might look like the following example. Example Elasticsearch CR showing annotations apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-custom-es: "system.logging.curator" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy Prerequisites OpenShift Container Platform 4.7 logging subsystem for Red Hat OpenShift 5.2 The Elasticsearch node and the Jaeger instances must be deployed in the same namespace. For example, tracing-system . You enable certificate management by setting spec.storage.elasticsearch.useCertManagement to true in the Jaeger custom resource. 
Example showing useCertManagement apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true The Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource when provisioning Elasticsearch. The certificates are provisioned by the Red Hat Elasticsearch Operator and the Red Hat OpenShift distributed tracing platform Operator injects the certificates. 3.2.5.7. Query configuration options Query is a service that retrieves traces from storage and hosts the user interface to display them. Table 3.14. Parameters used by the Red Hat OpenShift distributed tracing platform Operator to define Query Parameter Description Values Default value Specifies the number of Query replicas to create. Integer, for example, 2 Table 3.15. Configuration parameters passed to Query Parameter Description Values Default value Configuration options that define the Query service. Logging level for Query. Possible values: debug , info , warn , error , fatal , panic . The base path for all jaeger-query HTTP routes can be set to a non-root value, for example, /jaeger would cause all UI URLs to start with /jaeger . This can be useful when running jaeger-query behind a reverse proxy. /<path> Sample Query configuration apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "my-jaeger" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger 3.2.5.8. Ingester configuration options Ingester is a service that reads from a Kafka topic and writes to the Elasticsearch storage backend. If you are using the allInOne or production deployment strategies, you do not need to configure the Ingester service. Table 3.16. Jaeger parameters passed to the Ingester Parameter Description Values Configuration options that define the Ingester service. Specifies the interval, in seconds or minutes, that the Ingester must wait for a message before terminating. The deadlock interval is disabled by default (set to 0 ), to avoid terminating the Ingester when no messages arrive during system initialization. Minutes and seconds, for example, 1m0s . Default value is 0 . The topic parameter identifies the Kafka configuration used by the collector to produce the messages, and the Ingester to consume the messages. Label for the consumer. For example, jaeger-spans . Identifies the Kafka configuration used by the Ingester to consume the messages. Label for the broker, for example, my-cluster-kafka-brokers.kafka:9092 . Logging level for the Ingester. Possible values: debug , info , warn , error , fatal , dpanic , panic . Streaming Collector and Ingester example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200 3.2.6. Injecting sidecars Red Hat OpenShift distributed tracing platform relies on a proxy sidecar within the application's pod to provide the agent. The Red Hat OpenShift distributed tracing platform Operator can inject Agent sidecars into Deployment workloads. 
You can enable automatic sidecar injection or manage it manually. 3.2.6.1. Automatically injecting sidecars The Red Hat OpenShift distributed tracing platform Operator can inject Jaeger Agent sidecars into Deployment workloads. To enable automatic injection of sidecars, add the sidecar.jaegertracing.io/inject annotation set to either the string true or to the distributed tracing platform instance name that is returned by running USD oc get jaegers . When you specify true , there should be only a single distributed tracing platform instance in the same namespace as the deployment; otherwise, the Operator cannot determine which distributed tracing platform instance to use. A specific distributed tracing platform instance name on a deployment has a higher precedence than true applied on its namespace. The following snippet shows a simple application that will inject a sidecar, with the agent pointing to the single distributed tracing platform instance available in the same namespace: Automatic sidecar injection example apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: "sidecar.jaegertracing.io/inject": "true" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion 1 Set to either the string true or to the Jaeger instance name. When the sidecar is injected, the agent can then be accessed at its default location on localhost . 3.2.6.2. Manually injecting sidecars The Red Hat OpenShift distributed tracing platform Operator can only automatically inject Jaeger Agent sidecars into Deployment workloads. For controller types other than Deployments , such as StatefulSet and DaemonSet , you can manually define the Jaeger agent sidecar in your specification. The following snippet shows the manual definition you can include in your containers section for a Jaeger agent sidecar: Sidecar definition example for a StatefulSet apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc The agent can then be accessed at its default location on localhost. 3.3. Configuring and deploying distributed tracing data collection The Red Hat OpenShift distributed tracing data collection Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat OpenShift distributed tracing data collection resources. You can either install the default configuration or modify the file to better suit your business requirements. 3.3.1. OpenTelemetry Collector configuration options Important The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The OpenTelemetry Collector consists of three components that access telemetry data: Receivers - A receiver, which can be push or pull based, is how data gets into the Collector. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. Processors - (Optional) Processors are run on data between being received and being exported. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, it may be recommended that multiple processors be enabled. In addition, it is important to note that the order of processors matters. Exporters - An exporter, which can be push or pull based, is how you send data to one or more backends/destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters may support one or more data sources. Exporters may come with default settings, but many require configuration to specify at least the destination and security settings. You can define multiple instances of components in a custom resource YAML file. Once configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, enable only the components that you need. Sample OpenTelemetry Collector custom resource file apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: processors: exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" service: pipelines: traces: receivers: [otlp] processors: [] exporters: [jaeger] Note If a component is configured but not defined within the service section, it is not enabled. Table 3.17. Parameters used by the Operator to define the OpenTelemetry Collector Parameter Description Values Default A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. otlp , jaeger None The otlp and jaeger receivers come with default settings; specifying the name of the receiver is enough to configure it. Processors run on data between being received and being exported. By default, no processors are enabled. None An exporter sends data to one or more backends/destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline.
Exporters may come with default settings, but many require configuration to specify at least the destination and security settings. logging , jaeger None The jaeger exporter's endpoint must be of the form <name>-collector-headless.<namespace>.svc , with the name and namespace of the Jaeger deployment, for a secure connection to be established. Path to the CA certificate. For a client this verifies the server certificate. For a server this verifies client certificates. If empty uses system root CA. Components are enabled by adding them to a pipeline under services.pipeline . You enable receivers for tracing by adding them under service.pipelines.traces . None You enable processors for tracing by adding them under service.pipelines.traces . None You enable exporters for tracing by adding them under service.pipelines.traces . None 3.3.2. Validating your deployment 3.3.3. Accessing the Jaeger console To access the Jaeger console you must have either Red Hat OpenShift Service Mesh or Red Hat OpenShift distributed tracing installed, and Red Hat OpenShift distributed tracing platform installed, configured, and deployed. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the control plane project, for example tracing-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, tracing-system is the control plane namespace. USD export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 3.4. Upgrading distributed tracing Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs by default in OpenShift Container Platform. OLM queries for available Operators as well as upgrades for installed Operators. For more information about how OpenShift Container Platform handles upgrades, see the Operator Lifecycle Manager documentation. During an update, the Red Hat OpenShift distributed tracing Operators upgrade the managed distributed tracing instances to the version associated with the Operator. 
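Before and after an update, you can confirm which versions OLM has installed for the Operators. The following commands are an illustrative sketch rather than part of the official procedure; they assume that the Operators are installed in the openshift-operators namespace, so adjust the namespace for your cluster:

oc get subscriptions -n openshift-operators

oc get clusterserviceversions -n openshift-operators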
Whenever a new version of the Red Hat OpenShift distributed tracing platform Operator is installed, all the distributed tracing platform application instances managed by the Operator are upgraded to the Operator's version. For example, after upgrading the Operator from 1.10 to 1.11, the Operator scans for running distributed tracing platform instances and upgrades them to 1.11 as well. For specific instructions on how to update the OpenShift Elasticsearch Operator, see Updating OpenShift Logging . 3.4.1. Changing the Operator channel for 2.0 Red Hat OpenShift distributed tracing 2.0.0 made the following changes: Renamed the Red Hat OpenShift Jaeger Operator to the Red Hat OpenShift distributed tracing platform Operator. Stopped support for individual release channels. Going forward, the Red Hat OpenShift distributed tracing platform Operator will only support the stable Operator channel. Maintenance channels, for example 1.24-stable , will no longer be supported by future Operators. As part of the update to version 2.0, you must update your OpenShift Elasticsearch and Red Hat OpenShift distributed tracing platform Operator subscriptions. Prerequisites The OpenShift Container Platform version is 4.6 or later. You have updated the OpenShift Elasticsearch Operator. You have backed up the Jaeger custom resource file. An account with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Important If you have not already updated your OpenShift Elasticsearch Operator as described in Updating OpenShift Logging , complete that update before updating your Red Hat OpenShift distributed tracing platform Operator. For instructions on how to update the Operator channel, see Updating installed Operators . 3.5. Removing distributed tracing The steps for removing Red Hat OpenShift distributed tracing from an OpenShift Container Platform cluster are as follows: Shut down any Red Hat OpenShift distributed tracing pods. Remove any Red Hat OpenShift distributed tracing instances. Remove the Red Hat OpenShift distributed tracing platform Operator. Remove the Red Hat OpenShift distributed tracing data collection Operator. 3.5.1. Removing a Red Hat OpenShift distributed tracing platform instance using the web console Note When deleting an instance that uses in-memory storage, all data is permanently lost. Data stored in persistent storage such as Elasticsearch is not deleted when a Red Hat OpenShift distributed tracing platform instance is removed. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . From the Project menu, select the name of the project where the Operators are installed, for example, openshift-operators . Click the Red Hat OpenShift distributed tracing platform Operator. Click the Jaeger tab. Click the Options menu next to the instance you want to delete and select Delete Jaeger . In the confirmation message, click Delete . 3.5.2. Removing a Red Hat OpenShift distributed tracing platform instance from the CLI Log in to the OpenShift Container Platform CLI. USD oc login --username=<NAMEOFUSER> To display the distributed tracing platform instances, run the command: USD oc get deployments -n <jaeger-project> For example, USD oc get deployments -n openshift-operators The names of Operators have the suffix -operator .
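Because each distributed tracing platform instance is represented by a Jaeger custom resource, you can also list the instances directly instead of filtering the deployment list. This command is a sketch that assumes your account has permission to read the jaegers resource in the target project:

oc get jaegers -n <jaeger-project>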
The following example shows two Red Hat OpenShift distributed tracing platform Operators and four distributed tracing platform instances: USD oc get deployments -n openshift-operators You should see output similar to the following: NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m To remove an instance of distributed tracing platform, run the following command: USD oc delete jaeger <deployment-name> -n <jaeger-project> For example: USD oc delete jaeger tracing2 -n openshift-operators To verify the deletion, run the oc get deployments command again: USD oc get deployments -n <jaeger-project> For example: USD oc get deployments -n openshift-operators You should see generated output that is similar to the following example: NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s 3.5.3. Removing the Red Hat OpenShift distributed tracing Operators Procedure Follow the instructions for Deleting Operators from a cluster . Remove the Red Hat OpenShift distributed tracing platform Operator. After the Red Hat OpenShift distributed tracing platform Operator has been removed, if appropriate, remove the OpenShift Elasticsearch Operator.
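If you prefer the CLI to the web console for removing the Operators, the following sketch shows one possible approach: delete the OLM Subscription and the corresponding ClusterServiceVersion. It assumes the Operators were installed into the openshift-operators namespace; the Subscription and ClusterServiceVersion names reported by the first two commands vary by cluster, so verify them before deleting anything:

oc get subscriptions -n openshift-operators

oc get clusterserviceversions -n openshift-operators

oc delete subscription <jaeger-operator-subscription> -n openshift-operators

oc delete clusterserviceversion <jaeger-operator-csv> -n openshift-operators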
[ "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443", "oc new-project tracing-system", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "oc create -n tracing-system -f jaeger.yaml", "oc get pods -n tracing-system -w", "NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443", "oc new-project tracing-system", "oc create -n tracing-system -f jaeger-production.yaml", "oc get pods -n tracing-system -w", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans #Note: If brokers are not defined,AMQStreams 1.4.0+ will self-provision Kafka. brokers: my-cluster-kafka-brokers.kafka:9092 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443", "oc new-project tracing-system", "oc create -n tracing-system -f jaeger-streaming.yaml", "oc get pods -n tracing-system -w", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "collector: replicas:", "spec: collector: options: {}", "options: collector: num-workers:", "options: collector: queue-size:", "options: kafka: producer: topic: jaeger-spans", "options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092", "options: log-level:", "spec: sampling: options: {} default_strategy: 
service_strategy:", "default_strategy: type: service_strategy: type:", "default_strategy: param: service_strategy: param:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5", "spec: sampling: options: default_strategy: type: probabilistic param: 1", "spec: storage: type:", "storage: secretname:", "storage: options: {}", "storage: esIndexCleaner: enabled:", "storage: esIndexCleaner: numberOfDays:", "storage: esIndexCleaner: schedule:", "elasticsearch: properties: doNotProvision:", "elasticsearch: properties: name:", "elasticsearch: nodeCount:", "elasticsearch: resources: requests: cpu:", "elasticsearch: resources: requests: memory:", "elasticsearch: resources: limits: cpu:", "elasticsearch: resources: limits: memory:", "elasticsearch: redundancyPolicy:", "elasticsearch: useCertManagement:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy", "es: server-urls:", "es: max-doc-count:", "es: max-num-spans:", "es: max-span-age:", "es: sniffer:", "es: sniffer-tls-enabled:", "es: timeout:", "es: username:", "es: password:", "es: version:", "es: num-replicas:", "es: num-shards:", "es: create-index-templates:", "es: index-prefix:", "es: bulk: actions:", "es: bulk: flush-interval:", "es: bulk: size:", "es: bulk: workers:", "es: tls: ca:", "es: tls: cert:", "es: tls: enabled:", "es: tls: key:", "es: tls: server-name:", "es: token-file:", "es-archive: bulk: actions:", "es-archive: bulk: flush-interval:", "es-archive: bulk: size:", "es-archive: bulk: workers:", "es-archive: create-index-templates:", "es-archive: enabled:", "es-archive: index-prefix:", "es-archive: max-doc-count:", "es-archive: max-num-spans:", "es-archive: max-span-age:", "es-archive: num-replicas:", "es-archive: num-shards:", "es-archive: password:", "es-archive: server-urls:", "es-archive: sniffer:", "es-archive: sniffer-tls-enabled:", "es-archive: timeout:", "es-archive: tls: ca:", "es-archive: tls: cert:", "es-archive: tls: enabled:", "es-archive: tls: key:", "es-archive: tls: server-name:", "es-archive: token-file:", "es-archive: username:", "es-archive: version:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: 
/es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true", "spec: query: replicas:", "spec: query: options: {}", "options: log-level:", "options: query: base-path:", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger", "spec: ingester: options: {}", "options: deadlockInterval:", "options: kafka: consumer: topic:", "options: kafka: consumer: brokers:", "options: log-level:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200", "apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion", "apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc", "apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: processors: exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" service: pipelines: traces: receivers: [otlp] processors: [] exporters: [jaeger]", "receivers:", "receivers: otlp:", "processors:", "exporters:", "exporters: jaeger: endpoint:", "exporters: jaeger: tls: ca_file:", "service: pipelines:", "service: pipelines: 
traces: receivers:", "service: pipelines: traces: processors:", "service: pipelines: traces: exporters:", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')", "oc login --username=<NAMEOFUSER>", "oc get deployments -n <jaeger-project>", "oc get deployments -n openshift-operators", "oc get deployments -n openshift-operators", "NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m", "oc delete jaeger <deployment-name> -n <jaeger-project>", "oc delete jaeger tracing2 -n openshift-operators", "oc get deployments -n <jaeger-project>", "oc get deployments -n openshift-operators", "NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/distributed_tracing/distributed-tracing-installation
Chapter 14. ComplianceService
Chapter 14. ComplianceService 14.1. GetAggregatedResults GET /v1/compliance/aggregatedresults 14.1.1. Description 14.1.2. Parameters 14.1.2.1. Query Parameters Name Description Required Default Pattern groupBy String - null unit - UNKNOWN where.query - null where.pagination.limit - null where.pagination.offset - null where.pagination.sortOption.field - null where.pagination.sortOption.reversed - null where.pagination.sortOption.aggregateBy.aggrFunc - UNSET where.pagination.sortOption.aggregateBy.distinct - null 14.1.3. Return Type StorageComplianceAggregationResponse 14.1.4. Content Type application/json 14.1.5. Responses Table 14.1. HTTP Response Codes Code Message Datatype 200 A successful response. StorageComplianceAggregationResponse 0 An unexpected error response. GooglerpcStatus 14.1.6. Samples 14.1.7. Common object reference 14.1.7.1. ComplianceAggregationAggregationKey Field Name Required Nullable Type Description Format scope StorageComplianceAggregationScope UNKNOWN, STANDARD, CLUSTER, CATEGORY, CONTROL, NAMESPACE, NODE, DEPLOYMENT, CHECK, id String 14.1.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 14.1.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 14.1.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 14.1.7.4. StorageComplianceAggregationResponse Field Name Required Nullable Type Description Format results List of StorageComplianceAggregationResult sources List of StorageComplianceAggregationSource errorMessage String 14.1.7.5. StorageComplianceAggregationResult Field Name Required Nullable Type Description Format aggregationKeys List of ComplianceAggregationAggregationKey unit StorageComplianceAggregationScope UNKNOWN, STANDARD, CLUSTER, CATEGORY, CONTROL, NAMESPACE, NODE, DEPLOYMENT, CHECK, numPassing Integer int32 numFailing Integer int32 numSkipped Integer int32 14.1.7.6. StorageComplianceAggregationScope Enum Values UNKNOWN STANDARD CLUSTER CATEGORY CONTROL NAMESPACE NODE DEPLOYMENT CHECK 14.1.7.7. StorageComplianceAggregationSource Field Name Required Nullable Type Description Format clusterId String standardId String successfulRun StorageComplianceRunMetadata failedRuns List of StorageComplianceRunMetadata 14.1.7.8. StorageComplianceRunMetadata Field Name Required Nullable Type Description Format runId String standardId String clusterId String startTimestamp Date date-time finishTimestamp Date date-time success Boolean errorMessage String domainId String 14.2. GetRunResults GET /v1/compliance/runresults 14.2.1. Description 14.2.2. Parameters 14.2.2.1. Query Parameters Name Description Required Default Pattern clusterId - null standardId - null runId Specifies the run ID for which to return results. If empty, the most recent run is returned. CAVEAT: Setting this field circumvents the results cache on the server-side, which may lead to significantly increased memory pressure and decreased performance. - null 14.2.3. Return Type V1GetComplianceRunResultsResponse 14.2.4. Content Type application/json 14.2.5. Responses Table 14.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetComplianceRunResultsResponse 0 An unexpected error response. GooglerpcStatus 14.2.6. Samples 14.2.7. Common object reference 14.2.7.1. ComplianceResultValueEvidence Field Name Required Nullable Type Description Format state StorageComplianceState COMPLIANCE_STATE_UNKNOWN, COMPLIANCE_STATE_SKIP, COMPLIANCE_STATE_NOTE, COMPLIANCE_STATE_SUCCESS, COMPLIANCE_STATE_FAILURE, COMPLIANCE_STATE_ERROR, message String messageId Integer int32 14.2.7.2. ComplianceRunResultsEntityResults Field Name Required Nullable Type Description Format controlResults Map of StorageComplianceResultValue 14.2.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 14.2.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 14.2.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 14.2.7.5. StorageComplianceDomain Field Name Required Nullable Type Description Format id String cluster StorageComplianceDomainCluster nodes Map of StorageComplianceDomainNode deployments Map of StorageComplianceDomainDeployment 14.2.7.6. StorageComplianceDomainCluster Field Name Required Nullable Type Description Format id String name String 14.2.7.7. StorageComplianceDomainDeployment Field Name Required Nullable Type Description Format id String name String type String namespace String namespaceId String clusterId String clusterName String 14.2.7.8. StorageComplianceDomainNode Field Name Required Nullable Type Description Format id String name String clusterId String clusterName String 14.2.7.9. StorageComplianceResultValue Field Name Required Nullable Type Description Format evidence List of ComplianceResultValueEvidence overallState StorageComplianceState COMPLIANCE_STATE_UNKNOWN, COMPLIANCE_STATE_SKIP, COMPLIANCE_STATE_NOTE, COMPLIANCE_STATE_SUCCESS, COMPLIANCE_STATE_FAILURE, COMPLIANCE_STATE_ERROR, 14.2.7.10. StorageComplianceRunMetadata Field Name Required Nullable Type Description Format runId String standardId String clusterId String startTimestamp Date date-time finishTimestamp Date date-time success Boolean errorMessage String domainId String 14.2.7.11. 
StorageComplianceRunResults Field Name Required Nullable Type Description Format domain StorageComplianceDomain runMetadata StorageComplianceRunMetadata clusterResults ComplianceRunResultsEntityResults nodeResults Map of ComplianceRunResultsEntityResults deploymentResults Map of ComplianceRunResultsEntityResults machineConfigResults Map of ComplianceRunResultsEntityResults 14.2.7.12. StorageComplianceState Enum Values COMPLIANCE_STATE_UNKNOWN COMPLIANCE_STATE_SKIP COMPLIANCE_STATE_NOTE COMPLIANCE_STATE_SUCCESS COMPLIANCE_STATE_FAILURE COMPLIANCE_STATE_ERROR 14.2.7.13. V1GetComplianceRunResultsResponse Field Name Required Nullable Type Description Format results StorageComplianceRunResults failedRuns List of StorageComplianceRunMetadata 14.3. GetStandards GET /v1/compliance/standards 14.3.1. Description 14.3.2. Parameters 14.3.3. Return Type V1GetComplianceStandardsResponse 14.3.4. Content Type application/json 14.3.5. Responses Table 14.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetComplianceStandardsResponse 0 An unexpected error response. GooglerpcStatus 14.3.6. Samples 14.3.7. Common object reference 14.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 14.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 14.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 14.3.7.3. V1ComplianceStandardMetadata Field Name Required Nullable Type Description Format id String name String description String numImplementedChecks Integer int32 scopes List of V1ComplianceStandardMetadataScope dynamic Boolean hideScanResults Boolean 14.3.7.4. V1ComplianceStandardMetadataScope Enum Values UNSET CLUSTER NAMESPACE DEPLOYMENT NODE 14.3.7.5. V1GetComplianceStandardsResponse Field Name Required Nullable Type Description Format standards List of V1ComplianceStandardMetadata 14.4. GetStandard GET /v1/compliance/standards/{id} 14.4.1. Description 14.4.2. Parameters 14.4.2.1. Path Parameters Name Description Required Default Pattern id X null 14.4.3. Return Type V1GetComplianceStandardResponse 14.4.4. Content Type application/json 14.4.5. Responses Table 14.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetComplianceStandardResponse 0 An unexpected error response. GooglerpcStatus 14.4.6. Samples 14.4.7. Common object reference 14.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 14.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 14.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 14.4.7.3. V1ComplianceControl Field Name Required Nullable Type Description Format id String standardId String groupId String name String description String implemented Boolean interpretationText String 14.4.7.4. V1ComplianceControlGroup Field Name Required Nullable Type Description Format id String standardId String name String description String numImplementedChecks Integer int32 14.4.7.5. V1ComplianceStandard Field Name Required Nullable Type Description Format metadata V1ComplianceStandardMetadata groups List of V1ComplianceControlGroup controls List of V1ComplianceControl 14.4.7.6. V1ComplianceStandardMetadata Field Name Required Nullable Type Description Format id String name String description String numImplementedChecks Integer int32 scopes List of V1ComplianceStandardMetadataScope dynamic Boolean hideScanResults Boolean 14.4.7.7. V1ComplianceStandardMetadataScope Enum Values UNSET CLUSTER NAMESPACE DEPLOYMENT NODE 14.4.7.8. V1GetComplianceStandardResponse Field Name Required Nullable Type Description Format standard V1ComplianceStandard 14.5. UpdateComplianceStandardConfig PATCH /v1/compliance/standards/{id} 14.5.1. Description 14.5.2. Parameters 14.5.2.1. Path Parameters Name Description Required Default Pattern id X null 14.5.2.2. Body Parameter Name Description Required Default Pattern body ComplianceServiceUpdateComplianceStandardConfigBody X 14.5.3. Return Type Object 14.5.4. Content Type application/json 14.5.5. Responses Table 14.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 14.5.6. Samples 14.5.7. Common object reference 14.5.7.1. ComplianceServiceUpdateComplianceStandardConfigBody Field Name Required Nullable Type Description Format hideScanResults Boolean 14.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 14.5.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 14.5.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics.
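The following curl invocations are an illustrative sketch of how the endpoints documented in this chapter can be called over HTTPS. They assume that <central_address> is the address of your Central instance and that <api_token> is an API token with permission to read and modify compliance data; the placeholders, the -k flag, and the example query parameters are assumptions that you should adapt to your environment:

curl -k -H "Authorization: Bearer <api_token>" https://<central_address>/v1/compliance/standards

curl -k -H "Authorization: Bearer <api_token>" https://<central_address>/v1/compliance/standards/<standard_id>

curl -k -H "Authorization: Bearer <api_token>" "https://<central_address>/v1/compliance/aggregatedresults?groupBy=STANDARD&unit=CHECK"

curl -k -H "Authorization: Bearer <api_token>" "https://<central_address>/v1/compliance/runresults?clusterId=<cluster_id>&standardId=<standard_id>"

curl -k -X PATCH -H "Authorization: Bearer <api_token>" -H "Content-Type: application/json" -d '{"hideScanResults": true}' https://<central_address>/v1/compliance/standards/<standard_id>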
[ "Next available tag: 3", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 3", "Next available tag: 5", "Next available tag: 5", "Next available tag: 5", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 5", "These must mirror the tags _exactly_ in cluster.proto for backwards compatibility", "This must mirror the tags _exactly_ in deployment.proto for backwards compatibility", "These must mirror the tags _exactly_ in node.proto for backwards compatibility", "Next available tag: 5", "Next available tag: 6", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/complianceservice
6.11. Setting Up Server-side Key Generation
6.11. Setting Up Server-side Key Generation Server-side key generation means that keys are generated by a Key Recovery Authority (KRA), an optional Certificate System subsystem. Key generation by the KRA is necessary to allow recovery of keys on lost or damaged tokens, or key retrieval in the case of external registration. This section describes how to configure server-side key generation in a token management system (TMS). During TPS installation you are asked to specify whether you want to use key archival. If you confirm, the setup performs basic configuration automatically, setting the following parameters: TPS connector parameters for the KRA: TPS profile-specific parameters for server-side key generation: Set the serverKeygen.enable=true option for serverKeygen.archive to take effect. Important The LunaSA HSM does not support RSA encryption key sizes smaller than 2048 bits. For example, to configure a key size of 2048 bits, set the following parameter in the /var/lib/pki/ instance_name /tps/conf/CS.cfg file: TKS configuration: The following configures the nickname of the transport certificate used for communication between the TKS and KRA (via TPS): The referenced transport certificate must also exist in the TKS instance security module. For example: KRA configuration Depending on the PKCS#11 token, the parameters kra.keygen.temporaryPairs , kra.keygen.sensitivePairs , and kra.keygen.extractablePairs can be customized for key generation options. These parameters are all set to false by default. The following values for these parameters have been tested with some of the security modules supported by Red Hat Certificate System: NSS (when in FIPS mode): nCipher nShield Connect 6000 (works by default without specifying): For specifying RSA keys: (Do not specify any other parameters.) For generating ECC keys: LunaSA CKE - Key Export Model (non-FIPS mode): Note Gemalto SafeNet LunaSA only supports PKI private key extraction in its CKE - Key Export model, and only in non-FIPS mode. The LunaSA Cloning model and the CKE model in FIPS mode do not support PKI private key extraction. Note When the LunaSA CKE - Key Export Model is in FIPS mode, PKI private keys cannot be extracted.
[ "tps.connector.kra1.enable=true tps.connector.kra1.host=host1.EXAMPLE.com tps.connector.kra1.maxHttpConns=15 tps.connector.kra1.minHttpConns=1 tps.connector.kra1.nickName=subsystemCert cert-pki-tomcat tps.connector.kra1.port=8443 tps.connector.kra1.timeout=30 tps.connector.kra1.uri.GenerateKeyPair=/kra/agent/kra/GenerateKeyPair tps.connector.kra1.uri.TokenKeyRecovery=/kra/agent/kra/TokenKeyRecovery", "op.enroll.userKey.keyGen.encryption.serverKeygen.archive=true op.enroll.userKey.keyGen.encryption.serverKeygen.drm.conn=kra1 op.enroll.userKey.keyGen.encryption.serverKeygen.enable=true", "op.enroll.userKey.keyGen.encryption.keySize=2048", "tks.drm_transport_cert_nickname=transportCert cert-pki-tomcat KRA", "transportCert cert-pki-tomcat KRA u,u,u", "kra.keygen.extractablePairs=true", "kra.keygen.temporaryPairs=true", "kra.keygen.temporaryPairs=true kra.keygen.sensitivePairs=false kra.keygen.extractablePairs=true", "kra.keygen.temporaryPairs=true kra.keygen.sensitivePairs=true kra.keygen.extractablePairs=true" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/sect-server-key-generation-setup
Chapter 6. Updating the OpenShift Data Foundation external secret
Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.16.x to 4.16.y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.16.z, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the ceph-external-cluster-details-exporter.py python script that matches your OpenShift Data Foundation version. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. The updated permissions for the user are set as: Run the previously downloaded python script and save the JSON output that gets generated, from the external Red Hat Ceph Storage cluster. Run the previously downloaded python script: Note Make sure to use all the flags that you used in the original deployment including any optional argument that you have used. Ensure that all the parameters, including the optional arguments, except for monitoring-endpoint and monitoring-endpoint-port , are the same as that you used during the original deployment of OpenShift Data Foundation in external mode. --rbd-data-pool-name Is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint Is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> . --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. Additional flags: rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. rgw-skip-tls (Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (NOT RECOMMENDED). ceph-conf (Optional) The name of the Ceph configuration file. cluster-name (Optional) The Ceph cluster name. 
output (Optional) The file where the output is required to be stored. cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. dry-run (Optional) This parameter helps to print the executed commands without running them. Save the JSON output generated after running the script in the step. Example output: Upload the generated JSON file. Log in to the OpenShift Web Console. Click Workloads Secrets . Set project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) Edit Secret . Click Browse and upload the JSON file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. On the Overview Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. If verification steps fail, contact Red Hat Support .
[ "oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}'| base64 --decode > ceph-external-cluster-details-exporter.py", "python3 ceph-external-cluster-details-exporter.py --upgrade", "client.csi-cephfs-node key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ== caps: [mds] allow rw caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs = client.csi-cephfs-provisioner key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ== caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs metadata=* client.csi-rbd-node key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA== caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd client.csi-rbd-provisioner key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ== caps: [mgr] allow rw caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name> [optional arguments]", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/updating_openshift_data_foundation/updating-the-openshift-data-foundation-external-secret_rhodf
Chapter 2. Differences from upstream OpenJDK 8
Chapter 2. Differences from upstream OpenJDK 8 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 8 changes: FIPS support. Red Hat build of OpenJDK 8 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 8 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 8 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
null
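As a hedged aside that is not part of this guide: on RHEL 8 and later, the system-wide policy that Red Hat build of OpenJDK follows can be inspected or changed with the crypto-policies tooling, for example:
    # show the currently active system-wide cryptographic policy
    update-crypto-policies --show
    # switch to another predefined profile (DEFAULT, LEGACY, FUTURE, or FIPS)
    sudo update-crypto-policies --set DEFAULT
    # a reboot is recommended so the change fully takes effect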
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.412/rn-openjdk-diff-from-upstream
17.3. Shared Cache Stores
17.3. Shared Cache Stores A shared cache store is a cache store that is shared by multiple cache instances. A shared cache store is useful when all instances in a cluster communicate with the same remote, shared database using the same JDBC settings. In such an instance, configuring a shared cache store prevents the unnecessary repeated write operations that occur when various cache instances attempt to write the same data to the cache store. 17.3.1. Invalidation Mode and Shared Cache Stores When used in conjunction with a shared cache store, Red Hat JBoss Data Grid's invalidation mode causes remote caches to look up modified data in the shared cache store. The benefits of using invalidation mode in conjunction with shared cache stores include the following: Compared to replication messages, which contain the updated data, invalidation messages are much smaller and result in reduced network traffic. The remaining cluster caches look up modified data from the shared cache store lazily and only when required to do so, resulting in further reduced network traffic. 17.3.2. The Cache Store and Cache Passivation In Red Hat JBoss Data Grid, a cache store can be used to enforce the passivation of entries and to activate eviction in a cache. Whether passivation mode or activation mode is used, the configured cache store both reads from and writes to the data store. When passivation is disabled in JBoss Data Grid, after the modification, addition, or removal of an element is carried out, the cache store steps in to persist the changes in the store. 17.3.3. Application Cachestore Registration It is not necessary to register an application cache store for an isolated deployment. This is not a requirement in Red Hat JBoss Data Grid because lazy deserialization is used to work around this problem.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-shared_cache_stores
Chapter 15. System and Subscription Management
Chapter 15. System and Subscription Management PowerTOP now respects user-defined report file names Previously, PowerTOP report file names were generated in an unclear, undocumented way. With this update, the implementation has been improved, and the generated file names now respect the names requested by the user. This applies to both CSV and HTML reports. Amended yum-config-manager commands Previously, running the yum-config-manager --disable command disabled all configured repositories, while the yum-config-manager --enable command did not enable any. This inconsistency has been fixed. The --disable and --enable commands now require the use of '\*' in the syntax, and yum-config-manager --enable \* enables repositories. Running the commands without the addition of '\*' prints a message asking the user to run yum-config-manager --disable \* or yum-config-manager --enable \* if they want to disable or enable repositories. New search-disabled-repos plug-in for yum The search-disabled-repos plug-in for yum has been added to the subscription-manager packages. This plug-in allows users to successfully complete yum operations that fail due to the source repository being dependent on a disabled repository. When search-disabled-repos is installed in the described scenario, yum displays instructions to temporarily enable repositories that are currently disabled and to search for missing dependencies. If you choose to follow the instructions and turn off the default notify_only behavior in the /etc/yum/pluginconf.d/search-disabled-repos.conf file, future yum operations will prompt you to temporarily or permanently enable all the disabled repositories needed to fulfill the yum transaction. Acquiring hypervisor data in parallel With this update, virt-who is able to acquire data from multiple hypervisors in parallel. Previously, virt-who could read data only from a single hypervisor at a time, and if one hypervisor in a series was nonfunctional, virt-who waited for its response and thus failed. Reading parallel hypervisors works around this problem and prevents the described failure. Filtering for hypervisors reported by virt-who The virt-who service introduces a filtering mechanism for the Subscription Manager reports. As a result, users can now choose which hosts virt-who should display according to the specified parameters. For example, they can filter out hosts that do not run any Red Hat Enterprise Linux guests, or hosts that run guests of a specified version of Red Hat Enterprise Linux. Improved visualization of host-to-guest association The -p option has been added to the virt-who utility. When used with -p , virt-who output displays Javascript Object Notation (JSON)-encoded map of the host-guest association. In addition, the information on host-guest association logged in the /var/log/rhsm/rhsm.log file is now formatted in JSON as well. virt-who output displayed as host names It is now possible to configure the virt-who query so that its results are displayed as host names instead of as Universally Unique Identifiers (UUIDs) when viewed in Red Hat Satellite and Red Hat Customer Portal. To enable the function, add the hypervisor_id=hostname option to the configuration file in the /etc/virt-who.d/ directory. Ideally, this should be done before using virt-who for the first time, otherwise changing the configuration duplicates the hypervisor. Pre-filled virt-who configuration file A default configuration file for virt-who has been placed in the /etc/virt-who.d/ directory. 
It contains a template and instructions for the user to configure virt-who. This replaces the deprecated configuration that uses the /etc/sysconfig/virt-who file. Enhanced proxy connection options With Red Hat Enterprise Linux 7.2, the virt-who utility can handle the HTTP_PROXY and HTTPS_PROXY environment variables, and thus correctly uses the proxy server when requested. This allows virt-who to connect to the Hyper-V hypervisor and Red Hat Enterprise Virtualization Manager through proxy. Subscription Manager now supports syslog The subscription-manager tool can now use the syslog as the log handler and formatter in addition to separate log used previously. The handler and formatter is configured in the /etc/rhsm/logging.conf configuration file. Subscription Manager is now part of Initial Setup The Subscription Manager component of Firstboot has been ported to the Initial Setup utility. Users are now able to register the system from the main menu of Initial Setup after installing a Red Hat Enterprise Linux 7 system and rebooting for the first time. Subscription Manager now displays the server URL when registering on a command line When registering a system using the subscription-manager command on a command line, the tool now also shows the URL of the server being used for the registration when asking for user name and password. This helps the user determine which credentials to use. Manage Repositories dialog in Subscription Manager is now more responsive The Manage Repositories dialog in the graphical version of Subscription Manager (the subscription-manager-gui package) has been updated to no longer fetch information on each checkbox change. Instead, the system state is only synchronized when the new save button is clicked. This removes delays users experienced in versions caused by the system state being updated on each checkbox action, and repository management is now significantly more responsive. ReaR now works also on interfaces other than eth0 Previously, the rescue system produced by ReaR did not support mounting an NFS server using an interface other than eth0. In that case, the rescue system and backup files could not be downloaded and the system could not be restored. With this update, this has been fixed, and other interfaces, such as eth1, eth2, and so on, can now be used.
null
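For illustration only (the repository ID below is a placeholder), the amended syntax distinguishes acting on a single repository from acting on all repositories:
    # enable one specific repository by ID
    yum-config-manager --enable rhel-7-server-extras-rpms
    # enable or disable every configured repository (note the escaped asterisk)
    yum-config-manager --enable \*
    yum-config-manager --disable \*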
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/system_and_subscription_management
Chapter 2. Managing virtual machines using the Web Console
Chapter 2. Managing virtual machines using the Web Console 2.1. Creating a virtual machine from a template using the Web Console Log in to the Web Console. Click the hostname, then click oVirt Machines Template . Click the New VM button beside the template that you want to use. Specify a Name for your VM and click Create . Your new virtual machine is created on one of the hosts in your hyperconverged cluster. 2.2. Updating a virtual machine using the Web Console You cannot currently update virtual machines using the Web Console. See Upgrading Red Hat Hyperconverged Infrastructure for Virtualization for information about updating the Hosted Engine virtual machine using the Administration Portal. See Updating Virtual Machine Guest Agents and Drivers in the Red Hat Virtualization 4.4 documentation for instructions on updating virtualization related software on a virtual machine using the Administration Portal. 2.3. Starting a virtual machine using the Web Console Log in to the Web Console. Click the hostname, then click oVirt Machines Cluster . Click Run beside the virtual machine you want to start. 2.4. Pausing a virtual machine using the Web Console Log in to the Web Console. Click the hostname, then click oVirt Machines Host . Click the virtual machine to pause. Click Suspend . 2.5. Resuming a virtual machine using the Web Console Log in to the Web Console. Click the hostname, then click oVirt Machines Host . Click the virtual machine to resume. Click Resume . 2.6. Deleting a virtual machine using the Web Console Log in to the Web Console. Click the hostname, then click oVirt Machines Host . Click the virtual machine to delete. Click Shut Down to shut down the virtual machine before deletion. Click Delete . Confirm deletion. 2.7. Shutting down a virtual machine using the Web Console Log in to the Web Console on the host that is running the virtual machine. Click the hostname, then click oVirt Machines Host . Click on the virtual machine you want to shut down. Click Shut Down . This shuts the virtual machine down gracefully. If your virtual machine is not responding, click the dropdown arrow beside Shut Down and click Force Shut Down instead. 2.8. Migrating a virtual machine to a different hyperconverged host using the Web Console Log in to the Web Console. Click the hostname, then click oVirt Machines Host . Click the virtual machine to migrate. Click the oVirt section. The oVirt section of the virtual machine summary Specify a host in the dropdown menu, or use the default value of Automatically selected host . Click Migrate to and wait for the virtual machine to migrate. Click the Cluster subtab and verify that the virtual machine is now running on a different host. 2.9. Accessing the console of a virtual machine using the Web Console Log in to the Web Console. Click the hostname, then click oVirt Machines Host . Click the Console subtab. Select a Console Type . For the Hosted Engine virtual machine The default console type for the Hosted Engine virtual machine is Graphics Console (VNC) . The console loads after several seconds. Graphics Console (VNC) Click anywhere in the console and log in to the Hosted Engine virtual machine to perform any administrative operations. For any other virtual machine: Graphics Console in Desktop Viewer On Red Hat Enterprise Linux based systems, click Launch Remote Viewer to launch the Remote Viewer application. Otherwise, use the information under Manual Connection to connect to the console with your preferred client.
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/managing_virtual_machines_using_the_web_console/assembly-managing-vms-using-cockpit
Chapter 4. Installing a cluster
Chapter 4. Installing a cluster 4.1. Cleaning up installations In case of an earlier failed deployment, remove the artifacts from the failed attempt before trying to deploy OpenShift Container Platform again. Procedure Power off all bare-metal nodes before installing the OpenShift Container Platform cluster by using the following command: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove all old bootstrap resources if any remain from an earlier deployment attempt by using the following script: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done Delete the artifacts that the earlier installation generated by using the following command: USD cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \ .openshift_install.log .openshift_install_state.json Re-create the OpenShift Container Platform manifests by using the following command: USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests 4.2. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 4.3. Following the progress of the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder: USD tail -f /path/to/install-dir/.openshift_install.log 4.4. Verifying static IP address configuration If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node's network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address. Note The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. Verify the network configuration is working properly. Procedure Check the network interface configuration on the node. Turn off the DHCP server and reboot the OpenShift Container Platform node and ensure that the network configuration works properly. 4.5. Additional resources Understanding update channels and releases
[ "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installing-a-cluster
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_spring_boot_starter/making-open-source-more-inclusive
Chapter 6. Subscriptions
Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication, either as a source or destination, requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4 or 8 for each core which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured, the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, and to 4 vCPUs on SMT level of 2, and to 8 vCPUs on SMT level of 4 and to 16 vCPUs on SMT level of 8 as seen in the table above. 
A large virtual machine (VM) might have 16 vCPUs, which at an SMT level of 8 requires a 2-core subscription, based on dividing the number of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading, resulting in 1 calculated core, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power has a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation deployment should be a multiple of core-pairs. 6.5. Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide.
null
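A back-of-the-envelope helper, not part of any subscription tooling: given the vCPU count and the threads-per-core ratio (2 for hyperthreading, up to 8 for IBM Power SMT-8), the number of required 2-core subscriptions can be estimated as follows.
    # example: 16 vCPUs at SMT-8 -> 2 cores -> one 2-core subscription
    vcpus=16
    threads_per_core=8   # use 2 for a hyperthreaded x86 system, 1 if SMT/HT is off
    cores=$(( vcpus / threads_per_core ))
    subscriptions=$(( (cores + 1) / 2 ))   # odd core counts round up to a full core-pair
    echo "${cores} cores -> ${subscriptions} x 2-core subscription(s)"
On a running Linux system, lscpu reports the Thread(s) per core value used above.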
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/planning_your_deployment/subscriptions_rhodf
Chapter 11. Searching IdM entries using the ldapsearch command
Chapter 11. Searching IdM entries using the ldapsearch command You can use the ipa find command to search through the Identity Management entries. For more information about the ipa command, see the Structure of IPA commands section. This section introduces the basics of an alternative option: searching through the Identity Management entries by using the ldapsearch command-line command. 11.1. Using the ldapsearch command The ldapsearch command has the following format: To configure the authentication method, specify the -x option to use simple binds or the -Y option to set the Simple Authentication and Security Layer (SASL) mechanism. Note that you need to obtain a Kerberos ticket if you are using the -Y GSSAPI option. The options are the ldapsearch command options described in a table below. The search_filter is an LDAP search filter. The list_of_attributes is a list of the attributes that the search results return. For example, you want to search all the entries of a base LDAP tree for the user name user01 : The -x option tells the ldapsearch command to authenticate with the simple bind. Note that if you do not provide the Distinguished Name (DN) with the -D option, the authentication is anonymous. The -H option connects you to the ldap://ldap.example.com . The -s sub option tells the ldapsearch command to search all the entries, starting from the base DN, for the user with the name user01 . The "(uid=user01)" is a filter. Note that if you do not provide the starting point for the search with the -b option, the command searches in the default tree. It is specified in the BASE parameter of the /etc/openldap/ldap.conf file. Table 11.1. The ldapsearch command options Option Description -b The starting point for the search. If your search parameters contain an asterisk (*) or another character that the command line can interpret into a code, you must wrap the value in single or double quotation marks. For example, -b cn=user,ou=Product Development,dc=example,dc=com . -D The Distinguished Name (DN) with which you want to authenticate. -H An LDAP URL to connect to the server. The -H option replaces the -h and -p options. -l The time limit in seconds to wait for a search request to complete. -s scope The scope of the search. You can choose one of the following for the scope: base searches only the entry from the -b option or defined by the LDAP_BASEDN environment variable. one searches only the children of the entry from the -b option. sub a subtree search from the -b option starting point. -W Requests the password. -x Disables the default SASL connection to allow simple binds. -Y SASL_mechanism Sets the SASL mechanism for the authentication. -z number The maximum number of entries in the search result. Note that you must specify one of the authentication mechanisms with the -x or -Y option with the ldapsearch command. Additional resources For details on how to use ldapsearch , see the ldapsearch(1) man page on your system. 11.2. Using the ldapsearch filters The ldapsearch filters allow you to narrow down the search results. For example, you want the search result to contain all the entries with a common name set to example : In this case, the equal sign (=) is the operator, and example is the value. Table 11.2. The ldapsearch filter operators Search type Operator Description Equality = Returns the entries with the exact match to the value. For example, cn=example . Substring =string* string Returns all entries with the substring match. For example, cn=exa*l . The asterisk (*) indicates zero (0) or more characters. 
Greater than or equal to >= Returns all entries with attributes that are greater than or equal to the value. For example, uidNumber >= 5000 . Less than or equal to <= Returns all entries with attributes that are less than or equal to the value. For example, uidNumber <= 5000 . Presence =* Returns all entries that contain one or more values for the attribute. For example, cn=* . Approximate ~= Returns all entries with attributes that are similar to the value. For example, l~=san fransico can return l=san francisco . You can use boolean operators to combine multiple filters in the ldapsearch command. Table 11.3. The ldapsearch filter boolean operators Search type Operator Description AND & Returns all entries where all statements in the filters are true. For example, (&(filter)(filter)(filter)... ) . OR | Returns all entries where at least one statement in the filters is true. For example, (|(filter)(filter)(filter)... ) . NOT ! Returns all entries where the statement in the filter is not true. For example, (!(filter)) .
[ "ldapsearch [-x | -Y mechanism] [options] [search_filter] [list_of_attributes]", "ldapsearch -x -H ldap://ldap.example.com -s sub \"(uid=user01)\"", "\"(cn=example)\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_searching-idm-entries_managing-users-groups-hosts
3.8. Backing Up and Restoring a Cluster Configuration
3.8. Backing Up and Restoring a Cluster Configuration As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball with the following command. If you do not specify a file name, the standard output will be used. Use the following command to restore the cluster configuration files on all cluster nodes from the backup. Specifying the --local option restores the cluster configuration files only on the node from which you run this command. If you do not specify a file name, the standard input will be used.
[ "pcs config backup filename", "pcs config restore [--local] [ filename ]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-pcsbackuprestore-HAAR
Chapter 61. BrokerCapacity schema reference
Chapter 61. BrokerCapacity schema reference Used in: CruiseControlSpec Property Property type Description disk string The disk property has been deprecated. The Cruise Control disk capacity setting has been deprecated, is ignored, and will be removed in the future Broker capacity for disk in bytes. Use a number value with either standard OpenShift byte units (K, M, G, or T), their bibyte (power of two) equivalents (Ki, Mi, Gi, or Ti), or a byte value with or without E notation. For example, 100000M, 100000Mi, 104857600000, or 1e+11. cpuUtilization integer The cpuUtilization property has been deprecated. The Cruise Control CPU capacity setting has been deprecated, is ignored, and will be removed in the future Broker capacity for CPU resource utilization as a percentage (0 - 100). cpu string Broker capacity for CPU resource in cores or millicores. For example, 1, 1.500, 1500m. For more information on valid CPU resource units see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu . inboundNetwork string Broker capacity for inbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. outboundNetwork string Broker capacity for outbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. overrides BrokerCapacityOverride array Overrides for individual brokers. The overrides property lets you specify a different capacity configuration for different brokers.
null
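As an illustrative fragment only, using the example values quoted in the table above: these capacity settings sit under spec.cruiseControl.brokerCapacity in the Kafka custom resource. Writing the fragment out with a shell here-document keeps the sketch self-contained; the file name is arbitrary.
    cat > kafka-broker-capacity-fragment.yaml <<'EOF'
    # merge under spec.cruiseControl in your Kafka custom resource
    brokerCapacity:
      cpu: "1500m"
      inboundNetwork: 10000KiB/s
      outboundNetwork: 10000KiB/s
    EOF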
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-BrokerCapacity-reference
24.2. Default Settings
24.2. Default Settings After defining the Server Name , Webmaster email address , and Available Addresses , click the Virtual Hosts tab and click the Edit Default Settings button. A window as shown in Figure 24.3, "Site Configuration" appears. Configure the default settings for your Web server in this window. If you add a virtual host, the settings you configure for the virtual host take precedence for that virtual host. For a directive not defined within the virtual host settings, the default value is used. 24.2.1. Site Configuration The default values for the Directory Page Search List and Error Pages work for most servers. If you are unsure of these settings, do not modify them. Figure 24.3. Site Configuration The entries listed in the Directory Page Search List define the DirectoryIndex directive. The DirectoryIndex is the default page served by the server when a user requests an index of a directory by specifying a forward slash (/) at the end of the directory name. For example, when a user requests the page http://www.example.com/this_directory/ , they are going to get either the DirectoryIndex page, if it exists, or a server-generated directory list. The server tries to find one of the files listed in the DirectoryIndex directive and returns the first one it finds. If it does not find any of these files and if Options Indexes is set for that directory, the server generates and returns a list, in HTML format, of the subdirectories and files in the directory. Use the Error Code section to configure Apache HTTP Server to redirect the client to a local or external URL in the event of a problem or error. This option corresponds to the ErrorDocument directive. If a problem or error occurs when a client tries to connect to the Apache HTTP Server, the default action is to display the short error message shown in the Error Code column. To override this default configuration, select the error code and click the Edit button. Choose Default to display the default short error message. Choose URL to redirect the client to an external URL and enter a complete URL, including the http:// , in the Location field. Choose File to redirect the client to an internal URL and enter a file location under the document root for the Web server. The location must begin with a slash (/) and be relative to the Document Root. For example, to redirect a 404 Not Found error code to a webpage that you created in a file called 404.html , copy 404.html to DocumentRoot /../error/404.html . In this case, DocumentRoot is the Document Root directory that you have defined (the default is /var/www/html/ ). If the Document Root is left as the default location, the file should be copied to /var/www/error/404.html . Then, choose File as the Behavior for the 404 - Not Found error code and enter /error/404.html as the Location . From the Default Error Page Footer menu, you can choose one of the following options: Show footer with email address - Display the default footer at the bottom of all error pages along with the email address of the website maintainer specified by the ServerAdmin directive. Refer to Section 24.3.1.1, "General Options" for information about configuring the ServerAdmin directive. Show footer - Display just the default footer at the bottom of error pages. No footer - Do not display a footer at the bottom of error pages.
null
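A short sketch of the manual equivalent of the 404 example above, assuming the default DocumentRoot of /var/www/html/; the resulting directive is shown as a comment:
    # place the custom error page where the tool expects it
    cp 404.html /var/www/error/404.html
    # the File behavior for the 404 error code corresponds to this httpd directive:
    #   ErrorDocument 404 /error/404.html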
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/HTTPD_Configuration-Default_Settings
14.4. Reversing Changes in Between Snapshots
14.4. Reversing Changes in Between Snapshots To reverse changes made between two existing Snapper snapshots, use the undochange command in the following format, where 1 is the first snapshot and 2 is the second snapshot: Important Using the undochange command does not revert the Snapper volume back to its original state and does not provide data consistency. Any file modification that occurs outside of the specified range, for example after snapshot 2, will remain unchanged after reverting back, for example to the state of snapshot 1. For example, if undochange is run to undo the creation of a user, any files owned by that user can still remain. There is also no mechanism to ensure file consistency as a snapshot is made, so any inconsistencies that already exist can be transferred back to the snapshot when the undochange command is used. Do not use the Snapper undochange command with the root file system, as doing so is likely to lead to a failure. The following diagram demonstrates how the undochange command works: Figure 14.1. Snapper Status over Time The diagram shows the point in time in which snapshot_1 is created, file_a is created, then file_b deleted. Snapshot_2 is then created, after which file_a is edited and file_c is created. This is now the current state of the system. The current system has an edited version of file_a , no file_b , and a newly created file_c . When the undochange command is called, Snapper generates a list of modified files between the first listed snapshot and the second. In the diagram, if you use the snapper -c SnapperExample undochange 1..2 command, Snapper creates a list of modified files (that is, file_a is created; file_b is deleted) and applies them to the current system. Therefore: the current system will not have file_a , as it has yet to be created when snapshot_1 was created. file_b will exist, copied from snapshot_1 into the current system. file_c will exist, as its creation was outside the specified time. Be aware that if file_b and file_c conflict, the system can become corrupted. You can also use the snapper -c SnapperExample undochange 2..1 command. In this case, the current system replaces the edited version of file_a with one copied from snapshot_1 , which undoes edits of that file made after snapshot_2 was created. Using the mount and unmount Commands to Reverse Changes The undochange command is not always the best way to revert modifications. With the status and diff command, you can make a qualified decision, and use the mount and unmount commands instead of Snapper. The mount and unmount commands are only useful if you want to mount snapshots and browse their content independently of Snapper workflow. If needed, the mount command activates respective LVM Snapper snapshot before mounting. Use the mount and unmount commands if you are, for example, interested in mounting snapshots and extracting older version of several files manually. To revert files manually, copy them from a mounted snapshot to the current file system. The current file system, snapshot 0, is the live file system created in Procedure 14.1, "Creating a Snapper Configuration File" . Copy the files to the subtree of the original /mount-point. Use the mount and unmount commands for explicit client-side requests. The /etc/snapper/configs/ config_name file contains the ALLOW_USERS= and ALLOW_GROUPS= variables where you can add users and groups. Then, snapperd allows you to perform mount operations for the added users and groups.
[ "snapper -c config_name undochange 1 .. 2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/snapper-undochange
Network Observability
Network Observability OpenShift Container Platform 4.14 Configuring and using the Network Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-from-hostnetwork namespace: netobserv spec: podSelector: matchLabels: app: netobserv-operator ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: '' policyTypes: - Ingress", "apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network", "oc adm groups new cluster-admin", "oc adm groups add-users cluster-admin <username>", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admin", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3", "spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>", "oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'", "apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1", "oc apply -f migrate-flowcollector-v1alpha1.yaml", "oc edit crd flowcollectors.flows.netobserv.io", "oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'", "oc get flowcollector/cluster", "NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready", "oc get pods -n netobserv", "NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m", "oc get pods -n netobserv-privileged", "NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h 
lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h", "oc describe flowcollector/cluster", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: \"ipfix-collector.ipfix.svc.cluster.local\" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address", "oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\"", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: [\"openshift-console\", \"openshift-monitoring\"] 2", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 
3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). 
summary: \"High incoming traffic.\" expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace=\"openshift-ingress\"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: \"1000000000\" 3 buckets: [\".001\", \".005\", \".01\", \".02\", \".03\", \".04\", \".05\", \".075\", \".1\", \".25\", \"1\"] 4", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: \"sum(rate(USDMETRIC[2m]))\" legend: \"\" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: \"sum(rate(USDMETRIC{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0\" legend: \"p99\" - dashboardName: Main 3 sectionName: External title: \"Top external ingress sRTT per workload, p50 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\" - dashboardName: Main 4 sectionName: External title: \"Top external ingress sRTT per workload, p99 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"", "promQL: \"(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName))*1000\"", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags]", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: 
[DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags]", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Incoming SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags=\"2\"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Outgoing SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags=\"2\"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. 
summary: \"At any number of requests are responded with the rate limit error code.\" expr: sum(irate(loki_request_duration_seconds_count{status_code=\"429\"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: cacheMaxFlows: 200000 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: advanced: scheduling: tolerations: - key: \"<taint key>\" operator: \"Equal\" value: \"<taint value>\" effect: \"<taint effect>\" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: \"\"\"", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1", "oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.39\" ], \"mac\": \"0a:58:0a:81:02:27\", \"default\": true, \"dns\": {} }, { \"name\": \"my-vms/l2-network\", 1 \"interface\": \"podc0f69e19ba2\", 2 \"ips\": [ 3 \"10.10.10.15\" ], \"mac\": \"02:fb:f8:00:00:12\", 4 \"dns\": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: status:", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4", "curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64", "chmod +x ./oc-netobserv-amd64", "sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv", "oc netobserv version", "Netobserv CLI version <version>", "oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }", "sqlite3 ./output/flow/<capture_date_time>.db", "sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;", "12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 
13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli", "oc netobserv cleanup", "oc netobserv [<command>] [<feature_option>] [<command_options>] 1", "oc netobserv flows [<feature_option>] [<command_options>]", "oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv packets [<option>]", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv metrics [<option>]", "oc netobserv metrics --enable_pkt_drop --protocol=TCP", "oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather", "oc -n netobserv get flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false", "oc edit console.operator.openshift.io cluster", "spec: plugins: - netobserv-plugin", "oc -n netobserv edit flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true", "oc get pods -n openshift-console -l app=console", "oc delete pods -n openshift-console -l app=console", "oc get pods -n netobserv -l app=netobserv-plugin", "NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s", "oc logs -n netobserv -l app=netobserv-plugin", "time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server", "oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer", "oc edit -n netobserv flowcollector.yaml -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1", "oc edit subscription netobserv-operator -n openshift-netobserv-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace=\"my-namespace\"}' | jq", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: 
netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/network_observability/index
Chapter 3. glance
Chapter 3. glance The following chapter contains information about the configuration options in the glance service. 3.1. glance-api.conf This section contains options for the /etc/glance/glance-api.conf file. 3.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-api.conf file. . Configuration option = Default value Type Description allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota Deprecated since: Ussuri Reason: This option is redundant. Control custom image property usage via the image_property_quota configuration option. This option is scheduled to be removed during the Victoria development cycle. allow_anonymous_access = False boolean value Allow limited access to unauthenticated users. Assign a boolean to determine API access for unauthenticated users. When set to False, the API cannot be accessed by unauthenticated users. When set to True, unauthenticated users can access the API with read-only privileges. This however only applies when using ContextMiddleware. Possible values: True False Related options: None api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default backlog = 4096 integer value Set the number of incoming connection requests. Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096. An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic. Possible values: Positive integer Related options: None bind_host = 0.0.0.0 host address value IP address to bind the glance servers to. Provide an IP address to bind the glance server to. The default value is 0.0.0.0 . Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server. Possible values: A valid IPv4 address A valid IPv6 address Related options: None bind_port = None port value Port number on which the server will listen. Provide a valid port number to bind the server's socket to. 
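For illustration, the listener options above map to a short [DEFAULT] stanza in /etc/glance/glance-api.conf. This is only a sketch that restates the documented defaults, not a tuning recommendation:

[DEFAULT]
# listen on all interfaces (documented default)
bind_host = 0.0.0.0
# queue up to 4096 pending TCP connections
backlog = 4096
# default glance-api port
bind_port = 9292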
This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191. Possible values: A valid port number (0 to 65535) Related options: None client_socket_timeout = 900 integer value Timeout for client connections' socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever. Possible values: Zero Positive integer Related options: None conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_publisher_id = image.localhost string value Default publisher_id for outgoing Glance notifications. This is the value that the notification driver will use to identify messages for events originating from the Glance service. Typically, this is the hostname of the instance that generated the message. Possible values: Any reasonable instance identifier, for example: image.host1 Related options: None delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. 
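In the configuration file this is a single line under [DEFAULT]; sha256 shown here is the documented default:

[DEFAULT]
digest_algorithm = sha256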
It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None disabled_notifications = [] list value List of notifications to be disabled. Specify a list of notifications that should not be emitted. A notification can be given either as a notification type to disable a single event notification, or as a notification group prefix to disable all event notifications within a group. Possible values: A comma-separated list of individual notification types or notification groups to be disabled. Currently supported groups: image image.member task metadef_namespace metadef_object metadef_property metadef_resource_type metadef_tag Related options: None enabled_backends = None dict value Key:Value pair of store identifier and store type. In case of multiple backends should be separated using comma. enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None http_keepalive = True boolean value Set keep alive option for HTTP over TCP. Provide a boolean value to determine sending of keep alive packets. If set to False , the server returns the header "Connection: close". If set to True , the server returns a "Connection: Keep-Alive" in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to False if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client. Possible values: True False Related options: None image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. 
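For example, a hedged sketch of pointing the image cache at a dedicated directory; the path is a hypothetical example rather than a default, and any valid path works:

[DEFAULT]
# hypothetical cache location
image_cache_dir = /var/lib/glance/image-cache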
This directory also contains three subdirectories, namely, incomplete , invalid and queue . The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue`subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the `queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the time cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to sqlite file database that will be used for image cache management. 
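A minimal sketch of the cache-management options at their documented defaults (cache.db is resolved relative to image_cache_dir):

[DEFAULT]
image_cache_driver = sqlite
# prune once the cache grows past 10 GiB
image_cache_max_size = 10737418240
image_cache_sqlite_db = cache.db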
This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. 
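To make the quota and paging options concrete, the following sketch simply restates the documented defaults; a negative quota value would mean unlimited:

[DEFAULT]
image_member_quota = 128
image_property_quota = 128
image_tag_quota = 128
# limit_param_default must not exceed api_limit_max
api_limit_max = 1000
limit_param_default = 25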
However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max location_strategy = location_order string value Strategy to determine the preference order of image locations. This configuration option indicates the strategy to determine the order in which an image's locations must be accessed to serve the image's data. Glance then retrieves the image data from the first responsive active location it finds in this list. This option takes one of two possible values location_order and store_type . The default value is location_order , which suggests that image data be served by using locations in the order they are stored in Glance. The store_type value sets the image location preference based on the order in which the storage backends are listed as a comma separated list for the configuration option store_type_preference . Possible values: location_order store_type Related options: store_type_preference log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. 
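As a hedged example of the basic logging options: the directory and file name are placeholders, log_rotation_type is switched from its default of none to size purely for illustration, and the remaining values are the documented defaults; the dashed option names are shown here in their underscored configuration-file form:

[DEFAULT]
log_dir = /var/log/glance
log_file = api.log
log_rotation_type = size
max_logfile_size_mb = 200
max_logfile_count = 30
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s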
Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_header_line = 16384 integer value Maximum line size of message headers. Provide an integer value representing a length to limit the size of message headers. The default value is 16384. Note max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, it is to be kept in mind that larger values for max_header_line would flood the logs. Setting max_header_line to 0 sets no limit for the line size of message headers. Possible values: 0 Positive integer Related options: None max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_request_id_length = 64 integer value Limit the request ID length. Provide an integer value to limit the length of the request ID to the specified length. The default value is 64. Users can change this to any ineteger value between 0 and 16384 however keeping in mind that a larger value may flood the logs. Possible values: Integer value between 0 and 16384 Related options: None metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir pipe-handle = None string value This argument is used internally on Windows. Glance passes a pipe handle to child processes, which is then used for inter-process communication. property_protection_file = None string value The location of the property protection file. Provide a valid path to the property protection file which contains the rules for property protections and the roles/policies associated with them. A property protection file, when set, restricts the Glance image properties to be created, read, updated and/or deleted by a specific set of users that are identified by either roles or policies. If this configuration option is not set, by default, property protections won't be enforced. If a value is specified and the file is not found, the glance-api service will fail to start. 
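A sketch of enabling property protections; the file path is a hypothetical example and must point to an existing file:

[DEFAULT]
property_protection_rule_format = roles
# hypothetical path; glance-api will fail to start if the file is missing
property_protection_file = /etc/glance/property-protections.conf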
More information on property protections can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html Possible values: Empty string Valid path to the property protection configuration file Related options: property_protection_rule_format property_protection_rule_format = roles string value Rule format for property protection. Provide the desired way to set property protection on Glance image properties. The two permissible values are roles and policies . The default value is roles . If the value is roles , the property protection file must contain a comma separated list of user roles indicating permissions for each of the CRUD operations on each property being protected. If set to policies , a policy defined in policy.yaml is used to express property protections for each of the CRUD operations. Examples of how property protections are enforced based on roles or policies can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html#examples Possible values: roles policies Related options: property_protection_file public_endpoint = None string value Public url endpoint to use for Glance versions response. This is the public url endpoint that will appear in the Glance "versions" response. If no value is specified, the endpoint that is displayed in the version's response is that of the host running the API service. Change the endpoint to represent the proxy URL if the API service is running behind a proxy. If the service is running behind a load balancer, add the load balancer's URL for this value. Possible values: None Proxy URL Load balancer URL Related options: None publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. 
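For instance, a deployment that wants parallel scrubbing might use something like the following sketch; 4 is an illustrative value, not a recommendation:

[DEFAULT]
delayed_delete = True
scrub_pool_size = 4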
Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. Possible values: Any non-negative integer Related options: delayed_delete show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton *Reason:*Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. 
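Given the security notes above, a conservative sketch keeps both location options at their default of False; the syslog facility is likewise shown at its documented default, written in its underscored configuration-file form:

[DEFAULT]
show_image_direct_url = False
show_multiple_locations = False
syslog_log_facility = LOG_USER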
This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Set the wait time before a connection recheck. Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds. Setting tcp_keepidle helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment. Possible values: Positive integer value representing time in seconds Related options: None transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_keystone_limits = False boolean value Utilize per-tenant resource limits registered in Keystone. Enabling this feature will cause Glance to retrieve limits set in keystone for resource consumption and enforce them against API users. Before turning this on, the limits need to be registered in Keystone or all quotas will be considered to be zero, and thus reject all new resource requests. These per-tenant resource limits are independent from the static global ones configured in this config file. If this is enabled, the relevant static global limits will be ignored. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. This has no effect if use_keystone_limits is enabled. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: use_keystone_limits watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. 
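An illustrative sketch of the messaging and per-tenant quota options; the broker user, password, and host are placeholders, and 100GB only demonstrates the unit syntax:

[DEFAULT]
transport_url = rabbit://glance:RABBIT_PASS@controller:5672/
user_storage_quota = 100GB
watch_log_file = False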
This option is ignored if log_config_append is set. worker_self_reference_url = None string value The URL to this worker. If this is set, other glance workers will know how to contact this one directly if needed. For image import, a single worker stages the image and other workers need to be able to proxy the import request to the right one. If unset, this will be considered to be public_endpoint , which normally would be set to the same value on all workers, effectively disabling the proxying behavior. Possible values: A URL by which this worker is reachable from other workers Related options: public_endpoint workers = None integer value Number of Glance worker processes to start. Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for workers limited to 8. For example if the processor count is 6, 6 workers will be used, if the processor count is 24 only 8 workers will be used. The limit will only apply to the default value, if 24 workers is configured, 24 is used. Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000. Note Setting the number of workers to zero, triggers the creation of a single API process with a greenthread pool of size 1000. Possible values: 0 Positive integer value (typically equal to the number of CPUs) Related options: None 3.1.2. barbican The following table outlines the options available under the [barbican] group in the /etc/glance/glance-api.conf file. Table 3.1. barbican Configuration option = Default value Type Description auth_endpoint = http://localhost/identity/v3 string value Use this endpoint to connect to Keystone barbican_api_version = None string value Version of the Barbican API, for example: "v1" barbican_endpoint = None string value Use this endpoint to connect to Barbican, for example: "http://localhost:9311/" barbican_endpoint_type = public string value Specifies the type of endpoint. Allowed values are: public, private, and admin barbican_region_name = None string value Specifies the region of the chosen endpoint. number_of_retries = 60 integer value Number of times to retry poll for key creation completion retry_delay = 1 integer value Number of seconds to wait before retrying poll for key creation completion send_service_user_token = False boolean value When True, if sending a user token to a REST API, also send a service token. Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user's behalf, we include a service token along with the user token. Should the user's token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware. verify_ssl = True boolean value Specifies if insecure TLS (https) requests. If False, the server's certificate will not be validated, if True, we can set the verify_ssl_path config meanwhile. verify_ssl_path = None string value A path to a bundle or CA certs to check against, or None for requests to attempt to locate and use certificates which verify_ssh is True. If verify_ssl is False, this is ignored. 3.1.3. 
barbican_service_user The following table outlines the options available under the [barbican_service_user] group in the /etc/glance/glance-api.conf file. Table 3.2. barbican_service_user Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file split-loggers = False boolean value Log requests to multiple loggers. timeout = None integer value Timeout value for http requests 3.1.4. cinder The following table outlines the options available under the [cinder] group in the /etc/glance/glance-api.conf file. Table 3.3. cinder Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_do_extend_attached = False boolean value If this is set to True, glance will perform an extend operation on the attached volume. Only enable this option if the cinder backend driver supports the functionality of extending online (in-use) volumes. Supported from cinder microversion 3.42 and onwards. By default, it is set to False. Possible values: True or False cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. 
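To illustrate how the catalog lookup is configured, a hedged [cinder] sketch; the first value is the documented default, and the commented line shows the URL-template override form described in this section:

[cinder]
cinder_catalog_info = volumev3::publicURL
# optional override of the catalog lookup
# cinder_endpoint_template = http://cinder.openstack.example.org/v2/%%(tenant)s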
This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_password = None string value Password for the user authenticating against cinder. 
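A hedged sketch of dedicating a separate project to image volumes; the auth address reuses the example given above, while the project, user, and password are placeholders:

[cinder]
cinder_store_auth_address = http://openstack.example.org/identity/v2.0
cinder_store_project_name = glance-volumes
cinder_store_user_name = glance
cinder_store_password = GLANCE_PASS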
This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_domain_name = Default string value Domain of the project where the image volume is stored in cinder. Possible values: A valid domain name of the project specified by cinder_store_project_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_user_domain_name = Default string value Domain of the user to authenticate against cinder. Possible values: A valid domain name for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_name cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following non-domain-related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. 
Possible values: Path to the rootwrap config file Related options: None 3.1.5. cors The following table outlines the options available under the [cors] group in the /etc/glance/glance-api.conf file. Table 3.4. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['Content-MD5', 'X-Image-Meta-Checksum', 'X-Storage-Token', 'Accept-Encoding', 'X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Image-Meta-Checksum', 'X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 3.1.6. database The following table outlines the options available under the [database] group in the /etc/glance/glance-api.conf file. Table 3.5. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). Deprecated since: 12.1.0 *Reason:*Support for the MySQL NDB Cluster storage engine has been deprecated and will be removed in a future release. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. 
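As a hedged example of a typical [database] configuration (the SQLAlchemy connection URL, host, and credentials are illustrative placeholders, not defaults):

[database]
# Illustrative MySQL connection; adjust the driver, credentials, and host for your deployment.
connection = mysql+pymysql://glance:<db-password>@controller/glance
max_pool_size = 5
mysql_sql_mode = TRADITIONAL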
This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= mysql_wsrep_sync_wait = None integer value For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don't configure any setting. pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 3.1.7. file The following table outlines the options available under the [file] group in the /etc/glance/glance-api.conf file. Table 3.6. file Configuration option = Default value Type Description filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. 
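A hedged sketch of the multi-directory form (the paths and priorities below are illustrative, not defaults):

[file]
# Two data directories; the one with the higher priority (200) is preferred while it has free space.
filesystem_store_datadirs = /var/lib/glance/images-fast:200
filesystem_store_datadirs = /var/lib/glance/images-slow:100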
Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the filesystem, the holes who can appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None 3.1.8. glance.store.http.store The following table outlines the options available under the [glance.store.http.store] group in the /etc/glance/glance-api.conf file. Table 3.7. glance.store.http.store Configuration option = Default value Type Description http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. 
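A hedged sketch of the certificate-related options in this group (the CA bundle path is an illustrative placeholder); as described here, setting the CA file causes https_insecure to be ignored:

[glance.store.http.store]
# Illustrative CA bundle path; when set, it is used to verify the remote server certificate.
https_ca_certificates_file = /etc/pki/tls/certs/ca-bundle.crt
https_insecure = False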
This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file 3.1.9. glance.store.rbd.store The following table outlines the options available under the [glance.store.rbd.store] group in the /etc/glance/glance-api.conf file. Table 3.8. glance.store.rbd.store Configuration option = Default value Type Description rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None Deprecated since: Zed Reason: This option has not had any effect in years. Users willing to set a timeout for connecting to the Ceph cluster should use client_mount_timeout in Ceph's configuration file. `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . 
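A minimal hedged sketch of an RBD store configuration (the Ceph user shown is a common convention, not a mandated value; the pool and chunk size match the documented defaults):

[glance.store.rbd.store]
# Illustrative Ceph settings; the referenced ceph.conf must point to the keyring for this user.
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8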
More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None 3.1.10. glance.store.s3.store The following table outlines the options available under the [glance.store.s3.store] group in the /etc/glance/glance-api.conf file. Table 3.9. glance.store.s3.store Configuration option = Default value Type Description s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket `s3_store_cacert = ` string value The path to the CA cert bundle to use. The default value (an empty string) forces the use of the default CA cert bundle used by botocore. Possible values: A path to the CA cert bundle to use An empty string to use the default CA cert bundle used by botocore s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. 
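A hedged sketch of an S3 store configuration (the endpoint, bucket name, and credentials are illustrative placeholders):

[glance.store.s3.store]
# Illustrative S3 settings; replace the host and credentials with values for your object store.
s3_store_host = s3.example.com
s3_store_access_key = <access-key>
s3_store_secret_key = <secret-key>
s3_store_bucket = glance
s3_store_create_bucket_on_put = True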
Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. This configuration option sets the host of the S3 or S3 compatible storage server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value The multipart upload part size, in MB, that S3 should use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: An image can be split into at most 10,000 parts. Possible values: Any positive integer value (must be greater than or equal to 5 MB) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value The size, in MB, at which S3 should start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: An image can be split into at most 10,000 parts. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools `s3_store_region_name = ` string value The S3 region name. This parameter will set the region_name used by boto. If this parameter is not set, we will try to compute it from the s3_store_host. Possible values: A valid region name Related Options: s3_store_host s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_access_key option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size 3.1.11. glance.store.swift.store The following table outlines the options available under the [glance.store.swift.store] group in the /etc/glance/glance-api.conf file. Table 3.10. glance.store.swift.store Configuration option = Default value Type Description default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in the Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to Swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node.
Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . 
All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. 
For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. 
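A hedged, single-tenant sketch that includes the region selection discussed here (the auth address, credentials, container, and region name are illustrative placeholders, and the tenant:user form of swift_store_user is an assumption to adjust for your deployment). As noted earlier in this section, keeping these credentials in a file referenced by swift_store_config_file is recommended over placing them directly in glance-api.conf:

[glance.store.swift.store]
# Illustrative single-tenant Swift settings.
swift_store_auth_version = 3
swift_store_auth_address = http://keystone.example.org/identity/v3
swift_store_user = service:glance
swift_store_key = <swift-password>
swift_store_container = glance
swift_store_region = RegionOne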
When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. 
This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size 3.1.12. glance.store.vmware_datastore.store The following table outlines the options available under the [glance.store.vmware_datastore.store] group in the /etc/glance/glance-api.conf file. Table 3.11. glance.store.vmware_datastore.store Configuration option = Default value Type Description vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). 
Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.13. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-api.conf file. Table 3.12. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. 
Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_do_extend_attached = False boolean value If this is set to True, glance will perform an extend operation on the attached volume. Only enable this option if the cinder backend driver supports the functionality of extending online (in-use) volumes. Supported from cinder microversion 3.42 and onwards. By default, it is set to False. Possible values: True or False cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. 
When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_domain_name = Default string value Domain of the project where the image volume is stored in cinder. Possible values: A valid domain name of the project specified by cinder_store_project_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_user_domain_name = Default string value Domain of the user to authenticate against cinder. Possible values: A valid domain name for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_name cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following non-domain-related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. 
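As a hedged illustration (the volume type name is a placeholder and must already exist in your cinder deployment):

[glance_store]
# Illustrative volume type dedicated to image volumes.
cinder_volume_type = glance-optimized
cinder_use_multipath = False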
If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. default_backend = None string value The store identifier for the default backend in which data will be stored. The value must be defined as one of the keys in the dict defined by the enabled_backends configuration option in the DEFAULT configuration group. If a value is not defined for this option: the consuming service may refuse to start store_add calls that do not specify a specific backend will raise a glance_store.exceptions.UnknownScheme exception Related Options: enabled_backends default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd cinder vsphere s3 Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. 
Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the filesystem, the holes who can appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. 
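For example, the proxy mapping might be set as follows (the proxy addresses are taken from the examples in the description that follows and are illustrative):

[glance_store]
# Illustrative proxies for the http and https schemes.
http_proxy_information = http:10.0.0.1:3128, https:10.0.0.1:1080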
This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None Deprecated since: Zed Reason: This option has not had any effect in years. Users willing to set a timeout for connecting to the Ceph cluster should use client_mount_timeout in Ceph's configuration file. `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. 
When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . 
And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket `s3_store_cacert = ` string value The path to the CA cert bundle to use. The default value (an empty string) forces the use of the default CA cert bundle used by botocore. Possible values: A path to the CA cert bundle to use An empty string to use the default CA cert bundle used by botocore s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes a boolean value to indicate whether Glance should create a new bucket in S3 if it does not exist. Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. This configuration option sets the host of the S3 or S3 compatible storage server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: An image can be split into a maximum of 10,000 parts. Possible values: Any positive integer value (must be greater than or equal to 5M) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: An image can be split into a maximum of 10,000 parts. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools `s3_store_region_name = ` string value The S3 region name. This parameter will set the region_name used by boto. If this parameter is not set, it will be computed from the s3_store_host. Possible values: A valid region name Related Options: s3_store_host s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_access_key option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http .
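The S3 options described above can be combined as in the following sketch; the bucket name and placeholder credentials are assumptions for the example.

```
# Illustrative S3 backend sketch; replace the placeholders with real credentials.
[glance_store]
s3_store_host = s3.amazonaws.com
s3_store_access_key = <access-key>
s3_store_secret_key = <secret-key>
s3_store_bucket = glance-images
s3_store_create_bucket_on_put = True
s3_store_bucket_url_format = auto
```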
Possible values: A comma separated list that could include: file http swift rbd cinder vmware s3 Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. 
When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. 
By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. 
More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. 
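As a sketch of the multi-tenant mode described above, the following snippet stores images in per-tenant Swift accounts using trusts; the admin project UUID is a placeholder assumption, and swift_store_config_file is deliberately left unset because it must not be used in this mode.

```
# Illustrative multi-tenant Swift sketch; the project UUID is a placeholder.
[glance_store]
swift_store_multi_tenant = True
swift_store_use_trusts = True
swift_store_admin_tenants = 4a3a59b169cb4a64b3f1fd3ca33a5a42
swift_store_create_container_on_put = True
```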
Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with the highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option.
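The <datacenter_path>:<datastore_name>:<optional_weight> format described above is illustrated in the following sketch; the datacenter path, datastore names, and weights are assumptions.

```
# Illustrative weighted datastores; the highest-weight datastore with enough free space is selected.
[glance_store]
vmware_datastores = dc1:datastore1:100
vmware_datastores = dc1:datastore2:50
vmware_insecure = False
```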
Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an ongoing async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.14. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/glance/glance-api.conf file. Table 3.13. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 3.1.15. image_format The following table outlines the options available under the [image_format] group in the /etc/glance/glance-api.conf file. Table 3.14.
image_format Configuration option = Default value Type Description container_formats = ['ami', 'ari', 'aki', 'bare', 'ovf', 'ova', 'docker', 'compressed'] list value Supported values for the container_format image attribute disk_formats = ['ami', 'ari', 'aki', 'vhd', 'vhdx', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso', 'ploop'] list value Supported values for the disk_format image attribute vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] list value A list of strings describing the VMDK create-type subformats that are allowed. This is recommended to only include single-file-with-sparse-header variants to avoid potential host file exposure due to processing named extents. If this list is empty, then no VMDK image types are allowed. Note that this is currently only checked during image conversion (if enabled), and limits the types of VMDK images we will convert from. 3.1.16. key_manager The following table outlines the options available under the [key_manager] group in the /etc/glance/glance-api.conf file. Table 3.15. key_manager Configuration option = Default value Type Description auth_type = None string value The type of authentication credential to create. Possible values are token , password , keystone_token , and keystone_password . Required if no context is passed to the credential factory. auth_url = None string value Use this endpoint to connect to Keystone. backend = barbican string value Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time. domain_id = None string value Domain ID for domain scoping. Optional for keystone_token and keystone_password auth_type. domain_name = None string value Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type. password = None string value Password for authentication. Required for password and keystone_password auth_type. project_domain_id = None string value Project's domain ID for project. Optional for keystone_token and keystone_password auth_type. project_domain_name = None string value Project's domain name for project. Optional for keystone_token and keystone_password auth_type. project_id = None string value Project ID for project scoping. Optional for keystone_token and keystone_password auth_type. project_name = None string value Project name for project scoping. Optional for keystone_token and keystone_password auth_type. reauthenticate = True boolean value Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type. token = None string value Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory. trust_id = None string value Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type. user_domain_id = None string value User's domain ID for authentication. Optional for keystone_token and keystone_password auth_type. user_domain_name = None string value User's domain name for authentication. Optional for keystone_token and keystone_password auth_type. user_id = None string value User ID for authentication. Optional for keystone_token and keystone_password auth_type. username = None string value Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type. 3.1.17.
keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/glance/glance-api.conf file. Table 3.16. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. 
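For illustration, a minimal [keystone_authtoken] sketch using the options in this group might look as follows; the Identity endpoint and memcached address are assumptions.

```
# Illustrative sketch only; the endpoint and memcached address are assumptions.
[keystone_authtoken]
www_authenticate_uri = http://keystone.example.com:5000
interface = internal
memcached_servers = 192.0.2.10:11211
token_cache_time = 300
service_token_roles_required = True
```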
memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = True boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 3.1.18. os_brick The following table outlines the options available under the [os_brick] group in the /etc/glance/glance-api.conf file. Table 3.17. os_brick Configuration option = Default value Type Description lock_path = None string value Directory to use for os-brick lock files. Defaults to oslo_concurrency.lock_path which is a sensible default for compute nodes, but not for HCI deployments or controllers where Glance uses Cinder as a backend, as locks should use the same directory. wait_mpath_device_attempts = 4 integer value Number of attempts for the multipath device to be ready for I/O after it was created. Readiness is checked with multipath -C . See related wait_mpath_device_interval config option. 
Default value is 4. wait_mpath_device_interval = 1 integer value Interval value to wait for multipath device to be ready for I/O. Max number of attempts is set in wait_mpath_device_attempts . Time in seconds to wait for each retry is base ^ attempt * interval , so for 4 attempts (1 attempt 3 retries) and 1 second interval will yield: 2, 4 and 8 seconds. Note that there is no wait before first attempt. Default value is 1. 3.1.19. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-api.conf file. Table 3.18. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.1.20. oslo_limit The following table outlines the options available under the [oslo_limit] group in the /etc/glance/glance-api.conf file. Table 3.19. oslo_limit Configuration option = Default value Type Description auth-url = None string value Authentication URL cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. endpoint_id = None string value The service's endpoint id which is registered in Keystone. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". 
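As a sketch of the lock path guidance in the [os_brick] and [oslo_concurrency] groups above, deployments where Glance uses Cinder as a backend can point both groups at the same directory; the path is an assumption.

```
# Shared lock directory; the path is an assumption.
[os_brick]
lock_path = /var/lib/glance/locks
wait_mpath_device_attempts = 4
wait_mpath_device_interval = 1

[oslo_concurrency]
lock_path = /var/lib/glance/locks
```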
password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = None string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = None list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 3.1.21. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/glance/glance-api.conf file. Table 3.20. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. 
default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. 
In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 3.1.22. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/glance/glance-api.conf file. Table 3.21. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 3.1.23. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/glance/glance-api.conf file. Table 3.22. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 3.1.24. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/glance/glance-api.conf file. Table 3.23. 
oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. If rabbit_quorum_queue is enabled, queues will be durable and this value will be ignored. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception is used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when a queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait (in seconds) before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_quorum_delivery_limit = 0 integer value Each time a message is redelivered to a consumer, a counter is incremented. Once the redelivery count exceeds the delivery limit the message gets dropped or dead-lettered (if a DLX exchange has been configured). Used only when rabbit_quorum_queue is enabled. The default is 0, which means no limit is set.
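The quorum-related options above, together with rabbit_quorum_queue described below, can be combined as in the following sketch; the values are illustrative, not recommendations.

```
# Illustrative sketch: quorum queues enabled, mirrored HA queues disabled.
[oslo_messaging_rabbit]
rabbit_quorum_queue = True
rabbit_ha_queues = False
rabbit_quorum_delivery_limit = 0
```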
rabbit_quorum_max_memory_bytes = 0 integer value By default, all messages are maintained in memory; if a quorum queue grows in length, it can put memory pressure on a cluster. This option can limit the number of memory bytes used by the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_max_memory_length = 0 integer value By default, all messages are maintained in memory; if a quorum queue grows in length, it can put memory pressure on a cluster. This option can limit the number of messages in the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_queue = False boolean value Use quorum queues in RabbitMQ (x-queue-type: quorum). The quorum queue is a modern queue type for RabbitMQ implementing a durable, replicated FIFO queue based on the Raft consensus algorithm. It is available as of RabbitMQ 3.8.0. If set, this option will conflict with the HA queues ( rabbit_ha_queues ), also known as mirrored queues; in other words, the HA queues should be disabled. Quorum queues are durable by default, so the amqp_durable_queues option is ignored when this option is enabled. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). ssl_enforce_fips_mode = False boolean value Global toggle for enforcing the OpenSSL FIPS mode. This feature requires Python support. This is available in Python 3.9 in all environments and may have been backported to older Python versions on select environments. If the Python executable used does not support OpenSSL FIPS mode, an exception will be raised. `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 3.1.25. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/glance/glance-api.conf file. Table 3.24. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 3.1.26. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-api.conf file. Table 3.25. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = True boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together.
If False , the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior. enforce_scope = True boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 3.1.27. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/glance/glance-api.conf file. Table 3.26. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 3.1.28. paste_deploy The following table outlines the options available under the [paste_deploy] group in the /etc/glance/glance-api.conf file. Table 3.27. paste_deploy Configuration option = Default value Type Description config_file = None string value Name of the paste configuration file. Provide a string value representing the name of the paste configuration file to use for configuring pipelines for server application deployments. NOTES: Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors. If no value is specified for this option, the paste.ini file with the prefix of the corresponding Glance service's configuration file name will be searched for in the known configuration directories. 
(For example, if this option is missing from or has no value set in glance-api.conf , the service will look for a file named glance-api-paste.ini .) If the paste configuration file is not found, the service will not start. Possible values: A string value representing the name of the paste configuration file. Related Options: flavor flavor = None string value Deployment flavor to use in the server application pipeline. Provide a string value representing the appropriate deployment flavor used in the server application pipeline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed. For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set flavor to keystone . Possible values: String value representing a partial pipeline name. Related Options: config_file 3.1.29. profiler The following table outlines the options available under the [profiler] group in the /etc/glance/glance-api.conf file. Table 3.28. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. 
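As a minimal sketch of how the [profiler] options fit together (the hmac_keys value below is a placeholder that must be replaced with your own shared secret, and the Redis endpoint is an assumed example), enabling profiling in glance-api.conf could look like:

[profiler]
enabled = True
hmac_keys = SECRET_KEY
connection_string = redis://127.0.0.1:6379
trace_sqlalchemy = True

The same hmac_keys value must also be configured in any other OpenStack services whose spans should appear in the same trace.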
sentinel_service_name = mymaster string value Redis Sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis Sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 3.1.30. store_type_location_strategy The following table outlines the options available under the [store_type_location_strategy] group in the /etc/glance/glance-api.conf file. Table 3.29. store_type_location_strategy Configuration option = Default value Type Description store_type_preference = [] list value Preference order of storage backends. Provide a comma separated list of store names in the order in which images should be retrieved from storage backends. These store names must be registered with the stores configuration option. Note The store_type_preference configuration option is applied only if store_type is chosen as a value for the location_strategy configuration option. An empty list will not change the location order. Possible values: Empty list Comma separated list of registered store names. Legal values are: file http rbd swift cinder vmware Related options: location_strategy stores 3.1.31. task The following table outlines the options available under the [task] group in the /etc/glance/glance-api.conf file. Table 3.30. task Configuration option = Default value Type Description task_executor = taskflow string value Task executor to be used to run task scripts. Provide a string value representing the executor to use for task executions. By default, the TaskFlow executor is used. TaskFlow helps make task executions easy, consistent, scalable and reliable. It also enables creation of lightweight task objects and/or functions that are combined together into flows in a declarative manner. Possible values: taskflow Related Options: None task_time_to_live = 48 integer value Time in hours for which a task lives after either succeeding or failing. work_dir = None string value Absolute path to the work directory to use for asynchronous task operations. The directory set here will be used to operate over images - normally before they are imported in the destination store. Note When providing a value for work_dir , please make sure that enough space is provided for concurrent tasks to run efficiently without running out of space. A rough estimation can be done by multiplying the number of max_workers with an average image size (e.g. 500 MB). The image size estimation should be done based on the average size in your deployment. Note that depending on the tasks running you may need to multiply this number by some factor depending on what the task does. For example, you may want to double the available size if image conversion is enabled. All this being said, remember these are just estimations and you should do them based on the worst case scenario and be prepared to act in case they were wrong.
Possible values: String value representing the absolute path to the working directory Related Options: None 3.1.32. taskflow_executor The following table outlines the options available under the [taskflow_executor] group in the /etc/glance/glance-api.conf file. Table 3.31. taskflow_executor Configuration option = Default value Type Description conversion_format = None string value Set the desired image conversion format. Provide a valid image format to which you want images to be converted before they are stored for consumption by Glance. Appropriate image format conversions are desirable for specific storage backends in order to facilitate efficient handling of bandwidth and usage of the storage infrastructure. By default, conversion_format is not set and must be set explicitly in the configuration file. The allowed values for this option are raw , qcow2 and vmdk . The raw format is the unstructured disk format and should be chosen when RBD or Ceph storage backends are used for image storage. qcow2 is supported by the QEMU emulator that expands dynamically and supports Copy on Write. The vmdk is another common disk format supported by many common virtual machine monitors like VMWare Workstation. Possible values: qcow2 raw vmdk Related options: disk_formats engine_mode = parallel string value Set the taskflow engine mode. Provide a string type value to set the mode in which the taskflow engine would schedule tasks to the workers on the hosts. Based on this mode, the engine executes tasks either in single or multiple threads. The possible values for this configuration option are: serial and parallel . When set to serial , the engine runs all the tasks in a single thread which results in serial execution of tasks. Setting this to parallel makes the engine run tasks in multiple threads. This results in parallel execution of tasks. Possible values: serial parallel Related options: max_workers max_workers = 10 integer value Set the number of engine executable tasks. Provide an integer value to limit the number of workers that can be instantiated on the hosts. In other words, this number defines the number of parallel tasks that can be executed at the same time by the taskflow engine. This value can be greater than one when the engine mode is set to parallel. Possible values: Integer value greater than or equal to 1 Related options: engine_mode 3.1.33. vault The following table outlines the options available under the [vault] group in the /etc/glance/glance-api.conf file. Table 3.32. vault Configuration option = Default value Type Description approle_role_id = None string value AppRole role_id for authentication with vault approle_secret_id = None string value AppRole secret_id for authentication with vault kv_mountpoint = secret string value Mountpoint of KV store in Vault to use, for example: secret kv_version = 2 integer value Version of KV store in Vault to use, for example: 2 namespace = None string value Vault Namespace to use for all requests to Vault. Vault Namespaces feature is available only in Vault Enterprise root_token_id = None string value root token for vault ssl_ca_crt_file = None string value Absolute path to ca cert file use_ssl = False boolean value SSL Enabled/Disabled vault_url = http://127.0.0.1:8200 string value Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200" 3.1.34. wsgi The following table outlines the options available under the [wsgi] group in the /etc/glance/glance-api.conf file. Table 3.33. 
wsgi Configuration option = Default value Type Description python_interpreter = None string value Path to the python interpreter to use when spawning external processes. If left unspecified, this will be sys.executable, which should be the same interpreter running Glance itself. However, in some situations (for example, uwsgi) sys.executable may not actually point to a python interpreter and an alternative value must be set. task_pool_threads = 16 integer value The number of threads (per worker process) in the pool for processing asynchronous tasks. This controls how many asynchronous tasks (i.e. for image interoperable import) each worker can run at a time. If this is too large, you may have increased memory footprint per worker and/or you may overwhelm other system resources such as disk or outbound network bandwidth. If this is too small, image import requests will have to wait until a thread becomes available to begin processing. 3.2. glance-scrubber.conf This section contains options for the /etc/glance/glance-scrubber.conf file. 3.2.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-scrubber.conf file. . Configuration option = Default value Type Description allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota Deprecated since: Ussuri Reason: This option is redundant. Control custom image property usage via the image_property_quota configuration option. This option is scheduled to be removed during the Victoria development cycle. api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default daemon = False boolean value Run scrubber as a daemon. This boolean configuration option indicates whether scrubber should run as a long-running process that wakes up at regular intervals to scrub images. The wake up interval can be specified using the configuration option wakeup_time . If this configuration option is set to False , which is the default value, scrubber runs once to scrub images and exits. In this case, if the operator wishes to implement continuous scrubbing of images, scrubber needs to be scheduled as a cron job. 
Possible values: True False Related options: wakeup_time debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. 
For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. 
This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. 
Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. restore = None string value Restore the image status from pending_delete to active . This option is used by administrator to reset the image's status from pending_delete to active when the image is deleted by mistake and pending delete feature is enabled in Glance. Please make sure the glance-scrubber daemon is stopped before restoring the image to avoid image data inconsistency. Possible values: image's uuid scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. Possible values: Any non-negative integer Related options: delayed_delete show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. 
The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration option location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton Reason: Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_keystone_limits = False boolean value Utilize per-tenant resource limits registered in Keystone. Enabling this feature will cause Glance to retrieve limits set in keystone for resource consumption and enforce them against API users. Before turning this on, the limits need to be registered in Keystone or all quotas will be considered to be zero, and all new resource requests will thus be rejected.
These per-tenant resource limits are independent from the static global ones configured in this config file. If this is enabled, the relevant static global limits will be ignored. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. This has no effect if use_keystone_limits is enabled. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: use_keystone_limits wakeup_time = 300 integer value Time interval, in seconds, between scrubber runs in daemon mode. Scrubber can be run either as a cron job or daemon. When run as a daemon, this configuration time specifies the time period between two runs. When the scrubber wakes up, it fetches and scrubs all pending_delete images that are available for scrubbing after taking scrub_time into consideration. If the wakeup time is set to a large number, there may be a large number of images to be scrubbed for each run. Also, this impacts how quickly the backend storage is reclaimed. Possible values: Any non-negative integer Related options: daemon delayed_delete watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. worker_self_reference_url = None string value The URL to this worker. If this is set, other glance workers will know how to contact this one directly if needed. For image import, a single worker stages the image and other workers need to be able to proxy the import request to the right one. If unset, this will be considered to be public_endpoint , which normally would be set to the same value on all workers, effectively disabling the proxying behavior. Possible values: A URL by which this worker is reachable from other workers Related options: public_endpoint 3.2.2. database The following table outlines the options available under the [database] group in the /etc/glance/glance-scrubber.conf file. Table 3.34. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... 
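As an illustrative sketch only (the host name, database name, and credentials in the connection URL are placeholders, not values from this guide), a [database] section for the scrubber might look like:

[database]
backend = sqlalchemy
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
max_pool_size = 5
max_retries = 10
connection_recycle_time = 3600

The pool, retry, and recycle values shown are simply the documented defaults; override them only if your database deployment needs different limits.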
connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). Deprecated since: 12.1.0 Reason: Support for the MySQL NDB Cluster storage engine has been deprecated and will be removed in a future release. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= mysql_wsrep_sync_wait = None integer value For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don't configure any setting. pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 3.2.3. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-scrubber.conf file. Table 3.35. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog.
When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to look up the cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_do_extend_attached = False boolean value If this is set to True, glance will perform an extend operation on the attached volume. Only enable this option if the cinder backend driver supports the functionality of extending online (in-use) volumes. Supported from cinder microversion 3.42 and onwards. By default, it is set to False. Possible values: True or False cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate the cinder endpoint, instead of looking it up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fall back to a single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call fails due to any error, cinderclient will retry the call up to the specified number of times after sleeping for a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing the absolute path of the mount point. cinder_os_region_name = None string value Region name to look up the cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for the cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete.
When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_domain_name = Default string value Domain of the project where the image volume is stored in cinder. Possible values: A valid domain name of the project specified by cinder_store_project_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_user_domain_name = Default string value Domain of the user to authenticate against cinder. Possible values: A valid domain name for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_name cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following non-domain-related options. If any of these are not specified (except domain-related options), the user of the current context is used. 
Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd cinder vsphere s3 Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. 
Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access could be made members of the group that owns the created files. Assigning a value less than or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer to the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or disable thin provisioning in this backend. This configuration option enables the feature of not actually writing null byte sequences to the filesystem; the holes that can appear will automatically be interpreted by the filesystem as null bytes and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic, in addition to saving space in the backend, as null byte sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server.
This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None Deprecated since: Zed Reason: This option has not had any effect in years. Users willing to set a timeout for connecting to the Ceph cluster should use client_mount_timeout in Ceph's configuration file. `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. 
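Purely as an illustration of the Ceph-related options, the following sketch combines rbd_store_ceph_conf and rbd_store_chunk_size with the rbd_store_user and rbd_store_pool options documented just below; the user name and configuration file path are placeholders, and the other values are the documented defaults:

[glance_store]
# Leave empty to let librados search the default ceph.conf locations instead
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
# Chunk size in megabytes; a power of two is recommended
rbd_store_chunk_size = 8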
When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . 
And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket `s3_store_cacert = ` string value The path to the CA cert bundle to use. The default value (an empty string) forces the use of the default CA cert bundle used by botocore. Possible values: A path to the CA cert bundle to use An empty string to use the default CA cert bundle used by botocore s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. This configuration option sets the host of the S3 or S3 compatible storage Server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: You can only split up to 10,000 images. Possible values: Any positive integer value (must be greater than or equal to 5M) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: You can only split up to 10,000 images. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools `s3_store_region_name = ` string value The S3 region name. This parameter will set the region_name used by boto. If this parameter is not set, we we will try to compute it from the s3_store_host. Possible values: A valid region name Related Options: s3_store_host s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_host option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . 
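As a hedged sketch of the S3 options described above, where only the option names are taken from this reference and the host, bucket, and credential values are placeholders:

[glance_store]
s3_store_host = s3.amazonaws.com
s3_store_access_key = <access key>
s3_store_secret_key = <secret key>
s3_store_bucket = glance-images
# Create the bucket on first upload if it does not already exist
s3_store_create_bucket_on_put = True
# Path-style addressing, e.g. https://s3.amazonaws.com/glance-images/example.img
s3_store_bucket_url_format = path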
Possible values: A comma separated list that could include: file http swift rbd cinder vmware s3 Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. 
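For illustration, a Swift backend that keeps credentials out of the database by pointing swift_store_config_file at a separate references file might be configured as follows; the file path is an example, and ref1 is the default reference name discussed earlier:

[glance_store]
swift_store_config_file = /etc/glance/glance-swift.conf
default_swift_reference = ref1
# Keystone v3 authentication
swift_store_auth_version = 3
# Single container named glance (the documented default)
swift_store_container = glance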
When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. 
By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. 
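Mirroring the worked examples above, a 5 GB segmentation threshold with 1 GB segments and a containers seed of 2 (that is, 16^2 = 256 containers in single-tenant mode) would look like this purely illustrative fragment:

[glance_store]
# Segment images larger than 5120 MB into chunks of at most 1024 MB
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 1024
# Spread images across 16^2 = 256 containers (single-tenant mode only)
swift_store_multiple_containers_seed = 2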
More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. 
Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . 
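To illustrate the <datacenter_path>:<datastore_name>:<optional_weight> format expected by vmware_datastores, the option can be specified multiple times; the datacenter, datastore names, and weights below are invented, and the CA file path is an example:

[glance_store]
vmware_datastores = dc1:datastore1:100
vmware_datastores = dc1:datastore2:50
# No weight given, so this datastore is considered for selection last
vmware_datastores = dc2:datastore3
vmware_ca_file = /etc/ssl/certs/vcenter-ca.pem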
Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.2.4. os_brick The following table outlines the options available under the [os_brick] group in the /etc/glance/glance-scrubber.conf file. Table 3.36. os_brick Configuration option = Default value Type Description lock_path = None string value Directory to use for os-brick lock files. Defaults to oslo_concurrency.lock_path which is a sensible default for compute nodes, but not for HCI deployments or controllers where Glance uses Cinder as a backend, as locks should use the same directory. wait_mpath_device_attempts = 4 integer value Number of attempts for the multipath device to be ready for I/O after it was created. Readiness is checked with multipath -C . See related wait_mpath_device_interval config option. Default value is 4. wait_mpath_device_interval = 1 integer value Interval value to wait for multipath device to be ready for I/O. Max number of attempts is set in wait_mpath_device_attempts . Time in seconds to wait for each retry is base ^ attempt * interval , so for 4 attempts (1 attempt 3 retries) and 1 second interval will yield: 2, 4 and 8 seconds. Note that there is no wait before first attempt. Default value is 1. 3.2.5. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-scrubber.conf file. Table 3.37. 
oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.2.6. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-scrubber.conf file. Table 3.38. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = True boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False , the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior. enforce_scope = True boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 3.3. glance-cache.conf This section contains options for the /etc/glance/glance-cache.conf file. 3.3.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-cache.conf file. . Configuration option = Default value Type Description allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . 
In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota Deprecated since: Ussuri Reason: This option is redundant. Control custom image property usage via the image_property_quota configuration option. This option is scheduled to be removed during the Victoria development cycle. api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. 
The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete , invalid and queue . The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue`subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the `queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the time cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. 
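As a hedged sketch of the cache options above in the [DEFAULT] section of glance-cache.conf, where the cache directory is an example path and the size is the documented default of 10737418240 bytes (10 GiB):

[DEFAULT]
image_cache_dir = /var/lib/glance/image-cache
image_cache_driver = sqlite
# Soft limit that the cache-pruner acts on during its periodic runs
image_cache_max_size = 10737418240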
Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to sqlite file database that will be used for image cache management. This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). 
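Pulling together the per-image quota options above into one illustrative [DEFAULT] fragment, using the documented default values:

[DEFAULT]
image_location_quota = 10
image_member_quota = 128
image_property_quota = 128
# Remove stalled, incomplete downloads from the cache after one day
image_cache_stall_time = 86400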
This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. 
Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. 
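A brief, illustrative combination of the log rotation options described above, switching from the default of no rotation to daily rotation with the default retention count:

[DEFAULT]
# Rotate by time rather than by size
log_rotation_type = interval
log_rotate_interval = 1
log_rotate_interval_type = days
max_logfile_count = 30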
This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton *Reason:*Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_keystone_limits = False boolean value Utilize per-tenant resource limits registered in Keystone. Enabling this feature will cause Glance to retrieve limits set in keystone for resource consumption and enforce them against API users. Before turning this on, the limits need to be registered in Keystone or all quotas will be considered to be zero, and thus reject all new resource requests. 
These per-tenant resource limits are independent from the static global ones configured in this config file. If this is enabled, the relevant static global limits will be ignored. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. This has no effect if use_keystone_limits is enabled. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: use_keystone_limits watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. worker_self_reference_url = None string value The URL to this worker. If this is set, other glance workers will know how to contact this one directly if needed. For image import, a single worker stages the image and other workers need to be able to proxy the import request to the right one. If unset, this will be considered to be public_endpoint , which normally would be set to the same value on all workers, effectively disabling the proxying behavior. Possible values: A URL by which this worker is reachable from other workers Related options: public_endpoint 3.3.2. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-cache.conf file. Table 3.39. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. 
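To illustrate the unit notation accepted by the user_storage_quota option described above, note that there is no space between the value and the unit; the quota value itself is an arbitrary example:

[DEFAULT]
# 500 gigabytes of cumulative image storage per tenant; 0 would disable the quota
user_storage_quota = 500GB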
The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_do_extend_attached = False boolean value If this is set to True, glance will perform an extend operation on the attached volume. Only enable this option if the cinder backend driver supports the functionality of extending online (in-use) volumes. Supported from cinder microversion 3.42 and onwards. By default, it is set to False. Possible values: True or False cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. 
Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_domain_name = Default string value Domain of the project where the image volume is stored in cinder. Possible values: A valid domain name of the project specified by cinder_store_project_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_project_domain_name cinder_store_user_domain_name cinder_store_user_domain_name = Default string value Domain of the user to authenticate against cinder. Possible values: A valid domain name for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_name cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following non-domain-related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_store_project_domain_name cinder_store_user_domain_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. 
Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd cinder vsphere s3 Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. 
When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the filesystem, the holes who can appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. 
Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None Deprecated since: Zed Reason: This option has not had any effect in years. Users willing to set a timeout for connecting to the Ceph cluster should use client_mount_timeout in Ceph's configuration file. `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. 
When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket `s3_store_cacert = ` string value The path to the CA cert bundle to use. 
The default value (an empty string) forces the use of the default CA cert bundle used by botocore. Possible values: A path to the CA cert bundle to use An empty string to use the default CA cert bundle used by botocore s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. This configuration option sets the host of the S3 or S3 compatible storage Server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: You can only split up to 10,000 images. Possible values: Any positive integer value (must be greater than or equal to 5M) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: You can only split up to 10,000 images. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools `s3_store_region_name = ` string value The S3 region name. This parameter will set the region_name used by boto. If this parameter is not set, we we will try to compute it from the s3_store_host. Possible values: A valid region name Related Options: s3_store_host s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_host option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd cinder vmware s3 Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. 
Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). 
Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. 
When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. 
By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. 
This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. 
This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.3.3. os_brick The following table outlines the options available under the [os_brick] group in the /etc/glance/glance-cache.conf file. Table 3.40. os_brick Configuration option = Default value Type Description lock_path = None string value Directory to use for os-brick lock files. Defaults to oslo_concurrency.lock_path which is a sensible default for compute nodes, but not for HCI deployments or controllers where Glance uses Cinder as a backend, as locks should use the same directory. wait_mpath_device_attempts = 4 integer value Number of attempts for the multipath device to be ready for I/O after it was created. Readiness is checked with multipath -C . See related wait_mpath_device_interval config option. Default value is 4. wait_mpath_device_interval = 1 integer value Interval value to wait for multipath device to be ready for I/O. Max number of attempts is set in wait_mpath_device_attempts . Time in seconds to wait for each retry is base ^ attempt * interval , so for 4 attempts (1 attempt 3 retries) and 1 second interval will yield: 2, 4 and 8 seconds. Note that there is no wait before first attempt. Default value is 1. 3.3.4. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-cache.conf file. Table 3.41. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = True boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False , the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior. 
enforce_scope = True boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to CA cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check
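The following minimal glance-cache.conf sketch pulls together a few of the [DEFAULT] and [glance_store] options described above. All values are illustrative placeholders rather than recommendations, the file paths and the Ceph client user name are assumptions, and the stores and default_store options shown are the deprecated single-store form documented in this table (newer deployments would typically use enabled_backends and default_backend instead):

```ini
[DEFAULT]
# Keep image locations hidden from API users (the secure default described above)
show_image_direct_url = False
show_multiple_locations = False
# Cap cumulative image storage at 100 GB per tenant; 0 disables the quota
user_storage_quota = 100GB
# Alternatively, enforce limits registered in Keystone instead of the static quota
# use_keystone_limits = True

[glance_store]
# Deprecated single-store configuration, as documented in this table
stores = file,http,rbd
default_store = file
# Filesystem store
filesystem_store_datadir = /var/lib/glance/images
filesystem_thin_provisioning = False
# RBD store (assumed Ceph configuration path and client user)
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
```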
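The [os_brick] and [oslo_policy] groups can be sketched in the same way. The lock directory below is an assumed path rather than a default; as noted above, os-brick otherwise falls back to oslo_concurrency.lock_path:

```ini
[os_brick]
# Assumed shared lock directory for deployments where Glance uses the cinder store
lock_path = /var/lib/glance/locks
wait_mpath_device_attempts = 4
wait_mpath_device_interval = 1

[oslo_policy]
# New-style policy defaults with scope enforcement (the documented defaults)
enforce_new_defaults = True
enforce_scope = True
policy_file = policy.yaml
policy_dirs = policy.d
```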
[ "For a complete listing and description of each event refer to: https://docs.openstack.org/glance/latest/admin/notifications.html", "The values must be specified as: <group_name>.<event_name> For example: image.create,task.success,metadef_tag", "'glance-direct', 'copy-image' and 'web-download' are enabled by default. 'glance-download' is available, but requires federated deployments.", "Related options: ** [DEFAULT]/node_staging_uri", "'glance-direct', 'copy-image' and 'web-download' are enabled by default. 'glance-download' is available, but requires federated deployments.", "Related options: ** [DEFAULT]/node_staging_uri", "'glance-direct', 'copy-image' and 'web-download' are enabled by default. 'glance-download' is available, but requires federated deployments.", "Related options: ** [DEFAULT]/node_staging_uri" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuration_reference/glance
Support
Support Red Hat Advanced Cluster Security for Kubernetes 4.5 Getting support for Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/support/index
Chapter 1. Preparing to install on IBM Cloud
Chapter 1. Preparing to install on IBM Cloud The installation workflows documented in this section are for IBM Cloud(R) infrastructure environments. IBM Cloud(R) classic is not supported at this time. For more information about the difference between classic and VPC infrastructures, see the IBM(R) documentation . 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on IBM Cloud Before installing OpenShift Container Platform on IBM Cloud(R), you must create a service account and configure an IBM Cloud(R) account. See Configuring an IBM Cloud(R) account for details about creating an account, enabling API services, configuring DNS, IBM Cloud(R) account limits, and supported IBM Cloud(R) regions. You must manually manage your cloud credentials when installing a cluster to IBM Cloud(R). Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 1.3. Choosing a method to install OpenShift Container Platform on IBM Cloud You can install OpenShift Container Platform on IBM Cloud(R) using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Cloud(R) using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Cloud(R) infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Cloud(R) : You can install a customized cluster on IBM Cloud(R) infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Cloud(R) with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on IBM Cloud(R) into an existing VPC : You can install OpenShift Container Platform on an existing IBM Cloud(R). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing Virtual Private Cloud (VPC). You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. Installing a cluster on IBM Cloud VPC in a restricted network : You can install OpenShift Container Platform on IBM Cloud VPC on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. 1.4. steps Configuring an IBM Cloud(R) account
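As a rough illustration of the manual credentials requirement mentioned above, an install-config.yaml for IBM Cloud(R) typically sets credentialsMode to Manual before the installation program is run. The fragment below is a hedged sketch with placeholder values (the base domain, cluster name, and region are assumptions), not a complete installation configuration; follow the linked installation and IAM configuration guides for the full procedure.

```yaml
# Partial install-config.yaml sketch (placeholder values only)
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
credentialsMode: Manual   # required because IBM Cloud credentials are managed manually
platform:
  ibmcloud:
    region: us-south      # assumed region; use a supported IBM Cloud region
```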
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_cloud/preparing-to-install-on-ibm-cloud
Chapter 4. Red Hat Quay tenancy model
Chapter 4. Red Hat Quay tenancy model Before creating repositories to contain your container images in Quay.io, you should consider how these repositories will be structured. With Quay.io, each repository requires a connection with either an Organization or a User . This affiliation defines ownership and access control for the repositories. 4.1. Tenancy model Organizations provide a way of sharing repositories under a common namespace that does not belong to a single user. Instead, these repositories belong to several users in a shared setting, such as a company. Teams provide a way for an Organization to delegate permissions. Permissions can be set at the global level (for example, across all repositories), or on specific repositories. They can also be set for specific sets, or groups, of users. Users can log in to a registry through the web UI or a by using a client like Podman and using their respective login commands, for example, USD podman login . Each user automatically gets a user namespace, for example, <quay-server.example.com>/<user>/<username> , or quay.io/<username> if you are using Quay.io. Robot accounts provide automated access to repositories for non-human users like pipeline tools. Robot accounts are similar to OpenShift Container Platform Service Accounts . Permissions can be granted to a robot account in a repository by adding that account like you would another user or team. 4.2. Logging into Quay A user account for Quay.io represents an individual with authenticated access to the platform's features and functionalities. Through this account, you gain the capability to create and manage repositories, upload and retrieve container images, and control access permissions for these resources. This account is pivotal for organizing and overseeing your container image management within Quay.io. Note Not all features on Quay.io require that users be logged in. For example, you can anonymously pull an image from Quay.io without being logged in, so long as the image you are pulling comes from a public repository. Users have two options for logging into Quay.io: By logging in through Quay.io. This option provides users with the legacy UI, as well as an option to use the beta UI environment, which adheres to PatternFly UI principles. By logging in through the Red Hat Hybrid Cloud Console . This option uses Red Hat SSO for authentication, and is a public managed service offering by Red Hat. This option always requires users to login. Like other managed services, Quay on the Red Hat Hybrid Cloud Console enhances the user experience by adhering to PatternFly UI principles. Differences between using Quay.io directly and Quay on the Red Hat Hybrid Cloud Console are negligible, including for users on the free tier. Whether you are using Quay.io directly, on the Hybrid Cloud Console, features that require login, such as pushing to a repository, use your Quay.io username specifications. 4.2.1. Logging into Quay.io Use the following procedure to log into Quay.io. Prerequisites You have created a Red Hat account and a Quay.io account. For more information, see "Creating a Quay.io account". Procedure Navigate to Quay.io . In the navigation pane, select Sign In and log in using your Red Hat credentials. If it is your first time logging in, you must confirm the automatically-generated username. Click Confirm Username to log in. You are redirected to the Quay.io repository landing page. 4.2.2. 
Logging into Quay through the Hybrid Cloud Console Prerequisites You have created a Red Hat account and a Quay.io account. For more information, see "Creating a Quay.io account". Procedure Navigate to the Quay on the Red Hat Hybrid Cloud Console and log in using your Red Hat account. You are redirected to the Quay repository landing page:
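For reference, a command-line session with Podman against a personal namespace on Quay.io typically looks like the following sketch. The image and user names are placeholders, and depending on your account settings you may authenticate with a robot account or an application token rather than your password:

```console
$ podman login quay.io
Username: <quay_username>
Password: ********

$ podman tag localhost/myimage:latest quay.io/<quay_username>/myimage:latest
$ podman push quay.io/<quay_username>/myimage:latest
```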
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/user-org-intro_quay-io
Chapter 6. PersistentVolumeClaim [v1]
Chapter 6. PersistentVolumeClaim [v1] Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. 6.1.1. .spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 6.1.2. .spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.3. .spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 6.1.4. .spec.resources Description ResourceRequirements describes the compute resource requirements. 
Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.5. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 6.1.6. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 6.1.7. .status Description PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources object (Quantity) allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound resizeStatus string resizeStatus stores status of resize operation. 
ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. Possible enum values: - "" When expansion is complete, the empty string is set by resize controller or kubelet. - "ControllerExpansionFailed" State set when expansion has failed in resize controller with a terminal error. Transient errors such as timeout should not set this status and should leave ResizeStatus unmodified, so as resize controller can resume the volume expansion. - "ControllerExpansionInProgress" State set when resize controller starts expanding the volume in control-plane - "NodeExpansionFailed" State set when expansion has failed in kubelet with a terminal error. Transient errors don't set NodeExpansionFailed. - "NodeExpansionInProgress" State set when kubelet starts expanding the volume. - "NodeExpansionPending" State set when resize controller has finished expanding the volume but further expansion is needed on the node. 6.1.8. .status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 6.1.9. .status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string 6.2. API endpoints The following API endpoints are available: /api/v1/persistentvolumeclaims GET : list or watch objects of kind PersistentVolumeClaim /api/v1/watch/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims DELETE : delete collection of PersistentVolumeClaim GET : list or watch objects of kind PersistentVolumeClaim POST : create a PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} DELETE : delete a PersistentVolumeClaim GET : read the specified PersistentVolumeClaim PATCH : partially update the specified PersistentVolumeClaim PUT : replace the specified PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} GET : watch changes to an object of kind PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status GET : read status of the specified PersistentVolumeClaim PATCH : partially update status of the specified PersistentVolumeClaim PUT : replace status of the specified PersistentVolumeClaim 6.2.1. /api/v1/persistentvolumeclaims Table 6.1. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.2. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty 6.2.2. /api/v1/watch/persistentvolumeclaims Table 6.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /api/v1/namespaces/{namespace}/persistentvolumeclaims Table 6.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PersistentVolumeClaim Table 6.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. 
Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. 
When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.8. Body parameters Parameter Type Description body DeleteOptions schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolumeClaim Table 6.12. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.4. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims Table 6.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". 
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.18. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim namespace string object name and auth scope, such as for teams and projects Table 6.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PersistentVolumeClaim Table 6.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.21. Body parameters Parameter Type Description body DeleteOptions schema Table 6.22. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolumeClaim Table 6.23. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolumeClaim Table 6.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.25. Body parameters Parameter Type Description body Patch schema Table 6.26. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolumeClaim Table 6.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.28. 
Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.29. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.6. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.30. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim namespace string object name and auth scope, such as for teams and projects Table 6.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.7. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status Table 6.33. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim namespace string object name and auth scope, such as for teams and projects Table 6.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PersistentVolumeClaim Table 6.35. 
HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolumeClaim Table 6.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.37. Body parameters Parameter Type Description body Patch schema Table 6.38. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolumeClaim Table 6.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.40. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.41. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty
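To make the read and watch operations described above more concrete, the following is a minimal sketch that exercises the status subresource and the recommended list-based watch with curl. The namespace demo, the claim name my-claim, and the token path are illustrative assumptions only and are not part of the API reference.

# Read the status subresource of a PersistentVolumeClaim (GET, pretty printed)
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://kubernetes.default.svc

curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/demo/persistentvolumeclaims/my-claim/status?pretty=true"

# Watch a single claim through the list endpoint, as the deprecation note recommends,
# filtered to one item with fieldSelector and starting from resourceVersion=0
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/demo/persistentvolumeclaims?watch=true&resourceVersion=0&fieldSelector=metadata.name%3Dmy-claim"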
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/persistentvolumeclaim-v1
Chapter 32. Networking
Chapter 32. Networking Bad offload warnings are no longer displayed using virtio_net Previously, when using the virtio_net network adapter in bridge connections, user space programs sometimes generated Generic Segmentation Offload (GSO) packets with no checksum offload and passed them to the kernel. As a consequence, the kernel checksum offloading code displayed bad offload warnings unnecessarily. With this update, a patch has been applied, and the kernel no longer displays bad checksum offload warnings for such packets. (BZ#1544920) The L2TP sequence number handling now works correctly Previously, the kernel did not handle Layer 2 Tunneling Protocol (L2TP) sequence numbers properly, and it was not compliant with RFC 3931. As a consequence, L2TP sessions stopped working unexpectedly. With this update, a patch has been applied to correctly handle sequence numbers in case of packet loss. As a result, when users enable sequence numbers, L2TP sessions work as expected in the described scenario. (BZ#1527799) The kernel no longer crashes when a tunnel_key mode is not specified Previously, parsing configuration data in the tunnel_key action rules was incorrect if neither set nor unset mode was specified in the configuration. As a consequence, the kernel dereferenced an incorrect pointer and terminated unexpectedly. With this update, the kernel does not install tunnel_key if neither set nor unset is specified. As a result, the kernel no longer crashes in the described scenario. (BZ#1554907) The sysctl net.ipv4.route.min_pmtu setting no longer accepts invalid values Previously, the value provided by administrators for the sysctl net.ipv4.route.min_pmtu setting was not restricted. As a consequence, administrators were able to set a negative value for net.ipv4.route.min_pmtu . This sometimes resulted in setting the path Maximum Transmission Unit (MTU) of some routes to very large values because of an integer overflow. This update restricts values for net.ipv4.route.min_pmtu to >= 68 , the minimum valid MTU for IPv4. As a result, net.ipv4.route.min_pmtu can no longer be set to invalid values (a negative value or a value < 68 ). (BZ#1541250) wpa_supplicant no longer responds to packets whose destination address does not match the interface address Previously, when wpa_supplicant was running on a Linux interface that was configured in promiscuous mode, incoming Extensible Authentication Protocol over LAN (EAPOL) packets were processed regardless of the destination address in the frame. However, wpa_supplicant checked the destination address only if the interface was enslaved to a bridge. As a consequence, in certain cases, wpa_supplicant was responding to EAPOL packets when the destination address was not the interface address. With this update, a socket filter has been added that allows the kernel to discard unicast EAPOL packets whose destination address does not match the interface address, and the described problem no longer occurs. (BZ# 1434434 ) NetworkManager no longer fails to detect duplicate IPv4 addresses Previously, NetworkManager used to spawn an instance of the arping process to detect duplicate IPv4 addresses on the network. As a consequence, if the timeout configured for IPv4 Duplicate Address Detection (DAD) was short and the system was overloaded, NetworkManager sometimes failed to detect a duplicate address in time.
With this update, the detection of duplicate IPv4 addresses is now performed internally by NetworkManager without spawning external binaries, and the described problem no longer occurs. (BZ# 1507864 ) firewalld now prevents partially applied rules Previously, if a direct rule failed to be inserted for any reason, then all following direct rules with a higher priority also failed to insert. As a consequence, direct rules were not applied completely. The processing has been changed to either apply all direct rules successfully or revert them all. As a result, if a rule failure occurs at startup, firewalld enters the failed status and allows the user to remedy the situation. This prevents the unexpected results of partially applied rules. (BZ# 1498923 ) The wpa_supplicant upgrade no longer causes disconnections Previously, the upgrade of the wpa_supplicant package caused a restart of the wpa_supplicant service. As a consequence, the network disconnected temporarily. With this update, the systemd unit is not restarted during the upgrade. As a result, network connectivity no longer fails during the wpa_supplicant upgrade. (BZ#1505404)
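As a brief illustration of the min_pmtu restriction described above, the following sketch shows how an administrator might inspect and adjust the value with sysctl. The value 1000 is chosen only for the example; with the fix applied, values below 68 are not accepted.

# Display the current minimum path MTU used for IPv4 routes
sysctl net.ipv4.route.min_pmtu

# Set a custom minimum; attempts to write a negative value or a value
# below 68 (the minimum valid IPv4 MTU) are rejected after this update
sysctl -w net.ipv4.route.min_pmtu=1000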
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/bug_fixes_networking
F.7. Project Menu
F.7. Project Menu Figure F.9. Project Menu The individual actions in the Project menu are described below: Open Project - Launches the Open Project dialog. Close Project - Closes the currently selected project(s). Build All - Validates the contents of the entire workspace. Any errors or warnings will appear in the Problems View. Build Project - Validates the contents of the selected project(s). Any errors or warnings will appear in the Problems View. Build Working Set - Validates the contents of the selected working set. Any errors or warnings will appear in the Problems View. Clean... - Launches the Clean dialog. Build Automatically - Sets the Build Automatically flag on or off. When on, a checkmark appears to the left of this menu item. When this is turned on, changes are validated automatically each time a Save is performed. Clone Project - Launches the Clone Project dialog. Build Project Imports - Reconciles all model import dependencies for models contained within the selected project. Build All Imports - Reconciles all model import dependencies for models contained within the workspace. Build Packages - TBD Validate Model Transformations - Revalidates all transformations for the selected view model. Properties - Displays the operating system's file properties dialog for the selected file.
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/project_menu
13.2. Using Storage Pools
13.2. Using Storage Pools This section provides information about using storage pools with virtual machines. It provides conceptual information , as well as detailed instructions on creating , configuring , and deleting storage pools using virsh commands and the Virtual Machine Manager . 13.2.1. Storage Pool Concepts A storage pool is a file, directory, or storage device, managed by libvirt to provide storage to virtual machines. Storage pools are divided into storage volumes that store virtual machine images or are attached to virtual machines as additional storage. Multiple guests can share the same storage pool, allowing for better allocation of storage resources. Storage pools can be either local or network-based (shared): Local storage pools Local storage pools are attached directly to the host server. They include local directories, directly attached disks, physical partitions, and Logical Volume Management (LVM) volume groups on local devices. Local storage pools are useful for development, testing, and small deployments that do not require migration or large numbers of virtual machines. Local storage pools may not be suitable for many production environments, because they cannot be used for live migration. Networked (shared) storage pools Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between hosts with virt-manager , but is optional when migrating with virsh . For more information on migrating virtual machines, see Chapter 15, KVM Migration . The following is a list of storage pool types supported by Red Hat Enterprise Linux: Directory-based storage pools Disk-based storage pools Partition-based storage pools GlusterFS storage pools iSCSI-based storage pools LVM-based storage pools NFS-based storage pools vHBA-based storage pools with SCSI devices The following is a list of libvirt storage pool types that are not supported by Red Hat Enterprise Linux: Multipath-based storage pool RBD-based storage pool Sheepdog-based storage pool Vstorage-based storage pool ZFS-based storage pool Note Some of the unsupported storage pool types appear in the Virtual Machine Manager interface. However, they should not be used. 13.2.2. Creating Storage Pools This section provides general instructions for creating storage pools using virsh and the Virtual Machine Manager . Using virsh enables you to specify all parameters, whereas using Virtual Machine Manager provides a graphic method for creating simpler storage pools. 13.2.2.1. Creating Storage Pools with virsh Note This section shows the creation of a partition-based storage pool as an example. Procedure 13.2. Creating Storage Pools with virsh Read recommendations and ensure that all prerequisites are met For some storage pools, this guide recommends that you follow certain practices. In addition, there are prerequisites for some types of storage pools. To see the recommendations and prerequisites, if any, see Section 13.2.3, "Storage Pool Specifics" . Define the storage pool Storage pools can be persistent or transient. A persistent storage pool survives a system restart of the host machine. A transient storage pool only exists until the host reboots. Do one of the following: Define the storage pool using an XML file. a. Create a temporary XML file containing the storage pool information required for the new device. The XML file must contain specific fields, based on the storage pool type. 
For more information, see Section 13.2.3, "Storage Pool Specifics" . The following shows an example of a storage pool definition XML file. In this example, the file is saved to ~/guest_images.xml : <pool type='fs'> <name>guest_images_fs</name> <source> <device path='/dev/sdc1'/> </source> <target> <path>/guest_images</path> </target> </pool> b. Use the virsh pool-define command to create a persistent storage pool or the virsh pool-create command to create and start a transient storage pool. or c. Delete the XML file created in step a. Use the virsh pool-define-as command to create a persistent storage pool or the virsh pool-create-as command to create a transient storage pool. The following examples create a persistent and then a transient filesystem-based storage pool mapped to /dev/sdc1 from the /guest_images directory. or Note When using the virsh interface, option names in the commands are optional. If option names are not used, use dashes for fields that do not need to be specified. Verify that the pool was created List all existing storage pools using the virsh pool-list --all command. Define the storage pool target path Use the virsh pool-build command to create a storage pool target path for a pre-formatted file system storage pool, initialize the storage source device, and define the format of the data. Then use the virsh pool-list command to ensure that the storage pool is listed. Note Building the target path is only necessary for disk-based, file system-based, and logical storage pools. If libvirt detects that the source storage device's data format differs from the selected storage pool type, the build fails, unless the overwrite option is specified. Start the storage pool Use the virsh pool-start command to prepare the source device for usage. The action taken depends on the storage pool type. For example, for a file system-based storage pool, the virsh pool-start command mounts the file system. For an LVM-based storage pool, the virsh pool-start command activates the volume group using the vgchange command. Then use the virsh pool-list command to ensure that the storage pool is active. Note The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created. Turn on autostart (optional) By default, a storage pool defined with virsh is not set to automatically start each time libvirtd starts. You can configure the storage pool to start automatically using the virsh pool-autostart command. The storage pool is now automatically started each time libvirtd starts. Verify the storage pool Verify that the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running . Verify there is a "lost+found" directory in the target path on the file system, indicating that the device is mounted. 13.2.2.2. Creating storage pools with Virtual Machine Manager Note This section shows the creation of a disk-based storage pool as an example. Procedure 13.3. Creating Storage Pools with Virtual Machine Manager Prepare the medium on which the storage pool will be created This will differ for different types of storage pools. For details, see Section 13.2.3, "Storage Pool Specifics" . In this example, you may need to relabel the disk with a GUID Partition Table . Open the storage settings In Virtual Machine Manager , select the host connection you want to configure. Open the Edit menu and select Connection Details . Click the Storage tab in the Connection Details window. Figure 13.1.
Storage tab Create a new storage pool Note Using Virtual Machine Manager , you can only create persistent storage pools. Transient storage pools can only be created using virsh . Add a new storage pool (part 1) Click the button at the bottom of the window. The Add a New Storage Pool wizard appears. Enter a Name for the storage pool. This example uses the name guest_images_fs . Select a storage pool type to create from the Type drop-down list. This example uses fs: Pre-Formatted Block Device . Figure 13.2. Storage pool name and type Click the Forward button to continue. Add a new pool (part 2) Figure 13.3. Storage pool path Configure the storage pool with the relevant parameters. For information on the parameters for each type of storage pool, see Section 13.2.3, "Storage Pool Specifics" . For some types of storage pools, a Build Pool check box appears in the dialog. If you want to build the storage pool from the storage, check the Build Pool check box. Verify the details and click the Finish button to create the storage pool. 13.2.3. Storage Pool Specifics This section provides information specific to each type of storage pool, including prerequisites, parameters, and additional information. It includes the following topics: Section 13.2.3.1, "Directory-based storage pools" Section 13.2.3.2, "Disk-based storage pools" Section 13.2.3.3, "Filesystem-based storage pools" Section 13.2.3.4, "GlusterFS-based storage pools" Section 13.2.3.5, "iSCSI-based storage pools" Section 13.2.3.6, "LVM-based storage pools" Section 13.2.3.7, "NFS-based storage pools" Section 13.2.3.8, "vHBA-based storage pools using SCSI devices" 13.2.3.1. Directory-based storage pools Parameters The following table provides a list of required parameters for the XML file, the virsh pool-define-as command, and the Virtual Machine Manager application, for creating a directory-based storage pool. Table 13.1. Directory-based storage pool parameters Description XML pool-define-as Virtual Machine Manager The type of storage pool <pool type='dir'> [type] directory dir: Filesystem Directory The name of the storage pool <name> name </name> [name] name Name The path specifying the target. This will be the path used for the storage pool. <target> <path> target_path </path> </target> target path_to_pool Target Path If you are using virsh to create the storage pool, continue by verifying that the pool was created . Examples The following is an example of an XML file for a storage pool based on the /guest_images directory: <pool type='dir'> <name>dirpool</name> <target> <path>/guest_images</path> </target> </pool> The following is an example of a command for creating a storage pool based on the /guest_images directory: The following images show an example of the Virtual Machine Manager Add a New Storage Pool dialog boxes for creating a storage pool based on the /guest_images directory: Figure 13.4. Add a new directory-based storage pool example 13.2.3.2. Disk-based storage pools Recommendations Be aware of the following before creating a disk-based storage pool: Depending on the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all data currently stored on the disk device. It is strongly recommended that you back up the data on the storage device before creating a storage pool. Guests should not be given write access to whole disks or block devices (for example, /dev/sdb ). Use partitions (for example, /dev/sdb1 ) or LVM volumes. 
If you pass an entire block device to the guest, the guest will likely partition it or create its own LVM groups on it. This can cause the host physical machine to detect these partitions or LVM groups and cause errors. Prerequisites Note The steps in this section are only required if you do not run the virsh pool-build command. Before a disk-based storage pool can be created on a host disk, the disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating up to 128 partitions on each device. After relabeling the disk, continue creating the storage pool with defining the storage pool . Parameters The following table provides a list of required parameters for the XML file, the virsh pool-define-as command, and the Virtual Machine Manager application, for creating a disk-based storage pool. Table 13.2. Disk-based storage pool parameters Description XML pool-define-as Virtual Machine Manager The type of storage pool <pool type='disk'> [type] disk disk: Physical Disk Device The name of the storage pool <name> name </name> [name] name Name The path specifying the storage device. For example, /dev/sdb <source> <device path='/dev/ sdb '/> </source> source-dev path_to_disk Source Path The path specifying the target. This will be the path used for the storage pool. <target> <path>/ path_to_pool </path> </target> target path_to_pool Target Path If you are using virsh to create the storage pool, continue with defining the storage pool . Examples The following is an example of an XML file for a disk-based storage pool: <pool type='disk'> <name>phy_disk</name> <source> <device path='/dev/sdb'/> <format type='gpt'/> </source> <target> <path>/dev</path> </target> </pool> The following is an example of a command for creating a disk-based storage pool: The following images show an example of the Virtual Machine Manager Add a New Storage Pool dialog boxes for creating a disk-based storage pool: Figure 13.5. Add a new disk-based storage pool example 13.2.3.3. Filesystem-based storage pools Recommendations Do not use the procedures in this section to assign an entire disk as a storage pool (for example, /dev/sdb ). Guests should not be given write access to whole disks or block devices. This method should only be used to assign partitions (for example, /dev/sdb1 ) to storage pools. Prerequisites Note This is only required if you do not run the virsh pool-build command. To create a storage pool from a partition, format the file system to ext4. After formatting the file system, continue creating the storage pool with defining the storage pool . Parameters The following table provides a list of required parameters for the XML file, the virsh pool-define-as command, and the Virtual Machine Manager application, for creating a filesystem-based storage pool from a partition. Table 13.3. Filesystem-based storage pool parameters Description XML pool-define-as Virtual Machine Manager The type of storage pool <pool type='fs'> [type] fs fs: Pre-Formatted Block Device The name of the storage pool <name> name </name> [name] name Name The path specifying the partition. For example, /dev/sdc1 <source> <device path=' source_path ' /> [source] path_to_partition Source Path The filesystem type, for example ext4 <format type=' fs_type ' /> </source> [source format] FS-format N/A The path specifying the target. This will be the path used for the storage pool.
<target> <path>/ path_to_pool </path> </target> [target] path_to_pool Target Path If you are using virsh to create the storage pool, continue with verifying that the storage pool was created . Examples The following is an example of an XML file for a filesystem-based storage pool: <pool type='fs'> <name>guest_images_fs</name> <source> <device path='/dev/sdc1'/> <format type='auto'/> </source> <target> <path>/guest_images</path> </target> </pool> The following is an example of a command for creating a partition-based storage pool: The following images show an example of the virtual machine XML configuration Virtual Machine Manager Add a New Storage Pool dialog boxes for creating a filesystem-based storage pool: Figure 13.6. Add a new filesystem-based storage pool example 13.2.3.4. GlusterFS-based storage pools Recommendations GlusterFS is a user space file system that uses File System in User Space (FUSE). Prerequisites Before a GlusterFS-based storage pool can be created on a host, a Gluster server must be prepared. Procedure 13.4. Preparing a Gluster server Obtain the IP address of the Gluster server by listing its status with the following command: If not installed, install the glusterfs-fuse package. If not enabled, enable the virt_use_fusefs boolean. Check that it is enabled. After ensuring that the required packages are installed and enabled, continue creating the storage pool continue creating the storage pool with defining the storage pool . Parameters The following table provides a list of required parameters for the XML file, the virsh pool-define-as command, and the Virtual Machine Manager application, for creating a GlusterFS-based storage pool. Table 13.4. GlusterFS-based storage pool parameters Description XML pool-define-as Virtual Machine Manager The type of storage pool <pool type='gluster'> [type] gluster Gluster: Gluster Filesystem The name of the storage pool <name> name </name> [name] name Name The hostname or IP address of the Gluster server <source> <hostname=' hostname ' /> source-host hostname Host Name The name of the Gluster server <name=' Gluster-name ' /> source-name Gluster-name Source Name The path on the Gluster server used for the storage pool <dir path=' Gluster-path ' /> </source> source-path Gluster-path Source Path If you are using virsh to create the storage pool, continue with verifying that the storage pool was created . Examples The following is an example of an XML file for a GlusterFS-based storage pool: <pool type='gluster'> <name>Gluster_pool</name> <source> <host name='111.222.111.222'/> <dir path='/'/> <name>gluster-vol1</name> </source> </pool> The following is an example of a command for creating a GlusterFS-based storage pool: The following images show an example of the virtual machine XML configuration Virtual Machine Manager Add a New Storage Pool dialog boxes for creating a GlusterFS-based storage pool: Figure 13.7. Add a new GlusterFS-based storage pool example 13.2.3.5. iSCSI-based storage pools Recommendations Internet Small Computer System Interface (iSCSI) is a network protocol for sharing storage devices. iSCSI connects initiators (storage clients) to targets (storage servers) using SCSI instructions over the IP layer. Using iSCSI-based devices to store guest virtual machines allows for more flexible storage options, such as using iSCSI as a block storage device. The iSCSI devices use a Linux-IO (LIO) target. This is a multi-protocol SCSI target for Linux. 
In addition to iSCSI, LIO also supports Fibre Channel and Fibre Channel over Ethernet (FCoE). Prerequisites Before an iSCSI-based storage pool can be created, iSCSI targets must be created. iSCSI targets are created with the targetcli package, which provides a command set for creating software-backed iSCSI targets. Procedure 13.5. Creating an iSCSI target Install the targetcli package Launch the targetcli command set Create storage objects Create three storage objects, using a storage pool. Create a block storage object Navigate to the /backstores/block directory. Run the create command. For example: Create a fileio object Navigate to the /fileio directory. Run the create command. For example: Create a ramdisk object Navigate to the /ramdisk directory. Run the create command. For example: Make note of the names of the disks created in this step. They will be used later. Create an iSCSI target Navigate to the /iscsi directory. Create the target in one of two ways: Run the create command with no parameters. The iSCSI qualified name (IQN) is generated automatically. Run the create command specifying the IQN and the server. For example: Define the portal IP address To export the block storage over iSCSI, the portal, LUNs, and access control lists (ACLs) must first be configured. The portal includes the IP address and TCP port that the target monitors, and the initiators to which it connects. iSCSI uses port 3260. This port is configured by default. To connect to port 3260: Navigate to the /tpg directory. Run the following: This command makes all available IP addresses listen on port 3260. If you want only a single IP address to listen on port 3260, add the IP address to the end of the command. For example: Configure the LUNs and assign storage objects to the fabric This step uses the storage objects created in creating storage objects . Navigate to the luns directory for the TPG created in defining the portal IP address . For example: Assign the first LUN to the ramdisk. For example: Assign the second LUN to the block disk. For example: Assign the third LUN to the fileio disk. For example: List the resulting LUNs. Create ACLs for each initiator Enable authentication when the initiator connects. You can also restrict specified LUNs to specified initiators. Targets and initiators have unique names. iSCSI initiators use IQNs. Find the IQN of the iSCSI initiator, using the initiator name. For example: This IQN is used to create the ACLs. Navigate to the acls directory. Create ACLs by doing one of the following: Create ACLs for all LUNs and initiators by running the create command with no parameters. To create an ACL for a specific LUN and initiator, run the create command, specifying the IQN of the iSCSI initiator. For example: Configure the kernel target to use a single user ID and password for all initiators. After completing this procedure, continue by securing the storage pool . Save the configuration Make the configuration persistent by overwriting the boot settings. Enable the service To apply the saved settings at boot, enable the service. Optional procedures There are a number of optional procedures that you can perform with the iSCSI targets before creating the iSCSI-based storage pool. Procedure 13.6. Configuring a logical volume on a RAID array Create a RAID5 array For information on creating a RAID5 array, see the Red Hat Enterprise Linux 7 Storage Administration Guide .
Create an LVM logical volume on the RAID5 array For information on creating an LVM logical volume on a RAID5 array, see the Red Hat Enterprise Linux 7 Logical Volume Manager Administration Guide . Procedure 13.7. Testing discoverability Ensure that the new iSCSI device is discoverable. Procedure 13.8. Testing device attachment Attach the new iSCSI device Attach the new device ( iqn.2010-05.com.example.server1:iscsirhel7guest ) to determine whether the device can be attached. Detach the device Procedure 13.9. Using libvirt secrets for an iSCSI storage pool Note This procedure is required if a user_ID and password were defined when creating an iSCSI target . User name and password parameters can be configured with virsh to secure an iSCSI storage pool. This can be configured before or after the pool is defined, but the pool must be started for the authentication settings to take effect. Create a libvirt secret file Create a libvirt secret file with a challenge-handshake authentication protocol (CHAP) user name. For example: <secret ephemeral='no' private='yes'> <description>Passphrase for the iSCSI example.com server</description> <usage type='iscsi'> <target>iscsirhel7secret</target> </usage> </secret> Define the secret Verify the UUID Assign a secret to the UUID Use the following commands to assign a secret to the UUID in the output of the previous step. This ensures that the CHAP username and password are in a libvirt-controlled secret list. Add an authentication entry to the storage pool Modify the <source> entry in the storage pool's XML file using virsh edit , and add an <auth> element, specifying authentication type , username , and secret usage . For example: <pool type='iscsi'> <name>iscsirhel7pool</name> <source> <host name='192.168.122.1'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> <auth type='chap' username='redhat'> <secret usage='iscsirhel7secret'/> </auth> </source> <target> <path>/dev/disk/by-path</path> </target> </pool> Note The <auth> sub-element exists in different locations within the guest XML's <pool> and <disk> elements. For a <pool> , <auth> is specified within the <source> element, as this describes where to find the pool sources, since authentication is a property of some pool sources (iSCSI and RBD). For a <disk> , which is a sub-element of a domain, the authentication to the iSCSI or RBD disk is a property of the disk. In addition, the <auth> sub-element for a disk differs from that of a storage pool. <auth username='redhat'> <secret type='iscsi' usage='iscsirhel7secret'/> </auth> Activate the changes The storage pool must be started to activate these changes. If the storage pool has not yet been started, follow the steps in Creating Storage Pools with virsh to define and start the storage pool. If the pool has already been started, enter the following commands to stop and restart the storage pool: Parameters The following table provides a list of required parameters for the XML file, the virsh pool-define-as command, and the Virtual Machine Manager application, for creating an iSCSI-based storage pool. Table 13.5. iSCSI-based storage pool parameters Description XML pool-define-as Virtual Machine Manager The type of storage pool <pool type='iscsi'> [type] iscsi iscsi: iSCSI Target The name of the storage pool <name> name </name> [name] name Name The name of the host. <source> <host name=' hostname ' /> source-host hostname Host Name The iSCSI IQN. <device path=' iSCSI_IQN ' /> </source> source-dev iSCSI_IQN Source IQN The path specifying the target.
This will be the path used for the storage pool. <target> <path>/dev/ disk/by-path </path> </target> target path_to_pool Target Path (Optional) The IQN of the iSCSI initiator. This is only needed when the ACL restricts the LUN to a particular initiator. <initiator> <iqn name=' initiator0 ' /> </initiator> See the note below. Initiator IQN Note The IQN of the iSCSI initiator can be determined using the virsh find-storage-pool-sources-as iscsi command. If you are using virsh to create the storage pool, continue with verifying that the storage pool was created . Examples The following is an example of an XML file for an iSCSI-based storage pool: <pool type='iscsi'> <name>iSCSI_pool</name> <source> <host name='server1.example.com'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool> The following is an example of a command for creating an iSCSI-based storage pool: The following images show an example of the virtual machine XML configuration Virtual Machine Manager Add a New Storage Pool dialog boxes for creating an iSCSI-based storage pool: Figure 13.8. Add a new iSCSI-based storage pool example 13.2.3.6. LVM-based storage pools Recommendations Be aware of the following before creating an LVM-based storage pool: LVM-based storage pools do not provide the full flexibility of LVM. libvirt supports thin logical volumes, but does not provide the features of thin storage pools. LVM-based storage pools are volume groups. You can create volume groups using Logical Volume Manager commands or virsh commands. To manage volume groups using the virsh interface, use the virsh commands to create volume groups. For more information about volume groups, see the Red Hat Enterprise Linux Logical Volume Manager Administration Guide . LVM-based storage pools require a full disk partition. If activating a new partition or device with these procedures, the partition will be formatted and all data will be erased. If using the host's existing Volume Group (VG) nothing will be erased. It is recommended to back up the storage device before commencing the following procedure. For information on creating LVM volume groups, see the Red Hat Enterprise Linux Logical Volume Manager Administration Guide . If you create an LVM-based storage pool on an existing VG, you should not run the pool-build command. After ensuring that the VG is prepared, continue creating the storage pool with defining the storage pool . Parameters The following table provides a list of required parameters for the XML file, the virsh pool-define-as command, and the Virtual Machine Manager application, for creating an LVM-based storage pool. Table 13.6. LVM-based storage pool parameters Description XML pool-define-as Virtual Machine Manager The type of storage pool <pool type='logical'> [type] logical logical: LVM Volume Group The name of the storage pool <name> name </name> [name] name Name The path to the device for the storage pool <source> <device path=' device_path ' /> source-dev device_path Source Path The name of the volume group <name=' VG-name ' /> source-name VG-name Source Path The virtual group format <format type='lvm2' /> </source> source-format lvm2 N/A The target path <target> <path=' target-path ' /> </target> target target-path Target Path Note If the logical volume group is made of multiple disk partitions, there may be multiple source devices listed. 
For example: <source> <device path='/dev/sda1'/> <device path='/dev/sdb3'/> <device path='/dev/sdc2'/> ... </source> If you are using virsh to create the storage pool, continue with verifying that the storage pool was created . Examples The following is an example of an XML file for an LVM-based storage pool: <pool type='logical'> <name>guest_images_lvm</name> <source> <device path='/dev/sdc'/> <name>libvirt_lvm</name> <format type='lvm2'/> </source> <target> <path>/dev/libvirt_lvm</path> </target> </pool> The following is an example of a command for creating an LVM-based storage pool: The following images show an example of the virtual machine XML configuration Virtual Machine Manager Add a New Storage Pool dialog boxes for creating an LVM-based storage pool: Figure 13.9. Add a new LVM-based storage pool example 13.2.3.7. NFS-based storage pools Prerequisites To create an Network File System (NFS)-based storage pool, an NFS Server should already be configured to be used by the host machine. For more information about NFS, see the Red Hat Enterprise Linux Storage Administration Guide . After ensuring that the NFS Server is properly configured, continue creating the storage pool with defining the storage pool . Parameters The following table provides a list of required parameters for the XML file, the virsh pool-define-as command, and the Virtual Machine Manager application, for creating an NFS-based storage pool. Table 13.7. NFS-based storage pool parameters Description XML pool-define-as Virtual Machine Manager The type of storage pool <pool type='netfs'> [type] netfs netfs: Network Exported Directory The name of the storage pool <name> name </name> [name] name Name The hostname of the NFS server where the mount point is located. This can be a hostname or an IP address. <source> <host name=' host_name ' /> source-host host_name Host Name The directory used on the NFS server <dir path=' source_path ' /> </source> source-path source_path Source Path The path specifying the target. This will be the path used for the storage pool. <target> <path>/ target_path </path> </target> target target_path Target Path If you are using virsh to create the storage pool, continue with verifying that the storage pool was created . Examples The following is an example of an XML file for an NFS-based storage pool: <pool type='netfs'> <name>nfspool</name> <source> <host name='localhost'/> <dir path='/home/net_mount'/> </source> <target> <path>/var/lib/libvirt/images/nfspool</path> </target> </pool> The following is an example of a command for creating an NFS-based storage pool: The following images show an example of the virtual machine XML configuration Virtual Machine Manager Add a New Storage Pool dialog boxes for creating an NFS-based storage pool: Figure 13.10. Add a new NFS-based storage pool example 13.2.3.8. vHBA-based storage pools using SCSI devices Note You cannot use Virtual Machine Manager to create vHBA-based storage pools using SCSI devices. Recommendations N_Port ID Virtualization (NPIV) is a software technology that allows sharing of a single physical Fibre Channel host bus adapter (HBA). This allows multiple guests to see the same storage from multiple physical hosts, and thus allows for easier migration paths for the storage. As a result, there is no need for the migration to create or copy storage, as long as the correct storage path is specified. In virtualization, the virtual host bus adapter , or vHBA , controls the Logical Unit Numbers (LUNs) for virtual machines. 
For a host to share one Fibre Channel device path between multiple KVM guests, a vHBA must be created for each virtual machine. A single vHBA must not be used by multiple KVM guests. Each vHBA for NPIV is identified by its parent HBA and its own World Wide Node Name (WWNN) and World Wide Port Name (WWPN). The path to the storage is determined by the WWNN and WWPN values. The parent HBA can be defined as scsi_host # or as a WWNN/WWPN pair. Note If a parent HBA is defined as scsi_host # and hardware is added to the host machine, the scsi_host # assignment may change. Therefore, it is recommended that you define a parent HBA using a WWNN/WWPN pair. It is recommended that you define a libvirt storage pool based on the vHBA, because this preserves the vHBA configuration. Using a libvirt storage pool has two primary advantages: The libvirt code can easily find the LUN's path using the virsh command output. Virtual machine migration requires only defining and starting a storage pool with the same vHBA name on the target machine. To do this, the vHBA LUN, libvirt storage pool and volume name must be specified in the virtual machine's XML configuration. Refer to Section 13.2.3.8, "vHBA-based storage pools using SCSI devices" for an example. Note Before creating a vHBA, it is recommended that you configure storage array (SAN)-side zoning in the host LUN to provide isolation between guests and prevent the possibility of data corruption. To create a persistent vHBA configuration, first create a libvirt 'scsi' storage pool XML file using the format below. When creating a single vHBA that uses a storage pool on the same physical HBA, it is recommended to use a stable location for the <path> value, such as one of the /dev/disk/by-{path|id|uuid|label} locations on your system. When creating multiple vHBAs that use storage pools on the same physical HBA, the value of the <path> field must be only /dev/ , otherwise storage pool volumes are visible only to one of the vHBAs, and devices from the host cannot be exposed to multiple guests with the NPIV configuration. For more information on <path> and the elements in <target> , see upstream libvirt documentation . Prerequisites Before creating a vHBA-based storage pools with SCSI devices, create a vHBA: Procedure 13.10. Creating a vHBA Locate HBAs on the host system To locate the HBAs on your host system, use the virsh nodedev-list --cap vports command. The following example shows a host that has two HBAs that support vHBA: Check the HBA's details Use the virsh nodedev-dumpxml HBA_device command to see the HBA's details. The output from the command lists the <name> , <wwnn> , and <wwpn> fields, which are used to create a vHBA. <max_vports> shows the maximum number of supported vHBAs. For example: <device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <unique_id>0</unique_id> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device> In this example, the <max_vports> value shows there are a total 127 virtual ports available for use in the HBA configuration. The <vports> value shows the number of virtual ports currently being used. These values update after creating a vHBA. 
Create a vHBA host device Create an XML file similar to one of the following for the vHBA host. In these examples, the file is named vhba_host3.xml . This example uses scsi_host3 to describe the parent vHBA. # cat vhba_host3.xml <device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device> This example uses a WWNN/WWPN pair to describe the parent vHBA. # cat vhba_host3.xml <device> <name>vhba</name> <parent wwnn='20000000c9848140' wwpn='10000000c9848140'/> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device> Note The WWNN and WWPN values must match those in the HBA details seen in Procedure 13.10, "Creating a vHBA" . The <parent> field specifies the HBA device to associate with this vHBA device. The details in the <device> tag are used in the next step to create a new vHBA device for the host. For more information on the nodedev XML format, see the libvirt upstream pages . Create a new vHBA on the vHBA host device To create a vHBA on the basis of vhba_host3 , use the virsh nodedev-create command: Verify the vHBA Verify the new vHBA's details ( scsi_host5 ) with the virsh nodedev-dumpxml command: # virsh nodedev-dumpxml scsi_host5 <device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <unique_id>2</unique_id> <capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device> After verifying the vHBA, continue creating the storage pool with defining the storage pool . Parameters The following table provides a list of required parameters for the XML file and the virsh pool-define-as command, for creating a vHBA-based storage pool. Table 13.8. vHBA-based storage pool parameters Description XML pool-define-as The type of storage pool <pool type='scsi'> scsi The name of the storage pool <name> name </name> --adapter-name name The identifier of the vHBA. The parent attribute is optional. <source> <adapter type='fc_host' [parent= parent_scsi_device ] wwnn=' WWNN ' wwpn=' WWPN ' /> </source> [--adapter-parent parent ] --adapter-wwnn wwnn --adapter-wwpn wwpn The path specifying the target. This will be the path used for the storage pool. <target> <path> target_path </path> </target> target path_to_pool Important When the <path> field is /dev/ , libvirt generates a unique short device path for the volume device path. For example, /dev/sdc . Otherwise, the physical host path is used. For example, /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0 . The unique short device path allows the same volume to be listed in multiple guests by multiple storage pools. If the physical host path is used by multiple guests, duplicate device type warnings may occur. Note The parent attribute can be used in the <adapter> field to identify the physical HBA parent from which the NPIV LUNs by varying paths can be used. This field, scsi_host N , is combined with the vports and max_vports attributes to complete the parent identification. The parent , parent_wwnn , parent_wwpn , or parent_fabric_wwn attributes provide varying degrees of assurance that after the host reboots the same HBA is used. If no parent is specified, libvirt uses the first scsi_host N adapter that supports NPIV.
If only the parent is specified, problems can arise if additional SCSI host adapters are added to the configuration. If parent_wwnn or parent_wwpn is specified, after the host reboots the same HBA is used. If parent_fabric_wwn is used, after the host reboots an HBA on the same fabric is selected, regardless of the scsi_host N used. If you are using virsh to create the storage pool, continue with verifying that the storage pool was created . Examples The following are examples of XML files for vHBA-based storage pools. The first example is for an example of a storage pool that is the only storage pool on the HBA. The second example is for a storage pool that is one of several storage pools that use a single vHBA and uses the parent attribute to identify the SCSI host device. <pool type='scsi'> <name>vhbapool_host3</name> <source> <adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool> <pool type='scsi'> <name>vhbapool_host3</name> <source> <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool> The following is an example of a command for creating a vHBA-based storage pool: Note The virsh command does not provide a way to define the parent_wwnn , parent_wwpn , or parent_fabric_wwn attributes. Configuring a virtual machine to use a vHBA LUN After a storage pool is created for a vHBA, the vHBA LUN must be added to the virtual machine configuration. Create a disk volume on the virtual machine in the virtual machine's XML. Specify the storage pool and the storage volume in the <source> parameter. The following shows an example: <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='vhbapool_host3' volume='unit:0:4:0'/> <target dev='hda' bus='ide'/> </disk> 13.2.4. Deleting Storage Pools You can delete storage pools using virsh or the Virtual Machine Manager . 13.2.4.1. Prerequisites for deleting a storage pool To avoid negatively affecting other guest virtual machines that use the storage pool you want to delete, it is recommended that you stop the storage pool and release any resources being used by it. 13.2.4.2. Deleting storage pools using virsh List the defined storage pools: Stop the storage pool you want to delete. (Optional) For some types of storage pools, you can optionally remove the directory where the storage pool resides: Remove the storage pool's definition. Confirm the pool is undefined: 13.2.4.3. Deleting storage pools using Virtual Machine Manager Select the storage pool you want to delete in the storage pool list in the Storage tab of the Connection Details window . Click at the bottom of the Storage window. This stops the storage pool and releases any resources in use by it. Click . Note The icon is only enabled if the storage pool is stopped. The storage pool is deleted.
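To tie the preceding procedures together, the following is a minimal sketch of a complete storage pool lifecycle using virsh. The pool name example_pool and the target path /var/lib/libvirt/example_pool are illustrative assumptions only; substitute the pool type and source options appropriate for your storage pool type as described in Section 13.2.3, "Storage Pool Specifics".

# Define a persistent directory-based pool (illustrative name and path)
virsh pool-define-as example_pool dir --target /var/lib/libvirt/example_pool

# Build the target path, start the pool, and mark it for autostart
virsh pool-build example_pool
virsh pool-start example_pool
virsh pool-autostart example_pool

# Verify the pool
virsh pool-info example_pool

# Later, delete the pool: stop it, optionally remove its directory,
# and remove its definition
virsh pool-destroy example_pool
virsh pool-delete example_pool
virsh pool-undefine example_pool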
[ "<pool type='fs'> <name>guest_images_fs</name> <source> <device path='/dev/sdc1'/> </source> <target> <path>/guest_images</path> </target> </pool>", "virsh pool-define ~/guest_images.xml Pool defined from guest_images_fs", "virsh pool-create ~/guest_images.xml Pool created from guest_images_fs", "virsh pool-define-as guest_images_fs fs - - /dev/sdc1 - \"/guest_images\" Pool guest_images_fs defined", "virsh pool-create-as guest_images_fs fs - - /dev/sdc1 - \"/guest_images\" Pool guest_images_fs created", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_fs inactive no", "virsh pool-build guest_images_fs Pool guest_images_fs built ls -la / guest_images total 8 drwx------. 2 root root 4096 May 31 19:38 . dr-xr-xr-x. 25 root root 4096 May 31 19:38 .. virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_fs inactive no", "virsh pool-start guest_images_fs Pool guest_images_fs started virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_fs active no", "virsh pool-autostart guest_images_fs Pool guest_images_fs marked as autostarted virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_fs active yes", "virsh pool-info guest_images_fs Name: guest_images_fs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Persistent: yes Autostart: yes Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.20 GB mount | grep /guest_images /dev/sdc1 on /guest_images type ext4 (rw) ls -la /guest_images total 24 drwxr-xr-x. 3 root root 4096 May 31 19:47 . dr-xr-xr-x. 25 root root 4096 May 31 19:38 .. drwx------. 2 root root 16384 May 31 14:18 lost+found", "<pool type='dir'> <name>dirpool</name> <target> <path>/guest_images</path> </target> </pool>", "virsh pool-define-as dirpool dir --target \"/guest_images\" Pool FS_directory defined", "parted /dev/sdb GNU Parted 2.1 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) mklabel New disk label type? gpt (parted) quit Information: You may need to update /etc/fstab. 
#", "<pool type='disk'> <name>phy_disk</name> <source> <device path='/dev/sdb'/> <format type='gpt'/> </source> <target> <path>/dev</path> </target> </pool>", "virsh pool-define-as phy_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev Pool phy_disk defined", "mkfs.ext4 /dev/sdc1", "<pool type='fs'> <name>guest_images_fs</name> <source> <device path='/dev/sdc1'/> <format type='auto'/> </source> <target> <path>/guest_images</path> </target> </pool>", "virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images Pool guest_images_fs defined", "gluster volume status Status of volume: gluster-vol1 Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick 222.111.222.111:/gluster-vol1 49155 Y 18634 Task Status of Volume gluster-vol1 ------------------------------------------------------------------------------ There are no active volume tasks", "setsebool virt_use_fusefs on getsebool virt_use_fusefs virt_use_fusefs --> on", "<pool type='gluster'> <name>Gluster_pool</name> <source> <host name='111.222.111.222'/> <dir path='/'/> <name>gluster-vol1</name> </source> </pool>", "pool-define-as --name Gluster_pool --type gluster --source-host 111.222.111.222 --source-name gluster-vol1 --source-path / Pool Gluster_pool defined", "yum install targetcli", "targetcli", "create [block-name][filepath]", "create block1 dev=/dev/sdb1", "create [fileio-name][image-name] [image-size]", "create fileio1 /foo.img 50M", "create [ramdisk-name] [ramdisk-size]", "create ramdisk1 1M", "create iqn.2010-05.com.example.server1:iscsirhel7guest", "portals/ create", "portals/ create 143.22.16.33", "iscsi>iqn.iqn.2010-05.com.example.server1:iscsirhel7guest", "create /backstores/ramdisk/ramdisk1", "create /backstores/block/block1", "create /backstores/fileio/fileio1", "/iscsi/iqn.20...csirhel7guest ls o- tgp1 ............................................................[enabled, auth] o- acls...................................................................[0 ACL] o- luns..................................................................[3 LUNs] | o- lun0......................................................[ramdisk/ramdisk1] | o- lun1...............................................[block/block1 (dev/vdb1)] | o- lun2................................................[fileio/file1 (foo.img)] o- portals.............................................................[1 Portal] o- IP-ADDRESS:3260.........................................................[OK]", "cat /etc/iscsi/initiator2.iscsi InitiatorName=create iqn.2010-05.com.example.server1:iscsirhel7guest", "create", "create iqn.2010-05.com.example.server1:888", "set auth userid= user_ID set auth password= password set attribute authentication=1 set attribute generate_node_acls=1", "saveconfig", "systemctl enable target.service", "iscsiadm --mode discovery --type sendtargets --portal server1.example.com 143.22.16.33:3260,1 iqn.2010-05.com.example.server1:iscsirhel7guest", "iscsiadm -d2 -m node --login scsiadm: Max file limits 1024 1024 Logging in to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 143.22.16.33,3260] Login to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 143.22.16.33,3260] successful.", "iscsiadm -d2 -m node --logout scsiadm: Max file limits 1024 1024 Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 143.22.16.33,3260 Logout of [sid: 2, target: 
iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 143.22.16.33,3260] successful.", "<secret ephemeral='no' private='yes'> <description>Passphrase for the iSCSI example.com server</description> <usage type='iscsi'> <target>iscsirhel7secret</target> </usage> </secret>", "virsh secret-define secret.xml", "virsh secret-list UUID Usage -------------------------------------------------------------------------------- 2d7891af-20be-4e5e-af83-190e8a922360 iscsi iscsirhel7secret", "MYSECRET=`printf %s \" password123 \" | base64` virsh secret-set-value 2d7891af-20be-4e5e-af83-190e8a922360 USDMYSECRET", "<pool type='iscsi'> <name>iscsirhel7pool</name> <source> <host name='192.168.122.1'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> <auth type='chap' username='redhat'> <secret usage='iscsirhel7secret'/> </auth> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "<auth username='redhat'> <secret type='iscsi' usage='iscsirhel7secret'/> </auth>", "virsh pool-destroy iscsirhel7pool virsh pool-start iscsirhel7pool", "<pool type='iscsi'> <name>iSCSI_pool</name> <source> <host name='server1.example.com'/> <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "virsh pool-define-as --name iSCSI_pool --type iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest --target /dev/disk/by-path Pool iSCSI_pool defined", "<source> <device path='/dev/sda1'/> <device path='/dev/sdb3'/> <device path='/dev/sdc2'/> </source>", "<pool type='logical'> <name>guest_images_lvm</name> <source> <device path='/dev/sdc'/> <name>libvirt_lvm</name> <format type='lvm2'/> </source> <target> <path>/dev/libvirt_lvm</path> </target> </pool>", "virsh pool-define-as guest_images_lvm logical --source-dev=/dev/sdc --source-name libvirt_lvm --target /dev/libvirt_lvm Pool guest_images_lvm defined", "<pool type='netfs'> <name>nfspool</name> <source> <host name='localhost'/> <dir path='/home/net_mount'/> </source> <target> <path>/var/lib/libvirt/images/nfspool</path> </target> </pool>", "virsh pool-define-as nfspool netfs --source-host localhost --source-path /home/net_mount --target /var/lib/libvirt/images/nfspool Pool nfspool defined", "virsh nodedev-list --cap vports scsi_host3 scsi_host4", "virsh nodedev-dumpxml scsi_host3", "<device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <unique_id>0</unique_id> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device>", "cat vhba_host3.xml <device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>", "cat vhba_host3.xml <device> <name>vhba</name> <parent wwnn='20000000c9848140' wwpn='10000000c9848140'/> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>", "virsh nodedev-create vhba_host3.xml Node device scsi_host5 created from vhba_host3.xml", "virsh nodedev-dumpxml scsi_host5 <device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <unique_id>2</unique_id> 
<capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device>", "<pool type='scsi'> <name>vhbapool_host3</name> <source> <adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "<pool type='scsi'> <name>vhbapool_host3</name> <source> <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>", "virsh pool-define-as vhbapool_host3 scsi --adapter-parent scsi_host3 --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d --target /dev/disk/by-path Pool vhbapool_host3 defined", "<disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='vhbapool_host3' volume='unit:0:4:0'/> <target dev='hda' bus='ide'/> </disk>", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_pool active yes", "virsh pool-destroy guest_images_disk", "virsh pool-delete guest_images_disk", "virsh pool-undefine guest_images_disk", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes" ]
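The command snippets above stop at defining the various storage pool types. As a rough, hedged sketch of the remaining pool life cycle, the following virsh commands build, start, and autostart one of the defined pools and then create a volume in it; the pool name guest_images_fs and the volume name guest1.qcow2 simply reuse and extend the example names above and are not additional values from the source guide:

virsh pool-build guest_images_fs
virsh pool-start guest_images_fs
virsh pool-autostart guest_images_fs
virsh pool-info guest_images_fs
virsh vol-create-as guest_images_fs guest1.qcow2 10G --format qcow2
virsh vol-list guest_images_fs

Once the pool is running, virsh pool-destroy, pool-delete, and pool-undefine (shown at the end of the snippets above) reverse these steps when the pool is no longer needed.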
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/storage_pools
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/proc-providing-feedback-on-redhat-documentation
Chapter 2. Abstract Login Modules
Chapter 2. Abstract Login Modules The abstract login modules are abstract Java classes that are extended by the other login modules in order to provide common functionality and configuration options. The abstract login modules are never used directly, but the configuration options are available to any login modules that extend them. 2.1. AbstractServer Login Module Short name : AbstractServerLoginModule Full name : org.jboss.security.auth.spi.AbstractServerLoginModule The AbstractServer Login Module serves as a base class for many login modules as well as several abstract login modules. It implements the common functionality required for a JAAS server side login module and implements the PicketBox standard Subject usage pattern of storing identities and roles. Option Type Default Description principalClass A fully qualified classname org.jboss.security.SimplePrincipal A Principal implementation class which contains a constructor that takes a String argument for the principal name. module String none A reference to a jboss-module that can be used to load a custom callback/validator. unauthenticatedIdentity String none This defines the principal name that should be assigned to requests that contain no authentication information. This can allow unprotected servlets to invoke methods on Jakarta Enterprise Beans that do not require a specific role. Such a principal has no associated roles and can only access unsecured Jakarta Enterprise Beans or Jakarta Enterprise Beans methods that are associated with the unchecked permission constraint. See the Unauthenticated Identity section for more details. password-stacking useFirstPass or false false See the Password Stacking section for more details. 2.1.1. Unauthenticated Identity Not all requests are received in an authenticated format. The unauthenticatedIdentity login module configuration assigns a specific identity, guest for example, to requests that are made with no associated authentication information. This can be used to allow unprotected servlets to invoke methods on Jakarta Enterprise Beans that do not require a specific role. Such a principal has no associated roles and so can only access either unsecured Jakarta Enterprise Beans or Jakarta Enterprise Beans methods that are associated with the unchecked permission constraint. For example, this configuration option can be used in the UsersRoles and Remoting Login Modules. 2.1.2. Password Stacking Multiple login modules can be chained together in a stack, with each login module providing both the credentials verification and role assignment during authentication. This works for many use cases, but sometimes credentials verification and role assignment are split across multiple user management stores. Consider the case where users are managed in a central LDAP server but application-specific roles are stored in the application's relational database. The password-stacking module option captures this relationship. To use password stacking, each login module should set the password-stacking module option to useFirstPass in its <module-option> section. If a module configured for password stacking has authenticated the user, all the other stacking modules will consider the user authenticated and only attempt to provide a set of roles for the authorization step.
When the password-stacking option is set to useFirstPass , this module first looks for a shared user name and password under the property names javax.security.auth.login.name and javax.security.auth.login.password respectively in the login module shared state map. If found, these properties are used as the principal name and password. If not found, the principal name and password are set by this login module and stored under the property names javax.security.auth.login.name and javax.security.auth.login.password respectively. Note When using password stacking, set all modules to be required. This ensures that all modules are considered, and have the chance to contribute roles to the authorization process. 2.2. UsernamePassword Login Module Short name : UsernamePasswordLoginModule Full name : org.jboss.security.auth.spi.UsernamePasswordLoginModule Parent : AbstractServer Login Module The UsernamePassword Login Module is an abstract login module that imposes an identity == String username , credentials == String password view on the login process. It inherits all the fields from the AbstractServer Login Module, in addition to the fields below. Option Type Default Description ignorePasswordCase boolean false A flag indicating if the password comparison should ignore case. digestCallback A fully qualified classname none The class name of the org.jboss.crypto.digest.DigestCallback implementation that includes pre/post digest content like salts for hashing the input password. Only used if hashAlgorithm has been specified and hashUserPassword is set to true . storeDigestCallback A fully qualified classname none The class name of the org.jboss.crypto.digest.DigestCallback implementation that includes pre/post digest content like salts for hashing the store/expected password. Only used if hashStorePassword is true and hashAlgorithm has been specified. throwValidateError boolean false A flag that indicates whether validation errors should be exposed to clients or not. inputValidator A fully qualified classname none The instance of the org.jboss.security.auth.spi.InputValidator implementation used to validate the user name and password supplied by the client. Note The UsernamePassword Login Module options regarding password hashing are described in the following section. 2.2.1. Password Hashing Most login modules must compare a client-supplied password to a password stored in a user management system. These modules generally work with plain text passwords, but can be configured to support hashed passwords to prevent plain text passwords from being stored on the server side. JBoss EAP supports the ability to configure the hashing algorithm, encoding, and character set as well as when the user password and store password are hashed. The following are password hashing options that can be configured as part of a login module that has UsernamePassword Login Module as a parent: Option Type Default Description hashAlgorithm String representing a password hashing algorithm. none Name of the java.security.MessageDigest algorithm to be used to hash the password. There is no default so this option must be specified to enable hashing. Typical values are SHA-256 , SHA-1 and MD5 . When hashAlgorithm is specified and hashUserPassword is set to true , the clear text password obtained from the CallbackHandler is hashed before it is passed to UsernamePasswordLoginModule.validatePassword as the inputPassword argument. hashEncoding String base64 The string format for the hashed password, if hashAlgorithm is also set.
May specify one of three encoding types: base64 , hex or rfc2617 . hashCharset String The default encoding set in the container's runtime environment The name of the charset/encoding to use when converting the password string to a byte array. hashUserPassword boolean true A flag indicating if the user-entered password should be hashed. The hashed user password is compared against the value in the login module, which is expected to be a hash of the password. hashStorePassword boolean false A flag indicating if the store password returned should be hashed. This is used for digest authentication, where the user submits a hash of the user password along with request-specific tokens from the server to be compared. The hash algorithm (for digest authentication, this would be rfc2617 ) is used to compute a server-side hash, which should match the hashed value sent from the client. passwordIsA1Hash boolean A flag used by the org.jboss.security.auth.callback.RFC2617Digest when it is configured as the digestCallback or storeDigestCallback . If true, the incoming password will not be hashed because it is already hashed. 2.3. AbstractPasswordCredential Login Module Short name : AbstractPasswordCredentialLoginModule Full name : org.picketbox.datasource.security.AbstractPasswordCredentialLoginModule Parent : AbstractServer Login Module AbstractPasswordCredential Login Module is a base login module that handles PasswordCredentials. 2.4. Common Login Module Short name : CommonLoginModule Full name : org.jboss.security.negotiation.common.CommonLoginModule Parent : AbstractServer Login Module Common Login Module is an abstract login module that serves as a base login module for some login modules within JBoss Negotiation.
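To make the password-stacking description above concrete, the following is a minimal sketch of a legacy security domain that stacks an LDAP login module (credential verification) with a database login module (role assignment). The domain name, datasource JNDI name, and roles query are illustrative assumptions, not values taken from this guide, and the LDAP connection options are omitted:

<security-domain name="stacked-example" cache-type="default">
  <authentication>
    <login-module code="LdapExtended" flag="required">
      <module-option name="password-stacking" value="useFirstPass"/>
      <!-- LDAP connection options omitted for brevity -->
    </login-module>
    <login-module code="Database" flag="required">
      <module-option name="password-stacking" value="useFirstPass"/>
      <module-option name="dsJndiName" value="java:jboss/datasources/AppDS"/>
      <module-option name="rolesQuery" value="select role, 'Roles' from user_roles where username=?"/>
    </login-module>
  </authentication>
</security-domain>

Because both modules are marked required and set password-stacking to useFirstPass, the first module shares the verified user name and password through the login module shared state map, and the second module only contributes roles during the authorization step.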
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/abstract_login_modules
REST API Guide
REST API Guide Red Hat Virtualization 4.4 Using the Red Hat Virtualization REST Application Programming Interface Red Hat Virtualization Documentation Team [email protected] Abstract This guide describes the Red Hat Virtualization Manager Representational State Transfer Application Programming Interface. This guide is generated from documentation comments in the ovirt-engine-api-model code, and is currently partially complete. Updated versions of this documentation will be published as new content becomes available.
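As a minimal, hedged illustration of the API that this guide documents, the following curl calls retrieve the API entry point and the collection of virtual machines. The Manager host name rhvm.example.com and the admin@internal password are placeholders; in production you would pass the Manager CA certificate with --cacert rather than disabling verification with -k:

curl -k -u 'admin@internal:password' -H 'Accept: application/json' 'https://rhvm.example.com/ovirt-engine/api'
curl -k -u 'admin@internal:password' -H 'Accept: application/json' 'https://rhvm.example.com/ovirt-engine/api/vms'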
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/rest_api_guide/index
Chapter 16. Conduits
Chapter 16. Conduits Abstract Conduits are a low-level piece of the transport architecture that are used to implement outbound connections. Their behavior and life-cycle can affect system performance and processing load. Overview Conduits manage the client-side, or outbound, transport details in the Apache CXF runtime. They are responsible for opening ports, establishing outbound connections, sending messages, and listening for any responses between an application and a single external endpoint. If an application connects to multiple endpoints, it will have one conduit instance for each endpoint. Each transport type implements its own conduit using the Conduit interface. This allows for a standardized interface between the application-level functionality and the transports. In general, you only need to worry about the conduits being used by your application when configuring the client-side transport details. The underlying semantics of how the runtime handles conduits is, generally, not something a developer needs to worry about. However, there are cases when an understanding of conduits can prove helpful: Implementing a custom transport Advanced application tuning to manage limited resources Conduit life-cycle Conduits are managed by the client implementation object. Once created, a conduit lives for the duration of the client implementation object. The conduit's life-cycle is: When the client implementation object is created, it is given a reference to a ConduitSelector object. When the client needs to send a message, it requests a reference to a conduit from the conduit selector. If the message is for a new endpoint, the conduit selector creates a new conduit and passes it to the client implementation. Otherwise, it passes the client a reference to the conduit for the target endpoint. The conduit sends messages when needed. When the client implementation object is destroyed, all of the conduits associated with it are destroyed. Conduit weight The weight of a conduit object depends on the transport implementation. HTTP conduits are extremely lightweight. JMS conduits are heavy because they are associated with the JMS Session object and one or more JMSListenerContainer objects.
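To make the conduit discussion concrete, the following is a small sketch of the most common case where a developer touches a conduit directly: retrieving the HTTP conduit behind a JAX-WS proxy and tuning its client policy. The proxy object and the timeout values are illustrative assumptions rather than values taken from this chapter:

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

public class ConduitTuningExample {
    // 'proxy' is assumed to be an existing JAX-WS client proxy for the target endpoint.
    static void tuneConduit(Object proxy) {
        // Unwrap the CXF client behind the proxy and obtain its conduit.
        Client client = ClientProxy.getClient(proxy);
        HTTPConduit conduit = (HTTPConduit) client.getConduit();

        // Adjust the outbound HTTP behavior for this endpoint.
        HTTPClientPolicy policy = new HTTPClientPolicy();
        policy.setConnectionTimeout(30000); // milliseconds to establish the connection
        policy.setReceiveTimeout(60000);    // milliseconds to wait for the response
        conduit.setClient(policy);
    }
}

Because the conduit is owned by the client implementation object, these settings live for as long as the client does and apply to every message sent to that endpoint.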
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfconduits
Chapter 4. Fence Devices
Chapter 4. Fence Devices This chapter documents the fence devices currently supported in Red Hat Enterprise Linux High-Availability Add-On. Table 4.1, "Fence Device Summary" lists the fence devices, the fence device agents associated with the fence devices, and provides a reference to the table documenting the parameters for the fence devices. Table 4.1. Fence Device Summary Fence Device Fence Agent Reference to Parameter Description APC Power Switch (telnet/SSH) fence_apc Table 4.2, "APC Power Switch (telnet/SSH)" APC Power Switch over SNMP fence_apc_snmp Table 4.3, "APC Power Switch over SNMP" Brocade Fabric Switch fence_brocade Table 4.4, "Brocade Fabric Switch" Cisco MDS fence_cisco_mds Table 4.5, "Cisco MDS" Cisco UCS fence_cisco_ucs Table 4.6, "Cisco UCS" Dell DRAC 5 fence_drac5 Table 4.7, "Dell DRAC 5" Dell iDRAC fence_idrac Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Eaton Network Power Switch (SNMP Interface) fence_eaton_snmp Table 4.8, "Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)" Egenera BladeFrame fence_egenera Table 4.9, "Egenera BladeFrame" Emerson Network Power Switch (SNMP Interface) fence_emerson Table 4.10, "Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise Linux 6.7 and later)" ePowerSwitch fence_eps Table 4.11, "ePowerSwitch" Fence virt (Serial/VMChannel Mode) fence_virt Table 4.12, "Fence virt (Serial/VMChannel Mode)" Fence virt (fence_xvm/Multicast Mode) fence_xvm Table 4.13, "Fence virt (Multicast Mode) " Fujitsu Siemens Remoteview Service Board (RSB) fence_rsb Table 4.14, "Fujitsu Siemens Remoteview Service Board (RSB)" HP BladeSystem fence_hpblade Table 4.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" HP iLO Device fence_ilo Table 4.16, "HP iLO (Integrated Lights Out) and HP iLO2" HP iLO over SSH Device fence_ilo3_ssh Table 4.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO4 Device fence_ilo4 Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" HP iLO4 over SSH Device fence_ilo4_ssh Table 4.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO MP fence_ilo_mp Table 4.18, "HP iLO (Integrated Lights Out) MP" HP Moonshot iLO fence_ilo_moonshot Table 4.19, "HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later)" IBM BladeCenter fence_bladecenter Table 4.20, "IBM BladeCenter" IBM BladeCenter SNMP fence_ibmblade Table 4.21, "IBM BladeCenter SNMP" IBM Integrated Management Module fence_imm Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" IBM iPDU fence_ipdu Table 4.22, "IBM iPDU (Red Hat Enterprise Linux 6.4 and later)" IF MIB fence_ifmib Table 4.23, "IF MIB" Intel Modular fence_intelmodular Table 4.24, "Intel Modular" IPMI (Intelligent Platform Management Interface) Lan fence_ipmilan Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Fence kdump fence_kdump Table 4.26, "Fence kdump" Multipath Persistent Reservation Fencing fence_mpath Table 4.27, "Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later)" RHEV-M fencing fence_rhevm Table 4.28, "RHEV-M REST API (RHEL 6.2 and later against RHEV 3.0 and later)" SCSI Fencing fence_scsi Table 4.29, "SCSI 
Reservation Fencing" VMware Fencing (SOAP Interface) fence_vmware_soap Table 4.30, "VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)" WTI Power Switch fence_wti Table 4.31, "WTI Power Switch" 4.1. APC Power Switch over Telnet and SSH Table 4.2, "APC Power Switch (telnet/SSH)" lists the fence device parameters used by fence_apc , the fence agent for APC over telnet/SSH. Table 4.2. APC Power Switch (telnet/SSH) luci Field cluster.conf Attribute Description Name name A name for the APC device connected to the cluster into which the fence daemon logs by means of telnet/ssh. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use to connect to the device. The default port is 23, or 22 if Use SSH is selected. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port port The port. Switch (optional) switch The switch number for the APC switch that connects to the node when you have multiple daisy-chained switches. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Figure 4.1, "APC Power Switch" shows the configuration screen for adding an APC Power Switch fence device. Figure 4.1. APC Power Switch The following command creates a fence device instance for a APC device: The following is the cluster.conf entry for the fence_apc device:
[ "ccs -f cluster.conf --addfencedev apc agent=fence_apc ipaddr=192.168.0.1 login=root passwd=password123", "<fencedevices> <fencedevice agent=\"fence_apc\" name=\"apc\" ipaddr=\"apc-telnet.example.com\" login=\"root\" passwd=\"password123\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/ch-fence-devices-CA
Updating clusters
Updating clusters OpenShift Container Platform 4.12 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team
[ "oc adm upgrade --include-not-recommended", "Cluster version is 4.10.22 Upstream is unset, so the cluster will use an appropriate default. Channel: fast-4.11 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10) Recommended updates: VERSION IMAGE 4.10.26 quay.io/openshift-release-dev/ocp-release@sha256:e1fa1f513068082d97d78be643c369398b0e6820afab708d26acda2262940954 4.10.25 quay.io/openshift-release-dev/ocp-release@sha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc 4.10.24 quay.io/openshift-release-dev/ocp-release@sha256:aab51636460b5a9757b736a29bc92ada6e6e6282e46b06e6fd483063d590d62a 4.10.23 quay.io/openshift-release-dev/ocp-release@sha256:e40e49d722cb36a95fa1c03002942b967ccbd7d68de10e003f0baa69abad457b Supported but not recommended updates: Version: 4.11.0 Image: quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4 Recommended: False Reason: RPMOSTreeTimeout Message: Nodes with substantial numbers of containers and CPU contention may not reconcile machine configuration https://bugzilla.redhat.com/show_bug.cgi?id=2111817#c22", "oc get clusterversion version -o json | jq '.status.availableUpdates'", "[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]", "oc get clusterversion version -o json | jq '.status.conditionalUpdates'", "[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] 
}, ]", "oc adm release extract <release image>", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata", "0000_<runlevel>_<component>_<manifest-name>.yaml", "0000_03_config-operator_01_proxy.crd.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h", "oc adm upgrade channel <channel>", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb", "Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)", "Cluster update time = 60 + (6 x 5) = 90 minutes", "Cluster update time = 60 + (3 x 5) = 75 minutes", "oc get apirequestcounts", "NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H poddisruptionbudgets.v1.policy 391 8114 poddisruptionbudgets.v1beta1.policy 1.25 2 23 podmonitors.v1.monitoring.coreos.com 3 70 podnetworkconnectivitychecks.v1alpha1.controlplane.operator.openshift.io 612 11748 pods.v1 1531 38634 podsecuritypolicies.v1beta1.policy 1.25 3 39 podtemplates.v1 2 79 preprovisioningimages.v1alpha1.metal3.io 2 39 priorityclasses.v1.scheduling.k8s.io 12 248 prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 1.26 3 86", "oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!=\"\")]}{.status.removedInRelease}{\"\\t\"}{.metadata.name}{\"\\n\"}{end}'", "1.26 flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 1.26 horizontalpodautoscalers.v2beta2.autoscaling 1.25 poddisruptionbudgets.v1beta1.policy 1.25 podsecuritypolicies.v1beta1.policy 1.26 prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io", "oc get apirequestcounts <resource>.<version>.<group> -o yaml", "oc get apirequestcounts poddisruptionbudgets.v1beta1.policy -o yaml", "oc get apirequestcounts poddisruptionbudgets.v1beta1.policy -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{\",\"}{.username}{\",\"}{.userAgent}{\"\\n\"}{end}' | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT", "VERBS USERNAME USERAGENT watch system:serviceaccount:openshift-operators:3scale-operator manager/v0.0.0 watch system:serviceaccount:openshift-operators:datadog-operator-controller-manager manager/v0.0.0", "oc -n openshift-config patch cm admin-acks --patch 
'{\"data\":{\"ack-4.11-kube-1.25-api-removals-in-4.12\":\"true\"}}' --type=merge", "oc get mcp", "NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False", "oc adm upgrade channel eus-<4.y+2>", "oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'", "oc adm upgrade --to-latest", "Updating to latest version <4.y+1.z>", "oc adm upgrade", "Cluster version is <4.y+1.z>", "oc adm upgrade --to-latest", "oc adm upgrade", "Cluster version is <4.y+2.z>", "oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'", "oc get mcp", "NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc adm release extract --credentials-requests --cloud=<provider_type> --to=<path_to_directory_with_list_of_credentials_requests>/credrequests quay.io/<path_to>/ocp-release:<version>", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1", "oc create namespace <component_namespace>", "ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "oc edit cloudcredential cluster", 
"metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1", "oc get machinehealthcheck -n openshift-machine-api", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-", "oc adm upgrade", "Cluster version is 4.9.23 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.9 (available channels: candidate-4.10, candidate-4.9, fast-4.10, fast-4.9, stable-4.10, stable-4.9, eus-4.10) Recommended updates: VERSION IMAGE 4.9.24 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 4.9.25 quay.io/openshift-release-dev/ocp-release@sha256:2eafde815e543b92f70839972f585cc52aa7c37aa72d5f3c8bc886b0fd45707a 4.9.26 quay.io/openshift-release-dev/ocp-release@sha256:3ccd09dd08c303f27a543351f787d09b83979cd31cf0b4c6ff56cd68814ef6c8 4.9.27 quay.io/openshift-release-dev/ocp-release@sha256:1c7db78eec0cf05df2cead44f69c0e4b2c3234d5635c88a41e1b922c3bedae16 4.9.28 quay.io/openshift-release-dev/ocp-release@sha256:4084d94969b186e20189649b5affba7da59f7d1943e4e5bc7ef78b981eafb7a8 4.9.29 quay.io/openshift-release-dev/ocp-release@sha256:b04ca01d116f0134a102a57f86c67e5b1a3b5da1c4a580af91d521b8fa0aa6ec 4.9.31 quay.io/openshift-release-dev/ocp-release@sha256:2a28b8ebb53d67dd80594421c39e36d9896b1e65cb54af81fbb86ea9ac3bf2d7 4.9.32 quay.io/openshift-release-dev/ocp-release@sha256:ecdb6d0df547b857eaf0edb5574ddd64ca6d9aff1fa61fd1ac6fb641203bedfa", "oc adm upgrade channel <channel>", "oc adm upgrade channel stable-4.12", "oc adm upgrade --to-latest=true 1", "oc adm upgrade --to=<version> 1", "oc adm upgrade", "oc get clusterversion", "Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.10 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10) No updates available. 
You may force an upgrade to a specific release image, but doing so might not be supported and might result in downtime or data loss.", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.25.0 ip-10-0-170-223.ec2.internal Ready master 82m v1.25.0 ip-10-0-179-95.ec2.internal Ready worker 70m v1.25.0 ip-10-0-182-134.ec2.internal Ready worker 70m v1.25.0 ip-10-0-211-16.ec2.internal Ready master 82m v1.25.0 ip-10-0-250-100.ec2.internal Ready worker 69m v1.25.0", "oc adm upgrade --include-not-recommended", "oc adm upgrade --allow-not-recommended --to <version> <.>", "oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge", "clusterversion.config.openshift.io/version patched", "oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes", "ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm", "oc label node <node_name> node-role.kubernetes.io/<custom_label>=", "oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=", "node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3", "oc create -f <file_name>", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: \"\"", "oc create -f machineConfigPool.yaml", "machineconfigpool.machineconfiguration.openshift.io/worker-perf created", "oc label node worker-a node-role.kubernetes.io/worker-perf=''", "oc label node worker-b node-role.kubernetes.io/worker-perf=''", "oc label node worker-c node-role.kubernetes.io/worker-perf=''", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "oc create -f new-machineconfig.yaml", "oc label node worker-a node-role.kubernetes.io/worker-perf-canary=''", "oc label node worker-a node-role.kubernetes.io/worker-perf-", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: 
\"\"", "oc create -f machineConfigPool-Canary.yaml", "machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m", "oc get nodes", "NAME STATUS ROLES AGE VERSION worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5", "systemctl status kdump.service", "NAME STATUS ROLES AGE VERSION kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS)", "cat /proc/cmdline", "crashkernel=512M", "oc label node worker-a node-role.kubernetes.io/worker-perf=''", "oc label node worker-a node-role.kubernetes.io/worker-perf-canary-", "oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge", "oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched", "oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge", "oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":false}}' --type=merge", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched", "oc get machineconfigpools", "oc label node <node_name> node-role.kubernetes.io/<custom_label>-", "oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-", "node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m", "oc delete mcp <mcp_name>", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version", "Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64", "variant: openshift version: 4.12.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml", "oc apply -f 
./99-worker-bootupctl-update.yaml", "--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"", "[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml", "systemctl disable --now firewalld.service", "subscription-manager repos --disable=rhocp-4.11-for-rhel-8-x86_64-rpms --disable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhocp-4.12-for-rhel-8-x86_64-rpms", "yum swap ansible ansible-core", "yum update openshift-ansible openshift-clients", "subscription-manager repos --disable=rhocp-4.11-for-rhel-8-x86_64-rpms --enable=rhocp-4.12-for-rhel-8-x86_64-rpms", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1", "oc get node", "NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.25.0 mycluster-control-plane-1 Ready master 145m v1.25.0 mycluster-control-plane-2 Ready master 145m v1.25.0 mycluster-rhel8-0 Ready worker 98m v1.25.0 mycluster-rhel8-1 Ready worker 98m v1.25.0 mycluster-rhel8-2 Ready worker 98m v1.25.0 mycluster-rhel8-3 Ready worker 98m v1.25.0", "yum update", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . 
> <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "mkdir -p <directory_name>", "cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "tar xvzf oc-mirror.tar.gz", "chmod +x oc-mirror", "sudo mv oc-mirror /usr/local/bin/.", "oc mirror help", "oc mirror init --registry example.com/mirror/oc-mirror-metadata > imageset-config.yaml 1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.12 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi8/ubi:latest 9 helm: {}", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2", "oc mirror --config=./imageset-config.yaml \\ 1 file://<path_to_output_directory> 2", "cd <path_to_output_directory>", "ls", "mirror_seq1_000000.tar", "oc mirror --from=./mirror_seq1_000000.tar \\ 1 docker://registry.example:5000 2", "oc apply -f ./oc-mirror-workspace/results-1639608409/", "oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/", "oc get imagecontentsourcepolicy", "oc get catalogsource -n openshift-marketplace", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3", "Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt", "cd oc-mirror-workspace/", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: aws-load-balancer-operator", "oc mirror --config=./imageset-config.yaml \\ 1 --use-oci-feature \\ 2 --oci-feature-action=copy \\ 3 oci://my-oci-catalog 4", "[[registry]] location = \"registry.redhat.io:5000\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"preprod-registry.example.com\" 
insecure = false", "ls -l", "my-oci-catalog 1 oc-mirror-workspace 2 olm_artifacts 3", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 mirror: operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog/redhat-operator-index 1 packages: - name: aws-load-balancer-operator", "oc mirror --config=./imageset-config.yaml \\ 1 --use-oci-feature \\ 2 --oci-feature-action=mirror \\ 3 docker://registry.example:5000 4", "additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz", "repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "architectures: - amd64 - arm64", "channels: - name: stable-4.10 - name: stable-4.12", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: channels: - name: stable-4.10 minVersion: 4.10.10", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.12 type: ocp graph: true operators: - catalog: registry.redhat.io/redhat/certified-operator-index:v4.12 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 full: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"s390x\" channels: - name: stable-4.12 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 
additionalImages: - name: registry.redhat.io/ubi9/ubi:latest", "export OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1", "oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature", "oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1", "oc create -f <filename>.yaml", "oc create -f update-service-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group spec: targetNamespaces: - openshift-update-service", "oc -n openshift-update-service create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"", "oc create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-subscription.yaml", "oc -n openshift-update-service get clusterserviceversions", "NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded", "FROM registry.access.redhat.com/ubi8/ubi:8.1 RUN curl -L -o cincinnati-graph-data.tar.gz 
https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD [\"/bin/bash\", \"-c\" ,\"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data\"]", "podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest", "podman push registry.example.com/openshift/graph-data:latest", "NAMESPACE=openshift-update-service", "NAME=service", "RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images", "GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest", "oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF", "while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done", "while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done", "NAMESPACE=openshift-update-service", "NAME=service", "POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"", "PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"", "oc patch clusterversion version -p USDPATCH --type merge", "oc get machinehealthcheck -n openshift-machine-api", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-", "oc adm release info -o 'jsonpath={.digest}{\"\\n\"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE}", "sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d", "oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>", "skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - 
mirror.example.net/registry-example-com source: registry.example.com 8", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.25.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.25.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.25.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.25.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.25.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.25.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry", "oc apply -f imageContentSourcePolicy.yaml", "oc get ImageContentSourcePolicy -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}", "oc get updateservice -n openshift-update-service", "NAME AGE service 6s", "oc delete updateservice service -n openshift-update-service", "updateservice.updateservice.operator.openshift.io \"service\" deleted", "oc project openshift-update-service", "Now using project \"openshift-update-service\" on server \"https://example.com:6443\".", "oc get operatorgroup", "NAME AGE openshift-update-service-fprx2 4m41s", "oc delete operatorgroup openshift-update-service-fprx2", "operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted", "oc get subscription", "NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1", "oc get subscription update-service-operator -o yaml | grep \" currentCSV\"", "currentCSV: update-service-operator.v0.0.1", "oc delete subscription update-service-operator", "subscription.operators.coreos.com 
\"update-service-operator\" deleted", "oc delete clusterserviceversion update-service-operator.v0.0.1", "clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted", "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.25.0 control-plane-node-1 Ready master 75m v1.25.0 control-plane-node-2 Ready master 75m v1.25.0", "oc adm cordon <control_plane_node>", "oc wait --for=condition=Ready node/<control_plane_node>", "oc adm uncordon <control_plane_node>", "oc get nodes -l node-role.kubernetes.io/worker", "NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.25.0 compute-node-1 Ready worker 30m v1.25.0 compute-node-2 Ready worker 30m v1.25.0", "oc adm cordon <compute_node>", "oc adm drain <compute_node> [--pod-selector=<pod_selector>]", "oc wait --for=condition=Ready node/<compute_node>", "oc adm uncordon <compute_node>", "type PreflightValidationOCPSpec struct { // releaseImage describes the OCP release image that all Modules need to be checked against. // +kubebuilder:validation:Required ReleaseImage string `json:\"releaseImage\"` 1 // Boolean flag that determines whether images build during preflight must also // be pushed to a defined repository // +optional PushBuiltImage bool `json:\"pushBuiltImage\"` 2 }", "type CRStatus struct { // Status of Module CR verification: true (verified), false (verification failed), // error (error during verification process), unknown (verification has not started yet) // +required // +kubebuilder:validation:Required // +kubebuilder:validation:Enum=True;False VerificationStatus string `json:\"verificationStatus\"` 1 // StatusReason contains a string describing the status source. // +optional StatusReason string `json:\"statusReason,omitempty\"` 2 // Current stage of the verification process: // image (image existence verification), build(build process verification) // +required // +kubebuilder:validation:Required // +kubebuilder:validation:Enum=Image;Build;Sign;Requeued;Done VerificationStage string `json:\"verificationStage\"` 3 // LastTransitionTime is the last time the CR status transitioned from one status to another. // This should be when the underlying status changed. If that is not known, then using the time when the API field changed is acceptable. // +required // +kubebuilder:validation:Required // +kubebuilder:validation:Type=string // +kubebuilder:validation:Format=date-time LastTransitionTime metav1.Time `json:\"lastTransitionTime\" protobuf:\"bytes,4,opt,name=lastTransitionTime\"` 4 }", "RUN depmod -b /opt USD{KERNEL_VERSION}", "quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/updating_clusters/index
7.8. automake
7.8. automake 7.8.1. RHSA-2013:0526 - Low: automake security update An updated automake package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Automake is a tool for automatically generating Makefile.in files compliant with the GNU Coding Standards. Security Fix CVE-2012-3386 It was found that the distcheck rule in Automake-generated Makefiles made a directory world-writable when preparing source archives. If a malicious, local user could access this directory, they could execute arbitrary code with the privileges of the user running "make distcheck". Red Hat would like to thank Jim Meyering for reporting this issue. Upstream acknowledges Stefano Lattarini as the original reporter. Users of automake are advised to upgrade to this updated package, which corrects this issue.
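As a hedged illustration only (assuming a standard yum-based Red Hat Enterprise Linux 6 system that is registered to receive updates; your subscription tooling may differ), the updated package can typically be applied and verified like this:

# yum update automake
# rpm -q automake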
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/automake
7.4. RHEA-2013:1626 - new packages: p11-kit
7.4. RHEA-2013:1626 - new packages: p11-kit New p11-kit packages are now available for Red Hat Enterprise Linux 6. The p11-kit package provides a mechanism to manage PKCS#11 modules. The p11-kit-trust subpackage includes a PKCS#11 trust module that provides certificate anchors and black lists based on configuration files. This enhancement update adds the p11-kit packages to Red Hat Enterprise Linux 6. (BZ# 915798 ) * Red Hat Enterprise Linux 6.5 provides the p11-kit package to implement the Shared System Certificates feature. If enabled by the administrator, it provides a system-wide trust store of static data that is used by crypto toolkits as input for certificate trust decisions. (BZ# 977886 ) These new packages had several bugs fixed during testing: * Support for using the freebl3 library for the SHA1 and MD5 cryptographic hash functions has been added even though the hashing is done in a strictly non-cryptographic context. (BZ# 983384 ) * All file handles opened by p11-kit are created with the O_CLOEXEC flag, so that they are automatically closed when the execve() function is called and do not leak to subprocesses. (BZ# 984986 ) * When expanding the "$HOME" variable or the "~/" path for SUID and SGID programs, the expand_home() function returns NULL. This change avoids vulnerabilities that could occur if SUID or SGID programs accidentally trusted this environment. Also, documentation concerning the fact that user directories are not read for SUID/SGID programs has been added. (BZ# 985014 ) * Users need to use the standard "$TMPDIR" environment variable for locating the temporary directory. (BZ# 985017 ) * If a critical module fails to initialize, module initialization stops and the user is informed about the failure. (BZ# 985023 ) * The p11_kit_space_strlen() function returns a "0" value for empty strings. (BZ# 985416 ) * Arguments of type size_t are correctly passed to the "p11_hash_xxx" functions. (BZ# 985421 ) * Changes in the code ensure that the memdup() function is not called with a zero length or NULL pointers. (BZ# 985433 ) All users who require the Shared System Certificates feature are advised to install these new packages.
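A hedged sketch of enabling the Shared System Certificates feature described above. The package names are the ones mentioned in this advisory; the update-ca-trust command is assumed to come from the ca-certificates package on Red Hat Enterprise Linux 6.5, so confirm the exact packaging in your environment:

# yum install p11-kit p11-kit-trust ca-certificates
# update-ca-trust enable     # switch the system to the p11-kit-based shared trust store
# update-ca-trust extract    # regenerate the consolidated trust output files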
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rhea-2013-1626
Chapter 3. Managing secured clusters
Chapter 3. Managing secured clusters To secure a Kubernetes or an OpenShift Container Platform cluster, you must deploy Red Hat Advanced Cluster Security for Kubernetes (RHACS) services into the cluster. You can generate deployment files in the RHACS portal by navigating to the Platform Configuration Clusters view, or you can use the roxctl CLI. 3.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: USD export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 3.2. Generating Sensor deployment files Generating files for Kubernetes systems Procedure Generate the required sensor configuration for your Kubernetes cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate k8s --name <cluster_name> --central "USDROX_ENDPOINT" Generating files for OpenShift Container Platform systems Procedure Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "USDROX_ENDPOINT" 1 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x . Read the --help output to see other options that you might need to use depending on your system architecture. Verify that the endpoint you provide for --central can be reached from the cluster where you are deploying Red Hat Advanced Cluster Security for Kubernetes services. Important If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), follow these guidelines: Use the WebSocket Secure ( wss ) protocol. To use wss , prefix the address with wss:// , and Add the port number after the address, for example: USD roxctl sensor generate k8s --central wss://stackrox-central.example.com:443 3.3. Installing Sensor by using the sensor.sh script When you generate the Sensor deployment files, roxctl creates a directory called sensor-<cluster_name> in your working directory. The script to install Sensor is located in this directory. Procedure Run the sensor installation script to install Sensor: USD ./sensor- <cluster_name> /sensor.sh If you get a warning that you do not have the required permissions to install Sensor, follow the on-screen instructions, or contact your cluster administrator for help. 3.4. Downloading Sensor bundles for existing clusters Procedure Run the following command to download Sensor bundles for existing clusters by specifying a cluster name or ID : USD roxctl sensor get-bundle <cluster_name_or_id> 3.5. Deleting cluster integration Procedure Before deleting the cluster, ensure you have the correct cluster name that you want to remove from Central: USD roxctl cluster delete --name= <cluster_name> Important Deleting the cluster integration does not remove the RHACS services running in the cluster, depending on the installation method. You can remove the services by running the delete-sensor.sh script from the Sensor installation bundle.
[ "export ROX_ENDPOINT= <host:port> 1", "roxctl sensor generate k8s --name <cluster_name> --central \"USDROX_ENDPOINT\"", "roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1", "roxctl sensor generate k8s --central wss://stackrox-central.example.com:443", "./sensor- <cluster_name> /sensor.sh", "roxctl sensor get-bundle <cluster_name_or_id>", "roxctl cluster delete --name= <cluster_name>" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/roxctl_cli/managing-secured-clusters-1
2.8. Performance Issues: Check the Red Hat Customer Portal
2.8. Performance Issues: Check the Red Hat Customer Portal For information on best practices for deploying and upgrading Red Hat Enterprise Linux clusters using the High Availability Add-On and Red Hat Global File System 2 (GFS2) see the article "Red Hat Enterprise Linux Cluster, High Availability, and GFS Deployment Best Practices" on Red Hat Customer Portal at https://access.redhat.com/site/articles/40051 .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-customer-portal
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack Before you install a OpenShift Container Platform cluster that uses single-root I/O virtualization (SR-IOV) or Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you must understand the requirements for each technology and then perform preparatory tasks. 2.1. Requirements for clusters on RHOSP that use either SR-IOV or OVS-DPDK If you use SR-IOV or OVS-DPDK with your deployment, you must meet the following requirements: RHOSP compute nodes must use a flavor that supports huge pages. 2.1.1. Requirements for clusters on RHOSP that use SR-IOV To use single-root I/O virtualization (SR-IOV) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) SR-IOV deployment . OpenShift Container Platform must support the NICs that you use. For a list of supported NICs, see "About Single Root I/O Virtualization (SR-IOV) hardware networks" in the "Hardware networks" subsection of the "Networking" documentation. For each node that will have an attached SR-IOV NIC, your RHOSP cluster must have: One instance from the RHOSP quota One port attached to the machines subnet One port for each SR-IOV Virtual Function A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 2.1.2. Requirements for clusters on RHOSP that use OVS-DPDK To use Open vSwitch with the Data Plane Development Kit (OVS-DPDK) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. 2.2. Preparing to install a cluster that uses SR-IOV You must configure RHOSP before you install a cluster that uses SR-IOV on it. When installing a cluster using SR-IOV, you must deploy clusters using cgroup v1. For more information, Enabling Linux control group version 1 (cgroup v1) . Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 2.2.1. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. 
Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. Procedure On a command line, create a radio RHOSP network: $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 2.3. Preparing to install a cluster that uses OVS-DPDK You must configure RHOSP before you install a cluster that uses OVS-DPDK on it. Complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. After you perform preinstallation tasks, install your cluster by following the most relevant OpenShift Container Platform on RHOSP installation instructions. Then, perform the tasks under "Next steps" on this page. 2.4. Next steps For either type of deployment: Configure the Node Tuning Operator with huge pages support . To complete SR-IOV configuration after you deploy your cluster: Install the SR-IOV Operator . Configure your SR-IOV network device . Create SR-IOV compute machines . Consult the following references after you deploy your cluster to improve its performance: A test pod template for clusters that use OVS-DPDK on OpenStack . A test pod template for clusters that use SR-IOV on OpenStack . A performance profile template for clusters that use OVS-DPDK on OpenStack
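To illustrate the flavor requirements mentioned earlier in this chapter (huge pages support, at least 16 GB memory, 4 vCPUs, and 25 GB storage), the following is a hedged sketch of creating a suitable flavor. The flavor name and the hw:mem_page_size extra spec value are assumptions to adapt to your RHOSP deployment:

$ openstack flavor create --ram 16384 --vcpus 4 --disk 25 \
    --property hw:mem_page_size=large ocp-sriov-worker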
[ "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_openstack/installing-openstack-nfv-preparing
11.2.2. Creating an LVM2 Logical Volume for Swap
11.2.2. Creating an LVM2 Logical Volume for Swap To add a swap logical volume (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to add): Create the LVM2 logical volume of size 256 MB: Format the new swap space: Add the following entry to the /etc/fstab file: Enable the new swap logical volume: Test that the swap space has been created and enabled properly:
[ "lvm lvcreate VolGroup00 -n LogVol02 -L 256M", "mkswap /dev/VolGroup00/LogVol02", "/dev/VolGroup00/LogVol02 swap swap defaults 0 0", "swapon -va", "cat /proc/swaps # free" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/adding_swap_space-creating_an_lvm2_logical_volume_for_swap
2.6. GRUB Commands
2.6. GRUB Commands GRUB allows a number of useful commands in its command line interface. Some of the commands accept options after their name; these options should be separated from the command and other options on that line by space characters. The following is a list of useful commands: boot - Boots the operating system or chain loader that was last loaded. chainloader </path/to/file> - Loads the specified file as a chain loader. If the file is located on the first sector of the specified partition, use the blocklist notation, +1 , instead of the file name. The following is an example chainloader command: displaymem - Displays the current use of memory, based on information from the BIOS. This is useful to determine how much RAM a system has prior to booting it. initrd </path/to/initrd> - Enables users to specify an initial RAM disk to use when booting. An initrd is necessary when the kernel needs certain modules in order to boot properly, such as when the root partition is formatted with the ext3 file system. The following is an example initrd command: install <stage-1> <install-disk> <stage-2> p config-file - Installs GRUB to the system MBR. <stage-1> - Signifies a device, partition, and file where the first boot loader image can be found, such as (hd0,0)/grub/stage1 . <install-disk> - Specifies the disk where the stage 1 boot loader should be installed, such as (hd0) . <stage-2> - Passes the stage 2 boot loader location to the stage 1 boot loader, such as (hd0,0)/grub/stage2 . p <config-file> - This option tells the install command to look for the menu configuration file specified by <config-file> , such as (hd0,0)/grub/grub.conf . Warning The install command overwrites any information already located on the MBR. kernel </path/to/kernel> <option-1> <option-N> ... - Specifies the kernel file to load when booting the operating system. Replace </path/to/kernel> with an absolute path from the partition specified by the root command. Replace <option-1> with options for the Linux kernel, such as root=/dev/VolGroup00/LogVol00 to specify the device on which the root partition for the system is located. Multiple options can be passed to the kernel in a space separated list. The following is an example kernel command: The option in the example specifies that the root file system for Linux is located on the hda5 partition. root ( <device-type> <device-number> , <partition> ) - Configures the root partition for GRUB, such as (hd0,0) , and mounts the partition. The following is an example root command: rootnoverify ( <device-type> <device-number> , <partition> ) - Configures the root partition for GRUB, just like the root command, but does not mount the partition. Other commands are also available; type help --all for a full list of commands. For a description of all GRUB commands, refer to the documentation available online at http://www.gnu.org/software/grub/manual/ .
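Taken together, the commands above are typically issued in a sequence such as the following at the GRUB command line; the kernel, initrd, and device paths are the examples used in this section and must be adjusted to match your system:

root (hd0,0)
kernel /vmlinuz-2.6.8-1.523 ro root=/dev/VolGroup00/LogVol00
initrd /initrd-2.6.8-1.523.img
boot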
[ "chainloader +1", "initrd /initrd-2.6.8-1.523.img", "kernel /vmlinuz-2.6.8-1.523 ro root=/dev/VolGroup00/LogVol00", "root (hd0,0)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-grub-commands
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/edge_management/1-latest/html/create_rhel_for_edge_images_and_configure_automated_management/making-open-source-more-inclusive
Chapter 12. Translators
Chapter 12. Translators 12.1. JBoss Data Virtualization Connector Architecture The process of integrating data from an enterprise information system into JBoss Data Virtualization requires one to two components: a translator (mandatory) and a resource adapter (optional), also known as a connector. Most of the time, this will be a Java EE Connector Architecture (JCA) Adapter. A translator is used to: translate JBoss Data Virtualization commands into commands understood by the datasource for which the translator is being used, execute those commands, return batches of results from the datasource, translated into the formats that JBoss Data Virtualization is expecting. A resource adapter (or connector): handles all communications with individual enterprise information systems, (which can include databases, data feeds, flat files and so forth), can be a JCA Adapter or any other custom connection provider (the JCA specification ensures the writing, packaging and configuration are undertaken in a consistent manner), Note Many software vendors provide JCA Adapters to access different systems. Red Hat recommends using vendor-supplied JCA Adapters when using JMS with JCA. See http://docs.oracle.com/cd/E21764_01/integration.1111/e10231/adptr_jms.htm removes concerns such as connection information, resource pooling, and authentication for translators. With a suitable translator (and optional resource adapter), any datasource or Enterprise Information System can be integrated with JBoss Data Virtualization.
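As a hedged, minimal sketch only: a custom translator is usually implemented against the Teiid translator API that underlies JBoss Data Virtualization by extending ExecutionFactory and marking the class with the @Translator annotation. The class name, translator name, and connection type parameters below are illustrative assumptions, not the full development procedure covered elsewhere in this guide:

import org.teiid.translator.ExecutionFactory;
import org.teiid.translator.Translator;
import org.teiid.translator.TranslatorException;

// Illustrative skeleton only; a real translator also overrides the create*Execution
// methods to translate and execute commands against its data source.
@Translator(name = "mycustom", description = "Example custom translator skeleton")
public class MyExecutionFactory extends ExecutionFactory<Object, Object> {

    @Override
    public void start() throws TranslatorException {
        super.start();
        // Perform one-time initialization of translator-level settings here.
    }
}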
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-translators
Appendix B. Metadata Server daemon configuration Reference
Appendix B. Metadata Server daemon configuration Reference Refer to this list of commands that can be used for the Metadata Server (MDS) daemon configuration. mon_force_standby_active Description If set to true , monitors force MDS in standby replay mode to be active. Set under the [mon] or [global] section in the Ceph configuration file. Type Boolean Default true max_mds Description The number of active MDS daemons during cluster creation. Set under the [mon] or [global] section in the Ceph configuration file. Type 32-bit Integer Default 1 mds_cache_memory_limit Description The memory limit the MDS enforces for its cache. Red Hat recommends using this parameter instead of the mds cache size parameter. Type 64-bit Integer Unsigned Default 4294967296 mds_cache_reservation Description The cache reservation, memory or inodes, for the MDS cache to maintain. The value is a percentage of the maximum cache configured. Once the MDS begins dipping into its reservation, it recalls client state until its cache size shrinks to restore the reservation. Type Float Default 0.05 mds_cache_size Description The number of inodes to cache. A value of 0 indicates an unlimited number. Red Hat recommends to use the mds_cache_memory_limit to limit the amount of memory the MDS cache uses. Type 32-bit Integer Default 0 mds_cache_mid Description The insertion point for new items in the cache LRU, from the top. Type Float Default 0.7 mds_dir_commit_ratio Description The fraction of directory that contains erroneous information before Ceph commits using a full update instead of partial update. Type Float Default 0.5 mds_dir_max_commit_size Description The maximum size of a directory update in MB before Ceph breaks the directory into smaller transactions. Type 32-bit Integer Default 90 mds_decay_halflife Description The half-life of the MDS cache temperature. Type Float Default 5 mds_beacon_interval Description The frequency, in seconds, of beacon messages sent to the monitor. Type Float Default 4 mds_beacon_grace Description The interval without beacons before Ceph declares a MDS laggy and possibly replaces it. Type Float Default 15 mds_blacklist_interval Description The blacklist duration for failed MDS daemons in the OSD map. Type Float Default 24.0*60.0 mds_session_timeout Description The interval, in seconds, of client inactivity before Ceph times out capabilities and leases. Type Float Default 60 mds_session_autoclose Description The interval, in seconds, before Ceph closes a laggy client's session. Type Float Default 300 mds_reconnect_timeout Description The interval, in seconds, to wait for clients to reconnect during a MDS restart. Type Float Default 45 mds_tick_interval Description How frequently the MDS performs internal periodic tasks. Type Float Default 5 mds_dirstat_min_interval Description The minimum interval, in seconds, to try to avoid propagating recursive statistics up the tree. Type Float Default 1 mds_scatter_nudge_interval Description How quickly changes in directory statistics propagate up. Type Float Default 5 mds_client_prealloc_inos Description The number of inode numbers to preallocate per client session. Type 32-bit Integer Default 1000 mds_early_reply Description Determines whether the MDS allows clients to see request results before they commit to the journal. Type Boolean Default true mds_use_tmap Description Use trivialmap for directory updates. Type Boolean Default true mds_default_dir_hash Description The function to use for hashing files across directory fragments. 
Type 32-bit Integer Default 2 ,that is, rjenkins mds_log Description Set to true if the MDS should journal metadata updates. Disable for benchmarking only. Type Boolean Default true mds_log_skip_corrupt_events Description Determines whether the MDS tries to skip corrupt journal events during journal replay. Type Boolean Default false mds_log_max_events Description The maximum events in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default -1 mds_log_max_segments Description The maximum number of segments or objects in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default 30 mds_log_max_expiring Description The maximum number of segments to expire in parallels. Type 32-bit Integer Default 20 mds_log_eopen_size Description The maximum number of inodes in an EOpen event. Type 32-bit Integer Default 100 mds_bal_sample_interval Description Determines how frequently to sample directory temperature when making fragmentation decisions. Type Float Default 3 mds_bal_replicate_threshold Description The maximum temperature before Ceph attempts to replicate metadata to other nodes. Type Float Default 8000 mds_bal_unreplicate_threshold Description The minimum temperature before Ceph stops replicating metadata to other nodes. Type Float Default 0 mds_bal_frag Description Determines whether or not the MDS fragments directories. Type Boolean Default false mds_bal_split_size Description The maximum directory size before the MDS splits a directory fragment into smaller bits. The root directory has a default fragment size limit of 10000. Type 32-bit Integer Default 10000 mds_bal_split_rd Description The maximum directory read temperature before Ceph splits a directory fragment. Type Float Default 25000 mds_bal_split_wr Description The maximum directory write temperature before Ceph splits a directory fragment. Type Float Default 10000 mds_bal_split_bits Description The number of bits by which to split a directory fragment. Type 32-bit Integer Default 3 mds_bal_merge_size Description The minimum directory size before Ceph tries to merge adjacent directory fragments. Type 32-bit Integer Default 50 mds_bal_merge_rd Description The minimum read temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_merge_wr Description The minimum write temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_interval Description The frequency, in seconds, of workload exchanges between MDS nodes. Type 32-bit Integer Default 10 mds_bal_fragment_interval Description The frequency, in seconds, of adjusting directory fragmentation. Type 32-bit Integer Default 5 mds_bal_idle_threshold Description The minimum temperature before Ceph migrates a subtree back to its parent. Type Float Default 0 mds_bal_max Description The number of iterations to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_max_until Description The number of seconds to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_mode Description The method for calculating MDS load: 1 = Hybrid. 2 = Request rate and latency. 3 = CPU load. Type 32-bit Integer Default 0 mds_bal_min_rebalance Description The minimum subtree temperature before Ceph migrates. Type Float Default 0.1 mds_bal_min_start Description The minimum subtree temperature before Ceph searches a subtree. 
Type Float Default 0.2 mds_bal_need_min Description The minimum fraction of target subtree size to accept. Type Float Default 0.8 mds_bal_need_max Description The maximum fraction of target subtree size to accept. Type Float Default 1.2 mds_bal_midchunk Description Ceph migrates any subtree that is larger than this fraction of the target subtree size. Type Float Default 0.3 mds_bal_minchunk Description Ceph ignores any subtree that is smaller than this fraction of the target subtree size. Type Float Default 0.001 mds_bal_target_removal_min Description The minimum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 5 mds_bal_target_removal_max Description The maximum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 10 mds_replay_interval Description The journal poll interval when in standby-replay mode for a hot standby . Type Float Default 1 mds_shutdown_check Description The interval for polling the cache during MDS shutdown. Type 32-bit Integer Default 0 mds_thrash_exports Description Ceph randomly exports subtrees between nodes. For testing purposes only. Type 32-bit Integer Default 0 mds_thrash_fragments Description Ceph randomly fragments or merges directories. Type 32-bit Integer Default 0 mds_dump_cache_on_map Description Ceph dumps the MDS cache contents to a file on each MDS map. Type Boolean Default false mds_dump_cache_after_rejoin Description Ceph dumps MDS cache contents to a file after rejoining the cache during recovery. Type Boolean Default false mds_verify_scatter Description Ceph asserts that various scatter/gather invariants are true . For developer use only. Type Boolean Default false mds_debug_scatterstat Description Ceph asserts that various recursive statistics invariants are true . For developer use only. Type Boolean Default false mds_debug_frag Description Ceph verifies directory fragmentation invariants when convenient. For developer use only. Type Boolean Default false mds_debug_auth_pins Description The debug authentication pin invariants. For developer use only. Type Boolean Default false mds_debug_subtrees Description Debugging subtree invariants. For developer use only. Type Boolean Default false mds_kill_mdstable_at Description Ceph injects a MDS failure in a MDS Table code. For developer use only. Type 32-bit Integer Default 0 mds_kill_export_at Description Ceph injects a MDS failure in the subtree export code. For developer use only. Type 32-bit Integer Default 0 mds_kill_import_at Description Ceph injects a MDS failure in the subtree import code. For developer use only. Type 32-bit Integer Default 0 mds_kill_link_at Description Ceph injects a MDS failure in a hard link code. For developer use only. Type 32-bit Integer Default 0 mds_kill_rename_at Description Ceph injects a MDS failure in the rename code. For developer use only. Type 32-bit Integer Default 0 mds_wipe_sessions Description Ceph deletes all client sessions on startup. For testing purposes only. Type Boolean Default 0 mds_wipe_ino_prealloc Description Ceph deletes inode preallocation metadata on startup. For testing purposes only. Type Boolean Default 0 mds_skip_ino Description The number of inode numbers to skip on startup. For testing purposes only. Type 32-bit Integer Default 0 mds_standby_for_name Description The MDS daemon is a standby for another MDS daemon of the name specified in this setting. 
Type String Default N/A mds_standby_for_rank Description An instance of the MDS daemon is a standby for another MDS daemon instance of this rank. Type 32-bit Integer Default -1 mds_standby_replay Description Determines whether the MDS daemon polls and replays the log of an active MDS when used as a hot standby . Type Boolean Default false
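For illustration, a hedged example of how a few of these options might appear in the Ceph configuration file. The values shown are the defaults listed above; mon_force_standby_active and max_mds go under [mon] or [global] as described, while placing the remaining MDS daemon options under an [mds] section is a conventional assumption to confirm against your deployment:

[mon]
mon_force_standby_active = true
max_mds = 1

[mds]
mds_cache_memory_limit = 4294967296
mds_cache_reservation = 0.05
mds_beacon_grace = 15
mds_session_timeout = 60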
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/file_system_guide/metadata-server-daemon-configuration-reference_fs
5.4. Optimizing the Query Process
5.4. Optimizing the Query Process Query performance depends on several criteria: The Lucene query. The number of objects loaded: use pagination or index projection where required. The way the Query Module interacts with the Lucene readers defines the appropriate reader strategy. Caching frequently extracted values from the index. 5.4.1. Caching Index Values: FieldCache The Lucene index identifies matches to queries. Once the query is performed, the results must be analyzed to extract useful information. The Lucene-based Query API is used to extract the Class type and the primary key. Extracting the required values from the index reduces performance. In some cases this may be minor; other cases may require caching. Requirements depend on the kind of projections being used and in some cases the Class type is not required. The @CacheFromIndex annotation is used to perform caching on the main metadata fields required by the Lucene-based Query API. Example 5.28. The @CacheFromIndex Annotation It is possible to cache Class types and IDs using this annotation: CLASS : The Query Module uses a Lucene FieldCache to improve performance of the Class type extraction from the index. This value is enabled by default. The Lucene-based Query API applies this value when the @CacheFromIndex annotation is not specified. ID : Extracting the primary identifier uses a cache. This method produces the best querying results; however, it may reduce performance. Note Measure the performance and memory consumption impact after warmup (executing some queries). Performance may improve by enabling Field Caches but this is not always the case. Using a FieldCache has the following two disadvantages: Memory usage: Typically the CLASS cache has lower requirements than the ID cache. Index warmup: When using field caches, the first query on a new index or segment is slower than when caching is disabled. Some queries may not require a class type, and ignore the CLASS field cache even when enabled. For example, when targeting a single class, all returned values are of that type. The ID FieldCache requires the ids of targeted entities to be using a TwoWayFieldBridge . All types being loaded in a specific query must use the fieldname for the id and have ids of the same type. This is evaluated at query execution.
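A slightly fuller, hedged version of Example 5.28, showing the annotation on a complete indexed class; the id and title properties are assumptions added only to make the snippet self-contained:

import static org.hibernate.search.annotations.FieldCacheType.CLASS;
import static org.hibernate.search.annotations.FieldCacheType.ID;

import org.hibernate.search.annotations.CacheFromIndex;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Indexed
@CacheFromIndex({ CLASS, ID })
public class Essay {

    private Long id;      // primary identifier; the ID cache requires it to use a TwoWayFieldBridge
    private String title;

    @Field
    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }
}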
[ "import static org.hibernate.search.annotations.FieldCacheType.CLASS; import static org.hibernate.search.annotations.FieldCacheType.ID; @Indexed @CacheFromIndex( { CLASS, ID } ) public class Essay {" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-optimizing_the_query_process
probe::ioscheduler_trace.unplug_io
probe::ioscheduler_trace.unplug_io Name probe::ioscheduler_trace.unplug_io - Fires when a request queue is unplugged; Synopsis ioscheduler_trace.unplug_io Values name Name of the probe point rq_queue request queue Description Fires either when the number of pending requests in the queue exceeds the threshold, or upon expiration of the timer that was activated when the queue was plugged.
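A hedged, minimal SystemTap script that uses this probe point and the two values documented above; the output format is illustrative only:

#!/usr/bin/stap
# Print the request queue address each time a queue is unplugged.
probe ioscheduler_trace.unplug_io {
    printf("%s: queue=0x%x\n", name, rq_queue)
}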
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ioscheduler-trace-unplug-io
Chapter 1. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer
Chapter 1. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer From OpenShift Container Platform 4.15 and later versions, you can use the Assisted Installer to install a cluster on Oracle(R) Cloud Infrastructure (OCI) by using infrastructure that you provide. 1.1. The Assisted Installer and OCI overview You can run cluster workloads on Oracle(R) Cloud Infrastructure (OCI) infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. Both Red Hat and Oracle test, validate, and support running OCI in an OpenShift Container Platform cluster on OCI. The Assisted Installer supports the OCI platform, and you can use the Assisted Installer to access an intuitive interactive workflow for the purposes of automating cluster installation tasks on OCI. Figure 1.1. Workflow for using the Assisted Installer in a connected environment to install a cluster on OCI OCI provides services that can meet your needs for regulatory compliance, performance, and cost-effectiveness. You can access OCI Resource Manager configurations to provision and configure OCI resources. Important The steps for provisioning OCI resources are provided as an example only. You can also choose to create the required resources through other methods; the scripts are just an example. Installing a cluster with infrastructure that you provide requires knowledge of the cloud provider and the installation process on OpenShift Container Platform. You can access OCI Resource Manager configurations to complete these steps, or use the configurations to model your own custom script. Follow the steps in the Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer document to understand how to use the Assisted Installer to install a OpenShift Container Platform cluster on OCI. This document demonstrates the use of the OCI Cloud Controller Manager (CCM) and Oracle's Container Storage Interface (CSI) objects to link your OpenShift Container Platform cluster with the OCI API. Important To ensure the best performance conditions for your cluster workloads that operate on OCI, ensure that volume performance units (VPUs) for your block volume are sized for your workloads. The following list provides guidance for selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Basic environment: 500 GB, and 60 VPUs. Heavy production environment: More than 500 GB, and 100 or more VPUs. Consider reserving additional VPUs to provide sufficient capacity for updates and scaling activities. For more information about VPUs, see Volume Performance Units (Oracle documentation). If you are unfamiliar with the OpenShift Container Platform Assisted Installer, see "Assisted Installer for OpenShift Container Platform". Additional resources Assisted Installer for OpenShift Container Platform Internet access for OpenShift Container Platform Volume Performance Units (Oracle documentation) Instance Sizing Recommendations for OpenShift Container Platform on OCI Nodes (Oracle) documentation 1.2. Creating OCI resources and services Create Oracle(R) Cloud Infrastructure (OCI) resources and services so that you can establish infrastructure with governance standards that meets your organization's requirements. Prerequisites You configured an OCI account to host the cluster. See Prerequisites (Oracle documentation) . Procedure Log in to your Oracle Cloud Infrastructure (OCI) account with administrator privileges. 
Download an archive file from an Oracle resource. The archive file includes files for creating cluster resources and custom manifests. The archive file also includes a script, and when you run the script, the script creates OCI resources, such as DNS records, an instance, and so on. For more information, see Configuration Files (Oracle documentation) . 1.3. Using the Assisted Installer to generate an OCI-compatible discovery ISO image Generate a discovery ISO image and upload the image to Oracle(R) Cloud Infrastructure (OCI), so that the agent can perform hardware and network validation checks before you install an OpenShift Container Platform cluster on OCI. From the OCI web console, you must create the following resources: A compartment for better organizing, restricting access, and setting usage limits to OCI resources. An object storage bucket for safely and securely storing the discovery ISO image. You can access the image at a later stage for the purposes of booting the instances, so that you can then create your cluster. Prerequisites You created a child compartment and an object storage bucket on OCI. See Provisioning Cloud Infrastructure (OCI Console) in the Oracle documentation. You reviewed details about the OpenShift Container Platform installation and update processes. If you use a firewall and you plan to use a Telemetry service, you configured your firewall to allow OpenShift Container Platform to access the sites required. Before you create a virtual machines (VM), see Cloud instance types (Red Hat Ecosystem Catalog portal) to identify the supported OCI VM shapes. Procedure From the Install OpenShift with the Assisted Installer page on the Hybrid Cloud Console, generate the discovery ISO image by completing all the required Assisted Installer steps. In the Cluster Details step, complete the following fields: Field Action required Cluster name Specify the name of your cluster, such as ocidemo . Base domain Specify the base domain of the cluster, such as splat-oci.devcluster.openshift.com . Provided you previously created a compartment on OCI, you can get this information by going to DNS management Zones List scope and then selecting the parent compartment. Your base domain should show under the Public zones tab. OpenShift version Specify OpenShift 4.15 or a later version. CPU architecture Specify x86_64 or Arm64 . Integrate with external partner platforms Specify Oracle Cloud Infrastructure . After you specify this value, the Include custom manifests checkbox is selected by default. On the Operators page, click . On the Host Discovery page, click Add hosts . For the SSH public key field, add your SSH key from your local system. Tip You can create an SSH authentication key pair by using the ssh-keygen tool. Click Generate Discovery ISO to generate the discovery ISO image file. Download the file to your local system. Upload the discovery ISO image to the OCI bucket. See Uploading an Object Storage Object to a Bucket (Oracle documentation) . You must create a pre-authenticated request for your uploaded discovery ISO image. Ensure that you make note of the URL from the pre-authenticated request, because you must specify the URL at a later stage when you create an OCI stack. Additional resources Installation and update Configuring your firewall 1.4. Provisioning OCI infrastructure for your cluster By using the Assisted Installer to create details for your OpenShift Container Platform cluster, you can specify these details in a stack. 
A stack is an OCI feature where you can automate the provisioning of all necessary OCI infrastructure resources, such as the custom image, that are required for installing an OpenShift Container Platform cluster on OCI. The Oracle(R) Cloud Infrastructure (OCI) Compute Service creates a virtual machine (VM) instance on OCI. This instance can then automatically attach to a virtual network interface controller (vNIC) in the virtual cloud network (VCN) subnet. On specifying the IP address of your OpenShift Container Platform cluster in the custom manifest template files, the OCI instance can communicate with your cluster over the VCN. Prerequisites You uploaded the discovery ISO image to the OCI bucket. For more information, see "Using the Assisted Installer to generate an OCI-compatible discovery ISO image". Procedure Complete the steps for provisioning OCI infrastructure for your OpenShift Container Platform cluster. See Creating OpenShift Container Platform Infrastructure Using Resource Manager (Oracle documentation) . Create a stack, and then edit the custom manifest files according to the steps in the Editing the OpenShift Custom Manifests (Oracle documentation) . 1.5. Completing the remaining Assisted Installer steps After you provision Oracle(R) Cloud Infrastructure (OCI) resources and upload OpenShift Container Platform custom manifest configuration files to OCI, you must complete the remaining cluster installation steps on the Assisted Installer before you can create an instance OCI. Prerequisites You created a resource stack on OCI that includes the custom manifest configuration files and OCI Resource Manager configuration resources. See "Provisioning OCI infrastructure for your cluster". Procedure From the Red Hat Hybrid Cloud Console web console, go to the Host discovery page. Under the Role column, select either Control plane node or Worker for each targeted hostname. Important Before, you can continue to the steps, wait for each node to reach the Ready status. Accept the default settings for the Storage and Networking steps, and then click . On the Custom manifests page, in the Folder field, select manifest . This is the Assisted Installer folder where you want to save the custom manifest file. In the File name field, enter a value such as oci-ccm.yml . From the Content section, click Browse , and select the CCM manifest from your drive located in custom_manifest/manifests/oci-ccm.yml . Expand the Custom manifest section and repeat the same steps for the following manifests: CSI driver manifest: custom_manifest/manifests/oci-csi.yml CCM machine configuration: custom_manifest/openshift/machineconfig-ccm.yml CSI driver machine configuration: custom_manifest/openshift/machineconfig-csi.yml From the Review and create page, click Install cluster to create your OpenShift Container Platform cluster on OCI. After the cluster installation and initialization operations, the Assisted Installer indicates the completion of the cluster installation operation. For more information, see "Completing the installation" section in the Assisted Installer for OpenShift Container Platform document. Additional resources Assisted Installer for OpenShift Container Platform 1.6. Verifying a successful cluster installation on OCI Verify that your cluster was installed and is running effectively on Oracle(R) Cloud Infrastructure (OCI). Procedure From the Hybrid Cloud Console, go to Clusters > Assisted Clusters and select your cluster's name. 
Check that the Installation progress bar is at 100% and a message displays indicating "Installation completed successfully". To access the OpenShift Container Platform web console, click the provided Web Console URL. Go to the Nodes menu page. Locate your node from the Nodes table. From the Overview tab, check that your node has a Ready status. Select the YAML tab. Check the labels parameter, and verify that the listed labels apply to your configuration. For example, the topology.kubernetes.io/region=us-sanjose-1 label indicates in what OCI region the node was deployed. 1.7. Troubleshooting the installation of a cluster on OCI If you experience issues with using the Assisted Installer to install an OpenShift Container Platform cluster on Oracle(R) Cloud Infrastructure (OCI), read the following sections to troubleshoot common problems. The Ingress Load Balancer in OCI is not at a healthy status This issue is classed as a Warning because by using the Resource Manager to create a stack, you created a pool of compute nodes, 3 by default, that are automatically added as backend listeners for the Ingress Load Balancer. By default, the OpenShift Container Platform deploys 2 router pods, which are based on the default values from the OpenShift Container Platform manifest files. The Warning is expected because a mismatch exists with the number of router pods available, two, to run on the three compute nodes. Figure 1.2. Example of a Warning message that is under the Backend set information tab on OCI: You do not need to modify the Ingress Load Balancer configuration. Instead, you can point the Ingress Load Balancer to specific compute nodes that operate in your cluster on OpenShift Container Platform. To do this, use placement mechanisms, such as annotations, on OpenShift Container Platform to ensure router pods only run on the compute nodes that you originally configured on the Ingress Load Balancer as backend listeners. OCI create stack operation fails with an Error: 400-InvalidParameter message On attempting to create a stack on OCI, you identified that the Logs section of the job outputs an error message. For example: Error: 400-InvalidParameter, DNS Label oci-demo does not follow Oracle requirements Suggestion: Please update the parameter(s) in the Terraform config as per error message DNS Label oci-demo does not follow Oracle requirements Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_vcn Go to the Install OpenShift with the Assisted Installer page on the Hybrid Cloud Console, and check the Cluster name field on the Cluster Details step. Remove any special characters, such as a hyphen ( - ), from the name, because these special characters are not compatible with the OCI naming conventions. For example, change oci-demo to ocidemo . Additional resources Troubleshooting OpenShift Container Platform on OCI (Oracle documentation) Installing an on-premise cluster using the Assisted Installer
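As a hedged command-line alternative to the console checks above, node status and the region label can also be inspected from a terminal; the label value shown is the example used earlier in this section:

$ oc get nodes
$ oc get nodes --show-labels | grep topology.kubernetes.io/region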
[ "Error: 400-InvalidParameter, DNS Label oci-demo does not follow Oracle requirements Suggestion: Please update the parameter(s) in the Terraform config as per error message DNS Label oci-demo does not follow Oracle requirements Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_vcn" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_oci/installing-oci-assisted-installer
probe::nfs.aop.write_end
probe::nfs.aop.write_end Name probe::nfs.aop.write_end - NFS client completes writing data Synopsis nfs.aop.write_end Values sb_flag super block flags __page the address of page page_index offset within mapping, can be used as a page identifier and position identifier in the page frame to end address of this write operation ino inode number i_flag file flags size write bytes dev device identifier offset start address of this write operation i_size file length in bytes Description Fires when a write operation is done on NFS, often after prepare_write. Updates and possibly writes a cached page of an NFS file.
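A hedged, minimal SystemTap script for this probe point, printing a few of the documented values; the formatting is illustrative only:

#!/usr/bin/stap
# Report each NFS write_end with its device, inode, offset, and size.
probe nfs.aop.write_end {
    printf("dev=%d ino=%d offset=%d size=%d\n", dev, ino, offset, size)
}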
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-aop-write-end
25.19. Modifying Link Loss Behavior
25.19. Modifying Link Loss Behavior This section describes how to modify the link loss behavior of devices that use either Fibre Channel or iSCSI protocols. 25.19.1. Fibre Channel If a driver implements the Transport dev_loss_tmo callback, access attempts to a device through a link will be blocked when a transport problem is detected. To verify if a device is blocked, run the following command: This command will return blocked if the device is blocked. If the device is operating normally, this command will return running . Procedure 25.15. Determining the State of a Remote Port To determine the state of a remote port, run the following command: This command will return Blocked when the remote port (along with devices accessed through it) are blocked. If the remote port is operating normally, the command will return Online . If the problem is not resolved within dev_loss_tmo seconds, the rport and devices will be unblocked and all I/O running on that device (along with any new I/O sent to that device) will be failed. Procedure 25.16. Changing dev_loss_tmo To change the dev_loss_tmo value, echo in the desired value to the file. For example, to set dev_loss_tmo to 30 seconds, run: For more information about dev_loss_tmo , refer to Section 25.4.1, "Fibre Channel API" . When a link loss exceeds dev_loss_tmo , the scsi_device and sd N devices are removed. Typically, the Fibre Channel class will leave the device as is; i.e. /dev/sd x will remain /dev/sd x . This is because the target binding is saved by the Fibre Channel driver so when the target port returns, the SCSI addresses are recreated faithfully. However, this cannot be guaranteed; the sd x will be restored only if no additional change on in-storage box configuration of LUNs is made. 25.19.2. iSCSI Settings with dm-multipath If dm-multipath is implemented, it is advisable to set iSCSI timers to immediately defer commands to the multipath layer. To configure this, nest the following line under device { in /etc/multipath.conf : This ensures that I/O errors are retried and queued if all paths are failed in the dm-multipath layer. You may need to adjust iSCSI timers further to better monitor your SAN for problems. Available iSCSI timers you can configure are NOP-Out Interval/Timeouts and replacement_timeout , which are discussed in the following sections. 25.19.2.1. NOP-Out Interval/Timeout To help monitor problems the SAN, the iSCSI layer sends a NOP-Out request to each target. If a NOP-Out request times out, the iSCSI layer responds by failing any running commands and instructing the SCSI layer to requeue those commands when possible. When dm-multipath is being used, the SCSI layer will fail those running commands and defer them to the multipath layer. The multipath layer then retries those commands on another path. If dm-multipath is not being used, those commands are retried five times before failing altogether. Intervals between NOP-Out requests are 5 seconds by default. To adjust this, open /etc/iscsi/iscsid.conf and edit the following line: Once set, the iSCSI layer will send a NOP-Out request to each target every [interval value] seconds. By default, NOP-Out requests time out in 5 seconds [9] . To adjust this, open /etc/iscsi/iscsid.conf and edit the following line: This sets the iSCSI layer to timeout a NOP-Out request after [timeout value] seconds. SCSI Error Handler If the SCSI Error Handler is running, running commands on a path will not be failed immediately when a NOP-Out request times out on that path. 
Instead, those commands will be failed after replacement_timeout seconds. For more information about replacement_timeout , refer to Section 25.19.2.2, " replacement_timeout " . To verify if the SCSI Error Handler is running, run: 25.19.2.2. replacement_timeout replacement_timeout controls how long the iSCSI layer should wait for a timed-out path/session to reestablish itself before failing any commands on it. The default replacement_timeout value is 120 seconds. To adjust replacement_timeout , open /etc/iscsi/iscsid.conf and edit the following line: The 1 queue_if_no_path option in /etc/multipath.conf sets iSCSI timers to immediately defer commands to the multipath layer (refer to Section 25.19.2, "iSCSI Settings with dm-multipath " ). This setting prevents I/O errors from propagating to the application; because of this, you can set replacement_timeout to 15-20 seconds. By configuring a lower replacement_timeout , I/O is quickly sent to a new path and executed (in the event of a NOP-Out timeout) while the iSCSI layer attempts to re-establish the failed path/session. If all paths time out, then the multipath and device mapper layer will internally queue I/O based on the settings in /etc/multipath.conf instead of /etc/iscsi/iscsid.conf . Important Whether your main consideration is failover speed or security, the recommended value for replacement_timeout will depend on other factors. These factors include the network, target, and system workload. As such, it is recommended that you thoroughly test any new replacement_timeout configuration before applying it to a mission-critical system. iSCSI and DM Multipath overrides The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override recovery_tmo values: The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices. For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value. The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices. The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout . Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath because DM Multipath always resets recovery_tmo when the multipathd service reloads. 25.19.3. iSCSI Root When accessing the root partition directly through an iSCSI disk, the iSCSI timers should be set so that the iSCSI layer has several chances to try to reestablish a path/session. In addition, commands should not be quickly re-queued to the SCSI layer. This is the opposite of what should be done when dm-multipath is implemented. To start with, NOP-Outs should be disabled. You can do this by setting both NOP-Out interval and timeout to zero. To set this, open /etc/iscsi/iscsid.conf and edit as follows: In line with this, replacement_timeout should be set to a high number. This will instruct the system to wait a long time for a path/session to reestablish itself. To adjust replacement_timeout , open /etc/iscsi/iscsid.conf and edit the following line: After configuring /etc/iscsi/iscsid.conf , you must perform a re-discovery of the affected storage. This will allow the system to load and use any new values in /etc/iscsi/iscsid.conf . For more information on how to discover iSCSI devices, refer to Section 25.15, "Scanning iSCSI Interconnects" .
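The following sketch shows illustrative /etc/iscsi/iscsid.conf values for the two scenarios discussed in this section; the numeric values are examples only and should be validated in your own environment, and the features line is the multipath.conf setting referenced in Section 25.19.2:

    # /etc/iscsi/iscsid.conf - illustrative values when dm-multipath is used
    node.conn[0].timeo.noop_out_interval = 10
    node.conn[0].timeo.noop_out_timeout = 10
    node.session.timeo.replacement_timeout = 15

    # /etc/iscsi/iscsid.conf - illustrative values for an iSCSI root partition
    node.conn[0].timeo.noop_out_interval = 0
    node.conn[0].timeo.noop_out_timeout = 0
    node.session.timeo.replacement_timeout = 86400

    # /etc/multipath.conf - nested under the relevant device { } stanza
    features "1 queue_if_no_path"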
Configuring Timeouts for a Specific Session You can also configure timeouts for a specific session and make them non-persistent (instead of using /etc/iscsi/iscsid.conf ). To do so, run the following command (replace the variables accordingly): Important The configuration described here is recommended for iSCSI sessions involving root partition access. For iSCSI sessions involving access to other types of storage (namely, in systems that use dm-multipath ), refer to Section 25.19.2, "iSCSI Settings with dm-multipath " . [9] Prior to Red Hat Enterprise Linux 5.4, the default NOP-Out request timeout was 15 seconds.
[ "cat /sys/block/ device /device/state", "cat /sys/class/fc_remote_port/rport- H : B : R /port_state", "echo 30 > /sys/class/fc_remote_port/rport- H : B : R /dev_loss_tmo", "features \"1 queue_if_no_path\"", "node.conn[0].timeo.noop_out_interval = [interval value]", "node.conn[0].timeo.noop_out_timeout = [timeout value]", "iscsiadm -m session -P 3", "node.session.timeo.replacement_timeout = [replacement_timeout]", "node.conn[0].timeo.noop_out_interval = 0 node.conn[0].timeo.noop_out_timeout = 0", "node.session.timeo.replacement_timeout = replacement_timeout", "iscsiadm -m node -T target_name -p target_IP : port -o update -n node.session.timeo.replacement_timeout -v USD timeout_value" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/modifying-link-loss-behavior
15.3.3. Performing a Network Installation
15.3.3. Performing a Network Installation When you start an installation with the askmethod or repo= options, you can install Red Hat Enterprise Linux from a network server using FTP, HTTP, HTTPS, or NFS protocols. Anaconda uses the same network connection to consult additional software repositories later in the installation process. If your system has more than one network device, anaconda presents you with a list of all available devices and prompts you to select one to use during installation. If your system only has a single network device, anaconda automatically selects it and does not present this dialog. Figure 15.6. Networking Device If you are not sure which device in the list corresponds to which physical socket on the system, select a device in the list then press the Identify button. The Identify NIC dialog appears. Figure 15.7. Identify NIC The sockets of most network devices feature an activity light (also called a link light ) - an LED that flashes to indicate that data is flowing through the socket. Anaconda can flash the activity light of the network device that you selected in the Networking Device dialog for up to 30 seconds. Enter the number of seconds that you require, then press OK . When anaconda finishes flashing the light, it returns you to the Networking Device dialog. When you select a network device, anaconda prompts you to choose how to configure TCP/IP: IPv4 options Dynamic IP configuration (DHCP) Anaconda uses DHCP running on the network to supply the network configuration automatically. Manual configuration Anaconda prompts you to enter the network configuration manually, including the IP address for this system, the netmask, the gateway address, and the DNS address. IPv6 options Automatic Anaconda uses router advertisement (RA) and DHCP for automatic configuration, based on the network environment. (Equivalent to the Automatic option in NetworkManager ) Automatic, DHCP only Anaconda does not use RA, but requests information from DHCPv6 directly to create a stateful configuration. (Equivalent to the Automatic, DHCP only option in NetworkManager ) Manual configuration Anaconda prompts you to enter the network configuration manually, including the IP address for this system, the netmask, the gateway address, and the DNS address. Anaconda supports the IPv4 and IPv6 protocols. However, if you configure an interface to use both IPv4 and IPv6, the IPv4 connection must succeed or the interface will not work, even if the IPv6 connection succeeds. Figure 15.8. Configure TCP/IP By default, anaconda uses DHCP to provide network settings automatically for IPv4 and automatic configuration to provide network settings for IPv6. If you choose to configure TCP/IP manually, anaconda prompts you to provide the details in the Manual TCP/IP Configuration dialog: Figure 15.9. Manual TCP/IP Configuration The dialog provides fields for IPv4 and IPv6 addresses and prefixes, depending on the protocols that you chose to configure manually, together with fields for the network gateway and name server. Enter the details for your network, then press OK . When the installation process completes, it will transfer these settings to your system. If you are installing via NFS, proceed to Section 15.3.4, "Installing via NFS" . If you are installing via Web or FTP, proceed to Section 15.3.5, "Installing via FTP, HTTP, or HTTPS" .
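As an illustration of the askmethod and repo= options mentioned at the beginning of this section, the following options could be appended to the installer boot command; the server name and paths are placeholders, and the exact form depends on your installation server:

    askmethod
    repo=nfs:server.example.com:/exports/rhel6/
    repo=http://server.example.com/rhel6/os/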
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-begininstall-perform-nfs-ppc
Chapter 17. Configuring JAX-WS Endpoints
Chapter 17. Configuring JAX-WS Endpoints Abstract JAX-WS endpoints are configured using one of three Spring configuration elements. The correct element depends on what type of endpoint you are configuring and which features you wish to use. For consumers you use the jaxws:client element. For service providers you can use either the jaxws:endpoint element or the jaxws:server element. The information used to define an endpoint is typically defined in the endpoint's contract. You can use the configuration elements to override the information in the contract. You can also use the configuration elements to provide information that is not provided in the contract. You must use the configuration elements to activate advanced features such as WS-RM. This is done by providing child elements to the endpoint's configuration element. Note that when dealing with endpoints developed using a Java-first approach it is likely that the SEI serving as the endpoint's contract is lacking information about the type of binding and transport to use. 17.1. Configuring Service Providers 17.1.1. Elements for Configuring Service Providers Apache CXF has two elements that can be used to configure a service provider: Section 17.1.2, "Using the jaxws:endpoint Element" Section 17.1.3, "Using the jaxws:server Element" The differences between the two elements are largely internal to the runtime. The jaxws:endpoint element injects properties into the org.apache.cxf.jaxws.EndpointImpl object created to support a service endpoint. The jaxws:server element injects properties into the org.apache.cxf.jaxws.support.JaxWsServerFactoryBean object created to support the endpoint. The EndpointImpl object passes the configuration data to the JaxWsServerFactoryBean object. The JaxWsServerFactoryBean object is used to create the actual service object. Because either configuration element will configure a service endpoint, you can choose based on the syntax you prefer. 17.1.2. Using the jaxws:endpoint Element Overview The jaxws:endpoint element is the default element for configuring JAX-WS service providers. Its attributes and children specify all of the information needed to instantiate a service provider. Many of the attributes map to information in the service's contract. The children are used to configure interceptors and other advanced features. Identifying the endpoint being configured For the runtime to apply the configuration to the proper service provider, it must be able to identify it. The basic means for identifying a service provider is to specify the class that implements the endpoint. This is done using the jaxws:endpoint element's implementor attribute. For instances where different endpoints share a common implementation, it is possible to provide different configuration for each endpoint. There are two approaches for distinguishing a specific endpoint in configuration: a combination of the serviceName attribute and the endpointName attribute The serviceName attribute specifies the wsdl:service element defining the service's endpoint. The endpointName attribute specifies the specific wsdl:port element defining the service's endpoint. Both attributes are specified as QNames using the format ns : name . ns is the namespace of the element and name is the value of the element's name attribute. Note If the wsdl:service element only has one wsdl:port element, the endpointName attribute can be omitted. the name attribute The name attribute specifies the QName of the specific wsdl:port element defining the service's endpoint.
The QName is provided in the format { ns } localPart . ns is the namespace of the wsdl:port element and localPart is the value of the wsdl:port element's name attribute. Attributes The attributes of the jaxws:endpoint element configure the basic properties of the endpoint. These properties include the address of the endpoint, the class that implements the endpoint, and the bus that hosts the endpoint. Table 17.1, "Attributes for Configuring a JAX-WS Service Provider Using the jaxws:endpoint Element" describes the attributes of the jaxws:endpoint element. Table 17.1. Attributes for Configuring a JAX-WS Service Provider Using the jaxws:endpoint Element Attribute Description id Specifies a unique identifier that other configuration elements can use to refer to the endpoint. implementor Specifies the class implementing the service. You can specify the implementation class using either the class name or an ID reference to a Spring bean configuring the implementation class. This class must be on the classpath. implementorClass Specifies the class implementing the service. This attribute is useful when the value provided to the implementor attribute is a reference to a bean that is wrapped using Spring AOP. address Specifies the address of an HTTP endpoint. This value overrides the value specified in the service's contract. wsdlLocation Specifies the location of the endpoint's WSDL contract. The WSDL contract's location is relative to the folder from which the service is deployed. endpointName Specifies the value of the service's wsdl:port element's name attribute. It is specified as a QName using the format ns : name where ns is the namespace of the wsdl:port element. serviceName Specifies the value of the service's wsdl:service element's name attribute. It is specified as a QName using the format ns : name where ns is the namespace of the wsdl:service element. publish Specifies if the service should be automatically published. If this is set to false , the developer must explicitly publish the endpoint as described in Chapter 31, Publishing a Service . bus Specifies the ID of the Spring bean configuring the bus used to manage the service endpoint. This is useful when configuring several endpoints to use a common set of features. bindingUri Specifies the ID of the message binding the service uses. A list of valid binding IDs is provided in Chapter 23, Apache CXF Binding IDs . name Specifies the stringified QName of the service's wsdl:port element. It is specified as a QName using the format { ns } localPart . ns is the namespace of the wsdl:port element and localPart is the value of the wsdl:port element's name attribute. abstract Specifies if the bean is an abstract bean. Abstract beans act as parents for concrete bean definitions and are not instantiated. The default is false . Setting this to true instructs the bean factory not to instantiate the bean. depends-on Specifies a list of beans that the endpoint depends on being instantiated before it can be instantiated. createdFromAPI Specifies that the user created that bean using Apache CXF APIs, such as Endpoint.publish() or Service.getPort() . The default is false . Setting this to true does the following: Changes the internal name of the bean by appending .jaxws-endpoint to its id Makes the bean abstract publishedEndpointUrl The URL that is placed in the address element of the generated WSDL. If this value is not specified, the value of the address attribute is used.
This attribute is useful when the "public" URL is not the same as the URL on which the service is deployed. In addition to the attributes listed in Table 17.1, "Attributes for Configuring a JAX-WS Service Provider Using the jaxws:endpoint Element" , you might need to use multiple xmlns: shortName attributes to declare the namespaces used by the endpointName and serviceName attributes. Example Example 17.1, "Simple JAX-WS Endpoint Configuration" shows the configuration for a JAX-WS endpoint that specifies the address where the endpoint is published. The example assumes that you want to use the defaults for all other values or that the implementation has specified values in the annotations. Example 17.1. Simple JAX-WS Endpoint Configuration Example 17.2, "JAX-WS Endpoint Configuration with a Service Name" shows the configuration for a JAX-WS endpoint whose contract contains two service definitions. In this case, you must specify which service definition to instantiate using the serviceName attribute. Example 17.2. JAX-WS Endpoint Configuration with a Service Name The xmlns:samp attribute specifies the namespace in which the WSDL service element is defined. Example 17.3, "JAX-WS Endpoint Configuration with HTTP/2 enabled" shows the configuration for a JAX-WS endpoint that specifies the address with HTTP/2 enabled. Configuring HTTP/2 for Apache CXF HTTP/2 is supported when using the standalone Apache CXF Undertow transport ( http-undertow ) on Apache Karaf. To enable the HTTP/2 protocol you must set the jaxws:endpoint element's address attribute as an absolute URL and set the org.apache.cxf.transports.http_undertow.EnableHttp2 property to true . Note This HTTP/2 implementation only supports server side HTTP/2 transport with plain HTTP or HTTPS. Example 17.3. JAX-WS Endpoint Configuration with HTTP/2 enabled Note For improved performance, Red Hat recommends using the servlet transport on Apache Karaf ( pax-web-undertow ), which enables centralized configuration and tuning of the web container; however, pax-web-undertow does not support the HTTP/2 transport protocol. 17.1.3. Using the jaxws:server Element Overview The jaxws:server element is an element for configuring JAX-WS service providers. It injects the configuration information into the org.apache.cxf.jaxws.support.JaxWsServerFactoryBean . This is an Apache CXF-specific object. If you are using a pure Spring approach to building your services, you will not be forced to use Apache CXF-specific APIs to interact with the service. The attributes and children of the jaxws:server element specify all of the information needed to instantiate a service provider. The attributes specify the information that is required to instantiate an endpoint. The children are used to configure interceptors and other advanced features. Identifying the endpoint being configured In order for the runtime to apply the configuration to the proper service provider, it must be able to identify it. The basic means for identifying a service provider is to specify the class that implements the endpoint. This is done using the jaxws:server element's serviceBean attribute. For instances where different endpoints share a common implementation, it is possible to provide different configuration for each endpoint. There are two approaches for distinguishing a specific endpoint in configuration: a combination of the serviceName attribute and the endpointName attribute The serviceName attribute specifies the wsdl:service element defining the service's endpoint.
The endpointName attribute specifies the specific wsdl:port element defining the service's endpoint. Both attributes are specified as QNames using the format ns : name . ns is the namespace of the element and name is the value of the element's name attribute. Note If the wsdl:service element only has one wsdl:port element, the endpointName attribute can be omitted. the name attribute The name attribute specifies the QName of the specific wsdl:port element defining the service's endpoint. The QName is provided in the format { ns } localPart . ns is the namespace of the wsdl:port element and localPart is the value of the wsdl:port element's name attribute. Attributes The attributes of the jaxws:server element configure the basic properties of the endpoint. These properties include the address of the endpoint, the class that implements the endpoint, and the bus that hosts the endpoint. Table 17.2, "Attributes for Configuring a JAX-WS Service Provider Using the jaxws:server Element" describes the attributes of the jaxws:server element. Table 17.2. Attributes for Configuring a JAX-WS Service Provider Using the jaxws:server Element Attribute Description id Specifies a unique identifier that other configuration elements can use to refer to the endpoint. serviceBean Specifies the class implementing the service. You can specify the implementation class using either the class name or an ID reference to a Spring bean configuring the implementation class. This class must be on the classpath. serviceClass Specifies the class implementing the service. This attribute is useful when the value provided to the implementor attribute is a reference to a bean that is wrapped using Spring AOP. address Specifies the address of an HTTP endpoint. This value will override the value specified in the service's contract. wsdlLocation Specifies the location of the endpoint's WSDL contract. The WSDL contract's location is relative to the folder from which the service is deployed. endpointName Specifies the value of the service's wsdl:port element's name attribute. It is specified as a QName using the format ns : name , where ns is the namespace of the wsdl:port element. serviceName Specifies the value of the service's wsdl:service element's name attribute. It is specified as a QName using the format ns : name , where ns is the namespace of the wsdl:service element. publish Specifies if the service should be automatically published. If this is set to false , the developer must explicitly publish the endpoint as described in Chapter 31, Publishing a Service . bus Specifies the ID of the Spring bean configuring the bus used to manage the service endpoint. This is useful when configuring several endpoints to use a common set of features. bindingId Specifies the ID of the message binding the service uses. A list of valid binding IDs is provided in Chapter 23, Apache CXF Binding IDs . name Specifies the stringified QName of the service's wsdl:port element. It is specified as a QName using the format { ns } localPart , where ns is the namespace of the wsdl:port element and localPart is the value of the wsdl:port element's name attribute. abstract Specifies if the bean is an abstract bean. Abstract beans act as parents for concrete bean definitions and are not instantiated. The default is false . Setting this to true instructs the bean factory not to instantiate the bean. depends-on Specifies a list of beans that the endpoint depends on being instantiated before the endpoint can be instantiated.
createdFromAPI Specifies that the user created that bean using Apache CXF APIs, such as Endpoint.publish() or Service.getPort() . The default is false . Setting this to true does the following: Changes the internal name of the bean by appending .jaxws-endpoint to its id Makes the bean abstract In addition to the attributes listed in Table 17.2, "Attributes for Configuring a JAX-WS Service Provider Using the jaxws:server Element" , you might need to use multiple xmlns: shortName attributes to declare the namespaces used by the endpointName and serviceName attributes. Example Example 17.4, "Simple JAX-WS Server Configuration" shows the configuration for a JAX-WS endpoint that specifies the address where the endpoint is published. Example 17.4. Simple JAX-WS Server Configuration 17.1.4. Adding Functionality to Service Providers Overview The jaxws:endpoint and the jaxws:server elements provide the basic configuration information needed to instantiate a service provider. To add functionality to your service provider or to perform advanced configuration you must add child elements to the configuration. Child elements allow you to do the following: Chapter 19, Apache CXF Logging Chapter 59, Configuring Endpoints to Use Interceptors Chapter 20, Deploying WS-Addressing Chapter 21, Enabling Reliable Messaging Section 17.1.5, "Enable Schema Validation on a JAX-WS Endpoint" Elements Table 17.3, "Elements Used to Configure JAX-WS Service Providers" describes the child elements that jaxws:endpoint supports. Table 17.3. Elements Used to Configure JAX-WS Service Providers Element Description jaxws:handlers Specifies a list of JAX-WS Handler implementations for processing messages. For more information on JAX-WS Handler implementations see Chapter 43, Writing Handlers . jaxws:inInterceptors Specifies a list of interceptors that process inbound requests. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:inFaultInterceptors Specifies a list of interceptors that process inbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:outInterceptors Specifies a list of interceptors that process outbound replies. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:outFaultInterceptors Specifies a list of interceptors that process outbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:binding Specifies a bean configuring the message binding used by the endpoint. Message bindings are configured using implementations of the org.apache.cxf.binding.BindingFactory interface. [a] jaxws:dataBinding [b] Specifies the class implementing the data binding used by the endpoint. This is specified using an embedded bean definition. jaxws:executor Specifies a Java executor that is used for the service. This is specified using an embedded bean definition. jaxws:features Specifies a list of beans that configure advanced features of Apache CXF. You can provide either a list of bean references or a list of embedded beans. jaxws:invoker Specifies an implementation of the org.apache.cxf.service.Invoker interface used by the service. [c] jaxws:properties Specifies a Spring map of properties that are passed along to the endpoint. These properties can be used to control features like enabling MTOM support. jaxws:serviceFactory Specifies a bean configuring the JaxWsServiceFactoryBean object used to instantiate the service. [a] The SOAP binding is configured using the soap:soapBinding bean. 
[b] The jaxws:endpoint element does not support the jaxws:dataBinding element. [c] The Invoker implementation controls how a service is invoked. For example, it controls whether each request is handled by a new instance of the service implementation or if state is preserved across invocations. 17.1.5. Enable Schema Validation on a JAX-WS Endpoint Overview You can set the schema-validation-enabled property to enable schema validation on a jaxws:endpoint element or on a jaxws:server element. When schema validation is enabled, the messages sent between client and server are checked for conformity to the schema. By default, schema validation is turned off, because it has a significant impact on performance. Example To enable schema validation on a JAX-WS endpoint, set the schema-validation-enabled property in the jaxws:properties child element of the jaxws:endpoint element or of the jaxws:server element. For example, to enable schema validation on a jaxws:endpoint element: For the list of allowed values of the schema-validation-enabled property, see Section 24.3.4.7, "Schema Validation Type Values" . 17.2. Configuring Consumer Endpoints Overview JAX-WS consumer endpoints are configured using the jaxws:client element. The element's attributes provide the basic information necessary to create a consumer. To add other functionality, like WS-RM, to the consumer you add children to the jaxws:client element. Child elements are also used to configure the endpoint's logging behavior and to inject other properties into the endpoint's implementation. Basic Configuration Properties The attributes described in Table 17.4, "Attributes Used to Configure a JAX-WS Consumer" provide the basic information necessary to configure a JAX-WS consumer. You only need to provide values for the specific properties you want to configure. Most of the properties have sensible defaults, or they rely on information provided by the endpoint's contract. Table 17.4. Attributes Used to Configure a JAX-WS Consumer Attribute Description address Specifies the HTTP address of the endpoint where the consumer will make requests. This value overrides the value set in the contract. bindingId Specifies the ID of the message binding the consumer uses. A list of valid binding IDs is provided in Chapter 23, Apache CXF Binding IDs . bus Specifies the ID of the Spring bean configuring the bus managing the endpoint. endpointName Specifies the value of the wsdl:port element's name attribute for the service on which the consumer is making requests. It is specified as a QName using the format ns : name , where ns is the namespace of the wsdl:port element. serviceName Specifies the value of the wsdl:service element's name attribute for the service on which the consumer is making requests. It is specified as a QName using the format ns : name where ns is the namespace of the wsdl:service element. username Specifies the username used for simple username/password authentication. password Specifies the password used for simple username/password authentication. serviceClass Specifies the name of the service endpoint interface(SEI). wsdlLocation Specifies the location of the endpoint's WSDL contract. The WSDL contract's location is relative to the folder from which the client is deployed. name Specifies the stringified QName of the wsdl:port element for the service on which the consumer is making requests. 
It is specified as a QName using the format { ns } localPart , where ns is the namespace of the wsdl:port element and localPart is the value of the wsdl:port element's name attribute. abstract Specifies if the bean is an abstract bean. Abstract beans act as parents for concrete bean definitions and are not instantiated. The default is false . Setting this to true instructs the bean factory not to instantiate the bean. depends-on Specifies a list of beans that the endpoint depends on being instantiated before it can be instantiated. createdFromAPI Specifies that the user created that bean using Apache CXF APIs like Service.getPort() . The default is false . Setting this to true does the following: Changes the internal name of the bean by appending .jaxws-client to its id Makes the bean abstract In addition to the attributes listed in Table 17.4, "Attributes Used to Configure a JAX-WS Consumer" , it might be necessary to use multiple xmlns: shortName attributes to declare the namespaces used by the endpointName and the serviceName attributes. Adding functionality To add functionality to your consumer or to perform advanced configuration, you must add child elements to the configuration. Child elements allow you to do the following: Chapter 19, Apache CXF Logging Chapter 59, Configuring Endpoints to Use Interceptors Chapter 20, Deploying WS-Addressing Chapter 21, Enabling Reliable Messaging the section called "Enable schema validation on a JAX-WS consumer" Table 17.5, "Elements For Configuring a Consumer Endpoint" describes the child elements you can use to configure a JAX-WS consumer. Table 17.5. Elements For Configuring a Consumer Endpoint Element Description jaxws:binding Specifies a bean configuring the message binding used by the endpoint. Message bindings are configured using implementations of the org.apache.cxf.binding.BindingFactory interface. [a] jaxws:dataBinding Specifies the class implementing the data binding used by the endpoint. You specify this using an embedded bean definition. The class implementing the JAXB data binding is org.apache.cxf.jaxb.JAXBDataBinding . jaxws:features Specifies a list of beans that configure advanced features of Apache CXF. You can provide either a list of bean references or a list of embedded beans. jaxws:handlers Specifies a list of JAX-WS Handler implementations for processing messages. For more information on JAX-WS Handler implementations see Chapter 43, Writing Handlers . jaxws:inInterceptors Specifies a list of interceptors that process inbound responses. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:inFaultInterceptors Specifies a list of interceptors that process inbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:outInterceptors Specifies a list of interceptors that process outbound requests. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:outFaultInterceptors Specifies a list of interceptors that process outbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxws:properties Specifies a map of properties that are passed to the endpoint. jaxws:conduitSelector Specifies an org.apache.cxf.endpoint.ConduitSelector implementation for the client to use. A ConduitSelector implementation will override the default process used to select the Conduit object that is used to process outbound requests. [a] The SOAP binding is configured using the soap:soapBinding bean.
Example Example 17.5, "Simple Consumer Configuration" shows a simple consumer configuration. Example 17.5. Simple Consumer Configuration Enable schema validation on a JAX-WS consumer To enable schema validation on a JAX-WS consumer, set the schema-validation-enabled property in the jaxws:properties child element of the jaxws:client element, for example: For the list of allowed values of the schema-validation-enabled property, see Section 24.3.4.7, "Schema Validation Type Values" .
[ "<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <jaxws:endpoint id=\"example\" implementor=\"org.apache.cxf.example.DemoImpl\" address=\"http://localhost:8080/demo\" /> </beans>", "<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <jaxws:endpoint id=\"example2\" implementor=\"org.apache.cxf.example.DemoImpl\" serviceName=\"samp:demoService2\" xmlns:samp=\"http://org.apache.cxf/wsdl/example\" /> </beans>", "<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <cxf:bus> <cxf:properties> <entry key=\"org.apache.cxf.transports.http_undertow.EnableHttp2\" value=\"true\"/> </cxf:properties> </cxf:bus> <jaxws:endpoint id=\"example3\" implementor=\"org.apache.cxf.example.DemoImpl\" address=\"http://localhost:8080/demo\" /> </jaxws:endpoint> </beans>", "<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <jaxws:server id=\"exampleServer\" serviceBean=\"org.apache.cxf.example.DemoImpl\" address=\"http://localhost:8080/demo\" /> </beans>", "<jaxws:endpoint name=\"{http://apache.org/hello_world_soap_http}SoapPort\" wsdlLocation=\"wsdl/hello_world.wsdl\" createdFromAPI=\"true\"> <jaxws:properties> <entry key=\"schema-validation-enabled\" value=\"BOTH\" /> </jaxws:properties> </jaxws:endpoint>", "<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <jaxws:client id=\"bookClient\" serviceClass=\"org.apache.cxf.demo.BookClientImpl\" address=\"http://localhost:8080/books\"/> </beans>", "<jaxws:client name=\"{http://apache.org/hello_world_soap_http}SoapPort\" createdFromAPI=\"true\"> <jaxws:properties> <entry key=\"schema-validation-enabled\" value=\"BOTH\" /> </jaxws:properties> </jaxws:client>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFDeployJAXWSEndptConfig
16.2.3. Starting and Stopping the Server
16.2.3. Starting and Stopping the Server Important When the DHCP server is started for the first time, it fails unless the dhcpd.leases file exists. Use the command touch /var/lib/dhcpd/dhcpd.leases to create the file if it does not exist. If the same server is also running BIND as a DNS server, this step is not necessary, as starting the named service automatically checks for a dhcpd.leases file. To start the DHCP service, use the command /sbin/service dhcpd start . To stop the DHCP server, use the command /sbin/service dhcpd stop . By default, the DHCP service does not start at boot time. For information on how to configure the daemon to start automatically at boot time, see Chapter 12, Services and Daemons . If more than one network interface is attached to the system, but the DHCP server should only be started on one of the interfaces, configure the DHCP server to start only on that device. In /etc/sysconfig/dhcpd , add the name of the interface to the list of DHCPDARGS : This is useful for a firewall machine with two network cards. One network card can be configured as a DHCP client to retrieve an IP address for the connection to the Internet. The other network card can be used as a DHCP server for the internal network behind the firewall. Specifying only the network card connected to the internal network makes the system more secure because users cannot connect to the daemon through the Internet. Other command-line options that can be specified in /etc/sysconfig/dhcpd include: -p <portnum> - Specifies the UDP port number on which dhcpd should listen. The default is port 67. The DHCP server transmits responses to the DHCP clients at a port number one greater than the UDP port specified. For example, if the default port 67 is used, the server listens on port 67 for requests and responds to the client on port 68. If a port is specified here and the DHCP relay agent is used, the same port on which the DHCP relay agent should listen must be specified. See Section 16.2.4, "DHCP Relay Agent" for details. -f - Runs the daemon as a foreground process. This is mostly used for debugging. -d - Logs the DHCP server daemon to the standard error descriptor. This is mostly used for debugging. If this is not specified, the log is written to /var/log/messages . -cf <filename> - Specifies the location of the configuration file. The default location is /etc/dhcp/dhcpd.conf . -lf <filename> - Specifies the location of the lease database file. If a lease database file already exists, it is very important that the same file be used every time the DHCP server is started. It is strongly recommended that this option only be used for debugging purposes on non-production machines. The default location is /var/lib/dhcpd/dhcpd.leases . -q - Do not print the entire copyright message when starting the daemon.
[ "Command line options here DHCPDARGS=eth0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch16s02s03
13.5. Configuring the Base RDMA Subsystem
13.5. Configuring the Base RDMA Subsystem Startup of the rdma service is automatic. When RDMA-capable hardware, whether InfiniBand, iWARP, or RoCE/IBoE, is detected, udev instructs systemd to start the rdma service. Users need not enable the rdma service, but they can if they want to force it on all the time. To do that, enter the following command as root: 13.5.1. Configuration of the rdma.conf file The rdma service reads /etc/rdma/rdma.conf to find out which kernel-level and user-level RDMA protocols the administrator wants to be loaded by default. Users should edit this file to turn various drivers on or off. The various drivers that can be enabled and disabled are: IPoIB - This is an IP network emulation layer that allows IP applications to run over InfiniBand networks. SRP - This is the SCSI RDMA Protocol. It allows a machine to mount a remote drive or drive array that is exported through the SRP protocol on the machine as though it were a local hard disk. SRPT - This is the target mode, or server mode, of the SRP protocol. This loads the kernel support necessary for exporting a drive or drive array for other machines to mount as though it were local on their machine. Further configuration of the target mode support is required before any devices will actually be exported. See the documentation in the targetd and targetcli packages for further information. ISER - This is a low-level driver for the general iSCSI layer of the Linux kernel that provides transport over InfiniBand networks for iSCSI devices. RDS - This is the Reliable Datagram Service in the Linux kernel. It is not enabled in Red Hat Enterprise Linux 7 kernels and so cannot be loaded. 13.5.2. Usage of 70-persistent-ipoib.rules The rdma package provides the file /etc/udev/rules.d/70-persistent-ipoib.rules . This udev rules file is used to rename IPoIB devices from their default names (such as ib0 and ib1 ) to more descriptive names. Users must edit this file to change how their devices are named. First, find out the GUID address for the device to be renamed: Immediately after link/infiniband is the 20 byte hardware address for the IPoIB interface. The final 8 bytes of the address, marked in bold above, are all that is required to make a new name. Users can make up whatever naming scheme suits them. For example, use a device_fabric naming convention such as mlx4_ib0 if a mlx4 device is connected to an ib0 subnet fabric. The only thing that is not recommended is to use the standard names, like ib0 or ib1 , as these can conflict with the kernel-assigned automatic names. Next, add an entry in the rules file. Copy the existing example in the rules file, replace the 8 bytes in the ATTR{address} entry with the highlighted 8 bytes from the device to be renamed, and enter the new name to be used in the NAME field. A sample rule is sketched after the command listing at the end of this section. 13.5.3. Relaxing memlock restrictions for users RDMA communications require that physical memory in the computer be pinned (meaning that the kernel is not allowed to swap that memory out to a paging file in the event that the overall computer starts running short on available memory). Pinning memory is normally a very privileged operation. In order to allow users other than root to run large RDMA applications, it will likely be necessary to increase the amount of memory that non-root users are allowed to pin in the system. This is done by adding a file in the /etc/security/limits.d/ directory with contents such as the following: 13.5.4.
Configuring Mellanox cards for Ethernet operation Certain hardware from Mellanox is capable of running in either InfiniBand or Ethernet mode. These cards generally default to InfiniBand. Users can set the cards to Ethernet mode. There is currently support for setting the mode only on ConnectX family hardware (which uses either the mlx5 or mlx4 driver). To configure Mellanox mlx5 cards, use the mstconfig program from the mstflint package. For more details, see the Configuring Mellanox mlx5 cards in Red Hat Enterprise Linux 7 Knowledge Base article on the Red Hat Customer Portal. To configure Mellanox mlx4 cards, use mstconfig to set the port types on the card as described in the Knowledge Base article . If mstconfig does not support your card, edit the /etc/rdma/mlx4.conf file and follow the instructions in that file to set the port types properly for RoCE/IBoE usage. In this case, it is also necessary to rebuild the initramfs to make sure the updated port settings are copied into the initramfs . Once the port type has been set, if one or both ports are set to Ethernet and mstconfig was not used to set the port types, then users might see this message in their logs: This is normal and does not affect operation. The script responsible for setting the port type has no way of knowing when the driver has finished switching port 2 to the requested type internally, and from the time that the script issues a request for port 2 to switch until that switch is complete, the attempts to set port 1 to a different type get rejected. The script retries until the command succeeds or until a timeout has passed indicating that the port switch never completed. 13.5.5. Connecting to a Remote Linux SRP Target The SCSI RDMA Protocol (SRP) is a network protocol that enables a system to use RDMA to access SCSI devices that are attached to another system. To allow an SRP initiator to connect to an SRP target, you must add, on the SRP target side, an access control list (ACL) entry for the host channel adapter (HCA) port used in the initiator. ACL IDs for HCA ports are not unique. The ACL IDs depend on the GID format of the HCAs. HCAs that use the same driver, for example ib_qib , can have different formats of GIDs. The ACL ID also depends on how you initiate the connection request. Connecting to a Remote Linux SRP Target: High-Level Overview Prepare the target side: Create a storage back end. For example, use the /dev/sdc1 partition: Create an SRP target: Create a LUN based on the back end created in step a: Create a Node ACL for the remote SRP client: Note that the Node ACL is different for srp_daemon and ibsrpdm . Initiate an SRP connection with srp_daemon or ibsrpdm for the client side: Optional. It is recommended to verify the SRP connection with different tools, such as lsscsi or dmesg . Procedure 13.3. Connecting to a Remote Linux SRP Target with srp_daemon or ibsrpdm Use the ibstat command on the target to determine the State and Port GUID values. The HCA must be in Active state. The ACL ID is based on the Port GUID : Get the SRP target ID, which is based on the GUID of the HCA port. Note that you need a dedicated disk partition as a back end for an SRP target, for example /dev/sdc1 .
The following command replaces the default prefix of fe80, removes the colons, and adds the new prefix to the remainder of the string: Use the targetcli tool to create the LUN vol1 on the block device, create an SRP target, and export the LUN: Use the ibstat command on the initiator to check if the state is Active and determine the Port GUID : Use the following command to scan without connecting to a remote SRP target. The target GUID shows that the initiator has found the remote target. The ID string shows that the remote target is a Linux software target ( ib_srpt.ko ). To verify the SRP connection, use the lsscsi command to list SCSI devices and compare the lsscsi output before and after the initiator connects to the target. To connect to a remote target without configuring a valid ACL for the initiator port, which is expected to fail, use the following commands for srp_daemon or ibsrpdm : The output of the dmesg command shows why the SRP connection operation failed. In a later step, the dmesg command on the target side is used to make the situation clear. Because the login failed, the output of the lsscsi command is the same as in the earlier step. Using the dmesg command on the target side ( ib_srpt.ko ) provides an explanation of why the login failed. Also, the output contains the valid ACL ID provided by srp_daemon : 0x7edd770000751100001175000077d708 . Use the targetcli tool to add a valid ACL: Verify the SRP LOGIN operation: Wait for 60 seconds to allow srp_daemon to retry logging in: Verify the SRP LOGIN operation: For a kernel log of SRP target discovery, use:
[ "~]# systemctl status rdma ● rdma.service - Initialize the iWARP/InfiniBand/RDMA stack in the kernel Loaded: loaded (/usr/lib/systemd/system/rdma.service; disabled; vendor preset: disabled) Active: inactive (dead) Docs: file:/etc/rdma/rdma.conf", "~]# systemctl enable rdma", "~]USD ip link show ib0 8: ib0: >BROADCAST,MULTICAST,UP,LOWER_UP< mtu 65520 qdisc pfifo_fast state UP mode DEFAULT qlen 256 link/infiniband 80:00:02:00:fe:80:00:00:00:00:00:00: f4:52:14:03:00:7b:cb:a1 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff", "~]USD more /etc/security/limits.d/rdma.conf configuration for rdma tuning * soft memlock unlimited * hard memlock unlimited rdma tuning end", "mlx4_core 0000:05:00.0: Requested port type for port 1 is not supported on this HCA", "/> /backstores/block create vol1 /dev/sdc1", "/> /srpt create 0xfe80000000000000001175000077dd7e", "/> /srpt/ib.fe80000000000000001175000077dd7e/luns create /backstores/block/vol1", "/> /srpt/ib.fe80000000000000001175000077dd7e/acls create 0x7edd770000751100001175000077d708", "srp_daemon -e -n -i qib0 -p 1 -R 60 &", "ibsrpdm -c -d /dev/infiniband/umad0 > /sys/class/infiniband_srp/srp-qib0-1/add_target", "ibstat CA 'qib0' CA type: InfiniPath_QLE7342 Number of ports: 1 Firmware version: Hardware version: 2 Node GUID: 0x001175000077dd7e System image GUID: 0x001175000077dd7e Port 1: State: Active Physical state: LinkUp Rate: 40 Base lid: 1 LMC: 0 SM lid: 1 Capability mask: 0x0769086a Port GUID: 0x001175000077dd7e Link layer: InfiniBand", "ibstatus | grep ' <default-gid> ' | sed -e 's/ <default-gid> ://' -e 's/://g' | grep 001175000077dd7e fe80000000000000001175000077dd7e", "targetcli /> /backstores/block create vol1 /dev/sdc1 Created block storage object vol1 using /dev/sdc1. /> /srpt create 0xfe80000000000000001175000077dd7e Created target ib.fe80000000000000001175000077dd7e. /> /srpt/ib.fe80000000000000001175000077dd7e/luns create /backstores/block/vol1 Created LUN 0. /> ls / o- / ............................................................................. [...] o- backstores .................................................................. [...] | o- block ...................................................... [Storage Objects: 1] | | o- vol1 ............................... [/dev/sdc1 (77.8GiB) write-thru activated] | o- fileio ..................................................... [Storage Objects: 0] | o- pscsi ...................................................... [Storage Objects: 0] | o- ramdisk .................................................... [Storage Objects: 0] o- iscsi ................................................................ [Targets: 0] o- loopback ............................................................. [Targets: 0] o- srpt ................................................................. [Targets: 1] o- ib.fe80000000000000001175000077dd7e ............................... [no-gen-acls] o- acls ................................................................ [ACLs: 0] o- luns ................................................................ [LUNs: 1] o- lun0 ............................................... 
[block/vol1 (/dev/sdc1)] />", "ibstat CA 'qib0' CA type: InfiniPath_QLE7342 Number of ports: 1 Firmware version: Hardware version: 2 Node GUID: 0x001175000077d708 System image GUID: 0x001175000077d708 Port 1: State: Active Physical state: LinkUp Rate: 40 Base lid: 2 LMC: 0 SM lid: 1 Capability mask: 0x07690868 Port GUID: 0x001175000077d708 Link layer: InfiniBand", "srp_daemon -a -o IO Unit Info: port LID: 0001 port GID: fe80000000000000001175000077dd7e change ID: 0001 max controllers: 0x10 controller[ 1] GUID: 001175000077dd7e vendor ID: 000011 device ID: 007322 IO class : 0100 ID: Linux SRP target service entries: 1 service[ 0]: 001175000077dd7e / SRP.T10:001175000077dd7e", "lsscsi [0:0:10:0] disk IBM-ESXS ST9146803SS B53C /dev/sda", "srp_daemon -e -n -i qib0 -p 1 -R 60 & [1] 4184", "ibsrpdm -c -d /dev/infiniband/umad0 > /sys/class/infiniband_srp/srp-qib0-1/add_target", "dmesg -c [ 1230.059652] scsi host5: ib_srp: REJ received [ 1230.059659] scsi host5: ib_srp: SRP LOGIN from fe80:0000:0000:0000:0011:7500:0077:d708 to fe80:0000:0000:0000:0011:7500:0077:dd7e REJECTED, reason 0x00010006 [ 1230.073792] scsi host5: ib_srp: Connection 0/2 failed [ 1230.078848] scsi host5: ib_srp: Sending CM DREQ failed", "lsscsi [0:0:10:0] disk IBM-ESXS ST9146803SS B53C /dev/sda", "dmesg [ 1200.303001] ib_srpt Received SRP_LOGIN_REQ with i_port_id 0x7edd770000751100:0x1175000077d708, t_port_id 0x1175000077dd7e:0x1175000077dd7e and it_iu_len 260 on port 1 (guid=0xfe80000000000000:0x1175000077dd7e) [ 1200.322207] ib_srpt Rejected login because no ACL has been configured yet for initiator 0x7edd770000751100001175000077d708.", "targetcli targetcli shell version 2.1.fb41 Copyright 2011-2013 by Datera, Inc and others. For help on commands, type 'help'. /> /srpt/ib.fe80000000000000001175000077dd7e/acls create 0x7edd770000751100001175000077d708 Created Node ACL for ib.7edd770000751100001175000077d708 Created mapped LUN 0.", "sleep 60", "lsscsi [0:0:10:0] disk IBM-ESXS ST9146803SS B53C /dev/sda [7:0:0:0] disk LIO-ORG vol1 4.0 /dev/sdb", "dmesg -c [ 1354.182072] scsi host7: SRP.T10:001175000077DD7E [ 1354.187258] scsi 7:0:0:0: Direct-Access LIO-ORG vol1 4.0 PQ: 0 ANSI: 5 [ 1354.208688] scsi 7:0:0:0: alua: supports implicit and explicit TPGS [ 1354.215698] scsi 7:0:0:0: alua: port group 00 rel port 01 [ 1354.221409] scsi 7:0:0:0: alua: port group 00 state A non-preferred supports TOlUSNA [ 1354.229147] scsi 7:0:0:0: alua: Attached [ 1354.233402] sd 7:0:0:0: Attached scsi generic sg1 type 0 [ 1354.233694] sd 7:0:0:0: [sdb] 163258368 512-byte logical blocks: (83.5 GB/77.8 GiB) [ 1354.235127] sd 7:0:0:0: [sdb] Write Protect is off [ 1354.235128] sd 7:0:0:0: [sdb] Mode Sense: 43 00 00 08 [ 1354.235550] sd 7:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [ 1354.255491] sd 7:0:0:0: [sdb] Attached SCSI disk [ 1354.265233] scsi host7: ib_srp: new target: id_ext 001175000077dd7e ioc_guid 001175000077dd7e pkey ffff service_id 001175000077dd7e sgid fe80:0000:0000:0000:0011:7500:0077:d708 dgid fe80:0000:0000:0000:0011:7500:0077:dd7e xyx" ]
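As a sketch of the renaming rule described in Section 13.5.2: the match keys below follow the commented sample that the rdma package ships in 70-persistent-ipoib.rules, and the device name mlx4_ib0 together with the final 8 bytes of the address are taken from the ip link example in this section; treat both as placeholders and substitute your own values:

    # /etc/udev/rules.d/70-persistent-ipoib.rules - example rename entry (illustrative values)
    ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{type}=="32", ATTR{address}=="?*f4:52:14:03:00:7b:cb:a1", NAME="mlx4_ib0"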
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Configuring_the_Base_RDMA_Subsystem
Chapter 38. Using an ID view to override a user attribute value on an IdM client
Chapter 38. Using an ID view to override a user attribute value on an IdM client If an Identity Management (IdM) user wants to override some of their user or group attributes stored in the IdM LDAP server, for example the login name, home directory, certificate used for authentication, or SSH keys, you, as IdM administrator, can redefine these values on specific IdM clients by using IdM ID views. For example, you can specify a different home directory for a user on the IdM client that the user most commonly uses for logging in to IdM. This chapter describes how to redefine a POSIX attribute value associated with an IdM user on a host enrolled into IdM as a client. 38.1. ID views An ID view in Identity Management (IdM) is an IdM client-side view specifying the following information: New values for centrally defined POSIX user or group attributes The client host or hosts on which the new values apply. An ID view contains one or more overrides. An override is a specific replacement of a centrally defined POSIX attribute value. You can only define an ID view for an IdM client centrally on IdM servers. You cannot configure client-side overrides for an IdM client locally. For example, you can use ID views to achieve the following goals: Define different attribute values for different environments. For example, you can allow the IdM administrator or another IdM user to have different home directories on different IdM clients: you can configure /home/encrypted/username to be this user's home directory on one IdM client and /dropbox/username on another client. Using ID views in this situation is convenient because the alternative, for example changing fallback_homedir , override_homedir , or other home directory variables in the client's /etc/sssd/sssd.conf file, would affect all users. See Adding an ID view to override an IdM user home directory on an IdM client for an example procedure. Replace a previously generated attribute value with a different value, such as overriding a user's UID. This ability can be useful when you want to achieve a system-wide change that would otherwise be difficult to do on the LDAP side, for example make 1009 the UID of an IdM user. IdM ID ranges, which are used to generate an IdM user UID, never start as low as 1000 or even 10000. If a reason exists for an IdM user to impersonate a local user with UID 1009 on all IdM clients, you can use ID views to override the UID of this IdM user that was generated when the user was created in IdM. Important You can only apply ID views to IdM clients, not to IdM servers. Additional resources Using ID views for Active Directory users SSSD Client-side Views 38.2. Potential negative impact of ID views on SSSD performance When you define an ID view, IdM places the desired override value in the IdM server's System Security Services Daemon (SSSD) cache. The SSSD running on an IdM client then retrieves the override value from the server cache. Applying an ID view can have a negative impact on System Security Services Daemon (SSSD) performance, because certain optimizations and ID views cannot run at the same time. For example, ID views prevent SSSD from optimizing the process of looking up groups on the server: With ID views, SSSD must check every member on the returned list of group member names if the group name is overridden. Without ID views, SSSD can only collect the user names from the member attribute of the group object.
This negative effect becomes most apparent when the SSSD cache is empty or after you clear the cache, which makes all entries invalid. 38.3. Attributes an ID view can override ID views consist of user and group ID overrides. The overrides define the new POSIX attribute values. User and group ID overrides can define new values for the following POSIX attributes: User attributes Login name ( uid ) GECOS entry ( gecos ) UID number ( uidNumber ) GID number ( gidNumber ) Login shell ( loginShell ) Home directory ( homeDirectory ) SSH public keys ( ipaSshPubkey ) Certificate ( userCertificate ) Group attributes Group name ( cn ) Group GID number ( gidNumber ) 38.4. Getting help for ID view commands You can get help for commands involving Identity Management (IdM) ID views on the IdM command-line interface (CLI). Prerequisites You have obtained a Kerberos ticket for an IdM user. Procedure To display all commands used to manage ID views and overrides: To display detailed help for a particular command, add the --help option to the command: 38.5. Using an ID view to override the login name of an IdM user on a specific host Follow this procedure to create an ID view for a specific IdM client that overrides a POSIX attribute value associated with a specific IdM user. The procedure uses the example of an ID view that enables an IdM user named idm_user to log in to an IdM client named host1 using the user_1234 login name. Prerequisites You are logged in as IdM administrator. Procedure Create a new ID view. For example, to create an ID view named example_for_host1 : Add a user override to the example_for_host1 ID view. To override the user login: Enter the ipa idoverrideuser-add command Add the name of the ID view Add the user name, also called the anchor Add the --login option: For a list of the available options, run ipa idoverrideuser-add --help. Note The ipa idoverrideuser-add --certificate command replaces all existing certificates for the account in the specified ID view. To append an additional certificate, use the ipa idoverrideuser-add-cert command instead: Optional: Using the ipa idoverrideuser-mod command, you can specify new attribute values for an existing user override. Apply example_for_host1 to the host1.idm.example.com host: Note The ipa idview-apply command also accepts the --hostgroups option. The option applies the ID view to hosts that belong to the specified host group, but does not associate the ID view with the host group itself. Instead, the --hostgroups option expands the members of the specified host group and applies the --hosts option individually to every one of them. This means that if a host is added to the host group in the future, the ID view does not apply to the new host. To apply the new configuration to the host1.idm.example.com system immediately: SSH to the system as root: Clear the SSSD cache: Restart the SSSD daemon: Verification If you have the credentials of user_1234 , you can use them to log in to IdM on host1 : SSH to host1 using user_1234 as the login name: Display the working directory: Alternatively, if you have root credentials on host1 , you can use them to check the output of the id command for idm_user and user_1234 : 38.6. Modifying an IdM ID view An ID view in Identity Management (IdM) overrides a POSIX attribute value associated with a specific IdM user. Follow this procedure to modify an existing ID view. 
Specifically, it describes how to modify an ID view to enable the user named idm_user to use the /home/user_1234/ directory as the user home directory instead of /home/idm_user/ on the host1.idm.example.com IdM client. Prerequisites You have root access to host1.idm.example.com . You are logged in as a user with the required privileges, for example admin . You have an ID view configured for idm_user that applies to the host1 IdM client. Procedure As root, create the directory that you want idm_user to use on host1.idm.example.com as the user home directory: Change the ownership of the directory: Display the ID view, including the hosts to which the ID view is currently applied. To display the ID view named example_for_host1 : The output shows that the ID view currently applies to host1.idm.example.com . Modify the user override of the example_for_host1 ID view. To override the user home directory: Enter the ipa idoverrideuser-mod command Add the name of the ID view Add the user name, also called the anchor Add the --homedir option: For a list of the available options, run ipa idoverrideuser-mod --help . To apply the new configuration to the host1.idm.example.com system immediately: SSH to the system as root: Clear the SSSD cache: Restart the SSSD daemon: Verification SSH to host1 as idm_user : Print the working directory: Additional resources Defining global attributes for an AD user by modifying the Default Trust View 38.7. Adding an ID view to override an IdM user home directory on an IdM client An ID view in Identity Management (IdM) overrides a POSIX attribute value associated with a specific IdM user. Follow this procedure to create an ID view that applies to idm_user on an IdM client named host1 to enable the user to use the /home/user_1234/ directory as the user home directory instead of /home/idm_user/ . Prerequisites You have root access to host1.idm.example.com . You are logged in as a user with the required privileges, for example admin . Procedure As root, create the directory that you want idm_user to use on host1.idm.example.com as the user home directory: Change the ownership of the directory: Create an ID view. For example, to create an ID view named example_for_host1 : Add a user override to the example_for_host1 ID view. To override the user home directory: Enter the ipa idoverrideuser-add command Add the name of the ID view Add the user name, also called the anchor Add the --homedir option: Apply example_for_host1 to the host1.idm.example.com host: Note The ipa idview-apply command also accepts the --hostgroups option. The option applies the ID view to hosts that belong to the specified host group, but does not associate the ID view with the host group itself. Instead, the --hostgroups option expands the members of the specified host group and applies the --hosts option individually to every one of them. This means that if a host is added to the host group in the future, the ID view does not apply to the new host. To apply the new configuration to the host1.idm.example.com system immediately: SSH to the system as root: Clear the SSSD cache: Restart the SSSD daemon: Verification SSH to host1 as idm_user : Print the working directory: Additional resources Overriding Default Trust View attributes for an AD user on an IdM client with an ID view 38.8. Applying an ID view to an IdM host group The ipa idview-apply command accepts the --hostgroups option.
However, the option acts as a one-time operation that applies the ID view to hosts that currently belong to the specified host group, but does not dynamically associate the ID view with the host group itself. The --hostgroups option expands the members of the specified host group and applies the --hosts option individually to every one of them. If you add a new host to the host group later, you must apply the ID view to the new host manually, using the ipa idview-apply command with the --hosts option. Similarly, if you remove a host from a host group, the ID view is still assigned to the host after the removal. To unapply the ID view from the removed host, you must run the ipa idview-unapply id_view_name --hosts= name_of_the_removed_host command. Follow this procedure to achieve the following goals: How to create a host group and add hosts to it. How to apply an ID view to the host group. How to add a new host to the host group and apply the ID view to the new host. Prerequisites Ensure that the ID view you want to apply to the host group exists in IdM. For example, to create an ID view to override the GID for an AD user, see Overriding Default Trust View attributes for an AD user on an IdM client with an ID view Procedure Create a host group and add hosts to it: Create a host group. For example, to create a host group named baltimore : Add hosts to the host group. For example, to add the host102 and host103 to the baltimore host group: Apply an ID view to the hosts in the host group. For example, to apply the example_for_host1 ID view to the baltimore host group: Add a new host to the host group and apply the ID view to the new host: Add a new host to the host group. For example, to add the somehost.idm.example.com host to the baltimore host group: Optional: Display the ID view information. For example, to display the details about the example_for_host1 ID view: The output shows that the ID view is not applied to somehost.idm.example.com , the newly-added host in the baltimore host group. Apply the ID view to the new host. For example, to apply the example_for_host1 ID view to somehost.idm.example.com : Verification Display the ID view information again: The output shows that ID view is now applied to somehost.idm.example.com , the newly-added host in the baltimore host group. 38.9. Using Ansible to override the login name and home directory of an IdM user on a specific host Complete this procedure to use the idoverrideuser ansible-freeipa module to create an ID view for a specific Identity Management (IdM) client that overrides a POSIX attribute value associated with a specific IdM user. The procedure uses the example of an ID view that enables an IdM user named idm_user to log in to an IdM client named host1.idm.example.com by using the user_1234 login name. Additionally, the ID view modifies the home directory of idm_user so that after logging in to host1, the user home directory is /home/user_1234/ . Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You are using RHEL 9.4 or later. You have stored your ipaadmin_password in the secret.yml Ansible vault. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
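If the secret.yml vault and its vault password file mentioned above do not exist yet, one way to create them is sketched below. The file names match the ansible-playbook commands shown later in this chapter; the exact layout of your vault is an assumption, so adjust it to your environment.
# Create a vault password file and an encrypted vault holding ipaadmin_password (illustrative layout)
cd ~/MyPlaybooks/
echo "<vault_password>" > password_file
ansible-vault create --vault-password-file=password_file secret.yml
# In the editor that opens, add a single line such as:
#   ipaadmin_password: <IdM_admin_password>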
Procedure Create your Ansible playbook file add-idoverrideuser-with-name-and-homedir.yml with the following content: Run the playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file:: Optional: If you have root credentials, you can apply the new configuration to the host1.idm.example.com system immediately: SSH to the system as root : Clear the SSSD cache: Restart the SSSD daemon: Verification SSH to host1 as idm_user : Print the working directory: Additional resources The idoverrideuser module in ansible-freeipa upstream docs 38.10. Using Ansible to configure an ID view that enables an SSH key login on an IdM client Complete this procedure to use the idoverrideuser ansible-freeipa module to ensure that an IdM user can use a specific SSH key to log in to a specific IdM client. The procedure uses the example of an ID view that enables an IdM user named idm_user to log in to an IdM client named host1.idm.example.com with an SSH key. Note This ID view can be used to enhance a specific HBAC rule. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You are using RHEL 9.4 or later. You have stored your ipaadmin_password in the secret.yml Ansible vault. You have access to the idm_user 's SSH public key. The idview_for_host1 ID view exists. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create your Ansible playbook file ensure-idoverrideuser-can-login-with-sshkey.yml with the following content: Run the playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Optional: If you have root credentials, you can apply the new configuration to the host1.idm.example.com system immediately: SSH to the system as root : Clear the SSSD cache: Restart the SSSD daemon: Verification Use the public key to SSH to host1 : The output confirms that you have logged in successfully. Additional resources The idoverrideuser module in ansible-freeipa upstream docs 38.11. Using Ansible to give a user ID override access to the local sound card on an IdM client You can use the ansible-freeipa group and idoverrideuser modules to make Identity Management (IdM) or Active Directory (AD) users members of the local audio group on an IdM client. This grants the IdM or AD users privileged access to the sound card on the host. The procedure uses the example of the Default Trust View ID view to which the [email protected] ID override is added in the first playbook task. In the playbook task, an audio group is created in IdM with the GID of 63, which corresponds to the GID of local audio groups on RHEL hosts. At the same time, the [email protected] ID override is added to the IdM audio group as a member. Prerequisites You have root access to the IdM client on which you want to perform the first part of the procedure. In the example, this is client.idm.example.com . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 9.4 or later. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The AD forest is in trust with IdM. In the example, the name of the AD domain is addomain.com and the fully-qualified domain name (FQDN) of the AD user whose presence in the local audio group is being ensured is [email protected] . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure On client.idm.example.com , add [SUCCESS=merge] to the /etc/nsswitch.conf file: Identify the GID of the local audio group: On your Ansible control node, create an add-aduser-to-audio-group.yml playbook with a task to add the [email protected] user override to the Default Trust View: Use another playbook task in the same playbook to add the group audio to IdM with the GID of 63. Add the aduser idoverrideuser to the group: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in to the IdM client as the AD user: Verify the group membership of the AD user: Additional resources The idoverrideuser and ipagroup ansible-freeipa upstream documentation Enabling group merging for local and remote groups in IdM 38.12. Using Ansible to ensure an IdM user is present in an ID view with a specific UID If you are working in a lab where you have your own computer but your /home/ directory is on a shared drive exported by a server, you can have two users: One that is a system-wide user, stored centrally in Identity Management (IdM). One whose account is local, that is, stored on the system in question. If you need to have full access to your files whether you are logged in as an IdM user or as a local user, you can do so by giving both users the same UID . Complete this procedure to use the ansible-freeipa idoverrideuser module to: Apply an ID view named idview_for_host01 to host01 . Ensure, in idview_for_host01, the presence of a user ID override for idm_user with the UID of 20001 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 9.4 or later. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The idview_for_host01 ID view exists. Procedure On your Ansible control node, create an ensure-idmuser-and-local-user-have-access-to-same-files.yml playbook with the following content: Save the file. Run the playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources The idoverrideuser module in ansible-freeipa upstream docs 38.13. Using Ansible to ensure an IdM user can log in to an IdM client with two certificates If you want an Identity Management (IdM) user that normally logs in to IdM with a password to authenticate to a specific IdM client by using a smart card only, you can create an ID view that requires certificate authentication for the user on that client.
Complete this procedure to use the ansible-freeipa idoverrideuser module to: Apply an ID view named idview_for_host01 to host01 . Ensure, in idview_for_host01, the presence of a user ID override for idm_user with two certificates. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 9.4 or later. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The example assumes that the cert1.b64 and cert2.b64 certificates are located in the same directory in which you are executing the playbook. The idview_for_host01 ID view exists. Procedure On your Ansible control node, create an ensure-idmuser-present-in-idview-with-certificates.yml playbook with the following content: The rstrip=False directive prevents white space from being removed from the end of the looked-up file. Save the file. Run the playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources The idoverrideuser module in ansible-freeipa upstream docs 38.14. Using Ansible to give an IdM group access to the sound card on an IdM client You can use the ansible-freeipa idview and idoverridegroup modules to make Identity Management (IdM) or Active Directory (AD) users members of the local audio group on an IdM client. This grants the IdM or AD users privileged access to the sound card on the host. The procedure uses the example of the idview_for_host01 ID view to which the audio group ID override is added with the GID of 63 , which corresponds to the GID of local audio groups on RHEL hosts. The idview_for_host01 ID view is applied to an IdM client named host01.idm.example.com . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 9.4 or later. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . Procedure Optional: Identify the GID of the local audio group on a RHEL host: On your Ansible control node, create a give-idm-group-access-to-sound-card-on-idm-client.yml playbook with the following tasks: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification On an IdM client, obtain the IdM administrator's credentials: Create a test IdM user: Add the user to the IdM audio group: Log in to host01.idm.example.com as tuser: Verify the group membership of the user: Additional resources The idoverridegroup , idview and ipagroup ansible-freeipa upstream documentation Enabling group merging for local and remote groups in IdM 38.15. Migrating NIS domains to Identity Management You can use ID views to set host-specific UIDs and GIDs for existing hosts to prevent changing permissions for files and directories when migrating NIS domains into IdM.
Prerequisites You authenticated yourself as an admin using the kinit admin command. Procedure Add users and groups in the IdM domain. Create users using the ipa user-add command. For more information, see Adding users to IdM . Create groups using the ipa group-add command. For more information, see Adding groups to IdM . Override the IDs that IdM generated during user creation: Create a new ID view using the ipa idview-add command. For more information, see Getting help for ID view commands . Add ID overrides for the users and groups to the ID view using the ipa idoverrideuser-add and ipa idoverridegroup-add commands, respectively. Assign the ID view to the specific hosts using the ipa idview-apply command. Decommission the NIS domains. Verification To check whether all users and groups were added to the ID view correctly, use the ipa idview-show command.
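The following is a minimal sketch of this flow for a single legacy host. All names and ID numbers are illustrative assumptions (a NIS user jdoe with UID 4001, a group engineers with GID 4100, and a client nishost01.idm.example.com); run each command with --help to confirm the exact option names available on your release.
# Create an ID view that preserves the legacy NIS IDs
ipa idview-add nis_compat_view --desc="Preserve legacy NIS UIDs and GIDs"
# Override the generated IDs for the migrated user and group
ipa idoverrideuser-add nis_compat_view jdoe --uid=4001 --gidnumber=4100
ipa idoverridegroup-add nis_compat_view engineers --gid=4100
# Apply the view to the host that previously used NIS, then verify
ipa idview-apply nis_compat_view --hosts=nishost01.idm.example.com
ipa idview-show nis_compat_view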
[ "ipa help idviews ID Views Manage ID Views IPA allows to override certain properties of users and groups[...] [...] Topic commands: idoverridegroup-add Add a new Group ID override idoverridegroup-del Delete a Group ID override [...]", "ipa idview-add --help Usage: ipa [global-options] idview-add NAME [options] Add a new ID View. Options: -h, --help show this help message and exit --desc=STR Description [...]", "ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1", "ipa idoverrideuser-add example_for_host1 idm_user --login=user_1234 ----------------------------- Added User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user User login: user_1234", "ipa idoverrideuser-add-cert example_for_host1 user --certificate=\"MIIEATCC...\"", "ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------", "ssh root@host1 Password:", "root@host1 ~]# sss_cache -E", "root@host1 ~]# systemctl restart sssd", "ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD", "[user_1234@host1 ~]USD pwd /home/idm_user/", "id idm_user uid=779800003(user_1234) gid=779800003(idm_user) groups=779800003(idm_user) user_1234 uid=779800003(user_1234) gid=779800003(idm_user) groups=779800003(idm_user)", "mkdir /home/user_1234/", "chown idm_user:idm_user /home/user_1234/", "ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 User object override: idm_user Hosts the view applies to: host1.idm.example.com objectclass: ipaIDView, top, nsContainer", "ipa idoverrideuser-mod example_for_host1 idm_user --homedir=/home/user_1234 ----------------------------- Modified a User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user User login: user_1234 Home directory: /home/user_1234/", "ssh root@host1 Password:", "root@host1 ~]# sss_cache -E", "root@host1 ~]# systemctl restart sssd", "ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD", "[user_1234@host1 ~]USD pwd /home/user_1234/", "mkdir /home/user_1234/", "chown idm_user:idm_user /home/user_1234/", "ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1", "ipa idoverrideuser-add example_for_host1 idm_user --homedir=/home/user_1234 ----------------------------- Added User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user Home directory: /home/user_1234/", "ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------", "ssh root@host1 Password:", "root@host1 ~]# sss_cache -E", "root@host1 ~]# systemctl restart sssd", "ssh [email protected] Password: Activate the web console with: systemctl enable --now cockpit.socket Last login: Sun Jun 21 22:34:25 2020 
from 192.168.122.229 [idm_user@host1 /]USD", "[idm_user@host1 /]USD pwd /home/user_1234/", "ipa hostgroup-add --desc=\"Baltimore hosts\" baltimore --------------------------- Added hostgroup \"baltimore\" --------------------------- Host-group: baltimore Description: Baltimore hosts", "ipa hostgroup-add-member --hosts={host102,host103} baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com ------------------------- Number of members added 2 -------------------------", "ipa idview-apply --hostgroups=baltimore ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: host102.idm.example.com, host103.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 2 ---------------------------------------------", "ipa hostgroup-add-member --hosts=somehost.idm.example.com baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com,somehost.idm.example.com ------------------------- Number of members added 1 -------------------------", "ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com objectclass: ipaIDView, top, nsContainer", "ipa idview-apply --host=somehost.idm.example.com ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: somehost.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------", "ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] 
Hosts the view applies to: host102.idm.example.com, host103.idm.example.com, somehost.idm.example.com objectclass: ipaIDView, top, nsContainer", "--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure idview_for_host1 is present idview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 - name: Ensure idview_for_host1 is applied to host1.idm.example.com idview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 host: host1.idm.example.com action: member - name: Ensure idm_user is present in idview_for_host1 with homedir /home/user_1234 and name user_1234 ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host1 anchor: idm_user name: user_1234 homedir: /home/user_1234", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/add-idoverrideuser-with-name-and-homedir.yml", "ssh root@host1 Password:", "root@host1 ~]# sss_cache -E", "root@host1 ~]# systemctl restart sssd", "ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD", "[user_1234@host1 ~]USD pwd /home/user_1234/", "--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure test user idm_user is present in idview idview_for_host1 with sshpubkey ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host1 anchor: idm_user sshpubkey: - ssh-rsa AAAAB3NzaC1yc2EAAADAQABAAABgQCqmVDpEX5gnSjKuv97Ay - name: Ensure idview_for_host1 is applied to host1.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 host: host1.idm.example.com action: member", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/ensure-idoverrideuser-can-login-with-sshkey.yml", "ssh root@host1 Password:", "root@host1 ~]# sss_cache -E", "root@host1 ~]# systemctl restart sssd", "ssh -i ~/.ssh/id_rsa.pub [email protected] Last login: Sun Jun 21 22:34:25 2023 from 192.168.122.229 [idm_user@host1 ~]USD", "Allow initgroups to default to the setting for group. 
initgroups: sss [SUCCESS=merge] files", "getent group audio --------------------- audio:x:63", "--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false tasks: - name: Add [email protected] user to the Default Trust View ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]", "- name: Add the audio group with the aduser member and GID of 63 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio idoverrideuser: - [email protected] gidnumber: 63", "ansible-playbook --vault-password-file=password_file -v -i inventory add-aduser-to-audio-group.yml", "ssh [email protected]@client.idm.example.com", "id [email protected] uid=702801456([email protected]) gid=63(audio) groups=63(audio)", "--- - name: Ensure both local user and IdM user have access to same files hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idview_for_host1 is applied to host1.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host1.idm.example.com - name: Ensure idmuser is present in idview_for_host01 with the UID of 20001 ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: idm_user UID: 20001", "ansible-playbook --vault-password-file=password_file -v -i inventory ensure-idmuser-and-local-user-have-access-to-same-files.yml", "--- - name: Ensure both local user and IdM user have access to same files hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idview_for_host1 is applied to host01.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host01.idm.example.com - name: Ensure an IdM user is present in ID view with two certificates ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: idm_user certificate: - \"{{ lookup('file', 'cert1.b64', rstrip=False) }}\" - \"{{ lookup('file', 'cert2.b64', rstrip=False) }}\"", "ansible-playbook --vault-password-file=password_file -v -i inventory ensure-idmuser-present-in-idview-with-certificates.yml", "getent group audio --------------------- audio:x:63", "--- - name: Playbook to give IdM group access to sound card on IdM client hosts: ipaserver become: false tasks: - name: Ensure the audio group exists in IdM ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio - name: Ensure idview_for_host01 exists and is applied to host01.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host01.idm.example.com - name: Add an override for the IdM audio group with GID 63 to idview_for_host01 ipaidoverridegroup: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: audio GID: 63", "ansible-playbook --vault-password-file=password_file -v -i inventory give-idm-group-access-to-sound-card-on-idm-client.yml", "kinit admin Password:", "ipa user-add testuser --first test --last user --password User login [tuser]: Password: Enter Password again to verify: ------------------ Added user \"tuser\" ------------------", "ipa group-add-member --tuser audio", "ssh [email protected]", "id tuser uid=702801456(tuser) gid=63(audio) groups=63(audio)", "ipa idview-show example-view ID View Name: example-view User object overrides: example-user1 Group object overrides: example-group" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/using-an-id-view-to-override-a-user-attribute-value-on-an-idm-client_managing-users-groups-hosts
Preface
Preface Preface
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/quick_starts_guide/pr01
Chapter 11. Configuring a System for Accessibility
Chapter 11. Configuring a System for Accessibility Accessibility in Red Hat Enterprise Linux 7 is ensured by the Orca screen reader, which is included in the default installation of the operating system. This chapter explains how a system administrator can configure a system to support users with a visual impairment. Orca reads information from the screen and communicates it to the user using: a speech synthesizer, which provides a speech output a braille display, which provides a tactile output For more information on Orca settings, see its help page . In order that Orca 's communication outputs function properly, the system administrator needs to: configure the brltty service, as described in Section 11.1, "Configuring the brltty Service" switch on the Always Show Universal Access Menu , as described in Section 11.2, "Switch On Always Show Universal Access Menu " enable the Festival speech synthesizer, as described in Section 11.3, "Enabling the Festival Speech Synthesis System " 11.1. Configuring the brltty Service The Braille display uses the brltty service to provide tactile output for visually impaired users. Enable the brltty Service The braille display cannot work unless brltty is running. By default, brltty is disabled. Enable brltty to be started on boot: Authorize Users to Use the Braille Display To set the users who are authorized to use the braille display, choose one of the following procedures, which have an equal effect. The procedure using the /etc/brltty.conf file is suitable even for the file systems where users or groups cannot be assigned to a file. The procedure using the /etc/brlapi.key file is suitable only for the file systems where users or groups can be assigned to a file. Setting Access to Braille Display by Using /etc/brltty.conf Open the /etc/brltty.conf file, and find the section called Application Programming Interface Parameters . Specify the users. To specify one or more individual users, list the users on the following line: To specify a user group, enter its name on the following line: Setting Access to Braille Display by Using /etc/brlapi.key Create the /etc/brlapi.key file. Change ownership of the /etc/brlapi.key to particular user or group. To specify an individual user: To specify a group: Adjust the content of /etc/brltty.conf to include this: Set the Braille Driver The braille-driver directive in /etc/brltty.conf specifies a two-letter driver identification code of the driver for the braille display. Setting the Braille Driver Decide whether you want to use the autodetection for finding the appropriate braille driver. If you want to use autodetection, leave braille driver specified to auto , which is the default option. Warning Autodetection tries all drivers. Therefore, it might take a long time or even fail. For this reason, setting up a particular braille driver is recommended. If you do not want to use the autodetection, specify the identification code of the required braille driver in the braille-driver directive. Choose the identification code of required braille driver from the list provided in /etc/brltty.conf , for example: You can also set multiple drivers, separated by commas, and autodetection is then performed among them. Set the Braille Device The braille-device directive in /etc/brltty.conf specifies the device to which the braille display is connected. The following device types are supported (see Table 11.1, "Braille Device Types and the Corresponding Syntax" ): Table 11.1. 
Braille Device Types and the Corresponding Syntax lists the supported device types and their syntax: a serial device uses serial:path , where a relative path is relative to /dev ; a USB device uses usb:[serial-number] , where the brackets indicate that the serial number is optional; a Bluetooth device uses bluetooth:address . Examples of settings for particular devices: You can also set multiple devices, separated by commas, and each of them will be probed in turn. Warning If the device is connected by a serial-to-USB adapter, setting braille-device to usb: does not work. In this case, identify the virtual serial device that the kernel has created for the adapter. The virtual serial device can look like this: Set Specific Parameters for Particular Braille Displays If you need to set specific parameters for particular braille displays, use the braille-parameters directive in /etc/brltty.conf . The braille-parameters directive passes non-generic parameters through to the braille driver. Choose the required parameters from the list in /etc/brltty.conf . Set the Text Table The text-table directive in /etc/brltty.conf specifies which text table is used to encode the symbols. Relative paths to text tables are in the /etc/brltty/Text/ directory. Setting the Text Table Decide whether you want to use autoselection for finding the appropriate text table. If you want to use autoselection, leave text-table set to auto , which is the default option. This ensures that locale-based autoselection with fallback to en-nabcc is performed. If you do not want to use autoselection, choose the required text-table from the list in /etc/brltty.conf . For example, to use the text table for American English: Set the Contraction Table The contraction-table directive in /etc/brltty.conf specifies which table is used to encode the abbreviations. Relative paths to particular contraction tables are in the /etc/brltty/Contraction/ directory. Choose the required contraction-table from the list in /etc/brltty.conf . For example, to use the contraction table for American English, grade 2: Warning If not specified, no contraction table is used. 11.2. Switch On Always Show Universal Access Menu To switch on the Orca screen reader, press the Super + Alt + S key combination. As a result, the Universal Access Menu icon is displayed on the top bar. Warning The icon disappears if the user switches off all of the provided options from the Universal Access Menu. A missing icon can cause difficulties for users with a visual impairment. System administrators can prevent the inaccessibility of the icon by switching on the Always Show Universal Access Menu . When the Always Show Universal Access Menu is switched on, the icon is displayed on the top bar even when all options from this menu are switched off. Switching On Always Show Universal Access Menu Open the GNOME settings menu, and click Universal Access . Switch on Always Show Universal Access Menu . Optional: Verify that the Universal Access Menu icon is displayed on the top bar even if all options from this menu are switched off. 11.3. Enabling the Festival Speech Synthesis System By default, Orca uses the eSpeak speech synthesizer, but it also supports the Festival Speech Synthesis System . eSpeak and the Festival Speech Synthesis System (Festival) synthesize speech differently. Some users might prefer Festival to the default eSpeak synthesizer.
To enable Festival, follow these steps: Installing Festival and Making It Run on Boot Install Festival: Make Festival run on boot: Create a new systemd unit file: Create a file in the /etc/systemd/system/ directory and set the appropriate permissions on it. Ensure that the script in the /usr/bin/festival_server file is used to run Festival. Add the following content to the /etc/systemd/system/festival.service file: Notify systemd that a new festival.service file exists: Enable festival.service : Choose a Voice for Festival Festival provides multiple voices. To make a voice available, install the relevant package from the following list: festvox-awb-arctic-hts festvox-bdl-arctic-hts festvox-clb-arctic-hts festvox-kal-diphone festvox-ked-diphone festvox-rms-arctic-hts festvox-slt-arctic-hts hispavoces-pal-diphone hispavoces-sfl-diphone To see detailed information about a particular voice: To make the required voice available, install the package with this voice and then reboot:
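For example, assuming you chose the festvox-slt-arctic-hts package from the list above (any other listed voice package can be substituted), these steps might look like this:
~]# yum info festvox-slt-arctic-hts      # review the voice description before installing it
~]# yum install festvox-slt-arctic-hts
~]# reboot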
[ "~]# systemctl enable brltty.service", "api-parameters Auth=user: user_1, user_2, ... # Allow some local user", "api-parameters Auth=group: group # Allow some local group", "~]# mcookie > /etc/brlapi.key", "~]# chown user_1 /etc/brlapi.key", "~]# chown group_1 /etc/brlapi.key", "api-parameters Auth=keyfile: /etc/brlapi.key", "braille-driver auto # autodetect", "braille-driver xw # XWindow", "braille-device serial:ttyS0 # First serial device braille-device usb: # First USB device matching braille driver braille-device usb:nnnnn # Specific USB device by serial number braille-device bluetooth:xx:xx:xx:xx:xx:xx # Specific Bluetooth device by address", "serial:ttyUSB0", "You can find the actual device name in the kernel messages on the device plug with the following command:", "~]# dmesg | fgrep ttyUSB0", "text-table auto # locale-based autoselection", "text-table en_US # English (United States)", "contraction-table en-us-g2 # English (US, grade 2)", "~]# yum install festival festival-freebsoft-utils", "~]# touch /etc/systemd/system/festival.service ~]# chmod 664 /etc/systemd/system/festival.service", "[Unit] Description=Festival speech synthesis server [Service] ExecStart=/usr/bin/festival_server Type=simple", "~]# systemctl daemon-reload ~]# systemctl start festival.service", "~]# systemctl enable festival.service", "~]# yum info package_name", "~]# yum install package_name ~]# reboot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-accessbility
Red Hat OpenShift Data Foundation architecture
Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation 4.16 Overview of OpenShift Data Foundation architecture and the roles that the components and services perform. Red Hat Storage Documentation Team Abstract This document provides an overview of the OpenShift Data Foundation architecture.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/red_hat_openshift_data_foundation_architecture/index
Preface
Preface For OpenShift Data Foundation, node replacement can be performed proactively for an operational node and reactively for a failed node for the following deployments: For Amazon Web Services (AWS) User-provisioned infrastructure Installer-provisioned infrastructure For VMware User-provisioned infrastructure Installer-provisioned infrastructure For Microsoft Azure Installer-provisioned infrastructure For local storage devices Bare metal VMware IBM Power For replacing your storage nodes in external mode, see Red Hat Ceph Storage documentation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_nodes/preface-replacing-nodes
Chapter 12. Deploying IPv6 on Red Hat Quay on OpenShift Container Platform
Chapter 12. Deploying IPv6 on Red Hat Quay on OpenShift Container Platform Note Currently, deploying IPv6 on Red Hat Quay on OpenShift Container Platform is not supported on IBM Power and IBM Z. Your Red Hat Quay on OpenShift Container Platform deployment can now be served in locations that only support IPv6, such as Telco and Edge environments. For a list of known limitations, see IPv6 limitations . 12.1. Enabling the IPv6 protocol family Use the following procedure to enable IPv6 support on your Red Hat Quay deployment. Prerequisites You have updated Red Hat Quay to at least version 3.8. Your host and container software platform (Docker, Podman) must be configured to support IPv6. Procedure In your deployment's config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to IPv6 , for example: # ... FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false # ... Start, or restart, your Red Hat Quay deployment. Check that your deployment is listening on IPv6 by entering the following command: $ curl <quay_endpoint>/health/instance {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} After enabling IPv6 in your deployment's config.yaml , all Red Hat Quay features can be used as normal, so long as your environment is configured to use IPv6 and is not hindered by the IPv6 and dual-stack limitations . Warning If your environment is configured for IPv4, but the FEATURE_LISTEN_IP_VERSION configuration field is set to IPv6 , Red Hat Quay will fail to deploy. 12.2. IPv6 limitations Currently, attempting to configure your Red Hat Quay deployment with the common Microsoft Azure Blob Storage configuration will not work in IPv6 single-stack environments. Because the endpoint of Microsoft Azure Blob Storage does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4433 . Currently, attempting to configure your Red Hat Quay deployment with Amazon S3 CloudFront will not work in IPv6 single-stack environments. Because the endpoint of Amazon S3 CloudFront does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4470 .
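In addition to the health check shown earlier, the following commands can help confirm that the deployment is actually answering over IPv6. The namespace and endpoint are placeholders for your own environment:
$ curl -6 <quay_endpoint>/health/instance   # -6 forces curl to connect over IPv6 only
$ oc -n <quay_namespace> get pods -o wide   # the IP column for the Quay pods should show IPv6 addresses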
[ "FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false", "curl <quay_endpoint>/health/instance {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_operator_features/operator-ipv6-dual-stack
Chapter 10. Understanding and creating service accounts
Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44 10.3. Examples of granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: USD oc policy add-role-to-user view system:serviceaccount:top-secret:robot You can also grant access to a specific service account in a project. For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> USD oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. 
For example, to allow all service accounts in all projects to view resources in the top-secret project: USD oc policy add-role-to-group view system:serviceaccounts -n top-secret To allow all service accounts in the managers project to edit resources in the top-secret project: USD oc policy add-role-to-group edit system:serviceaccounts:managers -n top-secret
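The following short sketch ties these pieces together: it creates a service account, grants it a role with the -z flag, and uses the account's token against the API. The project name, service account name, and API server address are illustrative, and oc sa get-token is assumed to be available in this OpenShift Container Platform version:
# Create the service account in the build-tools project
oc create sa ci-bot -n build-tools
# Grant it the edit role in that project only
oc policy add-role-to-user edit -z ci-bot -n build-tools
# Retrieve its API token and use it as a bearer credential against the cluster API
TOKEN=$(oc sa get-token ci-bot -n build-tools)
curl -k -H "Authorization: Bearer $TOKEN" https://<api_server>:6443/apis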
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "oc policy add-role-to-user <role_name> -z <service_account_name>", "oc policy add-role-to-group view system:serviceaccounts -n top-secret", "oc policy add-role-to-group edit system:serviceaccounts:managers -n top-secret" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/understanding-and-creating-service-accounts
9.3. Configuring Publishing to an OCSP
9.3. Configuring Publishing to an OCSP The general process to configure publishing involves setting up a publisher to publish the certificates or CRLs to a specific location. There can be a single publisher or multiple publishers, depending on how many locations will be used. The locations can be split by certificates and CRLs or finer definitions, such as certificate type. Rules, which are associated with the publisher, determine which type to publish and to what location. Publishing to an OCSP Manager is a way to publish CRLs to a specific location for client verification. A publisher must be created and configured for each publishing location; publishers are not automatically created for publishing to the OCSP responder. Create a single publisher to publish everything to a single location, or create a publisher for every location to which CRLs will be published. Each location can contain a different kind of CRL. 9.3.1. Enabling Publishing to an OCSP with Client Authentication Log in to the Certificate Manager Console. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing , and then Publishers . Click Add to open the Select Publisher Plug-in Implementation window, which lists registered publisher modules. Select the OCSPPublisher module, then open the editor window. This is the publisher module that enables the Certificate Manager to publish CRLs to the Online Certificate Status Manager. The publisher ID must be an alphanumeric string with no spaces, like PublishCertsToOCSP . The host can be the fully-qualified domain name, such as ocspResponder.example.com , or an IPv4 or IPv6 address. The default path is the directory to send the CRL to, like /ocsp/agent/ocsp/addCRL . If client authentication is used ( enableClientAuth is checked), then the nickname field gives the nickname of the certificate to use for authentication. This certificate must already exist in the OCSP security database; this will usually be the CA subsystem certificate. Create a user entry for the CA on the OCSP Manager. The user is used to authenticate to the OCSP when sending a new CRL. Two things are required: Name the OCSP user entry after the CA server, like CA- hostname-EEport . Use whatever certificate was specified in the publisher configuration as the user certificate in the OCSP user account. This is usually the CA's subsystem certificate. Setting up subsystem users is covered in Section 15.3.2.1, "Creating Users" . After configuring the publisher, configure the rules for the published certificates and CRLs, as described in Section 9.5, "Creating Rules" . Note pkiconsole is being deprecated.
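Before configuring the publisher, you can confirm that the certificate you plan to reference in the nickname field is already present in the OCSP Manager's security database. The instance path and nickname below are assumptions; adjust them to match your deployment:
# List the certificates in the OCSP Manager's NSS database, then inspect the expected nickname
certutil -L -d /var/lib/pki/pki-tomcat/alias
certutil -L -d /var/lib/pki/pki-tomcat/alias -n "subsystemCert cert-pki-ca"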
[ "pkiconsole https://server.example.com:8443/ca" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/configuring_publishers_for_publishing_to_ocsp
Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service
Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service Red Hat Developer Hub 1.4 Red Hat Customer Content Services
[ "create namespace rhdh-operator", "-n rhdh-operator create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<user_name> \\ 1 --docker-password=<password> \\ 2 --docker-email=<email> 3", "cat <<EOF | kubectl -n rhdh-operator apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-catalog spec: sourceType: grpc image: registry.redhat.io/redhat/redhat-operator-index:v4.17 secrets: - \"rhdh-pull-secret\" displayName: Red Hat Operators EOF", "cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhdh-operator-group EOF", "cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhdh namespace: rhdh-operator spec: channel: fast installPlanApproval: Automatic name: rhdh source: redhat-catalog sourceNamespace: rhdh-operator startingCSV: rhdh-operator.v1.4.2 EOF", "-n rhdh-operator get pods -w", "-n rhdh-operator patch deployment rhdh.fast --patch '{\"spec\":{\"template\":{\"spec\":{\"imagePullSecrets\":[{\"name\":\"rhdh-pull-secret\"}]}}}}' --type=merge", "-n rhdh-operator edit configmap backstage-default-config", "db-statefulset.yaml: | apiVersion: apps/v1 kind: StatefulSet --- TRUNCATED --- spec: --- TRUNCATED --- restartPolicy: Always securityContext: # You can assign any random value as fsGroup fsGroup: 2000 serviceAccount: default serviceAccountName: default --- TRUNCATED ---", "deployment.yaml: | apiVersion: apps/v1 kind: Deployment --- TRUNCATED --- spec: securityContext: # You can assign any random value as fsGroup fsGroup: 3000 automountServiceAccountToken: false --- TRUNCATED ---", "service.yaml: | apiVersion: v1 kind: Service spec: # NodePort is required for the ALB to route to the Service type: NodePort --- TRUNCATED ---", "apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: https://<rhdh_dns_name> backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: \"USD{BACKEND_SECRET}\" baseUrl: https://<rhdh_dns_name> cors: origin: https://<rhdh_dns_name>", "apiVersion: v1 kind: Secret metadata: name: my-rhdh-secrets stringData: # TODO: See https://backstage.io/docs/auth/service-to-service-auth/#setup BACKEND_SECRET: \"xxx\"", "node-p'require(\"crypto\").randomBytes(24).toString(\"base64\")'", "patch serviceaccount default -p '{\"imagePullSecrets\": [{\"name\": \"rhdh-pull-secret\"}]}' -n <your_namespace>", "apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: # TODO: this the name of your Developer Hub instance name: my-rhdh spec: application: imagePullSecrets: - \"rhdh-pull-secret\" route: enabled: false appConfig: configMaps: - name: \"app-config-rhdh\" extraEnvs: secrets: - name: my-rhdh-secrets", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: # TODO: this the name of your Developer Hub Ingress name: my-rhdh annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.: alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-xxx:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\":443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' # TODO: Set your application domain name. 
external-dns.alpha.kubernetes.io/hostname: <rhdh_dns_name> spec: ingressClassName: alb rules: # TODO: Set your application domain name. - host: <rhdh_dns_name> http: paths: - path: / pathType: Prefix backend: service: # TODO: my-rhdh is the name of your `Backstage` custom resource. # Adjust if you changed it! name: backstage-my-rhdh port: name: http-backend", "helm repo add openshift-helm-charts https://charts.openshift.io/", "create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<user_name> \\ 1 --docker-password=<password> \\ 2 --docker-email=<email> 3", "global: # TODO: Set your application domain name. host: <your Developer Hub domain name> route: enabled: false upstream: service: # NodePort is required for the ALB to route to the Service type: NodePort ingress: enabled: true annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.: alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\":443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' # TODO: Set your application domain name. external-dns.alpha.kubernetes.io/hostname: <your rhdh domain name> backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: # you can assign any random value as fsGroup fsGroup: 2000 postgresql: image: pullSecrets: - rhdh-pull-secret primary: podSecurityContext: enabled: true # you can assign any random value as fsGroup fsGroup: 3000 volumePermissions: enabled: true", "helm install rhdh openshift-helm-charts/redhat-developer-hub [--version 1.4.2] --values /path/to/values.yaml" ]
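After the Operator or Helm installation completes, you can confirm that the AWS Application Load Balancer has been provisioned for the Ingress before relying on the DNS record. The following commands are a minimal verification sketch that reuses the example names from this procedure; my-rhdh and <your_namespace> are placeholders that you should adjust to your deployment:
$ kubectl -n <your_namespace> get pods
$ kubectl -n <your_namespace> get ingress my-rhdh -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
The second command prints the ALB host name. The external-dns.alpha.kubernetes.io/hostname annotation shown above is expected to map <rhdh_dns_name> to this load balancer.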
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/installing_red_hat_developer_hub_on_amazon_elastic_kubernetes_service/index
Chapter 3. Defining methods and metrics
Chapter 3. Defining methods and metrics An application plan sets limits and pricing rules for consumer access to your API. To enable enforcement of limits and rules, designate methods in your API for which to collect individual usage data or add metrics. Add a mapping rule to each designated method and each custom metric. The mapping rule specifies details about the usage data that you want to capture. For more information about methods and metrics, see Designating methods and adding metrics for capturing usage details . 3.1. Adding methods to products and backends Adding a method to a product or backend means that you are designating a method in your API for which you want to capture individual usage details. An application plan provides the ability to set a limit for each method that you add to a product or backend. The procedure for adding a method or metric to a product is similar to adding a method or metric to a backend. Procedure Navigate to [Your_product_name] > Integration > Methods & Metrics or [Your_backend_name] > Methods & Metrics . Click New method . In the Friendly name field, enter a short description of the method. This name is displayed in different sections of the 3scale Admin Portal. The friendly name must be unique for the product. Important Be careful with changing the system name of the methods or deleting them. These changes can break your already deployed 3scale integration if there are mapping rules pointing to the system name of the method. In the System name field, enter the name of the method in your API to use to report the usage through the 3scale Service Management API. The system name must conform to these rules: Unique in the product or backend Contain only alphanumeric characters, underscore _ , hyphen - or forward slash / No spaces Otherwise, you are free to decide what the system name looks like. It can be the same as the endpoint ( /status ), or, for example, it can include the method and the path ( GET_/status ). Optional: In the Description field, enter a more detailed description of the method. Click Create Method . Verification steps Added methods are available in your application plans. Next steps Edit limits and pricing rules for each method by going to [Your_product_name] > Applications > Application Plans > [plan_you_want_to_edit] . 3.2. Adding metrics to products and backends Adding a metric specifies a usage unit that you want to capture for all calls to your API. An application plan provides the ability to set a limit for each metric that you add to a product or backend. The procedure for adding a method or metric to a product is similar to adding a method or metric to a backend. Procedure Navigate to [Your_product_name] > Integration > Methods & Metrics or [Your_backend_name] > Methods & Metrics . Click New metric . In the Friendly name field, enter a short description of the metric. This name is displayed in different sections of the 3scale Admin Portal. The friendly name must be unique for the product. Important Be careful with changing the system name of the metrics or deleting them. These changes can break your already deployed 3scale integration if there are mapping rules pointing to the system name of the metric. In the System name field, enter the name of the metric in your API to use to report the usage through the 3scale Service Management API.
The system name must conform to these rules: Unique in the product or backend Contain only alphanumeric characters, underscore _ , hyphen - or forward slash / No spaces Otherwise, you are free to decide what the system name looks like. In the Unit field, enter the unit. Use a singular noun, for example, hit . The singular will become plural in the analytics charts. Optional: In the Description field, enter a more detailed description of the metric. Click Create Metric . Verification steps Added metrics are available in your application plans. Next steps Edit limits and pricing rules for each metric by going to [Your_product_name] > Applications > Application Plans > [plan_you_want_to_edit] . Map your metrics to one or more URL patterns by going to [Your_product_name] > Integration > Mapping Rules . See Adding mapping rules to methods and metrics . 3.3. Alternatives for importing methods and metrics If your API has multiple endpoints, there are two ways to automatically designate methods and add metrics to 3scale products and backends: Importing via Swagger spec . Importing via RAML spec . 3.4. Adding mapping rules to methods and metrics Mapping rules are operations that are mapped to previously created methods and metrics in your products and backends. Note Mapping rules are required for your previously created methods; however, they are optional for metrics. Procedure Navigate to [Your_product_name] > Integration > Mapping Rules . Click Add Mapping Rule . The Verb field is pre-populated with the HTTP method GET ; however, you can select other options from the dropdown list. In the Pattern field, add a valid URL that starts with a forward slash / . The URL can include a wildcard specified inside curly brackets {} . In the Metric or Method to increment field, select one of your previously created methods or metrics. The Increment by field is pre-populated with 1 ; however, you can change this value to suit your needs. Click the Create Mapping Rule button. Verification steps To verify your mapping rules, navigate to [Your_product_name] > Integration > Methods & Metrics . Each method and metric should have a check mark in the Mapped column.
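After a method or metric is mapped, usage is reported to the 3scale Service Management API when traffic flows through the gateway. The following curl call is a minimal, hedged illustration of how a hit for a method system name could be reported or checked manually; the status system name, the user_key authentication mode, and the <service_management_api_host> , <service_token> , <service_id> , and <user_key> placeholders are assumptions for this example and are not part of the procedure above:
$ curl -g "https://<service_management_api_host>/transactions/authrep.xml?service_token=<service_token>&service_id=<service_id>&user_key=<user_key>&usage[status]=1"
The -g option stops curl from interpreting the square brackets in the usage parameter.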
null
https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/administering_red_hat_openshift_api_management/defining-methods-metrics_rhoam-defining-methods-metrics
6.4. Resource Meta Options
6.4. Resource Meta Options In addition to the resource-specific parameters, you can configure additional resource options for any resource. These options are used by the cluster to decide how your resource should behave. Table 6.3, "Resource Meta Options" describes these options. Table 6.3. Resource Meta Options Field Default Description priority 0 If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active. target-role Started What state should the cluster attempt to keep this resource in? Allowed values: * Stopped - Force the resource to be stopped * Started - Allow the resource to be started (In the case of multistate resources, they will not be promoted to master) * Master - Allow the resource to be started and, if appropriate, promoted is-managed true Is the cluster allowed to start and stop the resource? Allowed values: true , false resource-stickiness 0 Value to indicate how much the resource prefers to stay where it is. requires Calculated Indicates under what conditions the resource can be started. Defaults to fencing except under the conditions noted below. Possible values: * nothing - The cluster can always start the resource. * quorum - The cluster can only start this resource if a majority of the configured nodes are active. This is the default value if stonith-enabled is false or the resource's standard is stonith . * fencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off. * unfencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off and only on nodes that have been unfenced . This is the default value if the provides=unfencing stonith meta option has been set for a fencing device. migration-threshold INFINITY How many failures may occur for this resource on a node before this node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible); by contrast, the cluster treats INFINITY (the default) as a very large but finite number. This option has an effect only if the failed operation has on-fail=restart (the default), and additionally for failed start operations if the cluster property start-failure-is-fatal is false . For information on configuring the migration-threshold option, see Section 8.2, "Moving Resources Due to Failure" . For information on the start-failure-is-fatal option, see Table 12.1, "Cluster Properties" . failure-timeout 0 (disabled) Used in conjunction with the migration-threshold option, indicates how many seconds to wait before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. As with any time-based actions, this is not guaranteed to be checked more frequently than the value of the cluster-recheck-interval cluster parameter. For information on configuring the failure-timeout option, see Section 8.2, "Moving Resources Due to Failure" . multiple-active stop_start What should the cluster do if it ever finds the resource active on more than one node? Allowed values: * block - mark the resource as unmanaged * stop_only - stop all active instances and leave them that way * stop_start - stop all active instances and start the resource in one location only To change the default value of a resource option, use the following command.
For example, the following command resets the default value of resource-stickiness to 100. Omitting the options parameter from the pcs resource defaults command displays a list of currently configured default values for resource options. The following example shows the output of this command after you have reset the default value of resource-stickiness to 100. Whether you have reset the default value of a resource meta option or not, you can set a resource option for a particular resource to a value other than the default when you create the resource. The following shows the format of the pcs resource create command you use when specifying a value for a resource meta option. For example, the following command creates a resource with a resource-stickiness value of 50. You can also set the value of a resource meta option for an existing resource, group, cloned resource, or master resource with the following command. In the following example, there is an existing resource named dummy_resource . This command sets the failure-timeout meta option to 20 seconds, so that the resource can attempt to restart on the same node in 20 seconds. After executing this command, you can display the values for the resource to verify that failure-timeout=20s is set. For information on resource clone meta options, see Section 9.1, "Resource Clones" . For information on resource master meta options, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" .
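As an additional illustration, several of the meta options described in Table 6.3 can be combined in a single pcs resource create command. The following is a sketch only; the WebServer resource name and the option values are illustrative and are not part of the original example set:
pcs resource create WebServer ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s meta migration-threshold=3 failure-timeout=120s resource-stickiness=50
With these values, three failures on a node make that node ineligible to host WebServer , and the failure count is expired after 120 seconds so the resource may return to that node.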
[ "pcs resource defaults options", "pcs resource defaults resource-stickiness=100", "pcs resource defaults resource-stickiness:100", "pcs resource create resource_id standard:provider:type | type [ resource options ] [meta meta_options ...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 meta resource-stickiness=50", "pcs resource meta resource_id | group_id | clone_id | master_id meta_options", "pcs resource meta dummy_resource failure-timeout=20s", "pcs resource show dummy_resource Resource: dummy_resource (class=ocf provider=heartbeat type=Dummy) Meta Attrs: failure-timeout=20s Operations: start interval=0s timeout=20 (dummy_resource-start-timeout-20) stop interval=0s timeout=20 (dummy_resource-stop-timeout-20) monitor interval=10 timeout=20 (dummy_resource-monitor-interval-10)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-resourceopts-HAAR
Appendix E. Swift response headers
Appendix E. Swift response headers The response from the server should include an X-Auth-Token value. The response might also contain an X-Storage-Url that provides the API_VERSION / ACCOUNT prefix that is specified in other requests throughout the API documentation. Table E.1. Response Headers Name Description Type X-Storage-Token The authorization token for the X-Auth-User specified in the request. String X-Storage-Url The URL and API_VERSION / ACCOUNT path for the user. String
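These headers are returned by the authentication request. The following curl call is a hedged sketch of how the response headers might be inspected; the /auth/1.0 path, the port, and the testuser:swift subuser are assumptions that depend on how your Ceph Object Gateway and Swift subuser are configured:
$ curl -i -H "X-Auth-User: testuser:swift" -H "X-Auth-Key: <swift_secret_key>" https://<rgw_host>:8080/auth/1.0
The -i option prints the response headers, including the X-Auth-Token and X-Storage-Url values described above.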
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/developer_guide/swift-response-headers_dev
Chapter 24. Validating an installation
Chapter 24. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 24.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: USD cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 24.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: USD oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example. 
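Because CRI-O logs can be verbose, you might find it easier to filter for the relevant entries directly. The following pipeline is a simple sketch that combines the documented command with grep:
$ oc adm node-logs <node_name> -u crio | grep "Trying to access"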
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 24.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: USD oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: USD oc describe clusterversion List the current update channel: USD oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: USD oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster between minor versions for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 24.4. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.23.0 control-plane-1.example.com Ready master 41m v1.23.0 control-plane-2.example.com Ready master 45m v1.23.0 compute-2.example.com Ready worker 38m v1.23.0 compute-3.example.com Ready worker 33m v1.23.0 control-plane-3.example.com Ready master 41m v1.23.0 Review CPU and memory resource availability for each cluster node: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 24.5. 
Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home Overview . 24.6. Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home Overview Details Cluster ID OpenShift Cluster Manager to open your cluster's Overview tab in the OpenShift Cluster Manager web console. From the Overview tab on OpenShift Cluster Manager , review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 24.7. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 24.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. 
Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See Monitoring overview for more information about the OpenShift Container Platform monitoring stack. 24.8. Listing alerts that are firing Alerts provide notifications when a set of defined conditions is true in an OpenShift Container Platform cluster. You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to the Observe Alerting Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts for further details about alerting in OpenShift Container Platform. 24.9. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster .
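If you prefer to script the checks described in this chapter rather than polling manually, oc wait can block until the cluster has settled. The following commands are an optional convenience rather than part of the documented procedure, and the timeout values are arbitrary examples:
$ oc wait --for=condition=Available clusteroperator --all --timeout=15m
$ oc wait --for=condition=Ready nodes --all --timeout=10m
Each command returns once every cluster Operator reports Available , or every node reports Ready , and fails if the condition is not met within the timeout.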
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.23.0 control-plane-1.example.com Ready master 41m v1.23.0 control-plane-2.example.com Ready master 45m v1.23.0 compute-2.example.com Ready worker 38m v1.23.0 compute-3.example.com Ready worker 33m v1.23.0 control-plane-3.example.com Ready master 41m v1.23.0", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/installing/validating-an-installation
Chapter 8. Hardening the Shared File System (Manila)
Chapter 8. Hardening the Shared File System (Manila) The Shared File Systems service (manila) provides a set of services for managing shared file systems in a multi-project cloud environment. It is similar to how OpenStack provides block-based storage management through the Block Storage service (cinder) project. With manila, you can create a shared file system and manage its properties, such as visibility, accessibility, and usage quotas. For more information on manila, see the Storage Guide: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/storage_guide/ 8.1. Security considerations for manila Manila is registered with keystone, allowing you to locate the API using the manila endpoints command. For example: By default, the manila API service only listens on port 8786 with tcp6 , which supports both IPv4 and IPv6. Manila uses multiple configuration files; these are stored in /var/lib/config-data/puppet-generated/manila/ : It is recommended that you configure manila to run under a non-root service account, and change file permissions so that only the system administrator can modify them. Manila expects that only administrators can write to configuration files, and services can only read them through their group membership in the manila group. Other users must not be able to read these files, as they contain service account passwords. Note Only the root user should own, and be able to write to, the configuration for manila-rootwrap in rootwrap.conf , and the manila-rootwrap command filters for share nodes in rootwrap.d/share.filters . 8.2. Network and security models for manila A share driver in manila is a Python class that can be set for the back end to manage share operations, some of which are vendor-specific. The back end is an instance of the manila-share service. Manila has share drivers for many different storage systems, supporting both commercial vendors and open source solutions. Each share driver supports one or more back end modes: share servers and no share servers . An administrator selects a mode by specifying it in manila.conf , using driver_handles_share_servers . A share server is a logical Network Attached Storage (NAS) server that exports shared file systems. Back-end storage systems today are sophisticated and can isolate data paths and network paths between different OpenStack projects. A share server provisioned by a manila share driver would be created on an isolated network that belongs to the project user creating it. The share servers mode can be configured with either a flat network or a segmented network, depending on the network provider. It is possible to have separate drivers for different modes use the same hardware. Depending on the chosen mode, you might need to provide more configuration details through the configuration file. 8.3. Share backend modes Each share driver supports at least one of the available driver modes: Share servers - driver_handles_share_servers = True - The share driver creates share servers and manages the share server life cycle. No share servers - driver_handles_share_servers = False - An administrator (rather than a share driver) manages the bare metal storage with a network interface, instead of relying on the presence of the share servers. No share servers mode - In this mode, drivers will not set up share servers, and consequently will not need to set up any new network interfaces.
It is assumed that storage controller being managed by the driver has all of the network interfaces it is going to need. Drivers create shares directly without previously creating a share server. To create shares using drivers operating in this mode, manila does not require users to create any private share networks either. Note In no share servers mode , manila will assume that the network interfaces through which any shares are exported are already reachable by all projects. In the no share servers mode a share driver does not handle share server life cycle. An administrator is expected to handle the storage, networking, and other host-side configuration that might be necessary to provide project isolation. In this mode an administrator can set storage as a host which exports shares. All projects within the OpenStack cloud share a common network pipe. Lack of isolation can impact security and quality of service. When using share drivers that do not handle share servers, cloud users cannot be sure that their shares cannot be accessed by untrusted users by a tree walk over the top directory of their file systems. In public clouds it is possible that all network bandwidth is used by one client, so an administrator should care for this not to happen. Network balancing can be done by any means, and not necessarily just with OpenStack tools. Share servers mode - In this mode, a driver is able to create share servers and plug them to existing OpenStack networks. Manila determines if a new share server is required, and provides all the networking information necessary for the share drivers to create the requisite share server. When creating shares in the driver mode that handles share servers, users must provide a share network that they expect their shares to be exported upon. Manila uses this network to create network ports for the share server on this network. Users can configure security services in both share servers and no share servers back end modes. But with the no share servers back end mode, an administrator must set the required authentication services manually on the host. And in share servers mode manila can configure security services identified by the users on the share servers it spawns. 8.4. Networking requirements for manila Manila can integrate with different network types: flat , GRE , VLAN , VXLAN . Note Manila is only storing the network information in the database, with the real networks being supplied by the network provider. Manila supports using the OpenStack Networking service (neutron) and also "standalone" pre-configured networking. In the share servers back end mode, a share driver creates and manages a share server for each share network. This mode can be divided in two variations: Flat network in share servers backend mode Segmented network in share servers backend mode Users can use a network and subnet from the OpenStack Networking (neutron) service to create share networks. If the administrator decides to use the StandAloneNetworkPlugin , users need not provide any networking information since the administrator pre-configures this in the configuration file. Note Share servers spawned by some share drivers are Compute servers created with the Compute service. A few of these drivers do not support network plugins. After a share network is created, manila retrieves network information determined by a network provider: network type, segmentation identifier (if the network uses segmentation) and the IP block in CIDR notation from which to allocate the network. 
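For back ends where driver_handles_share_servers = True , the share network is typically created from an existing neutron network and subnet before any shares are created. The commands below are an illustrative sketch only; the network IDs, the share network name, and the share name are placeholders:
$ manila share-network-create --name project_share_net --neutron-net-id <neutron_net_id> --neutron-subnet-id <neutron_subnet_id>
$ manila create NFS 1 --name my_share --share-network project_share_net
The second command creates a 1 GiB NFS share that is exported from a share server that manila plugs into the specified network.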
Users can create security services that specify security requirements such as AD or LDAP domains or a Kerberos realm. Manila assumes that any hosts referred to in security service are reachable from a subnet where a share server is created, which limits the number of cases where this mode could be used. Note Some share drivers might not support all types of segmentation, for more details see the specification for the driver you are using. 8.5. Security services with manila Manila can restrict access to file shares by integrating with network authentication protocols. Each project can have its own authentication domain that functions separately from the cloud's keystone authentication domain. This project domain can be used to provide authorization (AuthZ) services to applications that run within the OpenStack cloud, including manila. Available authentication protocols include LDAP, Kerberos, and Microsoft Active Directory authentication service. 8.5.1. Introduction to security services After creating a share and getting its export location, users have no permissions to mount it and operate with files. Users need to explicitly grant access to the new share. The client authentication and authorization (authN/authZ) can be performed in conjunction with security services. Manila can use LDAP, Kerberos, or Microsoft Active directory if they are supported by the share drivers and back ends. Note In some cases, it is required to explicitly specify one of the security services, for example, NetApp, EMC and Windows drivers require Active Directory for the creation of shares with the CIFS protocol. 8.5.2. Security services management A security service is a manila entity that abstracts a set of options that define a security zone for a particular shared file system protocol, such as an Active Directory domain or a Kerberos domain. The security service contains all of the information necessary for manila to create a server that joins a given domain. Using the API, users can create, update, view, and delete a security service. Security Services are designed on the following assumptions: Projects provide details for the security service. Administrators care about security services: they configure the server side of such security services. Inside the manila API, a security_service is associated with the share_networks . Share drivers use data in the security service to configure newly created share servers. When creating a security service, you can select one of these authentication services: LDAP - The Lightweight Directory Access Protocol. An application protocol for accessing and maintaining distributed directory information services over an IP network. Kerberos - The network authentication protocol which works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Active Directory - A directory service that Microsoft developed for Windows domain networks. Uses LDAP, Microsoft's version of Kerberos, and DNS. Manila allows you to configure a security service with these options: A DNS IP address that is used inside the project network. An IP address or hostname of a security service. A domain of a security service. A user or group name that is used by a project. A password for a user, if you specify a username. An existing security service entity can be associated with share network entities that inform manila about security and network configuration for a group of shares. 
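As a sketch of the workflow described above, a project user could define an Active Directory security service and associate it with a share network before creating CIFS shares. The DNS IP address, domain, user, and names below are placeholders for illustration:
$ manila security-service-create active_directory --dns-ip 192.0.2.10 --domain example.com --user Administrator --password <password> --name my_ad_service
$ manila share-network-security-service-add project_share_net my_ad_service
Share servers spawned on project_share_net can then join the example.com domain with these credentials.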
You can also see the list of all security services for a specified share network and disassociate them from a share network. An administrator and users as share owners can manage access to the shares by creating access rules with authentication through an IP address, user, group, or TLS certificates. Authentication methods depend on which share driver and security service you configure and use. You can then configure a back end to use a specific authentication service, which can operate with clients without manila and keystone. Note Different authentication services are supported by different share drivers. For details of supporting of features by different drivers, see https://docs.openstack.org/manila/latest/admin/share_back_ends_feature_support_mapping.html Support for a specific authentication service by a driver does not mean that it can be configured with any shared file system protocol. Supported shared file systems protocols are NFS, CEPHFS, CIFS, GlusterFS, and HDFS. See the driver vendor's documentation for information on a specific driver and its configuration for security services. Some drivers support security services and other drivers do not support any of the security services mentioned above. For example, Generic Driver with the NFS or the CIFS shared file system protocol supports only authentication method through the IP address. Note In most cases, drivers that support the CIFS shared file system protocol can be configured to use Active Directory and manage access through the user authentication. Drivers that support the GlusterFS protocol can be used with authentication using TLS certificates. With drivers that support NFS protocol authentication using an IP address is the only supported option. Since the HDFS shared file system protocol uses NFS access it also can be configured to authenticate using an IP address. The recommended configuration for production manila deployments is to create a share with the CIFS share protocol and add to it the Microsoft Active Directory directory service. With this configuration you will get the centralized database and the service that integrates the Kerberos and LDAP approaches. 8.6. Share access control Users can specify which specific clients have access to the shares they create. Due to the keystone service, shares created by individual users are only visible to themselves and other users within the same project. Manila allows users to create shares that are "publicly" visible. These shares are visible in dashboards of users that belong to other OpenStack projects if the owners grant them access, they might even be able to mount these shares if they are made accessible on the network. While creating a share, use key --public to make your share public for other projects to see it in a list of shares and see its detailed information. According to the policy.json file, an administrator and the users as share owners can manage access to shares by means of creating access rules. Using the manila access-allow , manila access-deny , and manila access-list commands, you can grant, deny and list access to a specified share correspondingly. Note Manila does not provide end-to-end management of the storage system. You will still need to separately protect the backend system from unauthorized access. As a result, the protection offered by the manila API can still be circumvented if someone compromises the backend storage device, thereby gaining out of band access. 
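For example, read-only access for a client subnet can be granted and then verified as follows; the share name and the CIDR are illustrative placeholders:
$ manila access-allow my_share ip 203.0.113.0/24 --access-level ro
$ manila access-list my_share
The access-list output shows the access type, the client, the access level, and the state of each rule on the share.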
When a share is just created there are no default access rules associated with it and permission to mount it. This could be seen in mounting config for export protocol in use. For example, there is an NFS command exportfs or /etc/exports file on the storage which controls each remote share and defines hosts that can access it. It is empty if nobody can mount a share. For a remote CIFS server there is net conf list command which shows the configuration. The hosts deny parameter should be set by the share driver to 0.0.0.0/0 which means that any host is denied to mount the share. Using manila, you can grant or deny access to a share by specifying one of these supported share access levels: rw - Read and write (RW) access. This is the default value. ro - Read-only (RO) access. Note The RO access level can be helpful in public shares when the administrator gives read and write (RW) access for some certain editors or contributors and gives read-only (RO) access for the rest of users (viewers). You must also specify one of these supported authentication methods: ip - Uses an IP address to authenticate an instance. IP access can be provided to clients addressable by well-formed IPv4 or IPv6 addresses or subnets denoted in CIDR notation. cert - Uses a TLS certificate to authenticate an instance. Specify the TLS identity as the IDENTKEY . A valid value is any string up to 64 characters long in the common name (CN) of the certificate. user - Authenticates by a specified user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long. Note Supported authentication methods depend on which share driver, security service and shared file system protocol you use. Supported shared file system protocols are MapRFS, CEPHFS, NFS, CIFS, GlusterFS, and HDFS. Supported security services are LDAP, Kerberos protocols, or Microsoft Active Directory service. To verify that access rules (ACL) were configured correctly for a share, you can list its permissions. Note When selecting a security service for your share, you will need to consider whether the share driver is able to create access rules using the available authentication methods. Supported security services are LDAP, Kerberos, and Microsoft Active Directory. 8.7. Share type access control A share type is an administrator-defined type of service , comprised of a project visible description, and a list of non-project-visible key-value pairs called extra specifications . The manila-scheduler uses extra specifications to make scheduling decisions, and drivers control the share creation. An administrator can create and delete share types, and can also manage extra specifications that give them meaning inside manila. Projects can list the share types and can use them to create new shares. Share types can be created as public and private . This is the level of visibility for the share type that defines whether other projects can or cannot see it in a share types list and use it to create a new share. By default, share types are created as public. While creating a share type, use --is_public parameter set to False to make your share type private which will prevent other projects from seeing it in a list of share types and creating new shares with it. On the other hand, public share types are available to every project in a cloud. Manila allows an administrator to grant or deny access to the private share types for projects. 
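A private share type such as the my_type example used below is created with the --is_public flag set to false; the driver_handles_share_servers value in this sketch is only an illustration:
$ manila type-create my_type false --is_public false
$ manila type-access-list my_type
The type-access-list command shows which projects have been granted access to the private share type.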
You can also get information about the access for a specified private share type. Note Since share types due to their extra specifications help to filter or choose back ends before users create a share, using access to the share types you can limit clients in choice of specific back ends. For example, an administrator user in the admin project can create a private share type named my_type and see it in the list. In the console examples below, the logging in and out is omitted, and environment variables are provided to show the currently logged in user. The demo user in the demo project can list the types and the private share type named my_type is not visible for him. The administrator can grant access to the private share type for the demo project with the project ID equal to df29a37db5ae48d19b349fe947fada46 : As a result, users in the demo project can see the private share type and use it in the share creation: To deny access for a specified project, use manila type-access-remove <share_type> <project_id> . Note For an example that demonstrates the purpose of the share types, consider a situation where you have two back ends: LVM as a public storage and Ceph as a private storage. In this case you can grant access to certain projects and control access with user/group authentication method. 8.8. Policies The Shared File Systems service API is gated with role-based access control policies. These policies determine which user can access certain APIs in a certain way, and are defined in the service's policy.json file. Note The configuration file policy.json may be placed anywhere. The path /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json is expected by default. Whenever an API call is made to manila, the policy engine uses the appropriate policy definitions to determine if the call can be accepted. A policy rule determines under which circumstances the API call is permitted. The /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json file has rules where an action is always permitted, when the rule is an empty string: "" ; the rules based on the user role or rules; rules with boolean expressions. Below is a snippet of the policy.json file for manila. It can be expected to change between OpenStack releases. Users must be assigned to groups and roles that you refer to in your policies. This is done automatically by the service when user management commands are used. Note Any changes to /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json are effective immediately, which allows new policies to be implemented while manila is running. Manual modification of the policy can have unexpected side effects and is not encouraged. Manila does not provide a default policy file; all the default policies are within the code base. You can generate the default policies from the manila code by executing: oslopolicy-sample-generator --config-file=var/lib/config-data/puppet-generated/manila/etc/manila/manila-policy-generator.conf
[ "manila endpoints +-------------+-----------------------------------------+ | manila | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v1/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v1/20787a7b...| | internalURL | http://172.18.198.55:8786/v1/20787a7b...| | id | 82cc5535aa444632b64585f138cb9b61 | +-------------+-----------------------------------------+ +-------------+-----------------------------------------+ | manilav2 | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v2/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v2/20787a7b...| | internalURL | http://172.18.198.55:8786/v2/20787a7b...| | id | 2e8591bfcac4405fa7e5dc3fd61a2b85 | +-------------+-----------------------------------------+", "api-paste.ini manila.conf policy.json rootwrap.conf rootwrap.d ./rootwrap.d: share.filters", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+----------------------------------+----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+----------------------------------+----------------------+ | 5..| default| public | YES | driver_handles_share_servers:True| snapshot_support:True| +----+--------+-----------+-----------+----------------------------------+----------------------+", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD openstack project list +----------------------------------+--------------------+ | ID | Name | +----------------------------------+--------------------+ | ... | ... 
| | df29a37db5ae48d19b349fe947fada46 | demo | +----------------------------------+--------------------+ USD manila type-access-add my_type df29a37db5ae48d19b349fe947fada46", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "{ \"context_is_admin\": \"role:admin\", \"admin_or_owner\": \"is_admin:True or project_id:%(project_id)s\", \"default\": \"rule:admin_or_owner\", \"share_extension:quotas:show\": \"\", \"share_extension:quotas:update\": \"rule:admin_api\", \"share_extension:quotas:delete\": \"rule:admin_api\", \"share_extension:quota_classes\": \"\", }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/hardening_the_shared_file_system_manila
Chapter 1. SSL and TLS for Red Hat Quay
Chapter 1. SSL and TLS for Red Hat Quay The Secure Sockets Layer (SSL) protocol was originally developed by Netscape Corporation to provide a mechanism for secure communication over the Internet. Subsequently, the protocol was adopted by the Internet Engineering Task Force (IETF) and renamed to Transport Layer Security (TLS). TLS (Transport Layer Security) is a cryptographic protocol used to secure network communications. When hardening system security settings by configuring preferred key-exchange protocols, authentication methods, and encryption algorithms, it is necessary to bear in mind that the broader the range of supported clients, the lower the resulting security. Conversely, strict security settings lead to limited compatibility with clients, which can result in some users being locked out of the system. Be sure to target the strictest available configuration and only relax it when it is required for compatibility reasons. Red Hat Quay can be configured to use SSL/TLS certificates to ensure secure communication between clients and the Red Hat Quay server. This configuration involves the use of valid SSL/TLS certificates, which can be obtained from a trusted Certificate Authority (CA) or generated as self-signed certificates for internal use. 1.1. Creating a Certificate Authority Use the following procedure to set up your own CA and use it to issue a server certificate for your domain. This allows you to secure communications with SSL/TLS using your own certificates. Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: Example openssl.cnf file [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf Confirm your created certificates and files by entering the following 
command: USD ls /path/to/certificates Example output rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr 1.2. Configuring SSL/TLS for standalone Red Hat Quay deployments For standalone Red Hat Quay deployments, SSL/TLS certificates must be configured by using the command-line interface and by updating your config.yaml file manually. 1.2.1. Configuring custom SSL/TLS certificates by using the command line interface SSL/TLS must be configured by using the command-line interface (CLI) and updating your config.yaml file manually. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory Navigate to the configuration directory by entering the following command: USD cd /path/to/configuration_directory Edit the config.yaml file and specify that you want Red Hat Quay to handle SSL/TLS: Example config.yaml file # ... SERVER_HOSTNAME: <quay-server.example.com> ... PREFERRED_URL_SCHEME: https # ... Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop <quay_container_name> Restart the registry by entering the following command: 1.2.2. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 1.2.3. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority. Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates . 1.3. Configuring custom SSL/TLS certificates for Red Hat Quay on OpenShift Container Platform When Red Hat Quay is deployed on OpenShift Container Platform, the tls component of the QuayRegistry custom resource definition (CRD) is set to managed by default. 
As a result, OpenShift Container Platform's Certificate Authority is used to create HTTPS endpoints and to rotate SSL/TLS certificates. You can configure custom SSL/TLS certificates before or after the initial deployment of Red Hat Quay on OpenShift Container Platform. This process involves creating or updating the configBundleSecret resource within the QuayRegistry YAML file to integrate your custom certificates and setting the tls component to unmanaged . Important When configuring custom SSL/TLS certificates for Red Hat Quay, administrators are responsible for certificate rotation. The following procedures enable you to apply custom SSL/TLS certificates to ensure secure communication and meet specific security requirements for your Red Hat Quay on OpenShift Container Platform deployment. These steps assume you have already created a Certificate Authority (CA) bundle or an ssl.key , and an ssl.cert . The procedure then shows you how to integrate those files into your Red Hat Quay on OpenShift Container Platform deployment, which ensures that your registry operates with the specified security settings and conforms to your organization's SSL/TLS policies. Note The following procedure is used for securing Red Hat Quay with an HTTPS certificate. Note that this differs from managing Certificate Authority Trust Bundles. CA Trust Bundles are used by system processes within the Quay container to verify certificates against trusted CAs, and ensure that services like LDAP, storage backend, and OIDC connections are trusted. If you are adding the certificates to an existing deployment, you must include the existing config.yaml file in the new config bundle secret, even if you are not making any configuration changes. 1.3.1. Creating a custom SSL/TLS configBundleSecret resource After creating your custom SSL/TLS certificates, you can create a custom configBundleSecret resource for Red Hat Quay on OpenShift Container Platform, which allows you to upload ssl.cert and ssl.key files. Prerequisites You have base64 decoded the original config bundle into a config.yaml file. For more information, see Downloading the existing configuration . You have generated custom SSL certificates and keys. Procedure Create a new YAML file, for example, custom-ssl-config-bundle-secret.yaml : USD touch custom-ssl-config-bundle-secret.yaml Create the custom-ssl-config-bundle-secret resource by entering the following command: USD oc -n <namespace> create secret generic custom-ssl-config-bundle-secret \ --from-file=config.yaml=</path/to/config.yaml> \ 1 --from-file=ssl.cert=</path/to/ssl.cert> \ 2 --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt \ 3 --from-file=ssl.key=</path/to/ssl.key> \ 4 --dry-run=client -o yaml > custom-ssl-config-bundle-secret.yaml 1 Where <config.yaml> is your base64 decoded config.yaml file. 2 Where <ssl.cert> is your ssl.cert file. 3 Optional. The --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt field allows Red Hat Quay to recognize custom Certificate Authority (CA) files. If you are using LDAP, OIDC, or another service that uses custom CAs, you must add them via the extra_ca_cert path. For more information, see "Adding additional Certificate Authorities to Red Hat Quay on OpenShift Container Platform." 4 Where <ssl.key> is your ssl.key file. Optional.
You can check the content of the custom-ssl-config-bundle-secret.yaml file by entering the following command: USD cat custom-ssl-config-bundle-secret.yaml Example output apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKRElTVFJJQlVURURfU1R... ssl.cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDekFKQmdOVkJBWVR... extra_ca_cert_<name-of-certificate>:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDe... ssl.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2c0VWxZOVV1SVJPY1oKcFhpZk9MVEdqaS9neUxQMlpiMXQ... kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace> Create the configBundleSecret resource by entering the following command: USD oc create -n <namespace> -f custom-ssl-config-bundle-secret.yaml Example output secret/custom-ssl-config-bundle-secret created Update the QuayRegistry YAML file to reference the custom-ssl-config-bundle-secret object by entering the following command: USD oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"custom-ssl-config-bundle-secret"}}' Example output quayregistry.quay.redhat.com/example-registry patched Set the tls component of the QuayRegistry YAML to false by entering the following command: USD oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"components":[{"kind":"tls","managed":false}]}}' Example output quayregistry.quay.redhat.com/example-registry patched Ensure that your QuayRegistry YAML file has been updated to use the custom SSL configBundleSecret resource, and that your and tls resource is set to false by entering the following command: USD oc get quayregistry <registry_name> -n <namespace> -o yaml Example output # ... configBundleSecret: custom-ssl-config-bundle-secret # ... spec: components: - kind: tls managed: false # ... Verification Confirm a TLS connection to the server and port by entering the following command: USD openssl s_client -connect <quay-server.example.com>:443 Example output # ... SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 0E995850DC3A8EB1A838E2FF06CE56DBA81BD8443E7FA05895FBD6FBDE9FE737 Session-ID-ctx: Resumption PSK: 1EA68F33C65A0F0FA2655BF9C1FE906152C6E3FEEE3AEB6B1B99BA7C41F06077989352C58E07CD2FBDC363FA8A542975 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 7200 (seconds) # ...
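Before building a configuration bundle around these files, it can help to sanity-check them. The following is a minimal, optional sketch using standard OpenSSL commands; it assumes the ssl.cert and ssl.key files generated in the procedure above and the example hostname quay-server.example.com:

# Confirm that the Subject Alternative Name covers the Quay hostname
openssl x509 -in ssl.cert -noout -text | grep -A1 "Subject Alternative Name"

# Confirm that the certificate and the key belong together (the two digests must match)
openssl x509 -noout -modulus -in ssl.cert | openssl md5
openssl rsa -noout -modulus -in ssl.key | openssl md5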
[ "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "ls /path/to/certificates", "rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr", "cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory", "cd /path/to/configuration_directory", "SERVER_HOSTNAME: <quay-server.example.com> PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop <quay_container_name>", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay", "touch custom-ssl-config-bundle-secret.yaml", "oc -n <namespace> create secret generic custom-ssl-config-bundle-secret --from-file=config.yaml=</path/to/config.yaml> \\ 1 --from-file=ssl.cert=</path/to/ssl.cert> \\ 2 --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt \\ 3 --from-file=ssl.key=</path/to/ssl.key> \\ 4 --dry-run=client -o yaml > custom-ssl-config-bundle-secret.yaml", "cat custom-ssl-config-bundle-secret.yaml", "apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKRElTVFJJQlVURURfU1R ssl.cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDekFKQmdOVkJBWVR extra_ca_cert_<name-of-certificate>:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDe ssl.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2c0VWxZOVV1SVJPY1oKcFhpZk9MVEdqaS9neUxQMlpiMXQ kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace>", "oc create -n <namespace> -f 
custom-ssl-config-bundle-secret.yaml", "secret/custom-ssl-config-bundle-secret created", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"configBundleSecret\":\"custom-ssl-config-bundle-secret\"}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"components\":[{\"kind\":\"tls\",\"managed\":false}]}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc get quayregistry <registry_name> -n <namespace> -o yaml", "configBundleSecret: custom-ssl-config-bundle-secret spec: components: - kind: tls managed: false", "openssl s_client -connect <quay-server.example.com>:443", "SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 0E995850DC3A8EB1A838E2FF06CE56DBA81BD8443E7FA05895FBD6FBDE9FE737 Session-ID-ctx: Resumption PSK: 1EA68F33C65A0F0FA2655BF9C1FE906152C6E3FEEE3AEB6B1B99BA7C41F06077989352C58E07CD2FBDC363FA8A542975 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 7200 (seconds)" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/securing_red_hat_quay/ssl-tls-quay-overview
function::switch_file
function::switch_file Name function::switch_file - switch to the output file Synopsis Arguments None Description This function sends a signal to the stapio process, commanding it to rotate to the next output file when output is sent to file(s).
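As a hedged usage sketch only, the function can be driven from a timer probe when stap output is directed to a file with -o; the 60-second rotation interval and the second probe are arbitrary examples:

stap -o /tmp/trace.log -e 'probe timer.s(60) { switch_file() }  probe timer.s(1) { printf("%d\n", gettimeofday_s()) }'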
[ "switch_file()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-switch-file
Chapter 5. Locked-down, secure Firefox in a container
Chapter 5. Locked-down, secure Firefox in a container This section explains how to deploy a secure container that runs Firefox. This container gives you an instance of Firefox, containerized, with the following features: Completely unprivileged - needs no extra SELinux tweaking Only the list of cgroups is passed into the container from the host No port redirection because the container is available only to the host No X11 clipboard events or X events shared with your real host No shared sound hardware Everything runs with normal, non-elevated user permissions except for systemd (and systemd runs only to reap the other processes). Known trade-offs are unsynced sound, Flash, and interactivity. Running Firefox Securely in a Container Retrieve the base image that we use to build this container: Load the base image you just downloaded into the local Docker registry: Create a directory to hold the Dockerfile that will map out this container: Retrieve the Dockerfile by using this curl command: Build the container image and tag it as isolated_firefox : Run the container: Retrieve the CONTAINER_ID by using the docker ps command: Retrieve the IP address of the container: Open the container in vncviewer: To hear the audio associated with this container, open a browser and go to the following location: Note Do not forget to include the port ( :8000 ) in the URL. You can also send the address of the container to VLC to play the content. Run the following command to launch the VLC instance:
[ "curl -o Fedora-Docker-Base-22-20150521.x86_64.tar.xz -L https://download.fedoraproject.org/pub/fedora/linux/releases/22/Docker/x86_64/Fedora-Docker-Base-22-20150521.x86_64.tar.xz", "sudo docker load < Fedora-Docker-Base-22-20150521.x86_64.tar.xz", "mkdir -p isolated_firefox", "curl -o isolated_firefox/Dockerfile -L http://pastebin.com/raw.php?i=cgYXQvJu", "sudo docker build -t isolated_firefox isolated_firefox .", "sudo docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro isolated_firefox", "sudo docker ps", "sudo docker inspect CONTAINER_ID| grep IPAddress\\\":", "vncviewer CONTAINER_IP", "http://CONTAINER_IP:8000/firefox.ogg", "vlc http://CONTAINER_IP:8000/firefox.ogg" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/container_security_guide/locked_down_secure_firefox_in_a_container
3.2. Type Conversions
3.2. Type Conversions Data types may be converted from one form to another either explicitly or implicitly. Implicit conversions automatically occur in criteria and expressions to ease development. Explicit data type conversions require the use of the CONVERT function or CAST keyword. Note Array conversions are only valid if you use them to convert or cast to and from compatible object arrays. You cannot, for example, cast from integer[] to long[] . Type Conversion Considerations Any type may be implicitly converted to the OBJECT type. The OBJECT type may be explicitly converted to any other type. The NULL value may be converted to any type. Any valid implicit conversion is also a valid explicit conversion. Situations involving literal values that would normally require explicit conversions may have the explicit conversion applied implicitly if no loss of information occurs. If widenComparisonToString is false (the default), when Red Hat JBoss Data Virtualization detects an explicit conversion that cannot be applied implicitly in criteria, it throws an exception. For example: SELECT * FROM my.table WHERE created_by = 'not a date' Given that created_by is typed as date, rather than converting 'not a date' to a date value, Red Hat JBoss Data Virtualization throws an exception. If widenComparisonToString is true, then depending upon the comparison, either a widening conversion is applied or the criteria are treated as false. In the example above, the criteria would remain a string comparison and therefore be false. Explicit conversions that are not allowed between two types will result in an exception before execution. Allowed explicit conversions may still fail during processing if the runtime values are not actually convertible. Warning The JBoss Data Virtualization conversions of float/double/bigdecimal/timestamp to string rely on the JDBC/Java defined output formats. Pushdown behavior attempts to mimic these results, but may vary depending upon the actual source type and conversion logic. Care must be taken not to assume the string form in criteria or other places where a variation may cause different results. Table 3.2. Type Conversions Source Type Valid Implicit Target Types Valid Explicit Target Types string clob char, boolean, byte, short, integer, long, biginteger, float, double, bigdecimal, xml [a] char string boolean string, byte, short, integer, long, biginteger, float, double, bigdecimal byte string, short, integer, long, biginteger, float, double, bigdecimal boolean short string, integer, long, biginteger, float, double, bigdecimal boolean, byte integer string, long, biginteger, double, bigdecimal boolean, byte, short, float long string, biginteger, bigdecimal boolean, byte, short, integer, float, double biginteger string, bigdecimal boolean, byte, short, integer, long, float, double bigdecimal string boolean, byte, short, integer, long, biginteger, float, double date string, timestamp time string, timestamp timestamp string date, time clob string xml string [b] [a] string to xml is equivalent to XMLPARSE(DOCUMENT exp). [b] xml to string is equivalent to XMLSERIALIZE(exp AS STRING).
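As a sketch of the explicit syntax described above, the following statements show the CAST keyword and the CONVERT function; the column names order_id (numeric) and qty_text (string) are hypothetical placeholders:

SELECT CAST(order_id AS string) AS order_id_text FROM my.table
SELECT CONVERT(qty_text, integer) AS qty FROM my.table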
[ "SELECT * FROM my.table WHERE created_by = 'not a date'", "SELECT * FROM my.table WHERE created_by = 'not a date'" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/type_conversions
probe::sunrpc.clnt.create_client
probe::sunrpc.clnt.create_client Name probe::sunrpc.clnt.create_client - Create an RPC client Synopsis sunrpc.clnt.create_client Values servername the server machine name prot the IP protocol number authflavor the authentication flavor port the port number progname the RPC program name vers the RPC program version number prog the RPC program number
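As an illustrative sketch, the probe and its values can be printed from the command line; the output format is arbitrary, and the probe only fires when an RPC client is actually created (for example, during an NFS mount) while the script runs:

stap -e 'probe sunrpc.clnt.create_client { printf("client for %s: %s v%d, prot %d, port %d\n", servername, progname, vers, prot, port) }'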
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-create-client
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/managing_system_content_and_patch_updates_with_red_hat_insights/proc-providing-feedback-on-redhat-documentation
Chapter 15. Sharing files between the host and its virtual machines
Chapter 15. Sharing files between the host and its virtual machines You may frequently require to share data between your host system and the virtual machines (VMs) it runs. To do so quickly and efficiently, you can set up NFS file shares on your system. 15.1. Sharing files between the host and its virtual machines by using NFS For efficient file sharing between the RHEL 8 host system and the virtual machines (VMs), you can export an NFS share that VMs can mount and access. Prerequisites The nfs-utils package is installed on the host. Virtual network of NAT or bridge type is configured to connect a host to VMs. Optional: For improved security, ensure your VMs are compatible with NFS version 4 or later. Procedure On the host, export a directory with the files you want to share as a network file system (NFS): Share an existing directory with VMs. If you do not want to share any of the existing directories, create a new one: Obtain the IP address of each VM to share files from the host, for example, testguest1 and testguest2 : Edit the /etc/exports file on the host and add a line that includes the directory you want to share, IPs of VMs to share, and additional options: The following example shares the /usr/local/shared-files directory on the host with testguest1 and testguest2 , and enables the VMs to edit the content of the directory: Note To share a directory with a Windows VM, you need to ensure the Windows NFS client has write permissions in the shared directory. You can use the all_squash , anonuid , and anongid options in the /etc/exports file. /usr/local/shared-files/ 192.0.2.2(rw,sync,all_squash,anonuid= <directory-owner-UID> ,anongid= <directory-owner-GID> ) The <directory-owner-UID> and <directory-owner-GID> are the UID and GID of the local user that owns the shared directory on the host. For other options to manage NFS client permissions, follow the Securing the NFS service guide. Export the updated file system: Start the nfs-server service: Obtain the IP address of the host system to mount the shared directory on the VMs: Note that the relevant network connects the host with VMs to share files. Usually, this is virbr0 . Mount the shared directory on a Linux VM that is specified in the /etc/exports file: 192.0.2.1 : The IP address of the host. /usr/local/shared-files : A file-system path to the exported directory on the host. /mnt/host-share : A mount point on the VM Note The mount point must be an empty directory. To mount the shared directory on a Windows VM as mentioned in the /etc/exports file: Open a PowerShell shell prompt as an Administrator. Install the NFS-Client package on the Windows. To install on a server version, enter: To install on a desktop version, enter: Mount the directory exported by the host on a Windows VM: In this example: 192.0.2.1 : The IP address of the host. /usr/local/shared-files : A file system path to the exported directory on the host. Z: : The drive letter for a mount point. Note You must choose a drive letter that is not in use on the system. Verification List the contents of the shared directory on the VM so that you can share files between the host and the VM: In this example, replace <mount_point> with a file system path to the mounted shared directory. Additional resources Deploying an NFS server
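To keep the share available after the VM reboots, you can optionally verify the export and add an /etc/fstab entry on the Linux VM. This sketch assumes the example host IP address 192.0.2.1, export path, and mount point used above:

# On the VM: confirm that the host is exporting the directory
showmount -e 192.0.2.1

# Persist the mount across reboots, then mount everything listed in /etc/fstab
echo '192.0.2.1:/usr/local/shared-files /mnt/host-share nfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a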
[ "yum install nfs-utils -y", "mkdir shared-files", "virsh domifaddr testguest1 Name MAC address Protocol Address ---------------------------------------------------------------- vnet0 52:53:00:84:57:90 ipv4 192.0.2.2/24 virsh domifaddr testguest2 Name MAC address Protocol Address ---------------------------------------------------------------- vnet1 52:53:00:65:29:21 ipv4 192.0.2.3/24", "/home/<username>/Downloads/<shared_directory>/ <VM1-IP(options)> <VM2-IP(options)>", "/usr/local/shared-files/ 192.0.2.2(rw,sync) 192.0.2.3(rw,sync)", "exportfs -a", "systemctl start nfs-server", "ip addr 5: virbr0: [BROADCAST,MULTICAST,UP,LOWER_UP] mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:32:ff:a5 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global virbr0 valid_lft forever preferred_lft forever", "mount 192.0.2.1:/usr/local/shared-files /mnt/host-share", "Install-WindowsFeature NFS-Client", "Enable-WindowsOptionalFeature -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure -Online -NoRestart", "C:\\Windows\\system32\\mount.exe -o anon \\\\192.0.2.1\\usr\\local\\shared-files Z:", "ls <mount_point> shared-file1 shared-file2 shared-file3" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/sharing-files-between-the-host-and-its-virtual-machines_configuring-and-managing-virtualization
Chapter 52. EntityOperatorSpec schema reference
Chapter 52. EntityOperatorSpec schema reference Used in: KafkaSpec Property Property type Description topicOperator EntityTopicOperatorSpec Configuration of the Topic Operator. userOperator EntityUserOperatorSpec Configuration of the User Operator. tlsSidecar TlsSidecar The tlsSidecar property has been deprecated. TLS sidecar was removed in Streams for Apache Kafka 2.8. This property is ignored. TLS sidecar configuration. template EntityOperatorTemplate Template for Entity Operator resources. The template allows users to specify how a Deployment and Pod is generated.
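As a hedged sketch of how these properties appear in practice, the following patch enables both operators with their default settings on a Kafka resource; the cluster name my-cluster and namespace kafka are assumptions, not values defined by the schema:

oc patch kafka my-cluster -n kafka --type merge -p '{"spec":{"entityOperator":{"topicOperator":{},"userOperator":{}}}}'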
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-entityoperatorspec-reference
Chapter 12. Image Storage (glance) Parameters
Chapter 12. Image Storage (glance) Parameters You can modify the glance service with image service parameters. Parameter Description CephClusterName The Ceph cluster name. The default value is ceph . GlanceApiOptVolumes List of optional volumes to be mounted. GlanceBackend The short name of the OpenStack Image Storage (glance) backend to use. Should be one of swift, rbd, cinder, or file. The default value is swift . GlanceBackendID The default backend's identifier. The default value is default_backend . GlanceCacheEnabled Enable OpenStack Image Storage (glance) Image Cache. The default value is False . GlanceCinderMountPointBase The mount point base when glance is using cinder as store and cinder backend is NFS. This mount point is where the NFS volume is mounted on the glance node. The default value is /var/lib/glance/mnt . GlanceDiskFormats List of allowed disk formats in Glance; all formats are allowed when left unset. GlanceEnabledImportMethods List of enabled Image Import Methods. Valid values in the list are glance-direct and web-download . The default value is web-download . GlanceIgnoreUserRoles List of user roles to be ignored for injecting image metadata properties. The default value is admin . GlanceImageCacheDir Base directory that the Image Cache uses. The default value is /var/lib/glance/image-cache . GlanceImageCacheMaxSize The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. The default value is 10737418240 . GlanceImageCacheStallTime The amount of time, in seconds, to let an image remain in the cache without being accessed. The default value is 86400 . GlanceImageConversionOutputFormat Desired output format for image conversion plugin. The default value is raw . GlanceImageImportPlugins List of enabled Image Import Plugins. Valid values in the list are image_conversion , inject_metadata , no_op . The default value is ['no_op'] . GlanceImageMemberQuota Maximum number of image members per image. Negative values evaluate to unlimited. The default value is 128 . GlanceImagePrefetcherInterval The interval in seconds to run periodic job cache_images. The default value is 300 . GlanceInjectMetadataProperties Metadata properties to be injected in image. GlanceLogFile The filepath of the file to use for logging messages from OpenStack Image Storage (glance). GlanceMultistoreConfig Dictionary of settings when configuring additional glance backends. The hash key is the backend ID, and the value is a dictionary of parameter values unique to that backend. Multiple rbd backends are allowed, but cinder, file and swift backends are limited to one each. Example: # Default glance store is rbd. GlanceBackend: rbd GlanceStoreDescription: Default rbd store # GlanceMultistoreConfig specifies a second rbd backend, plus a cinder # backend. GlanceMultistoreConfig: rbd2_store: GlanceBackend: rbd GlanceStoreDescription: Second rbd store CephClusterName: ceph2 # Override CephClientUserName if this cluster uses a different # client name. CephClientUserName: client2 cinder_store: GlanceBackend: cinder GlanceStoreDescription: OpenStack Block Storage (cinder) store . GlanceNetappNfsEnabled When using GlanceBackend: file , Netapp mounts NFS share for image storage. The default value is False . GlanceNfsEnabled When using GlanceBackend: file , mount NFS share for image storage. The default value is False . GlanceNfsOptions NFS mount options for image storage when GlanceNfsEnabled is true. 
The default value is _netdev,bg,intr,context=system_u:object_r:svirt_sandbox_file_t:s0 . GlanceNfsShare NFS share to mount for image storage when GlanceNfsEnabled is true. GlanceNodeStagingUri URI that specifies the staging location to use when importing images. The default value is file:///var/lib/glance/staging . GlanceNotifierStrategy Strategy to use for OpenStack Image Storage (glance) notification queue. The default value is noop . GlancePassword The password for the image storage service and database account. GlanceShowMultipleLocations Whether to show multiple image locations e.g for copy-on-write support on RBD or Netapp backends. Potential security risk, see glance.conf for more information. The default value is False . GlanceSparseUploadEnabled When using GlanceBackend file and rbd to enable or not sparse upload. The default value is False . GlanceStagingNfsOptions NFS mount options for NFS image import staging. The default value is _netdev,bg,intr,context=system_u:object_r:svirt_sandbox_file_t:s0 . GlanceStagingNfsShare NFS share to mount for image import staging. GlanceStoreDescription User facing description for the OpenStack Image Storage (glance) backend. The default value is Default glance store backend. . GlanceWorkers Set the number of workers for the image storage service. Note that more workers creates a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node. MemcacheUseAdvancedPool Use the advanced (eventlet safe) memcached client pool. The default value is True . MultipathdEnable Whether to enable the multipath daemon. The default value is False . NetappShareLocation Netapp share to mount for image storage (when GlanceNetappNfsEnabled is true). NotificationDriver Driver or drivers to handle sending notifications. The default value is noop .
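These parameters are normally supplied through a custom environment file that is passed to the overcloud deployment. The following is a sketch only; the file name and the particular values are examples rather than recommended settings:

cat > glance-settings.yaml <<'EOF'
parameter_defaults:
  GlanceBackend: rbd
  GlanceCacheEnabled: true
  GlanceImageCacheMaxSize: 21474836480
  GlanceWorkers: 4
EOF

# Include the file in the deployment command, for example:
# openstack overcloud deploy --templates ... -e glance-settings.yaml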
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/overcloud_parameters/ref_image-storage-glance-parameters_overcloud_parameters
A.5. The VDSM Hook Environment
A.5. The VDSM Hook Environment Most hook scripts are run as the vdsm user and inherit the environment of the VDSM process. The exceptions are hook scripts triggered by the before_vdsm_start and after_vdsm_stop events. Hook scripts triggered by these events run as the root user and do not inherit the environment of the VDSM process.
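As a purely illustrative sketch, a trivial hook script can record which user a given event runs as; the log file path is an assumption, and the script would be placed in the directory for whichever hook event you want to observe:

#!/bin/bash
# Append the invoking user for each run; expect "vdsm" for most events and
# "root" for before_vdsm_start and after_vdsm_stop hooks.
echo "$(date -Is) $(basename "$0") ran as $(id -un)" >> /tmp/vdsm-hook-user.log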
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/vdsm_hooks_environment
12.3. Customizing Notification Messages
12.3. Customizing Notification Messages The email notifications are constructed using a template for each type of message. This allows messages to be informative, easily reproducible, and easily customizable. The CA uses templates for its notification messages. Separate templates exist for HTML and plain text messages. 12.3.1. Customizing CA Notification Messages Each type of CA notification message has an HTML template and a plain text template associated with it. Messages are constructed from text, tokens, and, for the HTML templates, HTML markup. Tokens are variables, identified by a dollar sign ( USD ), in the message that are replaced by the current value when the message is constructed. See Table 12.3, "Notification Variables" for a list of available tokens. The contents of any message type can be modified by changing the text and tokens in the message template. The appearance of the HTML messages can be changed by modifying the HTML commands in the HTML message template. The default text version of the certificate-issuance-notification message is as follows: This template can be customized as desired, by rearranging, adding, or removing tokens and text, as shown: Notification message templates are located in the /var/lib/pki/ instance_name /ca/emails directory. The name and location of these messages can be changed; make the appropriate changes when configuring the notification. All template names can be changed except for the certificate rejected templates; these names must remain the same. The templates associated with certificate issuance and certificate rejection must be located in the same directory and must use the same extension. Table 12.1, "Notification Templates" lists the default template files provided for creating notification messages. Table 12.2, "Job Notification Email Templates" lists the default template files provided for creating job summary messages. Table 12.1. Notification Templates Filename Description certIssued_CA Template for plain text notification emails to end entities when certificates are issued. certIssued_CA.html Template for HTML-based notification emails to end entities when certificates are issued. certRequestRejected.html Template for HTML-based notification emails to end entities when certificate requests are rejected. certRequestRevoked_CA Template for plain text notification emails to end entities when a certificate is revoked. certRequestRevoked_CA.html Template for HTML-based notification emails to end entities when a certificate is revoked. reqInQueue_CA Template for plain text notification emails to agents when a request enters the queue. reqInQueue_CA.html Template for HTML-based notification emails to agents when a request enters the queue. Table 12.2. Job Notification Email Templates Filename Description rnJob1.txt Template for formulating the message content sent to end entities to inform them that their certificates are about to expire and that the certificates should be renewed or replaced before they expire. rnJob1Summary.txt Template for constructing the summary report to be sent to agents and administrators. Uses the rnJob1Item.txt template to format items in the message. rnJob1Item.txt Template for formatting the items included in the summary report. riq1Item.html Template for formatting the items included in the summary table, which is constructed using the riq1Summary.html template. 
riq1Summary.html Template for formulating the report or table that summarizes how many requests are pending in the agent queue of a Certificate Manager. publishCerts Template for the report or table that summarizes the certificates to be published to the directory. Uses the publishCertsItem.html template to format the items in the table. publishCertsItem.html Template for formatting the items included in the summary table. ExpiredUnpublishJob Template for the report or table that summarizes removal of expired certificates from the directory. Uses the ExpiredUnpublishJobItem template to format the items in the table. ExpiredUnpublishJobItem Template for formatting the items included in the summary table. Table 12.3, "Notification Variables" lists and defines the variables that can be used in the notification message templates. Table 12.3. Notification Variables Token Description USDCertType Specifies the type of certificate; these can be any of the following: TLS client ( client ) TLS server ( server ) CA signing certificate ( ca ) other ( other ). USDExecutionTime Gives the time the job was run. USDHexSerialNumber Gives the serial number of the certificate that was issued in hexadecimal format. USDHttpHost Gives the fully qualified host name of the Certificate Manager to which end entities should connect to retrieve their certificates. USDHttpPort Gives the Certificate Manager's end-entities (non-TLS) port number. USDInstanceID Gives the ID of the subsystem that sent the notification. USDIssuerDN Gives the DN of the CA that issued the certificate. USDNotAfter Gives the end date of the validity period. USDNotBefore Gives the beginning date of the validity period. USDRecipientEmail Gives the email address of the recipient. USDRequestId Gives the request ID. USDRequestorEmail Gives the email address of the requester. USDRequestType Gives the type of request that was made. USDRevocationDate Gives the date the certificate was revoked. USDSenderEmail Gives the email address of the sender; this is the same as the one specified in the Sender's E-mail Address field in the notification configuration. USDSerialNumber Gives the serial number of the certificate that has been issued; the serial number is displayed as a hexadecimal value in the resulting message. USDStatus Gives the request status. USDSubjectDN Gives the DN of the certificate subject. USDSummaryItemList Lists the items in the summary notification. Each item corresponds to a certificate the job detects for renewal or removal from the publishing directory. USDSummaryTotalFailure Gives the total number of items in the summary report that failed. USDSummaryTotalNum Gives the total number of certificate requests that are pending in the queue or the total number of certificates to be renewed or removed from the directory in the summary report. USDSummaryTotalSuccess Shows how many of the total number of items in the summary report succeeded.
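For example, a renewal-notice template such as rnJob1.txt could be customized along the following lines; the wording is illustrative only, and the tokens (written with a literal dollar-sign prefix) correspond to the variables listed in the table above:

THE EXAMPLE COMPANY CERTIFICATE RENEWAL NOTICE

The certificate issued to $SubjectDN
(serial number 0x$HexSerialNumber) expires on $NotAfter.

Please renew or replace it before that date.
Questions? Contact $SenderEmail.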
[ "Your certificate request has been processed successfully. SubjectDN= USDSubjectDN IssuerDN= USDIssuerDN notAfter= USDNotAfter notBefore= USDNotBefore Serial Number= 0xUSDHexSerialNumber To get your certificate, please follow this URL: https://USDHttpHost:USDHttpPort/displayBySerial?op=displayBySerial& serialNumber=USDSerialNumber Please contact your admin if there is any problem. And, of course, this is just a \\USDSAMPLE\\USD email notification form.", "THE EXAMPLE COMPANY CERTIFICATE ISSUANCE CENTER Your certificate has been issued! You can pick up your new certificate at the following website: https://USDHttpHost:USDHttpPort/displayBySerial?op=displayBySerial& serialNumber=USDSerialNumber This certificate has been issued with the following information: Serial Number= 0xUSDHexSerialNumber Name of Certificate Holder = USDSubjectDN Name of Issuer = USDIssuerDN Certificate Expiration Date = USDNotAfter Certificate Validity Date = USDNotBefore Contact IT by calling X1234, or going to the IT website http://IT if you have any problems." ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Customizing_Notification_Messages
function::task_ns_egid
function::task_ns_egid Name function::task_ns_egid - The effective group identifier of the task Synopsis Arguments task task_struct pointer Description This function returns the effective group id of the given task.
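A hedged one-line usage sketch, in which the probe point and output format are arbitrary examples:

stap -e 'probe kprocess.exec { printf("%s egid=%d\n", execname(), task_ns_egid(task_current())) }'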
[ "task_ns_egid:long(task:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-ns-egid
8.240. xorg-x11-drv-nouveau
8.240. xorg-x11-drv-nouveau 8.240.1. RHBA-2013:1664 - xorg-x11-drv-nouveau bug fix update Updated xorg-x11-drv-nouveau packages that fix one bug are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-nouveau packages provide the X.Org X11 nouveau video driver for NVIDIA graphics chipsets. Bug Fix BZ# 876566 Previously, when using a VGA-compatible controller for certain NVIDIA Quadro graphics cards, the rendercheck test suite was not able to perform the complete check due to rendering problems. The xorg-x11-drv-nouveau packages have been fixed, rendering problems no longer occur, and the test suite completes the check as expected. Users of xorg-x11-drv-nouveau are advised to upgrade to these updated packages, which fix this bug.
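On a registered Red Hat Enterprise Linux 6 system, the update can typically be applied with yum; the package name is the only value taken from the advisory:

sudo yum update xorg-x11-drv-nouveau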
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/xorg-x11-drv-nouveau
13.3.2. Let the Installer Prompt You for a Driver Update
13.3.2. Let the Installer Prompt You for a Driver Update Begin the installation normally for whatever method you have chosen. If the installer cannot load drivers for a piece of hardware that is essential for the installation process (for example, if it cannot detect any network or storage controllers), it prompts you to insert a driver update disk: Figure 13.5. The no driver found dialog Select Use a driver disk and refer to Section 13.4, "Specifying the Location of a Driver Update Image File or a Driver Update Disk" .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-Driver_updates-Let_the_installer_prompt_you_for_a_driver_update-ppc
Backup and restore
Backup and restore Red Hat Advanced Cluster Security for Kubernetes 4.5 Backing up and restoring Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
[ "export ROX_API_TOKEN=<api_token>", "export ROX_ENDPOINT=<address>:<port_number>", "roxctl central backup 1", "export ROX_ENDPOINT=<address>:<port_number>", "roxctl -p <admin_password> central backup 1", "oc get central -n _<central-namespace>_ _<central-name>_ -o yaml > central-cr.yaml", "oc get secret -n _<central-namespace>_ central-tls -o json | jq 'del(.metadata.ownerReferences)' > central-tls.json", "oc get secret -n _<central-namespace>_ central-htpasswd -o json | jq 'del(.metadata.ownerReferences)' > central-htpasswd.json", "helm get values --all -n _<central-namespace>_ _<central-helm-release>_ -o yaml > central-values-backup.yaml", "export ROX_API_TOKEN=<api_token>", "export ROX_ENDPOINT=<address>:<port_number>", "roxctl central db restore <backup_file> 1", "export ROX_ENDPOINT=<address>:<port_number>", "roxctl -p <admin_password> \\ 1 central db restore <backup_file> 2", "roxctl central generate interactive", "Enter path to the backup bundle from which to restore keys and certificates (optional): _<backup-file-path>_", "./central-bundle/central/scripts/setup.sh", "oc create -R -f central-bundle/central", "oc get pod -n stackrox -w", "cat central-bundle/password", "oc apply -f central-tls.json", "oc apply -f central-htpasswd.json", "oc apply -f central-cr.yaml", "roxctl central generate k8s pvc --backup-bundle _<path-to-backup-file>_ --output-format \"helm-values\"", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f central-values-backup.yaml -f central-bundle/values-private.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html-single/backup_and_restore/index
Chapter 10. Admission plugins
Chapter 10. Admission plugins Admission plugins are used to help regulate how OpenShift Container Platform functions. 10.1. About admission plugins Admission plugins intercept requests to the master API to validate resource requests. After a request is authenticated and authorized, the admission plugins ensure that any associated policies are followed. For example, they are commonly used to enforce security policy, resource limitations or configuration requirements. Admission plugins run in sequence as an admission chain. If any admission plugin in the sequence rejects a request, the whole chain is aborted and an error is returned. OpenShift Container Platform has a default set of admission plugins enabled for each resource type. These are required for proper functioning of the cluster. Admission plugins ignore resources that they are not responsible for. In addition to the defaults, the admission chain can be extended dynamically through webhook admission plugins that call out to custom webhook servers. There are two types of webhook admission plugins: a mutating admission plugin and a validating admission plugin. The mutating admission plugin runs first and can both modify resources and validate requests. The validating admission plugin validates requests and runs after the mutating admission plugin so that modifications triggered by the mutating admission plugin can also be validated. Calling webhook servers through a mutating admission plugin can produce side effects on resources related to the target object. In such situations, you must take steps to validate that the end result is as expected. Warning Dynamic admission should be used cautiously because it impacts cluster control plane operations. When calling webhook servers through webhook admission plugins in OpenShift Container Platform 4.15, ensure that you have read the documentation fully and tested for side effects of mutations. Include steps to restore resources back to their original state prior to mutation, in the event that a request does not pass through the entire admission chain. 10.2. Default admission plugins Default validating and admission plugins are enabled in OpenShift Container Platform 4.15. These default plugins contribute to fundamental control plane functionality, such as ingress policy, cluster resource limit override and quota policy. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. The following lists contain the default admission plugins: Example 10.1. 
Validating admission plugins LimitRanger ServiceAccount PodNodeSelector Priority PodTolerationRestriction OwnerReferencesPermissionEnforcement PersistentVolumeClaimResize RuntimeClass CertificateApproval CertificateSigning CertificateSubjectRestriction autoscaling.openshift.io/ManagementCPUsOverride authorization.openshift.io/RestrictSubjectBindings scheduling.openshift.io/OriginPodNodeEnvironment network.openshift.io/ExternalIPRanger network.openshift.io/RestrictedEndpointsAdmission image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/SCCExecRestrictions route.openshift.io/IngressAdmission config.openshift.io/ValidateAPIServer config.openshift.io/ValidateAuthentication config.openshift.io/ValidateFeatureGate config.openshift.io/ValidateConsole operator.openshift.io/ValidateDNS config.openshift.io/ValidateImage config.openshift.io/ValidateOAuth config.openshift.io/ValidateProject config.openshift.io/DenyDeleteClusterConfiguration config.openshift.io/ValidateScheduler quota.openshift.io/ValidateClusterResourceQuota security.openshift.io/ValidateSecurityContextConstraints authorization.openshift.io/ValidateRoleBindingRestriction config.openshift.io/ValidateNetwork operator.openshift.io/ValidateKubeControllerManager ValidatingAdmissionWebhook ResourceQuota quota.openshift.io/ClusterResourceQuota Example 10.2. Mutating admission plugins NamespaceLifecycle LimitRanger ServiceAccount NodeRestriction TaintNodesByCondition PodNodeSelector Priority DefaultTolerationSeconds PodTolerationRestriction DefaultStorageClass StorageObjectInUseProtection RuntimeClass DefaultIngressClass autoscaling.openshift.io/ManagementCPUsOverride scheduling.openshift.io/OriginPodNodeEnvironment image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/DefaultSecurityContextConstraints MutatingAdmissionWebhook 10.3. Webhook admission plugins In addition to OpenShift Container Platform default admission plugins, dynamic admission can be implemented through webhook admission plugins that call webhook servers, to extend the functionality of the admission chain. Webhook servers are called over HTTP at defined endpoints. There are two types of webhook admission plugins in OpenShift Container Platform: During the admission process, the mutating admission plugin can perform tasks, such as injecting affinity labels. At the end of the admission process, the validating admission plugin can be used to make sure an object is configured properly, for example ensuring affinity labels are as expected. If the validation passes, OpenShift Container Platform schedules the object as configured. When an API request comes in, mutating or validating admission plugins use the list of external webhooks in the configuration and call them in parallel: If all of the webhooks approve the request, the admission chain continues. If any of the webhooks deny the request, the admission request is denied and the reason for doing so is based on the first denial. If more than one webhook denies the admission request, only the first denial reason is returned to the user. If an error is encountered when calling a webhook, the request is either denied or the webhook is ignored depending on the error policy set. If the error policy is set to Ignore , the request is unconditionally accepted in the event of a failure. If the policy is set to Fail , failed requests are denied. Using Ignore can result in unpredictable behavior for all clients. 
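As a hedged sketch, a cluster administrator can review which webhook configurations are registered and which error policy each one uses; the custom-columns expression is just one way to surface the failurePolicy field:

oc get validatingwebhookconfigurations,mutatingwebhookconfigurations
oc get validatingwebhookconfigurations -o custom-columns=NAME:.metadata.name,FAILURE_POLICY:.webhooks[*].failurePolicy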
Communication between the webhook admission plugin and the webhook server must use TLS. Generate a CA certificate and use the certificate to sign the server certificate that is used by your webhook admission server. The PEM-encoded CA certificate is supplied to the webhook admission plugin using a mechanism, such as service serving certificate secrets. The following diagram illustrates the sequential admission chain process within which multiple webhook servers are called. Figure 10.1. API admission chain with mutating and validating admission plugins An example webhook admission plugin use case is where all pods must have a common set of labels. In this example, the mutating admission plugin can inject labels and the validating admission plugin can check that labels are as expected. OpenShift Container Platform would subsequently schedule pods that include required labels and reject those that do not. Some common webhook admission plugin use cases include: Namespace reservation. Limiting custom network resources managed by the SR-IOV network device plugin. Defining tolerations that enable taints to qualify which pods should be scheduled on a node. Pod priority class validation. Note The maximum default webhook timeout value in OpenShift Container Platform is 13 seconds, and it cannot be changed. 10.4. Types of webhook admission plugins Cluster administrators can call out to webhook servers through the mutating admission plugin or the validating admission plugin in the API server admission chain. 10.4.1. Mutating admission plugin The mutating admission plugin is invoked during the mutation phase of the admission process, which allows modification of resource content before it is persisted. One example webhook that can be called through the mutating admission plugin is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification. Sample mutating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None 1 Specifies a mutating admission plugin configuration. 2 The name for the MutatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. 
Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. Important In OpenShift Container Platform 4.15, objects created by users or control loops through a mutating admission plugin might return unexpected results, especially if values set in an initial request are overwritten, which is not recommended. 10.4.2. Validating admission plugin A validating admission plugin is invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a webhook which is called by the validating admission plugin, to ensure that all nodeSelector fields are constrained by the node selector restrictions on the namespace. Sample validating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown 1 Specifies a validating admission plugin configuration. 2 The name for the ValidatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. 10.5. Configuring dynamic admission This procedure outlines high-level steps to configure dynamic admission. The functionality of the admission chain is extended by configuring a webhook admission plugin to call out to a webhook server. The webhook server is also configured as an aggregated API server. This allows other OpenShift Container Platform components to communicate with the webhook using internal credentials and facilitates testing using the oc command. Additionally, this enables role based access control (RBAC) into the webhook and prevents token information from other API servers from being disclosed to the webhook. Prerequisites An OpenShift Container Platform account with cluster administrator access. The OpenShift Container Platform CLI ( oc ) installed. A published webhook server container image. 
Procedure Build a webhook server container image and make it available to the cluster using an image registry. Create a local CA key and certificate and use them to sign the webhook server's certificate signing request (CSR). Create a new project for webhook resources: USD oc new-project my-webhook-namespace 1 1 Note that the webhook server might expect a specific name. Define RBAC rules for the aggregated API service in a file called rbac.yaml : apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - "" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server 1 Delegates authentication and authorization to the webhook server API. 2 Allows the webhook server to access cluster resources. 3 Points to resources. This example points to the namespacereservations resource. 4 Enables the aggregated API server to create admission reviews. 5 Points to resources. This example points to the namespacereservations resource. 6 Enables the webhook server to access cluster resources. 7 Role binding to read the configuration for terminating authentication. 8 Default cluster role and cluster role bindings for an aggregated API server. 
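Step 2 of this procedure refers to a local CA key and certificate and a signed webhook server certificate, but it does not prescribe a tool for generating them. The following openssl commands are one possible sketch; the file names, subjects, and validity period are assumptions, and the DNS name must match the server service created later in the my-webhook-namespace namespace:

# Illustrative only: create a local CA, then sign the webhook server's CSR with it.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/CN=my-webhook-ca"
openssl req -newkey rsa:4096 -nodes \
    -keyout tls.key -out server.csr -subj "/CN=server.my-webhook-namespace.svc"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out tls.crt \
    -extfile <(printf "subjectAltName=DNS:server.my-webhook-namespace.svc")
# Base64-encode the results for the YAML examples in this procedure:
base64 -w0 ca.crt   # value for <ca_signing_certificate>
base64 -w0 tls.crt  # value for <server_certificate>
base64 -w0 tls.key  # value for <server_key>

Note that the service definition later in this procedure also carries the service.beta.openshift.io/serving-cert-secret-name annotation, which can have the service CA populate the same secret automatically; if you rely on that mechanism instead, the caBundle values must come from the service CA rather than from the local CA sketched here.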
Apply those RBAC rules to the cluster: USD oc auth reconcile -f rbac.yaml Create a YAML file called webhook-daemonset.yaml that is used to deploy a webhook as a daemon set server in a namespace: apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: "true" spec: selector: matchLabels: server: "true" template: metadata: name: server labels: server: "true" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert 1 Note that the webhook server might expect a specific container name. 2 Points to a webhook server container image. Replace <image_registry_username>/<image_path>:<tag> with the appropriate value. 3 Specifies webhook container run commands. Replace <container_commands> with the appropriate value. 4 Defines the target port within pods. This example uses port 8443. 5 Specifies the port used by the readiness probe. This example uses port 8443. Deploy the daemon set: USD oc apply -f webhook-daemonset.yaml Define a secret for the service serving certificate signer, within a YAML file called webhook-secret.yaml : apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2 1 References the signed webhook server certificate. Replace <server_certificate> with the appropriate certificate in base64 format. 2 References the signed webhook server key. Replace <server_key> with the appropriate key in base64 format. Create the secret: USD oc apply -f webhook-secret.yaml Define a service account and service, within a YAML file called webhook-service.yaml : apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: "true" ports: - port: 443 1 targetPort: 8443 2 1 Defines the port that the service listens on. This example uses port 443. 2 Defines the target port within pods that the service forwards connections to. This example uses port 8443. Expose the webhook server within the cluster: USD oc apply -f webhook-service.yaml Define a custom resource definition for the webhook server, in a file called webhook-crd.yaml : apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: namespacereservation 6 kind: NamespaceReservation 7 1 Reflects CustomResourceDefinition spec values and is in the format <plural>.<group> . This example uses the namespacereservations resource. 2 REST API group name. 3 REST API version name. 4 Accepted values are Namespaced or Cluster . 5 Plural name to be included in URL. 6 Alias seen in oc output. 7 The reference for resource manifests. 
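The custom resource definition above only registers the NamespaceReservation type; reservation objects themselves are created after the remaining steps are complete. For orientation only, an instance might look like the following sketch, in which the object name and every spec field are hypothetical and depend entirely on what your webhook server implements:

apiVersion: online.openshift.io/v1alpha1
kind: NamespaceReservation
metadata:
  name: reserved-namespace      # hypothetical reserved namespace name
spec:                           # hypothetical fields; interpreted only by the webhook server
  reservedFor: cluster-monitoring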
Apply the custom resource definition: USD oc apply -f webhook-crd.yaml Configure the webhook server also as an aggregated API server, within a file called webhook-api-service.yaml : apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1 1 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. Deploy the aggregated API service: USD oc apply -f webhook-api-service.yaml Define the webhook admission plugin configuration within a file called webhook-config.yaml . This example uses the validating admission plugin: apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - "*" resources: - projectrequests - operations: - CREATE apiGroups: - "" apiVersions: - "*" resources: - namespaces failurePolicy: Fail 1 Name for the ValidatingWebhookConfiguration object. This example uses the namespacereservations resource. 2 Name of the webhook to call. This example uses the namespacereservations resource. 3 Enables access to the webhook server through the aggregated API. 4 The webhook URL used for admission requests. This example uses the namespacereservation resource. 5 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. Deploy the webhook: USD oc apply -f webhook-config.yaml Verify that the webhook is functioning as expected. For example, if you have configured dynamic admission to reserve specific namespaces, confirm that requests to create those namespaces are rejected and that requests to create non-reserved namespaces succeed. 10.6. Additional resources Configuring the SR-IOV Network Operator Controlling pod placement using node taints Pod priority names
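For the namespace reservation example configured in this procedure, the verification step can be exercised with commands along the following lines; the reserved and non-reserved project names are illustrative assumptions:

# Confirm that the webhook configuration and the aggregated API service were registered:
oc get validatingwebhookconfiguration namespacereservations.admission.online.openshift.io
oc get apiservice v1beta1.admission.online.openshift.io
# Assuming the webhook reserves the name "reserved-namespace", the first request
# should be rejected and the second should succeed:
oc new-project reserved-namespace
oc new-project my-unreserved-project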
[ "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown", "oc new-project my-webhook-namespace 1", "apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server", "oc auth reconcile -f rbac.yaml", "apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: \"true\" spec: selector: matchLabels: server: \"true\" template: metadata: name: server labels: server: \"true\" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: 
HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert", "oc apply -f webhook-daemonset.yaml", "apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2", "oc apply -f webhook-secret.yaml", "apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: \"true\" ports: - port: 443 1 targetPort: 8443 2", "oc apply -f webhook-service.yaml", "apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: namespacereservation 6 kind: NamespaceReservation 7", "oc apply -f webhook-crd.yaml", "apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1", "oc apply -f webhook-api-service.yaml", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - \"*\" resources: - projectrequests - operations: - CREATE apiGroups: - \"\" apiVersions: - \"*\" resources: - namespaces failurePolicy: Fail", "oc apply -f webhook-config.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/architecture/admission-plug-ins
B.80. rdesktop
B.80. rdesktop B.80.1. RHSA-2011:0506 - Moderate: rdesktop security update An updated rdesktop package that fixes one security issue is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. rdesktop is a client for the Remote Desktop Server (previously, Terminal Server) in Microsoft Windows. It uses the Remote Desktop Protocol (RDP) to remotely present a user's desktop. CVE-2011-1595 A directory traversal flaw was found in the way rdesktop shared a local path with a remote server. If a user connects to a malicious server with rdesktop, the server could use this flaw to cause rdesktop to read and write to arbitrary, local files accessible to the user running rdesktop. Red Hat would like to thank Cendio AB for reporting this issue. Cendio AB acknowledges an anonymous contributor working with the SecuriTeam Secure Disclosure program as the original reporter. Users of rdesktop should upgrade to this updated package, which contains a backported patch to resolve this issue.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rdesktop