Chapter 1. Installing OpenShift Service Mesh Installing OpenShift Service Mesh consists of three main tasks: installing the OpenShift Operator, deploying Istio, and customizing the Istio configuration. Then, you can also choose to install the sample bookinfo application to push data through the mesh and explore mesh functionality. 1.1. About deploying Istio using the Red Hat OpenShift Service Mesh Operator To deploy Istio using the Red Hat OpenShift Service Mesh Operator, you must create an Istio resource. Then, the Operator creates an IstioRevision resource, which represents one revision of the Istio control plane. Based on the IstioRevision resource, the Operator deploys the Istio control plane, which includes the istiod Deployment resource and other resources. The Red Hat OpenShift Service Mesh Operator may create additional instances of the IstioRevision resource, depending on the update strategy defined in the Istio resource. 1.1.1. About update strategies The update strategy affects how the update process is performed. For each mesh, you select one of two strategies: InPlace RevisionBased The default strategy is the InPlace strategy. For more information, see the following documentation located in "Updating OpenShift Service Mesh": "About InPlace strategy" "About RevisionBased strategy" 1.2. Installing the Service Mesh Operator Prerequisites You have deployed a cluster on OpenShift Container Platform 4.14 or later. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure In the OpenShift Container Platform web console, navigate to the Operators OperatorHub page. Search for the Red Hat OpenShift Service Mesh 3 Operator. Locate the Service Mesh Operator, and click to select it. When the prompt that discusses the community operator appears, click Continue . Verify the Service Mesh Operator is version 3.0, and click Install . Use the default installation settings presented, and click Install to continue. Click Operators Installed Operators to verify that the Service Mesh Operator is installed. Succeeded should appear in the Status column. 1.2.1. About Service Mesh custom resource definitions Installing the Red Hat OpenShift Service Mesh Operator also installs custom resource definitions (CRD) that administrators can use to configure Istio for Service Mesh installations. The Operator Lifecycle Manager (OLM) installs two categories of CRDs: Sail Operator CRDs and Istio CRDs. Sail Operator CRDs define custom resources for installing and maintaining the Istio components required to operate a service mesh. These custom resources belong to the sailoperator.io API group and include the Istio , IstioRevision , IstioCNI , and ZTunnel resource kinds. For more information on how to configure these resources, see the sailoperator.io API reference documentation. Istio CRDs are associated with mesh configuration and service management. These CRDs define custom resources in several istio.io API groups, such as networking.istio.io and security.istio.io . The CRDs also include various resource kinds, such as AuthorizationPolicy , DestinationRule , and VirtualService , that administrators use to configure a service mesh. 1.3. About Istio deployment To deploy Istio, you must create two resources: Istio and IstioCNI . The Istio resource deploys and configures the Istio Control Plane. The IstioCNI resource deploys and configures the Istio Container Network Interface (CNI) plugin. 
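For orientation, a minimal sketch of these two resources might look like the following. The sailoperator.io/v1alpha1 API version, the default resource names, and the istio-system and istio-cni target projects mirror the examples later in this chapter; they are illustrative, not the only valid values.

# Sketch: minimal Istio control plane resource
kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: default
spec:
  namespace: istio-system
---
# Sketch: minimal IstioCNI resource deployed to its own project
kind: IstioCNI
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: default
spec:
  namespace: istio-cni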
You should create these resources in separate projects; therefore, you must create two projects as part of the Istio deployment process. You can use the OpenShift web console or the OpenShift CLI (oc) to create a project or a resource in your cluster. Note In OpenShift Container Platform, a project is essentially a Kubernetes namespace with additional annotations, such as the range of user IDs that can be used in the project. Typically, the OpenShift Container Platform web console uses the term project, and the CLI uses the term namespace, but the terms are essentially synonymous. 1.3.1. Creating the Istio project using the web console The Service Mesh Operator deploys the Istio control plane to a project that you create. In this example, istio-system is the name of the project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. You are logged in to the OpenShift Container Platform web console as cluster-admin. Procedure In the OpenShift Container Platform web console, click Home Projects . Click Create Project . At the prompt, enter a name for the project in the Name field. For example, istio-system . The other fields provide supplementary information to the Istio resource definition and are optional. Click Create . The Service Mesh Operator deploys Istio to the project you specified. 1.3.2. Creating the Istio resource using the web console Create the Istio resource that will contain the YAML configuration file for your Istio deployment. The Red Hat OpenShift Service Mesh Operator uses information in the YAML file to create an instance of the Istio control plane. Prerequisites The Service Mesh Operator must be installed. You are logged in to the OpenShift Container Platform web console as cluster-admin. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select istio-system in the Project drop-down menu. Click the Service Mesh Operator. Click Istio . Click Create Istio . Select the istio-system project from the Namespace drop-down menu. Click Create . This action deploys the Istio control plane. When State: Healthy appears in the Status column, Istio is successfully deployed. 1.3.3. Creating the IstioCNI project using the web console The Service Mesh Operator deploys the Istio CNI plugin to a project that you create. In this example, istio-cni is the name of the project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. You are logged in to the OpenShift Container Platform web console as cluster-admin. Procedure In the OpenShift Container Platform web console, click Home Projects . Click Create Project . At the prompt, enter a name for the project in the Name field. For example, istio-cni . The other fields provide supplementary information and are optional. Click Create . 1.3.4. Creating the IstioCNI resource using the web console Create an Istio Container Network Interface (CNI) resource, which contains the configuration file for the Istio CNI plugin. The Service Mesh Operator uses the configuration specified by this resource to deploy the CNI pod. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. You are logged in to the OpenShift Container Platform web console as cluster-admin. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select istio-cni in the Project drop-down menu. Click the Service Mesh Operator. Click IstioCNI . Click Create IstioCNI . Ensure that the name is default . Click Create .
This action deploys the Istio CNI plugin. When State: Healthy appears in the Status column, the Istio CNI plugin is successfully deployed. 1.4. Scoping the Service Mesh with discovery selectors Service Mesh includes workloads that meet the following criteria: The control plane has discovered the workload. The workload has an Envoy proxy sidecar injected. By default, the control plane discovers workloads in all namespaces across the cluster, with the following results: Each proxy instance receives configuration for all namespaces, including workloads not enrolled in the mesh. Any workload with the appropriate pod or namespace injection label receives a proxy sidecar. In shared clusters, you might want to limit the scope of Service Mesh to only certain namespaces. This approach is especially useful if multiple service meshes run in the same cluster. 1.4.1. About discovery selectors With discovery selectors, the mesh administrator can control which namespaces the control plane can access. By using a Kubernetes label selector, the administrator sets the criteria for the namespaces visible to the control plane, excluding any namespaces that do not match the specified criteria. Note Istiod always opens a watch to OpenShift for all namespaces. However, discovery selectors ignore objects that are not selected very early in its processing, minimizing costs. The discoverySelectors field accepts an array of Kubernetes selectors, which apply to labels on namespaces. You can configure each selector for different use cases: Custom label names and values. For example, configure all namespaces with the label istio-discovery=enabled . A list of namespace labels by using set-based selectors with OR logic. For instance, configure namespaces with istio-discovery=enabled OR region=us-east1 . Inclusion and exclusion of namespaces. For example, configure namespaces with istio-discovery=enabled AND the label app=helloworld . Note Discovery selectors are not a security boundary. Istiod continues to have access to all namespaces even when you have configured the discoverySelector field. Additional resources Label selectors Resources that support set-based requirements 1.4.2. Scoping a Service Mesh by using discovery selectors If you know which namespaces to include in the Service Mesh, configure discoverySelectors during or after installation by adding the required selectors to the meshConfig.discoverySelectors section of the Istio resource. For example, configure Istio to discover only namespaces labeled istio-discovery=enabled . Prerequisites The OpenShift Service Mesh operator is installed. An Istio CNI resource is created. Procedure Add a label to the namespace containing the Istio control plane, for example, the istio-system system namespace. USD oc label namespace istio-system istio-discovery=enabled Modify the Istio control plane resource to include a discoverySelectors section with the same label. kind: Istio apiVersion: sailoperator.io/v1alpha1 metadata: name: default spec: namespace: istio-system values: meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled Apply the Istio CR: USD oc apply -f istio.yaml Ensure that all namespaces that will contain workloads that are to be part of the Service Mesh have both the discoverySelector label and, if needed, the appropriate Istio injection label. Note Discovery selectors help restrict the scope of a single Service Mesh and are essential for limiting the control plane scope when you deploy multiple Istio control planes in a single cluster. 
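Instead of labeling namespaces with oc, you can declare the same labels in the namespace manifest itself. The following is a minimal sketch; the workload namespace name my-workloads is hypothetical, and the labels match the discoverySelectors configuration and default injection label described above.

apiVersion: v1
kind: Namespace
metadata:
  name: my-workloads            # hypothetical workload namespace
  labels:
    istio-discovery: enabled    # matches the discoverySelectors configuration above
    istio-injection: enabled    # enables sidecar injection for the default Istio resource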
steps Deploying the Bookinfo application 1.5. About the Bookinfo application Installing the bookinfo example application consists of two main tasks: deploying the application and creating a gateway so the application is accessible outside the cluster. You can use the bookinfo application to explore service mesh features. Using the bookinfo application, you can easily confirm that requests from a web browser pass through the mesh and reach the application. The bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, lists book details (ISBN, number of pages, and other information), and book reviews. The bookinfo application is exposed through the mesh, and the mesh configuration determines how the microservices comprising the application are used to serve requests. The review information comes from one of three services: reviews-v1 , reviews-v2 or reviews-v3 . If you deploy the bookinfo application without defining the reviews virtual service, then the mesh uses a round robin rule to route requests to a service. By deploying the reviews virtual service, you can specify a different behavior. For example, you can specify that if a user logs into the bookinfo application, then the mesh routes requests to the reviews-v2 service, and the application displays reviews with black stars. If a user does not log into the bookinfo application, then the mesh routes requests to the reviews-v3 service, and the application displays reviews with red stars. For more information, see Bookinfo Application in the upstream Istio documentation. 1.5.1. Deploying the Bookinfo application Prerequisites You have deployed a cluster on OpenShift Container Platform 4.15 or later. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. You have access to the OpenShift CLI (oc). You have installed the Red Hat OpenShift Service Mesh Operator, created the Istio resource, and the Operator has deployed Istio. You have created IstioCNI resource, and the Operator has deployed the necessary IstioCNI pods. Procedure In the OpenShift Container Platform web console, navigate to the Home Projects page. Click Create Project . Enter bookinfo in the Project name field. The Display name and Description fields provide supplementary information and are not required. Click Create . Apply the Istio discovery selector and injection label to the bookinfo namespace by entering the following command: USD oc label namespace bookinfo istio-discovery=enabled istio-injection=enabled Note In this example, the name of the Istio resource is default . If the Istio resource name is different, you must set the istio.io/rev label to the name of the Istio resource instead of adding the istio-injection=enabled label. 
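For illustration, if the Istio resource were named my-mesh (a hypothetical name) rather than default, the bookinfo namespace labels would look like the following sketch, with istio.io/rev replacing istio-injection=enabled.

apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    istio-discovery: enabled   # matches the discoverySelectors configuration
    istio.io/rev: my-mesh      # hypothetical Istio resource name; replaces istio-injection=enabled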
Apply the bookinfo YAML file to deploy the bookinfo application by entering the following command: oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo Verification Verify that the bookinfo service is available by running the following command: USD oc get services -n bookinfo Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE details ClusterIP 172.30.137.21 <none> 9080/TCP 44s productpage ClusterIP 172.30.2.246 <none> 9080/TCP 43s ratings ClusterIP 172.30.33.85 <none> 9080/TCP 44s reviews ClusterIP 172.30.175.88 <none> 9080/TCP 44s Verify that the bookinfo pods are available by running the following command: USD oc get pods -n bookinfo Example output NAME READY STATUS RESTARTS AGE details-v1-698d88b-km2jg 2/2 Running 0 66s productpage-v1-675fc69cf-cvxv9 2/2 Running 0 65s ratings-v1-6484c4d9bb-tpx7d 2/2 Running 0 65s reviews-v1-5b5d6494f4-wsrwp 2/2 Running 0 65s reviews-v2-5b667bcbf8-4lsfd 2/2 Running 0 65s reviews-v3-5b9bd44f4-44hr6 2/2 Running 0 65s When the Ready columns displays 2/2 , the proxy sidecar was successfully injected. Confirm that Running appears in the Status column for each pod. Verify that the bookinfo application is running by sending a request to the bookinfo page. Run the following command: USD oc exec "USD(oc get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')" -c ratings -n bookinfo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>" 1.5.2. About accessing the Bookinfo application using a gateway The Red Hat OpenShift Service Mesh Operator does not deploy gateways. Gateways are not part of the control plane. As a security best-practice, Ingress and Egress gateways should be deployed in a different namespace than the namespace that contains the control plane. You can deploy gateways using either the Gateway API or the gateway injection method. 1.5.3. Accessing the Bookinfo application by using Istio gateway injection Gateway injection uses the same mechanisms as Istio sidecar injection to create a gateway from a Deployment resource that is paired with a Service resource. The Service resource can be made accessible from outside an OpenShift Container Platform cluster. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . The Red Hat OpenShift Service Mesh Operator must be installed. The Istio resource must be deployed. Procedure Create the istio-ingressgateway deployment and service by running the following command: USD oc apply -n bookinfo -f ingress-gateway.yaml Note This example uses a sample ingress-gateway.yaml file that is available in the Istio community repository. Configure the bookinfo application to use the new gateway. Apply the gateway configuration by running the following command: USD oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo Note To configure gateway injection with the bookinfo application, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed. Use a route to expose the gateway external to the cluster by running the following command: USD oc expose service istio-ingressgateway -n bookinfo Modify the YAML file to automatically scale the pod when ingress traffic increases. 
Example configuration apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: bookinfo spec: maxReplicas: 5 1 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway 1 This example sets the maximum replicas to 5 and the minimum replicas to 2 . It also adds replicas when average CPU utilization exceeds 80%. Specify the minimum number of pods that must remain available. Example configuration apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: bookinfo spec: minAvailable: 1 1 selector: matchLabels: istio: ingressgateway 1 This example ensures that at least one replica remains available if a pod is restarted on a new node. Obtain the gateway host name and the URL for the product page by running the following command: $ HOST=$(oc get route istio-ingressgateway -n bookinfo -o jsonpath='{.spec.host}') Verify that the productpage is accessible from a web browser by running the following command: $ echo "productpage URL: http://$HOST/productpage" 1.5.4. Accessing the Bookinfo application by using Gateway API The Kubernetes Gateway API deploys a gateway by creating a Gateway resource. In OpenShift Container Platform 4.15 and later versions, if you want your cluster to use the Gateway API, you must enable the Gateway API CRDs because they are disabled by default. Note Red Hat provides support for using the Kubernetes Gateway API with Red Hat OpenShift Service Mesh. Red Hat does not provide support for the Kubernetes Gateway API custom resource definitions (CRDs). In this procedure, the use of community Gateway API CRDs is shown for demonstration purposes only. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . The Red Hat OpenShift Service Mesh Operator must be installed. The Istio resource must be deployed. Procedure Enable the Gateway API CRDs: $ oc get crd gateways.gateway.networking.k8s.io &> /dev/null || { oc kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0" | oc apply -f -; } Create and configure a gateway using a Gateway resource and HTTPRoute resource: $ oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo Note To configure a gateway with the bookinfo application by using the Gateway API, this example uses a sample gateway configuration file that must be applied in the namespace where the application is installed. Ensure that the Gateway API service is ready, and has an address allocated: $ oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo Retrieve the host, port and gateway URL: $ export INGRESS_HOST=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}') $ export INGRESS_PORT=$(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}') $ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT Obtain the gateway host name and the URL of the product page: $ echo "http://${GATEWAY_URL}/productpage" Verify that the productpage is accessible from a web browser.
1.6. Customizing Istio configuration Use the values field of the Istio custom resource, which was created when the control plane was deployed, to customize the Istio configuration with Istio's Helm configuration values. When you create this resource by using the OpenShift Container Platform web console, it is pre-populated with configuration settings that enable Istio to run on OpenShift. Procedure Click Operators Installed Operators . Click Istio in the Provided APIs column. Click the Istio instance, named default , in the Name column. Click YAML to view the Istio configuration and make modifications. For a list of the available configuration options for the values field, refer to Istio's artifacthub chart documentation: Base parameters Istiod parameters Gateway parameters CNI parameters ZTunnel parameters Additional resources Service Mesh 3.0 Operator community documentation
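As an illustration of the values field described in section 1.6, the following sketch adds two Helm-style overrides to the default Istio resource. The specific keys shown (istiod resource requests and the global proxy log level) are example settings drawn from Istio's Helm charts, not required values.

kind: Istio
apiVersion: sailoperator.io/v1alpha1
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    pilot:
      resources:
        requests:
          cpu: 500m        # example istiod resource override
          memory: 1Gi
    global:
      proxy:
        logLevel: warning  # example proxy log level override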
[
"oc label namespace istio-system istio-discovery=enabled",
"kind: Istio apiVersion: sailoperator.io/v1alpha1 metadata: name: default spec: namespace: istio-system values: meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled",
"oc apply -f istio.yaml",
"oc label namespace bookinfo istio-discovery=enabled istio-injection=enabled",
"apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo",
"oc get services -n bookinfo",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE details ClusterIP 172.30.137.21 <none> 9080/TCP 44s productpage ClusterIP 172.30.2.246 <none> 9080/TCP 43s ratings ClusterIP 172.30.33.85 <none> 9080/TCP 44s reviews ClusterIP 172.30.175.88 <none> 9080/TCP 44s",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-698d88b-km2jg 2/2 Running 0 66s productpage-v1-675fc69cf-cvxv9 2/2 Running 0 65s ratings-v1-6484c4d9bb-tpx7d 2/2 Running 0 65s reviews-v1-5b5d6494f4-wsrwp 2/2 Running 0 65s reviews-v2-5b667bcbf8-4lsfd 2/2 Running 0 65s reviews-v3-5b9bd44f4-44hr6 2/2 Running 0 65s",
"oc exec \"USD(oc get pod -l app=ratings -n bookinfo -o jsonpath='{.items[0].metadata.name}')\" -c ratings -n bookinfo -- curl -sS productpage:9080/productpage | grep -o \"<title>.*</title>\"",
"oc apply -n bookinfo -f ingress-gateway.yaml",
"oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo",
"oc expose service istio-ingressgateway -n bookinfo",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: bookinfo spec: maxReplicas: 5 1 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: bookinfo spec: minAvailable: 1 1 selector: matchLabels: istio: ingressgateway",
"HOST=USD(oc get route istio-ingressgateway -n bookinfo -o jsonpath='{.spec.host}')",
"echo productpage URL: http://USDHOST/productpage",
"oc get crd gateways.gateway.networking.k8s.io &> /dev/null || { oc kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0\" | oc apply -f -; }",
"oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/gateway-api/bookinfo-gateway.yaml -n bookinfo",
"oc wait --for=condition=programmed gtw bookinfo-gateway -n bookinfo",
"export INGRESS_HOST=USD(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.status.addresses[0].value}') export INGRESS_PORT=USD(oc get gtw bookinfo-gateway -n bookinfo -o jsonpath='{.spec.listeners[?(@.name==\"http\")].port}') export GATEWAY_URL=USDINGRESS_HOST:USDINGRESS_PORT",
"echo \"http://USD{GATEWAY_URL}/productpage\""
]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/installing/ossm-installing-service-mesh
Chapter 15. Securing access to Kafka Secure your Kafka cluster by managing the access a client has to Kafka brokers. A secure connection between Kafka brokers and clients can encompass the following: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users In Streams for Apache Kafka, securing a connection involves configuring listeners and user accounts: Listener configuration Use the Kafka resource to configure listeners for client connections to Kafka brokers. Listeners define how clients authenticate, such as using mTLS, SCRAM-SHA-512, OAuth 2.0, or custom authentication methods. To enhance security, configure TLS encryption to secure communication between Kafka brokers and clients. You can further secure TLS-based communication by specifying the supported TLS versions and cipher suites in the Kafka broker configuration. For an added layer of protection, use the Kafka resource to specify authorization methods for the Kafka cluster, such as simple, OAuth 2.0, OPA, or custom authorization. User accounts Set up user accounts and credentials with KafkaUser resources in Streams for Apache Kafka. Users represent your clients and determine how they should authenticate and authorize with the Kafka cluster. The authentication and authorization mechanisms specified in the user configuration must match the Kafka configuration. Additionally, define Access Control Lists (ACLs) to control user access to specific topics and actions for more fine-grained authorization. To further enhance security, specify user quotas to limit client access to Kafka brokers based on byte rates or CPU utilization. You can also add producer or consumer configuration to your clients if you wish to limit the TLS versions and cipher suites they use. The configuration on the clients must only use protocols and cipher suites that are enabled on the broker. Note If you are using an OAuth 2.0 to manage client access, user authentication and authorization credentials are managed through the authorization server. Streams for Apache Kafka operators automate the configuration process and create the certificates required for authentication. The Cluster Operator automatically sets up TLS certificates for data encryption and authentication within your cluster. 15.1. Security options for Kafka Use the Kafka resource to configure the mechanisms used for Kafka authentication and authorization. 15.1.1. Listener authentication Configure client authentication for Kafka brokers when creating listeners. Specify the listener authentication type using the Kafka.spec.kafka.listeners.authentication property in the Kafka resource. For clients inside the OpenShift cluster, you can create plain (without encryption) or tls internal listeners. The internal listener type use a headless service and the DNS names given to the broker pods. As an alternative to the headless service, you can also create a cluster-ip type of internal listener to expose Kafka using per-broker ClusterIP services. For clients outside the OpenShift cluster, you create external listeners and specify a connection mechanism, which can be nodeport , loadbalancer , ingress (Kubernetes only), or route (OpenShift only). For more information on the configuration options for connecting an external client, see Chapter 14, Setting up client access to a Kafka cluster . 
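For illustration, the cluster-ip and route listener types mentioned above are declared in the same listeners array as any other listener. The following is a minimal sketch; the listener names, ports, and authentication types are illustrative, not required values.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: clusterip              # internal listener exposed through per-broker ClusterIP services
        port: 9095
        type: cluster-ip
        tls: true
        authentication:
          type: tls
      - name: external1              # external listener exposed through OpenShift routes
        port: 9094
        type: route
        tls: true
        authentication:
          type: scram-sha-512
    # ...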
Supported authentication options: mTLS authentication (only on the listeners with TLS enabled encryption) SCRAM-SHA-512 authentication OAuth 2.0 token-based authentication Custom authentication TLS versions and cipher suites The authentication option you choose depends on how you wish to authenticate client access to Kafka brokers. Note Try exploring the standard authentication options before using custom authentication. Custom authentication allows for any type of Kafka-supported authentication. It can provide more flexibility, but also adds complexity. Figure 15.1. Kafka listener authentication options The listener authentication property is used to specify an authentication mechanism specific to that listener. If no authentication property is specified then the listener does not authenticate clients which connect through that listener. The listener will accept all connections without authentication. Authentication must be configured when using the User Operator to manage KafkaUsers . The following example shows: A plain listener configured for SCRAM-SHA-512 authentication A tls listener with mTLS authentication An external listener with mTLS authentication Each listener is configured with a unique name and port within a Kafka cluster. Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). Example listener authentication configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # ... 15.1.1.1. mTLS authentication mTLS authentication is always used for the communication between Kafka brokers and ZooKeeper pods. Streams for Apache Kafka can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. For mutual, or two-way, authentication, both the server and the client present certificates. When you configure mTLS authentication, the broker authenticates the client (client authentication) and the client authenticates the broker (server authentication). mTLS listener configuration in the Kafka resource requires the following: tls: true to specify TLS encryption and server authentication authentication.type: tls to specify the client authentication When a Kafka cluster is created by the Cluster Operator, it creates a new secret with the name <cluster_name>-cluster-ca-cert . The secret contains a CA certificate. The CA certificate is in PEM and PKCS #12 format . To verify a Kafka cluster, add the CA certificate to the truststore in your client configuration. To verify a client, add a user certificate and key to the keystore in your client configuration. For more information on configuring a client for mTLS, see Section 15.2.2, "User authentication" . Note TLS authentication is more commonly one-way, with one party authenticating the identity of another. For example, when HTTPS is used between a web browser and a web server, the browser obtains proof of the identity of the web server. 
15.1.1.2. SCRAM-SHA-512 authentication SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. Streams for Apache Kafka can configure Kafka to use SASL (Simple Authentication and Security Layer) SCRAM-SHA-512 to provide authentication on both unencrypted and encrypted client connections. When SCRAM-SHA-512 authentication is used with a TLS connection, the TLS protocol provides the encryption, but is not used for authentication. The following properties of SCRAM make it safe to use SCRAM-SHA-512 even on unencrypted connections: The passwords are not sent in the clear over the communication channel. Instead the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user. The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks. When KafkaUser.spec.authentication.type is configured with scram-sha-512 the User Operator will generate a random 12-character password consisting of upper and lowercase ASCII letters and numbers. 15.1.1.3. Network policies By default, Streams for Apache Kafka automatically creates a NetworkPolicy resource for every listener that is enabled on a Kafka broker. This NetworkPolicy allows applications to connect to listeners in all namespaces. Use network policies as part of the listener configuration. If you want to restrict access to a listener at the network level to only selected applications or namespaces, use the networkPolicyPeers property. Each listener can have a different networkPolicyPeers configuration . For more information on network policy peers, refer to the NetworkPolicyPeer API reference . If you want to use custom network policies, you can set the STRIMZI_NETWORK_POLICY_GENERATION environment variable to false in the Cluster Operator configuration. For more information, see Section 9.5, "Configuring the Cluster Operator" . Note Your configuration of OpenShift must support ingress NetworkPolicies in order to use network policies in Streams for Apache Kafka. 15.1.1.4. Providing listener certificates You can provide your own server certificates, called Kafka listener certificates , for TLS listeners or external listeners which have TLS encryption enabled. For more information, see Section 15.3.4, "Providing your own Kafka listener certificates for TLS encryption" . Additional resources GenericKafkaListener schema reference 15.1.2. Kafka authorization Configure authorization for Kafka brokers using the Kafka.spec.kafka.authorization property in the Kafka resource. If the authorization property is missing, no authorization is enabled and clients have no restrictions. When enabled, authorization is applied to all enabled listeners. The authorization method is defined in the type field. Supported authorization options: Simple authorization OAuth 2.0 authorization (if you are using OAuth 2.0 token based authentication) Open Policy Agent (OPA) authorization Custom authorization Figure 15.2. Kafka cluster authorization options 15.1.2.1. Super users Super users can access all resources in your Kafka cluster regardless of any access restrictions, and are supported by all authorization mechanisms. To designate super users for a Kafka cluster, add a list of user principals to the superUsers property. If a user uses mTLS authentication, the username is the common name from the TLS certificate subject prefixed with CN= . 
If you are not using the User Operator and using your own certificates for mTLS, the username is the full certificate subject. A full certificate subject can have the following fields: CN=user,OU=my_ou,O=my_org,L=my_location,ST=my_state,C=my_country_code . Omit any fields that are not present. An example configuration with super users apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 - CN=client_4,OU=my_ou,O=my_org,L=my_location,ST=my_state,C=US - CN=client_5,OU=my_ou,O=my_org,C=GB - CN=client_6,O=my_org # ... 15.2. Security options for Kafka clients Use the KafkaUser resource to configure the authentication mechanism, authorization mechanism, and access rights for Kafka clients. In terms of configuring security, clients are represented as users. You can authenticate and authorize user access to Kafka brokers. Authentication permits access, and authorization constrains the access to permissible actions. You can also create super users that have unconstrained access to Kafka brokers. The authentication and authorization mechanisms must match the specification for the listener used to access the Kafka brokers . For more information on configuring a KafkaUser resource to access Kafka brokers securely, see Section 14.4, "Setting up client access to a Kafka cluster using listeners" . 15.2.1. Identifying a Kafka cluster for user handling A KafkaUser resource includes a label that defines the appropriate name of the Kafka cluster (derived from the name of the Kafka resource) to which it belongs. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster The label is used by the User Operator to identify the KafkaUser resource and create a new user, and also in subsequent handling of the user. If the label does not match the Kafka cluster, the User Operator cannot identify the KafkaUser and the user is not created. If the status of the KafkaUser resource remains empty, check your label. 15.2.2. User authentication Use the KafkaUser custom resource to configure authentication credentials for users (clients) that require access to a Kafka cluster. Configure the credentials using the authentication property in KafkaUser.spec . By specifying a type , you control what credentials are generated. Supported authentication types: tls for mTLS authentication tls-external for mTLS authentication using external certificates scram-sha-512 for SCRAM-SHA-512 authentication If tls or scram-sha-512 is specified, the User Operator creates authentication credentials when it creates the user. If tls-external is specified, the user still uses mTLS, but no authentication credentials are created. Use this option when you're providing your own certificates. When no authentication type is specified, the User Operator does not create the user or its credentials. You can use tls-external to authenticate with mTLS using a certificate issued outside the User Operator. The User Operator does not generate a TLS certificate or a secret. You can still manage ACL rules and quotas through the User Operator in the same way as when you're using the tls mechanism. This means that you use the CN= USER-NAME format when specifying ACL rules and quotas. USER-NAME is the common name given in a TLS certificate. 15.2.2.1. mTLS authentication To use mTLS authentication, you set the type field in the KafkaUser resource to tls . 
Example user with mTLS authentication enabled apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls # ... The authentication type must match the equivalent configuration for the Kafka listener used to access the Kafka cluster. When the user is created by the User Operator, it creates a new secret with the same name as the KafkaUser resource. The secret contains a private and public key for mTLS. The public key is contained in a user certificate, which is signed by a clients CA (certificate authority) when it is created. All keys are in X.509 format. Note If you are using the clients CA generated by the Cluster Operator, the user certificates generated by the User Operator are also renewed when the clients CA is renewed by the Cluster Operator. The user secret provides keys and certificates in PEM and PKCS #12 formats . Example secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store When you configure a client, you specify the following: Truststore properties for the public cluster CA certificate to verify the identity of the Kafka cluster Keystore properties for the user authentication credentials to verify the client The configuration depends on the file format (PEM or PKCS #12). This example uses PKCS #12 stores, and the passwords required to access the credentials in the stores. Example client configuration using mTLS in PKCS #12 format bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6 1 The bootstrap server address to connect to the Kafka cluster. 2 The security protocol option when using TLS for encryption. 3 The truststore location contains the public key certificate ( ca.p12 ) for the Kafka cluster. A cluster CA certificate and password is generated by the Cluster Operator in the <cluster_name>-cluster-ca-cert secret when the Kafka cluster is created. 4 The password ( ca.password ) for accessing the truststore. 5 The keystore location contains the public key certificate ( user.p12 ) for the Kafka user. 6 The password ( user.password ) for accessing the keystore. 15.2.2.2. mTLS authentication using a certificate issued outside the User Operator To use mTLS authentication using a certificate issued outside the User Operator, you set the type field in the KafkaUser resource to tls-external . A secret and credentials are not created for the user. Example user with mTLS authentication that uses a certificate issued outside the User Operator apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls-external # ... 15.2.2.3. SCRAM-SHA-512 authentication To use the SCRAM-SHA-512 authentication mechanism, you set the type field in the KafkaUser resource to scram-sha-512 . 
Example user with SCRAM-SHA-512 authentication enabled apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 # ... When the user is created by the User Operator, it creates a new secret with the same name as the KafkaUser resource. The secret contains the generated password in the password key, which is encoded with base64. In order to use the password, it must be decoded. Example secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2 1 The generated password, base64 encoded. 2 The JAAS configuration string for SASL SCRAM-SHA-512 authentication, base64 encoded. Decoding the generated password: 15.2.2.3.1. Custom password configuration When a user is created, Streams for Apache Kafka generates a random password. You can use your own password instead of the one generated by Streams for Apache Kafka. To do so, create a secret with the password and reference it in the KafkaUser resource. Example user with a password set for SCRAM-SHA-512 authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 password: valueFrom: secretKeyRef: name: my-secret 1 key: my-password 2 # ... 1 The name of the secret containing the predefined password. 2 The key for the password stored inside the secret. 15.2.3. User authorization Use the KafkaUser custom resource to configure authorization rules for users (clients) that require access to a Kafka cluster. Configure the rules using the authorization property in KafkaUser.spec . By specifying a type , you control what rules are used. To use simple authorization, you set the type property to simple in KafkaUser.spec.authorization . The simple authorization uses the Kafka Admin API to manage the ACL rules inside your Kafka cluster. Whether ACL management in the User Operator is enabled or not depends on your authorization configuration in the Kafka cluster. For simple authorization, ACL management is always enabled. For OPA authorization, ACL management is always disabled. Authorization rules are configured in the OPA server. For Red Hat Single Sign-On authorization, you can manage the ACL rules directly in Red Hat Single Sign-On. You can also delegate authorization to the simple authorizer as a fallback option in the configuration. When delegation to the simple authorizer is enabled, the User Operator will enable management of ACL rules as well. For custom authorization using a custom authorization plugin, use the supportsAdminApi property in the .spec.kafka.authorization configuration of the Kafka custom resource to enable or disable the support. Authorization is cluster-wide. The authorization type must match the equivalent configuration in the Kafka custom resource. If ACL management is not enabled, Streams for Apache Kafka rejects a resource if it contains any ACL rules. If you're using a standalone deployment of the User Operator, ACL management is enabled by default. You can disable it using the STRIMZI_ACLS_ADMIN_API_SUPPORTED environment variable. 
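As a sketch of simple authorization managed through the User Operator, the following KafkaUser grants a client read access to a topic and a consumer group. The topic, group, and user names are illustrative, and a fuller example appears in the procedure for securing user access later in this chapter.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Describe
          - Read
      - resource:
          type: group
          name: my-group
          patternType: literal
        operations:
          - Read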
If no authorization is specified, the User Operator does not provision any access rights for the user. Whether such a KafkaUser can still access resources depends on the authorizer being used. For example, for simple authorization, this is determined by the allow.everyone.if.no.acl.found configuration in the Kafka cluster. 15.2.3.1. ACL rules simple authorization uses ACL rules to manage access to Kafka brokers. ACL rules grant access rights to the user, which you specify in the acls property. For more information about the AclRule object, see the AclRule schema reference . 15.2.3.2. Super user access to Kafka brokers If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints defined in ACLs in KafkaUser . For more information on configuring super user access to brokers, see Kafka authorization . 15.2.3.3. User quotas You can configure the spec for the KafkaUser resource to enforce quotas so that a user does not exceed a configured level of access to Kafka brokers. You can set size-based network usage and time-based CPU utilization thresholds. You can also add a partition mutation quota to control the rate at which requests to change partitions are accepted for user requests. An example KafkaUser with user quotas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4 1 Byte-per-second quota on the amount of data the user can push to a Kafka broker 2 Byte-per-second quota on the amount of data the user can fetch from a Kafka broker 3 CPU utilization limit as a percentage of time for a client group 4 Number of concurrent partition creation and deletion operations (mutations) allowed per second For more information on these properties, see the KafkaUserQuotas schema reference . 15.3. Securing access to Kafka brokers To establish secure access to Kafka brokers, you configure and apply: A Kafka resource to: Create listeners with a specified authentication type Configure authorization for the whole Kafka cluster A KafkaUser resource to access the Kafka brokers securely through the listeners Configure the Kafka resource to set up: Listener authentication Network policies that restrict access to Kafka listeners Kafka authorization Super users for unconstrained access to brokers Authentication is configured independently for each listener. Authorization is always configured for the whole Kafka cluster. The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster. You can replace the certificates generated by the Cluster Operator by installing your own certificates . You can also provide your own server certificates and private keys for any listener with TLS encryption enabled. These user-provided certificates are called Kafka listener certificates . Providing Kafka listener certificates allows you to leverage existing security infrastructure, such as your organization's private CA or a public CA. Kafka clients will need to trust the CA which was used to sign the listener certificate. You must manually renew Kafka listener certificates when needed. Certificates are available in PKCS #12 format (.p12) and PEM (.crt) formats. 
Use KafkaUser to enable the authentication and authorization mechanisms that a specific client uses to access Kafka. Configure the KafkaUser resource to set up: Authentication to match the enabled listener authentication Authorization to match the enabled Kafka authorization Quotas to control the use of resources by clients The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type. Refer to the schema reference for more information on access configuration properties: Kafka schema reference KafkaUser schema reference GenericKafkaListener schema reference 15.3.1. Securing Kafka brokers This procedure shows the steps involved in securing Kafka brokers when running Streams for Apache Kafka. The security implemented for Kafka brokers must be compatible with the security implemented for the clients requiring access. Kafka.spec.kafka.listeners[*].authentication matches KafkaUser.spec.authentication Kafka.spec.kafka.authorization matches KafkaUser.spec.authorization The steps show the configuration for simple authorization and a listener using mTLS authentication. For more information on listener configuration, see the GenericKafkaListener schema reference . Alternatively, you can use SCRAM-SHA or OAuth 2.0 for listener authentication , and OAuth 2.0 or OPA for Kafka authorization . Procedure Configure the Kafka resource. Configure the authorization property for authorization. Configure the listeners property to create a listener with authentication. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... authorization: 1 type: simple superUsers: 2 - CN=client_1 - user_2 - CN=client_3 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls 3 # ... zookeeper: # ... 1 Authorization enables simple authorization on the Kafka broker using the AclAuthorizer and StandardAuthorizer Kafka plugins . 2 List of user principals with unlimited access to Kafka. CN is the common name from the client certificate when mTLS authentication is used. 3 Listener authentication mechanisms may be configured for each listener, and specified as mTLS, SCRAM-SHA-512, or token-based OAuth 2.0 . If you are configuring an external listener, the configuration is dependent on the chosen connection mechanism. Create or update the Kafka resource. oc apply -f <kafka_configuration_file> The Kafka cluster is configured with a Kafka broker listener using mTLS authentication. A service is created for each Kafka broker pod. A service is created to serve as the bootstrap address for connection to the Kafka cluster. The cluster CA certificate to verify the identity of the kafka brokers is also created in the secret <cluster_name> -cluster-ca-cert . 15.3.2. Securing user access to Kafka Create or modify a KafkaUser to represent a client that requires secure access to the Kafka cluster. When you configure the KafkaUser authentication and authorization mechanisms, ensure they match the equivalent Kafka configuration: KafkaUser.spec.authentication matches Kafka.spec.kafka.listeners[*].authentication KafkaUser.spec.authorization matches Kafka.spec.kafka.authorization This procedure shows how a user is created with mTLS authentication. You can also create a user with SCRAM-SHA authentication. The authentication required depends on the type of authentication configured for the Kafka broker listener . 
Note Authentication between Kafka users and Kafka brokers depends on the authentication settings for each. For example, it is not possible to authenticate a user with mTLS if it is not also enabled in the Kafka configuration. Prerequisites A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption . A running User Operator (typically deployed with the Entity Operator). The authentication type in KafkaUser should match the authentication configured in Kafka brokers. Procedure Configure the KafkaUser resource. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read 1 User authentication mechanism, defined as mutual tls or scram-sha-512 . 2 Simple authorization, which requires an accompanying list of ACL rules. Create or update the KafkaUser resource. oc apply -f <user_config_file> The user is created, as well as a Secret with the same name as the KafkaUser resource. The Secret contains a private and public key for mTLS authentication. For information on configuring a Kafka client with properties for secure connection to Kafka brokers, see Section 14.4, "Setting up client access to a Kafka cluster using listeners" . 15.3.3. Restricting access to Kafka listeners using network policies You can restrict access to a listener to only selected applications by using the networkPolicyPeers property. Prerequisites An OpenShift cluster with support for Ingress NetworkPolicies. The Cluster Operator is running. Procedure Open the Kafka resource. In the networkPolicyPeers property, define the application pods or namespaces that will be allowed to access the Kafka cluster. For example, to configure a tls listener to allow connections only from application pods with the label app set to kafka-client : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # ... zookeeper: # ... Create or update the resource. Use oc apply : oc apply -f your-file Additional resources networkPolicyPeers configuration NetworkPolicyPeer API reference 15.3.4. Providing your own Kafka listener certificates for TLS encryption Listeners provide client access to Kafka brokers. Configure listeners in the Kafka resource, including the configuration required for client access using TLS. By default, the listeners use certificates signed by the internal CA (certificate authority) certificates generated by Streams for Apache Kafka. A CA certificate is generated by the Cluster Operator when it creates a Kafka cluster. When you configure a client for TLS, you add the CA certificate to its truststore configuration to verify the Kafka cluster. You can also install and use your own CA certificates . Or you can configure a listener using brokerCertChainAndKey properties and use a custom server certificate. The brokerCertChainAndKey properties allow you to access Kafka brokers using your own custom certificates at the listener-level. You create a secret with your own private key and server certificate, then specify the key and certificate in the listener's brokerCertChainAndKey configuration. 
You can use a certificate signed by a public (external) CA or a private CA. If signed by a public CA, you usually won't need to add it to a client's truststore configuration. Custom certificates are not managed by Streams for Apache Kafka, so you need to renew them manually. Note Listener certificates are used for TLS encryption and server authentication only. They are not used for TLS client authentication. If you want to use your own certificate for TLS client authentication as well, you must install and use your own clients CA . Prerequisites The Cluster Operator is running. Each listener requires the following: A compatible server certificate signed by an external CA. (Provide an X.509 certificate in PEM format.) You can use one listener certificate for multiple listeners. Subject Alternative Names (SANs) are specified in the certificate for each listener. For more information, see Section 15.3.5, "Alternative subjects in server certificates for Kafka listeners" . If you are not using a self-signed certificate, you can provide a certificate that includes the whole CA chain in the certificate. You can only use the brokerCertChainAndKey properties if TLS encryption ( tls: true ) is configured for the listener. Note Streams for Apache Kafka does not support the use of encrypted private keys for TLS. The private key stored in the secret must be unencrypted for this to work. Procedure Create a Secret containing your private key and server certificate: oc create secret generic my-secret --from-file= my-listener-key.key --from-file= my-listener-certificate.crt Edit the Kafka resource for your cluster. Configure the listener to use your Secret , certificate file, and private key file in the configuration.brokerCertChainAndKey property. Example configuration for a loadbalancer external listener with TLS encryption enabled # ... listeners: - name: plain port: 9092 type: internal tls: false - name: external3 port: 9094 type: loadbalancer tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... Example configuration for a TLS listener # ... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... Apply the new configuration to create or update the resource: oc apply -f kafka.yaml The Cluster Operator starts a rolling update of the Kafka cluster, which updates the configuration of the listeners. Note A rolling update is also started if you update a Kafka listener certificate in a Secret that is already used by a listener. 15.3.5. Alternative subjects in server certificates for Kafka listeners In order to use TLS hostname verification with your own Kafka listener certificates , you must use the correct Subject Alternative Names (SANs) for each listener. The certificate SANs must specify hostnames for the following: All of the Kafka brokers in your cluster The Kafka cluster bootstrap service You can use wildcard certificates if they are supported by your CA. 15.3.5.1. Examples of SANs for internal listeners Use the following examples to help you specify hostnames of the SANs in your certificates for your internal listeners. Replace <cluster-name> with the name of the Kafka cluster and <namespace> with the OpenShift namespace where the cluster is running. Wildcards example for a type: internal listener //Kafka brokers *. 
<cluster-name> -kafka-brokers *. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc Non-wildcards example for a type: internal listener // Kafka brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc <cluster-name> -kafka-1. <cluster-name> -kafka-brokers <cluster-name> -kafka-1. <cluster-name> -kafka-brokers. <namespace> .svc # ... // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc Non-wildcards example for a type: cluster-ip listener // Kafka brokers <cluster-name> -kafka- <listener-name> -0 <cluster-name> -kafka- <listener-name> -0. <namespace> .svc <cluster-name> -kafka- <listener-name> -1 <cluster-name> -kafka- <listener-name> -1. <namespace> .svc # ... // Bootstrap service <cluster-name> -kafka- <listener-name> -bootstrap <cluster-name> -kafka- <listener-name> -bootstrap. <namespace> .svc 15.3.5.2. Examples of SANs for external listeners For external listeners which have TLS encryption enabled, the hostnames you need to specify in certificates depends on the external listener type . Table 15.1. SANs for each type of external listener External listener type In the SANs, specify... ingress Addresses of all Kafka broker Ingress resources and the address of the bootstrap Ingress . You can use a matching wildcard name. route Addresses of all Kafka broker Routes and the address of the bootstrap Route . You can use a matching wildcard name. loadbalancer Addresses of all Kafka broker loadbalancers and the bootstrap loadbalancer address. You can use a matching wildcard name. nodeport Addresses of all OpenShift worker nodes that the Kafka broker pods might be scheduled to. You can use a matching wildcard name. Additional resources Section 15.3.4, "Providing your own Kafka listener certificates for TLS encryption" 15.4. Using OAuth 2.0 token-based authentication Streams for Apache Kafka supports the use of OAuth 2.0 authentication using the OAUTHBEARER and PLAIN mechanisms. OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources. You can configure OAuth 2.0 authentication, then OAuth 2.0 authorization . Kafka brokers and clients both need to be configured to use OAuth 2.0. OAuth 2.0 authentication can also be used in conjunction with simple or OPA-based Kafka authorization . Using OAuth 2.0 token-based authentication, application clients can access resources on application servers (called resource servers ) without exposing account credentials. The application client passes an access token as a means of authenticating, which application servers can also use to determine the level of access to grant. The authorization server handles the granting of access and inquiries about access. In the context of Streams for Apache Kafka: Kafka brokers act as OAuth 2.0 resource servers Kafka clients act as OAuth 2.0 application clients Kafka clients authenticate to Kafka brokers. The brokers and clients communicate with the OAuth 2.0 authorization server, as necessary, to obtain or validate access tokens. For a deployment of Streams for Apache Kafka, OAuth 2.0 integration provides: Server-side OAuth 2.0 support for Kafka brokers Client-side OAuth 2.0 support for Kafka MirrorMaker, Kafka Connect and the Kafka Bridge 15.4.1. 
OAuth 2.0 authentication mechanisms Streams for Apache Kafka supports the OAUTHBEARER and PLAIN mechanisms for OAuth 2.0 authentication. Both mechanisms allow Kafka clients to establish authenticated sessions with Kafka brokers. The authentication flow between clients, the authorization server, and Kafka brokers is different for each mechanism. We recommend that you configure clients to use OAUTHBEARER whenever possible. OAUTHBEARER provides a higher level of security than PLAIN because client credentials are never shared with Kafka brokers. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER. You configure Kafka broker listeners to use OAuth 2.0 authentication for connecting clients. If necessary, you can use the OAUTHBEARER and PLAIN mechanisms on the same oauth listener. The properties to support each mechanism must be explicitly specified in the oauth listener configuration. OAUTHBEARER overview OAUTHBEARER is automatically enabled in the oauth listener configuration for the Kafka broker. You can set the enableOauthBearer property to true , though this is not required. # ... authentication: type: oauth # ... enableOauthBearer: true Many Kafka client tools use libraries that provide basic support for OAUTHBEARER at the protocol level. To support application development, Streams for Apache Kafka provides an OAuth callback handler for the upstream Kafka Client Java libraries (but not for other libraries). Therefore, you do not need to write your own callback handlers. An application client can use the callback handler to provide the access token. Clients written in other languages, such as Go, must use custom code to connect to the authorization server and obtain the access token. With OAUTHBEARER, the client initiates a session with the Kafka broker for credentials exchange, where credentials take the form of a bearer token provided by the callback handler. Using the callbacks, you can configure token provision in one of three ways: Client ID and Secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time A long-lived refresh token, obtained manually at configuration time Note OAUTHBEARER authentication can only be used by Kafka clients that support the OAUTHBEARER mechanism at the protocol level. PLAIN overview To use PLAIN, you must enable it in the oauth listener configuration for the Kafka broker. In the following example, PLAIN is enabled in addition to OAUTHBEARER, which is enabled by default. If you want to use PLAIN only, you can disable OAUTHBEARER by setting enableOauthBearer to false . # ... authentication: type: oauth # ... enablePlain: true tokenEndpointUri: https:// OAUTH-SERVER-ADDRESS /auth/realms/external/protocol/openid-connect/token PLAIN is a simple authentication mechanism used by all Kafka client tools. To enable PLAIN to be used with OAuth 2.0 authentication, Streams for Apache Kafka provides OAuth 2.0 over PLAIN server-side callbacks. With the Streams for Apache Kafka implementation of PLAIN, the client credentials are not stored in ZooKeeper. Instead, client credentials are handled centrally behind a compliant authorization server, similar to when OAUTHBEARER authentication is used. 
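As noted above, you can run a PLAIN-only listener by disabling OAUTHBEARER. A minimal sketch of the authentication fragment for that combination, reusing the placeholder authorization server address from the earlier snippet, might look like this:
# ...
authentication:
  type: oauth
  # ...
  enableOauthBearer: false   # disable OAUTHBEARER so that only PLAIN is offered to clients
  enablePlain: true
  tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/token
At least one of the two mechanisms must remain enabled on the listener.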
When used with the OAuth 2.0 over PLAIN callbacks, Kafka clients authenticate with Kafka brokers using either of the following methods: Client ID and secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time For both methods, the client must provide the PLAIN username and password properties to pass credentials to the Kafka broker. The client uses these properties to pass a client ID and secret or username and access token. Client IDs and secrets are used to obtain access tokens. Access tokens are passed as password property values. You pass the access token with or without a $accessToken: prefix. If you configure a token endpoint (tokenEndpointUri) in the listener configuration, you need the prefix. If you don't configure a token endpoint (tokenEndpointUri) in the listener configuration, you don't need the prefix, and the Kafka broker interprets the password as a raw access token. If the password is set as the access token, the username must be set to the same principal name that the Kafka broker obtains from the access token. You can specify username extraction options in your listener using the userNameClaim, fallbackUserNameClaim, fallbackUsernamePrefix, and userInfoEndpointUri properties. The username extraction process also depends on your authorization server; in particular, how it maps client IDs to account names. Note OAuth over PLAIN does not support the password grant mechanism. Only client credentials (client ID and secret) or an access token can be 'proxied' through the SASL PLAIN mechanism, as described above. Additional resources Section 15.4.6.2, "Configuring OAuth 2.0 support for Kafka brokers" 15.4.2. OAuth 2.0 Kafka broker configuration Kafka broker configuration for OAuth 2.0 involves: Creating the OAuth 2.0 client in the authorization server Configuring OAuth 2.0 authentication in the Kafka custom resource Note In relation to the authorization server, Kafka brokers and Kafka clients are both regarded as OAuth 2.0 clients. 15.4.2.1. OAuth 2.0 client configuration on an authorization server To configure a Kafka broker to validate the token received during session initiation, the recommended approach is to create an OAuth 2.0 client definition in an authorization server, configured as confidential, with the following client credentials enabled: Client ID of kafka (for example) Client ID and Secret as the authentication mechanism Note You only need to use a client ID and secret when using a non-public introspection endpoint of the authorization server. The credentials are not typically required when using public authorization server endpoints, as with fast local JWT token validation. 15.4.2.2. OAuth 2.0 authentication configuration in the Kafka cluster To use OAuth 2.0 authentication in the Kafka cluster, you specify, for example, a tls listener configuration for your Kafka cluster custom resource with the authentication method oauth : Assigning the authentication method type for OAuth 2.0 apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth #... You can configure OAuth 2.0 authentication in your listeners. We recommend using OAuth 2.0 authentication together with TLS encryption ( tls: true ). Without encryption, the connection is vulnerable to network eavesdropping and unauthorized access through token theft.
You configure an external listener with type: oauth for a secure transport layer to communicate with the client. Using OAuth 2.0 with an external listener # ... listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #... The tls property is false by default, so it must be enabled. When you have defined the type of authentication as OAuth 2.0, you add configuration based on the type of validation, either as fast local JWT validation or token validation using an introspection endpoint . The procedure to configure OAuth 2.0 for listeners, with descriptions and examples, is described in Configuring OAuth 2.0 support for Kafka brokers . 15.4.2.3. Fast local JWT token validation configuration Fast local JWT token validation checks a JWT token signature locally. The local check ensures that a token: Conforms to type by containing a ( typ ) claim value of Bearer for an access token Is valid (not expired) Has an issuer that matches a validIssuerURI You specify a validIssuerURI attribute when you configure the listener, so that any tokens not issued by the authorization server are rejected. The authorization server does not need to be contacted during fast local JWT token validation. You activate fast local JWT token validation by specifying a jwksEndpointUri attribute, the endpoint exposed by the OAuth 2.0 authorization server. The endpoint contains the public keys used to validate signed JWT tokens, which are sent as credentials by Kafka clients. Note All communication with the authorization server should be performed using TLS encryption. You can configure a certificate truststore as an OpenShift Secret in your Streams for Apache Kafka project namespace, and use a tlsTrustedCertificates attribute to point to the OpenShift Secret containing the truststore file. You might want to configure a userNameClaim to properly extract a username from the JWT token. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. If you want to use Kafka ACL authorization, you need to identify the user by their username during authentication. (The sub claim in JWT tokens is typically a unique ID, not a username.) Example configuration for fast local JWT token validation apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: #... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth validIssuerUri: <https://<auth_server_address>/auth/realms/tls> jwksEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/certs> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt #... 15.4.2.4. OAuth 2.0 introspection endpoint configuration Token validation using an OAuth 2.0 introspection endpoint treats a received access token as opaque. The Kafka broker sends an access token to the introspection endpoint, which responds with the token information necessary for validation. Importantly, it returns up-to-date information if the specific access token is valid, and also information about when the token expires. To configure OAuth 2.0 introspection-based validation, you specify an introspectionEndpointUri attribute rather than the jwksEndpointUri attribute specified for fast local JWT token validation. 
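As the following example shows, the broker's client secret for the introspection endpoint is read from an OpenShift secret rather than set inline. Assuming the secret name my-cluster-oauth and key clientSecret used in that example, the secret might be created declaratively as follows (the value is a placeholder and must be base64-encoded):
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-oauth       # referenced by clientSecret.secretName in the listener configuration
type: Opaque
data:
  clientSecret: <base64-encoded secret issued to the kafka-broker client by the authorization server>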
Depending on the authorization server, you typically have to specify a clientId and clientSecret , because the introspection endpoint is usually protected. Example configuration for an introspection endpoint apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth clientId: kafka-broker clientSecret: secretName: my-cluster-oauth key: clientSecret validIssuerUri: <https://<auth_server_-_address>/auth/realms/tls> introspectionEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/token/introspect> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt 15.4.3. Session re-authentication for Kafka brokers You can configure oauth listeners to use Kafka session re-authentication for OAuth 2.0 sessions between Kafka clients and Kafka brokers. This mechanism enforces the expiry of an authenticated session between the client and the broker after a defined period of time. When a session expires, the client immediately starts a new session by reusing the existing connection rather than dropping it. Session re-authentication is disabled by default. To enable it, you set a time value for maxSecondsWithoutReauthentication in the oauth listener configuration. The same property is used to configure session re-authentication for OAUTHBEARER and PLAIN authentication. For an example configuration, see Section 15.4.6.2, "Configuring OAuth 2.0 support for Kafka brokers" . Session re-authentication must be supported by the Kafka client libraries used by the client. Session re-authentication can be used with fast local JWT or introspection endpoint token validation. Client re-authentication When the broker's authenticated session expires, the client must re-authenticate to the existing session by sending a new, valid access token to the broker, without dropping the connection. If token validation is successful, a new client session is started using the existing connection. If the client fails to re-authenticate, the broker will close the connection if further attempts are made to send or receive messages. Java clients that use Kafka client library 2.2 or later automatically re-authenticate if the re-authentication mechanism is enabled on the broker. Session re-authentication also applies to refresh tokens, if used. When the session expires, the client refreshes the access token by using its refresh token. The client then uses the new access token to re-authenticate to the existing session. Session expiry for OAUTHBEARER and PLAIN When session re-authentication is configured, session expiry works differently for OAUTHBEARER and PLAIN authentication. For OAUTHBEARER and PLAIN, using the client ID and secret method: The broker's authenticated session will expire at the configured maxSecondsWithoutReauthentication . The session will expire earlier if the access token expires before the configured time. For PLAIN using the long-lived access token method: The broker's authenticated session will expire at the configured maxSecondsWithoutReauthentication . Re-authentication will fail if the access token expires before the configured time. Although session re-authentication is attempted, PLAIN has no mechanism for refreshing tokens. If maxSecondsWithoutReauthentication is not configured, OAUTHBEARER and PLAIN clients can remain connected to brokers indefinitely, without needing to re-authenticate. 
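As a minimal sketch of how these expiry rules interact, the following authentication fragment reuses the one-hour value from the earlier examples; the comments summarize the behaviour described above:
# ...
authentication:
  type: oauth
  # ...
  maxSecondsWithoutReauthentication: 3600   # force re-authentication of each session after at most one hour
  # With the client ID and secret method, the session expires earlier if the access token
  # expires before the hour is up; the client then re-authenticates with a new token.
  # With PLAIN and a long-lived access token, re-authentication fails once the token has
  # expired, because PLAIN has no mechanism for refreshing tokens.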
Authenticated sessions do not end with access token expiry. However, this can be considered when configuring authorization, for example, by using keycloak authorization or installing a custom authorizer. Additional resources Section 15.4.2, "OAuth 2.0 Kafka broker configuration" Section 15.4.6.2, "Configuring OAuth 2.0 support for Kafka brokers" KafkaListenerAuthenticationOAuth schema reference KIP-368 15.4.4. OAuth 2.0 Kafka client configuration A Kafka client is configured with either: The credentials required to obtain a valid access token from an authorization server (client ID and Secret) A valid long-lived access token or refresh token, obtained using tools provided by an authorization server The only information ever sent to the Kafka broker is an access token. The credentials used to authenticate with the authorization server to obtain the access token are never sent to the broker. When a client obtains an access token, no further communication with the authorization server is needed. The simplest mechanism is authentication with a client ID and Secret. Using a long-lived access token, or a long-lived refresh token, adds more complexity because there is an additional dependency on authorization server tools. Note If you are using long-lived access tokens, you may need to configure the client in the authorization server to increase the maximum lifetime of the token. If the Kafka client is not configured with an access token directly, the client exchanges credentials for an access token during Kafka session initiation by contacting the authorization server. The Kafka client exchanges either: Client ID and Secret Client ID, refresh token, and (optionally) a secret Username and password, with client ID and (optionally) a secret 15.4.5. OAuth 2.0 client authentication flows OAuth 2.0 authentication flows depend on the underlying Kafka client and Kafka broker configuration. The flows must also be supported by the authorization server used. The Kafka broker listener configuration determines how clients authenticate using an access token. The client can pass a client ID and secret to request an access token. If a listener is configured to use PLAIN authentication, the client can authenticate with a client ID and secret or username and access token. These values are passed as the username and password properties of the PLAIN mechanism. Listener configuration supports the following token validation options: You can use fast local token validation based on JWT signature checking and local token introspection, without contacting an authorization server. The authorization server provides a JWKS endpoint with public certificates that are used to validate signatures on the tokens. You can use a call to a token introspection endpoint provided by an authorization server. Each time a new Kafka broker connection is established, the broker passes the access token received from the client to the authorization server. The Kafka broker checks the response to confirm whether or not the token is valid. Note An authorization server might only allow the use of opaque access tokens, which means that local token validation is not possible. Kafka client credentials can also be configured for the following types of authentication: Direct local access using a previously generated long-lived access token Contact with the authorization server for a new access token to be issued (using a client ID and a secret, or a refresh token, or a username and a password) 15.4.5.1. 
Example client authentication flows using the SASL OAUTHBEARER mechanism You can use the following communication flows for Kafka authentication using the SASL OAUTHBEARER mechanism. Client using client ID and secret, with broker delegating validation to authorization server Client using client ID and secret, with broker performing fast local token validation Client using long-lived access token, with broker delegating validation to authorization server Client using long-lived access token, with broker performing fast local validation Client using client ID and secret, with broker delegating validation to authorization server The Kafka client requests an access token from the authorization server using a client ID and secret, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server using its own client ID and secret. A Kafka client session is established if the token is valid. Client using client ID and secret, with broker performing fast local token validation The Kafka client authenticates with the authorization server from the token endpoint, using a client ID and secret, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token locally using a JWT token signature check, and local token introspection. Client using long-lived access token, with broker delegating validation to authorization server The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server, using its own client ID and secret. A Kafka client session is established if the token is valid. Client using long-lived access token, with broker performing fast local validation The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token locally using a JWT token signature check and local token introspection. Warning Fast local JWT token signature validation is suitable only for short-lived tokens as there is no check with the authorization server if a token has been revoked. Token expiration is written into the token, but revocation can happen at any time, so cannot be accounted for without contacting the authorization server. Any issued token would be considered valid until it expires. 15.4.5.2. Example client authentication flows using the SASL PLAIN mechanism You can use the following communication flows for Kafka authentication using the OAuth PLAIN mechanism. Client using a client ID and secret, with the broker obtaining the access token for the client Client using a long-lived access token without a client ID and secret Client using a client ID and secret, with the broker obtaining the access token for the client The Kafka client passes a clientId as a username and a secret as a password. 
The Kafka broker uses a token endpoint to pass the clientId and secret to the authorization server. The authorization server returns a fresh access token or an error if the client credentials are not valid. The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if the token validation is successful. If local token introspection is used, a request is not made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. Client using a long-lived access token without a client ID and secret The Kafka client passes a username and password. The password provides the value of an access token that was obtained manually and configured before running the client. The password is passed with or without an USDaccessToken: string prefix depending on whether or not the Kafka broker listener is configured with a token endpoint for authentication. If the token endpoint is configured, the password should be prefixed by USDaccessToken: to let the broker know that the password parameter contains an access token rather than a client secret. The Kafka broker interprets the username as the account username. If the token endpoint is not configured on the Kafka broker listener (enforcing a no-client-credentials mode ), the password should provide the access token without the prefix. The Kafka broker interprets the username as the account username. In this mode, the client doesn't use a client ID and secret, and the password parameter is always interpreted as a raw access token. The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if token validation is successful. If local token introspection is used, there is no request made to the authorization server. Kafka broker validates the access token locally using a JWT token signature check. 15.4.6. Configuring OAuth 2.0 authentication OAuth 2.0 is used for interaction between Kafka clients and Streams for Apache Kafka components. In order to use OAuth 2.0 for Streams for Apache Kafka, you must: Deploy an authorization server and configure the deployment to integrate with Streams for Apache Kafka Deploy or update the Kafka cluster with Kafka broker listeners configured to use OAuth 2.0 Update your Java-based Kafka clients to use OAuth 2.0 Update Kafka component clients to use OAuth 2.0 15.4.6.1. Configuring an OAuth 2.0 authorization server This procedure describes in general what you need to do to configure an authorization server for integration with Streams for Apache Kafka. These instructions are not product specific. The steps are dependent on the chosen authorization server. Consult the product documentation for the authorization server for information on how to set up OAuth 2.0 access. Note If you already have an authorization server deployed, you can skip the deployment step and use your current deployment. Procedure Deploy the authorization server to your cluster. Access the CLI or admin console for the authorization server to configure OAuth 2.0 for Streams for Apache Kafka. Now prepare the authorization server to work with Streams for Apache Kafka. Configure a kafka-broker client. 
Configure clients for each Kafka client component of your application. What to do next After deploying and configuring the authorization server, configure the Kafka brokers to use OAuth 2.0 . 15.4.6.2. Configuring OAuth 2.0 support for Kafka brokers This procedure describes how to configure Kafka brokers so that the broker listeners are enabled to use OAuth 2.0 authentication using an authorization server. We advise using OAuth 2.0 over an encrypted interface through a listener with tls: true . Plain listeners are not recommended. If the authorization server is using certificates signed by a trusted CA and matching the OAuth 2.0 server hostname, the TLS connection works using the default settings. Otherwise, you may need to configure the truststore with the proper certificates or disable the certificate hostname validation. When configuring the Kafka broker, you have two options for the mechanism used to validate the access token during OAuth 2.0 authentication of the newly connected Kafka client: Configuring fast local JWT token validation Configuring token validation using an introspection endpoint Before you start For more information on the configuration of OAuth 2.0 authentication for Kafka broker listeners, see: KafkaListenerAuthenticationOAuth schema reference OAuth 2.0 authentication mechanisms Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed Procedure Update the Kafka broker configuration ( Kafka.spec.kafka ) of your Kafka resource in an editor. oc edit kafka my-cluster Configure the Kafka broker listeners configuration. The configuration for each type of listener does not have to be the same, as they are independent. The examples here show the configuration options as configured for external listeners. Example 1: Configuring fast local JWT token validation #... - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: https://<auth_server_address>/auth/realms/external 2 jwksEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/certs 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10 1 Authentication type for the listener set to oauth . 2 URI of the token issuer used for authentication. 3 URI of the JWKS certificate endpoint used for local JWT validation. 4 The token claim (or key) that contains the actual username used to identify the user. Its value depends on the authorization server. If necessary, a JsonPath expression like "['user.info'].['user.id']" can be used to retrieve the username from nested JSON attributes within a token. 5 (Optional) Activates the Kafka re-authentication mechanism that enforces session expiry to the same length of time as the access token. If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. 6 (Optional) Trusted certificates for TLS connection to the authorization server. 7 (Optional) Disable TLS hostname verification. Default is false . 8 The duration the JWKS certificates are considered valid before they expire. Default is 360 seconds.
If you specify a longer time, consider the risk of allowing access to revoked certificates. 9 The period between refreshes of JWKS certificates. The interval must be at least 60 seconds shorter than the expiry interval. Default is 300 seconds. 10 The minimum pause in seconds between consecutive attempts to refresh JWKS public keys. When an unknown signing key is encountered, the JWKS keys refresh is scheduled outside the regular periodic schedule with at least the specified pause since the last refresh attempt. The refreshing of keys follows the rule of exponential backoff, retrying on unsuccessful refreshes with ever increasing pause, until it reaches jwksRefreshSeconds . The default value is 1. Example 2: Configuring token validation using an introspection endpoint - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://<auth_server_address>/auth/realms/external introspectionEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token/introspect 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 1 URI of the token introspection endpoint. 2 Client ID to identify the client. 3 Client Secret and client ID is used for authentication. 4 The token claim (or key) that contains the actual username used to identify the user. Its value depends on the authorization server. If necessary, a JsonPath expression like "['user.info'].['user.id']" can be used to retrieve the username from nested JSON attributes within a token. 5 (Optional) Activates the Kafka re-authentication mechanism that enforces session expiry to the same length of time as the access token. If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. Depending on how you apply OAuth 2.0 authentication, and the type of authorization server, there are additional (optional) configuration settings you can use: # ... authentication: type: oauth # ... checkIssuer: false 1 checkAudience: true 2 fallbackUserNameClaim: client_id 3 fallbackUserNamePrefix: client-account- 4 validTokenType: bearer 5 userInfoEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/userinfo 6 enableOauthBearer: false 7 enablePlain: true 8 tokenEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token 9 customClaimCheck: "@.custom == 'custom-value'" 10 clientAudience: audience 11 clientScope: scope 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 httpRetryPauseMs: 300 16 groupsClaim: "USD.groups" 17 groupsClaimDelimiter: "," 18 includeAcceptHeader: false 19 1 If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set checkIssuer to false and do not specify a validIssuerUri . Default is true . 2 If your authorization server provides an aud (audience) claim, and you want to enforce an audience check, set checkAudience to true . Audience checks identify the intended recipients of tokens. As a result, the Kafka broker will reject tokens that do not have its clientId in their aud claim. Default is false . 
3 An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID . When a user authenticates using a username and password to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available. If necessary, a JsonPath expression like "['client.info'].['client.id']" can be used to retrieve the fallback username to retrieve the username from nested JSON attributes within a token. 4 In situations where fallbackUserNameClaim is applicable, it may also be necessary to prevent name collisions between the values of the username claim, and those of the fallback username claim. Consider a situation where a client called producer exists, but also a regular user called producer exists. In order to differentiate between the two, you can use this property to add a prefix to the user ID of the client. 5 (Only applicable when using introspectionEndpointUri ) Depending on the authorization server you are using, the introspection endpoint may or may not return the token type attribute, or it may contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain. 6 (Only applicable when using introspectionEndpointUri ) The authorization server may be configured or implemented in such a way to not provide any identifiable information in an Introspection Endpoint response. In order to obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. The userNameClaim , fallbackUserNameClaim , and fallbackUserNamePrefix settings are applied to the response of userinfo endpoint. 7 Set this to false to disable the OAUTHBEARER mechanism on the listener. At least one of PLAIN or OAUTHBEARER has to be enabled. Default is true . 8 Set to true to enable PLAIN authentication on the listener, which is supported for clients on all platforms. 9 Additional configuration for the PLAIN mechanism. If specified, clients can authenticate over PLAIN by passing an access token as the password using an USDaccessToken: prefix. For production, always use https:// urls. 10 Additional custom rules can be imposed on the JWT access token during validation by setting this to a JsonPath filter query. If the access token does not contain the necessary data, it is rejected. When using the introspectionEndpointUri , the custom check is applied to the introspection endpoint response JSON. 11 An audience parameter passed to the token endpoint. An audience is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 12 A scope parameter passed to the token endpoint. A scope is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 
13 The connect timeout in seconds when connecting to the authorization server. The default value is 60. 14 The read timeout in seconds when connecting to the authorization server. The default value is 60. 15 The maximum number of times to retry a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the connectTimeoutSeconds and readTimeoutSeconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 16 The time to wait before attempting another retry of a failed HTTP request to the authorization server. By default, this time is set to zero, meaning that no pause is applied. This is because many issues that cause failed requests are per-request network glitches or proxy issues that can be resolved quickly. However, if your authorization server is under stress or experiencing high traffic, you may want to set this option to a value of 100 ms or more to reduce the load on the server and increase the likelihood of successful retries. 17 A JsonPath query that is used to extract groups information from either the JWT token or the introspection endpoint response. This option is not set by default. By configuring this option, a custom authorizer can make authorization decisions based on user groups. 18 A delimiter used to parse groups information when it is returned as a single delimited string. The default value is ',' (comma). 19 Some authorization servers have issues with client sending Accept: application/json header. By setting includeAcceptHeader: false the header will not be sent. Default is true . Save and exit the editor, then wait for rolling updates to complete. Check the update in the logs or by watching the pod state transitions: oc logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} oc get pod -w The rolling update configures the brokers to use OAuth 2.0 authentication. What to do Configure your Kafka clients to use OAuth 2.0 15.4.6.3. Configuring Kafka Java clients to use OAuth 2.0 Configure Kafka producer and consumer APIs to use OAuth 2.0 for interaction with Kafka brokers. Add a callback plugin to your client pom.xml file, then configure your client for OAuth 2.0. Specify the following in your client configuration: A SASL (Simple Authentication and Security Layer) security protocol: SASL_SSL for authentication over TLS encrypted connections SASL_PLAINTEXT for authentication over unencrypted connections Use SASL_SSL for production and SASL_PLAINTEXT for local development only. When using SASL_SSL , additional ssl.truststore configuration is needed. The truststore configuration is required for secure connection ( https:// ) to the OAuth 2.0 authorization server. To verify the OAuth 2.0 authorization server, add the CA certificate for the authorization server to the truststore in your client configuration. You can configure a truststore in PEM or PKCS #12 format. 
A Kafka SASL mechanism: OAUTHBEARER for credentials exchange using a bearer token PLAIN to pass client credentials (clientId + secret) or an access token A JAAS (Java Authentication and Authorization Service) module that implements the SASL mechanism: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule implements the OAuthbearer mechanism org.apache.kafka.common.security.plain.PlainLoginModule implements the plain mechanism To be able to use the OAuthbearer mechanism, you must also add the custom io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler class as the callback handler. JaasClientOauthLoginCallbackHandler handles OAuth callbacks to the authorization server for access tokens during client login. This enables automatic token renewal, ensuring continuous authentication without user intervention. Additionally, it handles login credentials for clients using the OAuth 2.0 password grant method. SASL authentication properties, which support the following authentication methods: OAuth 2.0 client credentials OAuth 2.0 password grant (deprecated) Access token Refresh token Add the SASL authentication properties as JAAS configuration ( sasl.jaas.config and sasl.login.callback.handler.class ). How you configure the authentication properties depends on the authentication method you are using to access the OAuth 2.0 authorization server. In this procedure, the properties are specified in a properties file, then loaded into the client configuration. Note You can also specify authentication properties as environment variables, or as Java system properties. For Java system properties, you can set them using setProperty and pass them on the command line using the -D option. Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Add the client library with OAuth 2.0 support to the pom.xml file for the Kafka client: <dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00007</version> </dependency> Configure the client properties by specifying the following configuration in a properties file: The security protocol The SASL mechanism The JAAS module and authentication properties according to the method being used For example, we can add the following to a client.properties file: Client credentials mechanism properties security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ 4 oauth.client.id="<client_id>" \ 5 oauth.client.secret="<client_secret>" \ 6 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ 7 oauth.ssl.truststore.password="USDSTOREPASS" \ 8 oauth.ssl.truststore.type="PKCS12" \ 9 oauth.scope="<scope>" \ 10 oauth.audience="<audience>" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 SASL_SSL security protocol for TLS-encrypted connections. Use SASL_PLAINTEXT over unencrypted connections for local development only. 2 The SASL mechanism specified as OAUTHBEARER or PLAIN . 3 The truststore configuration for secure access to the Kafka cluster. 4 URI of the authorization server token endpoint. 
5 Client ID, which is the name used when creating the client in the authorization server. 6 Client secret created when creating the client in the authorization server. 7 The location contains the public key certificate ( truststore.p12 ) for the authorization server. 8 The password for accessing the truststore. 9 The truststore type. 10 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. 11 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. Password grants mechanism properties security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ 1 oauth.client.secret="<client_secret>" \ 2 oauth.password.grant.username="<username>" \ 3 oauth.password.grant.password="<password>" \ 4 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.scope="<scope>" \ oauth.audience="<audience>" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Username for password grant authentication. OAuth password grant configuration (username and password) uses the OAuth 2.0 password grant method. To use password grants, create a user account for a client on your authorization server with limited permissions. The account should act like a service account. Use in environments where user accounts are required for authentication, but consider using a refresh token first. 4 Password for password grant authentication. Note SASL PLAIN does not support passing a username and password (password grants) using the OAuth 2.0 password grant method. Access token properties security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.access.token="<access_token>" \ 1 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Long-lived access token for Kafka clients. 
Refresh token properties security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ 1 oauth.client.secret="<client_secret>" \ 2 oauth.refresh.token="<refresh_token>" \ 3 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Long-lived refresh token for Kafka clients. Input the client properties for OAUTH 2.0 authentication into the Java client code. Example showing input of client properties Properties props = new Properties(); try (FileReader reader = new FileReader("client.properties", StandardCharsets.UTF_8)) { props.load(reader); } Verify that the Kafka client can access the Kafka brokers. 15.4.6.4. Configuring OAuth 2.0 for Kafka components This procedure describes how to configure Kafka components to use OAuth 2.0 authentication using an authorization server. You can configure authentication for: Kafka Connect Kafka MirrorMaker Kafka Bridge In this scenario, the Kafka component and the authorization server are running in the same cluster. Before you start For more information on the configuration of OAuth 2.0 authentication for Kafka components, see the KafkaClientAuthenticationOAuth schema reference . The schema reference includes examples of configuration options. Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Create a client secret and mount it to the component as an environment variable. For example, here we are creating a client Secret for the Kafka Bridge: apiVersion: kafka.strimzi.io/v1beta2 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1 1 The clientSecret key must be in base64 format. Create or edit the resource for the Kafka component so that OAuth 2.0 authentication is configured for the authentication property. For OAuth 2.0 authentication, you can use the following options: Client ID and secret Client ID and refresh token Access token Username and password TLS For example, here OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and secret, and TLS: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... authentication: type: oauth 1 tokenEndpointUri: https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert certificate: tls.crt 1 Authentication type set to oauth . 2 URI of the token endpoint for authentication. 3 Trusted certificates for TLS connection to the authorization server. Depending on how you apply OAuth 2.0 authentication, and the type of authorization server, there are additional configuration options you can use: # ... spec: # ... authentication: # ... 
disableTlsHostnameVerification: true 1 checkAccessTokenType: false 2 accessTokenIsJwt: false 3 scope: any 4 audience: kafka 5 connectTimeoutSeconds: 60 6 readTimeoutSeconds: 60 7 httpRetries: 2 8 httpRetryPauseMs: 300 9 includeAcceptHeader: false 10 1 (Optional) Disable TLS hostname verification. Default is false . 2 If the authorization server does not return a typ (type) claim inside the JWT token, you can apply checkAccessTokenType: false to skip the token type check. Default is true . 3 If you are using opaque tokens, you can apply accessTokenIsJwt: false so that access tokens are not treated as JWT tokens. 4 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. In this case it is any . 5 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. In this case it is kafka . 6 (Optional) The connect timeout in seconds when connecting to the authorization server. The default value is 60. 7 (Optional) The read timeout in seconds when connecting to the authorization server. The default value is 60. 8 (Optional) The maximum number of times to retry a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the connectTimeoutSeconds and readTimeoutSeconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 9 (Optional) The time to wait before attempting another retry of a failed HTTP request to the authorization server. By default, this time is set to zero, meaning that no pause is applied. This is because many issues that cause failed requests are per-request network glitches or proxy issues that can be resolved quickly. However, if your authorization server is under stress or experiencing high traffic, you may want to set this option to a value of 100 ms or more to reduce the load on the server and increase the likelihood of successful retries. 10 (Optional) Some authorization servers have issues with client sending Accept: application/json header. By setting includeAcceptHeader: false the header will not be sent. Default is true . Apply the changes to the deployment of your Kafka resource. oc apply -f your-file Check the update in the logs or by watching the pod state transitions: oc logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} oc get pod -w The rolling updates configure the component for interaction with Kafka brokers using OAuth 2.0 authentication. 15.5. Using OAuth 2.0 token-based authorization If you are using OAuth 2.0 with Red Hat Single Sign-On for token-based authentication, you can also use Red Hat Single Sign-On to configure authorization rules to constrain client access to Kafka brokers. Authentication establishes the identity of a user. Authorization decides the level of access for that user. Streams for Apache Kafka supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally. Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. 
Kafka allows all users full access to brokers by default, and also provides the AclAuthorizer and StandardAuthorizer plugins to configure authorization based on Access Control Lists (ACLs). The ACL rules managed by these plugins are used to grant or deny access to resources based on the username , and these rules are stored within the Kafka cluster itself. However, OAuth 2.0 token-based authorization with Red Hat Single Sign-On offers far greater flexibility on how you wish to implement access control to Kafka brokers. In addition, you can configure your Kafka brokers to use OAuth 2.0 authorization and ACLs. Additional resources Using OAuth 2.0 token-based authentication Kafka Authorization Red Hat Single Sign-On documentation 15.5.1. OAuth 2.0 authorization mechanism OAuth 2.0 authorization in Streams for Apache Kafka uses Red Hat Single Sign-On server Authorization Services REST endpoints to extend token-based authentication with Red Hat Single Sign-On by applying defined security policies on a particular user, and providing a list of permissions granted on different resources for that user. Policies use roles and groups to match permissions to users. OAuth 2.0 authorization enforces permissions locally based on the received list of grants for the user from Red Hat Single Sign-On Authorization Services. 15.5.1.1. Kafka broker custom authorizer A Red Hat Single Sign-On authorizer ( KeycloakAuthorizer ) is provided with Streams for Apache Kafka. To be able to use the Red Hat Single Sign-On REST endpoints for Authorization Services provided by Red Hat Single Sign-On, you configure a custom authorizer on the Kafka broker. The authorizer fetches a list of granted permissions from the authorization server as needed, and enforces authorization locally on the Kafka Broker, making rapid authorization decisions for each client request. 15.5.2. Configuring OAuth 2.0 authorization support This procedure describes how to configure Kafka brokers to use OAuth 2.0 authorization using Red Hat Single Sign-On Authorization Services. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of Red Hat Single Sign-On groups , roles , clients , and users to configure access in Red Hat Single Sign-On. Typically, groups are used to match users based on organizational departments or geographical locations. And roles are used to match users based on their function. With Red Hat Single Sign-On, you can store users and groups in LDAP, whereas clients and roles cannot be stored this way. Storage and access to user data may be a factor in how you choose to configure authorization policies. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. Prerequisites Streams for Apache Kafka must be configured to use OAuth 2.0 with Red Hat Single Sign-On for token-based authentication . You use the same Red Hat Single Sign-On server endpoint when you set up authorization. OAuth 2.0 authentication must be configured with the maxSecondsWithoutReauthentication option to enable re-authentication. Procedure Access the Red Hat Single Sign-On Admin Console or use the Red Hat Single Sign-On Admin CLI to enable Authorization Services for the Kafka broker client you created when setting up OAuth 2.0 authentication. Use Authorization Services to define resources, authorization scopes, policies, and permissions for the client. Bind the permissions to users and clients by assigning them roles and groups. 
Configure the Kafka brokers to use Red Hat Single Sign-On authorization by updating the Kafka broker configuration ( Kafka.spec.kafka ) of your Kafka resource in an editor. oc edit kafka my-cluster Configure the Kafka broker kafka configuration to use keycloak authorization, and to be able to access the authorization server and Authorization Services. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=fred - sam - CN=edward tlsTrustedCertificates: 7 - secretName: oauth-server-cert certificate: ca.crt grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 grantsMaxIdleSeconds: 300 10 grantsGcPeriodSeconds: 300 11 grantsAlwaysLatest: false 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 enableMetrics: false 16 includeAcceptHeader: false 17 #... 1 Type keycloak enables Red Hat Single Sign-On authorization. 2 URI of the Red Hat Single Sign-On token endpoint. For production, always use https:// urls. When you configure token-based oauth authentication, you specify a jwksEndpointUri as the URI for local JWT validation. The hostname for the tokenEndpointUri URI must be the same. 3 The client ID of the OAuth 2.0 client definition in Red Hat Single Sign-On that has Authorization Services enabled. Typically, kafka is used as the ID. 4 (Optional) Delegate authorization to Kafka AclAuthorizer and StandardAuthorizer if access is denied by Red Hat Single Sign-On Authorization Services policies. Default is false . 5 (Optional) Disable TLS hostname verification. Default is false . 6 (Optional) Designated super users. 7 (Optional) Trusted certificates for TLS connection to the authorization server. 8 (Optional) The time between two consecutive grants refresh runs. That is the maximum time for active sessions to detect any permissions changes for the user on Red Hat Single Sign-On. The default value is 60. 9 (Optional) The number of threads to use to refresh (in parallel) the grants for the active sessions. The default value is 5. 10 (Optional) The time, in seconds, after which an idle grant in the cache can be evicted. The default value is 300. 11 (Optional) The time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300. 12 (Optional) Controls whether the latest grants are fetched for a new session. When enabled, grants are retrieved from Red Hat Single Sign-On and cached for the user. The default value is false . 13 (Optional) The connect timeout in seconds when connecting to the Red Hat Single Sign-On token endpoint. The default value is 60. 14 (Optional) The read timeout in seconds when connecting to the Red Hat Single Sign-On token endpoint. The default value is 60. 15 (Optional) The maximum number of times to retry (without pausing) a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the connectTimeoutSeconds and readTimeoutSeconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 16 (Optional) Enable or disable OAuth metrics. 
The default value is false . 17 (Optional) Some authorization servers have issues with client sending Accept: application/json header. By setting includeAcceptHeader: false the header will not be sent. Default is true . Save and exit the editor, then wait for rolling updates to complete. Check the update in the logs or by watching the pod state transitions: oc logs -f USD{POD_NAME} -c kafka oc get pod -w The rolling update configures the brokers to use OAuth 2.0 authorization. Verify the configured permissions by accessing Kafka brokers as clients or users with specific roles, making sure they have the necessary access, or do not have the access they are not supposed to have. 15.5.3. Managing policies and permissions in Red Hat Single Sign-On Authorization Services This section describes the authorization models used by Red Hat Single Sign-On Authorization Services and Kafka, and defines the important concepts in each model. To grant permissions to access Kafka, you can map Red Hat Single Sign-On Authorization Services objects to Kafka resources by creating an OAuth client specification in Red Hat Single Sign-On. Kafka permissions are granted to user accounts or service accounts using Red Hat Single Sign-On Authorization Services rules. Examples are shown of the different user permissions required for common Kafka operations, such as creating and listing topics. 15.5.3.1. Kafka and Red Hat Single Sign-On authorization models overview Kafka and Red Hat Single Sign-On Authorization Services use different authorization models. Kafka authorization model Kafka's authorization model uses resource types . When a Kafka client performs an action on a broker, the broker uses the configured KeycloakAuthorizer to check the client's permissions, based on the action and resource type. Kafka uses five resource types to control access: Topic , Group , Cluster , TransactionalId , and DelegationToken . Each resource type has a set of available permissions. Topic Create Write Read Delete Describe DescribeConfigs Alter AlterConfigs Group Read Describe Delete Cluster Create Describe Alter DescribeConfigs AlterConfigs IdempotentWrite ClusterAction TransactionalId Describe Write DelegationToken Describe Red Hat Single Sign-On Authorization Services model The Red Hat Single Sign-On Authorization Services model has four concepts for defining and granting permissions: resources , authorization scopes , policies , and permissions . Resources A resource is a set of resource definitions that are used to match resources with permitted actions. A resource might be an individual topic, for example, or all topics with names starting with the same prefix. A resource definition is associated with a set of available authorization scopes, which represent a set of all actions available on the resource. Often, only a subset of these actions is actually permitted. Authorization scopes An authorization scope is a set of all the available actions on a specific resource definition. When you define a new resource, you add scopes from the set of all scopes. Policies A policy is an authorization rule that uses criteria to match against a list of accounts. Policies can match: Service accounts based on client ID or roles User accounts based on username, groups, or roles. Permissions A permission grants a subset of authorization scopes on a specific resource definition to a set of users. Additional resources Kafka authorization model 15.5.3.2. 
Map Red Hat Single Sign-On Authorization Services to the Kafka authorization model The Kafka authorization model is used as a basis for defining the Red Hat Single Sign-On roles and resources that will control access to Kafka. To grant Kafka permissions to user accounts or service accounts, you first create an OAuth client specification in Red Hat Single Sign-On for the Kafka broker. You then specify Red Hat Single Sign-On Authorization Services rules on the client. Typically, the client id of the OAuth client that represents the broker is kafka . The example configuration files provided with Streams for Apache Kafka use kafka as the OAuth client id. Note If you have multiple Kafka clusters, you can use a single OAuth client ( kafka ) for all of them. This gives you a single, unified space in which to define and manage authorization rules. However, you can also use different OAuth client ids (for example, my-cluster-kafka or cluster-dev-kafka ) and define authorization rules for each cluster within each client configuration. The kafka client definition must have the Authorization Enabled option enabled in the Red Hat Single Sign-On Admin Console. All permissions exist within the scope of the kafka client. If you have different Kafka clusters configured with different OAuth client IDs, they each need a separate set of permissions even though they're part of the same Red Hat Single Sign-On realm. When the Kafka client uses OAUTHBEARER authentication, the Red Hat Single Sign-On authorizer ( KeycloakAuthorizer ) uses the access token of the current session to retrieve a list of grants from the Red Hat Single Sign-On server. To retrieve the grants, the authorizer evaluates the Red Hat Single Sign-On Authorization Services policies and permissions. Authorization scopes for Kafka permissions An initial Red Hat Single Sign-On configuration usually involves uploading authorization scopes to create a list of all possible actions that can be performed on each Kafka resource type. This step is performed once only, before defining any permissions. You can add authorization scopes manually instead of uploading them. Authorization scopes must contain all the possible Kafka permissions regardless of the resource type: Create Write Read Delete Describe Alter DescribeConfig AlterConfig ClusterAction IdempotentWrite Note If you're certain you won't need a permission (for example, IdempotentWrite ), you can omit it from the list of authorization scopes. However, that permission won't be available to target on Kafka resources. Resource patterns for permissions checks Resource patterns are used for pattern matching against the targeted resources when performing permission checks. The general pattern format is RESOURCE-TYPE:PATTERN-NAME . The resource types mirror the Kafka authorization model. The pattern allows for two matching options: Exact matching (when the pattern does not end with * ) Prefix matching (when the pattern ends with * ) Example patterns for resources Additionally, the general pattern format can be prefixed by kafka-cluster: CLUSTER-NAME followed by a comma, where CLUSTER-NAME refers to the metadata.name in the Kafka custom resource. Example patterns for resources with cluster prefix When the kafka-cluster prefix is missing, it is assumed to be kafka-cluster:* . When defining a resource, you can associate it with a list of possible authorization scopes which are relevant to the resource. Set whatever actions make sense for the targeted resource type. 
Though you may add any authorization scope to any resource, only the scopes supported by the resource type are considered for access control. Policies for applying access permission Policies are used to target permissions to one or more user accounts or service accounts. Targeting can refer to: Specific user or service accounts Realm roles or client roles User groups JavaScript rules to match a client IP address A policy is given a unique name and can be reused to target multiple permissions to multiple resources. Permissions to grant access Use fine-grained permissions to pull together the policies, resources, and authorization scopes that grant access to users. The name of each permission should clearly define which permissions it grants to which users. For example, Dev Team B can read from topics starting with x . Additional resources For more information about how to configure permissions through Red Hat Single Sign-On Authorization Services, see Section 15.5.4, "Trying Red Hat Single Sign-On Authorization Services" . 15.5.3.3. Example permissions required for Kafka operations The following examples demonstrate the user permissions required for performing common operations on Kafka. Create a topic To create a topic, the Create permission is required for the specific topic, or for Cluster:kafka-cluster . bin/kafka-topics.sh --create --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties List topics If a user has the Describe permission on a specified topic, the topic is listed. bin/kafka-topics.sh --list \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display topic details To display a topic's details, Describe and DescribeConfigs permissions are required on the topic. bin/kafka-topics.sh --describe --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Produce messages to a topic To produce messages to a topic, Describe and Write permissions are required on the topic. If the topic hasn't been created yet, and topic auto-creation is enabled, the permissions to create a topic are required. bin/kafka-console-producer.sh --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties Consume messages from a topic To consume messages from a topic, Describe and Read permissions are required on the topic. Consuming from the topic normally relies on storing the consumer offsets in a consumer group, which requires additional Describe and Read permissions on the consumer group. Two resources are needed for matching. For example: bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties Produce messages to a topic using an idempotent producer As well as the permissions for producing to a topic, an additional IdempotentWrite permission is required on the Cluster:kafka-cluster resource. Two resources are needed for matching. For example: bin/kafka-console-producer.sh --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1 List consumer groups When listing consumer groups, only the groups on which the user has the Describe permissions are returned. 
Alternatively, if the user has the Describe permission on the Cluster:kafka-cluster , all the consumer groups are returned. bin/kafka-consumer-groups.sh --list \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display consumer group details To display a consumer group's details, the Describe permission is required on the group and the topics associated with the group. bin/kafka-consumer-groups.sh --describe --group my-group-1 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Change topic configuration To change a topic's configuration, the Describe and Alter permissions are required on the topic. bin/kafka-topics.sh --alter --topic my-topic --partitions 2 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display Kafka broker configuration In order to use kafka-configs.sh to get a broker's configuration, the DescribeConfigs permission is required on the Cluster:kafka-cluster . bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Change Kafka broker configuration To change a Kafka broker's configuration, DescribeConfigs and AlterConfigs permissions are required on Cluster:kafka-cluster . bin/kafka-configs --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Delete a topic To delete a topic, the Describe and Delete permissions are required on the topic. bin/kafka-topics.sh --delete --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Select a lead partition To run leader selection for topic partitions, the Alter permission is required on the Cluster:kafka-cluster . bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED / --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties Reassign partitions To generate a partition reassignment file, Describe permissions are required on the topics involved. bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list "0,1" --generate \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json To execute the partition reassignment, Describe and Alter permissions are required on Cluster:kafka-cluster . Also, Describe permissions are required on the topics involved. bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties To verify partition reassignment, Describe , and AlterConfigs permissions are required on Cluster:kafka-cluster , and on each of the topics involved. bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties 15.5.4. Trying Red Hat Single Sign-On Authorization Services This example explains how to use Red Hat Single Sign-On Authorization Services with keycloak authorization. Use Red Hat Single Sign-On Authorization Services to enforce access restrictions on Kafka clients. 
Red Hat Single Sign-On Authorization Services use authorization scopes, policies and permissions to define and apply access control to resources. Red Hat Single Sign-On Authorization Services REST endpoints provide a list of granted permissions on resources for authenticated users. The list of grants (permissions) is fetched from the Red Hat Single Sign-On server as the first action after an authenticated session is established by the Kafka client. The list is refreshed in the background so that changes to the grants are detected. Grants are cached and enforced locally on the Kafka broker for each user session to provide fast authorization decisions. Streams for Apache Kafka provides example configuration files . These include the following example files for setting up Red Hat Single Sign-On: kafka-ephemeral-oauth-single-keycloak-authz.yaml An example Kafka custom resource configured for OAuth 2.0 token-based authorization using Red Hat Single Sign-On. You can use the custom resource to deploy a Kafka cluster that uses keycloak authorization and token-based oauth authentication. kafka-authz-realm.json An example Red Hat Single Sign-On realm configured with sample groups, users, roles and clients. You can import the realm into a Red Hat Single Sign-On instance to set up fine-grained permissions to access Kafka. If you want to try the example with Red Hat Single Sign-On, use these files to perform the tasks outlined in this section in the order shown. Accessing the Red Hat Single Sign-On Admin Console Deploying a Kafka cluster with Red Hat Single Sign-On authorization Preparing TLS connectivity for a CLI Kafka client session Checking authorized access to Kafka using a CLI Kafka client session Authentication When you configure token-based oauth authentication, you specify a jwksEndpointUri as the URI for local JWT validation. When you configure keycloak authorization, you specify a tokenEndpointUri as the URI of the Red Hat Single Sign-On token endpoint. The hostname for both URIs must be the same. Targeted permissions with group or role policies In Red Hat Single Sign-On, confidential clients with service accounts enabled can authenticate to the server in their own name using a client ID and a secret. This is convenient for microservices that typically act in their own name, and not as agents of a particular user (like a web site). Service accounts can have roles assigned like regular users. They cannot, however, have groups assigned. As a consequence, if you want to target permissions to microservices using service accounts, you cannot use group policies, and should instead use role policies. Conversely, if you want to limit certain permissions only to regular user accounts where authentication with a username and password is required, you can achieve that as a side effect of using the group policies rather than the role policies. This is what is used in this example for permissions that start with ClusterManager . Performing cluster management is usually done interactively using CLI tools. It makes sense to require the user to log in before using the resulting access token to authenticate to the Kafka broker. In this case, the access token represents the specific user, rather than the client application. 15.5.4.1. Accessing the Red Hat Single Sign-On Admin Console Set up Red Hat Single Sign-On, then connect to its Admin Console and add the preconfigured realm. Use the example kafka-authz-realm.json file to import the realm. 
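As an alternative to importing the realm through the Admin Console, the realm file can be loaded with the Red Hat Single Sign-On Admin CLI. This is a minimal sketch rather than part of the documented procedure; it assumes kcadm.sh is available, the server is reachable at the Ingress hostname obtained later in this procedure, and you know the admin password.
bin/kcadm.sh config credentials --server https://<sso-hostname>/auth --realm master --user admin
bin/kcadm.sh create realms -f examples/security/keycloak-authorization/kafka-authz-realm.json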
You can check the authorization rules defined for the realm in the Admin Console. The rules grant access to the resources on the Kafka cluster configured to use the example Red Hat Single Sign-On realm. Prerequisites A running OpenShift cluster. The Streams for Apache Kafka examples/security/keycloak-authorization/kafka-authz-realm.json file that contains the preconfigured realm. Procedure Install the Red Hat Single Sign-On server using the Red Hat Single Sign-On Operator as described in Server Installation and Configuration in the Red Hat Single Sign-On documentation. Wait until the Red Hat Single Sign-On instance is running. Get the external hostname to be able to access the Admin Console. NS=sso oc get ingress keycloak -n USDNS In this example, we assume the Red Hat Single Sign-On server is running in the sso namespace. Get the password for the admin user. oc get -n USDNS pod keycloak-0 -o yaml | less The password is stored as a secret, so get the configuration YAML file for the Red Hat Single Sign-On instance to identify the name of the secret ( secretKeyRef.name ). Use the name of the secret to obtain the clear text password. SECRET_NAME=credential-keycloak oc get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D In this example, we assume the name of the secret is credential-keycloak . Log in to the Admin Console with the username admin and the password you obtained. Use https:// HOSTNAME to access the Kubernetes Ingress . You can now upload the example realm to Red Hat Single Sign-On using the Admin Console. Click Add Realm to import the example realm. Add the examples/security/keycloak-authorization/kafka-authz-realm.json file, and then click Create . You now have kafka-authz as your current realm in the Admin Console. The default view displays the Master realm. In the Red Hat Single Sign-On Admin Console, go to Clients > kafka > Authorization > Settings and check that Decision Strategy is set to Affirmative . An affirmative policy means that at least one policy must be satisfied for a client to access the Kafka cluster. In the Red Hat Single Sign-On Admin Console, go to Groups , Users , Roles and Clients to view the realm configuration. Groups Groups are used to create user groups and set user permissions. Groups are sets of users with a name assigned. They are used to compartmentalize users into geographical, organizational or departmental units. Groups can be linked to an LDAP identity provider. You can make a user a member of a group through a custom LDAP server admin user interface, for example, to grant permissions on Kafka resources. Users Users are used to create users. For this example, alice and bob are defined. alice is a member of the ClusterManager group and bob is a member of ClusterManager-my-cluster group. Users can be stored in an LDAP identity provider. Roles Roles mark users or clients as having certain permissions. Roles are a concept analogous to groups. They are usually used to tag users with organizational roles and have the requisite permissions. Roles cannot be stored in an LDAP identity provider. If LDAP is a requirement, you can use groups instead, and add Red Hat Single Sign-On roles to the groups so that when users are assigned a group they also get a corresponding role. Clients Clients can have specific configurations. For this example, kafka , kafka-cli , team-a-client , and team-b-client clients are configured. 
The kafka client is used by Kafka brokers to perform the necessary OAuth 2.0 communication for access token validation. This client also contains the authorization services resource definitions, policies, and authorization scopes used to perform authorization on the Kafka brokers. The authorization configuration is defined in the kafka client from the Authorization tab, which becomes visible when Authorization Enabled is switched on from the Settings tab. The kafka-cli client is a public client that is used by the Kafka command line tools when authenticating with username and password to obtain an access token or a refresh token. The team-a-client and team-b-client clients are confidential clients representing services with partial access to certain Kafka topics. In the Red Hat Single Sign-On Admin Console, go to Authorization > Permissions to see the granted permissions that use the resources and policies defined for the realm. For example, the kafka client has the following permissions: Dev Team A The Dev Team A realm role can write to topics that start with x_ on any cluster. This combines a resource called Topic:x_* , Describe and Write scopes, and the Dev Team A policy. The Dev Team A policy matches all users that have a realm role called Dev Team A . Dev Team B The Dev Team B realm role can read from topics that start with x_ on any cluster. This combines Topic:x_* , Group:x_* resources, Describe and Read scopes, and the Dev Team B policy. The Dev Team B policy matches all users that have a realm role called Dev Team B . Matching users and clients have the ability to read from topics, and update the consumed offsets for topics and consumer groups that have names starting with x_ . 15.5.4.2. Deploying a Kafka cluster with Red Hat Single Sign-On authorization Deploy a Kafka cluster configured to connect to the Red Hat Single Sign-On server. Use the example kafka-ephemeral-oauth-single-keycloak-authz.yaml file to deploy the Kafka cluster as a Kafka custom resource. The example deploys a single-node Kafka cluster with keycloak authorization and oauth authentication. Prerequisites The Red Hat Single Sign-On authorization server is deployed to your OpenShift cluster and loaded with the example realm. The Cluster Operator is deployed to your OpenShift cluster. The Streams for Apache Kafka examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml custom resource. Procedure Use the hostname of the Red Hat Single Sign-On instance you deployed to prepare a truststore certificate for Kafka brokers to communicate with the Red Hat Single Sign-On server. SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem The certificate is required as Kubernetes Ingress is used to make a secure (HTTPS) connection. Usually there is not one single certificate, but a certificate chain. You only have to provide the top-most issuer CA, which is listed last in the /tmp/sso.pem file. You can extract it manually or using the following commands: Example command to extract the top CA certificate in a certificate chain split -p "-----BEGIN CERTIFICATE-----" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt Note A trusted CA certificate is normally obtained from a trusted source, and not by using the openssl command. 
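Before deploying the extracted certificate, you can optionally verify that it contains the issuing CA you expect. This quick check is not part of the documented procedure; it assumes the sso-ca.crt file produced by the previous step (adjust the path if you wrote it elsewhere, for example /tmp/sso-ca.crt).
openssl x509 -in sso-ca.crt -noout -subject -issuer -enddate
For a root or intermediate CA, the subject and issuer are usually identical or clearly related, and the notAfter date should be well in the future.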
Deploy the certificate to OpenShift as a secret. oc create secret generic oauth-server-cert --from-file=/tmp/sso-ca.crt -n USDNS Set the hostname as an environment variable SSO_HOST= SSO-HOSTNAME Create and deploy the example Kafka cluster. cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\USD{SSO_HOST}'"#USDSSO_HOST#" | oc create -n USDNS -f - 15.5.4.3. Preparing TLS connectivity for a CLI Kafka client session Create a new pod for an interactive CLI session. Set up a truststore with a Red Hat Single Sign-On certificate for TLS connectivity. The truststore is to connect to Red Hat Single Sign-On and the Kafka broker. Prerequisites The Red Hat Single Sign-On authorization server is deployed to your OpenShift cluster and loaded with the example realm. In the Red Hat Single Sign-On Admin Console, check the roles assigned to the clients are displayed in Clients > Service Account Roles . The Kafka cluster configured to connect with Red Hat Single Sign-On is deployed to your OpenShift cluster. Procedure Run a new interactive pod container using the Streams for Apache Kafka image to connect to a running Kafka broker. NS=sso oc run -ti --restart=Never --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 kafka-cli -n USDNS -- /bin/sh Note If oc times out waiting on the image download, subsequent attempts may result in an AlreadyExists error. Attach to the pod container. oc attach -ti kafka-cli -n USDNS Use the hostname of the Red Hat Single Sign-On instance to prepare a certificate for client connection using TLS. SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem Usually there is not one single certificate, but a certificate chain. You only have to provide the top-most issuer CA, which is listed last in the /tmp/sso.pem file. You can extract it manually or using the following command: Example command to extract the top CA certificate in a certificate chain split -p "-----BEGIN CERTIFICATE-----" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt Note A trusted CA certificate is normally obtained from a trusted source, and not by using the openssl command. Create a truststore for TLS connection to the Kafka brokers. keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso-ca.crt -noprompt Use the Kafka bootstrap address as the hostname of the Kafka broker and the tls listener port (9093) to prepare a certificate for the Kafka broker. KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.pem The obtained .pem file is usually not one single certificate, but a certificate chain. You only have to provide the top-most issuer CA, which is listed last in the /tmp/my-cluster-kafka.pem file. 
You can extract it manually or using the following command: Example command to extract the top CA certificate in a certificate chain split -p "-----BEGIN CERTIFICATE-----" /tmp/my-cluster-kafka.pem kafka- for f in USD(ls kafka-*); do mv USDf USDf.pem; done cp USD(ls kafka-* | sort -r | head -n 1) my-cluster-kafka-ca.crt Note A trusted CA certificate is normally obtained from a trusted source, and not by using the openssl command. For this example we assume the client is running in a pod in the same namespace where the Kafka cluster was deployed. If the client is accessing the Kafka cluster from outside the OpenShift cluster, you would have to first determine the bootstrap address. In that case you can also get the cluster certificate directly from the OpenShift secret, and there is no need for openssl . For more information, see Chapter 14, Setting up client access to a Kafka cluster . Add the certificate for the Kafka broker to the truststore. keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka-ca.crt -noprompt Keep the session open to check authorized access. 15.5.4.4. Checking authorized access to Kafka using a CLI Kafka client session Check the authorization rules applied through the Red Hat Single Sign-On realm using an interactive CLI session. Apply the checks using Kafka's example producer and consumer clients to create topics with user and service accounts that have different levels of access. Use the team-a-client and team-b-client clients to check the authorization rules. Use the alice admin user to perform additional administrative tasks on Kafka. The Streams for Apache Kafka image used in this example contains Kafka producer and consumer binaries. Prerequisites ZooKeeper and Kafka are running in the OpenShift cluster to be able to send and receive messages. The interactive CLI Kafka client session is started. Apache Kafka download . Setting up client and admin user configuration Prepare a Kafka configuration file with authentication properties for the team-a-client client. SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.client.id="team-a-client" \ oauth.client.secret="team-a-client-secret" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF The SASL OAUTHBEARER mechanism is used. This mechanism requires a client ID and client secret, which means the client first connects to the Red Hat Single Sign-On server to obtain an access token. The client then connects to the Kafka broker and uses the access token to authenticate. Prepare a Kafka configuration file with authentication properties for the team-b-client client. 
cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.client.id="team-b-client" \ oauth.client.secret="team-b-client-secret" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF Authenticate admin user alice by using curl and performing a password grant authentication to obtain a refresh token. USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST "https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" -H 'Content-Type: application/x-www-form-urlencoded' -d "grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F "refresh_token\":\"" '{printf USD2}' | awk -F "\"" '{printf USD1}') The refresh token is an offline token that is long-lived and does not expire. Prepare a Kafka configuration file with authentication properties for the admin user alice . cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.refresh.token="USDREFRESH_TOKEN" \ oauth.client.id="kafka-cli" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF The kafka-cli public client is used for the oauth.client.id in the sasl.jaas.config . Since it's a public client it does not require a secret. The client authenticates with the refresh token that was authenticated in the step. The refresh token requests an access token behind the scenes, which is then sent to the Kafka broker for authentication. Producing messages with authorized access Use the team-a-client configuration to check that you can produce messages to topics that start with a_ or x_ . Write to topic my-topic . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic \ --producer.config=/tmp/team-a-client.properties First message This request returns a Not authorized to access topics: [my-topic] error. team-a-client has a Dev Team A role that gives it permission to perform any supported actions on topics that start with a_ , but can only write to topics that start with x_ . The topic named my-topic matches neither of those rules. Write to topic a_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --producer.config /tmp/team-a-client.properties First message Second message Messages are produced to Kafka successfully. Press CTRL+C to exit the CLI application. Check the Kafka container log for a debug log of Authorization GRANTED for the request. 
oc logs my-cluster-kafka-0 -f -n USDNS Consuming messages with authorized access Use the team-a-client configuration to consume messages from topic a_messages . Fetch messages from topic a_messages . bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties The request returns an error because the Dev Team A role for team-a-client only has access to consumer groups that have names starting with a_ . Update the team-a-client properties to specify the custom consumer group it is permitted to use. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1 The consumer receives all the messages from the a_messages topic. Administering Kafka with authorized access The team-a-client is an account without any cluster-level access, but it can be used with some administrative operations. List topics. bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list The a_messages topic is returned. List consumer groups. bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list The a_consumer_group_1 consumer group is returned. Fetch details on the cluster configuration. bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties \ --entity-type brokers --describe --entity-default The request returns an error because the operation requires cluster-level permissions that team-a-client does not have. Using clients with different permissions Use the team-b-client configuration to produce messages to topics that start with b_ . Write to topic a_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --producer.config /tmp/team-b-client.properties Message 1 This request returns a Not authorized to access topics: [a_messages] error. Write to topic b_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic b_messages \ --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3 Messages are produced to Kafka successfully. Write to topic x_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-b-client.properties Message 1 A Not authorized to access topics: [x_messages] error is returned. The team-b-client can only read from topic x_messages . Write to topic x_messages using team-a-client . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-a-client.properties Message 1 This request returns a Not authorized to access topics: [x_messages] error. The team-a-client can write to the x_messages topic, but it does not have permission to create a topic if it does not yet exist. Before team-a-client can write to the x_messages topic, an admin power user must create it with the correct configuration, such as the number of partitions and replicas. Managing Kafka with an authorized admin user Use admin user alice to manage Kafka. alice has full access to manage everything on any Kafka cluster. Create the x_messages topic as alice .
bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties \ --topic x_messages --create --replication-factor 1 --partitions 1 The topic is created successfully. List all topics as alice . bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list Admin user alice can list all the topics, whereas team-a-client and team-b-client can only list the topics they have access to. The Dev Team A and Dev Team B roles both have Describe permission on topics that start with x_ , but they cannot see the other team's topics because they do not have Describe permissions on them. Use the team-a-client to produce messages to the x_messages topic: bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3 As alice created the x_messages topic, messages are produced to Kafka successfully. Use the team-b-client to produce messages to the x_messages topic. bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-b-client.properties Message 4 Message 5 This request returns a Not authorized to access topics: [x_messages] error. Use the team-b-client to consume messages from the x_messages topic: bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b The consumer receives all the messages from the x_messages topic. Use the team-a-client to consume messages from the x_messages topic. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a This request returns a Not authorized to access topics: [x_messages] error. Use the team-a-client to consume messages from a consumer group that begins with a_ . bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a This request returns a Not authorized to access topics: [x_messages] error. Dev Team A has no Read access on topics that start with x_ . Use alice to consume messages from the x_messages topic. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/alice.properties The messages are consumed successfully. alice can read from or write to any topic. Use alice to read the cluster configuration. bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties \ --entity-type brokers --describe --entity-default The cluster configuration for this example is empty. Additional resources Server Installation and Configuration Map Red Hat Single Sign-On Authorization Services to the Kafka authorization model
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 - CN=client_4,OU=my_ou,O=my_org,L=my_location,ST=my_state,C=US - CN=client_5,OU=my_ou,O=my_org,C=GB - CN=client_6,O=my_org #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store",
"bootstrap.servers= <kafka_cluster_name> -kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password= <truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password= <keystore_password> 6",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls-external #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2",
"echo \"Z2VuZXJhdGVkcGFzc3dvcmQ=\" | base64 --decode",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 password: valueFrom: secretKeyRef: name: my-secret 1 key: my-password 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # authorization: 1 type: simple superUsers: 2 - CN=client_1 - user_2 - CN=client_3 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls 3 # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read",
"apply -f <user_config_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # zookeeper: #",
"apply -f your-file",
"create secret generic my-secret --from-file= my-listener-key.key --from-file= my-listener-certificate.crt",
"listeners: - name: plain port: 9092 type: internal tls: false - name: external3 port: 9094 type: loadbalancer tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"apply -f kafka.yaml",
"//Kafka brokers *. <cluster-name> -kafka-brokers *. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc",
"// Kafka brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc <cluster-name> -kafka-1. <cluster-name> -kafka-brokers <cluster-name> -kafka-1. <cluster-name> -kafka-brokers. <namespace> .svc // Bootstrap service <cluster-name> -kafka-bootstrap <cluster-name> -kafka-bootstrap. <namespace> .svc",
"// Kafka brokers <cluster-name> -kafka- <listener-name> -0 <cluster-name> -kafka- <listener-name> -0. <namespace> .svc <cluster-name> -kafka- <listener-name> -1 <cluster-name> -kafka- <listener-name> -1. <namespace> .svc // Bootstrap service <cluster-name> -kafka- <listener-name> -bootstrap <cluster-name> -kafka- <listener-name> -bootstrap. <namespace> .svc",
"authentication: type: oauth # enableOauthBearer: true",
"authentication: type: oauth # enablePlain: true tokenEndpointUri: https:// OAUTH-SERVER-ADDRESS /auth/realms/external/protocol/openid-connect/token",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth #",
"listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth validIssuerUri: <https://<auth_server_address>/auth/realms/tls> jwksEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/certs> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth clientId: kafka-broker clientSecret: secretName: my-cluster-oauth key: clientSecret validIssuerUri: <https://<auth_server_-_address>/auth/realms/tls> introspectionEndpointUri: <https://<auth_server_address>/auth/realms/tls/protocol/openid-connect/token/introspect> userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt",
"edit kafka my-cluster",
"# - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: https://<auth_server_address>/auth/realms/external 2 jwksEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/certs 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10",
"- name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://<auth_server_address>/auth/realms/external introspectionEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token/introspect 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5",
"authentication: type: oauth # checkIssuer: false 1 checkAudience: true 2 fallbackUserNameClaim: client_id 3 fallbackUserNamePrefix: client-account- 4 validTokenType: bearer 5 userInfoEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/userinfo 6 enableOauthBearer: false 7 enablePlain: true 8 tokenEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token 9 customClaimCheck: \"@.custom == 'custom-value'\" 10 clientAudience: audience 11 clientScope: scope 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 httpRetryPauseMs: 300 16 groupsClaim: \"USD.groups\" 17 groupsClaimDelimiter: \",\" 18 includeAcceptHeader: false 19",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00007</version> </dependency>",
"security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" \\ 4 oauth.client.id=\"<client_id>\" \\ 5 oauth.client.secret=\"<client_secret>\" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"USDSTOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\"<scope>\" \\ 10 oauth.audience=\"<audience>\" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.password.grant.username=\"<username>\" \\ 3 oauth.password.grant.password=\"<password>\" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.access.token=\"<access_token>\" \\ 1 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.refresh.token=\"<refresh_token>\" \\ 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth 1 tokenEndpointUri: https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert certificate: tls.crt",
"spec: # authentication: # disableTlsHostnameVerification: true 1 checkAccessTokenType: false 2 accessTokenIsJwt: false 3 scope: any 4 audience: kafka 5 connectTimeoutSeconds: 60 6 readTimeoutSeconds: 60 7 httpRetries: 2 8 httpRetryPauseMs: 300 9 includeAcceptHeader: false 10",
"apply -f your-file",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"edit kafka my-cluster",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=fred - sam - CN=edward tlsTrustedCertificates: 7 - secretName: oauth-server-cert certificate: ca.crt grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 grantsMaxIdleSeconds: 300 10 grantsGcPeriodSeconds: 300 11 grantsAlwaysLatest: false 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 enableMetrics: false 16 includeAcceptHeader: false 17 #",
"logs -f USD{POD_NAME} -c kafka get pod -w",
"Topic:my-topic Topic:orders-* Group:orders-* Cluster:*",
"kafka-cluster:my-cluster,Topic:* kafka-cluster:*,Group:b_*",
"bin/kafka-topics.sh --create --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --describe --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties",
"Topic:my-topic Group:my-group-*",
"bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties",
"Topic:my-topic Cluster:kafka-cluster",
"bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1",
"bin/kafka-consumer-groups.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-consumer-groups.sh --describe --group my-group-1 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --alter --topic my-topic --partitions 2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --delete --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED / --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list \"0,1\" --generate --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"NS=sso get ingress keycloak -n USDNS",
"get -n USDNS pod keycloak-0 -o yaml | less",
"SECRET_NAME=credential-keycloak get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D",
"Dev Team A can write to topics that start with x_ on any cluster Dev Team B can read from topics that start with x_ on any cluster Dev Team B can update consumer group offsets that start with x_ on any cluster ClusterManager of my-cluster Group has full access to cluster config on my-cluster ClusterManager of my-cluster Group has full access to consumer groups on my-cluster ClusterManager of my-cluster Group has full access to topics on my-cluster",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt",
"create secret generic oauth-server-cert --from-file=/tmp/sso-ca.crt -n USDNS",
"SSO_HOST= SSO-HOSTNAME",
"cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\\USD{SSO_HOST}'\"#USDSSO_HOST#\" | oc create -n USDNS -f -",
"NS=sso run -ti --restart=Never --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 kafka-cli -n USDNS -- /bin/sh",
"attach -ti kafka-cli -n USDNS",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso-ca.crt -noprompt",
"KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" /tmp/my-cluster-kafka.pem kafka- for f in USD(ls kafka-*); do mv USDf USDf.pem; done cp USD(ls kafka-* | sort -r | head -n 1) my-cluster-kafka-ca.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka-ca.crt -noprompt",
"SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-a-client\" oauth.client.secret=\"team-a-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-b-client\" oauth.client.secret=\"team-b-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST \"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" -H 'Content-Type: application/x-www-form-urlencoded' -d \"grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access\" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F \"refresh_token\\\":\\\"\" '{printf USD2}' | awk -F \"\\\"\" '{printf USD1}')",
"cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.refresh.token=\"USDREFRESH_TOKEN\" oauth.client.id=\"kafka-cli\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic --producer.config=/tmp/team-a-client.properties First message",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-a-client.properties First message Second message",
"logs my-cluster-kafka-0 -f -n USDNS",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --entity-type brokers --describe --entity-default",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic b_messages --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --topic x_messages --create --replication-factor 1 --partitions 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 4 Message 5",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/alice.properties",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --entity-type brokers --describe --entity-default"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-securing-access-str
|
Configuring
|
Configuring Red Hat Advanced Cluster Security for Kubernetes 4.7 Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/index
|
19.3. Creating and Editing Password Policies
|
19.3. Creating and Editing Password Policies A password policy can be selective; it may only define certain elements. A global password policy sets defaults that are used for every user entry, unless a group policy takes priority. Note A global policy always exists, so there is no reason to add a global password policy. Group-level policies override the global policies and offer specific policies that only apply to group members. Password policies are not cumulative. Either a group policy or the global policy is in effect for a user or group, but not both simultaneously. Group-level policies do not exist by default, so they must be created manually. Note It is not possible to set a password policy for a non-existent group. 19.3.1. Creating Password Policies in the Web UI Click the Policy tab, and then click the Password Policies subtab. All of the policies in the UI are listed by group. The global password policy is defined by the global_policy group. Click the group link. Click the Add link at the top. In the pop-up box, select the group for which to create the password policy. Set the priority of the policy. The higher the number, the lower the priority. Conversely, the highest priority policy has the lowest number. Only one password policy is in effect for a user, and that is the highest priority policy. Note The priority cannot be changed in the UI once the policy is created. Click the Add and Edit button so that the policy form immediately opens. Set the policy fields. Leaving a field blank means that attribute is not added the password policy configuration. Max lifetime sets the maximum amount of time, in days, that a password is valid before a user must reset it. Min lifetime sets the minimum amount of time, in hours, that a password must remain in effect before a user is permitted to change it. This prevents a user from attempting to change a password back immediately to an older password or from cycling through the password history. History size sets how many passwords are stored. A user cannot re-use a password that is still in the password history. Character classes sets the number of different categories of character that must be used in the password. This does not set which classes must be used; it sets the number of different (unspecified) classes which must be used in a password. For example, a character class can be a number, special character, or capital; the complete list of categories is in Table 19.1, "Password Policy Settings" . This is part of setting the complexity requirements. Min length sets how many characters must be in a password. This is part of setting the complexity requirements. 19.3.2. Creating Password Policies with the Command Line Password policies are added with the pwpolicy-add command. For example: Note Setting an attribute to a blank value effectively removes that attribute from the password policy. 19.3.3. Editing Password Policies with the Command Line As with most IdM entries, a password policy is edited by using a *-mod command, pwpolicy-mod , and then the policy name. However, there is one difference with editing password policies: there is a global policy which always exists. Editing a group-level password policy is slightly different than editing the global password policy. Editing a group-level password policy follows the standard syntax of *-mod commands. It uses the pwpolicy-mod command, the name of the policy entry, and the attributes to change. 
For example: To edit the global password policy, use the pwpolicy-mod command with the attributes to change, but without specifying a password policy name . For example:
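Either way, the result can be checked immediately with the pwpolicy-show command; a brief sketch, reusing the exampleGroup policy from the examples in this section:

ipa pwpolicy-show exampleGroup

Running ipa pwpolicy-show with no policy name displays the global policy instead, which is useful for comparing a group policy against the defaults it overrides.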
|
[
"kinit admin ipa pwpolicy-add groupName --attribute-value",
"kinit admin ipa pwpolicy-add exampleGroup --minlife=7 --maxlife=49 --history= --priority=1 Group: exampleGroup Max lifetime (days): 49 Min lifetime (hours): 7 Priority: 1",
"[jsmith@ipaserver ~]USD ipa pwpolicy-mod exampleGroup --lockouttime=300 --history=5 --minlength=8",
"[jsmith@ipaserver ~]USD ipa pwpolicy-mod --lockouttime=300 --history=5 --minlength=8"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/setting_different_password_policies_for_different_user_groups
|
Chapter 8. Upgrading the Migration Toolkit for Containers
|
Chapter 8. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 8.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.7 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.7 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators . Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator . Click the Subscription tab. Any upgrades requiring approval are displayed next to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade and click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . Click Workloads Pods to verify that the MTC pods are running. 8.2. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 3 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io . You must have podman installed.
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: USD sudo podman login registry.redhat.io Download the operator.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: USD oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: USD oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: USD oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: USD oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print USDNF }' Download the controller.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Create the migration-controller object by entering the following command: USD oc create -f controller.yml If you have previously added the OpenShift Container Platform 3 cluster to the MTC web console, you must update the service account token in the web console because the upgrade process deletes and restores the openshift-migration namespace: Obtain the service account token by entering the following command: USD oc sa get-token migration-controller -n openshift-migration In the MTC web console, click Clusters . Click the Options menu next to the cluster and select Edit . Enter the new service account token in the Service account token field. Click Update cluster and then click Close . Verify that the MTC pods are running by entering the following command: USD oc get pods -n openshift-migration 8.3. Upgrading MTC 1.3 to 1.7 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.7, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false , which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: USD oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml : ... spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: USD oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: USD oc get migplan <migplan> -o yaml -n openshift-migration
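After the parameter values are updated, you can also confirm that the plan is no longer blocked; a brief sketch, reusing the same <migplan> placeholder as above:

oc describe migplan <migplan> -n openshift-migration

Check the conditions in the output; the plan should report Ready once the updated values have been reconciled.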
|
[
"sudo podman login registry.redhat.io",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc create -f controller.yml",
"oc sa get-token migration-controller -n openshift-migration",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/upgrading-3-4
|
6.2. Order Constraints
|
6.2. Order Constraints Order constraints determine the order in which the resources run. You can configure an order constraint to determine the order in which resources start and stop. Use the following command to configure an order constraint. Table 6.2, "Properties of an Order Constraint" summarizes the properties and options for configuring order constraints. Table 6.2. Properties of an Order Constraint Field Description resource_id The name of a resource on which an action is performed. action The action to perform on a resource. Possible values of the action property are as follows: * start - Start the resource. * stop - Stop the resource. * promote - Promote the resource from a slave resource to a master resource. * demote - Demote the resource from a master resource to a slave resource. If no action is specified, the default action is start . For information on master and slave resources, see Section 8.2, "Multi-State Resources: Resources That Have Multiple Modes" . kind option How to enforce the constraint. The possible values of the kind option are as follows: * Optional - Only applies if both resources are starting and/or stopping. For information on optional ordering, see Section 6.2.2, "Advisory Ordering" . * Mandatory - Always (default value). If the first resource you specified is stopping or cannot be started, the second resource you specified must be stopped. For information on mandatory ordering, see Section 6.2.1, "Mandatory Ordering" . * Serialize - Ensure that no two stop/start actions occur concurrently for a set of resources. symmetrical option If true, which is the default, stop the resources in the reverse order. Default value: true 6.2.1. Mandatory Ordering A mandatory constraint indicates that the second resource you specify cannot run without the first resource you specify being active. This is the default value of the kind option. Leaving the default value ensures that the second resource you specify will react when the first resource you specify changes state. If the first resource you specified was running and is stopped, the second resource you specified will also be stopped (if it is running). If the first resource you specified was not running and cannot be started, the second resource you specified will be stopped (if it is running). If the first resource you specified is (re)started while the second resource you specified is running, the second resource you specified will be stopped and restarted.
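For example, a minimal sketch using two hypothetical resources named VirtualIP and WebSite, which assumes both resources already exist in the cluster:

pcs constraint order start VirtualIP then start WebSite

Because no kind option is given, the constraint is Mandatory, so WebSite starts only after VirtualIP has started and, with the default symmetrical behavior, stops before VirtualIP stops. Appending kind=Optional to the same command would make the ordering advisory instead.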
|
[
"pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-orderconstraints-haar
|
Chapter 3. Configuring IAM for IBM Cloud
|
Chapter 3. Configuring IAM for IBM Cloud In environments where the cloud identity and access management (IAM) APIs are not reachable, you must put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. Storing an administrator-level credential secret in the cluster kube-system project is not supported for IBM Cloud(R); therefore, you must set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources About the Cloud Credential Operator 3.2. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud(R) nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys for IBM Cloud(R) 3.3. Next steps Installing a cluster on IBM Cloud(R) with customizations 3.4. Additional resources Preparing to update a cluster with manually maintained credentials
|
[
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_cloud/configuring-iam-ibm-cloud
|
Chapter 6. Network connections
|
Chapter 6. Network connections 6.1. Creating outgoing connections To connect to a remote server, pass connection options containing the host and port to the container.connect() method. Example: Creating outgoing connections container.on("connection_open", function (event) { console.log("Connection " + event.connection + " is open"); }); var opts = { host: "example.com", port: 5672 }; container.connect(opts); The default host is localhost . The default port is 5672. For information about creating secure connections, see Chapter 7, Security . 6.2. Configuring reconnect Reconnect allows a client to recover from lost connections. It is used to ensure that the components in a distributed system reestablish communication after temporary network or component failures. AMQ JavaScript enables reconnect by default. If a connection attempt fails, the client will try again after a brief delay. The delay increases exponentially for each new attempt, up to a default maximum of 60 seconds. To disable reconnect, set the reconnect connection option to false . Example: Disabling reconnect var opts = { host: "example.com", reconnect: false }; container.connect(opts); To control the delays between connection attempts, set the initial_reconnect_delay and max_reconnect_delay connection options. Delay options are specified in milliseconds. To limit the number of reconnect attempts, set the reconnect_limit option. Example: Configuring reconnect var opts = { host: "example.com", initial_reconnect_delay: 100 , max_reconnect_delay: 60 * 1000 , reconnect_limit: 10 }; container.connect(opts); 6.3. Configuring failover AMQ JavaScript allows you to configure alternate connection endpoints programmatically. To specify multiple connection endpoints, define a function that returns new connection options and pass the function in the connection_details option. The function is called once for each connection attempt. Example: Configuring failover var hosts = ["alpha.example.com", "beta.example.com"]; var index = -1; function failover_fn() { index += 1; if (index == hosts.length) index = 0; return {host: hosts[index]}; }; var opts = { host: "example.com", connection_details: failover_fn } container.connect(opts); This example implements repeating round-robin failover for a list of hosts. You can use this interface to implement your own failover behavior. 6.4. Accepting incoming connections AMQ JavaScript can accept inbound network connections, enabling you to build custom messaging servers. To start listening for connections, use the container.listen() method with options containing the local host address and port to listen on. Example: Accepting incoming connections container.on("connection_open", function (event) { console.log("New incoming connection " + event.connection ); }); var opts = { host: "0.0.0.0", port: 5672 }; container.listen(opts); The special IP address 0.0.0.0 listens on all available IPv4 interfaces. To listen on all IPv6 interfaces, use [::0] . For more information, see the server receive.js example .
|
[
"container.on(\"connection_open\", function (event) { console.log(\"Connection \" + event.connection + \" is open\"); }); var opts = { host: \"example.com\", port: 5672 }; container.connect(opts);",
"var opts = { host: \"example.com\", reconnect: false }; container.connect(opts);",
"var opts = { host: \"example.com\", initial_reconnect_delay: 100 , max_reconnect_delay: 60 * 1000 , reconnect_limit: 10 }; container.connect(opts);",
"var hosts = [\"alpha.example.com\", \"beta.example.com\"]; var index = -1; function failover_fn() { index += 1; if (index == hosts.length) index = 0; return {host: hosts[index].hostname}; }; var opts = { host: \"example.com\", connection_details: failover_fn } container.connect(opts);",
"container.on(\"connection_open\", function (event) { console.log(\"New incoming connection \" + event.connection ); }); var opts = { host: \"0.0.0.0\", port: 5672 }; container.listen(opts);"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_javascript_client/network_connections
|
Chapter 3. Reviewing automation execution environments with automation content navigator
|
Chapter 3. Reviewing automation execution environments with automation content navigator As a content developer, you can review your automation execution environment with automation content navigator and display the packages and collections included in the automation execution environments. Automation content navigator runs a playbook to extract and display the results. 3.1. Reviewing automation execution environments from automation content navigator You can review your automation execution environments with the automation content navigator text-based user interface. Prerequisites Automation execution environments Procedure Review the automation execution environments included in your automation content navigator configuration. USD ansible-navigator images Type the number of the automation execution environment you want to delve into for more details. You can review the packages and versions of each installed automation execution environment, the Ansible version, and any included collections. Optional: pass in the automation execution environment that you want to use. This becomes the primary and is the automation execution environment that automation content navigator uses. USD ansible-navigator images --eei registry.example.com/example-enterprise-ee:latest Verification Review the automation execution environment output.
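The same report can also be produced non-interactively, for example when you want to capture it in a log or a CI job; a small sketch, assuming the default automation content navigator configuration:

ansible-navigator images --mode stdout

This prints the list of automation execution environments to standard output instead of opening the text-based user interface.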
|
[
"ansible-navigator images",
"ansible-navigator images --eei registry.example.com/example-enterprise-ee:latest"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_content_navigator/assembly-review-ee-navigator_ansible-navigator
|
Chapter 8. Direct Migration Requirements
|
Chapter 8. Direct Migration Requirements Direct Migration is available with Migration Toolkit for Containers (MTC) 1.4.0 or later. There are two parts of the Direct Migration: Direct Volume Migration Direct Image Migration Direct Migration enables the migration of persistent volumes and internal images directly from the source cluster to the destination cluster without an intermediary replication repository (object storage). 8.1. Prerequisites Expose the internal registries for both clusters (source and destination) involved in the migration for external traffic. Ensure the remote source and destination clusters can communicate using OpenShift Container Platform routes on port 443. Configure the exposed registry route in the source and destination MTC clusters; do this by specifying the spec.exposedRegistryPath field or from the MTC UI. Note If the destination cluster is the same as the host cluster (where a migration controller exists), there is no need to configure the exposed registry route for that particular MTC cluster. The spec.exposedRegistryPath is required only for Direct Image Migration and not Direct Volume Migration. Ensure the two spec flags in the MigPlan custom resource (CR) indirectImageMigration and indirectVolumeMigration are set to false for Direct Migration to be performed. The default value for these flags is false . The Direct Migration feature of MTC uses the Rsync utility. 8.2. Rsync configuration for direct volume migration Direct Volume Migration (DVM) in MTC uses Rsync to synchronize files between the source and the target persistent volumes (PVs), using a direct connection between the two PVs. Rsync is a command-line tool that allows you to transfer files and directories to local and remote destinations. The rsync command used by DVM is optimized for clusters functioning as expected. The MigrationController CR exposes the following variables to configure rsync_options in Direct Volume Migration: Variable Type Default value Description rsync_opt_bwlimit int Not set When set to a positive integer, the --bwlimit=<int> option is added to the Rsync command. rsync_opt_archive bool true Sets the --archive option in the Rsync command. rsync_opt_partial bool true Sets the --partial option in the Rsync command. rsync_opt_delete bool true Sets the --delete option in the Rsync command. rsync_opt_hardlinks bool true Sets the --hard-links option in the Rsync command. rsync_opt_info string COPY2 DEL2 REMOVE2 SKIP2 FLIST2 PROGRESS2 STATS2 Enables detailed logging in the Rsync Pod. rsync_opt_extras string Empty Reserved for any other arbitrary options. The options set through the variables above are global for all migrations. The configuration will take effect for all future migrations as soon as the Operator successfully reconciles the MigrationController CR. Any ongoing migration can use the updated settings depending on which step it currently is in. Therefore, it is recommended that the settings be applied before running a migration. The users can always update the settings as needed. Use the rsync_opt_extras variable with caution. Any options passed using this variable are appended to the rsync command in addition to the options set above. Ensure you add white spaces when specifying more than one option. Any error in specifying options can lead to a failed migration. However, you can update the MigrationController CR as many times as you require for future migrations. Customizing the rsync_opt_info flag can adversely affect the progress reporting capabilities in MTC.
However, removing progress reporting can have a performance advantage. This option should only be used when the performance of Rsync operation is observed to be unacceptable. Note The default configuration used by DVM is tested in various environments. It is acceptable for most production use cases provided the clusters are healthy and performing well. These configuration variables should be used in case the default settings do not work and the Rsync operation fails. 8.2.1. Resource limit configurations for Rsync pods The MigrationController CR exposes following variables to configure resource usage requirements and limits on Rsync: Variable Type Default Description source_rsync_pod_cpu_limits string 1 Source rsync pod's CPU limit source_rsync_pod_memory_limits string 1Gi Source rsync pod's memory limit source_rsync_pod_cpu_requests string 400m Source rsync pod's cpu requests source_rsync_pod_memory_requests string 1Gi Source rsync pod's memory requests target_rsync_pod_cpu_limits string 1 Target rsync pod's cpu limit target_rsync_pod_cpu_requests string 400m Target rsync pod's cpu requests target_rsync_pod_memory_limits string 1Gi Target rsync pod's memory limit target_rsync_pod_memory_requests string 1Gi Target rsync pod's memory requests 8.2.1.1. Supplemental group configuration for Rsync pods If Persistent Volume Claims (PVC) are using a shared storage, the access to storage can be configured by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Variable Type Default Description src_supplemental_groups string Not Set Comma separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not Set Comma separated list of supplemental groups for target Rsync Pods For example, the MigrationController CR can be updated to set the values: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 8.2.1.2. Rsync retry configuration With Migration Toolkit for Containers (MTC) 1.4.3 and later, a new ability of retrying a failed Rsync operation is introduced. By default, the migration controller retries Rsync until all of the data is successfully transferred from the source to the target volume or a specified number of retries is met. The default retry limit is set to 20 . For larger volumes, a limit of 20 retries may not be sufficient. You can increase the retry limit by using the following variable in the MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40 In this example, the retry limit is increased to 40 . 8.2.1.3. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 
8.2.1.3.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 8.2.1.3.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 8.2.1.3.3. Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 8.2.2. MigCluster Configuration For every MigCluster resource created in Migration Toolkit for Containers (MTC), a ConfigMap named migration-cluster-config is created in the Migration Operator's namespace on the cluster which MigCluster resource represents. The migration-cluster-config allows you to configure MigCluster specific values. The Migration Operator manages the migration-cluster-config . 
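To see which values are currently in effect on a particular cluster, the ConfigMap can be inspected directly; a short sketch, assuming the Migration Operator runs in the usual openshift-migration namespace:

oc get configmap migration-cluster-config -n openshift-migration -o yaml

Because the Migration Operator manages this ConfigMap, any changes should still be made through the MigrationController variables described below rather than by editing the ConfigMap itself.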
You can configure every value in the ConfigMap using the variables exposed in the MigrationController CR: Variable Type Required Description migration_stage_image_fqin string No Image to use for Stage Pods (applicable only to IndirectVolumeMigration) migration_registry_image_fqin string No Image to use for Migration Registry rsync_endpoint_type string No Type of endpoint for data transfer ( Route , ClusterIP , NodePort ) rsync_transfer_image_fqin string No Image to use for Rsync Pods (applicable only to DirectVolumeMigration) migration_rsync_privileged bool No Whether to run Rsync Pods as privileged or not migration_rsync_super_privileged bool No Whether to run Rsync Pods as super privileged containers ( spc_t SELinux context) or not cluster_subdomain string No Cluster's subdomain migration_registry_readiness_timeout int No Readiness timeout (in seconds) for Migration Registry Deployment migration_registry_liveness_timeout int No Liveness timeout (in seconds) for Migration Registry Deployment exposed_registry_validation_path string No Subpath to validate exposed registry in a MigCluster (for example /v2) 8.3. Direct migration known issues 8.3.1. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 8.3.1.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 8.3.1.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). 
Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false .
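All of the MigrationController variables described in this chapter, from the Rsync tuning and resource limit options in section 8.2 to migration_rsync_super_privileged, are set the same way: by updating the MigrationController CR on the cluster where the migration controller runs. A minimal sketch using oc patch, with purely illustrative values and the CR name and namespace used in the examples above:

oc patch migrationcontroller migration-controller -n openshift-migration --type merge -p '{"spec":{"source_rsync_pod_cpu_limits":"2","migration_rsync_super_privileged":true}}'

The Operator reconciles the change, and it applies to migrations started afterward.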
|
[
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migration_toolkit_for_containers/mtc-direct-migration-requirements
|
7.2. Templates
|
7.2. Templates To create a template, an administrator creates and customizes a virtual machine. Desired packages are installed, customized configurations are applied, the virtual machine is prepared for its intended purpose in order to minimize the changes that must be made to it after deployment. An optional but recommended step before creating a template from a virtual machine is generalization. Generalization is used to remove details like system user names, passwords, and timezone information that will change upon deployment. Generalization does not affect customized configurations. Generalization of Windows and Linux guests in the Red Hat Virtualization environment is discussed further in Templates in the Virtual Machine Management Guide . Red Hat Enterprise Linux guests are generalized using sys-unconfig . Windows guests are generalized using sys-prep . When the virtual machine that provides the basis for a template is satisfactorily configured, generalized if desired, and stopped, an administrator can create a template from the virtual machine. Creating a template from a virtual machine causes a read-only copy of the specially configured virtual disk to be created. The read-only image forms the backing image for all subsequently created virtual machines that are based on that template. In other words, a template is essentially a customized read-only virtual disk with an associated virtual hardware configuration. The hardware can be changed in virtual machines created from a template, for instance, provisioning two gigabytes of RAM for a virtual machine created from a template that has one gigabyte of RAM. The template virtual disk, however, cannot be changed as doing so would result in changes for all virtual machines based on the template. When a template has been created, it can be used as the basis for multiple virtual machines. Virtual machines are created from a given template using a Thin provisioning method or a Clone provisioning method. Virtual machines that are cloned from templates take a complete writable copy of the template base image, sacrificing the space savings of the thin creation method in exchange for no longer depending on the presence of the template. Virtual machines that are created from a template using the thin method use the read-only image from the template as a base image, requiring that the template and all virtual machines created from it be stored on the same storage domain. Changes to data and newly generated data are stored in a copy-on-write image. Each virtual machine based on a template uses the same base read-only image, as well as a copy-on-write image that is unique to the virtual machine. This provides storage savings by limiting the number of times identical data is kept in storage. Furthermore, frequent use of the read-only backing image can cause the data being accessed to be cached, resulting in a net performance increase.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/templates1
|
Chapter 4. Upgrading a geo-replication deployment of standalone Red Hat Quay
|
Chapter 4. Upgrading a geo-replication deployment of standalone Red Hat Quay Use the following procedure to upgrade your geo-replication Red Hat Quay deployment. Important When upgrading geo-replication Red Hat Quay deployments to the next y-stream release (for example, Red Hat Quay 3.7 to Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime when upgrading from one y-stream release to the next. It is highly recommended to back up your Red Hat Quay deployment before upgrading. Prerequisites You have logged into registry.redhat.io . Procedure This procedure assumes that you are running Red Hat Quay services on three (or more) systems. For more information, see Preparing for Red Hat Quay high availability . Obtain a list of the Red Hat Quay instances running on each system. Enter the following command on System A to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01 Enter the following command on System B to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02 Enter the following command on System C to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03 Temporarily shut down all Red Hat Quay instances on each system. Enter the following command on System A to shut down the Red Hat Quay instance: USD sudo podman stop ec16ece208c0 Enter the following command on System B to shut down the Red Hat Quay instance: USD sudo podman stop 7ae0c9a8b37d Enter the following command on System C to shut down the Red Hat Quay instance: USD sudo podman stop e75c4aebfee9 Obtain the latest Red Hat Quay version, for example, Red Hat Quay 3, on each system. Enter the following command on System A to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv} Enter the following command on System B to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:v{producty} Enter the following command on System C to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv} On System A of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay01 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:{productminv} Wait for the new Red Hat Quay container to become fully operational on System A.
You can check the status of the container by entering the following command: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v{producty} registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01 Optional: Ensure that Red Hat Quay is fully operational by navigating to the Red Hat Quay UI. After ensuring that Red Hat Quay on System A is fully operational, run the new image versions on System B and on System C. On System B of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay02 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:{productminv} On System C of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay03 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:{productminv} You can check the status of the containers on System B and on System C by entering the following command: USD sudo podman ps
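As an optional cross-check after Systems A, B, and C are all running the new image (this is not part of the documented procedure), a short loop such as the following confirms the container name, image tag, and status on each system. The SSH host aliases are hypothetical; substitute the hostnames of your own systems.

# Hypothetical SSH aliases for Systems A, B, and C.
for host in quay-system-a quay-system-b quay-system-c; do
  # Print the container name, image reference, and status reported by Podman on each system.
  ssh "$host" "sudo podman ps --format '{{.Names}} {{.Image}} {{.Status}}'"
done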
|
[
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03",
"sudo podman stop ec16ece208c0",
"sudo podman stop 7ae0c9a8b37d",
"sudo podman stop e75c4aebfee9",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:v{producty}",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay01 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v{producty} registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay02 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay03 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman ps"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/upgrade_red_hat_quay/upgrading-geo-repl-quay
|
Chapter 4. Installing a cluster on GCP with customizations
|
Chapter 4. Installing a cluster on GCP with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on infrastructure that the installation program provisions on Google Cloud Platform (GCP). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
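Optionally, before creating the installation configuration, you can confirm that the extracted binary runs and reports the release you expect. This is a quick sanity check rather than a documented step:

# Print the installer version and the release image it will deploy.
./openshift-install version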
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on GCP". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 4.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 4.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 4.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D Tau T2D 4.5.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 4.2. Machine series for 64-bit ARM machines Tau T2A 4.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 4.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection.
For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 4.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 4.5.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 15 17 18 21 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. 
The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 4.5.8. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.6. Managing user-defined labels and tags for GCP Important Support for user-defined labels and tags for GCP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Google Cloud Platform (GCP) provides labels and tags that help to identify and organize the resources created for a specific OpenShift Container Platform cluster, making them easier to manage. You can define labels and tags for each GCP resource only during OpenShift Container Platform cluster installation. Important User-defined labels and tags are not supported for OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.16. Note You cannot update the tags that are already added. Also, a new tag-supported resource creation fails if the configured tag keys or tag values are deleted. 
User-defined labels User-defined labels and OpenShift Container Platform specific labels are applied only to resources created by OpenShift Container Platform installation program and its core components such as: GCP filestore CSI Driver Operator GCP PD CSI Driver Operator Image Registry Operator Machine API provider for GCP User-defined tags are attached to resources created by the OpenShift Container Platform installation program, Image Registry Operator, and Machine API Operator. User-defined tags are not attached to the resources created by any other Operators or the Kubernetes in-tree components. User-defined labels and OpenShift Container Platform labels are available on the following GCP resources: Compute disk Compute instance Compute image Compute forwarding rule DNS managed zone Filestore instance Storage bucket Limitations to user-defined labels Labels for ComputeAddress are supported in the GCP beta version. OpenShift Container Platform does not add labels to the resource. User-defined tags User-defined tags are attached to resources created by the OpenShift Container Platform Image Registry Operator and not on the resources created by any other Operators or the Kubernetes in-tree components. User-defined tags are available on the following GCP resources: Storage buckets Compute instances Compute disks Limitations to the user-defined tags Tags will not be attached to the following items: Filestore instance resources created by the GCP filestore CSI driver Operator Compute disk and compute image resources created by the GCP PD CSI driver Operator Tags must not be restricted to particular service accounts, because Operators create and use service accounts with minimal roles. OpenShift Container Platform does not create any key and value resources of the tag. OpenShift Container Platform specific tags are not added to any resource. Additional resources For more information about identifying the OrganizationID , see: OrganizationID For more information about identifying the ProjectID , see: ProjectID For more information about labels, see Labels Overview . For more information about tags, see Tags Overview . 4.6.1. Configuring user-defined labels and tags for GCP Prerequisites The installation program requires that a service account includes a TagUser role, so that the program can create the OpenShift Container Platform cluster with defined tags at both organization and project levels. Procedure Update the install-config.yaml file to define the list of desired labels and tags. Note Labels and tags are defined during the install-config.yaml creation phase, and cannot be modified or updated with new labels and tags after cluster creation. Sample install-config.yaml file apiVersion: v1 featureSet: TechPreviewNoUpgrade platform: gcp: userLabels: 1 - key: <label_key> 2 value: <label_value> 3 userTags: 4 - parentID: <OrganizationID/ProjectID> 5 key: <tag_key_short_name> value: <tag_value_short_name> 1 Adds keys and values as labels to the resources created on GCP. 2 Defines the label name. 3 Defines the label content. 4 Adds keys and values as tags to the resources created on GCP. 5 The ID of the hierarchical resource where the tags are defined, at the organization or the project level. The following are the requirements for user-defined labels: A label key and value must have a minimum of 1 character and can have a maximum of 63 characters. A label key and value must contain only lowercase letters, numeric characters, underscore ( _ ), and dash ( - ). 
A label key must start with a lowercase letter. You can configure a maximum of 32 labels per resource. Each resource can have a maximum of 64 labels, and 32 labels are reserved for internal use by OpenShift Container Platform. The following are the requirements for user-defined tags: Tag key and tag value must already exist. OpenShift Container Platform does not create the key and the value. A tag parentID can be either OrganizationID or ProjectID : OrganizationID must consist of decimal numbers without leading zeros. ProjectID must be 6 to 30 characters in length, that includes only lowercase letters, numbers, and hyphens. ProjectID must start with a letter, and cannot end with a hyphen. A tag key must contain only uppercase and lowercase alphanumeric characters, hyphen ( - ), underscore ( _ ), and period ( . ). A tag value must contain only uppercase and lowercase alphanumeric characters, hyphen ( - ), underscore ( _ ), period ( . ), at sign ( @ ), percent sign ( % ), equals sign ( = ), plus ( + ), colon ( : ), comma ( , ), asterisk ( * ), pound sign ( USD ), ampersand ( & ), parentheses ( () ), square braces ( [] ), curly braces ( {} ), and space. A tag key and value must begin and end with an alphanumeric character. Tag value must be one of the pre-defined values for the key. You can configure a maximum of 50 tags. There should be no tag key defined with the same value as any of the existing tag keys that will be inherited from the parent resource. 4.6.2. Querying user-defined labels and tags for GCP After creating the OpenShift Container Platform cluster, you can access the list of the labels and tags defined for the GCP resources in the infrastructures.config.openshift.io/cluster object as shown in the following sample infrastructure.yaml file. Sample infrastructure.yaml file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: platformSpec: type: GCP status: infrastructureName: <cluster_id> 1 platform: GCP platformStatus: gcp: resourceLabels: - key: <label_key> value: <label_value> resourceTags: - key: <tag_key_short_name> parentID: <OrganizationID/ProjectID> value: <tag_value_short_name> type: GCP 1 The cluster ID that is generated during cluster installation. Along with the user-defined labels, resources have a label defined by the OpenShift Container Platform. The format of the OpenShift Container Platform labels is kubernetes-io-cluster-<cluster_id>:owned . 4.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 4.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 4.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 4.8.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 4.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 4.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 4.8.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 4.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 4.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 4.9. Using the GCP Marketplace offering Using the GCP Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through GCP, while still being supported directly by Red Hat. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to deploy compute machines. To deploy an OpenShift Container Platform cluster using an RHCOS image from the GCP Marketplace, override the default behavior by modifying the install-config.yaml file to reference the location of GCP Marketplace offer. Prerequisites You have an existing install-config.yaml file. Procedure Edit the compute.platform.gcp.osImage parameters to specify the location of the GCP Marketplace image: Set the project parameter to redhat-marketplace-public Set the name parameter to one of the following offers: OpenShift Container Platform redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine redhat-coreos-oke-413-x86-64-202305021736 Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies a GCP Marketplace image for compute machines apiVersion: v1 baseDomain: example.com controlPlane: # ... compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-413-x86-64-202305021736 # ... 4.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.13. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting.
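Beyond oc whoami, a few additional read-only checks can confirm that the cluster came up healthy; these commands are not part of the procedure above and are offered only as a suggested follow-up.

export KUBECONFIG=<installation_directory>/auth/kubeconfig

oc whoami                # expected output: system:admin
oc get clusterversion    # confirms the installed version and update status
oc get nodes             # all control plane and compute nodes should be Ready
oc get clusteroperators  # every operator should report Available=True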
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 featureSet: TechPreviewNoUpgrade platform: gcp: userLabels: 1 - key: <label_key> 2 value: <label_value> 3 userTags: 4 - parentID: <OrganizationID/ProjectID> 5 key: <tag_key_short_name> value: <tag_value_short_name>",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: platformSpec: type: GCP status: infrastructureName: <cluster_id> 1 platform: GCP platformStatus: gcp: resourceLabels: - key: <label_key> value: <label_value> resourceTags: - key: <tag_key_short_name> parentID: <OrganizationID/ProjectID> value: <tag_value_short_name> type: GCP",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"apiVersion: v1 baseDomain: example.com controlPlane: compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-413-x86-64-202305021736",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_gcp/installing-gcp-customizations
|
Chapter 2. Admin REST API
|
Chapter 2. Admin REST API Red Hat Single Sign-On comes with a fully functional Admin REST API with all features provided by the Admin Console. To invoke the API you need to obtain an access token with the appropriate permissions. The required permissions are described in Server Administration Guide . A token can be obtained by enabling authenticating to your application with Red Hat Single Sign-On; see the Securing Applications and Services Guide . You can also use direct access grant to obtain an access token. For complete documentation see API Documentation . 2.1. Example using CURL Obtain access token for user in the realm master with username admin and password password : curl \ -d "client_id=admin-cli" \ -d "username=admin" \ -d "password=password" \ -d "grant_type=password" \ "http://localhost:8080/auth/realms/master/protocol/openid-connect/token" Note By default this token expires in 1 minute The result will be a JSON document. To invoke the API you need to extract the value of the access_token property. You can then invoke the API by including the value in the Authorization header of requests to the API. The following example shows how to get the details of the master realm: curl \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ "http://localhost:8080/auth/admin/realms/master"
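The section above notes that you must extract the value of the access_token property from the JSON response before calling the Admin REST API. A small sketch of that extraction, assuming jq is installed and the server runs at localhost:8080 as in the examples:

# Request a token with the admin-cli client and keep only the access_token field.
TOKEN=$(curl -s \
  -d "client_id=admin-cli" \
  -d "username=admin" \
  -d "password=password" \
  -d "grant_type=password" \
  "http://localhost:8080/auth/realms/master/protocol/openid-connect/token" | jq -r '.access_token')

# Use the token before it expires (60 seconds by default).
curl -s -H "Authorization: bearer ${TOKEN}" \
  "http://localhost:8080/auth/admin/realms/master"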
|
[
"curl -d \"client_id=admin-cli\" -d \"username=admin\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/auth/realms/master/protocol/openid-connect/token\"",
"curl -H \"Authorization: bearer eyJhbGciOiJSUz...\" \"http://localhost:8080/auth/admin/realms/master\""
] |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_developer_guide/admin_rest_api
|
Chapter 3. Mirroring images for a disconnected installation
|
Chapter 3. Mirroring images for a disconnected installation You can use the procedures in this section to ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place your mirror registry on a mirror host that has access to both your network and the Internet. If you do not have access to a mirror host, use the Mirroring an Operator catalog procedure to copy images to a device you can move across network boundaries with. 3.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 3.2. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. 
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.3. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.3.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 3.3.1.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.3.1.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 3.3.1.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.4. 
Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your restricted network. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager and save it to a .json file. Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Make a copy of your pull secret in JSON format: USD cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. Save the file either as ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json . The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Edit the new file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.5. Mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. If you already have a container image registry, such as Red Hat Quay, you can skip these steps and go straight to Mirroring the OpenShift Container Platform image repository . Prerequisites An OpenShift Container Platform subscription. Red Hat Enterprise Linux (RHEL) 8 with Podman 3.3 and OpenSSL installed. 
Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server. Passwordless sudo access on the target host. Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys. 2 or more vCPUs. 8 GB of RAM. About 7.7 GB for OpenShift Container Platform 4.7 Release images, or about 713 GB for OpenShift Container Platform 4.7 Release images and OpenShift Container Platform 4.7 Red Hat Operator images. Up to 1 TB per stream or more is suggested. Important These requirements are based on local testing results with only Release images and Operator images tested. Storage requirements can vary based on your organization's needs. Some users might require more space, for example, when they mirror multiple z-streams. You can use standard Red Hat Quay functionality to remove unnecessary images and free up space. 3.5.1. Mirror registry for Red Hat OpenShift introduction For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with pre-configured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. The mirror registry for Red Hat OpenShift is limited to hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift . Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Unlike Red Hat Quay, the mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged, because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. 
Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment. 3.5.2. Mirroring on a local host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD sudo ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the registry by running the following command: USD podman login --authfile pull-secret.txt \ -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring an Operator catalog" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 3.5.3. Mirroring on a remote host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. 
All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD sudo ./mirror-registry install -v \ --targetHostname <host_example_com> \ --targetUsername <example_user> \ -k ~/.ssh/my_ssh_key \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the mirror registry by running the following command: USD podman login --authfile pull-secret.txt \ -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring an Operator catalog" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 3.6. Upgrading the mirror registry for Red Hat OpenShift You can upgrade the mirror registry for Red Hat OpenShift from your local host by running the following command: USD sudo ./mirror-registry upgrade Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. 3.6.1. Uninstalling the mirror registry for Red Hat OpenShift You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command: USD sudo ./mirror-registry uninstall -v \ --quayRoot <example_directory_name> Note Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt. Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name , you must include that string to properly uninstall the mirror registry. 3.6.2. Mirror registry for Red Hat OpenShift flags The following flags are available for the mirror registry for Red Hat OpenShift : Flags Description --autoApprove A boolean value that disables interactive prompts. If set to true , the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. --initPassword The password of the init user created during Quay installation. 
Must be at least eight characters and contain no whitespace. --initUser string Shows the username of the initial user. Defaults to init if left unspecified. --quayHostname The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml . Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. [1] --quayRoot , -r The directory where container image layer and configuration data is saved, including rootCA.key , rootCA.pem , and rootCA.srl certificates. Requires about 7.7 GB for OpenShift Container Platform 4.7 Release images, or about 713 GB for OpenShift Container Platform 4.7 Release images and OpenShift Container Platform 4.7 Red Hat Operator images. Defaults to /etc/quay-install if left unspecified. --ssh-key , -k The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. --sslCert The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --sslCheckSkip Skips the check for the certificate hostname against the SERVER_HOSTNAME in the config.yaml file. [2] --sslKey The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --targetHostname , -H The hostname of the target you want to install Quay to. Defaults to USDHOST , for example, a local host, if left unspecified. --targetUsername , -u The user on the target host which will be used for SSH. Defaults to USDUSER , for example, the current user if left unspecified. --verbose , -v Shows debug logs and Ansible playbook outputs. --version Shows the version for the mirror registry for Red Hat OpenShift . --quayHostname must be modified if the public DNS name of your system is different from the local hostname. --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation. Additional resources Using SSL to protect connections to Red Hat Quay Configuring the system to trust the certificate authority Mirroring the OpenShift Container Platform image repository Mirroring an Operator catalog 3.7. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the Internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates that do not set a Subject Alternative Name, you must precede the oc commands in this procedure with GODEBUG=x509ignoreCN=0 . If you do not set this variable, the oc commands will fail with the following error: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0 Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. 
Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your server, such as x86_64 .: USD ARCHITECTURE=<server_architecture> Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. 
Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have Internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active Internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.8. The Cluster Samples Operator in a disconnected environment In a disconnected environment, you must take additional steps after you install a cluster to configure the Cluster Samples Operator. Review the following information in preparation. 3.8.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. 3.9. steps Mirror the OperatorHub images for the Operators that you want to install in your cluster. 
Install a cluster on infrastructure that you provision in your restricted network, such as on VMware vSphere, bare metal, or Amazon Web Services. 3.10. Additional resources See Gathering data about specific features for more information about using must-gather.
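As a convenience, the environment variables and the connected-mirroring commands from section 3.7 can be collected into a single script. The values are placeholders, and the GODEBUG prefix mentioned earlier is only needed when your mirror registry uses a self-signed certificate without a Subject Alternative Name.

# Values described in section 3.7 -- adjust for your environment.
OCP_RELEASE=<release_version>
LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'
LOCAL_REPOSITORY='<local_repository_name>'
PRODUCT_REPO='openshift-release-dev'
LOCAL_SECRET_JSON='<path_to_pull_secret>'
RELEASE_NAME="ocp-release"
ARCHITECTURE=<server_architecture>

# Connected mirroring: push the release images directly to the local registry.
oc adm release mirror -a ${LOCAL_SECRET_JSON} \
  --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}

# Extract an installation program pinned to the mirrored release.
oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install \
  "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"

Record the imageContentSources section from the mirror command output; you must add it to the install-config.yaml file during installation.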
|
[
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"sudo ./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"podman login --authfile pull-secret.txt -u init -p <password> <host_example_com>:8443> --tls-verify=false 1",
"sudo ./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"podman login --authfile pull-secret.txt -u init -p <password> <host_example_com>:8443> --tls-verify=false 1",
"sudo ./mirror-registry upgrade",
"sudo ./mirror-registry uninstall -v --quayRoot <example_directory_name>",
"x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<server_architecture>",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-mirroring-installation-images
|
Part I. Introduction to Administering JBoss Data Virtualization
|
Part I. Introduction to Administering JBoss Data Virtualization
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/part-introduction_to_administering_jboss_data_virtualization
|
Appendix H. Using Your Subscription
|
Appendix H. Using Your Subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the JBOSS INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Registering Your System for Packages To install RPM packages on Red Hat Enterprise Linux, your system must be registered. If you are using zip or tar files, this step is not required. Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the page. Use the listed command in your system terminal to complete the registration. To learn more see How to Register and Subscribe a System to the Red Hat Customer Portal . Revised on 2021-06-10 08:59:29 UTC
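The registration step above refers to a command shown by the Registration Assistant; the exact command depends on your OS version. A typical invocation on Red Hat Enterprise Linux, shown here only as an illustrative sketch, is:

# Register the system with the Red Hat Customer Portal and attach an
# available subscription; you are prompted for your portal credentials.
sudo subscription-manager register --auto-attach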
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/using_your_subscription
|
15.5. Managing Synchronization Agreements
|
15.5. Managing Synchronization Agreements 15.5.1. Trusting the Active Directory and IdM CA Certificates Both Active Directory and Identity Management use certificates for server authentication. For the Active Directory and IdM SSL server certificates to be trusted by each other, both servers need to trust the CA certificate for the CA which issued those certificates. This means that the Active Directory CA certificate needs to be imported into the IdM database, and the IdM CA certificate needs to be imported into the Active Directory database. On the Active Directory server, download the IdM server's CA certificate from http://ipa.example.com/ipa/config/ca.crt . Install the IdM CA certificate in the Active Directory certificate database. This can be done using the Microsoft Management Console or the certutil utility . For example: For more details, see the Active Directory documentation. Export the Active Directory CA certificate. In My Network Places , open the CA distribution point. Double-click the security certificate file ( .crt file) to display the Certificate dialog box. On the Details tab, click Copy to File to start the Certificate Export Wizard . Click , and then select Base-64 encoded X.509 (.CER) . Specify a suitable directory and file name for the exported file. Click to export the certificate, and then click Finish . Copy the Active Directory certificate over to the IdM server machine. Download the IdM server's CA certificate from http://ipa.example.com/ipa/config/ca.crt . Copy both the Active Directory CA certificate and the IdM CA certificate into the /etc/openldap/cacerts/ directory. Update the hash symlinks for the certificates. Edit the /etc/openldap/ldap.conf file, and add the information to point to and use the certificates in the /etc/openldap/cacerts/ directory. 15.5.2. Creating Synchronization Agreements Synchronization agreements are created on the IdM server using the ipa-replica-manage connect command because it creates a connection to the Active Directory domain. The options to create the synchronization agreement are listed in Table 15.2, "Synchronization Agreement Options" . Make sure that the Active Directory and IdM servers trust each other's CA certificates, as in Section 15.5.1, "Trusting the Active Directory and IdM CA Certificates" . Remove any existing Kerberos credentials on the IdM server. Use the ipa-replica-manage command to create a Windows synchronization agreement. This requires the --winsync option. If passwords will be synchronized as well as user accounts, then also use the --passsync option and set a password to use for Password Sync. The --binddn and --bindpwd options give the username and password of the system account on the Active Directory server that IdM will use to connect to the Active Directory server. When prompted, enter the Directory Manager password. Optional. Configure Password Synchronization, as in Section 15.6.2, "Setting up Password Synchronization" . Table 15.2. Synchronization Agreement Options Option Description --winsync Identifies this as a synchronization agreement. --binddn Gives the full user DN of the synchronization identity. This is the user DN that the IdM LDAP server uses to bind to Active Directory. This user must exist in the Active Directory domain and must have replicator, read, search, and write permissions on the Active Directory subtree. --bindpw Gives the password for the sync user. --passsync Gives the password for the Windows user account which is involved in synchronization. 
--cacert Gives the full path and file name of the Active Directory CA certificate. This certificate is exported in Section 15.5.1, "Trusting the Active Directory and IdM CA Certificates" . --win-subtree Gives the DN of the Windows subtree containing the users to synchronize. The default value is cn=Users,USDSUFFIX . AD_server_name Gives the hostname of the Active Directory domain controller. 15.5.3. Changing the Behavior for Syncing User Account Attributes When the sync agreement is created, it has certain default behaviors defined for how the synchronization process handles the user account attributes during synchronization. The types of behaviors are things like how to handle lockout attributes or how to handle different DN formats. This behavior can be changed by editing the synchronization agreement. The list of attribute-related parameters are in Table 15.3, "Synced Attribute Settings" . The sync agreement exists as a special plug-in entry in the LDAP server and each attribute behavior is set through an LDAP attribute. To change the sync behavior, use the ldapmodify command to modify the LDAP server entry directly. For example, account lockout attributes are synchronized between IdM and Active Directory by default, but this can be disabled by editing the ipaWinSyncAcctDisable attribute. (Changing this means that if an account is disabled in Active Directory, it is still active in IdM and vice versa.) Table 15.3. Synced Attribute Settings Parameter Description Possible Values General User Account Parameters ipaWinSyncNewEntryFilter Sets the search filter to use to find the entry which contains the list of object classes to add to new user entries. The default is (cn=ipaConfig) . ipaWinSyncNewUserOCAttr Sets the attribute in the configuration entry which actually contains the list of object classes to add to new user entries. The default is ipauserobjectclasses . ipaWinSyncHomeDirAttr Identifies which attribute in the entry contains the default location of the POSIX home directory. The default is ipaHomesRootDir . ipaWinSyncUserAttr Sets an additional attribute with a specific value to add to Active Directory users when they are synced over from the Active Directory domain. If the attribute is multi-valued, then it can be set multiple times, and the sync process adds all of the values to the entry. Note This only sets the attribute value if the entry does not already have that attribute present. If the attribute is present, then the entry's value is used when the Active Directory entry is synced over. ipaWinSyncUserAttr: attributeName attributeValue ipaWinSyncForceSync Sets whether to check existing IdM users which match an existing Active Directory user should be automatically edited so they can be synchronized. If an IdM user account has a uid parameter which is identical to the samAccountName in an existing Active Directory user, then that account is not synced by default. This attribute tells the sync service to add the ntUser and ntUserDomainId to the IdM user entries automatically, which allows them to be synchronized. true | false User Account Lock Parameters ipaWinSyncAcctDisable Sets which way to synchronize account lockout attributes. It is possible to control which account lockout settings are in effect. For example, to_ad means that when account lockout attribute is set in IdM, its value is synced over to Active Directory and overrides the local Active Directory value. By default, account lockout attributes are synced from both domains. 
both (default) to_ad to_ds none ipaWinSyncInactivatedFilter Sets the search filter to use to find the DN of the group used to hold inactivated (disabled) users. This does not need to be changed in most deployments. The default is (&(cn=inactivated)(objectclass=groupOfNames)) . ipaWinSyncActivatedFilter Sets the search filter to use to find the DN of the group used to hold active users. This does not need to be changed in most deployments. The default is (&(cn=activated)(objectclass=groupOfNames)) . Group Parameters ipaWinSyncDefaultGroupAttr Sets the attribute in the new user account to reference to see what the default group for the user is. The group name in the entry is then used to find the gidNumber for the user account. The default is ipaDefaultPrimaryGroup . ipaWinSyncDefaultGroupFilter Sets the search filter to map the group name to the POSIX gidNumber . The default is (&(gidNumber=*)(objectclass=posixGroup)(cn= groupAttr_value )) . Realm Parameters ipaWinSyncRealmAttr Sets the attribute which contains the realm name in the realm entry. The default is cn . ipaWinSyncRealmFilter Sets the search filter to use to find the entry which contains the IdM realm name. The default is (objectclass=krbRealmContainer) . 15.5.4. Changing the Synchronized Windows Subtree Creating a synchronization agreement automatically sets the two subtrees to use as the synchronized user database. In IdM, the default is cn=users,cn=accounts,USDSUFFIX , and for Active Directory, the default is CN=Users,USDSUFFIX . The value for the Active Directory subtree can be set to a non-default value when the sync agreement is created by using the --win-subtree option. After the agreement is created, the Active Directory subtree can be changed by using the ldapmodify command to edit the nsds7WindowsReplicaSubtree value in the sync agreement entry. Get the name of the sync agreement, using ldapsearch . This search returns only the values for the dn and nsds7WindowsReplicaSubtree attributes instead of the entire entry. Modify the sync agreement The new subtree setting takes effect immediately. If a sync operation is currently running, then it takes effect as soon as the current operation completes. 15.5.5. Configuring Uni-Directional Sync By default, all modifications and deletions are bi-directional. A change in Active Directory is synced over to Identity Management, and a change to an entry in Identity Management is synced over to Active Directory. This is essentially an equitable, multi-master relationship, where both Active Directory and Identity Management are equal peers in synchronization and are both data masters. However, there can be some data structure or IT designs where only one domain should be a data master and the other domain should accept updates. This changes the sync relationship from a multi-master relationship (where the peer servers are equal) to a master-consumer relationship. This is done by setting the oneWaySync parameter on the sync agreement. The possible values are fromWindows (for Active Directory to Identity Management sync) and toWindows (for Identity Management to Active Directory sync). For example, to sync changes from Active Directory to Identity Management: Important Enabling uni-directional sync does not automatically prevent changes on the un-synchronized server, and this can lead to inconsistencies between the sync peers between sync updates. 
For example, uni-directional sync is configured to go from Active Directory to Identity Management, so Active Directory is (in essence) the data master. If an entry is modified or even deleted in Identity Management, then the Identity Management information differs from the information in Active Directory, and those changes are never carried over to Active Directory. During the next sync update, the edits are overwritten on the Directory Server and the deleted entry is re-added. 15.5.6. Deleting Synchronization Agreements Synchronization can be stopped by deleting the sync agreement, which disconnects the IdM and Active Directory servers. As the inverse of creating a sync agreement, deleting a sync agreement uses the ipa-replica-manage disconnect command followed by the hostname of the Active Directory server. Delete the sync agreement. Remove the Active Directory CA certificate from the IdM server database: 15.5.7. Winsync Agreement Failures Creating the sync agreement fails because it cannot connect to the Active Directory server. One of the most common sync agreement failures is that the IdM server cannot connect to the Active Directory server: This can occur if the wrong Active Directory CA certificate was specified when the agreement was created. This creates duplicate certificates in the IdM LDAP database (in the /etc/dirsrv/slapd-DOMAIN/ directory) with the name Imported CA. This can be checked using certutil: To resolve this issue, clear the certificate database: This deletes the CA certificate from the LDAP database. There are errors saying passwords are not being synced because the entry already exists. For some entries in the user database, there may be an informational message that the password is not being reset because the entry already exists: This is not an error. This message occurs when an exempt user, the Password Sync user, is not being changed. The Password Sync user is the operational user which is used by the service to change the passwords in IdM.
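After changing the synchronized subtree or the sync direction, it can be helpful to read the current settings back from the sync agreement before troubleshooting further. The following minimal sketch assumes the example server name, agreement location, and Directory Manager credentials used earlier in this section; adjust them for your deployment.
# List each winsync agreement with its synchronized Windows subtree and,
# when set, the one-way sync direction.
ldapsearch -xLLL -D "cn=directory manager" -W -p 389 -h ipaserver.example.com \
    -b cn=config "objectclass=nsdswindowsreplicationagreement" \
    dn nsds7WindowsReplicaSubtree oneWaySync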
|
[
"certutil -installcert -v -config \"ipaserver.example.com\\Example Domain CA\" c:\\path\\to\\ca.crt",
"cacertdir_rehash /etc/openldap/cacerts/",
"TLS_CACERTDIR /etc/openldap/cacerts/ TLS_REQCERT allow",
"kdestroy",
"ipa-replica-manage connect --winsync --binddn cn=administrator,cn=users,dc=example,dc=com --bindpw Windows-secret --passsync secretpwd --cacert /etc/openldap/cacerts/windows.cer adserver.example.com -v",
"[jsmith@ipaserver ~]USD ldapmodify -x -D \"cn=directory manager\" -w password dn: cn=ipa-winsync,cn=plugins,cn=config changetype: modify replace: ipaWinSyncAcctDisable ipaWinSyncAcctDisable: none modifying entry \"cn=ipa-winsync,cn=plugins,cn=config\"",
"[jsmith@ipaserver ~]USD ldapsearch -xLLL -D \"cn=directory manager\" -w password -p 389 -h ipaserver.example.com -b cn=config objectclass=nsdswindowsreplicationagreement dn nsds7WindowsReplicaSubtree dn: cn=meToWindowsBox.example.com,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsds7WindowsReplicaSubtree: cn=users,dc=example,dc=com ... 8<",
"[jsmith@ipaserver ~]USD ldapmodify -x -D \"cn=directory manager\" -W -p 389 -h ipaserver.example.com <<EOF dn: cn=meToWindowsBox.example.com,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config changetype: modify replace: nsds7WindowsReplicaSubtree nsds7WindowsReplicaSubtree: cn=alternateusers,dc=example,dc=com EOF modifying entry \"cn=meToWindowsBox.example.com,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config\"",
"[jsmith@ipaserver ~]USD ldapmodify -x -D \"cn=directory manager\" -w password -p 389 -h ipaserver.example.com dn: cn=windows.example.com,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config changetype: modify add: oneWaySync oneWaySync: fromWindows",
"ipa-replica-manage disconnect adserver.example.com",
"certutil -D -d /etc/dirsrv/slapd-EXAMPLE.COM/ -n \"Imported CA\"",
"\"Update failed! Status: [81 - LDAP error: Can't contact LDAP server]",
"certutil -L -d /etc/dirsrv/slapd-DOMAIN/ Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI CA certificate CTu,u,Cu Imported CA CT,,C Server-Cert u,u,u Imported CA CT,,C",
"certutil -d /etc/dirsrv/slapd-DOMAIN-NAME -D -n \"Imported CA\"",
"\"Windows PassSync entry exists, not resetting password\""
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/managing-sync-agmt
|
Chapter 2. Red Hat Decision Manager BPMN and DMN modelers
|
Chapter 2. Red Hat Decision Manager BPMN and DMN modelers Red Hat Decision Manager provides the following extensions or applications that you can use to design Business Process Model and Notation (BPMN) process models and Decision Model and Notation (DMN) decision models using graphical modelers. Business Central : Enables you to view and design BPMN models, DMN models, and test scenario files in a related embedded designer. To use Business Central, you can set up a development environment containing a Business Central to design business rules and processes, and a KIE Server to execute and test the created business rules and processes. Red Hat Decision Manager VS Code extension : Enables you to view and design BPMN models, DMN models, and test scenario files in Visual Studio Code (VS Code). The VS Code extension requires VS Code 1.46.0 or later. To install the Red Hat Decision Manager VS Code extension, select the Extensions menu option in VS Code and search for and install the Red Hat Business Automation Bundle extension. Standalone BPMN and DMN editors : Enable you to view and design BPMN and DMN models embedded in your web applications. To download the necessary files, you can either use the NPM artifacts from the NPM registry or download the JavaScript files directly for the DMN standalone editor library at https://<YOUR_PAGE>/dmn/index.js and for the BPMN standalone editor library at https://<YOUR_PAGE>/bpmn/index.js . 2.1. Installing the Red Hat Decision Manager VS Code extension bundle Red Hat Decision Manager provides a Red Hat Business Automation Bundle VS Code extension that enables you to design Decision Model and Notation (DMN) decision models, Business Process Model and Notation (BPMN) 2.0 business processes, and test scenarios directly in VS Code. VS Code is the preferred integrated development environment (IDE) for developing new business applications. Red Hat Decision Manager also provides individual DMN Editor and BPMN Editor VS Code extensions for DMN or BPMN support only, if needed. Important The editors in the VS Code are partially compatible with the editors in the Business Central, and several Business Central features are not supported in the VS Code. Prerequisites The latest stable version of VS Code is installed. Procedure In your VS Code IDE, select the Extensions menu option and search for Red Hat Business Automation Bundle for DMN, BPMN, and test scenario file support. For DMN or BPMN file support only, you can also search for the individual DMN Editor or BPMN Editor extensions. When the Red Hat Business Automation Bundle extension appears in VS Code, select it and click Install . For optimal VS Code editor behavior, after the extension installation is complete, reload or close and re-launch your instance of VS Code. After you install the VS Code extension bundle, any .dmn , .bpmn , or .bpmn2 files that you open or create in VS Code are automatically displayed as graphical models. Additionally, any .scesim files that you open or create are automatically displayed as tabular test scenario models for testing the functionality of your business decisions. If the DMN, BPMN, or test scenario modelers open only the XML source of a DMN, BPMN, or test scenario file and displays an error message, review the reported errors and the model file to ensure that all elements are correctly defined. Note For new DMN or BPMN models, you can also enter dmn.new or bpmn.new in a web browser to design your DMN or BPMN model in the online modeler. 
When you finish creating your model, you can click Download in the online modeler page to import your DMN or BPMN file into your Red Hat Decision Manager project in VS Code. 2.2. Configuring the Red Hat Decision Manager standalone editors Red Hat Decision Manager provides standalone editors that are distributed in a self-contained library providing an all-in-one JavaScript file for each editor. The JavaScript file uses a comprehensive API to set and control the editor. You can install the standalone editors using the following methods: Download each JavaScript file manually Use the NPM package Procedure Install the standalone editors using one of the following methods: Download each JavaScript file manually : For this method, follow these steps: Download the JavaScript files. Add the downloaded Javascript files to your hosted application. Add the following <script> tag to your HTML page: Script tag for your HTML page for the DMN editor Script tag for your HTML page for the BPMN editor Use the NPM package : For this method, follow these steps: Add the NPM package to your package.json file: Adding the NPM package Import each editor library to your TypeScript file: Importing each editor After you install the standalone editors, open the required editor by using the provided editor API, as shown in the following example for opening a DMN editor. The API is the same for each editor. Opening the DMN standalone editor const editor = DmnEditor.open({ container: document.getElementById("dmn-editor-container"), initialContent: Promise.resolve(""), readOnly: false, origin: "", resources: new Map([ [ "MyIncludedModel.dmn", { contentType: "text", content: Promise.resolve("") } ] ]) }); Use the following parameters with the editor API: Table 2.1. Example parameters Parameter Description container HTML element in which the editor is appended. initialContent Promise to a DMN model content. This parameter can be empty, as shown in the following examples: Promise.resolve("") Promise.resolve("<DIAGRAM_CONTENT_DIRECTLY_HERE>") fetch("MyDmnModel.dmn").then(content ⇒ content.text()) readOnly (Optional) Enables you to allow changes in the editor. Set to false (default) to allow content editing and true for read-only mode in editor. origin (Optional) Origin of the repository. The default value is window.location.origin . resources (Optional) Map of resources for the editor. For example, this parameter is used to provide included models for the DMN editor or work item definitions for the BPMN editor. Each entry in the map contains a resource name and an object that consists of content-type ( text or binary ) and content (similar to the initialContent parameter). The returned object contains the methods that are required to manipulate the editor. Table 2.2. Returned object methods Method Description getContent(): Promise<string> Returns a promise containing the editor content. setContent(path: string, content: string): void Sets the content of the editor. getPreview(): Promise<string> Returns a promise containing an SVG string of the current diagram. subscribeToContentChanges(callback: (isDirty: boolean) ⇒ void): (isDirty: boolean) ⇒ void Sets a callback to be called when the content changes in the editor and returns the same callback to be used for unsubscription. unsubscribeToContentChanges(callback: (isDirty: boolean) ⇒ void): void Unsubscribes the passed callback when the content changes in the editor. markAsSaved(): void Resets the editor state that indicates that the content in the editor is saved. 
Also, it activates the subscribed callbacks related to content change. undo(): void Undoes the last change in the editor. Also, it activates the subscribed callbacks related to content change. redo(): void Redoes the last undone change in the editor. Also, it activates the subscribed callbacks related to content change. close(): void Closes the editor. getElementPosition(selector: string): Promise<Rect> Provides an alternative to extend the standard query selector when an element lives inside a canvas or a video component. The selector parameter must follow the <PROVIDER>:::<SELECT> format, such as Canvas:::MySquare or Video:::PresenterHand . This method returns a Rect representing the element position. envelopeApi: MessageBusClientApi<KogitoEditorEnvelopeApi> This is an advanced editor API. For more information about advanced editor API, see MessageBusClientApi and KogitoEditorEnvelopeApi .
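If the embedded editors fail to load, one quick check is whether the standalone library files are actually being served at the locations referenced by the <script> tags. The following minimal sketch assumes the same <YOUR_PAGE> placeholder used above and only verifies that the files are reachable over HTTP:
# Confirm that the hosted editor bundles respond before debugging the page itself.
curl -I https://<YOUR_PAGE>/dmn/index.js
curl -I https://<YOUR_PAGE>/bpmn/index.js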
|
[
"<script src=\"https://<YOUR_PAGE>/dmn/index.js\"></script>",
"<script src=\"https://<YOUR_PAGE>/bpmn/index.js\"></script>",
"npm install @kie-tools/kie-editors-standalone",
"import * as DmnEditor from \"@kie-tools/kie-editors-standalone/dist/dmn\" import * as BpmnEditor from \"@kie-tools/kie-editors-standalone/dist/bpmn\"",
"const editor = DmnEditor.open({ container: document.getElementById(\"dmn-editor-container\"), initialContent: Promise.resolve(\"\"), readOnly: false, origin: \"\", resources: new Map([ [ \"MyIncludedModel.dmn\", { contentType: \"text\", content: Promise.resolve(\"\") } ] ]) });"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/con-bpmn-dmn-modelers_getting-started-decision-services
|
Chapter 1. Preparing to install on a single node
|
Chapter 1. Preparing to install on a single node 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. About OpenShift on a single node You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special Ignition configuration file. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability. Important The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default network plugin for single-node OpenShift deployments. 1.3. Requirements for installing OpenShift on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements: Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation. Note For the ppc64le platform, the host should prepare the ISO, but does not need to create the USB boot drive. The ISO can be mounted to PowerVM directly. Note ISO is not required for IBM Z(R) installations. CPU Architecture: Installing OpenShift Container Platform on a single node supports x86_64 , arm64 , ppc64le , and s390x CPU architectures. Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and Certified third-party hypervisors . In most cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file. The following list shows the only exceptions and the corresponding parameter to specify in the install-config.yaml configuration file: Amazon Web Services (AWS), where you use platform=aws Google Cloud Platform (GCP), where you use platform=gcp Microsoft Azure, where you use platform=azure Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 1.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 8 vCPUs 16 GB of RAM 120 GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Adding Operators during the installation process might increase the minimum resource requirements. The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Note BMC is not supported on IBM Z(R) and IBM Power(R). Networking: The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN): Table 1.2. Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. 
This record must be resolvable by both clients external to the cluster and within the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by both clients external to the cluster and within the cluster. Important Without persistent IP addresses, communications between the apiserver and etcd might fail.
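Before starting the installation, you can confirm from the administration host that the required DNS records resolve. The following minimal sketch assumes the <cluster_name> and <base_domain> placeholders from the table above; any host name can be substituted under the wildcard Ingress record.
# Each query should return the IP address reserved for the single-node cluster.
dig +short api.<cluster_name>.<base_domain>
dig +short api-int.<cluster_name>.<base_domain>
dig +short test.apps.<cluster_name>.<base_domain>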
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_a_single_node/preparing-to-install-sno
|
Chapter 7. Migrating to data science pipelines 2.0
|
Chapter 7. Migrating to data science pipelines 2.0 From OpenShift AI version 2.9, data science pipelines are based on KubeFlow Pipelines (KFP) version 2.0 . Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI. Important Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct customer usage of this installation of Argo Workflows. To install or upgrade to OpenShift AI 2.9 or later with data science pipelines, ensure that your cluster does not have an existing installation of Argo Workflows that is not installed by OpenShift AI. If there is an existing installation of Argo Workflows that is not installed by data science pipelines on your cluster, data science pipelines will be disabled after you install or upgrade OpenShift AI. To enable data science pipelines, remove the separate installation of Argo Workflows from your cluster. Data science pipelines will be enabled automatically. Argo Workflows resources that are created by OpenShift AI have the following labels in the OpenShift Console under Administration > CustomResourceDefinitions , in the argoproj.io group: 7.1. Upgrading to data science pipelines 2.0 Starting with OpenShift AI 2.16, data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server. OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. If you are upgrading to OpenShift AI 2.16 or later, you must manually migrate your existing data science pipelines 1.0 instances and update your workbenches. To upgrade to OpenShift AI 2.16 or later with data science pipelines 2.0, follow these steps: Note If you are using GitOps to manage your data science pipelines 1.0 pipeline runs, pause any sync operations related to data science pipelines including PipelineRuns or DataSciencePipelinesApplications (DSPAs) management. After migrating to data science pipelines 2.0, your PipelineRuns will be managed independently of data science pipelines, similar to any other Tekton resources. Back up your pipelines data. Deploy a new cluster (or use a different existing cluster) with Red Hat OpenShift AI 2.18 to use as an intermediate cluster. You will use this intermediate cluster to upload, test, and verify your new pipelines. In OpenShift AI 2.18 on the intermediate cluster, do the following tasks: Create a new data science project. Configure a new pipeline server. Important If you use an external database, you must use a different external database than the one you use for data science pipelines 1.0, as the database is migrated to data science pipelines 2.0 format. Update and recompile your data science pipelines 1.0 pipelines as described in Migrate to Kubeflow Pipelines v2 . Note Data science pipelines 2.0 does not use the kfp-tekton library. In most cases, you can replace usage of kfp-tekton with the kfp library. For data science pipelines 2.0, use the latest version of the KFP SDK. For more information, see the Kubeflow Pipelines SDK API Reference . Tip You can view historical data science pipelines 1.0 pipeline run information on your primary cluster in the OpenShift Console Developer perspective under Pipelines Project PipelineRuns . Import your updated pipelines to the new data science project. Test and verify your new pipelines. 
On your primary cluster, do the following tasks: Remove your data science pipelines 1.0 pipeline servers. Optional: Remove your data science pipelines 1.0 resources. For more information, see Removing data science pipelines 1.0 resources . Upgrade to Red Hat OpenShift AI 2.18. For more information, see Upgrading OpenShift AI Self-Managed , or for disconnected environments, Upgrading Red Hat OpenShift AI in a disconnected environment . In the upgraded instance of Red Hat OpenShift AI 2.18 on your primary cluster, do the following tasks: Recreate the pipeline servers for each data science project where the data science pipelines 1.0 pipeline servers existed. Note If you are using GitOps to manage your DSPAs, do the following tasks in your DSPAs before performing sync operations: Set spec.dspVersion to v2 . Verify that the apiVersion is using v1 instead of v1alpha1 . Import your updated data science pipelines to the applicable pipeline servers. Tip You can perform a batch upload by creating a script that uses the KFP SDK Client and the .upload_pipeline and .get_pipeline methods. For any workbenches that communicate with data science pipelines 1.0, do the following tasks in the upgraded instance of Red Hat OpenShift AI: Delete the existing workbench. For more information, see Deleting a workbench from a data science project . If you want to use the notebook image version 2024.2, upgrade to Python 3.11 before creating a new workbench. Create a new workbench that uses the existing persistent storage of the deleted workbench. For more information, see Creating a workbench . Run the pipeline so that the data science pipelines 2.0 pipeline server schedules it. 7.2. Removing data science pipelines 1.0 resources When your migration to data science pipelines 2.0 is complete on the intermediate cluster, you can clean up the data science pipelines 1.0 resources in your cluster. Important Before removing data science pipelines 1.0 resources, ensure that migration of your data science pipelines 1.0 pipelines to 2.0 is complete. Identify the DataSciencePipelinesApplication (DSPA) resource that corresponds to the data science pipelines 1.0 pipeline server: Delete the cluster role binding associated with this DSPA: Delete the DSPA: If necessary, delete the DataSciencePipelinesApplication finalizer to complete the removal of the resource: If you are not using OpenShift Pipelines for any purpose other than data science pipelines 1.0, you can remove the OpenShift Pipelines Operator. Data science pipelines 1.0 used the kfp-tekton Python library. Data science pipelines 2.0 does not use kfp-tekton . You can uninstall kfp-tekton when there are no remaining data science pipelines 1.0 pipeline servers in use on your cluster. Additional resources PyPI: kfp Kubeflow Pipelines SDK API Reference . Creating a data science project Configuring a pipeline server Importing a data science pipeline Deleting a pipeline server
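Before upgrading, you can check whether any Argo Workflows CRDs on the cluster were created outside of OpenShift AI. The following minimal sketch assumes cluster-admin access with the oc CLI; CRDs created by OpenShift AI carry the labels listed earlier in this chapter, so Argo CRDs without those labels indicate a separate Argo Workflows installation that must be removed before data science pipelines is enabled.
# List Argo Workflows CRDs together with their labels.
oc get crd --show-labels | grep argoproj.io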
|
[
"labels: app.kubernetes.io/part-of: data-science-pipelines-operator app.opendatahub.io/data-science-pipelines-operator: 'true'",
"get dspa -n <YOUR_DS_PROJECT>",
"delete clusterrolebinding ds-pipeline-ui-auth-delegator-<YOUR_DS_PROJECT>-dspa",
"delete dspa dspa -n <YOUR_DS_PROJECT>",
"patch dspa dspa -n <YOUR_DS_PROJECT> --type=merge -p \"{\\\"metadata\\\":{\\\"finalizers\\\":null}}\""
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_science_pipelines/migrating-to-data-science-pipelines-2_ds-pipelines
|
Chapter 3. Installing a cluster on Nutanix
|
Chapter 3. Installing a cluster on Nutanix In OpenShift Container Platform version 4.16, you can choose one of the following options to install a cluster on your Nutanix instance: Using installer-provisioned infrastructure : Use the procedures in the following sections to use installer-provisioned infrastructure. Installer-provisioned infrastructure is ideal for installing in connected or disconnected network environments. The installer-provisioned infrastructure includes an installation program that provisions the underlying infrastructure for the cluster. Using the Assisted Installer : The Assisted Installer hosted at console.redhat.com . The Assisted Installer cannot be used in disconnected environments. The Assisted Installer does not provision the underlying infrastructure for the cluster, so you must provision the infrastructure before you run the Assisted Installer. Installing with the Assisted Installer also provides integration with Nutanix, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details. Using user-provisioned infrastructure : Complete the relevant steps outlined in the Installing a cluster on any platform documentation. 3.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access to the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. 
Internet access for Prism Central Prism Central requires internet access to obtain the Red Hat Enterprise Linux CoreOS (RHCOS) image that is required to install the cluster. The RHCOS image for Nutanix is available at rhcos.mirror.openshift.com . 3.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.5. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.6. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 3.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 
When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Optional: Update one or more of the default configuration parameters in the install-config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0. This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Nutanix". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 3.7.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 10 12 15 16 17 18 19 21 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 13 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 TThe cluster network plugin to install. The default value OVNKubernetes is the only supported value. 14 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. 
If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.7.2. Configuring failure domains Failure domains improve the fault tolerance of an OpenShift Container Platform cluster by distributing control plane and compute machines across multiple Nutanix Prism Elements (clusters). Tip It is recommended that you configure three failure domains to ensure high-availability. Prerequisites You have an installation configuration file ( install-config.yaml ). Procedure Edit the install-config.yaml file and add the following stanza to configure the first failure domain: apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid> # ... where: <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <prism_element_name> Optional. Specifies the name of the Prism Element. <prism_element_uuid > Specifies the UUID of the Prism Element. <network_uuid > Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. As required, configure additional failure domains. To distribute control plane and compute machines across the failure domains, do one of the following: If compute and control plane machines can share the same set of failure domains, add the failure domain names under the cluster's default machine configuration. Example of control plane and compute machines sharing a set of failure domains apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... If compute and control plane machines must use different failure domains, add the failure domain names under the respective machine pools. Example of control plane and compute machines using different failure domains apiVersion: v1 baseDomain: example.com compute: # ... controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... 
compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 # ... Save the file. 3.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. 
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.9. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. 
Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. 
USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 3.10. Adding config map and secret resources required for Nutanix CCM Installations on Nutanix require additional ConfigMap and Secret resources to integrate with the Nutanix Cloud Controller Manager (CCM). Prerequisites You have created a manifests directory within your installation directory. Procedure Navigate to the manifests directory: USD cd <path_to_installation_directory>/manifests Create the cloud-conf ConfigMap file with the name openshift-cloud-controller-manager-cloud-config.yaml and add the following information: apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: "{ \"prismCentral\": { \"address\": \"<prism_central_FQDN/IP>\", 1 \"port\": 9440, \"credentialRef\": { \"kind\": \"Secret\", \"name\": \"nutanix-credentials\", \"namespace\": \"openshift-cloud-controller-manager\" } }, \"topologyDiscovery\": { \"type\": \"Prism\", \"topologyCategories\": null }, \"enableCustomLabeling\": true }" 1 Specify the Prism Central FQDN/IP. Verify that the file cluster-infrastructure-02-config.yml exists and has the following information: spec: cloudConfig: key: config name: cloud-provider-config 3.11. Services for a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Configuring a user-managed load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for a user-managed load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for user-managed load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. 
This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information: For the front-end IP address, you can use the same IP address for both the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.11.1. Configuring a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer. Note MetalLB, which runs on a cluster, functions as a user-managed load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 provides Ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80, and port 443 must be reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80, and port 443 must be reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable.
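For example, you can query these health check paths manually on a backend node to confirm that the corresponding service reports ready before you add the node to the load balancer. This is a minimal sketch and not part of the official procedure; the node IP addresses are placeholders, and the paths match the health check specifications shown next:
# Probe the health check endpoints directly on one control plane node and one
# node that runs the Ingress Controller. The IP addresses are placeholders.
CONTROL_PLANE_NODE=192.168.1.101
INGRESS_NODE=192.168.1.111

curl -k -s -o /dev/null -w 'API     /readyz        -> %{http_code}\n' "https://${CONTROL_PLANE_NODE}:6443/readyz"
curl -k -s -o /dev/null -w 'MCS     /healthz       -> %{http_code}\n' "https://${CONTROL_PLANE_NODE}:22623/healthz"
curl -s -o /dev/null -w 'Ingress /healthz/ready -> %{http_code}\n' "http://${INGRESS_NODE}:1936/healthz/ready"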
OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples show health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration. Example HAProxy configuration with one listed subnet # ... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Example HAProxy configuration with multiple listed subnets # ... 
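# In this example, each "listen" section binds a front-end port on all
# interfaces ("bind *:<port>") in TCP mode and forwards connections to the
# backend servers listed for it. "check inter 1s" runs a health check against
# each backend every second. Where "balance source" is set, the client source
# IP address is hashed so that a given client is consistently sent to the same
# backend.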
listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s # ... Use the curl CLI command to verify that the user-managed load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff 
x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's install-config.yaml file: # ... platform: nutanix: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3 # ... 1 Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault , which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services. 2 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. 3 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. 
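Before you run the verification commands that follow, you can optionally confirm that the DNS records resolve to the load balancer front end. This is a minimal sketch and not part of the official procedure; it assumes that the dig utility is installed and uses the record names and placeholder values from the DNS examples above:
# Confirm that the API and wildcard application records resolve to the
# user-managed load balancer front-end IP address. Placeholders follow the
# DNS record examples above.
CLUSTER=<cluster_name>
DOMAIN=<base_domain>
EXPECTED_IP=<load_balancer_ip_address>

for host in "api.${CLUSTER}.${DOMAIN}" "console-openshift-console.apps.${CLUSTER}.${DOMAIN}"; do
  resolved=$(dig +short "$host" | tail -n1)
  echo "${host} resolves to: ${resolved:-NONE}"
  [ "$resolved" = "$EXPECTED_IP" ] || echo "WARNING: ${host} does not resolve to ${EXPECTED_IP}" >&2
done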
Verification Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
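Before you initialize the deployment, you can optionally wrap the endpoint checks from the preceding verification into a single pre-flight script and re-run them. This is a minimal sketch and not part of the official procedure; set the placeholder variables to match your cluster:
#!/usr/bin/env bash
# Re-run the load balancer and DNS verification checks in one pass.
# CLUSTER and DOMAIN are placeholders that you must set for your environment.
set -u
CLUSTER=<cluster_name>
DOMAIN=<base_domain>

echo "== Kubernetes API =="
curl --insecure --silent "https://api.${CLUSTER}.${DOMAIN}:6443/version" || echo "API check failed"

echo "== Machine Config Server =="
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' \
  "https://api.${CLUSTER}.${DOMAIN}:22623/healthz" || echo "MCS check failed"

echo "== Console over HTTP (port 80) and HTTPS (port 443) =="
curl -I -L --insecure --silent --output /dev/null --write-out 'HTTP  -> %{http_code}\n' \
  "http://console-openshift-console.apps.${CLUSTER}.${DOMAIN}"
curl -I -L --insecure --silent --output /dev/null --write-out 'HTTPS -> %{http_code}\n' \
  "https://console-openshift-console.apps.${CLUSTER}.${DOMAIN}"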
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.13. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 3.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 3.15. Additional resources About remote health monitoring 3.16. Next steps Opt out of remote health reporting Customize your cluster
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3",
"apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"cd <path_to_installation_directory>/manifests",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: \"{ \\\"prismCentral\\\": { \\\"address\\\": \\\"<prism_central_FQDN/IP>\\\", 1 \\\"port\\\": 9440, \\\"credentialRef\\\": { \\\"kind\\\": \\\"Secret\\\", \\\"name\\\": \\\"nutanix-credentials\\\", \\\"namespace\\\": \\\"openshift-cloud-controller-manager\\\" } }, \\\"topologyDiscovery\\\": { \\\"type\\\": \\\"Prism\\\", \\\"topologyCategories\\\": null }, \\\"enableCustomLabeling\\\": true }\"",
"spec: cloudConfig: key: config name: cloud-provider-config",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: nutanix: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_nutanix/installing-nutanix-installer-provisioned
|
Chapter 1. Overview of compliance service reports
|
Chapter 1. Overview of compliance service reports The compliance service enables users to download data based on the filters in place at the time of download. Downloading a compliance report requires the following actions: Uploading current system data to Red Hat Insights for Red Hat Enterprise Linux Filtering your results in the compliance service web console Downloading reports, either by exporting comma-separated values (CSV) or JavaScript Object Notation (JSON) data, or as a PDF
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports/assembly-compl-report-overview
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
| null |
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/visualizing_your_costs_using_cost_explorer/proc-providing-feedback-on-redhat-documentation
|
4.225. perl-Test-Spelling
|
4.225. perl-Test-Spelling 4.225.1. RHBA-2011:1093 - perl-Test-Spelling bug fix update An updated perl-Test-Spelling package that fixes one bug is now available for Red Hat Enterprise Linux 6. The perl-Test-Spelling package allows users to check the spelling of POD files. Bug Fix BZ# 636835 Prior to this update, the perl-Test-Spelling package erroneously required the aspell package instead of the hunspell package at runtime. This update fixes the problem by correcting perl-Test-Spelling's runtime dependencies so that the hunspell package is now required, as expected. All users of perl-Test-Spelling should upgrade to this updated package, which fixes this bug.
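As a quick way to confirm the corrected dependency on an installed system, you can query the package's runtime requirements. This is an illustrative sketch rather than part of the erratum, and the exact dependency string in the output can vary:
# List the runtime requirements of perl-Test-Spelling and check which spell
# checker it depends on; the fixed package should reference hunspell.
rpm -q --requires perl-Test-Spelling | grep -i -E 'hunspell|aspell'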
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/perl-test-spelling
|
Use Red Hat Quay
|
Use Red Hat Quay Red Hat Quay 3 Use Red Hat Quay Red Hat OpenShift Documentation Team
|
[
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"username\": \"newuser\", \"email\": \"[email protected]\" }' \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"{\"username\": \"newuser\", \"email\": \"[email protected]\", \"password\": \"IJWZ8TIY301KPFOW3WEUJEVZ3JR11CY1\", \"encrypted_password\": \"9Q36xF54YEOLjetayC0NBaIKgcFFmIHsS3xTZDLzZSrhTBkxUc9FDwUKfnxLWhco6oBJV1NDBjoBcDGmsZMYPt1dSA4yWpPe/JKY9pnDcsw=\"}",
"podman login <quay-server.example.com>",
"username: newuser password: IJWZ8TIY301KPFOW3WEUJEVZ3JR11CY1",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"{\"users\": [{\"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}, \"super_user\": true, \"enabled\": true}, {\"kind\": \"user\", \"name\": \"newuser\", \"username\": \"newuser\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": {\"name\": \"newuser\", \"hash\": \"f338a2c83bfdde84abe2d3348994d70c34185a234cfbf32f9e323e3578e7e771\", \"color\": \"#9edae5\", \"kind\": \"user\"}, \"super_user\": false, \"enabled\": true}]}",
"curl -X DELETE -H \"Authorization: Bearer <insert token here>\" https://<quay-server.example.com>/api/v1/superuser/users/<username>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"",
"\"Created\"",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"email\": \"<org_email>\", \"invoice_email\": <true/false>, \"invoice_email_address\": \"<billing_email>\" }'",
"{\"name\": \"test\", \"email\": \"[email protected]\", \"avatar\": {\"name\": \"test\", \"hash\": \"a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe\", \"color\": \"#aec7e8\", \"kind\": \"user\"}, \"is_admin\": true, \"is_member\": true, \"teams\": {\"owners\": {\"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false}}, \"ordered_teams\": [\"owners\"], \"invoice_email\": true, \"invoice_email_address\": \"[email protected]\", \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"quotas\": [{\"id\": 2, \"limit_bytes\": 10737418240, \"limits\": [{\"id\": 1, \"type\": \"Reject\", \"limit_percent\": 90}]}], \"quota_report\": {\"quota_bytes\": 0, \"configured_quota\": 10737418240, \"running_backfill\": \"complete\", \"backfill_status\": \"complete\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://<quay-server.example.com>/api/v1/error/not_found\", \"status\": 404}",
"sudo podman pull busybox",
"Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9",
"sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test",
"Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<private>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"",
"{\"namespace\": \"quayadmin\", \"name\": \"<new_repository_name>\", \"kind\": \"image\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://quay-server.example.com/api/v1/error/not_found\", \"status\": 404}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"",
"{\"name\": \"orgname+robot-name\", \"created\": \"Fri, 10 May 2024 15:11:00 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"",
"{\"name\": \"quayadmin+robot-name\", \"created\": \"Fri, 10 May 2024 15:24:57 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"ROBOTS_DISALLOW: true",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" <quay-server.example.com>",
"Error: logging into \"<quay-server.example.com>\": invalid username/password",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" --log-level=debug <quay-server.example.com>",
"DEBU[0000] error logging into \"quay-server.example.com\": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator.",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"",
"{\"robots\": []}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"{\"message\":\"Could not find robot with specified username\"}",
"http://localhost:8080/realms/master/protocol/openid-connect/token",
"http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id>",
"https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43",
"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43",
"curl -X POST \"http://localhost:8080/realms/master/protocol/openid-connect/token\" 1 -H \"Content-Type: application/x-www-form-urlencoded\" -d \"client_id=quaydev\" 2 -d \"client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz\" 3 -d \"grant_type=authorization_code\" -d \"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43\" 4",
"{\"access_token\":\"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...\", \"expires_in\":60,\"refresh_expires_in\":1800,\"refresh_token\":\"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw\",\"token_type\":\"Bearer\",\"not-before-policy\":0,\"session_state\":\"5c9bce22-6b85-4654-b716-e9bbb3e755bc\",\"scope\":\"profile email\"}",
"import requests import os TOKEN=os.environ.get('TOKEN') robot_user = \"fed-test+robot1\" def get_quay_robot_token(fed_token): URL = \"https://<quay-server.example.com>/oauth2/federation/robot/token\" response = requests.get(URL, auth=(robot_user,fed_token)) 1 print(response) print(response.text) if __name__ == \"__main__\": get_quay_robot_token(TOKEN)",
"export TOKEN = eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0",
"python3 robot_fed_token_auth.py",
"<Response [200]> {\"token\": \"291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ...\"}",
"export QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ",
"podman login <quay-server.example.com> -u fed_test+robot1 -p USDQUAY_TOKEN",
"podman pull <quay-server.example.com/<repository_name>/<image_name>>",
"Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps",
"podman pull <quay-server.example.com/<different_repository_name>/<image_name>>",
"Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized",
"curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H \"Authorization: Bearer <bearer_token>\" --data '{\"role\": \"creator\"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>",
"{\"name\": \"example_team\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"example_team\", \"hash\": \"dec209fd7312a2284b689d4db3135e2846f27e0f40fa126776a0ce17366bc989\", \"color\": \"#e7ba52\", \"kind\": \"team\"}, \"new_team\": true}",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"{\"name\": \"testuser\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"d51d17303dc3271ac3266fb332d7df919bab882bbfc7199d2017a4daac8979f0\", \"color\": \"#5254a3\", \"kind\": \"user\"}, \"invited\": false}",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members\"",
"{\"name\": \"owners\", \"members\": [{\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}, \"invited\": false}, {\"name\": \"test-org+test\", \"kind\": \"user\", \"is_robot\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}, \"invited\": false}], \"can_edit\": true}",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions\"",
"{\"permissions\": [{\"repository\": {\"name\": \"api-repo\", \"is_public\": true}, \"role\": \"admin\"}]}",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"role\": \"<role>\" }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"",
"{\"name\": \"testteam\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"testteam\", \"hash\": \"827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8\", \"color\": \"#ff7f0e\", \"kind\": \"team\"}, \"new_team\": false}",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"admin\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>",
"{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"write\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"{\"prototypes\": []}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"{\"role\": \"admin\", \"name\": \"quayadmin+test\", \"is_robot\": true, \"avatar\": {\"name\": \"quayadmin+test\", \"hash\": \"ca9afae0a9d3ca322fc8a7a866e8476dd6c98de543decd186ae090e420a88feb\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/",
"{\"message\":\"User does not have permission for repo.\"}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>?includeTags=true",
"{\"namespace\": \"quayadmin\", \"name\": \"busybox\", \"kind\": \"image\", \"description\": null, \"is_public\": false, \"is_organization\": false, \"is_starred\": false, \"status_token\": \"d8f5e074-690a-46d7-83c8-8d4e3d3d0715\", \"trust_enabled\": false, \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"state\": \"NORMAL\", \"tags\": {\"example\": {\"name\": \"example\", \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}, \"test\": {\"name\": \"test\", \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}}, \"can_write\": true, \"can_admin\": true}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/",
"{\"tags\": [{\"name\": \"test-two\", \"reversion\": true, \"start_ts\": 1718737153, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1718737029, \"end_ts\": 1718737153, \"manifest_digest\": \"sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea\", \"is_manifest_list\": false, \"size\": 2275317, \"last_modified\": \"Tue, 18 Jun 2024 18:57:09 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1718737018, \"end_ts\": 1718737029, \"manifest_digest\": \"sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea\", \"is_manifest_list\": false, \"size\": 2275317, \"last_modified\": \"Tue, 18 Jun 2024 18:56:58 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:57:09 -0000\"}, {\"name\": \"sample_tag\", \"reversion\": false, \"start_ts\": 1718736147, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:42:27 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1717680780, \"end_ts\": 1718737018, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:33:00 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:56:58 -0000\"}, {\"name\": \"tag-test\", \"reversion\": false, \"start_ts\": 1717680378, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:26:18 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"\"Updated\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore",
"{}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag",
"{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"{\"labels\": [{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}, {\"id\": \"2d34ec64-4051-43ad-ae06-d5f81003576a\", \"key\": \"org.opencontainers.image.version\", \"value\": \"1.36.1-glibc\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}]}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id>",
"{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"key\": \"<key>\", \"value\": \"<value>\", \"media_type\": \"<media_type>\" }' https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"{\"label\": {\"id\": \"346593fd-18c8-49db-854f-4cb1fb76ff9c\", \"key\": \"example-key\", \"value\": \"example-value\", \"source_type\": \"api\", \"media_type\": \"text/plain\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid>",
"docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag>",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"\"Updated\"",
"podman pull quay-server.example.com/quayadmin/busybox:test2",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/tag/?onlyActiveTags=true&page=1&limit=10\"",
"{\"tags\": [{\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1717680780, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:33:00 -0000\"}, {\"name\": \"tag-test\", \"reversion\": false, \"start_ts\": 1717680378, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:26:18 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/quayadmin/busybox/tag/?onlyActiveTags=true&page=1&limit=20&specificTag=test-two\"",
"{\"tags\": [{\"name\": \"test-two\", \"reversion\": true, \"start_ts\": 1718737153, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag",
"{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore",
"{}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag",
"{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://<quay-server.example.com>/api/v1/user/aggregatelogs\"",
"{\"aggregated\": [{\"kind\": \"create_tag\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"manifest_label_add\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"push_repo\", \"count\": 2, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"revert_tag\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}]}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/<repository_name>/<namespace>/aggregatelogs?starttime=2024-01-01&endtime=2024-06-18\"\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://quay-server.example.com/api/v1/user/logs?performer=quayuser&starttime=01/01/2024&endtime=06/18/2024\"",
"--- {\"start_time\": \"Mon, 01 Jan 2024 00:00:00 -0000\", \"end_time\": \"Wed, 19 Jun 2024 00:00:00 -0000\", \"logs\": [{\"kind\": \"revert_tag\", \"metadata\": {\"username\": \"quayuser\", \"repo\": \"busybox\", \"tag\": \"test-two\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}, \"ip\": \"192.168.1.131\", \"datetime\": \"Tue, 18 Jun 2024 18:59:13 -0000\", \"performer\": {\"kind\": \"user\", \"name\": \"quayuser\", \"is_robot\": false, \"avatar\": {\"name\": \"quayuser\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, {\"kind\": \"push_repo\", \"metadata\": {\"repo\": \"busybox\", \"namespace\": \"quayuser\", \"user-agent\": \"containers/5.30.1 (github.com/containers/image)\", \"tag\": \"test-two\", \"username\": \"quayuser\", } ---",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/organization/{orgname}/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/repository/{repository}/logs\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/user/exportlogs\"",
"{\"export_id\": \"6a0b9ea9-444c-4a19-9db8-113201c38cd4\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"2024-01-01\", \"endtime\": \"2024-06-18\", \"callback_url\": \"http://your-callback-url.example.com\" }' \"http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=<true_or_false>\"",
"{\"status\": \"queued\", \"data\": null}",
"NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1",
"Test Notification Queued A test version of this notification has been queued and should appear shortly",
"{ \"repository\": \"sample_org/busybox\", \"namespace\": \"sample_org\", \"name\": \"busybox\", \"docker_url\": \"quay-server.example.com/sample_org/busybox\", \"homepage\": \"http://quay-server.example.com/repository/sample_org/busybox\", \"tags\": [ \"latest\", \"v1\" ], \"expiring_in\": \"1 days\" }",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/",
"{\"uuid\": \"240662ea-597b-499d-98bb-2b57e73408d6\", \"title\": null, \"event\": \"repo_push\", \"method\": \"quay_notification\", \"config\": {\"target\": {\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, \"event_config\": {}, \"number_of_failures\": 0}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test",
"{}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification",
"{\"notifications\": []}",
"{ \"name\": \"repository\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"updated_tags\": [ \"latest\" ] }",
"{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"repo\": \"test\", \"trigger_metadata\": { \"default_branch\": \"master\", \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional }, \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" } } }, \"is_manual\": false, \"manual_user\": null, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\" }",
"{ \"build_id\": \"a8cc247a-a662-4fee-8dcb-7d7e822b71ba\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"50bc599\", \"trigger_metadata\": { //Optional \"commit\": \"50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"date\": \"2019-03-06T14:10:14+11:00\", \"message\": \"test build\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/a8cc247a-a662-4fee-8dcb-7d7e822b71ba\" }",
"{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"b7f7d2b\", \"image_id\": \"sha256:0339f178f26ae24930e9ad32751d6839015109eabdf1c25b3b0f2abf8934f6cb\", \"trigger_metadata\": { \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\", \"manifest_digests\": [ \"quay.io/dgangaia/test@sha256:2a7af5265344cc3704d5d47c4604b1efcbd227a7a6a6ff73d6e4e08a27fd7d99\", \"quay.io/dgangaia/test@sha256:569e7db1a867069835e8e97d50c96eccafde65f08ea3e0d5debaf16e2545d9d1\" ] }",
"{ \"build_id\": \"5346a21d-3434-4764-85be-5be1296f293c\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"docker_url\": \"quay.io/dgangaia/test\", \"error_message\": \"Could not find or parse Dockerfile: unknown instruction: GIT\", \"namespace\": \"dgangaia\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"6ae9a86\", \"trigger_metadata\": { //Optional \"commit\": \"6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"date\": \"2019-03-06T14:18:16+11:00\", \"message\": \"failed build test\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/5346a21d-3434-4764-85be-5be1296f293c\" }",
"{ \"build_id\": \"cbd534c5-f1c0-4816-b4e3-55446b851e70\", \"trigger_kind\": \"github\", \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"cbce83c\", \"trigger_metadata\": { \"commit\": \"cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { \"url\": \"https://github.com/dgangaia/test/commit/cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"date\": \"2019-03-06T14:27:53+11:00\", \"message\": \"testing cancel build\", \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" }, \"author\": { \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/cbd534c5-f1c0-4816-b4e3-55446b851e70\" }",
"{ \"repository\": \"dgangaia/repository\", \"namespace\": \"dgangaia\", \"name\": \"repository\", \"docker_url\": \"quay.io/dgangaia/repository\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"tags\": [\"latest\", \"othertag\"], \"vulnerability\": { \"id\": \"CVE-1234-5678\", \"description\": \"This is a bad vulnerability\", \"link\": \"http://url/to/vuln/info\", \"priority\": \"Critical\", \"has_fix\": true } }",
"**Default:** `False`",
"FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 10737418240, \"limits\": \"10 Gi\" }'",
"\"Created\"",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[{\"id\": 1, \"limit_bytes\": 10737418240, \"limit\": \"10.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}]",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <limit_in_bytes> }'",
"{\"id\": 1, \"limit_bytes\": 21474836480, \"limit\": \"20.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}",
"podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }",
"podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true'",
"{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, \"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq",
"{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]",
"podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04",
"Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace",
"podman pull <registry_url>/<organization_name>/<quayio_namespace>/<image_name>",
"podman pull quay-server.example.com/proxytest/projectquay/quay:3.7.9",
"podman pull quay-server.example.com/proxytest/projectquay/quay:3.6.2",
"podman pull quay-server.example.com/proxytest/projectquay/quay:3.5.1",
"sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"helm repo add redhat-cop https://redhat-cop.github.io/helm-charts",
"helm repo update",
"helm pull redhat-cop/etherpad --version=0.0.4 --untar",
"helm package ./etherpad",
"Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz",
"helm registry login quay370.apps.quayperf370.perfscale.devcluster.openshift.com",
"helm push etherpad-0.0.4.tgz oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com",
"Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b",
"rm -rf etherpad-0.0.4.tgz",
"helm pull oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad --version 0.0.4",
"Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902",
"oras push --annotation \"quay.expires-after=2d\" \\ 1 --annotation \"expiration = 2d\" \\ 2 quay.io/<organization_name>/<repository>/<image_name>:<tag>",
"[✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 561/561 B 100.00% 511ms └─ sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b Pushed [registry] quay.io/stevsmit/testorg3/oci-image:v1 ArtifactType: application/vnd.unknown.artifact.v1 Digest: sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b",
"oras pull quay.io/<organization_name>/<repository>/<image_name>:<tag>",
"oras manifest fetch quay.io/<organization_name>/<repository>/<image_name>:<tag>",
"{\"schemaVersion\":2,\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"artifactType\":\"application/vnd.unknown.artifact.v1\",\"config\":{\"mediaType\":\"application/vnd.oci.empty.v1+json\",\"digest\":\"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a\",\"size\":2,\"data\":\"e30=\"},\"layers\":[{\"mediaType\":\"application/vnd.oci.empty.v1+json\",\"digest\":\"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a\",\"size\":2,\"data\":\"e30=\"}],\"annotations\":{\"org.opencontainers.image.created\":\"2024-07-11T15:22:42Z\",\"version \":\" 8.11\"}}",
"podman tag <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>",
"podman push <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>",
"oras attach --artifact-type <MIME_type> --distribution-spec v1.1-referrers-api <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> <example_file>.txt",
"-spec v1.1-referrers-api quay.io/testorg3/myartifact-image:v1.0 hi.txt [✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 677ms └─ sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6",
"oras attach --artifact-type <MIME_type> --distribution-spec v1.1-referrers-tag <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> <example_file>.txt",
"[✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 465ms └─ sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383",
"oras discover --insecure --distribution-spec v1.1-referrers-tag <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>",
"quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da └── doc/example └── sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383",
"oras discover --distribution-spec v1.1-referrers-api <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>",
"Discovered 3 artifacts referencing v1.0 Digest: sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Artifact Type Digest sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 sha256:22b7e167793808f83db66f7d35fbe0088b34560f34f8ead36019a4cc48fd346b sha256:bb2b7e7c3a58fd9ba60349473b3a746f9fe78995a88cb329fc2fd1fd892ea4e4",
"FEATURE_REFERRERS_API: true",
"echo -n '<username>:<password>' | base64",
"abcdeWFkbWluOjE5ODlraWROZXQxIQ==",
"curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq",
"{ \"token\": \"eyJhbGciOiJSUzI1NiIsImtpZCI6Ijl5RWNtWmdiZ0l6czBBZW16emhTMHM1R0g2RDJnV2JGUTdUNGZYand4MlUiLCJ0eXAiOiJKV1QifQ...\" }",
"GET https://<quay-server.example.com>/v2/<organization_name>/<repository_name>/referrers/sha256:0de63ba2d98ab328218a1b6373def69ec0d0e7535866f50589111285f2bf3fb8 --header 'Authorization: Bearer <v2_bearer_token> -k | jq",
"{ \"schemaVersion\": 2, \"mediaType\": \"application/vnd.oci.image.index.v1+json\", \"manifests\": [ { \"mediaType\": \"application/vnd.oci.image.manifest.v1+json\", \"digest\": \"sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383\", \"size\": 793 }, ] }"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html-single/use_red_hat_quay/index
|
Chapter 21. Artifact management
|
Chapter 21. Artifact management You can manage artifacts from the Artifacts page in Business Central. The artifact repository is a local Maven repository and there is only one Maven repository for each installation. Business Central recommends using Maven repository solutions like Sonatype Nexus™, Apache Archiva™, or JFrog Artifactory™. The Artifacts page lists all the artifacts in the Maven repository. You can upload artifacts to the Maven repository. Note You can only upload JAR, KJAR, and pom.xml files to the Artifacts repository. 21.1. Viewing an artifact You can view all the content of the local Maven repository from the Artifacts page. Procedure In Business Central, select the Admin icon in the upper-right corner of the screen and select Artifacts . Click Open to view the artifact details. Click Ok to go back to the Artifacts page. 21.2. Downloading an artifact You can download and save an artifact from the Business Central repository to the local storage of a project. Procedure In Business Central, select the Admin icon in the upper-right corner of the screen and select Artifacts . Click Download . Browse to the directory where you want to save the artifact. Click Save . 21.3. Uploading an artifact You can upload an artifact from the local storage to a project in Business Central. Procedure In Business Central, select the Admin icon in the upper-right corner of the screen and select Artifacts . Click Upload . Click Choose File and browse to the directory from where you want to upload the artifact. Click Upload . Note If you are using a non-Maven artifact, first deploy the artifact to the Maven repository using the mvn deploy command and then refresh the artifact list in Business Central.
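The note above references the mvn deploy command for non-Maven artifacts. A minimal sketch follows, assuming a hypothetical JAR, placeholder GAV coordinates, and a placeholder Business Central repository URL, repository id, and credentials; adjust all of these for your installation:

# Deploy a plain JAR into the Business Central Maven repository so it appears on the Artifacts page.
# The file name, coordinates, repository id, and URL below are illustrative placeholders.
mvn deploy:deploy-file \
  -Dfile=./my-library.jar \
  -DgroupId=com.example \
  -DartifactId=my-library \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -DrepositoryId=business-central \
  -Durl=http://business-central.example.com:8080/business-central/maven2/
# Afterwards, refresh the artifact list in Business Central to see the new entry.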
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/con-business-central-artifacts_configuring-central
|
Chapter 5. File Systems
|
Chapter 5. File Systems Support of Btrfs File System The Btrfs (B-Tree) file system is supported as a Technology Preview in Red Hat Enterprise Linux 7.1. This file system offers advanced management, reliability, and scalability features. It enables users to create snapshots, and it supports compression and integrated device management. OverlayFS The OverlayFS file system service allows the user to "overlay" one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This can be useful because it allows multiple users to share a file-system image, for example containers, or when the base image is on read-only media, for example a DVD-ROM. In Red Hat Enterprise Linux 7.1, OverlayFS is supported as a Technology Preview. There are currently two restrictions: It is recommended to use ext4 as the lower file system; the use of xfs and gfs2 file systems is not supported. SELinux is not supported, and to use OverlayFS, it is required to disable enforcing mode. A minimal mount sketch follows at the end of this chapter. Support of Parallel NFS Parallel NFS (pNFS) is a part of the NFS v4.1 standard that allows clients to access storage devices directly and in parallel. The pNFS architecture can improve the scalability and performance of NFS servers for several common workloads. pNFS defines three different storage protocols or layouts: files, objects, and blocks. The client supports the files layout, and since Red Hat Enterprise Linux 7.1, the blocks and object layouts are fully supported. Red Hat continues to work with partners and open source projects to qualify new pNFS layout types and to provide full support for more layout types in the future. For more information on pNFS, refer to http://www.pnfs.com/ .
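As an illustration of the OverlayFS restrictions noted above, the following is a minimal mount sketch assuming an ext4 lower directory and the overlay module shipped with this release; the directory names are placeholders and the exact mount options can vary between kernel versions:

# SELinux is not supported with this Technology Preview, so disable enforcing mode first.
setenforce 0
# The lower layer stays read-only; all changes are written to the upper layer.
mkdir -p /srv/lower /srv/upper /srv/work /srv/merged
mount -t overlay overlay \
      -o lowerdir=/srv/lower,upperdir=/srv/upper,workdir=/srv/work /srv/merged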
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-File_Systems
|
20.3. Adding a Group from a Directory Service
|
20.3. Adding a Group from a Directory Service The API adds existing directory service groups to the Red Hat Virtualization Manager database with a POST request to the groups collection. Example 20.2. Adding a group from a directory service
|
[
"POST /ovirt-engine/api/group HTTP/1.1 Content-Type: application/xml Accept: application/xml <group> <name>www.example.com/accounts/groups/mygroup</name> <domain> <name>example.com</name> </domain> </group>"
] |
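For convenience, the same request can also be issued with curl. This is a sketch only: it mirrors the XML body and API path shown above, and the Manager host name and administrator credentials are placeholders.

curl -k -X POST \
     -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -H 'Accept: application/xml' \
     -d '<group><name>www.example.com/accounts/groups/mygroup</name><domain><name>example.com</name></domain></group>' \
     'https://rhvm.example.com/ovirt-engine/api/group'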
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/adding_a_group_from_a_directory_service
|
13.2. Preparing for a Driver Update During Installation
|
13.2. Preparing for a Driver Update During Installation If a driver update is necessary and available for your hardware, Red Hat or a trusted third party such as the hardware vendor will typically provide it in the form of an image file in ISO format. Some methods of performing a driver update require you to make the image file available to the installation program, while others require you to use the image file to make a driver update disk: Methods that use the image file itself local hard drive USB flash drive Methods that use a driver update disk produced from an image file CD DVD Choose a method to provide the driver update, and refer to Section 13.2.1, "Preparing to Use a Driver Update Image File" , Section 13.2.2, "Preparing a Driver Disc" , or Section 13.2.3, "Preparing an Initial RAM Disk Update" . Note that you can use a USB storage device either to provide an image file, or as a driver update disk. 13.2.1. Preparing to Use a Driver Update Image File 13.2.1.1. Preparing to use an image file on local storage To make the ISO image file available on local storage, such as a hard drive or USB flash drive, you must first determine whether you want to install the updates automatically or select them manually. For manual installations, copy the file onto the storage device. You can rename the file if you find it helpful to do so, but you must not change the filename extension, which must remain .iso . In the following example, the file is named dd.iso : Figure 13.1. Content of a USB flash drive holding a driver update image file Note that if you use this method, the storage device will contain only a single file. This differs from driver discs on formats such as CD and DVD, which contain many files. The ISO image file contains all of the files that would normally be on a driver disc. Refer to Section 13.3.2, "Let the Installer Prompt You for a Driver Update" and Section 13.3.3, "Use a Boot Option to Specify a Driver Update Disk" to learn how to select the driver update manually during installation. For automatic installations, you will need to extract the ISO to the root directory of the storage device rather than simply copy it. Copying the ISO is only effective for manual installations. You must also change the file system label of the device to OEMDRV . The installation program will then automatically examine it for driver updates and load any that it detects. This behavior is controlled by the dlabel=on boot option, which is enabled by default. Refer to Section 6.3.1, "Let the Installer Find a Driver Update Disk Automatically" .
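A minimal sketch of the two approaches described in this section, assuming the USB flash drive is the /dev/sdb1 partition and is formatted with an ext file system; the device name and mount points are placeholders:

# Manual selection during installation: copy the image file onto the drive unchanged.
mkdir -p /mnt/usb /mnt/iso
mount /dev/sdb1 /mnt/usb
cp dd.iso /mnt/usb/

# Automatic detection: extract the ISO contents to the drive root and label the file system OEMDRV.
mount -o loop dd.iso /mnt/iso
cp -r /mnt/iso/* /mnt/usb/
umount /mnt/iso /mnt/usb
e2label /dev/sdb1 OEMDRV     # use dosfslabel instead for a VFAT-formatted drive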
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-Preparing_for_a_driver_update_during_installation-ppc
|
Chapter 1. JFR creation options for Cryostat
|
Chapter 1. JFR creation options for Cryostat With Cryostat, you can create a JDK Flight Recorder (JFR) recording that monitors the performance of your JVM in your containerized application. Additionally, you can take a snapshot of an active JFR recording to capture any collected data, up to a specific point in time, for your target JVM application. Cryostat supports all of the following different ways to create JFR recordings: You can use the Cryostat web console to create JFR recordings manually for target JVMs that are using a JMX or agent HTTP connection. The Cryostat server can send on-demand requests over JMX or an agent HTTP connection to start JFR recordings dynamically based on automated rules. The Cryostat agent can start JFR recordings automatically at agent startup based on a given event template as part of the agent harvester feature. From Red Hat build of Cryostat 2.4 onward, the Cryostat agent can start JFR recordings dynamically based on MBean custom triggers and a given event template. The rest of this document describes how to create a JFR recording manually in the Cryostat web console. Additional resources Using automated rules on Cryostat Enabling dynamic JFR recordings based on MBean custom triggers
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/creating_a_jfr_recording_with_cryostat/con_jfr-creation-options-for-cryostat_cryostat
|
3.7. Active-State Power Management
|
3.7. Active-State Power Management Active-State Power Management (ASPM) saves power in the Peripheral Component Interconnect Express (PCI Express or PCIe) subsystem by setting a lower power state for PCIe links when the devices to which they connect are not in use. ASPM controls the power state at both ends of the link, and saves power in the link even when the device at the end of the link is in a fully powered-on state. When ASPM is enabled, device latency increases because of the time required to transition the link between different power states. ASPM has three policies to determine power states: default sets PCIe link power states according to the defaults specified by the firmware on the system (for example, BIOS). This is the default state for ASPM. powersave sets ASPM to save power wherever possible, regardless of the cost to performance. performance disables ASPM to allow PCIe links to operate with maximum performance. You can forcibly enable or disable ASPM support by using the pcie_aspm kernel parameter: pcie_aspm=off disables ASPM pcie_aspm=force enables ASPM, even on devices that do not support ASPM If the hardware supports ASPM, the operating system enables ASPM automatically at boot time. To check the ASPM support, see the output of the following command: Warning If you forcibly enable ASPM by using pcie_aspm=force on hardware that does not support ASPM, the system might become unresponsive. Before setting pcie_aspm=force , ensure that all PCIe hardware on the system supports ASPM. To set the ASPM policies, use one of the following options: modify the settings in the /sys/module/pcie_aspm/parameters/policy file specify the pcie_aspm.policy kernel parameter at boot time For example, pcie_aspm.policy=performance sets the ASPM performance policy.
|
[
"~]$ journalctl -b | grep ASPM"
] |
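A short sketch of inspecting and changing the ASPM policy through the interfaces described above; the grubby invocation for persisting the setting is an assumption about how boot parameters are managed on your system:

# Show the available policies; the active one is listed in square brackets.
cat /sys/module/pcie_aspm/parameters/policy
# Switch to the powersave policy at runtime.
echo powersave > /sys/module/pcie_aspm/parameters/policy
# Apply a policy at boot time by adding the kernel parameter to every boot entry.
grubby --update-kernel=ALL --args="pcie_aspm.policy=performance"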
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/aspm
|
Chapter 7. Configure storage for OpenShift Container Platform services
|
Chapter 7. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 7.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . 
Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 7.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below, the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 7.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 7.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 7.3. Persistent Volume Claims attached to prometheus-k8s-* pod 7.3.
Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) so that it is backed by OpenShift Data Foundation. Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 7.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. A copy of the shard is replicated across all the nodes and is always available, and, due to the single redundancy policy, the copy can be recovered if at least two nodes exist. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 7.3.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com .
In the example given below, the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add a toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 7.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workloads Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter Curator time to avoid a PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the index data retention to the default of 5 days, as shown in the following example. For more details, see Curation of Elasticsearch Data . Note To uninstall cluster logging backed by a Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. A command-line sketch of the image registry configuration from Section 7.1 follows the YAML examples below.
|
[
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/configure_storage_for_openshift_container_platform_services
|
Chapter 75. tsigkey
|
Chapter 75. tsigkey This chapter describes the commands under the tsigkey command. 75.1. tsigkey create Create new tsigkey Usage: Table 75.1. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --secret SECRET Tsigkey secret --scope SCOPE Tsigkey scope --resource-id RESOURCE_ID Tsigkey resource_id --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 75.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 75.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 75.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 75.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 75.2. tsigkey delete Delete tsigkey Usage: Table 75.6. Positional arguments Value Summary id Tsigkey id Table 75.7. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 75.3. tsigkey list List tsigkeys Usage: Table 75.8. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --scope SCOPE Tsigkey scope --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 75.9. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 75.10. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 75.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 75.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 75.4. tsigkey set Set tsigkey properties Usage: Table 75.13. Positional arguments Value Summary id Tsigkey id Table 75.14. 
Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --secret SECRET Tsigkey secret --scope SCOPE Tsigkey scope --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 75.15. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 75.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 75.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 75.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 75.5. tsigkey show Show tsigkey details Usage: Table 75.19. Positional arguments Value Summary id Tsigkey id Table 75.20. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 75.21. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 75.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 75.23. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 75.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
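As an illustration of the tsigkey create and list commands described above, a create-and-verify sequence might look like the following sketch. The key name, algorithm, scope, and resource ID values are placeholders chosen for the example and must match what your Designate deployment supports:

# Create a TSIG key associated with a zone (example values only)
openstack tsigkey create --name example-key \
  --algorithm hmac-sha256 \
  --secret <base64_encoded_secret> \
  --scope ZONE \
  --resource-id <zone_id>

# Confirm that the key was stored
openstack tsigkey list --name example-key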
|
[
"openstack tsigkey create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name NAME --algorithm ALGORITHM --secret SECRET --scope SCOPE --resource-id RESOURCE_ID [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack tsigkey delete [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack tsigkey list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name NAME] [--algorithm ALGORITHM] [--scope SCOPE] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack tsigkey set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] [--algorithm ALGORITHM] [--secret SECRET] [--scope SCOPE] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack tsigkey show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/tsigkey
|
Red Hat JBoss Core Services ModSecurity Guide
|
Red Hat JBoss Core Services ModSecurity Guide Red Hat JBoss Core Services 2.4.57 For use with Red Hat JBoss middleware products. Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_modsecurity_guide/index
|
2.11. Desktop
|
2.11. Desktop PackageKit component Installing or updating packages signed with a GPG key not known or accessible to the system may throw PackageKit into a loop of password dialogues, repeatedly asking the user to confirm the installation of these packages from an untrusted source. This issue may occur if additional third party repositories are configured on the system for which the GPG public key is not imported into the RPM database, nor specified in the respective Yum repository configuration. Official Red Hat Enterprise Linux repositories and packages should not be affected by this issue. To work around this issue, import the respective GPG public key into the RPM database by executing the following command as root: gnome-power-manager component, BZ# 748704 After resuming the system or re-enabling the display, an icon may appear in the notification area with a tooltip that reads: This error message is incorrect, has no effect on the system, and can be safely ignored. acroread component On an AMD64 system that uses SSSD for getting information about users, acroread fails to start if the sssd-client.i686 package is not installed. To work around this issue, manually install the sssd-client.i686 package. kernel component, BZ# 681257 With newer kernels, such as the kernel shipped in Red Hat Enterprise Linux 6.1, Nouveau has corrected the Transition Minimized Differential Signaling (TMDS) bandwidth limits for pre-G80 nVidia chipsets. Consequently, the resolution auto-detected by X for some monitors may differ from that used in Red Hat Enterprise Linux 6.0. fprintd component When enabled, fingerprint authentication is the default authentication method to unlock a workstation, even if the fingerprint reader device is not accessible. However, after a 30 second wait, password authentication will become available. evolution component Evolution's IMAP backend only refreshes folder contents under the following circumstances: when the user switches into or out of a folder, when the auto-refresh period expires, or when the user manually refreshes a folder (that is, using the menu item Folder Refresh ). Consequently, when replying to a message in the Sent folder, the new message does not immediately appear in the Sent folder. To see the message, force a refresh using one of the methods described above. anaconda component The clock applet in the GNOME panel has a default location of Boston, USA. Additional locations are added via the applet's preferences dialog. Additionally, to change the default location, left-click the applet, hover over the desired location in the Locations section, and click the Set... button that appears. xorg-x11-server component, BZ# 623169 In some multi-monitor configurations (for example, dual monitors with both rotated), the cursor confinement code produces incorrect results. For example, the cursor may be permitted to disappear off the screen when it should not, or be prevented from entering some areas where it should be allowed to go. Currently, the only workaround for this issue is to disable monitor rotation.
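For the acroread issue described above, the workaround is a single package installation. This sketch assumes the 32-bit package is available from your configured repositories:

~]# yum install sssd-client.i686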
|
[
"~]# rpm --import <file_containing_the_public_key>",
"Session active, not inhibited, screen idle. If you see this test, your display server is broken and you should notify your distributor. Please see http://blogs.gnome.org/hughsie/2009/08/17/gnome-power-manager-and-blanking-removal-of-bodges/ for more information."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/desktop_issues
|
Installing
|
Installing OpenShift Container Platform 4.10 Installing and configuring OpenShift Container Platform clusters Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/installing/index
|
Chapter 8. Configuring the OpenTelemetry Collector metrics
|
Chapter 8. Configuring the OpenTelemetry Collector metrics The OpenTelemetry Collector exposes internal metrics that you can use to monitor its health and performance. The following list shows some of these metrics: Collector memory usage CPU utilization Number of active traces and spans processed Dropped spans, logs, or metrics Exporter and receiver statistics The Red Hat build of OpenTelemetry Operator automatically creates a service named <instance_name>-collector-monitoring that exposes the Collector's internal metrics. This service listens on port 8888 by default. You can use these metrics for monitoring the Collector's performance, resource consumption, and other internal behaviors. You can also use a Prometheus instance or another monitoring tool to scrape these metrics from the <instance_name>-collector-monitoring service. Note When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true , the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics. Prerequisites Monitoring for user-defined projects is enabled in the cluster. Procedure To enable metrics for an OpenTelemetry Collector instance, set the spec.observability.metrics.enableMetrics field to true : apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets . Filter by Source: User . Check that the ServiceMonitors or PodMonitors in the opentelemetry-collector-<instance_name> format have the Up status. Additional resources Enabling monitoring for user-defined projects
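As a quick way to inspect the exposed metrics without setting up a scrape, you can port-forward the monitoring service and query it directly. This is only a sketch; the namespace and instance name are placeholders to replace with your own values:

# Forward the Collector's internal metrics port to your workstation
oc port-forward -n <namespace> service/<instance_name>-collector-monitoring 8888:8888

# In another terminal, fetch the metrics in Prometheus exposition format
curl -s http://localhost:8888/metrics | head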
|
[
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/otel-configuring-metrics
|
3.6.3. Deleting a Member from a Cluster
|
3.6.3. Deleting a Member from a Cluster To delete a member from an existing cluster that is currently in operation, follow the steps in this section. The starting point of the procedure is at the Choose a cluster to administer page (displayed on the cluster tab). Click the link of the node to be deleted. Clicking the link of the node to be deleted causes a page to be displayed for that link showing how that node is configured. Note To allow services running on a node to fail over when the node is deleted, skip the next step. Disable or relocate each service that is running on the node to be deleted: Note Repeat this step for each service that needs to be disabled or started on another node. Under Services on this Node , click the link for a service. Clicking that link causes a configuration page for that service to be displayed. On that page, at the Choose a task drop-down box, choose to either disable the service or start it on another node and click Go . Upon confirmation that the service has been disabled or started on another node, click the cluster tab. Clicking the cluster tab causes the Choose a cluster to administer page to be displayed. At the Choose a cluster to administer page, click the link of the node to be deleted. Clicking the link of the node to be deleted causes a page to be displayed for that link showing how that node is configured. On that page, at the Choose a task drop-down box, choose Delete this node and click Go . When the node is deleted, a page is displayed that lists the nodes in the cluster. Check the list to make sure that the node has been deleted.
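If you also want to confirm the change from the command line of a remaining cluster node, the standard cluster tools can be used; this is an optional sketch that assumes the cluster services ( cman and rgmanager ) are running, and the service and member names are placeholders:

# Display current cluster members and the state of cluster services
clustat

# If needed, relocate or disable a service manually before deleting the node
clusvcadm -r <service_name> -m <target_member>
clusvcadm -d <service_name>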
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-delete-member-conga-CA
|
Chapter 5. Configuring the Network Observability Operator
|
Chapter 5. Configuring the Network Observability Operator You can update the Flow Collector API resource to configure the Network Observability Operator and its managed components. The Flow Collector is explicitly created during installation. Since this resource operates cluster-wide, only a single FlowCollector is allowed, and it has to be named cluster . 5.1. View the FlowCollector resource You can view and edit YAML directly in the OpenShift Container Platform web console. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. There, you can modify the FlowCollector resource to configure the Network Observability operator. The following example shows a sample FlowCollector resource for the OpenShift Container Platform Network Observability operator: Sample FlowCollector resource apiVersion: flows.netobserv.io/v1beta1 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: DIRECT agent: type: EBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi conversationEndTimeout: 10s logTypes: FLOWS 3 conversationHeartbeatInterval: 30s loki: 4 url: 'https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network' statusUrl: 'https://loki-query-frontend-http.netobserv.svc:3100/' authToken: FORWARD tls: enable: true caCert: type: configmap name: loki-gateway-ca-bundle certFile: service-ca.crt namespace: loki-namespace # 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: "3100": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service' 1 The Agent specification, spec.agent.type , must be EBPF . eBPF is the only OpenShift Container Platform supported option. 2 You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Lower sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. Lower values result in an increase in returned flows and in the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. It is recommended to start with default values and refine empirically, to determine which setting your cluster can manage. 3 The optional specifications spec.processor.logTypes , spec.processor.conversationHeartbeatInterval , and spec.processor.conversationEndTimeout can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The values for spec.processor.logTypes are as follows: FLOWS , CONVERSATIONS , ENDED_CONVERSATIONS , or ALL . Storage requirements are highest for ALL and lowest for ENDED_CONVERSATIONS . 4 The Loki specification, spec.loki , specifies the Loki client.
The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install. 5 The original certificates are copied to the Network Observability instance namespace and watched for updates. When not provided, the namespace defaults to be the same as "spec.namespace". If you chose to install Loki in a different namespace, you must specify it in the spec.loki.tls.caCert.namespace field. Similarly, the spec.exporters.kafka.tls.caCert.namespace field is available for Kafka installed in a different namespace. 6 The spec.quickFilters specification defines filters that show up in the web console. The Application filter keys, src_namespace and dst_namespace , are negated ( ! ), so the Application filter shows all traffic that does not originate from, or have a destination to, any openshift- or netobserv namespaces. For more information, see Configuring quick filters below. Additional resources For more information about conversation tracking, see Working with conversations . 5.2. Configuring the Flow Collector resource with Kafka You can configure the FlowCollector resource to use Kafka for high-throughput and low-latency data feeds. A Kafka instance needs to be running, and a Kafka topic dedicated to OpenShift Container Platform Network Observability must be created in that instance. For more information, see Kafka documentation with AMQ Streams . Prerequisites Kafka is installed. Red Hat supports Kafka with AMQ Streams Operator. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the Network Observability Operator, select Flow Collector . Select the cluster and then click the YAML tab. Modify the FlowCollector resource for OpenShift Container Platform Network Observability Operator to use Kafka, as shown in the following sample YAML: Sample Kafka configuration in FlowCollector resource apiVersion: flows.netobserv.io/v1beta1 kind: FlowCollector metadata: name: cluster spec: deploymentModel: KAFKA 1 kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" 2 topic: network-flows 3 tls: enable: false 4 1 Set spec.deploymentModel to KAFKA instead of DIRECT to enable the Kafka deployment model. 2 spec.kafka.address refers to the Kafka bootstrap server address. You can specify a port if needed, for instance kafka-cluster-kafka-bootstrap.netobserv:9093 for using TLS on port 9093. 3 spec.kafka.topic should match the name of a topic created in Kafka. 4 spec.kafka.tls can be used to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv ) and where the eBPF agents are deployed (default: netobserv-privileged ). It must be referenced with spec.kafka.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.kafka.tls.userCert . 5.3. Export enriched network flow data You can send network flows to Kafka, IPFIX, or both at the same time. Any processor or storage that supports Kafka or IPFIX input, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. Prerequisites Your Kafka or IPFIX collector endpoint(s) are available from Network Observability flowlogs-pipeline pods. 
Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Edit the FlowCollector to configure spec.exporters as follows: apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: exporters: - type: KAFKA 1 kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: "ipfix-collector.ipfix.svc.cluster.local" targetPort: 4739 transport: tcp or udp 5 2 The Network Observability Operator exports all flows to the configured Kafka topic. 3 You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv). It must be referenced with spec.exporters.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.exporters.tls.userCert . 1 4 You can export flows to IPFIX instead of or in conjunction with exporting flows to Kafka. 5 You have the option to specify transport. The default value is tcp but you can also specify udp . After configuration, network flow data can be sent to an available output in a JSON format. For more information, see Network flows format reference . Additional resources For more information about specifying flow format, see Network flows format reference . 5.4. Updating the Flow Collector resource As an alternative to editing YAML in the OpenShift Container Platform web console, you can configure specifications, such as eBPF sampling, by patching the flowcollector custom resource (CR): Procedure Run the following command to patch the flowcollector CR and update the spec.agent.ebpf.sampling value: $ oc patch flowcollector cluster --type=json -p '[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": <new value>}]' -n netobserv 5.5. Configuring quick filters You can modify the filters in the FlowCollector resource. Exact matches are possible using double-quotes around values. Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample FlowCollector resource for more context about modifying the YAML. Note The filter matching types, "all of" or "any of", are UI settings that users can modify from the query options. They are not part of this resource configuration. Here is a list of all available filter keys: Table 5.1. Filter keys Universal* Source Destination Description namespace src_namespace dst_namespace Filter traffic related to a specific namespace. name src_name dst_name Filter traffic related to a given leaf resource name, such as a specific pod, service, or node (for host-network traffic). kind src_kind dst_kind Filter traffic related to a given resource kind. The resource kinds include the leaf resource (Pod, Service or Node), or the owner resource (Deployment and StatefulSet). owner_name src_owner_name dst_owner_name Filter traffic related to a given resource owner; that is, a workload or a set of pods. For example, it can be a Deployment name, a StatefulSet name, etc.
resource src_resource dst_resource Filter traffic related to a specific resource that is denoted by its canonical name, that identifies it uniquely. The canonical notation is kind.namespace.name for namespaced kinds, or node.name for nodes. For example, Deployment.my-namespace.my-web-server . address src_address dst_address Filter traffic related to an IP address. IPv4 and IPv6 are supported. CIDR ranges are also supported. mac src_mac dst_mac Filter traffic related to a MAC address. port src_port dst_port Filter traffic related to a specific port. host_address src_host_address dst_host_address Filter traffic related to the host IP address where the pods are running. protocol N/A N/A Filter traffic related to a protocol, such as TCP or UDP. Universal keys filter for any of source or destination. For example, filtering name: 'my-pod' means all traffic from my-pod and all traffic to my-pod , regardless of the matching type used, whether Match all or Match any . 5.6. Configuring monitoring for SR-IOV interface traffic In order to collect traffic from a cluster with a Single Root I/O Virtualization (SR-IOV) device, you must set the FlowCollector spec.agent.ebpf.privileged field to true . Then, the eBPF agent monitors other network namespaces in addition to the host network namespaces, which are monitored by default. When a pod with a virtual functions (VF) interface is created, a new network namespace is created. With SRIOVNetwork policy IPAM configurations specified, the VF interface is migrated from the host network namespace to the pod network namespace. Prerequisites Access to an OpenShift Container Platform cluster with a SR-IOV device. The SRIOVNetwork custom resource (CR) spec.ipam configuration must be set with an IP address from the range that the interface lists or from other plugins. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Configure the FlowCollector custom resource. A sample configuration is as follows: Configure FlowCollector for SR-IOV monitoring apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: DIRECT agent: type: EBPF ebpf: privileged: true 1 1 The spec.agent.ebpf.privileged field value must be set to true to enable SR-IOV monitoring. Additional resources For more information about creating the SriovNetwork custom resource, see Creating an additional SR-IOV network attachment with the CNI VRF plugin . 5.7. Resource management and performance considerations The amount of resources required by Network Observability depends on the size of your cluster and your requirements for the cluster to ingest and store observability data. To manage resources and set performance criteria for your cluster, consider configuring the following settings. Configuring these settings might meet your optimal setup and observability needs. The following settings can help you manage resources and performance from the outset: eBPF Sampling You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Smaller sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. 
Smaller values result in an increase in returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. Consider starting with the default values and refine empirically, in order to determine which setting your cluster can manage. Restricting or excluding interfaces Reduce the overall observed traffic by setting the values for spec.agent.ebpf.interfaces and spec.agent.ebpf.excludeInterfaces . By default, the agent fetches all the interfaces in the system, except the ones listed in excludeInterfaces and lo (local interface). Note that the interface names might vary according to the Container Network Interface (CNI) used. The following settings can be used to fine-tune performance after the Network Observability has been running for a while: Resource requirements and limits Adapt the resource requirements and limits to the load and memory usage you expect on your cluster by using the spec.agent.ebpf.resources and spec.processor.resources specifications. The default limits of 800MB might be sufficient for most medium-sized clusters. Cache max flows timeout Control how often flows are reported by the agents by using the eBPF agent's spec.agent.ebpf.cacheMaxFlows and spec.agent.ebpf.cacheActiveTimeout specifications. A larger value results in less traffic being generated by the agents, which correlates with a lower CPU load. However, a larger value leads to a slightly higher memory consumption, and might generate more latency in the flow collection. 5.7.1. Resource considerations The following table outlines examples of resource considerations for clusters with certain workload sizes. Important The examples outlined in the table demonstrate scenarios that are tailored to specific workloads. Consider each example only as a baseline from which adjustments can be made to accommodate your workload needs. Table 5.2. Resource recommendations Extra small (10 nodes) Small (25 nodes) Medium (65 nodes) [2] Large (120 nodes) [2] Worker Node vCPU and memory 4 vCPUs| 16GiB mem [1] 16 vCPUs| 64GiB mem [1] 16 vCPUs| 64GiB mem [1] 16 vCPUs| 64GiB Mem [1] LokiStack size 1x.extra-small 1x.small 1x.small 1x.medium Network Observability controller memory limit 400Mi (default) 400Mi (default) 400Mi (default) 800Mi eBPF sampling rate 50 (default) 50 (default) 50 (default) 50 (default) eBPF memory limit 800Mi (default) 800Mi (default) 2000Mi 800Mi (default) FLP memory limit 800Mi (default) 800Mi (default) 800Mi (default) 800Mi (default) FLP Kafka partitions N/A 48 48 48 Kafka consumer replicas N/A 24 24 24 Kafka brokers N/A 3 (default) 3 (default) 3 (default) Tested with AWS M6i instances. In addition to this worker and its controller, 3 infra nodes (size M6i.12xlarge ) and 1 workload node (size M6i.8xlarge ) were tested.
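Following the same patching approach shown in the Updating the Flow Collector resource section, you could apply the sampling and memory recommendations from the command line. The values below are examples only; adapt them to your cluster size:

# Keep the default sampling of 1 flow in every 50, or raise the interval to reduce load
oc patch flowcollector cluster --type=json \
  -p '[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": 100}]'

# Increase the eBPF agent memory limit, for example for a medium-sized cluster
oc patch flowcollector cluster --type=json \
  -p '[{"op": "replace", "path": "/spec/agent/ebpf/resources/limits/memory", "value": "2000Mi"}]'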
|
[
"apiVersion: flows.netobserv.io/v1beta1 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: DIRECT agent: type: EBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi conversationEndTimeout: 10s logTypes: FLOWS 3 conversationHeartbeatInterval: 30s loki: 4 url: 'https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network' statusUrl: 'https://loki-query-frontend-http.netobserv.svc:3100/' authToken: FORWARD tls: enable: true caCert: type: configmap name: loki-gateway-ca-bundle certFile: service-ca.crt namespace: loki-namespace # 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'",
"apiVersion: flows.netobserv.io/v1beta1 kind: FlowCollector metadata: name: cluster spec: deploymentModel: KAFKA 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: exporters: - type: KAFKA 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: \"ipfix-collector.ipfix.svc.cluster.local\" targetPort: 4739 transport: tcp or udp 5",
"oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\"",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: DIRECT agent: type: EBPF ebpf: privileged: true 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/network_observability/configuring-network-observability-operators
|
Chapter 4. Managing user accounts using Ansible playbooks
|
Chapter 4. Managing user accounts using Ansible playbooks You can manage users in IdM using Ansible playbooks. After presenting the user life cycle , this chapter describes how to use Ansible playbooks for the following operations: Ensuring the presence of a single user listed directly in the YML file. Ensuring the presence of multiple users listed directly in the YML file. Ensuring the presence of multiple users listed in a JSON file that is referenced from the YML file. Ensuring the absence of users listed directly in the YML file. 4.1. User life cycle Identity Management (IdM) supports three user account states: Stage users are not allowed to authenticate. This is an initial state. Some of the user account properties required for active users cannot be set, for example, group membership. Active users are allowed to authenticate. All required user account properties must be set in this state. Preserved users are former active users that are considered inactive and cannot authenticate to IdM. Preserved users retain most of the account properties they had as active users, but they are not part of any user groups. You can delete user entries permanently from the IdM database. Important Deleted user accounts cannot be restored. When you delete a user account, all the information associated with the account is permanently lost. A new administrator can only be created by a user with administrator rights, such as the default admin user. If you accidentally delete all administrator accounts, the Directory Manager must create a new administrator manually in the Directory Server. Warning Do not delete the admin user. As admin is a pre-defined user required by IdM, this operation causes problems with certain commands. If you want to define and use an alternative admin user, disable the pre-defined admin user with ipa user-disable admin after you granted admin permissions to at least one different user. Warning Do not add local users to IdM. The Name Service Switch (NSS) always resolves IdM users and groups before resolving local users and groups. This means that, for example, IdM group membership does not work for local users. 4.2. Ensuring the presence of an IdM user using an Ansible playbook The following procedure describes ensuring the presence of a user in IdM using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the data of the user whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/add-user.yml file. 
For example, to create user named idm_user and add Password123 as the user password: --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: "{{ ipaadmin_password }}" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: "+555123457" email: [email protected] passwordexpiration: "2023-01-19 23:59:59" password: "Password123" update_password: on_create You must use the following options to add a user: name : the login name first : the first name string last : the last name string For the full list of available user options, see the /usr/share/doc/ansible-freeipa/README-user.md Markdown file. Note If you use the update_password: on_create option, Ansible only creates the user password when it creates the user. If the user is already created with a password, Ansible does not generate a new password. Run the playbook: Verification You can verify if the new user account exists in IdM by using the ipa user-show command: Log into ipaserver as admin: Request a Kerberos ticket for admin: Request information about idm_user : The user named idm_user is present in IdM. 4.3. Ensuring the presence of multiple IdM users using Ansible playbooks The following procedure describes ensuring the presence of multiple users in IdM using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the data of the users whose presence you want to ensure in IdM. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-users-present.yml file. For example, to create users idm_user_1 , idm_user_2 , and idm_user_3 , and add Password123 as the password of idm_user_1 : --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: "{{ ipaadmin_password }}" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: "+555123457" email: [email protected] passwordexpiration: "2023-01-19 23:59:59" password: "Password123" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011 Note If you do not specify the update_password: on_create option, Ansible re-sets the user password every time the playbook is run: if the user has changed the password since the last time the playbook was run, Ansible re-sets password. Run the playbook: Verification You can verify if the user account exists in IdM by using the ipa user-show command: Log into ipaserver as administrator: Display information about idm_user_1 : The user named idm_user_1 is present in IdM. 4.4. 
Ensuring the presence of multiple IdM users from a JSON file using Ansible playbooks The following procedure describes how you can ensure the presence of multiple users in IdM using an Ansible playbook. The users are stored in a JSON file. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary tasks. Reference the JSON file with the data of the users whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/README-user.md file: Create the users.json file, and add the IdM users into it. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/README-user.md file. For example, to create users idm_user_1 , idm_user_2 , and idm_user_3 , and add Password123 as the password of idm_user_1 : { "users": [ { "name": "idm_user_1", "first": "First 1", "last": "Last 1", "password": "Password123" }, { "name": "idm_user_2", "first": "First 2", "last": "Last 2" }, { "name": "idm_user_3", "first": "First 3", "last": "Last 3" } ] } Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification You can verify if the user accounts are present in IdM using the ipa user-show command: Log into ipaserver as administrator: Display information about idm_user_1 : The user named idm_user_1 is present in IdM. 4.5. Ensuring the absence of users using Ansible playbooks The following procedure describes how you can use an Ansible playbook to ensure that specific users are absent from IdM. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the users whose absence from IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-users-present.yml file. For example, to delete users idm_user_1 , idm_user_2 , and idm_user_3 : --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: "{{ ipaadmin_password }}" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification You can verify that the user accounts do not exist in IdM by using the ipa user-show command: Log into ipaserver as administrator: Request information about idm_user_1 : The user named idm_user_1 does not exist in IdM. 4.6. Additional resources See the README-user.md Markdown file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/user directory.
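The procedures in this chapter assume that the ipaadmin_password is stored in an Ansible vault. As a sketch, you could create that vault and run a playbook against it as follows; the file names and the password value are placeholders:

# Create the encrypted vault referenced by the playbooks (opens an editor)
ansible-vault create ~/MyPlaybooks/secret.yml

# Inside the editor, add the IdM administrator password as a variable:
#   ipaadmin_password: <idm_admin_password>

# Run a playbook, supplying the vault password file and the inventory file
ansible-playbook --vault-password-file=password_file -i ~/MyPlaybooks/inventory.file add-IdM-user.yml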
|
[
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" update_password: on_create",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-IdM-user.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa user-show idm_user User login: idm_user First name: Alice Last name: Acme .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure users' presence hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Include users_present.json include_vars: file: users_present.json - name: Users present ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: \"{{ users }}\"",
"{ \"users\": [ { \"name\": \"idm_user_1\", \"first\": \"First 1\", \"last\": \"Last 1\", \"password\": \"Password123\" }, { \"name\": \"idm_user_2\", \"first\": \"First 2\", \"last\": \"Last 2\" }, { \"name\": \"idm_user_3\", \"first\": \"First 3\", \"last\": \"Last 3\" } ] }",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /ensure-users-present-jsonfile.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /delete-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 ipa: ERROR: idm_user_1: user not found"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-accounts-using-ansible-playbooks_managing-users-groups-hosts
|
Chapter 7. Installing a Red Hat Enterprise Linux 6 Guest Virtual Machine on a Red Hat Enterprise Linux 6 Host
|
Chapter 7. Installing a Red Hat Enterprise Linux 6 Guest Virtual Machine on a Red Hat Enterprise Linux 6 Host This chapter covers how to install a Red Hat Enterprise Linux 6 guest virtual machine on a Red Hat Enterprise Linux 6 host. These procedures assume that the KVM hypervisor and all other required packages are installed and the host is configured for virtualization. Note For more information on installing the virtualization packages, refer to Chapter 5, Installing the Virtualization Packages . 7.1. Creating a Red Hat Enterprise Linux 6 Guest with Local Installation Media This procedure covers creating a Red Hat Enterprise Linux 6 guest virtual machine with a locally stored installation DVD or DVD image. DVD images are available from http://access.redhat.com for Red Hat Enterprise Linux 6. Procedure 7.1. Creating a Red Hat Enterprise Linux 6 guest virtual machine with virt-manager Optional: Preparation Prepare the storage environment for the virtual machine. For more information on preparing storage, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . Important Various storage types may be used for storing guest virtual machines. However, for a virtual machine to be able to use migration features the virtual machine must be created on networked storage. Red Hat Enterprise Linux 6 requires at least 1GB of storage space. However, Red Hat recommends at least 5GB of storage space for a Red Hat Enterprise Linux 6 installation and for the procedures in this guide. Open virt-manager and start the wizard Open virt-manager by executing the virt-manager command as root or opening Applications System Tools Virtual Machine Manager . Figure 7.1. The Virtual Machine Manager window Click on the Create a new virtual machine button to start the new virtualized guest wizard. Figure 7.2. The Create a new virtual machine button The New VM window opens. Name the virtual machine Virtual machine names can contain letters, numbers and the following characters: ' _ ', ' . ' and ' - '. Virtual machine names must be unique for migration and cannot consist only of numbers. Choose the Local install media (ISO image or CDROM) radio button. Figure 7.3. The New VM window - Step 1 Click Forward to continue. Select the installation media Select the appropriate radio button for your installation media. Figure 7.4. Locate your install media If you wish to install from a CD-ROM or DVD, select the Use CDROM or DVD radio button, and select the appropriate disk drive from the drop-down list of drives available. If you wish to install from an ISO image, select Use ISO image , and then click the Browse... button to open the Locate media volume window. Select the installation image you wish to use, and click Choose Volume . If no images are displayed in the Locate media volume window, click on the Browse Local button to browse the host machine for the installation image or DVD drive containing the installation disk. Select the installation image or DVD drive containing the installation disk and click Open ; the volume is selected for use and you are returned to the Create a new virtual machine wizard. Important For ISO image files and guest storage images, the recommended location to use is /var/lib/libvirt/images/ . Any other location may require additional configuration by SELinux. Refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide for more details on configuring SELinux. Select the operating system type and version which match the installation media you have selected. 
Figure 7.5. The New VM window - Step 2 Click Forward to continue. Set RAM and virtual CPUs Choose appropriate values for the virtual CPUs and RAM allocation. These values affect the host's and guest's performance. Memory and virtual CPUs can be overcommitted. For more information on overcommitting, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . Virtual machines require sufficient physical memory (RAM) to run efficiently and effectively. Red Hat supports a minimum of 512MB of RAM for a virtual machine. Red Hat recommends at least 1024MB of RAM for each logical core. Assign sufficient virtual CPUs for the virtual machine. If the virtual machine runs a multithreaded application, assign the number of virtual CPUs the guest virtual machine will require to run efficiently. You cannot assign more virtual CPUs than there are physical processors (or hyper-threads) available on the host system. The number of virtual CPUs available is noted in the Up to X available field. Figure 7.6. The new VM window - Step 3 Click Forward to continue. Enable and assign storage Enable and assign storage for the Red Hat Enterprise Linux 6 guest virtual machine. Assign at least 5GB for a desktop installation or at least 1GB for a minimal installation. Note Live and offline migrations require virtual machines to be installed on shared network storage. For information on setting up shared storage for virtual machines, refer to the Red Hat Enterprise Linux Virtualization Administration Guide . With the default local storage Select the Create a disk image on the computer's hard drive radio button to create a file-based image in the default storage pool, the /var/lib/libvirt/images/ directory. Enter the size of the disk image to be created. If the Allocate entire disk now check box is selected, a disk image of the size specified will be created immediately. If not, the disk image will grow as it becomes filled. Note Although the storage pool is a virtual container it is limited by two factors: maximum size allowed to it by qemu-kvm and the size of the disk on the host physical machine. Storage pools may not exceed the size of the disk on the host physical machine. The maximum sizes are as follows: virtio-blk = 2^63 bytes or 8 Exabytes(using raw files or disk) Ext4 = ~ 16 TB (using 4 KB block size) XFS = ~8 Exabytes qcow2 and host file systems keep their own metadata and scalability should be evaluated/tuned when trying very large image sizes. Using raw disks means fewer layers that could affect scalability or max size. Figure 7.7. The New VM window - Step 4 Click Forward to create a disk image on the local hard drive. Alternatively, select Select managed or other existing storage , then select Browse to configure managed storage. With a storage pool If you selected Select managed or other existing storage in the step to use a storage pool and clicked Browse , the Locate or create storage volume window will appear. Figure 7.8. The Locate or create storage volume window Select a storage pool from the Storage Pools list. Optional: Click on the New Volume button to create a new storage volume. The Add a Storage Volume screen will appear. Enter the name of the new storage volume. Choose a format option from the Format drop-down menu. Format options include raw, cow, qcow, qcow2, qed, vmdk, and vpc. Adjust other fields as desired. Figure 7.9. The Add a Storage Volume window Click Finish to continue. 
Verify and finish Verify there were no errors made during the wizard and everything appears as expected. Select the Customize configuration before install check box to change the guest's storage or network devices, to use the paravirtualized drivers or to add additional devices. Click on the Advanced options down arrow to inspect and modify advanced options. For a standard Red Hat Enterprise Linux 6 installation, none of these options require modification. Figure 7.10. The New VM window - local storage Click Finish to continue into the Red Hat Enterprise Linux installation sequence. For more information on installing Red Hat Enterprise Linux 6 refer to the Red Hat Enterprise Linux 6 Installation Guide . A Red Hat Enterprise Linux 6 guest virtual machine is now created from an ISO installation disc image. After the installation completes, you can connect to the guest operating system. For more information, see Section 6.5, "Connecting to Virtual Machines"
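For unattended setups, an equivalent guest can be created from the command line with virt-install instead of the virt-manager wizard; the following is only a sketch, and the guest name, memory, vCPU count, disk size, and ISO path are illustrative assumptions.
# Create a RHEL 6 guest from a local ISO image, storing its disk in the default pool.
virt-install \
    --name rhel6-guest \
    --ram 1024 \
    --vcpus 2 \
    --disk path=/var/lib/libvirt/images/rhel6-guest.img,size=5 \
    --cdrom /var/lib/libvirt/images/RHEL6-Server-x86_64-DVD.iso \
    --os-variant rhel6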
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-rhel6_install
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_red_hat_process_automation_manager_on_red_hat_openshift_container_platform/snip-conscious-language_deploying-on-openshift
|
4.5.2. Managing Disk Quotas
|
4.5.2. Managing Disk Quotas If quotas are implemented, they need some maintenance - mostly in the form of watching to see if the quotas are exceeded and making sure the quotas are accurate. Of course, if users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has a few choices to make depending on what type of users they are and how much disk space impacts their work. The administrator can either help the user determine how to use less disk space or increase the user's disk quota. You can create a disk usage report by running the repquota utility. For example, the command repquota /home produces this output: To view the disk usage report for all (option -a ) quota-enabled file systems, use the command: While the report is easy to read, a few points should be explained. The -- displayed after each user is a quick way to determine whether the block limits have been exceeded. If the block soft limit is exceeded, a + appears in place of the first - in the output. The second - indicates the inode limit, but GFS2 file systems do not support inode limits so that character will remain as - . GFS2 file systems do not support a grace period, so the grace column will remain blank. Note that the repquota command is not supported over NFS, irrespective of the underlying file system.
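When the report shows that a user needs more room, the limits can be adjusted with the same standard Linux quota utilities used for the report above; the sketch below assumes the /home file system from the example, and the user name and limit values are assumptions.
# Raise the block soft and hard limits for one user; GFS2 does not support
# inode limits, so the last two fields remain 0.
setquota -u kristin 600000 650000 0 0 /home
# Check the user's usage against the new limits, then regenerate the report.
quota -u kristin
repquota /home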
|
[
"*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02 Block grace time: 7days; Inode grace time: 7days Block limits File limits User used soft hard grace used soft hard grace ---------------------------------------------------------------------- root -- 36 0 0 4 0 0 kristin -- 540 0 0 125 0 0 testuser -- 440400 500000 550000 37418 0 0",
"repquota -a"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s2-disk-quotas-managing
|
Chapter 6. Tuned [tuned.openshift.io/v1]
|
Chapter 6. Tuned [tuned.openshift.io/v1] Description Tuned is a collection of rules that allows cluster-wide deployment of node-level sysctls and more flexibility to add custom tuning specified by user needs. These rules are translated and passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The responsibility for applying the node-level tuning then lies with the containerized Tuned daemons. More info: https://github.com/openshift/cluster-node-tuning-operator Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status status object TunedStatus is the status for a Tuned resource. 6.1.1. .spec Description spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status Type object Property Type Description managementState string managementState indicates whether the registry instance represented by this config instance is under operator management or not. Valid values are Force, Managed, Unmanaged, and Removed. profile array Tuned profiles. profile[] object A Tuned profile. recommend array Selection logic for all Tuned profiles. recommend[] object Selection logic for a single Tuned profile. 6.1.2. .spec.profile Description Tuned profiles. Type array 6.1.3. .spec.profile[] Description A Tuned profile. Type object Required data name Property Type Description data string Specification of the Tuned profile to be consumed by the Tuned daemon. name string Name of the Tuned profile to be used in the recommend section. 6.1.4. .spec.recommend Description Selection logic for all Tuned profiles. Type array 6.1.5. .spec.recommend[] Description Selection logic for a single Tuned profile. Type object Required priority profile Property Type Description machineConfigLabels object (string) MachineConfigLabels specifies the labels for a MachineConfig. The MachineConfig is created automatically to apply additional host settings (e.g. kernel boot parameters) profile 'Profile' needs and can only be applied by creating a MachineConfig. This involves finding all MachineConfigPools with machineConfigSelector matching the MachineConfigLabels and setting the profile 'Profile' on all nodes that match the MachineConfigPools' nodeSelectors. match array Rules governing application of a Tuned profile connected by logical OR operator. match[] object Rules governing application of a Tuned profile. operand object Optional operand configuration. priority integer Tuned profile priority. Highest priority is 0. profile string Name of the Tuned profile to recommend. 6.1.6. 
.spec.recommend[].match Description Rules governing application of a Tuned profile connected by logical OR operator. Type array 6.1.7. .spec.recommend[].match[] Description Rules governing application of a Tuned profile. Type object Required label Property Type Description label string Node or Pod label name. match array (undefined) Additional rules governing application of the tuned profile connected by logical AND operator. type string Match type: [node/pod]. If omitted, "node" is assumed. value string Node or Pod label value. If omitted, the presence of label name is enough to match. 6.1.8. .spec.recommend[].operand Description Optional operand configuration. Type object Property Type Description debug boolean turn debugging on/off for the TuneD daemon: true/false (default is false) tunedConfig object Global configuration for the TuneD daemon as defined in tuned-main.conf 6.1.9. .spec.recommend[].operand.tunedConfig Description Global configuration for the TuneD daemon as defined in tuned-main.conf Type object Property Type Description reapply_sysctl boolean turn reapply_sysctl functionality on/off for the TuneD daemon: true/false 6.1.10. .status Description TunedStatus is the status for a Tuned resource. Type object 6.2. API endpoints The following API endpoints are available: /apis/tuned.openshift.io/v1/tuneds GET : list objects of kind Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds DELETE : delete collection of Tuned GET : list objects of kind Tuned POST : create a Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} DELETE : delete a Tuned GET : read the specified Tuned PATCH : partially update the specified Tuned PUT : replace the specified Tuned 6.2.1. /apis/tuned.openshift.io/v1/tuneds HTTP method GET Description list objects of kind Tuned Table 6.1. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty 6.2.2. /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds HTTP method DELETE Description delete collection of Tuned Table 6.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Tuned Table 6.3. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty HTTP method POST Description create a Tuned Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body Tuned schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 202 - Accepted Tuned schema 401 - Unauthorized Empty 6.2.3. /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the Tuned HTTP method DELETE Description delete a Tuned Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Tuned Table 6.10. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Tuned Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Tuned Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body Tuned schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 401 - Unauthorized Empty
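A minimal sketch of a Tuned resource that exercises the spec.profile and spec.recommend fields described above; the profile name, sysctl value, node label, and priority are assumptions chosen for illustration rather than recommended settings, and the namespace shown is the one normally watched by the Node Tuning Operator.
# Apply a Tuned CR with one profile and one recommend rule matching a node label.
oc apply -f - <<'EOF'
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: ingress-somaxconn
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: ingress-somaxconn
    data: |
      [main]
      summary=Raise net.core.somaxconn on labeled nodes
      include=openshift-node
      [sysctl]
      net.core.somaxconn=16384
  recommend:
  - match:
    - label: tuned.example.com/ingress
      type: node
    priority: 20
    profile: ingress-somaxconn
EOF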
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/node_apis/tuned-tuned-openshift-io-v1
|
Chapter 6. Miscellaneous Changes
|
Chapter 6. Miscellaneous Changes 6.1. Changes to Delivery of JBoss EAP Natives and Apache HTTP Server JBoss EAP 7 natives are delivered differently in this release than in the past. Some now ship with the new Red Hat JBoss Core Services product, which is a set of supplementary software that is common to many of the Red Hat JBoss middleware products. The new product allows for faster distribution of updates and a more consistent update experience. The JBoss Core Services product is available for download in a different location on the Red Hat Customer Portal. The following table lists the differences in the delivery methods between the releases. Package JBoss EAP 6 JBoss EAP 7 AIO Natives for Messaging Delivered with the product in a separate "Native Utilities" download Included within the JBoss EAP distribution. No additional download is required. Apache HTTP Server Delivered with the product in a separate "Apache HTTP Server" download Delivered with the new JBoss Core Services product mod_cluster, mod_jk, isapi, and nsapi connectors Delivered with the product in a separate "Webserver Connector Natives" download Delivered with the new JBoss Core Services product JSVC Delivered with the product in a separate "Native Utilities" download Delivered with the new JBoss Core Services product OpenSSL Delivered with the product in a separate "Native Utilities" download Delivered with the new JBoss Core Services product tcnatives Delivered with the product in a separate "Native Components" download This was dropped in JBoss EAP 7 You should also be aware of the following changes: Support was dropped for mod_cluster and mod_jk connectors used with Apache HTTP Server from Red Hat Enterprise Linux RPM channels. If you run Apache HTTP Server from Red Hat Enterprise Linux RPM channels and need to configure load balancing for JBoss EAP 7 servers, you can do one of the following: Use the Apache HTTP Server provided by JBoss Core Services. You can configure JBoss EAP 7 to act as a front-end load balancer. For more information, see Configuring JBoss EAP as a Front-end Load Balancer in the JBoss EAP Configuration Guide . You can deploy Apache HTTP Server on a machine that is supported and certified and then run the load balancer on that machine. For the list of supported configurations, see Overview of HTTP Connectors in the JBoss EAP 7 Configuration Guide . You can find more information about JBoss Core Services in the Apache HTTP Server Installation Guide . 6.2. Changes to Deployments on Amazon EC2 A number of changes have been made to the Amazon Machine Images (AMI) in JBoss EAP 7. This section briefly summarizes some of those changes. The way you launch non-clustered and clustered JBoss EAP instances and domains in Amazon EC2 has changed significantly. JBoss EAP 6 used the User Data: field for JBoss EAP configuration. The AMI scripts that parsed the configuration in the User Data: field and started the servers automatically on instance startup have been removed from JBoss EAP 7. Red Hat JBoss Operations Network agent was installed in the release of JBoss EAP. In JBoss EAP 7, you must install it separately. For details on deploying JBoss EAP 7 on Amazon EC2, see Deploying JBoss EAP on Amazon Web Services . 6.3. Undeploying Applications That Include Shared Modules Changes in the JBoss EAP 7.1 server and the Maven plug-in can result in the following failure when you attempt to undeploy your application. This error can occur if your application contains modules that interact with or depend on each other. 
For example, assume you have an application that contains two Maven WAR project modules, application-A and application-B , that share data managed by the data-sharing module. When you deploy this application, you must deploy the shared data-sharing module first, and then deploy the modules that depend on it. The deployment order is specified in the <modules> element of the parent pom.xml file. This is true in JBoss EAP 6.4 through JBoss EAP 7.4. In releases prior to JBoss EAP 7.1, you could undeploy all of the archives for this application from the root of the parent project using the following command. In JBoss EAP 7.1 and later, you must first undeploy the archives that use the shared modules, and then undeploy the shared modules. Since there is no way to specify the order of undeployment in the project pom.xml file, you must undeploy the modules manually. You can accomplish this by running the following commands from the root of the parent directory. This new undeploy behavior is more correct and ensures that you do not end up in an unstable deployment state. 6.4. Changes to JBoss EAP Scripts The add-user script behavior has changed in JBoss EAP 7 due to a change in password policy. JBoss EAP 6 had a strict password policy. As a result, the add-user script rejected weak passwords that did not satisfy the minimum requirements. In JBoss EAP 7, weak passwords are accepted and a warning is issued. For more information, see Setting Add-User Utility Password Restrictions in the JBoss EAP Configuration Guide . 6.5. Removal of OSGi Support When JBoss EAP 6.0 GA was first released, JBoss OSGi, an implementation of the OSGi specification, was included as a Technology Preview feature. With the release of JBoss EAP 6.1.0, JBoss OSGi was demoted from Technology Preview to Unsupported. In JBoss EAP 6.1.0, the configadmin and osgi extension modules and subsystem configuration for a standalone server were moved to a separate EAP_HOME /standalone/configuration/standalone-osgi.xml configuration file. Because you should not migrate this unsupported configuration file, the removal of JBoss OSGi support should not impact the migration of a standalone server configuration. If you modified any of the other standalone configuration files to configure osgi or configadmin , those configurations must be removed. For a managed domain, the osgi extension and subsystem configuration were removed from the EAP_HOME /domain/configuration/domain.xml file in the JBoss EAP 6.1.0 release. However, the configadmin module extension and subsystem configuration remain in the EAP_HOME /domain/configuration/domain.xml file. This configuration is no longer supported in JBoss EAP 7 and must be removed. 6.6. Changes to Java Platform Module System Names Standalone Java applications that use the JPMS architecture require code updates due to Java Platform Module System (JPMS) name changes in JBoss EAP 7.3. Update your standalone application code with the new JPMS module names to work properly with JBoss EAP 7.3. The change in JPMS module names only affects standalone applications, therefore no code updates are required to the JBoss EAP applications deployed on the server. Table 6.1. 
JPMS Name Changes GroupID JBoss EAP 7.2 JBoss EAP 7.3 org.jboss.spec.javax.ws.rs:jboss-jaxrs-api_2.1_spec beta.jboss.jaxrs.api_2_1 java.ws.rs org.jboss.spec.javax.security.jacc:jboss-jacc-api_1.5_spec beta.jboss.jacc.api_1_5 java.security.jacc org.jboss.spec.javax.security.auth.message:jboss-jaspi-api_1.1_spec beta.jboss.jaspi.api_1_1 java.security.auth.message 6.7. Changes in SOAP with Attachments API for Java Update the user-defined SOAP handlers to comply with the SAAJ 1.4 specification when migrating to JBoss EAP 7.3. As JBoss EAP 7.3 ships with SAAJ 1.4, SOAP handlers written for the previous release of JBoss EAP, which shipped with SAAJ 1.3, might not work correctly due to the differences in SAAJ 1.4 and 1.3 specifications. For information about SAAJ 1.4, see SOAP with Attachments . While updating the SOAP handlers, SAAJ 1.3 can be used in JBoss EAP 7.3 by setting the system property -Djboss.saaj.api.version=1.3 . After the SOAP handlers are updated, remove the system property to restore the default functionality. 6.8. Maven Artifact Changes for Jakarta EE Some javax Maven artifacts have been replaced with jakarta Maven artifacts for JBoss EAP 7.3. You must update your project dependencies with the new jakarta Maven artifacts when building your projects for JBoss EAP 7.3. Not updating project dependencies will cause build failures when building the projects for JBoss EAP 7.3. For information about managing project dependencies, see Manage Project Dependencies in the Development Guide . The following table lists the javax artifacts and the jakarta artifacts that replaced them in JBoss EAP 7.3. Table 6.2. javax artifacts and jakarta artifacts replacing them javax artifact jakarta artifact com.sun.mail:javax.mail com.sun.mail:jakarta.mail javax.activation:activation com.sun.activation:jakarta.activation javax.enterprise:cdi-api jakarta.enterprise:jakarta.enterprise.cdi-api javax.inject:javax.inject jakarta.inject:jakarta.inject-api javax.json:javax.json-api jakarta.json:jakarta.json-api javax.json.bind:javax.json.bind-api jakarta.json.bind:jakarta.json.bind-api javax.persistence:javax.persistence-api jakarta.persistence:jakarta.persistence-api javax.security.enterprise:javax.security.enterprise-api jakarta.security.enterprise:jakarta.security.enterprise-api javax.validation:validation-api jakarta.validation:jakarta.validation-api org.glassfish:javax.json org.glassfish:jakarta.json org.jboss.spec.javax.xml.soap:jboss-saaj-api_1.3_spec org.jboss.spec.javax.xml.soap:jboss-saaj-api_1.4_spec org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec org.jboss.spec.javax.transaction:jboss-transaction-api_1.3_spec Note com.sun.mail:jakarta.mail brings in the Jakarta Mail 1.6.4 library. For information about Jakarta Mail compatibility, see Compatibility Notes maintained by Eclipse.
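As a sketch of the SAAJ fallback described in section 6.7 (the EAP_HOME path and standalone mode are assumptions), the system property can be passed at startup or persisted and later removed through the management CLI:
# Temporary: start a standalone server with the SAAJ 1.3 fallback property.
EAP_HOME/bin/standalone.sh -Djboss.saaj.api.version=1.3
# Or persist the property, then remove it once the SOAP handlers are updated for SAAJ 1.4.
EAP_HOME/bin/jboss-cli.sh --connect --command="/system-property=jboss.saaj.api.version:add(value=1.3)"
EAP_HOME/bin/jboss-cli.sh --connect --command="/system-property=jboss.saaj.api.version:remove"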
|
[
"WFLYCTL0184: New missing/unsatisfied dependencies",
"mvn wildfly:undeploy",
"mvn wildfly:undeploy -pl application-A,application-B mvn wildfly:undeploy -pl data-shared"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/migration_guide/migration_miscellaneous_changes
|
Chapter 3. Cluster capabilities
|
Chapter 3. Cluster capabilities Cluster administrators can use cluster capabilities to enable or disable optional components prior to installation. Cluster administrators can enable cluster capabilities at anytime after installation. Note Cluster administrators cannot disable a cluster capability after it is enabled. 3.1. Enabling cluster capabilities If you are using an installation method that includes customizing your cluster by creating an install-config.yaml file, you can select which cluster capabilities you want to make available on the cluster. Note If you customize your cluster by enabling or disabling specific cluster capabilities, you must manually maintain your install-config.yaml file. New OpenShift Container Platform updates might declare new capability handles for existing components, or introduce new components altogether. Users who customize their install-config.yaml file should consider periodically updating their install-config.yaml file as OpenShift Container Platform is updated. You can use the following configuration parameters to select cluster capabilities: capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage 1 Defines a baseline set of capabilities to install. Valid values are None , vCurrent and v4.x . If you select None , all optional capabilities are disabled. The default value is vCurrent , which enables all optional capabilities. Note v4.x refers to any value up to and including the current cluster version. For example, valid values for a OpenShift Container Platform 4.12 cluster are v4.11 and v4.12 . 2 Defines a list of capabilities to explicitly enable. These capabilities are enabled in addition to the capabilities specified in baselineCapabilitySet . Note In this example, the default capability is set to v4.11 . The additionalEnabledCapabilities field enables additional capabilities over the default v4.11 capability set. The following table describes the baselineCapabilitySet values. Table 3.1. Cluster capabilities baselineCapabilitySet values description Value Description vCurrent Specify this option when you want to automatically add new, default capabilities that are introduced in new releases. v4.11 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.11. By specifying v4.11 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.11 are baremetal , MachineAPI , marketplace , and openshift-samples . v4.12 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.12. By specifying v4.12 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.12 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , and CSISnapshot . v4.13 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.13. By specifying v4.13 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.13 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , and NodeTuning . v4.14 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.14. 
By specifying v4.14 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.14 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , and DeploymentConfig . v4.15 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.15. By specifying v4.15 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.15 are baremetal , MachineAPI , marketplace , OperatorLifecycleManager , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , CloudCredential , and DeploymentConfig . v4.16 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.16. By specifying v4.16 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.16 are baremetal , MachineAPI , marketplace , OperatorLifecycleManager , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , CloudCredential , DeploymentConfig , and CloudControllerManager . v4.17 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.17. By specifying v4.17 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.17 are baremetal , MachineAPI , marketplace , OperatorLifecycleManager , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , CloudCredential , DeploymentConfig , and CloudControllerManager . None Specify when the other sets are too large, and you do not need any capabilities or want to fine-tune via additionalEnabledCapabilities . Additional resources Installing a cluster on AWS with customizations Installing a cluster on GCP with customizations 3.2. Optional cluster capabilities in OpenShift Container Platform 4.17 Currently, cluster Operators provide the features for these optional capabilities. The following summarizes the features provided by each capability and what functionality you lose if it is disabled. Additional resources Cluster Operators reference 3.2.1. Bare-metal capability Purpose The Cluster Baremetal Operator provides the features for the baremetal capability. The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. The bare-metal capability is required for deployments using installer-provisioned infrastructure. Disabling the bare-metal capability can result in unexpected problems with these deployments. It is recommended that cluster administrators only disable the bare-metal capability during installations with user-provisioned infrastructure that do not have any BareMetalHost resources in the cluster. 
Important If the bare-metal capability is disabled, the cluster cannot provision or manage bare-metal nodes. Only disable the capability if there are no BareMetalHost resources in your deployment. The baremetal capability depends on the MachineAPI capability. If you enable the baremetal capability, you must also enable MachineAPI . Additional resources Deploying installer-provisioned clusters on bare metal Preparing for bare metal cluster installation Configuration using the Bare Metal Operator 3.2.2. Build capability Purpose The Build capability enables the Build API. The Build API manages the lifecycle of Build and BuildConfig objects. Important If you disable the Build capability, the following resources will not be available in the cluster: Build and BuildConfig resources The builder service account Disable the Build capability only if you do not require Build and BuildConfig resources or the builder service account in the cluster. 3.2.3. Cloud controller manager capability Purpose The Cloud Controller Manager Operator provides features for the CloudControllerManager capability. Note Currently, disabling the CloudControllerManager capability is not supported on all platforms. You can determine if your cluster supports disabling the CloudControllerManager capability by checking values in the installation configuration ( install-config.yaml ) file for your cluster. In the install-config.yaml file, locate the platform parameter. If the value of the platform parameter is Baremetal or None , you can disable the CloudControllerManager capability on your cluster. If the value of the platform parameter is External , locate the platform.external.cloudControllerManager parameter. If the value of the platform.external.cloudControllerManager parameter is None , you can disable the CloudControllerManager capability on your cluster. Important If these parameters contain any other values than those listed, you cannot disable the CloudControllerManager capability on your cluster. Note The status of this Operator is General Availability for Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud(R), global Microsoft Azure, Microsoft Azure Stack Hub, Nutanix, Red Hat OpenStack Platform (RHOSP), and VMware vSphere. The Operator is available as a Technology Preview for IBM Power(R) Virtual Server. The Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of OpenShift Container Platform. The Operator is based on the Kubebuilder framework and controller-runtime libraries. It is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Cloud configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. 3.2.4. Cloud credential capability Purpose The Cloud Credential Operator provides features for the CloudCredential capability. Note Currently, disabling the CloudCredential capability is only supported for bare-metal clusters. The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. 
If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Additional resources About the Cloud Credential Operator 3.2.5. Cluster Image Registry capability Purpose The Cluster Image Registry Operator provides features for the ImageRegistry capability. The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. In order to integrate the image registry into the cluster's user authentication and authorization system, an image pull secret is generated for each service account in the cluster. Important If you disable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, the image pull secret is not generated for each service account. If you disable the ImageRegistry capability, you can reduce the overall resource footprint of OpenShift Container Platform in Telco environments. Depending on your deployment, you can disable this component if you do not need it. Project cluster-image-registry-operator Additional resources Image Registry Operator in OpenShift Container Platform Automatically generated secrets 3.2.6. Cluster storage capability Purpose The Cluster Storage Operator provides the features for the Storage capability. The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Important If the cluster storage capability is disabled, the cluster will not have a default storageclass or any CSI drivers. Users with administrator privileges can create a default storageclass and manually install CSI drivers if the cluster storage capability is disabled. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. 3.2.7. Console capability Purpose The Console Operator provides the features for the Console capability. The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Additional resources Web console overview 3.2.8. CSI snapshot controller capability Purpose The Cluster CSI Snapshot Controller Operator provides the features for the CSISnapshot capability. The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. 
Additional resources CSI volume snapshots 3.2.9. DeploymentConfig capability Purpose The DeploymentConfig capability enables and manages the DeploymentConfig API. Important If you disable the DeploymentConfig capability, the following resources will not be available in the cluster: DeploymentConfig resources The deployer service account Disable the DeploymentConfig capability only if you do not require DeploymentConfig resources and the deployer service account in the cluster. 3.2.10. Ingress Capability Purpose The Ingress Operator provides the features for the Ingress Capability. The Ingress Capability is enabled by default. Important If you set the baselineCapabilitySet field to None , you must explicitly enable the Ingress Capability, because the installation of a cluster fails if the Ingress Capability is disabled. The Ingress Operator configures and manages the OpenShift Container Platform router. Project openshift-ingress-operator CRDs clusteringresses.ingress.openshift.io Scope: Namespaced CR: clusteringresses Validation: No Configuration objects Cluster config Type Name: clusteringresses.ingress.openshift.io Instance Name: default View Command: USD oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml Notes The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router: USD oc get deployment -n openshift-ingress The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed Ingress Controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr , then the Ingress Controller operates in IPv6-only mode. In the following example, Ingress Controllers managed by the Ingress Operator will run in IPv4-only mode because only one cluster network exists and the network is an IPv4 cidr : USD oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}' Example output map[cidr:10.128.0.0/14 hostPrefix:23] 3.2.11. Insights capability Purpose The Insights Operator provides the features for the Insights capability. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Using Insights Operator 3.2.12. Machine API capability Purpose The machine-api-operator , cluster-autoscaler-operator , and cluster-control-plane-machine-set-operator Operators provide the features for the MachineAPI capability. You can disable this capability only if you install a cluster with user-provisioned infrastructure. The Machine API capability is responsible for all machine configuration and management in the cluster. If you disable the Machine API capability during installation, you need to manage all machine-related tasks manually. Additional resources Overview of machine management Machine API Operator Cluster Autoscaler Operator Control Plane Machine Set Operator 3.2.13. Marketplace capability Purpose The Marketplace Operator provides the features for the marketplace capability. 
The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. If you disable the marketplace capability, the Marketplace Operator does not create the openshift-marketplace namespace. Catalog sources can still be configured and managed on the cluster manually, but OLM depends on the openshift-marketplace namespace in order to make catalogs available to all namespaces on the cluster. Users with elevated permissions to create namespaces prefixed with openshift- , such as system or cluster administrators, can manually create the openshift-marketplace namespace. If you enable the marketplace capability, you can enable and disable individual catalogs by configuring the Marketplace Operator. Additional resources Red Hat-provided Operator catalogs 3.2.14. Node Tuning capability Purpose The Node Tuning Operator provides features for the NodeTuning capability. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. If you disable the NodeTuning capability, some default tuning settings will not be applied to the control-plane nodes. This might limit the scalability and performance of large clusters with over 900 nodes or 900 routes. Additional resources Using the Node Tuning Operator 3.2.15. OpenShift samples capability Purpose The Cluster Samples Operator provides the features for the openshift-samples capability. The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. If you disable the samples capability, users cannot access the image streams, samples, and templates it provides. Depending on your deployment, you might want to disable this component if you do not need it. Additional resources Configuring the Cluster Samples Operator 3.2.16. Operator Lifecycle Manager capability Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. 
If an Operator requires any of the following APIs, then you must enable the OperatorLifecycleManager capability: ClusterServiceVersion CatalogSource Subscription InstallPlan OperatorGroup Important The marketplace capability depends on the OperatorLifecycleManager capability. You cannot disable the OperatorLifecycleManager capability and enable the marketplace capability. Additional resources Operator Lifecycle Manager concepts and resources 3.3. Viewing the cluster capabilities As a cluster administrator, you can view the capabilities by using the clusterversion resource status. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To view the status of the cluster capabilities, run the following command: USD oc get clusterversion version -o jsonpath='{.spec.capabilities}{"\n"}{.status.capabilities}{"\n"}' Example output {"additionalEnabledCapabilities":["openshift-samples"],"baselineCapabilitySet":"None"} {"enabledCapabilities":["openshift-samples"],"knownCapabilities":["CSISnapshot","Console","Insights","Storage","baremetal","marketplace","openshift-samples"]} 3.4. Enabling the cluster capabilities by setting baseline capability set As a cluster administrator, you can enable cluster capabilities any time after a OpenShift Container Platform installation by setting the baselineCapabilitySet configuration parameter. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To set the baselineCapabilitySet configuration parameter, run the following command: USD oc patch clusterversion version --type merge -p '{"spec":{"capabilities":{"baselineCapabilitySet":"vCurrent"}}}' 1 1 For baselineCapabilitySet you can specify vCurrent , v4.17 , or None . 3.5. Enabling the cluster capabilities by setting additional enabled capabilities As a cluster administrator, you can enable cluster capabilities any time after a OpenShift Container Platform installation by setting the additionalEnabledCapabilities configuration parameter. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure View the additional enabled capabilities by running the following command: USD oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{"\n"}' Example output ["openshift-samples"] To set the additionalEnabledCapabilities configuration parameter, run the following command: USD oc patch clusterversion/version --type merge -p '{"spec":{"capabilities":{"additionalEnabledCapabilities":["openshift-samples", "marketplace"]}}}' Important It is not possible to disable a capability which is already enabled in a cluster. The cluster version Operator (CVO) continues to reconcile the capability which is already enabled in the cluster. If you try to disable a capability, then CVO shows the divergent spec: USD oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="ImplicitlyEnabledCapabilities")]}{"\n"}' Example output {"lastTransitionTime":"2022-07-22T03:14:35Z","message":"The following capabilities could not be disabled: openshift-samples","reason":"CapabilitiesImplicitlyEnabled","status":"True","type":"ImplicitlyEnabledCapabilities"} Note During the cluster upgrades, it is possible that a given capability could be implicitly enabled. If a resource was already running on the cluster before the upgrade, then any capabilities that is part of the resource will be enabled. For example, during a cluster upgrade, a resource that is already running on the cluster has been changed to be part of the marketplace capability by the system. 
Even if a cluster administrator does not explicitly enable the marketplace capability, it is implicitly enabled by the system.
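After enabling a capability with either of the commands above, one way to confirm that its Operator has rolled out is to watch the corresponding ClusterOperator resource; the marketplace Operator is used below only as an illustrative example.
# List cluster Operators, then wait for the newly enabled one to report Available.
oc get clusteroperators
oc wait clusteroperator/marketplace --for=condition=Available --timeout=10m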
|
[
"capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc get clusterversion version -o jsonpath='{.spec.capabilities}{\"\\n\"}{.status.capabilities}{\"\\n\"}'",
"{\"additionalEnabledCapabilities\":[\"openshift-samples\"],\"baselineCapabilitySet\":\"None\"} {\"enabledCapabilities\":[\"openshift-samples\"],\"knownCapabilities\":[\"CSISnapshot\",\"Console\",\"Insights\",\"Storage\",\"baremetal\",\"marketplace\",\"openshift-samples\"]}",
"oc patch clusterversion version --type merge -p '{\"spec\":{\"capabilities\":{\"baselineCapabilitySet\":\"vCurrent\"}}}' 1",
"oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{\"\\n\"}'",
"[\"openshift-samples\"]",
"oc patch clusterversion/version --type merge -p '{\"spec\":{\"capabilities\":{\"additionalEnabledCapabilities\":[\"openshift-samples\", \"marketplace\"]}}}'",
"oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type==\"ImplicitlyEnabledCapabilities\")]}{\"\\n\"}'",
"{\"lastTransitionTime\":\"2022-07-22T03:14:35Z\",\"message\":\"The following capabilities could not be disabled: openshift-samples\",\"reason\":\"CapabilitiesImplicitlyEnabled\",\"status\":\"True\",\"type\":\"ImplicitlyEnabledCapabilities\"}"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installation_overview/cluster-capabilities
|
5.9.3. Checking the Default SELinux Context
|
5.9.3. Checking the Default SELinux Context Use the matchpathcon command to check if files and directories have the correct SELinux context. From the matchpathcon (8) manual page: " matchpathcon queries the system policy and outputs the default security context associated with the file path." [10] . The following example demonstrates using the matchpathcon command to verify that files in /var/www/html/ directory are labeled correctly: As the Linux root user, run the touch /var/www/html/file{1,2,3} command to create three files ( file1 , file2 , and file3 ). These files inherit the httpd_sys_content_t type from the /var/www/html/ directory: As the Linux root user, run the chcon -t samba_share_t /var/www/html/file1 command to change the file1 type to samba_share_t . Note that the Apache HTTP Server cannot read files or directories labeled with the samba_share_t type. The matchpathcon -V option compares the current SELinux context to the correct, default context in SELinux policy. Run the matchpathcon -V /var/www/html/* command to check all files in the /var/www/html/ directory: The following output from the matchpathcon command explains that file1 is labeled with the samba_share_t type, but should be labeled with the httpd_sys_content_t type: To resolve the label problem and allow the Apache HTTP Server access to file1 , as the Linux root user, run the restorecon -v /var/www/html/file1 command: [10] The matchpathcon (8) manual page, as shipped with the libselinux-utils package in Red Hat Enterprise Linux, is written by Daniel Walsh. Any edits or changes in this version were done by Murray McAllister.
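The same approach scales to whole directory trees; the sketch below assumes a hypothetical /srv/web directory that should serve web content, so the path is illustrative rather than part of the example above.
# Record a persistent file-context rule for the directory tree (hypothetical path).
semanage fcontext -a -t httpd_sys_content_t "/srv/web(/.*)?"
# Confirm the default context that matchpathcon now reports, then apply it recursively.
matchpathcon /srv/web
restorecon -R -v /srv/web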
|
[
"~]# touch /var/www/html/file{1,2,3} ~]# ls -Z /var/www/html/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3",
"~]USD matchpathcon -V /var/www/html/* /var/www/html/file1 has context unconfined_u:object_r:samba_share_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 /var/www/html/file2 verified. /var/www/html/file3 verified.",
"/var/www/html/file1 has context unconfined_u:object_r:samba_share_t:s0, should be system_u:object_r:httpd_sys_content_t:s0",
"~]# restorecon -v /var/www/html/file1 restorecon reset /var/www/html/file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-maintaining_selinux_labels_-checking_the_default_selinux_context
|
5.345. vim
|
5.345. vim 5.345.1. RHBA-2012:0454 - vim bug fix update Updated vim packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. Vim (Vi IMproved) is an updated and improved version of the vi editor. Bug Fixes BZ# 594997 Previously, when using the VimExplorer file manager with the locale set to Simplified Chinese (zh_CN), the netrw.vim script inserted an unwanted "e" character in front of file names. The underlying code has been modified so that file names are now displayed correctly, without unwanted characters. BZ# 634902 The spec file template that was used when new spec files were edited contained outdated information. With this update, the spec file template is updated to adhere to the latest spec file guidelines. BZ# 652610 When using the file explorer in a subdirectory of the root directory, the "vim .." command displayed only part of the root directory's content. A patch has been applied to address this issue, and the "vim .." command now lists the content of the root directory properly in the described scenario. BZ# 663753 Due to a typographic error in the filetype plug-in, the vim utility could display the httpd configuration files with incorrect syntax highlighting. This update corrects the errors in the filetype plug-in, and the httpd configuration files are now displayed with the correct syntax highlighting. All users of vim are advised to upgrade to these updated packages, which fix these bugs.
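A minimal sketch of applying the recommended upgrade on a registered system; the wildcard assumes the usual vim package naming and simply updates whichever vim packages are installed.
# Update all installed vim packages to the fixed versions, then list what is installed.
yum update 'vim*'
rpm -qa 'vim*'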
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/vim
|
3.5. Custom Authentication Modules
|
3.5. Custom Authentication Modules A custom authentication module may be a subclass of a provided module or a completely new module. All authentication modules implement the javax.security.auth.spi.LoginModule interface. Refer to the relevant API documentation for more information.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/custom_authentication_modules
|
function::get_cycles
|
function::get_cycles Name function::get_cycles - Processor cycle count Synopsis Arguments None Description This function returns the processor cycle counter value if available, else it returns zero. The cycle counter is free running and unsynchronized on each processor. Thus, the order of events cannot be determined by comparing the results of the get_cycles function on different processors.
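A small sketch that samples the counter once per second from a command-line probe; the five-second run time is an arbitrary choice, and because the timer can fire on different CPUs the printed values are not expected to be monotonic.
# Print the raw cycle counter once per second and stop after five seconds.
stap -e 'probe timer.s(1) { printf("cpu%d: get_cycles() = %d\n", cpu(), get_cycles()) }  probe timer.s(5) { exit() }'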
|
[
"get_cycles:long()"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-get-cycles
|
15.4. Specifying Default User and Group Attributes
|
15.4. Specifying Default User and Group Attributes Identity Management uses a template when it creates new entries. For users, the template is very specific. Identity Management uses default values for several core attributes for IdM user accounts. These defaults can define actual values for user account attributes (such as the home directory location) or it can define the format of attribute values, such as the user name length. These settings also define the object classes assigned to users. For groups, the template only defines the assigned object classes. These default definitions are all contained in a single configuration entry for the IdM server, cn=ipaconfig,cn=etc,dc=example,dc=com . The configuration can be changed using the ipa config-mod command. Table 15.3. Default User Parameters Field Command-Line Option Descriptions Maximum user name length --maxusername Sets the maximum number of characters for user names. The default value is 32. Root for home directories --homedirectory Sets the default directory to use for user home directories. The default value is /home . Default shell --defaultshell Sets the default shell to use for users. The default value is /bin/sh . Default user group --defaultgroup Sets the default group to which all newly created accounts are added. The default value is ipausers , which is automatically created during the IdM server installation process. Default e-mail domain --emaildomain Sets the email domain to use to create email addresses based on the new accounts. The default is the IdM server domain. Search time limit --searchtimelimit Sets the maximum amount of time, in seconds, to spend on a search before the server returns results. Search size limit --searchrecordslimit Sets the maximum number of records to return in a search. User search fields --usersearch Sets the fields in a user entry that can be used as a search string. Any attribute listed has an index kept for that attribute, so setting too many attributes could affect server performance. Group search fields --groupsearch Sets the fields in a group entry that can be used as a search string. Certificate subject base Sets the base DN to use when creating subject DNs for client certificates. This is configured when the server is set up. Default user object classes --userobjectclasses Defines an object class that is used to create IdM user accounts. This can be invoked multiple times. The complete list of object classes must be given because the list is overwritten when the command is run. Default group object classes --groupobjectclasses Defines an object class that is used to create IdM group accounts. This can be invoked multiple times. The complete list of object classes must be given because the list is overwritten when the command is run. Password expiration notification --pwdexpnotify Sets how long, in days, before a password expires for the server to send a notification. Password plug-in features Sets the format of passwords that are allowed for users. 15.4.1. Viewing Attributes from the Web UI Open the IPA Server tab. Select the Configuration subtab. The complete configuration entry is shown in three sections, one for all search limits, one for user templates, and one for group templates. Figure 15.4. Setting Search Limits Figure 15.5. User Attributes Figure 15.6. Group Attributes 15.4.2. Viewing Attributes from the Command Line The config-show command shows the current configuration which applies to all new user accounts. 
By default, only the most common attributes are displayed; use the --all option to show the complete configuration.
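As an illustration, the following is a minimal sketch of changing a few of these defaults with ipa config-mod . The option names come from Table 15.3 above; the values shown are examples only and should be adapted to your deployment.

```
# Authenticate as an administrative user first.
kinit admin

# Change the default login shell and e-mail domain used for new user accounts.
ipa config-mod --defaultshell=/bin/bash --emaildomain=example.com

# Raise the search time limit (in seconds) applied to client searches.
ipa config-mod --searchtimelimit=5
```

These defaults affect only entries created after the change; existing user accounts keep their current attribute values.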
|
[
"[bjensen@server ~]USD kinit admin [bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/Configuring_IPA_Users-Specifying_Default_User_Settings
|
Chapter 9. Preparing a RHEL installation on 64-bit IBM Z
|
Chapter 9. Preparing a RHEL installation on 64-bit IBM Z This section describes how to install Red Hat Enterprise Linux on the 64-bit IBM Z architecture. 9.1. Planning for installation on 64-bit IBM Z Red Hat Enterprise Linux 9 runs on IBM z14 or IBM LinuxONE II systems, or later. The installation process assumes that you are familiar with the 64-bit IBM Z and can set up logical partitions (LPARs) and z/VM guest virtual machines. For installation of Red Hat Enterprise Linux on 64-bit IBM Z, Red Hat supports Direct Access Storage Device (DASD), SCSI disk devices attached over Fiber Channel Protocol (FCP), and virtio-blk and virtio-scsi devices. When using FCP devices, Red Hat recommends using them in multipath configuration for better reliability. Important DASDs are disks that allow a maximum of three partitions per device. For example, dasda can have partitions dasda1 , dasda2 , and dasda3 . Pre-installation decisions Whether the operating system is to be run on an LPAR, KVM, or as a z/VM guest operating system. Network configuration. Red Hat Enterprise Linux 9 for 64-bit IBM Z supports the following network devices: Real and virtual Open Systems Adapter (OSA) Real and virtual HiperSockets LAN channel station (LCS) for real OSA virtio-net devices RDMA over Converged Ethernet (RoCE) Ensure you select machine type as ESA for your z/VM VMs, because selecting any other machine types might prevent RHEL from installing. See the IBM documentation . Note When initializing swap space on a Fixed Block Architecture (FBA) DASD using the SWAPGEN utility, the FBAPART option must be used. Additional resources For additional information about system requirements, see RHEL Technology Capabilities and Limits For additional information about 64-bit IBM Z, see IBM documentation . For additional information about using secure boot with Linux on IBM Z, see Secure boot for Linux on IBM Z . For installation instructions on IBM Power Servers, refer to IBM installation documentation . To see if your system is supported for installing RHEL, refer to https://catalog.redhat.com . 9.2. Boot media compatibility for IBM Z servers The following table provides detailed information about the supported boot media options for installing Red Hat Enterprise Linux (RHEL) on 64-bit IBM Z servers. It outlines the compatibility of each boot medium with different system types and indicates whether the zipl boot loader is used. This information helps you determine the most suitable boot medium for your specific environment. System type / Boot media Uses zipl boot loader z/VM KVM LPAR z/VM Reader No Yes N/A N/A SE or HMC (remote SFTP, FTPS, FTP server, DVD) No N/A N/A Yes DASD Yes Yes Yes Yes FCP SCSI LUNs Yes Yes Yes Yes FCP SCSI DVD Yes Yes Yes Yes N/A indicates that the boot medium is not applicable for the specified system type. 9.3. Supported environments and components for IBM Z servers The following tables provide information about the supported environments, network devices, machine types, and storage types for different system types when installing Red Hat Enterprise Linux (RHEL) on 64-bit IBM Z servers. Use these tables to identify the compatibility of various components with your specific system configuration. Table 9.1. 
Network device compatibility for system types Network device z/VM KVM LPAR Open Systems Adapter (OSA) Yes N/A Yes HiperSockets Yes N/A Yes LAN channel station (LCS) Yes N/A Yes virtio-net N/A Yes N/A RDMA over Converged Ethernet (RoCE) Yes Yes Yes N/A indicates that the component is not applicable for the specified system type. Table 9.2. Machine type compatibility for system types Machine type z/VM KVM LPAR ESA Yes N/A N/A s390-virtio-ccw N/A Yes N/A N/A indicates that the component is not applicable for the specified system type. Table 9.3. Storage type compatibility for system types Storage type z/VM KVM LPAR DASD Yes Yes Yes FCP SCSI Yes Yes [a] Yes virtio-blk N/A Yes N/A [a] Conditional support based on configuration N/A indicates that the component is not applicable for the specified system type. 9.4. Overview of installation process on 64-bit IBM Z servers You can install Red Hat Enterprise Linux on 64-bit IBM Z interactively or in unattended mode. Installation on 64-bit IBM Z differs from other architectures as it is typically performed over a network, and not from local media. The installation consists of three phases: Booting the installation Connect to the mainframe Customize the boot parameters Perform an initial program load (IPL), or boot from the media containing the installation program Connecting to the installation system From a local machine, connect to the remote 64-bit IBM Z system using SSH, and start the installation program using Virtual Network Computing (VNC) Completing the installation using the RHEL installation program 9.5. Boot media for installing RHEL on 64-bit IBM Z servers After establishing a connection with the mainframe, you need to perform an initial program load (IPL), or boot, from the medium containing the installation program. This document describes the most common methods of installing Red Hat Enterprise Linux on 64-bit IBM Z. In general, any method may be used to boot the Linux installation system, which consists of a kernel ( kernel.img ) and initial RAM disk ( initrd.img ) with parameters in the generic.prm file supplemented by user defined parameters. Additionally, a generic.ins file is loaded which determines file names and memory addresses for the initrd, kernel and generic.prm . The Linux installation system is also called the installation program in this book. You can use the following boot media only if Linux is to run as a guest operating system under z/VM: z/VM reader You can use the following boot media only if Linux is to run in LPAR mode: SE or HMC through a remote SFTP, FTPS or FTP server SE or HMC DVD You can use the following boot media for both z/VM and LPAR: DASD SCSI disk device that is attached through an FCP channel If you use DASD or an FCP-attached SCSI disk device as boot media, you must have a configured zipl boot loader. 9.6. Customizing boot parameters Before the installation can begin, you must configure some mandatory boot parameters. When installing through z/VM, these parameters must be configured before you boot into the generic.prm file. When installing on an LPAR, the rd.cmdline parameter is set to ask by default, meaning that you will be given a prompt on which you can enter these boot parameters. In both cases, the required parameters are the same. All network configuration can either be specified by using a parameter file, or at the prompt. Installation source An installation source must always be configured. Use the inst.repo option to specify the package source for the installation. 
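For example, a minimal sketch of typical inst.repo values; the host names and paths are placeholders, and the hmc form is an assumption that applies only when the installation media is made available through the SE or HMC:

```
# Installation tree served over HTTP (placeholder host and path)
inst.repo=http://example.com/path/to/repository

# Installation tree or ISO image exported over NFS
inst.repo=nfs:server.example.com:/exports/rhel9

# Installation media assigned through the SE or HMC (assumption: HMC-mounted DVD)
inst.repo=hmc
```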
Network devices Network configuration must be provided if network access will be required during the installation. If you plan to perform an unattended (Kickstart-based) installation by using only local media such as a disk, network configuration can be omitted. ip= Use the ip= option for basic network configuration, and other options as required. rd.znet= Also use the rd.znet= kernel option, which takes a network protocol type, a comma delimited list of sub-channels, and, optionally, comma delimited sysfs parameter and value pairs for qeth devices. This parameter can be specified multiple times to activate multiple network devices. For example: When specifying multiple rd.znet boot options, only the last one is passed on to the kernel command line of the installed system. This does not affect the networking of the system since all network devices configured during installation are properly activated and configured at boot. The qeth device driver assigns the same interface name for Ethernet and Hipersockets devices: enc <device number> . The bus ID is composed of the channel subsystem ID, subchannel set ID, and device number, separated by dots; the device number is the last part of the bus ID, without leading zeroes and dots. For example, the interface name will be enca00 for a device with the bus ID 0.0.0a00 . Storage devices At least one storage device must always be configured for text mode installations. The rd.dasd= option takes a Direct Access Storage Device (DASD) adapter device bus identifier. For multiple DASDs, specify the parameter multiple times, or use a comma separated list of bus IDs. To specify a range of DASDs, specify the first and the last bus ID. For example: The rd.zfcp= option takes a SCSI over FCP (zFCP) adapter device bus identifier, a target world wide port name (WWPN), and an FCP LUN, then activates one path to a SCSI disk. This parameter needs to be specified at least twice to activate multiple paths to the same disk. This parameter can be specified multiple times to activate multiple disks, each with multiple paths. Since 9, a target world wide port name (WWPN) and an FCP LUN have to be provided only if the zFCP device is not configured in NPIV mode or when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter. It provides access to all SCSI devices found in the storage area network attached to the FCP device with the specified bus ID. This parameter needs to be specified at least twice to activate multiple paths to the same disks. Kickstart options If you are using a Kickstart file to perform an automatic installation, you must always specify the location of the Kickstart file using the inst.ks= option. For an unattended, fully automatic Kickstart installation, the inst.cmdline option is also useful. An example customized generic.prm file containing all mandatory parameters look similar to the following example: Example 9.1. Customized generic.prm file Some installation methods also require a file with a mapping of the location of installation data in the file system of the HMC DVD or FTP server and the memory locations where the data is to be copied. The file is typically named generic.ins , and contains file names for the initial RAM disk, kernel image, and parameter file ( generic.prm ) and a memory location for each file. An example generic.ins will look similar to the following example: Example 9.2. Sample generic.ins file A valid generic.ins file is provided by Red Hat along with all other files required to boot the installer. 
Modify this file only if you want to, for example, load a different kernel version than default. Additional resources For a list of all boot options to customize the installation program's behavior, see Boot options reference . 9.7. Parameters and configuration files on 64-bit IBM Z This section contains information about the parameters and configuration files on 64-bit IBM Z. 9.7.1. Required configuration file parameters on 64-bit IBM Z Several parameters are required and must be included in the parameter file. These parameters are also provided in the file generic.prm in directory images/ of the installation DVD. ro Mounts the root file system, which is a RAM disk, read-only. ramdisk_size= size Modifies the memory size reserved for the RAM disk to ensure that the Red Hat Enterprise Linux installation program fits within it. For example: ramdisk_size=40000 . The generic.prm file also contains the additional parameter cio_ignore=all,!condev . This setting speeds up boot and device detection on systems with many devices. The installation program transparently handles the activation of ignored devices. 9.7.2. 64-bit IBM z/VM configuration file Under z/VM, you can use a configuration file on a CMS-formatted disk. The purpose of the CMS configuration file is to save space in the parameter file by moving the parameters that configure the initial network setup, the DASD, and the FCP specification out of the parameter file. Each line of the CMS configuration file contains a single variable and its associated value, in the following shell-style syntax: variable = value . You must also add the CMSDASD and CMSCONFFILE parameters to the parameter file. These parameters point the installation program to the configuration file: CMSDASD= cmsdasd_address Where cmsdasd_address is the device number of a CMS-formatted disk that contains the configuration file. This is usually the CMS user's A disk. For example: CMSDASD=191 CMSCONFFILE= configuration_file Where configuration_file is the name of the configuration file. This value must be specified in lower case. It is specified in a Linux file name format: CMS_file_name . CMS_file_type . The CMS file REDHAT CONF is specified as redhat.conf . The CMS file name and the file type can each be from one to eight characters that follow the CMS conventions. For example: CMSCONFFILE=redhat.conf 9.7.3. Installation network, DASD and FCP parameters on 64-bit IBM Z These parameters can be used to automatically set up the preliminary network, and can be defined in the CMS configuration file. These parameters are the only parameters that can also be used in a CMS configuration file. All other parameters in other sections must be specified in the parameter file. NETTYPE=" type " Where type must be one of the following: qeth , lcs , or ctc . The default is qeth . Choose qeth for: OSA-Express features HiperSockets Virtual connections on z/VM, including VSWITCH and Guest LAN Select ctc for: Channel-to-channel network connections SUBCHANNELS=" device_bus_IDs " Where device_bus_IDs is a comma-separated list of two or three device bus IDs. The IDs must be specified in lowercase. Provides required device bus IDs for the various network interfaces: For example (a sample qeth SUBCHANNEL statement): PORTNO=" portnumber " You can add either PORTNO="0" (to use port 0) or PORTNO="1" (to use port 1 of OSA features with two ports per CHPID). LAYER2=" value " Where value can be 0 or 1 . Use LAYER2="0" to operate an OSA or HiperSockets device in layer 3 mode ( NETTYPE="qeth" ). 
Use LAYER2="1" for layer 2 mode. For virtual network devices under z/VM this setting must match the definition of the GuestLAN or VSWITCH to which the device is coupled. To use network services that operate on layer 2 (the Data Link Layer or its MAC sublayer) such as DHCP, layer 2 mode is a good choice. The qeth device driver default for OSA devices is now layer 2 mode. To continue using the default of layer 3 mode, set LAYER2="0" explicitly. VSWITCH=" value " Where value can be 0 or 1 . Specify VSWITCH="1" when connecting to a z/VM VSWITCH or GuestLAN, or VSWITCH="0" (or nothing at all) when using directly attached real OSA or directly attached real HiperSockets. MACADDR=" MAC_address " If you specify LAYER2="1" and VSWITCH="0" , you can optionally use this parameter to specify a MAC address. Linux requires six colon-separated octets as pairs lower case hex digits - for example, MACADDR=62:a3:18:e7:bc:5f . This is different from the notation used by z/VM. If you specify LAYER2="1" and VSWITCH="1" , you must not specify the MACADDR , because z/VM assigns a unique MAC address to virtual network devices in layer 2 mode. CTCPROT=" value " Where value can be 0 , 1 , or 3 . Specifies the CTC protocol for NETTYPE="ctc" . The default is 0 . HOSTNAME=" string " Where string is the host name of the newly-installed Linux instance. IPADDR=" IP " Where IP is the IP address of the new Linux instance. NETMASK=" netmask " Where netmask is the netmask. The netmask supports the syntax of a prefix integer (from 1 to 32) as specified in IPv4 classless interdomain routing (CIDR). For example, you can specify 24 instead of 255.255.255.0 , or 20 instead of 255.255.240.0 . GATEWAY=" gw " Where gw is the gateway IP address for this network device. MTU=" mtu " Where mtu is the Maximum Transmission Unit (MTU) for this network device. DNS=" server1:server2:additional_server_terms:serverN " Where " server1:server2:additional_server_terms:serverN " is a list of DNS servers, separated by colons. For example: SEARCHDNS=" domain1:domain2:additional_dns_terms:domainN " Where " domain1:domain2:additional_dns_terms:domainN " is a list of the search domains, separated by colons. For example: You only need to specify SEARCHDNS= if you specify the DNS= parameter. DASD= Defines the DASD or range of DASDs to configure for the installation. The installation program supports a comma-separated list of device bus IDs, or ranges of device bus IDs with the optional attributes ro , diag , erplog , and failfast . Optionally, you can abbreviate device bus IDs to device numbers with leading zeros stripped. Any optional attributes should be separated by colons and enclosed in parentheses. Optional attributes follow a device bus ID or a range of device bus IDs. The only supported global option is autodetect . This does not support the specification of non-existent DASDs to reserve kernel device names for later addition of DASDs. Use persistent DASD device names such as /dev/disk/by-path/name to enable transparent addition of disks later. Other global options such as probeonly , nopav , or nofcx are not supported by the installation program. Only specify those DASDs that need to be installed on your system. All unformatted DASDs specified here must be formatted after a confirmation later on in the installation program. Add any data DASDs that are not needed for the root file system or the /boot partition after installation. 
For example: FCP_ n =" device_bus_ID [ WWPN FCP_LUN ]" For FCP-only environments, remove the DASD= option from the CMS configuration file to indicate no DASD is present. Where: n is typically an integer value (for example FCP_1 or FCP_2 ) but could be any string with alphabetic or numeric characters or underscores. device_bus_ID specifies the device bus ID of the FCP device representing the host bus adapter (HBA) (for example 0.0.fc00 for device fc00). WWPN is the world wide port name used for routing (often in conjunction with multipathing) and is as a 16-digit hex value (for example 0x50050763050b073d ). FCP_LUN refers to the storage logical unit identifier and is specified as a 16-digit hexadecimal value padded with zeroes to the right (for example 0x4020400100000000 ). Note A target world wide port name (WWPN) and an FCP_LUN have to be provided if the zFCP device is not configured in NPIV mode, when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter or when installing RHEL-9.0 or older releases. Otherwise only the device_bus_ID value is mandatory. These variables can be used on systems with FCP devices to activate FCP LUNs such as SCSI disks. Additional FCP LUNs can be activated during the installation interactively or by means of a Kickstart file. An example value looks similar to the following: Each of the values used in the FCP parameters (for example FCP_1 or FCP_2 ) are site-specific and are normally supplied by the FCP storage administrator. 9.7.4. Parameters for kickstart installations on 64-bit IBM Z The following parameters can be defined in a parameter file but do not work in a CMS configuration file. inst.ks= URL References a Kickstart file, which usually resides on the network for Linux installations on 64-bit IBM Z. Replace URL with the full path including the file name of the Kickstart file. This parameter activates automatic installation with Kickstart. inst.cmdline This requires installation with a Kickstart file that answers all questions, because the installation program does not support interactive user input in cmdline mode. Ensure that your Kickstart file contains all required parameters before you use the inst.cmdline option. If a required command is missing, the installation will fail. 9.7.5. Miscellaneous parameters on 64-bit IBM Z The following parameters can be defined in a parameter file but do not work in a CMS configuration file. rd.live.check Turns on testing of an ISO-based installation source; for example, when using inst.repo= with an ISO on local disk or mounted with NFS. inst.nompath Disables support for multipath devices. inst.proxy=[ protocol ://][ username [: password ]@] host [: port ] Specify a proxy to use with installation over HTTP, HTTPS or FTP. inst.rescue Boot into a rescue system running from a RAM disk that can be used to fix and restore an installed system. inst.stage2= URL Specifies a path to a tree containing install.img , not to the install.img directly. Otherwise, follows the same syntax as inst.repo= . If inst.stage2 is specified, it typically takes precedence over other methods of finding install.img . However, if Anaconda finds install.img on local media, the inst.stage2 URL will be ignored. If inst.stage2 is not specified and install.img cannot be found locally, Anaconda looks to the location given by inst.repo= or method= . If only inst.stage2= is given without inst.repo= or method= , Anaconda uses whatever repos the installed system would have enabled by default for installation. 
Use the option multiple times to specify multiple HTTP, HTTPS or FTP sources. The HTTP, HTTPS or FTP paths are then tried sequentially until one succeeds: inst.syslog= IP/hostname [: port ] Sends log messages to a remote syslog server. The boot parameters described here are the most useful for installations and trouble shooting on 64-bit IBM Z, but only a subset of those that influence the installation program. 9.7.6. Sample parameter file and CMS configuration file on 64-bit IBM Z To change the parameter file, begin by extending the shipped generic.prm file. Example of generic.prm file: Example of redhat.conf file configuring a QETH network device (pointed to by CMSCONFFILE in generic.prm ): 9.7.7. Using parameter and configuration files on 64-bit IBM Z The 64-bit IBM Z architecture can use a customized parameter file to pass boot parameters to the kernel and the installation program. You need to change the parameter file if you want to: Install unattended with Kickstart. Choose non-default installation settings that are not accessible through the installation program's interactive user interface, such as rescue mode. The parameter file can be used to set up networking non-interactively before the installation program ( Anaconda ) starts. The kernel parameter file is limited to 3754 bytes plus an end-of-line character. The parameter file can be variable or fixed record format. Fixed record format increases the file size by padding each line up to the record length. Should you encounter problems with the installation program not recognizing all specified parameters in LPAR environments, you can try to put all parameters in one single line or start and end each line with a space character. The parameter file contains kernel parameters, such as ro , and parameters for the installation process, such as vncpassword=test or vnc . 9.8. Preparing an installation in a z/VM guest virtual machine Use the x3270 or c3270 terminal emulator, to log in to z/VM from other Linux systems, or use the IBM 3270 terminal emulator on the 64-bit IBM Z Hardware Management Console (HMC). If you are running Microsoft Windows operating system, there are several options available, and can be found through an internet search. A free native Windows port of c3270 called wc3270 also exists. Ensure you select machine type as ESA for your z/VM VMs, because selecting any other machine types might prevent installing RHEL. See the IBM documentation . Procedure Log on to the z/VM guest virtual machine chosen for the Linux installation. optional: If your 3270 connection is interrupted and you cannot log in again because the session is still active, you can replace the old session with a new one by entering the following command on the z/VM logon screen: + Replace user with the name of the z/VM guest virtual machine. Depending on whether an external security manager, for example RACF, is used, the logon command might vary. If you are not already running CMS (single-user operating system shipped with z/VM) in your guest, boot it now by entering the command: Be sure not to use CMS disks such as your A disk (often device number 0191) as installation targets. To find out which disks are in use by CMS, use the following query: You can use the following CP (z/VM Control Program, which is the z/VM hypervisor) query commands to find out about the device configuration of your z/VM guest virtual machine: Query the available main memory, which is called storage in 64-bit IBM Z terminology. 
Your guest should have at least 1 GiB of main memory. Query available network devices by type: osa OSA - CHPID type OSD, real or virtual (VSWITCH or GuestLAN), both in QDIO mode hsi HiperSockets - CHPID type IQD, real or virtual (GuestLAN type Hipers) lcs LCS - CHPID type OSE For example, to query all of the network device types mentioned above, run: Query available DASDs. Only those that are flagged RW for read-write mode can be used as installation targets: Query available FCP devices (vHBAs):
|
[
"rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno= <number>",
"rd.dasd=0.0.0200 rd.dasd=0.0.0202(ro),0.0.0203(ro:failfast),0.0.0205-0.0.0207",
"rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000 rd.zfcp=0.0.4000",
"ro ramdisk_size=40000 cio_ignore=all,!condev inst.repo=http://example.com/path/to/repository rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0,portname=foo ip=192.168.17.115::192.168.17.254:24:foobar.systemz.example.com:enc600:none nameserver=192.168.17.1 rd.dasd=0.0.0200 rd.dasd=0.0.0202 rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000 rd.zfcp=0.0.5000,0x5005076300dab3e9,0x5022000000000000 inst.ks=http://example.com/path/to/kickstart",
"images/kernel.img 0x00000000 images/initrd.img 0x02000000 images/genericdvd.prm 0x00010480 images/initrd.addrsize 0x00010408",
"qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"",
"SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"",
"DNS=\"10.1.2.3:10.3.2.1\"",
"SEARCHDNS=\"subdomain.domain:domain\"",
"DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"",
"FCP_ n =\" device_bus_ID [ WWPN FCP_LUN ]\"",
"FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\" FCP_2=\"0.0.4000\"",
"inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/",
"ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" inst.vnc inst.repo=http://example.com/path/to/dvd-contents",
"NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"",
"logon user here",
"cp ipl cms",
"query disk",
"cp query virtual storage",
"cp query virtual osa",
"cp query virtual dasd",
"cp query virtual fcp"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/preparing-a-rhel-installation-on-64-bit-ibm-z_rhel-installer
|
Chapter 5. Working with pipelines in JupyterLab
|
Chapter 5. Working with pipelines in JupyterLab 5.1. Overview of pipelines in JupyterLab You can use Elyra to create visual end-to-end pipeline workflows in JupyterLab. Elyra is an extension for JupyterLab that provides you with a Pipeline Editor to create pipeline workflows that can be executed in OpenShift AI. You can access the Elyra extension within JupyterLab when you create the most recent version of one of the following workbench images: Standard Data Science PyTorch TensorFlow TrustyAI AMD ROCm-PyTorch AMD ROCm-TensorFlow The Elyra pipeline editor is only available in specific workbench images. To use Elyra, the workbench must be based on a JupyterLab image. The Elyra extension is not available in code-server or RStudio IDEs. The workbench must also be derived from the Standard Data Science image. It is not available in Minimal Python or CUDA-based workbenches. All other supported JupyterLab-based workbench images have access to the Elyra extension. When you use the Pipeline Editor to visually design your pipelines, minimal coding is required to create and run pipelines. For more information about Elyra, see Elyra Documentation . For more information about the Pipeline Editor, see Visual Pipeline Editor . After you have created your pipeline, you can run it locally in JupyterLab, or remotely using data science pipelines in OpenShift AI. The pipeline creation process consists of the following tasks: Create a data science project that contains a workbench. Create a pipeline server. Create a new pipeline in the Pipeline Editor in JupyterLab. Develop your pipeline by adding Python notebooks or Python scripts and defining their runtime properties. Define execution dependencies. Run or export your pipeline. Before you can run a pipeline in JupyterLab, your pipeline instance must contain a runtime configuration. A runtime configuration defines connectivity information for your pipeline instance and S3-compatible cloud storage. If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the OpenShift AI dashboard, you must create a runtime configuration before you can run your pipeline in JupyterLab. For more information about runtime configurations, see Runtime Configuration . As a prerequisite, before you create a workbench, ensure that you have created and configured a pipeline server within the same data science project as your workbench. You can use S3-compatible cloud storage to make data available to your notebooks and scripts while they are executed. Your cloud storage must be accessible from the machine in your deployment that runs JupyterLab and from the cluster that hosts data science pipelines. Before you create and run pipelines in JupyterLab, ensure that you have your s3-compatible storage credentials readily available. Additional resources Elyra Documentation Visual Pipeline Editor Runtime Configuration . 5.2. Accessing the pipeline editor You can use Elyra to create visual end-to-end pipeline workflows in JupyterLab. Elyra is an extension for JupyterLab that provides you with a Pipeline Editor to create pipeline workflows that can execute in OpenShift AI. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project. 
You have created and configured a pipeline server within the data science project that contains your workbench. Important To ensure that the runtime configuration is created automatically, you must create the pipeline server before you create the workbench. You have created a workbench with a workbench image that contains the Elyra extension (Standard Data Science, TensorFlow, TrustyAI, AMD ROCm-PyTorch, AMD ROCm-TensorFlow, or PyTorch), as described in Creating a workbench and selecting an IDE . You have started the workbench and opened the JupyterLab interface, as described in Accessing your workbench IDE . Important The Elyra pipeline editor is only available in specific workbench images. To use Elyra, the workbench must be based on a JupyterLab image. The Elyra extension is not available in code-server or RStudio IDEs. The workbench must also be derived from the Standard Data Science image. It is not available in Minimal Python or CUDA-based workbenches. All other supported JupyterLab-based workbench images have access to the Elyra extension. You have access to S3-compatible storage. Procedure After you open JupyterLab, confirm that the JupyterLab launcher is automatically displayed. In the Elyra section of the JupyterLab launcher, click the Pipeline Editor tile. The Pipeline Editor opens. Verification You can view the Pipeline Editor in JupyterLab. 5.3. Creating a runtime configuration If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the OpenShift AI dashboard, you must create a runtime configuration before you can run your pipeline in JupyterLab. This enables you to specify connectivity information for your pipeline instance and S3-compatible cloud storage. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have access to S3-compatible cloud storage. You have created a data science project that contains a workbench. You have created and configured a pipeline server within the data science project that contains your workbench. You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch). Procedure In the left sidebar of JupyterLab, click Runtimes ( ). Click the Create new runtime configuration button ( ). The Add new Data Science Pipelines runtime configuration page opens. Complete the relevant fields to define your runtime configuration. In the Display Name field, enter a name for your runtime configuration. Optional: In the Description field, enter a description to define your runtime configuration. Optional: In the Tags field, click Add Tag to define a category for your pipeline instance. Enter a name for the tag and press Enter. Define the credentials of your data science pipeline: In the Data Science Pipelines API Endpoint field, enter the API endpoint of your data science pipeline. Do not specify the pipelines namespace in this field. In the Public Data Science Pipelines API Endpoint field, enter the public API endpoint of your data science pipeline. Important You can obtain the data science pipelines API endpoint from the Data Science Pipelines Runs page in the dashboard. Copy the relevant endpoint and enter it in the Public Data Science Pipelines API Endpoint field. 
Optional: In the Data Science Pipelines User Namespace field, enter the relevant user namespace to run pipelines. From the Authentication Type list, select the authentication type required to authenticate your pipeline. Important If you created a notebook directly from the Jupyter tile on the dashboard, select EXISTING_BEARER_TOKEN from the Authentication Type list. In the Data Science Pipelines API Endpoint Username field, enter the user name required for the authentication type. In the Data Science Pipelines API Endpoint Password Or Token , enter the password or token required for the authentication type. Important To obtain the data science pipelines API endpoint token, in the upper-right corner of the OpenShift web console, click your user name and select Copy login command . After you have logged in, click Display token and copy the value of --token= from the Log in with this token command. Define the connectivity information of your S3-compatible storage: In the Cloud Object Storage Endpoint field, enter the endpoint of your S3-compatible storage. For more information about Amazon s3 endpoints, see Amazon Simple Storage Service endpoints and quotas . Optional: In the Public Cloud Object Storage Endpoint field, enter the URL of your S3-compatible storage. In the Cloud Object Storage Bucket Name field, enter the name of the bucket where your pipeline artifacts are stored. If the bucket name does not exist, it is created automatically. From the Cloud Object Storage Authentication Type list, select the authentication type required to access to your S3-compatible cloud storage. If you use AWS S3 buckets, select KUBERNETES_SECRET from the list. In the Cloud Object Storage Credentials Secret field, enter the secret that contains the storage user name and password. This secret is defined in the relevant user namespace, if applicable. In addition, it must be stored on the cluster that hosts your pipeline runtime. In the Cloud Object Storage Username field, enter the user name to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, enter your AWS Secret Access Key ID. In the Cloud Object Storage Password field, enter the password to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, enter your AWS Secret Access Key. Click Save & Close . Verification The runtime configuration that you created appears on the Runtimes tab ( ) in the left sidebar of JupyterLab. 5.4. Updating a runtime configuration To ensure that your runtime configuration is accurate and updated, you can change the settings of an existing runtime configuration. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have access to S3-compatible storage. You have created a data science project that contains a workbench. You have created and configured a pipeline server within the data science project that contains your workbench. A previously created runtime configuration is available in the JupyterLab interface. You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch). Procedure In the left sidebar of JupyterLab, click Runtimes ( ). Hover the cursor over the runtime configuration that you want to update and click the Edit button ( ). The Data Science Pipelines runtime configuration page opens. 
Fill in the relevant fields to update your runtime configuration. In the Display Name field, update name for your runtime configuration, if applicable. Optional: In the Description field, update the description of your runtime configuration, if applicable. Optional: In the Tags field, click Add Tag to define a category for your pipeline instance. Enter a name for the tag and press Enter. Define the credentials of your data science pipeline: In the Data Science Pipelines API Endpoint field, update the API endpoint of your data science pipeline, if applicable. Do not specify the pipelines namespace in this field. In the Public Data Science Pipelines API Endpoint field, update the API endpoint of your data science pipeline, if applicable. Optional: In the Data Science Pipelines User Namespace field, update the relevant user namespace to run pipelines, if applicable. From the Authentication Type list, select a new authentication type required to authenticate your pipeline, if applicable. Important If you created a notebook directly from the Jupyter tile on the dashboard, select EXISTING_BEARER_TOKEN from the Authentication Type list. In the Data Science Pipelines API Endpoint Username field, update the user name required for the authentication type, if applicable. In the Data Science Pipelines API Endpoint Password Or Token , update the password or token required for the authentication type, if applicable. Important To obtain the data science pipelines API endpoint token, in the upper-right corner of the OpenShift web console, click your user name and select Copy login command . After you have logged in, click Display token and copy the value of --token= from the Log in with this token command. Define the connectivity information of your S3-compatible storage: In the Cloud Object Storage Endpoint field, update the endpoint of your S3-compatible storage, if applicable. For more information about Amazon s3 endpoints, see Amazon Simple Storage Service endpoints and quotas . Optional: In the Public Cloud Object Storage Endpoint field, update the URL of your S3-compatible storage, if applicable. In the Cloud Object Storage Bucket Name field, update the name of the bucket where your pipeline artifacts are stored, if applicable. If the bucket name does not exist, it is created automatically. From the Cloud Object Storage Authentication Type list, update the authentication type required to access to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, you must select USER_CREDENTIALS from the list. Optional: In the Cloud Object Storage Credentials Secret field, update the secret that contains the storage user name and password, if applicable. This secret is defined in the relevant user namespace. You must save the secret on the cluster that hosts your pipeline runtime. Optional: In the Cloud Object Storage Username field, update the user name to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, update your AWS Secret Access Key ID. Optional: In the Cloud Object Storage Password field, update the password to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, update your AWS Secret Access Key. Click Save & Close . Verification The runtime configuration that you updated is shown on the Runtimes tab ( ) in the left sidebar of JupyterLab. 5.5. Deleting a runtime configuration After you have finished using your runtime configuration, you can delete it from the JupyterLab interface. 
After deleting a runtime configuration, you cannot run pipelines in JupyterLab until you create another runtime configuration. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that contains a workbench. You have created and configured a pipeline server within the data science project that contains your workbench. A previously created runtime configuration is visible in the JupyterLab interface. You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch). Procedure In the left sidebar of JupyterLab, click Runtimes ( ). Hover the cursor over the runtime configuration that you want to delete and click the Delete Item button ( ). A dialog box appears prompting you to confirm the deletion of your runtime configuration. Click OK . Verification The runtime configuration that you deleted is no longer shown on the Runtimes tab ( ) in the left sidebar of JupyterLab. 5.6. Duplicating a runtime configuration To prevent you from re-creating runtime configurations with similar values in their entirety, you can duplicate an existing runtime configuration in the JupyterLab interface. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that contains a workbench. You have created and configured a pipeline server within the data science project that contains your workbench. A previously created runtime configuration is visible in the JupyterLab interface. You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch). Procedure In the left sidebar of JupyterLab, click Runtimes ( ). Hover the cursor over the runtime configuration that you want to duplicate and click the Duplicate button ( ). Verification The runtime configuration that you duplicated is shown on the Runtimes tab ( ) in the left sidebar of JupyterLab. 5.7. Running a pipeline in JupyterLab You can run pipelines that you have created in JupyterLab from the Pipeline Editor user interface. Before you can run a pipeline, you must create a data science project and a pipeline server. After you create a pipeline server, you must create a workbench within the same project as your pipeline server. Your pipeline instance in JupyterLab must contain a runtime configuration. If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the OpenShift AI dashboard, you must create a runtime configuration before you can run your pipeline in JupyterLab. A runtime configuration defines connectivity information for your pipeline instance and S3-compatible cloud storage. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have access to S3-compatible storage. You have created a pipeline in JupyterLab. You have opened your pipeline in the Pipeline Editor in JupyterLab. Your pipeline instance contains a runtime configuration. 
You have created and configured a pipeline server within the data science project that contains your workbench. You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch). Procedure In the Pipeline Editor user interface, click Run Pipeline ( ). The Run Pipeline dialog appears. The Pipeline Name field is automatically populated with the pipeline file name. Note After you run your pipeline, a pipeline experiment containing your pipeline run is automatically created on the Experiments Experiments and runs page in the OpenShift AI dashboard. The experiment name matches the name that you assigned to the pipeline. Define the settings for your pipeline run. From the Runtime Configuration list, select the relevant runtime configuration to run your pipeline. Optional: Configure your pipeline parameters, if applicable. If your pipeline contains nodes that reference pipeline parameters, you can change the default parameter values. If a parameter is required and has no default value, you must enter a value. Click OK . Verification You can view the details of your pipeline run on the Experiments Experiments and runs page in the OpenShift AI dashboard. You can view the output artifacts of your pipeline run. The artifacts are stored in your designated object storage bucket. 5.8. Exporting a pipeline in JupyterLab You can export pipelines that you have created in JupyterLab. When you export a pipeline, the pipeline is prepared for later execution, but is not uploaded or executed immediately. During the export process, any package dependencies are uploaded to S3-compatible storage. Also, pipeline code is generated for the target runtime. Before you can export a pipeline, you must create a data science project and a pipeline server. After you create a pipeline server, you must create a workbench within the same project as your pipeline server. In addition, your pipeline instance in JupyterLab must contain a runtime configuration. If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the OpenShift AI dashboard, you must create a runtime configuration before you can export your pipeline in JupyterLab. A runtime configuration defines connectivity information for your pipeline instance and S3-compatible cloud storage. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that contains a workbench. You have created and configured a pipeline server within the data science project that contains your workbench. You have access to S3-compatible storage. You have a created a pipeline in JupyterLab. You have opened your pipeline in the Pipeline Editor in JupyterLab. Your pipeline instance contains a runtime configuration. You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch). Procedure In the Pipeline Editor user interface, click Export Pipeline ( ). The Export Pipeline dialog appears. The Pipeline Name field is automatically populated with the pipeline file name. Define the settings to export your pipeline. 
From the Runtime Configuration list, select the relevant runtime configuration to export your pipeline. From the Export Pipeline as list, select an appropriate file format. In the Export Filename field, enter a file name for the exported pipeline. Select the Replace if file already exists check box to replace an existing file of the same name as the pipeline you are exporting. Optional: Configure your pipeline parameters, if applicable. If your pipeline contains nodes that reference pipeline parameters, you can change the default parameter values. If a parameter is required and has no default value, you must enter a value. Click OK . Verification You can view the file containing the pipeline that you exported in your designated object storage bucket.
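As a quick verification from the command line, you can list the exported artifacts with any S3-compatible client. The following is a sketch that assumes the AWS CLI is installed; the bucket name and endpoint URL are placeholders for the values defined in your runtime configuration.

```
# List the objects uploaded for the exported pipeline.
# Replace the bucket name and endpoint URL with the values from your runtime configuration.
aws s3 ls s3://my-pipeline-bucket/ --recursive --endpoint-url https://s3.example.com
```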
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_science_pipelines/working-with-pipelines-in-jupyterlab_ds-pipelines
|
Chapter 10. Deprecated functionality
|
Chapter 10. Deprecated functionality Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 9. However, these devices will likely not be supported in the major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 10.1. Installer and image creation Deprecated Kickstart commands The following Kickstart commands have been deprecated: timezone --ntpservers timezone --nontp logging --level %packages --excludeWeakdeps %packages --instLangs %anaconda pwpolicy Note that where only specific options are listed, the base command and its other options are still available and not deprecated. Using the deprecated commands in Kickstart files prints a warning in the logs. You can turn the deprecated command warnings into errors with the inst.ksstrict boot option. (BZ#1899167) 10.2. Shells and command-line tools Setting the TMPDIR variable in the ReaR configuration file is deprecated Setting the TMPDIR environment variable in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file), by using a statement such as export TMPDIR=... , does not work and is deprecated. To specify a custom directory for ReaR temporary files, export the variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=... statement and then execute the rear command in the same shell session or script. Jira:RHELDOCS-18049 10.3. Security SHA-1 is deprecated for cryptographic purposes The usage of the SHA-1 message digest for cryptographic purposes has been deprecated in RHEL 9. The digest produced by SHA-1 is not considered secure because of many documented successful attacks based on finding hash collisions. The RHEL core crypto components no longer create signatures using SHA-1 by default. Applications in RHEL 9 have been updated to avoid using SHA-1 in security-relevant use cases. Among the exceptions, the HMAC-SHA1 message authentication code and the Universal Unique Identifier (UUID) values can still be created using SHA-1 because these use cases do not currently pose security risks. SHA-1 also can be used in limited cases connected with important interoperability and compatibility concerns, such as Kerberos and WPA-2. See the List of RHEL applications using cryptography that is not compliant with FIPS 140-3 section in the RHEL 9 Security hardening document for more details. If your scenario requires the use of SHA-1 for verifying existing or third-party cryptographic signatures, you can enable it by entering the following command: Alternatively, you can switch the system-wide crypto policies to the LEGACY policy. Note that LEGACY also enables many other algorithms that are not secure. 
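As a minimal sketch of the two approaches described above, assuming the standard crypto-policies tooling shipped with RHEL 9:

```
# Re-enable SHA-1 for signature verification by applying the SHA1 subpolicy
# on top of the DEFAULT system-wide cryptographic policy.
update-crypto-policies --set DEFAULT:SHA1

# Alternatively, switch to the LEGACY policy; note that LEGACY also enables
# other algorithms that are not considered secure.
update-crypto-policies --set LEGACY
```

A restart of the system is recommended after changing the system-wide cryptographic policy so that all services pick up the new settings.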
(JIRA:RHELPLAN-110763) SCP is deprecated in RHEL 9 The secure copy protocol (SCP) is deprecated because it has known security vulnerabilities. The SCP API remains available for the RHEL 9 lifecycle but using it reduces system security. In the scp utility, SCP is replaced by the SSH File Transfer Protocol (SFTP) by default. The OpenSSH suite does not use SCP in RHEL 9. SCP is deprecated in the libssh library. (JIRA:RHELPLAN-99136) Digest-MD5 in SASL is deprecated The Digest-MD5 authentication mechanism in the Simple Authentication Security Layer (SASL) framework is deprecated, and it might be removed from the cyrus-sasl packages in a future major release. (BZ#1995600) OpenSSL deprecates MD2, MD4, MDC2, Whirlpool, RIPEMD160, Blowfish, CAST, DES, IDEA, RC2, RC4, RC5, SEED, and PBKDF1 The OpenSSL project has deprecated a set of cryptographic algorithms because they are insecure, uncommonly used, or both. Red Hat also discourages the use of those algorithms, and RHEL 9 provides them for migrating encrypted data to use new algorithms. Users must not depend on those algorithms for the security of their systems. The implementations of the following algorithms have been moved to the legacy provider in OpenSSL: MD2, MD4, MDC2, Whirlpool, RIPEMD160, Blowfish, CAST, DES, IDEA, RC2, RC4, RC5, SEED, and PBKDF1. See the /etc/pki/tls/openssl.cnf configuration file for instructions on how to load the legacy provider and enable support for the deprecated algorithms. ( BZ#1975836 ) /etc/system-fips is now deprecated Support for indicating FIPS mode through the /etc/system-fips file has been removed, and the file will not be included in future versions of RHEL. To install RHEL in FIPS mode, add the fips=1 parameter to the kernel command line during the system installation. You can check whether RHEL operates in FIPS mode by using the fips-mode-setup --check command. (JIRA:RHELPLAN-103232) libcrypt.so.1 is now deprecated The libcrypt.so.1 library is now deprecated, and it might be removed in a future version of RHEL. ( BZ#2034569 ) fapolicyd.rules is deprecated The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules replaces the /etc/fapolicyd/fapolicyd.rules file. The fagenrules script now merges all component rule files in this directory to the /etc/fapolicyd/compiled.rules file. Rules in /etc/fapolicyd/fapolicyd.trust are still processed by the fapolicyd framework but only for ensuring backward compatibility. ( BZ#2054740 ) 10.4. Networking Network teams are deprecated in RHEL 9 The teamd service and the libteam library are deprecated in Red Hat Enterprise Linux 9 and will be removed in the major release. As a replacement, configure a bond instead of a network team. Red Hat focuses its efforts on kernel-based bonding to avoid maintaining two features, bonds and teams, that have similar functions. The bonding code has a high customer adoption, is robust, and has an active community development. As a result, the bonding code receives enhancements and updates. For details about how to migrate a team to a bond, see Migrating a network team configuration to network bond . (BZ#1935544) NetworkManager connection profiles in ifcfg format are deprecated In RHEL 9.0 and later, connection profiles in ifcfg format are deprecated. The major RHEL release will remove the support for this format. However, in RHEL 9, NetworkManager still processes and updates existing profiles in this format if you modify them. 
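Relating to the fapolicyd change above, the following is a minimal sketch of working with the new rules layout, assuming the fagenrules helper shipped with the fapolicyd packages; the rule and file name are illustrative only.

```
# Place a custom rule in a numbered component file; files are merged in lexical order.
echo 'allow perm=any all : dir=/opt/myapp/' > /etc/fapolicyd/rules.d/80-myapp.rules

# Merge all component files into /etc/fapolicyd/compiled.rules.
fagenrules --load

# Restart the service so that the merged rules take effect.
systemctl restart fapolicyd
```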
By default, NetworkManager now stores connection profiles in keyfile format in the /etc/NetworkManager/system-connections/ directory. Unlike the ifcfg format, the keyfile format supports all connection settings that NetworkManager provides. For further details about the keyfile format and how to migrate profiles, see NetworkManager connection profiles in keyfile format . (BZ#1894877) The iptables back end in firewalld is deprecated In RHEL 9, the iptables framework is deprecated. As a consequence, the iptables backend and the direct interface in firewalld are also deprecated. Instead of the direct interface you can use the native features in firewalld to configure the required rules. ( BZ#2089200 ) 10.5. Kernel ATM encapsulation is deprecated in RHEL 9 Asynchronous Transfer Mode (ATM) encapsulation enables Layer-2 (Point-to-Point Protocol, Ethernet) or Layer-3 (IP) connectivity for the ATM Adaptation Layer 5 (AAL-5). Red Hat has not been providing support for ATM NIC drivers since RHEL 7. The support for ATM implementation is being dropped in RHEL 9. These protocols are currently used only in chipsets, which support the ADSL technology and are being phased out by manufacturers. Therefore, ATM encapsulation is deprecated in Red Hat Enterprise Linux 9. For more information, see PPP Over AAL5 , Multiprotocol Encapsulation over ATM Adaptation Layer 5 , and Classical IP and ARP over ATM . ( BZ#2058153 ) 10.6. File systems and storage lvm2-activation-generator and its generated services removed in RHEL 9.0 The lvm2-activation-generator program and its generated services lvm2-activation , lvm2-activation-early , and lvm2-activation-net are removed in RHEL 9.0. The lvm.conf event_activation setting, used to activate the services, is no longer functional. The only method for auto activating volume groups is event based activation. ( BZ#2038183 ) 10.7. Dynamic programming languages, web and database servers libdb has been deprecated RHEL 8 and RHEL 9 currently provide Berkeley DB ( libdb ) version 5.3.28, which is distributed under the LGPLv2 license. The upstream Berkeley DB version 6 is available under the AGPLv3 license, which is more restrictive. The libdb package is deprecated as of RHEL 9 and might not be available in future major RHEL releases. In addition, cryptographic algorithms have been removed from libdb in RHEL 9 and multiple libdb dependencies have been removed from RHEL 9. Users of libdb are advised to migrate to a different key-value database. For more information, see the Knowledgebase article Available replacements for the deprecated Berkeley DB (libdb) in RHEL . (BZ#1927780, BZ#1974657 , JIRA:RHELPLAN-80695) 10.8. Compilers and development tools Smaller size of keys than 2048 are deprecated by openssl 3.0 Key sizes smaller than 2048 bits are deprecated by openssl 3.0 and no longer work in Go's FIPS mode. ( BZ#2111072 ) Some PKCS1 v1.5 modes are now deprecated Some PKCS1 v1.5 modes are not approved in FIPS-140-3 for encryption and are disabled. They will no longer work in Go's FIPS mode. (BZ#2092016) 10.9. Identity Management SHA-1 in OpenDNSSec is now deprecated OpenDNSSec supports exporting Digital Signatures and authentication records using the SHA-1 algorithm. The use of the SHA-1 algorithm is no longer supported. With the RHEL 9 release, SHA-1 in OpenDNSSec is deprecated and it might be removed in a future minor release. Additionally, OpenDNSSec support is limited to its integration with Red Hat Identity Management. OpenDNSSec is not supported standalone. 
( BZ#1979521 ) The SSSD implicit files provider domain is disabled by default The SSSD implicit files provider domain, which retrieves user information from local files such as /etc/shadow and group information from /etc/groups , is now disabled by default. To retrieve user and group information from local files with SSSD: Configure SSSD. Choose one of the following options: Explicitly configure a local domain with the id_provider=files option in the sssd.conf configuration file. Enable the files provider by setting enable_files_domain=true in the sssd.conf configuration file. Configure the name services switch. (JIRA:RHELPLAN-100639) -h and -p options were deprecated in OpenLDAP client utilities. The upstream OpenLDAP project has deprecated the -h and -p options in its utilities, and recommends using the -H option instead to specify the LDAP URI. As a consequence, RHEL 9 has deprecated these two options in all OpenLDAP client utilities. The -h and -p options will be removed from RHEL products in future releases. (JIRA:RHELPLAN-137660) The SMB1 protocol is deprecated in Samba Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve the security, by default, SMB1 is disabled in the Samba server and client utilities. Jira:RHELDOCS-16612 10.10. Desktop GTK 2 is now deprecated The legacy GTK 2 toolkit and the following, related packages have been deprecated: adwaita-gtk2-theme gnome-common gtk2 gtk2-immodules hexchat Several other packages currently depend on GTK 2. These have been modified so that they no longer depend on the deprecated packages in a future major RHEL release. If you maintain an application that uses GTK 2, Red Hat recommends that you port the application to GTK 4. (JIRA:RHELPLAN-131882) 10.11. Graphics infrastructures X.org Server is now deprecated The X.org display server is deprecated, and will be removed in a future major RHEL release. The default desktop session is now the Wayland session in most cases. The X11 protocol remains fully supported using the XWayland back end. As a result, applications that require X11 can run in the Wayland session. Red Hat is working on resolving the remaining problems and gaps in the Wayland session. For the outstanding problems in Wayland , see the Known issues section. You can switch your user session back to the X.org back end. For more information, see Selecting GNOME environment and display protocol . (JIRA:RHELPLAN-121048) Motif has been deprecated The Motif widget toolkit has been deprecated in RHEL, because development in the upstream Motif community is inactive. The following Motif packages have been deprecated, including their development and debugging variants: motif openmotif openmotif21 openmotif22 Additionally, the motif-static package has been removed. Red Hat recommends using the GTK toolkit as a replacement. GTK is more maintainable and provides new features compared to Motif. (JIRA:RHELPLAN-98983) 10.12. Red Hat Enterprise Linux system roles The networking system role displays a deprecation warning when configuring teams on RHEL 9 nodes The network teaming capabilities have been deprecated in RHEL 9. As a result, using the networking RHEL system role on an RHEL 8 controller to configure a network team on RHEL 9 nodes, shows a warning about its deprecation. ( BZ#1999770 ) 10.13. 
Virtualization SecureBoot image verification using SHA1-based signatures is deprecated Performing SecureBoot image verification using SHA1-based signatures on UEFI (PE/COFF) executables has become deprecated. Instead, Red Hat recommends using signatures based on the SHA2 algorithm, or later. (BZ#1935497) Limited support for virtual machine snapshots Creating snapshots of virtual machines (VMs) is currently only supported for VMs not using the UEFI firmware. In addition, during the snapshot operation, the QEMU monitor may become blocked, which negatively impacts the hypervisor performance for certain workloads. Also note that the current mechanism of creating VM snapshots has been deprecated, and Red Hat does not recommend using VM snapshots in a production environment. However, a new VM snapshot mechanism is under development and is planned to be fully implemented in a future minor release of RHEL 9. (JIRA:RHELPLAN-15509, BZ#1621944) virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. Note, however, that some features available in virt-manager may not be yet available in the RHEL web console. (JIRA:RHELPLAN-10304) libvirtd has become deprecated The monolithic libvirt daemon, libvirtd , has been deprecated in RHEL 9, and will be removed in a future major release of RHEL. Note that you can still use libvirtd for managing virtualization on your hypervisor, but Red Hat recommends switching to the newly introduced modular libvirt daemons. For instructions and details, see the RHEL 9 Configuring and Managing Virtualization document. (JIRA:RHELPLAN-113995) The virtual floppy driver has become deprecated The isa-fdc driver, which controls virtual floppy disk devices, is now deprecated, and will become unsupported in a future release of RHEL. Therefore, to ensure forward compatibility with migrated virtual machines (VMs), Red Hat discourages using floppy disk devices in VMs hosted on RHEL 9. ( BZ#1965079 ) qcow2-v2 image format is deprecated With RHEL 9, the qcow2-v2 format for virtual disk images has become deprecated, and will become unsupported in a future major release of RHEL. In addition, the RHEL 9 Image Builder cannot create disk images in the qcow2-v2 format. Instead of qcow2-v2, Red Hat strongly recommends using qcow2-v3. To convert a qcow2-v2 image to a later format version, use the qemu-img amend command. ( BZ#1951814 ) Legacy CPU models are now deprecated A significant number of CPU models have become deprecated and will become unsupported for use in virtual machines (VMs) in a future major release of RHEL. The deprecated models are as follows: For Intel: models prior to Intel Xeon 55xx and 75xx Processor families (also known as Nehalem) For AMD: models prior to AMD Opteron G4 For IBM Z: models prior to IBM z14 To check whether your VM is using a deprecated CPU model, use the virsh dominfo utility, and look for a line similar to the following in the Messages section: ( BZ#2060839 ) 10.14. Containers Running RHEL 9 containers on a RHEL 7 host is not supported Running RHEL 9 containers on a RHEL 7 host is not supported. It might work, but it is not guaranteed. For more information, see Red Hat Enterprise Linux Container Compatibility Matrix . 
(JIRA:RHELPLAN-100087) SHA1 hash algorithm within Podman has been deprecated The SHA1 algorithm used to generate the filename of the rootless network namespace is no longer supported in Podman. Therefore, rootless containers started before updating to Podman 4.1.1 or later have to be restarted if they are joined to a network (and not just using slirp4netns ) to ensure they can connect to containers started after the upgrade. (BZ#2069279) rhel9/pause has been deprecated The rhel9/pause container image has been deprecated. ( BZ#2106816 ) 10.15. Deprecated packages This section lists packages that have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux. For changes to packages between RHEL 8 and RHEL 9, see Changes to packages in the Considerations in adopting RHEL 9 document. Important The support status of deprecated packages remains unchanged within RHEL 9. For more information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . The following packages have been deprecated in RHEL 9: iptables-devel iptables-libs iptables-nft iptables-nft-services iptables-utils libdb mcpp mod_auth_mellon python3-pytz xorg-x11-server-Xorg
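The chapter above mentions several commands in passing; the following short sketch collects them as an administrator might run them, purely as an illustration. The VM name is a placeholder, and changing the system-wide crypto policy normally requires a reboot to take full effect.
# Show the currently active system-wide cryptographic policy.
$ update-crypto-policies --show
# Re-enable SHA-1 for signature verification only, as described in the Security section.
$ update-crypto-policies --set DEFAULT:SHA1
# Check whether the system is running in FIPS mode.
$ fips-mode-setup --check
# Look for deprecated CPU model warnings in the Messages section of a VM (name is a placeholder).
$ virsh dominfo <vm-name>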
|
[
"update-crypto-policies --set DEFAULT:SHA1",
"[domain/local] id_provider=files",
"[sssd] enable_files_domain = true",
"authselect enable-feature with-files-provider",
"tainted: use of deprecated configuration settings deprecated configuration: CPU model 'i486'"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/deprecated_functionality
|
Chapter 8. Deploying clustered Redis on Red Hat Ansible Automation Platform Operator
|
Chapter 8. Deploying clustered Redis on Red Hat Ansible Automation Platform Operator When you create an Ansible Automation Platform instance through the Ansible Automation Platform Operator, standalone Redis is assigned by default. To deploy clustered Redis, use the following procedure. For more information about Redis, refer to Caching and queueing system in the Planning your installation guide. Prerequisites You have installed an Ansible Automation Platform Operator deployment. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Details tab. On the Ansible Automation Platform tile, click Create instance . For existing instances, you can instead edit the YAML view by clicking the ... icon and then Edit AnsibleAutomationPlatform : change the redis_mode value to "cluster", then click Reload and Save . Click to expand Advanced configuration . From the Redis Mode list, select Cluster . Configure the rest of your instance as necessary, then click Create . Your instance deploys with clustered Redis and 6 Redis replicas by default.
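For an existing instance, the same change can also be applied from the command line. The following is a sketch only: it assumes the instance is named aap and that the Redis mode field is exposed at .spec.redis_mode, as the YAML view in the procedure suggests.
# Switch an existing AnsibleAutomationPlatform instance to clustered Redis (field path assumed).
$ oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"redis_mode":"cluster"}}'
# Watch the deployment roll out the Redis replicas.
$ oc get pods -w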
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/operator-deploy-redis
|
C.3. Selection Criteria Fields
|
C.3. Selection Criteria Fields This section describes the logical and physical volume selection criteria fields you can specify. Table C.5, "Logical Volume Fields" describes the logical volume fields and their field types. Table C.5. Logical Volume Fields Logical Volume Field Description Field Type lv_uuid Unique identifier string lv_name Name (logical volumes created for internal use are enclosed in brackets) string lv_full_name Full name of logical volume including its volume group, namely VG/LV string lv_path Full pathname for logical volume (blank for internal logical volumes) string lv_dm_path Internal device mapper pathname for logical volume (in /dev/mapper directory) string lv_parent For logical volumes that are components of another logical volume, the parent logical volume string lv_layout logical volume layout string list lv_role logical volume role string list lv_initial_image_sync Set if mirror/RAID images underwent initial resynchronization number lv_image_synced Set if mirror/RAID image is synchronized number lv_merging Set if snapshot logical volume is being merged to origin number lv_converting Set if logical volume is being converted number lv_allocation_policy logical volume allocation policy string lv_allocation_locked Set if logical volume is locked against allocation changes number lv_fixed_minor Set if logical volume has fixed minor number assigned number lv_merge_failed Set if snapshot merge failed number lv_snapshot_invalid Set if snapshot logical volume is invalid number lv_skip_activation Set if logical volume is skipped on activation number lv_when_full For thin pools, behavior when full string lv_active Active state of the logical volume string lv_active_locally Set if the logical volume is active locally number lv_active_remotely Set if the logical volume is active remotely number lv_active_exclusively Set if the logical volume is active exclusively number lv_major Persistent major number or -1 if not persistent number lv_minor Persistent minor number or -1 if not persistent number lv_read_ahead Read ahead setting in current units size lv_size Size of logical volume in current units size lv_metadata_size For thin and cache pools, the size of the logical volume that holds the metadata size seg_count Number of segments in logical volume number origin For snapshots, the origin device of this logical volume string origin_size For snapshots, the size of the origin device of this logical volume size data_percent For snapshot and thin pools and volumes, the percentage full if logical volume is active percent snap_percent For snapshots, the percentage full if logical volume is active percent metadata_percent For thin pools, the percentage of metadata full if logical volume is active percent copy_percent For RAID, mirrors and pvmove, current percentage in-sync percent sync_percent For RAID, mirrors and pvmove, current percentage in-sync percent raid_mismatch_count For RAID, number of mismatches found or repaired number raid_sync_action For RAID, the current synchronization action being performed string raid_write_behind For RAID1, the number of outstanding writes allowed to writemostly devices number raid_min_recovery_rate For RAID1, the minimum recovery I/O load in kiB/sec/disk number raid_max_recovery_rate For RAID1, the maximum recovery I/O load in kiB/sec/disk number move_pv For pvmove, source physical volume of temporary logical volume created by pvmove string convert_lv For lvconvert, name of temporary logical volume created by lvconvert string mirror_log For 
mirrors, the logical volume holding the synchronization log string data_lv For thin and cache pools, the logical volume holding the associated data string metadata_lv For thin and cache pools, the logical volume holding the associated metadata string pool_lv For thin volumes, the thin pool logical volume for this volume string lv_tags Tags, if any string list lv_profile Configuration profile attached to this logical volume string lv_time Creation time of the logical volume, if known time lv_host Creation host of the logical volume, if known string lv_modules Kernel device-mapper modules required for this logical volume string list Table C.6, "Logical Volume Device Combined Info and Status Fields" describes the logical volume device fields that combine both logical device info and logical device status. Table C.6. Logical Volume Device Combined Info and Status Fields Logical Volume Field Description Field Type lv_attr Selects according to both logical volume device info as well as logical volume status. string Table C.7, "Logical Volume Device Info Fields" describes the logical volume device info fields and their field types. Table C.7. Logical Volume Device Info Fields Logical Volume Field Description Field Type lv_kernel_major Currently assigned major number or -1 if logical volume is not active number lv_kernel_minor Currently assigned minor number or -1 if logical volume is not active number lv_kernel_read_ahead Currently-in-use read ahead setting in current units size lv_permissions logical volume permissions string lv_suspended Set if logical volume is suspended number lv_live_table Set if logical volume has live table present number lv_inactive_table Set if logical volume has inactive table present number lv_device_open Set if logical volume device is open number Table C.8, "Logical Volume Device Status Fields" describes the logical volume device status fields and their field types. Table C.8. Logical Volume Device Status Fields Logical Volume Field Description Field Type cache_total_blocks Total cache blocks number cache_used_blocks Used cache blocks number cache_dirty_blocks Dirty cache blocks number cache_read_hits Cache read hits number cache_read_misses Cache read misses number cache_write_hits Cache write hits number cache_write_misses Cache write misses number lv_health_status logical volume health status string Table C.9, "Physical Volume Label Fields" describes the physical volume label fields and their field types. Table C.9. Physical Volume Label Fields Physical Volume Field Description Field Type pv_fmt Type of metadata string pv_uuid Unique identifier string dev_size Size of underlying device in current units size pv_name Name string pv_mda_free Free metadata area space on this device in current units size pv_mda_size Size of smallest metadata area on this device in current units size Table C.5, "Logical Volume Fields" describes the physical volume fields and their field types. Table C.10. 
Physical Volume Fields Physical Volume Field Description Field Type pe_start Offset to the start of data on the underlying device number pv_size Size of physical volume in current units size pv_free Total amount of unallocated space in current units size pv_used Total amount of allocated space in current units size pv_attr Various attributes string pv_allocatable Set if this device can be used for allocation number pv_exported Set if this device is exported number pv_missing Set if this device is missing in system number pv_pe_count Total number of physical extents number pv_pe_alloc_count Total number of allocated physical extents number pv_tags Tags, if any string list pv_mda_count Number of metadata areas on this device number pv_mda_used_count Number of metadata areas in use on this device number pv_ba_start Offset to the start of PV Bootloader Area on the underlying device in current units size pv_ba_size Size of PV Bootloader Area in current units size Table C.11, "Volume Group Fields" describes the volume group fields and their field types. Table C.11. Volume Group Fields Volume Group Field Description Field Type vg_fmt Type of metadata string vg_uuid Unique identifier string vg_name Name string vg_attr Various attributes string vg_permissions Volume group permissions string vg_extendable Set if volume group is extendable number vg_exported Set if volume group is exported number vg_partial Set if volume group is partial number vg_allocation_policy Volume group allocation policy string vg_clustered Set if volume group is clustered number vg_size Total size of volume group in current units size vg_free Total amount of free space in current units size vg_sysid System ID of the volume group indicating which host owns it string vg_systemid System ID of the volume group indicating which host owns it string vg_extent_size Size of physical extents in current units size vg_extent_count Total number of physical extents number vg_free_count Total number of unallocated physical extents number max_lv Maximum number of logical volumes allowed in volume group or 0 if unlimited number max_pv Maximum number of physical volumes allowed in volume group or 0 if unlimited number pv_count Number of physical volumes number lv_count Number of logical volumes number snap_count Number of snapshots number vg_seqno Revision number of internal metadata - incremented whenever it changes number vg_tags Tags, if any string list vg_profile Configuration profile attached to this volume group string vg_mda_count Number of metadata areas on this volume group number vg_mda_used_count Number of metadata areas in use on this volume group number vg_mda_free Free metadata area space for this volume group in current units size vg_mda_size Size of smallest metadata area for this volume group in current units size vg_mda_copies Target number of in use metadata areas in the volume group number Table C.12, "Logical Volume Segment Fields" describes the logical volume segment fields and their field types. Table C.12. 
Logical Volume Segment Fields Logical Volume Segment Field Description Field Type segtype Type of logical volume segment string stripes Number of stripes or mirror legs number stripesize For stripes, amount of data placed on one device before switching to the size stripe_size For stripes, amount of data placed on one device before switching to the size regionsize For mirrors, the unit of data copied when synchronizing devices size region_size For mirrors, the unit of data copied when synchronizing devices size chunksize For snapshots, the unit of data used when tracking changes size chunk_size For snapshots, the unit of data used when tracking changes size thin_count For thin pools, the number of thin volumes in this pool number discards For thin pools, how discards are handled string cachemode For cache pools, how writes are cached string zero For thin pools, if zeroing is enabled number transaction_id For thin pools, the transaction id number thin_id For thin volumes, the thin device id number seg_start Offset within the logical volume to the start of the segment in current units size seg_start_pe Offset within the logical volume to the start of the segment in physical extents. number seg_size Size of segment in current units size seg_size_pe Size of segment in physical extents size seg_tags Tags, if any string list seg_pe_ranges Ranges of physical extents of underlying devices in command line format string devices Underlying devices used with starting extent numbers string seg_monitor dmeventd monitoring status of the segment string cache_policy The cache policy (cached segments only) string cache_settings Cache settings/parameters (cached segments only) string list Table C.13, "Physical Volume Segment Fields" describes the physical volume segment fields and their field types. Table C.13. Physical Volume Segment Fields Physical Volume Segment Field Description Field Type pvseg_start Physical extent number of start of segment number pvseg_size Number of extents in segment number Table C.14, "Selection Criteria Synonyms" lists the synonyms you can use for field values. These synonyms can be used in selection criteria as well as for values just like their original values. In this table, a field value of "" indicates a blank string, which can be matched by specifying -S 'field_name=""'. In this table, a field indicated by 0 or 1 indicates a binary value. You can specify a --binary option for reporting tools which causes binary fields to display 0 or 1 instead of what is indicated in this table as "some text" or "". Table C.14. 
Selection Criteria Synonyms Field Field Value Synonyms pv_allocatable allocatable 1 pv_allocatable "" 0 pv_exported exported 1 pv_exported "" 0 pv_missing missing 1 pv_missing "" 0 vg_extendable extendable 1 vg_extendable "" 0 vg_exported exported 1 vg_exported "" 0 vg_partial partial 1 vg_partial "" 0 vg_clustered clustered 1 vg_clustered "" 0 vg_permissions writable rw, read-write vg_permissions read-only r, ro vg_mda_copies unmanaged unknown, undefined, undef, -1 lv_initial_image_sync initial image sync sync, 1 lv_initial_image_sync "" 0 lv_image_synced image synced synced, 1 lv_image_synce "" 0 lv_merging merging 1 lv_merging "" 0 lv_converting converting 1 lv_converting "" 0 lv_allocation_locked allocation locked locked, 1 lv_allocation_locked "" 0 lv_fixed_minor fixed minor fixed, 1 lv_fixed_minor "" 0 lv_active_locally active locally active, locally, 1 lv_active_locally "" 0 lv_active_remotely active remotely active, remotely, 1 lv_active_remotely "" 0 lv_active_exclusively active exclusively active, exclusively, 1 lv_active_exclusively "" 0 lv_merge_failed merge failed failed, 1 lv_merge_failed "" 0 lv_snapshot_invalid snapshot invalid invalid, 1 lv_snapshot_invalid "" 0 lv_suspended suspended 1 lv_suspended "" 0 lv_live_table live table present live table, live, 1 lv_live_table "" 0 lv_inactive_table inactive table present inactive table, inactive, 1 lv_inactive_table "" 0 lv_device_open open 1 lv_device_open "" 0 lv_skip_activation skip activation skip, 1 lv_skip_activation "" 0 zero zero 1 zero "" 0 lv_permissions writable rw, read-write lv_permissions read-only r, ro lv_permissions read-only-override ro-override, r-override, R lv_when_full error error when full, error if no space lv_when_full queue queue when full, queue if no space lv_when_full "" undefined cache_policy "" undefined seg_monitor "" undefined lv_health_status "" undefined
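As an illustration of how these fields are used in practice, the following report commands combine -o output fields with -S selection criteria taken from the tables above; the results depend entirely on the volumes present on your system.
# List logical volumes larger than 100MB that are active locally.
$ lvs -o lv_name,vg_name,lv_size,lv_active -S 'lv_size>100m && lv_active_locally=1'
# List physical volumes that are allocatable and still have free space, using a synonym value.
$ pvs -o pv_name,pv_size,pv_free -S 'pv_allocatable=allocatable && pv_free>0'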
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/selection_fields
|
Cluster Observability Operator
|
Cluster Observability Operator OpenShift Container Platform 4.18 Configuring and using the Cluster Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cluster_observability_operator/index
|
Chapter 11. Reinstalling a hyperconverged host
|
Chapter 11. Reinstalling a hyperconverged host Some configuration changes require a hyperconverged host to be reinstalled before the configuration change can take effect. Follow these steps to reinstall a hyperconverged host. Log in to the Administration Portal. Click Compute Hosts . Select the host and click Management > Maintenance > OK to place this host in Maintenance mode. Click Installation > Reinstall to open the Reinstall window. On the General tab, uncheck the Automatically Configure Host firewall checkbox. On the Hosted Engine tab, set the value of Choose hosted engine deployment action to Deploy . Click OK to reinstall the host.
| null |
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/task-reinstall-host
|
Apache Karaf Security Guide
|
Apache Karaf Security Guide Red Hat Fuse 7.13 Secure the Apache Karaf container Red Hat Fuse Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/index
|
5.41. ctdb
|
5.41. ctdb 5.41.1. RHBA-2012:0904 - ctdb bug fix update Updated ctdb packages that fix one bug are now available for Red Hat Enterprise Linux 6. The ctdb packages provide a clustered database based on Samba's Trivial Database (TDB) used to store temporary data. Bug Fix BZ# 794888 Prior to this update, the ctdb working directory, all of its subdirectories, and the files within them were created with incorrect SELinux contexts when the ctdb service was started. With this update, the post-install script creates the ctdb directory, and the command "/sbin/restorecon -R /var/ctdb" now sets the correct SELinux context. All users of ctdb are advised to upgrade to these updated packages, which fix this bug.
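To confirm the behavior described in this erratum on a running system, you can reapply and inspect the SELinux context manually; this is only a verification sketch using the path given in the bug fix text.
# Reapply the default SELinux contexts recursively, as the post-install script does.
$ /sbin/restorecon -R /var/ctdb
# Display the resulting context of the directory.
$ ls -Zd /var/ctdb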
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/ctdb
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/migrating_to_red_hat_build_of_openjdk_21_from_earlier_versions/making-open-source-more-inclusive
|
Chapter 2. Defining Logical Data Units
|
Chapter 2. Defining Logical Data Units Abstract When describing a service in a WSDL contract complex data types are defined as logical units using XML Schema. 2.1. Introduction to Logical Data Units When defining a service, the first thing you must consider is how the data used as parameters for the exposed operations is going to be represented. Unlike applications that are written in a programming language that uses fixed data structures, services must define their data in logical units that can be consumed by any number of applications. This involves two steps: Breaking the data into logical units that can be mapped into the data types used by the physical implementations of the service Combining the logical units into messages that are passed between endpoints to carry out the operations This chapter discusses the first step. Chapter 3, Defining Logical Messages Used by a Service discusses the second step. 2.2. Mapping data into logical data units Overview The interfaces used to implement a service define the data representing operation parameters as XML documents. If you are defining an interface for a service that is already implemented, you must translate the data types of the implemented operations into discreet XML elements that can be assembled into messages. If you are starting from scratch, you must determine the building blocks from which your messages are built, so that they make sense from an implementation standpoint. Available type systems for defining service data units According to the WSDL specification, you can use any type system you choose to define data types in a WSDL contract. However, the W3C specification states that XML Schema is the preferred canonical type system for a WSDL document. Therefore, XML Schema is the intrinsic type system in Apache CXF. XML Schema as a type system XML Schema is used to define how an XML document is structured. This is done by defining the elements that make up the document. These elements can use native XML Schema types, like xsd:int , or they can use types that are defined by the user. User defined types are either built up using combinations of XML elements or they are defined by restricting existing types. By combining type definitions and element definitions you can create intricate XML documents that can contain complex data. When used in WSDL XML Schema defines the structure of the XML document that holds the data used to interact with a service. When defining the data units used by your service, you can define them as types that specify the structure of the message parts. You can also define your data units as elements that make up the message parts. Considerations for creating your data units You might consider simply creating logical data units that map directly to the types you envision using when implementing the service. While this approach works, and closely follows the model of building RPC-style applications, it is not necessarily ideal for building a piece of a service-oriented architecture. The Web Services Interoperability Organization's WS-I basic profile provides a number of guidelines for defining data units and can be accessed at http://www.ws-i.org/Profiles/BasicProfile-1.1-2004-08-24.html#WSDLTYPES . In addition, the W3C also provides the following guidelines for using XML Schema to represent data types in WSDL documents: Use elements, not attributes. Do not use protocol-specific types as base types. 2.3. 
Adding data units to a contract Overview Depending on how you choose to create your WSDL contract, creating new data definitions requires varying amounts of knowledge. The Apache CXF GUI tools provide a number of aids for describing data types using XML Schema. Other XML editors offer different levels of assistance. Regardless of the editor you choose, it is a good idea to have some knowledge about what the resulting contract should look like. Procedure Defining the data used in a WSDL contract involves the following steps: Determine all the data units used in the interface described by the contract. Create a types element in your contract. Create a schema element, shown in Example 2.1, "Schema entry for a WSDL contract" , as a child of the type element. The targetNamespace attribute specifies the namespace under which new data types are defined. Best practice is to also define the namespace that provides access to the target namespace. The remaining entries should not be changed. Example 2.1. Schema entry for a WSDL contract For each complex type that is a collection of elements, define the data type using a complexType element. See Section 2.5.1, "Defining data structures" . For each array, define the data type using a complexType element. See Section 2.5.2, "Defining arrays" . For each complex type that is derived from a simple type, define the data type using a simpleType element. See Section 2.5.4, "Defining types by restriction" . For each enumerated type, define the data type using a simpleType element. See Section 2.5.5, "Defining enumerated types" . For each element, define it using an element element. See Section 2.6, "Defining elements" . 2.4. XML Schema simple types Overview If a message part is going to be of a simple type it is not necessary to create a type definition for it. However, the complex types used by the interfaces defined in the contract are defined using simple types. Entering simple types XML Schema simple types are mainly placed in the element elements used in the types section of your contract. They are also used in the base attribute of restriction elements and extension elements. Simple types are always entered using the xsd prefix. For example, to specify that an element is of type int , you would enter xsd:int in its type attribute as shown in Example 2.2, "Defining an element with a simple type" . Example 2.2. Defining an element with a simple type Supported XSD simple types Apache CXF supports the following XML Schema simple types: xsd:string xsd:normalizedString xsd:int xsd:unsignedInt xsd:long xsd:unsignedLong xsd:short xsd:unsignedShort xsd:float xsd:double xsd:boolean xsd:byte xsd:unsignedByte xsd:integer xsd:positiveInteger xsd:negativeInteger xsd:nonPositiveInteger xsd:nonNegativeInteger xsd:decimal xsd:dateTime xsd:time xsd:date xsd:QName xsd:base64Binary xsd:hexBinary xsd:ID xsd:token xsd:language xsd:Name xsd:NCName xsd:NMTOKEN xsd:anySimpleType xsd:anyURI xsd:gYear xsd:gMonth xsd:gDay xsd:gYearMonth xsd:gMonthDay 2.5. Defining complex data types Abstract XML Schema provides a flexible and powerful mechanism for building complex data structures from its simple data types. You can create data structures by creating a sequence of elements and attributes. You can also extend your defined types to create even more complex types. 
In addition to building complex data structures, you can also describe specialized types such as enumerated types, data types that have a specific range of values, or data types that need to follow certain patterns by either extending or restricting the primitive types. 2.5.1. Defining data structures Overview In XML Schema, data units that are a collection of data fields are defined using complexType elements. Specifying a complex type requires three pieces of information: The name of the defined type is specified in the name attribute of the complexType element. The first child element of the complexType describes the behavior of the structure's fields when it is put on the wire. See the section called "Complex type varieties" . Each of the fields of the defined structure are defined in element elements that are grandchildren of the complexType element. See the section called "Defining the parts of a structure" . For example, the structure shown in Example 2.3, "Simple Structure" is defined in XML Schema as a complex type with two elements. Example 2.3. Simple Structure Example 2.4, "A complex type" shows one possible XML Schema mapping for the structure shown in Example 2.3, "Simple Structure" The structure defined in Example 2.4, "A complex type" generates a message containing two elements: name and age . . Example 2.4. A complex type Complex type varieties XML Schema has three ways of describing how the fields of a complex type are organized when represented as an XML document and passed on the wire. The first child element of the complexType element determines which variety of complex type is being used. Table 2.1, "Complex type descriptor elements" shows the elements used to define complex type behavior. Table 2.1. Complex type descriptor elements Element Complex Type Behavior sequence All of a complex type's fields can be present and they must be in the order in which they are specified in the type definition. all All of the complex type's fields can be present but they can be in any order. choice Only one of the elements in the structure can be placed in the message. If the structure is defined using a choice element, as shown in Example 2.5, "Simple complex choice type" , it generates a message with either a name element or an age element. Example 2.5. Simple complex choice type Defining the parts of a structure You define the data fields that make up a structure using element elements. Every complexType element should contain at least one element element. Each element element in the complexType element represents a field in the defined data structure. To fully describe a field in a data structure, element elements have two required attributes: The name attribute specifies the name of the data field and it must be unique within the defined complex type. The type attribute specifies the type of the data stored in the field. The type can be either one of the XML Schema simple types, or any named complex type that is defined in the contract. In addition to name and type , element elements have two other commonly used optional attributes: minOcurrs and maxOccurs . These attributes place bounds on the number of times the field occurs in the structure. By default, each field occurs only once in a complex type. Using these attributes, you can change how many times a field must or can appear in a structure. 
For example, you can define a field, previousJobs , that must occur at least three times, and no more than seven times, as shown in Example 2.6, "Simple complex type with occurrence constraints" . Example 2.6. Simple complex type with occurrence constraints You can also use the minOccurs to make the age field optional by setting the minOccurs to zero as shown in Example 2.7, "Simple complex type with minOccurs set to zero" . In this case age can be omitted and the data will still be valid. Example 2.7. Simple complex type with minOccurs set to zero Defining attributes In XML documents, attributes are contained in the element's tag. For example, in the complexType element in the code below, name is an attribute. To specify an attribute for a complex type, you define an attribute element in the complexType element definition. An attribute element can appear only after the all , sequence , or choice element. Specify one attribute element for each of the complex type's attributes. Any attribute elements must be direct children of the complexType element. Example 2.8. Complex type with an attribute In the code, the attribute element specifies that the personalInfo complex type has an age attribute. The attribute element has these attributes: name - A required attribute that specifies the string that identifies the attribute. type - Specifies the type of the data stored in the field. The type can be one of the XML Schema simple types. use - An optional attribute that specifies whether the complex type is required to have this attribute. Valid values are required or optional . The default is that the attribute is optional. In an attribute element, you can specify the optional default attribute, which lets you specify a default value for the attribute. 2.5.2. Defining arrays Overview Apache CXF supports two methods for defining arrays in a contract. The first is define a complex type with a single element whose maxOccurs attribute has a value greater than one. The second is to use SOAP arrays. SOAP arrays provide added functionality such as the ability to easily define multi-dimensional arrays and to transmit sparsely populated arrays. Complex type arrays Complex type arrays are a special case of a sequence complex type. You simply define a complex type with a single element and specify a value for the maxOccurs attribute. For example, to define an array of twenty floating point numbers you use a complex type similar to the one shown in Example 2.9, "Complex type array" . Example 2.9. Complex type array You can also specify a value for the minOccurs attribute. SOAP arrays SOAP arrays are defined by deriving from the SOAP-ENC:Array base type using the wsdl:arrayType element. The syntax for this is shown in Example 2.10, "Syntax for a SOAP array derived using wsdl:arrayType" . Ensure that the definitions element declares xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" . Example 2.10. Syntax for a SOAP array derived using wsdl:arrayType Using this syntax, TypeName specifies the name of the newly-defined array type. ElementType specifies the type of the elements in the array. ArrayBounds specifies the number of dimensions in the array. To specify a single dimension array use [] ; to specify a two-dimensional array use either [][] or [,] . For example, the SOAP Array, SOAPStrings, shown in Example 2.11, "Definition of a SOAP array" , defines a one-dimensional array of strings. 
The wsdl:arrayType attribute specifies the type of the array elements, xsd:string , and the number of dimensions, with [] implying one dimension. Example 2.11. Definition of a SOAP array You can also describe a SOAP Array using a simple element as described in the SOAP 1.1 specification. The syntax for this is shown in Example 2.12, "Syntax for a SOAP array derived using an element" . Example 2.12. Syntax for a SOAP array derived using an element When using this syntax, the element's maxOccurs attribute must always be set to unbounded . 2.5.3. Defining types by extension Like most major coding languages, XML Schema allows you to create data types that inherit some of their elements from other data types. This is called defining a type by extension. For example, you could create a new type called alienInfo , that extends the personalInfo structure defined in Example 2.4, "A complex type" by adding a new element called planet . Types defined by extension have four parts: The name of the type is defined by the name attribute of the complexType element. The complexContent element specifies that the new type will have more than one element. Note If you are only adding new attributes to the complex type, you can use a simpleContent element. The type from which the new type is derived, called the base type, is specified in the base attribute of the extension element. The new type's elements and attributes are defined in the extension element, the same as they are for a regular complex type. For example, alienInfo is defined as shown in Example 2.13, "Type defined by extension" . Example 2.13. Type defined by extension 2.5.4. Defining types by restriction Overview XML Schema allows you to create new types by restricting the possible values of an XML Schema simple type. For example, you can define a simple type, SSN , which is a string of exactly nine characters. New types defined by restricting simple types are defined using a simpleType element. The definition of a type by restriction requires three things: The name of the new type is specified by the name attribute of the simpleType element. The simple type from which the new type is derived, called the base type , is specified in the restriction element. See the section called "Specifying the base type" . The rules, called facets , defining the restrictions placed on the base type are defined as children of the restriction element. See the section called "Defining the restrictions" . Specifying the base type The base type is the type that is being restricted to define the new type. It is specified using a restriction element. The restriction element is the only child of a simpleType element and has one attribute, base , that specifies the base type. The base type can be any of the XML Schema simple types. For example, to define a new type by restricting the values of an xsd:int you use a definition like the one shown in Example 2.14, "Using int as the base type" . Example 2.14. Using int as the base type Defining the restrictions The rules defining the restrictions placed on the base type are called facets . Facets are elements with one attribute, value , that defines how the facet is enforced. The available facets and their valid value settings depend on the base type. For example, xsd:string supports six facets, including: length minLength maxLength pattern whitespace enumeration Each facet element is a child of the restriction element. 
Example Example 2.15, "SSN simple type description" shows an example of a simple type, SSN , which represents a social security number. The resulting type is a string of the form xxx-xx-xxxx . <SSN>032-43-9876<SSN> is a valid value for an element of this type, but <SSN>032439876</SSN> is not. Example 2.15. SSN simple type description 2.5.5. Defining enumerated types Overview Enumerated types in XML Schema are a special case of definition by restriction. They are described by using the enumeration facet which is supported by all XML Schema primitive types. As with enumerated types in most modern programming languages, a variable of this type can only have one of the specified values. Defining an enumeration in XML Schema The syntax for defining an enumeration is shown in Example 2.16, "Syntax for an enumeration" . Example 2.16. Syntax for an enumeration EnumName specifies the name of the enumeration type. EnumType specifies the type of the case values. CaseNValue , where N is any number one or greater, specifies the value for each specific case of the enumeration. An enumerated type can have any number of case values, but because it is derived from a simple type, only one of the case values is valid at a time. Example For example, an XML document with an element defined by the enumeration widgetSize , shown in Example 2.17, "widgetSize enumeration" , would be valid if it contained <widgetSize>big</widgetSize>, but it would not be valid if it contained <widgetSize>big,mungo</widgetSize>. Example 2.17. widgetSize enumeration 2.6. Defining elements Elements in XML Schema represent an instance of an element in an XML document generated from the schema. The most basic element consists of a single element element. Like the element element used to define the members of a complex type, they have three attributes: name - A required attribute that specifies the name of the element as it appears in an XML document. type - Specifies the type of the element. The type can be any XML Schema primitive type or any named complex type defined in the contract. This attribute can be omitted if the type has an in-line definition. nillable - Specifies whether an element can be omitted from a document entirely. If nillable is set to true , the element can be omitted from any document generated using the schema. An element can also have an in-line type definition. In-line types are specified using either a complexType element or a simpleType element. Once you specify if the type of data is complex or simple, you can define any type of data needed using the tools available for each type of data. In-line type definitions are discouraged because they are not reusable.
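Although this chapter focuses on writing the schema itself, it is often useful to validate a sample document against it while developing a contract. The following command line is a sketch that assumes the types section has been saved to a standalone personalInfo.xsd file and that data.xml is a test instance; xmllint ships with libxml2 and is not specific to Apache CXF.
# Validate a test document against the schema; facet and element violations are reported.
$ xmllint --noout --schema personalInfo.xsd data.xml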
|
[
"<schema targetNamespace=\"http://schemas.iona.com/bank.idl\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://schemas.iona.com/bank.idl\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\">",
"<element name=\"simpleInt\" type=\"xsd:int\" />",
"struct personalInfo { string name; int age; };",
"<complexType name=\"personalInfo\"> <sequence> <element name=\"name\" type=\"xsd:string\" /> <element name=\"age\" type=\"xsd:int\" /> </sequence> </complexType>",
"<complexType name=\"personalInfo\"> <choice> <element name=\"name\" type=\"xsd:string\"/> <element name=\"age\" type=\"xsd:int\"/> </choice> </complexType>",
"<complexType name=\"personalInfo\"> <all> <element name=\"name\" type=\"xsd:string\"/> <element name=\"age\" type=\"xsd:int\"/> <element name=\"previousJobs\" type=\"xsd:string: minOccurs=\"3\" maxOccurs=\"7\"/> </all> </complexType>",
"<complexType name=\"personalInfo\"> <choice> <element name=\"name\" type=\"xsd:string\"/> <element name=\"age\" type=\"xsd:int\" minOccurs=\"0\"/> </choice> </complexType>",
"<complexType name=\"personalInfo\"> <all> <element name=\"name\" type=\"xsd:string\"/> <element name=\"previousJobs\" type=\"xsd:string\" minOccurs=\"3\" maxOccurs=\"7\"/> </all> <attribute name=\"age\" type=\"xsd:int\" use=\"required\" /> </complexType>",
"<complexType name=\"personalInfo\"> <element name=\"averages\" type=\"xsd:float\" maxOccurs=\"20\"/> </complexType>",
"<complexType name=\" TypeName \"> <complexContent> <restriction base=\"SOAP-ENC:Array\"> <attribute ref=\"SOAP-ENC:arrayType\" wsdl:arrayType=\" ElementType<ArrayBounds> \"/> </restriction> </complexContent> </complexType>",
"<complexType name=\"SOAPStrings\"> <complexContent> <restriction base=\"SOAP-ENC:Array\"> <attribute ref=\"SOAP-ENC:arrayType\" wsdl:arrayType=\"xsd:string[]\"/> </restriction> </complexContent> </complexType>",
"<complexType name=\" TypeName \"> <complexContent> <restriction base=\"SOAP-ENC:Array\"> <sequence> <element name=\" ElementName \" type=\" ElementType \" maxOccurs=\"unbounded\"/> </sequence> </restriction> </complexContent> </complexType>",
"<complexType name=\"alienInfo\"> <complexContent> <extension base=\"xsd1:personalInfo\"> <sequence> <element name=\"planet\" type=\"xsd:string\"/> </sequence> </extension> </complexContent> </complexType>",
"<simpleType name=\"restrictedInt\"> <restriction base=\"xsd:int\"> </restriction> </simpleType>",
"<simpleType name=\"SSN\"> <restriction base=\"xsd:string\"> <pattern value=\"\\d{3}-\\d{2}-\\d{4}\"/> </restriction> </simpleType>",
"<simpleType name=\" EnumName \"> <restriction base=\" EnumType \"> <enumeration value=\" Case1Value \"/> <enumeration value=\" Case2Value \"/> <enumeration value=\" CaseNValue \"/> </restriction> </simpleType>",
"<simpleType name=\"widgetSize\"> <restriction base=\"xsd:string\"> <enumeration value=\"big\"/> <enumeration value=\"large\"/> <enumeration value=\"mungo\"/> </restriction> </simpleType>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/WSDLTypes
|
Chapter 4. Installing Red Hat build of OpenJDK with the MSI installer
|
Chapter 4. Installing Red Hat build of OpenJDK with the MSI installer This procedure describes how to install Red Hat build of OpenJDK 21 for Microsoft Windows using the MSI-based installer. Procedure Download the MSI-based installer of Red Hat build of OpenJDK 21 for Microsoft Windows. Run the installer for Red Hat build of OpenJDK 21 for Microsoft Windows. Click Next on the welcome screen. Check I accept the terms in license agreement , then click Next . Click Next . Accept the defaults or review the optional properties . Click Install . Click Yes on the Do you want to allow this app to make changes on your device? prompt. To verify that Red Hat build of OpenJDK 21 for Microsoft Windows is successfully installed, run the java -version command in the command prompt.
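If you prefer an unattended installation, the MSI package can also be driven from an elevated command prompt with msiexec; the file name below is only an example and depends on the exact build you downloaded.
REM Install silently with default properties (example file name).
msiexec /i java-21-openjdk.windows.redhat.x86_64.msi /qn
REM Verify the installation.
java -version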
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/installing_and_using_red_hat_build_of_openjdk_21_for_windows/installing_openjdk_msi_installer
|
Chapter 11. Logging
|
Chapter 11. Logging 11.1. Enabling protocol logging The client can log AMQP protocol frames to the console. This data is often critical when diagnosing problems. To enable protocol logging, set the PN_TRACE_FRM environment variable to 1 : Example: Enabling protocol logging $ export PN_TRACE_FRM=1 $ <your-client-program> To disable protocol logging, unset the PN_TRACE_FRM environment variable.
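As an alternative to exporting the variable for the whole session, you can scope it to a single run; this is standard shell behavior rather than anything specific to the client library.
# Enable frame tracing for one invocation only (program name is a placeholder).
$ PN_TRACE_FRM=1 <your-client-program>
# Remove the variable again if it was exported earlier.
$ unset PN_TRACE_FRM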
|
[
"export PN_TRACE_FRM=1 <your-client-program>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/logging
|
Chapter 4. Configuring Operator-based broker deployments
|
Chapter 4. Configuring Operator-based broker deployments 4.1. How the Operator generates the broker configuration Before you use Custom Resource (CR) instances to configure your broker deployment, you should understand how the Operator generates the broker configuration. When you create an Operator-based broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod. The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image. By default, the AMQ Broker Operator uses a built-in Init Container. The Init Container uses the main CR instance for your deployment to generate the configuration used by each broker application container. If you have specified address settings in the CR, the Operator generates a default configuration and then merges or replaces that configuration with the configuration specified in the CR. This process is described in the section that follows. 4.1.1. How the Operator generates the address settings configuration If you have included an address settings configuration in the main Custom Resource (CR) instance for your deployment, the Operator generates the address settings configuration for each broker as described below. The Operator runs the Init Container before the broker application container. The Init Container generates a default address settings configuration. The default address settings configuration is shown below. <address-settings> <!-- if you define auto-create on certain queues, management has to be auto-create --> <address-setting match="activemq.management#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <!-- default for catch all --> <address-setting match="#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <address-settings> If you have also specified an address settings configuration in your Custom Resource (CR) instance, the Init Container processes that configuration and converts it to XML. Based on the value of the applyRule property in the CR, the Init Container merges or replaces the default address settings configuration shown above with the configuration that you have specified in the CR. 
The result of this merge or replacement is the final address settings configuration that the broker will use. When the Init Container has finished generating the broker configuration (including address settings), the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. You can inspect the address settings configuration in the broker.xml configuration file. For a running broker, this file is located in the /home/jboss/amq-broker/etc directory. Additional resources For an example of using the applyRule property in a CR, see Section 4.2.3, "Matching address settings to configured addresses in an Operator-based broker deployment" . 4.1.2. Directory structure of a broker Pod When you create an Operator-based broker deployment, a Pod for each broker runs in a StatefulSet in your OpenShift project. An application container for the broker runs within each Pod. The Operator runs a type of container called an Init Container when initializing each Pod. In OpenShift Container Platform, Init Containers are specialized containers that run before application containers. Init Containers can include utilities or setup scripts that are not present in the application image. When generating the configuration for a broker instance, the Init Container uses files contained in a default installation directory. This installation directory is on a volume that the Operator mounts to the broker Pod and which the Init Container and broker container share. The path that the Init Container uses to mount the shared volume is defined in an environment variable called CONFIG_INSTANCE_DIR . The default value of CONFIG_INSTANCE_DIR is /amq/init/config . In the documentation, this directory is referred to as <install_dir> . Note You cannot change the value of the CONFIG_INSTANCE_DIR environment variable. By default, the installation directory has the following sub-directories: Sub-directory Contents <install_dir> /bin Binaries and scripts needed to run the broker. <install_dir> /etc Configuration files. <install_dir> /data The broker journal. <install_dir> /lib JARs and libraries needed to run the broker. <install_dir> /log Broker log files. <install_dir> /tmp Temporary web application files. When the Init Container has finished generating the broker configuration, the broker application container starts. When starting, the broker container copies its configuration from the installation directory previously used by the Init Container. When the broker Pod is initialized and running, the broker configuration is located in the /home/jboss/amq-broker directory (and subdirectories) of the broker. Additional resources For more information about how the Operator chooses a container image for the built-in Init Container, see Section 2.4, "How the Operator chooses container images" . To learn how to build and specify a custom Init Container image, see Section 4.6, "Specifying a custom Init Container image" . 4.2. Configuring addresses and queues for Operator-based broker deployments For an Operator-based broker deployment, you use two separate Custom Resource (CR) instances to configure address and queues and their associated settings. To create address and queues on your brokers, you deploy a CR instance based on the address Custom Resource Definition (CRD). 
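If you want to confirm which broker CRDs are present on your cluster before you start, you can list them. This is a quick check only; the exact CRD resource names depend on the Operator version that you installed.
$ oc get crd | grep -i activemqartemis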
If you used the OpenShift command-line interface (CLI) to install the Operator, the address CRD is the broker_activemqartemisaddress_crd.yaml file that was included in the deploy/crds directory of the Operator installation archive that you downloaded and extracted. If you used OperatorHub to install the Operator, the address CRD is the ActiveMQArtemisAddress CRD listed under Administration Custom Resource Definitions in the OpenShift Container Platform web console. To configure address and queue settings that you then match to specific addresses, you include configuration in the main Custom Resource (CR) instance used to create your broker deployment . If you used the OpenShift CLI to install the Operator, the main broker CRD is the broker_activemqartemis_crd.yaml file that was included in the deploy/crds directory of the Operator installation archive that you downloaded and extracted. If you used OperatorHub to install the Operator, the main broker CRD is the ActiveMQArtemis CRD listed under Administration Custom Resource Definitions in the OpenShift Container Platform web console. In general, the address and queue settings that you can configure for a broker deployment on OpenShift Container Platform are fully equivalent to those of standalone broker deployments on Linux or Windows. However, you should be aware of some differences in how those settings are configured. Those differences are described in the following sub-section. 4.2.1. Differences in configuration of address and queue settings between OpenShift and standalone broker deployments To configure address and queue settings for broker deployments on OpenShift Container Platform, you add configuration to an addressSettings section of the main Custom Resource (CR) instance for the broker deployment. This contrasts with standalone deployments on Linux or Windows, for which you add configuration to an address-settings element in the broker.xml configuration file. The format used for the names of configuration items differs between OpenShift Container Platform and standalone broker deployments. For OpenShift Container Platform deployments, configuration item names are in camel case , for example, defaultQueueRoutingType . By contrast, configuration item names for standalone deployments are in lower case and use a dash ( - ) separator, for example, default-queue-routing-type . The following table shows some further examples of this naming difference. Configuration item for standalone broker deployment Configuration item for OpenShift broker deployment address-full-policy addressFullPolicy auto-create-queues autoCreateQueues default-queue-routing-type defaultQueueRoutingType last-value-queue lastValueQueue Additional resources For examples of creating addresses and queues and matching settings for OpenShift Container Platform broker deployments, see: Creating addresses and queues for a broker deployment on OpenShift Container Platform Matching address settings to configured addresses for a broker deployment on OpenShift Container Platform To learn about all of the configuration options for addresses, queues, and address settings for OpenShift Container Platform broker deployments, see Section 8.1, "Custom Resource configuration reference" . For comprehensive information about configuring addresses, queues, and associated address settings for standalone broker deployments, see Addresses, Queues, and Topics in Configuring AMQ Broker . You can use this information to create equivalent configurations for broker deployments on OpenShift Container Platform.
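As an illustration of this naming difference, an address-settings entry that a standalone broker defines in broker.xml, such as the following sketch (the address name and values are arbitrary examples):
<address-setting match="myAddress">
  <address-full-policy>PAGE</address-full-policy>
  <auto-create-queues>true</auto-create-queues>
</address-setting>
would be expressed in the addressSettings section of the CR for an OpenShift Container Platform deployment as:
addressSettings:
  addressSetting:
    - match: myAddress
      addressFullPolicy: PAGE
      autoCreateQueues: true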
4.2.2. Creating addresses and queues for an Operator-based broker deployment The following procedure shows how to use a Custom Resource (CR) instance to add an address and associated queue to an Operator-based broker deployment. Note To create multiple addresses and/or queues in your broker deployment, you need to create separate CR files and deploy them individually, specifying new address and/or queue names in each case. In addition, the name attribute of each CR instance must be unique. Prerequisites You must have already installed the AMQ Broker Operator, including the dedicated Custom Resource Definition (CRD) required to create addresses and queues on your brokers. For information on two alternative ways to install the Operator, see: Section 3.2, "Installing the Operator using the CLI" . Section 3.3, "Installing the Operator using OperatorHub" . You should be familiar with how to use a CR instance to create a basic broker deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . Procedure Start configuring a Custom Resource (CR) instance to define addresses and queues for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemisaddress_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the address CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemisAddress CRD. Click the Instances tab. Click Create ActiveMQArtemisAddress . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the spec section of the CR, add lines to define an address, queue, and routing type. For example: apiVersion: broker.amq.io/v2alpha2 kind: ActiveMQArtemisAddress metadata: name: myAddressDeployment0 namespace: myProject spec: ... addressName: myAddress0 queueName: myQueue0 routingType: anycast ... The preceding configuration defines an address named myAddress0 with a queue named myQueue0 and an anycast routing type. Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . (Optional) To delete an address and queue previously added to your deployment using a CR instance, use the following command: $ oc delete -f <path/to/address_custom_resource_instance>.yaml 4.2.3. Matching address settings to configured addresses in an Operator-based broker deployment If delivery of a message to a client is unsuccessful, you might not want the broker to make ongoing attempts to deliver the message. To prevent infinite delivery attempts, you can define a dead letter address and an associated dead letter queue .
After a specified number of delivery attempts, the broker removes an undelivered message from its original queue and sends the message to the configured dead letter address. A system administrator can later consume undelivered messages from a dead letter queue to inspect the messages. The following example shows how to configure a dead letter address and queue for an Operator-based broker deployment. The example demonstrates how to: Use the addressSetting section of the main broker Custom Resource (CR) instance to configure address settings. Match those address settings to addresses in your broker deployment. Prerequisites You must be using the latest version of the Operator for AMQ Broker 7.9 (that is, version 7.9.4-opr-3). To learn how to upgrade the Operator to the latest version, see Chapter 6, Upgrading an Operator-based broker deployment . You should be familiar with how to use a CR instance to create a basic broker deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . You should be familiar with the default address settings configuration that the Operator merges or replaces with the configuration specified in your CR instance. For more information, see Section 4.1.1, "How the Operator generates the address settings configuration" . Procedure Start configuring a CR instance to add a dead letter address and queue to receive undelivered messages for each broker in the deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemisaddress_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the address CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemisAddress CRD. Click the Instances tab. Click Create ActiveMQArtemisAddress . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the spec section of the CR, add lines to specify a dead letter address and queue to receive undelivered messages. For example: apiVersion: broker.amq.io/v2alpha2 kind: ActiveMQArtemisAddress metadata: name: ex-aaoaddress spec: ... addressName: myDeadLetterAddress queueName: myDeadLetterQueue routingType: anycast ... The preceding configuration defines a dead letter address named myDeadLetterAddress with a dead letter queue named myDeadLetterQueue and an anycast routing type. Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Deploy the address CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Create the address CR. Using the OpenShift web console: When you have finished configuring the CR, click Create . Start configuring a Custom Resource (CR) instance for a broker deployment. From a sample CR file: Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted.
Using the OpenShift Container Platform web console: Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. In the deploymentPlan section of the CR, add a new addressSettings section that contains a single addressSetting section, as shown below. spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: Add a single instance of the match property to the addressSetting block. Specify an address-matching expression. For example: spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress match Specifies the address, or set of address to which the broker applies the configuration that follows. In this example, the value of the match property corresponds to a single address called myAddress . Add properties related to undelivered messages and specify values. For example: spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 deadLetterAddress Address to which the broker sends undelivered messages. maxDeliveryAttempts Maximum number of delivery attempts that a broker makes before moving a message to the configured dead letter address. In the preceding example, if the broker makes five unsuccessful attempts to deliver a message to an address that begins with myAddress , the broker moves the message to the specified dead letter address, myDeadLetterAddress . (Optional) Apply similar configuration to another address or set of addresses. 
For example: spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3 In this example, the value of the second match property includes an asterisk wildcard character. The wildcard character means that the preceding configuration is applied to any address that begins with the string myOtherAddresses . Note If you use a wildcard expression as a value for the match property, you must enclose the value in single quotation marks, for example, 'myOtherAddresses*' . At the beginning of the addressSettings section, add the applyRule property and specify a value. For example: spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: applyRule: merge_all addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3 The applyRule property specifies how the Operator applies the configuration that you add to the CR for each matching address or set of addresses. The values that you can specify are: merge_all For address settings specified in both the CR and the default configuration that match the same address or set of addresses: Replace any property values specified in the default configuration with those specified in the CR. Keep any property values that are specified uniquely in the CR or the default configuration. Include each of these in the final, merged configuration. For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration. merge_replace For address settings specified in both the CR and the default configuration that match the same address or set of addresses, include the settings specified in the CR in the final, merged configuration. Do not include any properties specified in the default configuration, even if these are not specified in the CR. For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration. replace_all Replace all address settings specified in the default configuration with those specified in the CR. The final, merged configuration corresponds exactly to that specified in the CR. Note If you do not explicitly include the applyRule property in your CR, the Operator uses a default value of merge_all . Deploy the broker CR instance. Using the OpenShift command-line interface: Save the CR file. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources To learn about all of the configuration options for addresses, queues, and address settings for OpenShift Container Platform broker deployments, see Section 8.1, "Custom Resource configuration reference" . If you installed the AMQ Broker Operator using the OpenShift command-line interface (CLI), the installation archive that you downloaded and extracted contains some additional examples of configuring address settings. 
In the deploy/examples folder of the installation archive, see: artemis-basic-address-settings-deployment.yaml artemis-merge-replace-address-settings-deployment.yaml artemis-replace-address-settings-deployment.yaml For comprehensive information about configuring addresses, queues, and associated address settings for standalone broker deployments, see Addresses, Queues, and Topics in Configuring AMQ Broker . You can use this information to create equivalent configurations for broker deployments on OpenShift Container Platform. For more information about Init Containers in OpenShift Container Platform, see Using Init Containers to perform tasks before a pod is deployed . 4.3. Creating a security configuration for an Operator-based broker deployment The following procedure shows how to use a Custom Resource (CR) instance to add users and associated security configuration to an Operator-based broker deployment. Prerequisites You must have already installed the AMQ Broker Operator. For information on two alternative ways to install the Operator, see: Section 3.2, "Installing the Operator using the CLI" . Section 3.3, "Installing the Operator using OperatorHub" . You should be familiar with broker security as described in Securing brokers You should be familiar with how to use a CR instance to create a basic broker deployment. For more information, see Section 3.4.1, "Deploying a basic broker instance" . Procedure You can deploy the security CR before or after you create a broker deployment. However, if you deploy the security CR after creating the broker deployment, the broker pod is restarted to accept the new configuration. Start configuring a Custom Resource (CR) instance to define users and associated security configuration for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project for the broker deployment. Open the sample CR file called broker_activemqartemissecurity_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project for the broker deployment. Start a new CR instance based on the address CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemisSecurity CRD. Click the Instances tab. Click Create ActiveMQArtemisSecurity . Within the console, a YAML editor opens, enabling you to configure a CR instance. In the spec section of the CR, add lines to define users and roles. For example: apiVersion: broker.amq.io/v1alpha1 kind: ActiveMQArtemisSecurity metadata: name: ex-prop spec: loginModules: propertiesLoginModules: - name: "prop-module" users: - name: "sam" password: "samsecret" roles: - "sender" - name: "rob" password: "robsecret" roles: - "receiver" securityDomains: brokerDomain: name: "activemq" loginModules: - name: "prop-module" flag: "sufficient" securitySettings: broker: - match: "#" permissions: - operationType: "send" roles: - "sender" - operationType: "createAddress" roles: - "sender" - operationType: "createDurableQueue" roles: - "sender" - operationType: "consume" roles: - "receiver" ... The preceding configuration defines two users: a propertiesLoginModule named prop-module that defines a user named sam with a role named sender . a propertiesLoginModule named prop-module that defines a user named rob with a role named receiver . 
The properties of these roles are defined in the brokerDomain and broker sections of the securityDomains section. For example, the sender role is defined to allow users with that role to create a durable queue on any address. By default, the configuration applies to all deployed brokers defined by CRs in the current namespace. To limit the configuration to particular broker deployments, use the applyToCrNames option described in Section 8.1.3, "Security Custom Resource configuration reference" . Note In the metadata section, you need to include the namespace property and specify a value only if you are using the OpenShift Container Platform web console to create your CR instance. The value that you should specify is the name of the OpenShift project for your broker deployment. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources Section 8.1.3, "Security Custom Resource configuration reference" Section 3.4.1, "Deploying a basic broker instance" 4.4. Configuring broker storage requirements To use persistent storage in an Operator-based broker deployment, you set persistenceEnabled to true in the Custom Resource (CR) instance used to create the deployment. If you do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that these are available to be claimed by the Operator using a Persistent Volume Claim (PVC). If you want to create a cluster of two brokers with persistent storage, for example, then you need to have two PVs available. By default, each broker in your deployment requires storage of 2 GiB. However, you can configure the CR for your broker deployment to specify the size of PVC required by each broker. Important You must add the configuration for broker storage size to the main CR for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running. 4.4.1. Configuring broker storage size The following procedure shows how to configure the Custom Resource (CR) instance for your broker deployment to specify the size of the Persistent Volume Claim (PVC) required by each broker for persistent message storage. Important You must add the configuration for broker storage size to the main CR for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running. Prerequisites You must be using at least the latest version of the Operator for AMQ Broker 7.7 (that is, version 0.17). To learn how to upgrade the Operator to the latest version for AMQ Broker 7.9, see Chapter 6, Upgrading an Operator-based broker deployment . You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . You must have already provisioned Persistent Volumes (PVs) and made these available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage, you need to have two PVs available. For more information about provisioning persistent storage, see: Understanding persistent storage (OpenShift Container Platform 4.5) Procedure Start configuring a Custom Resource (CR) instance for the broker deployment.
Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . To specify broker storage requirements, in the deploymentPlan section of the CR, add a storage section. Add a size property and specify a value. For example: spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true storage: size: 4Gi storage.size Size, in bytes, of the Persistent Volume Claim (PVC) that each broker Pod requires for persistent storage. This property applies only when persistenceEnabled is set to true . The value that you specify must include a unit. Supports byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi). Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . 4.5. Configuring resource limits and requests for Operator-based broker deployments When you create an Operator-based broker deployment, the broker Pods in the deployment run in a StatefulSet on a node in your OpenShift cluster. You can configure the Custom Resource (CR) instance for the deployment to specify the host-node compute resources used by the broker container that runs in each Pod. By specifying limit and request values for CPU and memory (RAM), you can ensure satisfactory performance of the broker Pods. Important You must add configuration for limits and requests to the CR instance for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running. It is not possible for Red Hat to recommend values for limits and requests because these are based on your specific messaging system use-cases and the resulting architecture that you have implemented. 
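After you deploy the CR, you can confirm that each broker Pod obtained a Persistent Volume Claim of the requested size. This is a sketch only; the PVC names are generated by the Operator and vary based on the name of your CR.
$ oc project <project_name>
$ oc get pvc
The CAPACITY column in the output shows the size of the volume bound to each claim, which should match or exceed the size that you specified in the storage section of the CR (4Gi in the preceding example).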
However, it is recommended that you test and tune these values in a development environment before configuring them for your production environment. The Operator runs a type of container called an Init Container when initializing each broker Pod. Any resource limits and requests that you configure for each broker container also apply to each Init Container. For more information about the use of Init Containers in broker deployments, see Section 4.1, "How the Operator generates the broker configuration" . You can specify the following limit and request values: CPU limit For each broker container running in a Pod, this value is the maximum amount of host-node CPU that the container can consume. If a broker container attempts to exceed the specified CPU limit, OpenShift throttles the container. This ensures that containers have consistent performance, regardless of the number of Pods running on a node. Memory limit For each broker container running in a Pod, this value is the maximum amount of host-node memory that the container can consume. If a broker container attempts to exceed the specified memory limit, OpenShift terminates the container. The broker Pod restarts. CPU request For each broker container running in a Pod, this value is the amount of host-node CPU that the container requests. The OpenShift scheduler considers the CPU request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources. The CPU request value is the minimum amount of CPU that the broker container requires to run. However, if there is no contention for CPU on the node, the container can use all available CPU. If you have specified a CPU limit, the container cannot exceed that amount of CPU usage. If there is CPU contention on the node, CPU request values provide a way for OpenShift to weigh CPU usage across all containers. Memory request For each broker container running in a Pod, this value is the amount of host-node memory that the container requests. The OpenShift scheduler considers the memory request value during Pod placement, to bind the broker Pod to a node with sufficient compute resources. The memory request value is the minimum amount of memory that the broker container requires to run. However, the container can consume as much available memory as possible. If you have specified a memory limit, the broker container cannot exceed that amount of memory usage. CPU is measured in units called millicores. Each node in an OpenShift cluster inspects the operating system to determine the number of CPU cores on the node. Then, the node multiplies that value by 1000 to express the total capacity. For example, if a node has two cores, the CPU capacity of the node is expressed as 2000m . Therefore, if you want to use one-tenth of a single core, you specify a value of 100m . Memory is measured in bytes. You can specify the value using byte notation (E, P, T, G, M, K) or the binary equivalents (Ei, Pi, Ti, Gi, Mi, Ki). The value that you specify must include a unit. 4.5.1. Configuring broker resource limits and requests The following example shows how to configure the main Custom Resource (CR) instance for your broker deployment to set limits and requests for CPU and memory for each broker container that runs in a Pod in the deployment. Important You must add configuration for limits and requests to the CR instance for your broker deployment before deploying the CR for the first time. You cannot add the configuration to a broker deployment that is already running. 
It is not possible for Red Hat to recommend values for limits and requests because these are based on your specific messaging system use-cases and the resulting architecture that you have implemented. However, it is recommended that you test and tune these values in a development environment before configuring them for your production environment. Prerequisites You should be familiar with how to use a CR instance to create a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . Procedure Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . In the deploymentPlan section of the CR, add a resources section. Add limits and requests sub-sections. In each sub-section, add a cpu and memory property and specify values. For example: spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true resources: limits: cpu: "500m" memory: "1024M" requests: cpu: "250m" memory: "512M" limits.cpu Each broker container running in a Pod in the deployment cannot exceed this amount of host-node CPU usage. limits.memory Each broker container running in a Pod in the deployment cannot exceed this amount of host-node memory usage. requests.cpu Each broker container running in a Pod in the deployment requests this amount of host-node CPU. This value is the minimum amount of CPU required for the broker container to run. requests.memory Each broker container running in a Pod in the deployment requests this amount of host-node memory. This value is the minimum amount of memory required for the broker container to run. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. 
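For example, from the directory where you saved the CR file, commands similar to the following create the deployment. This is an illustrative sketch; substitute your own project name and the path to your CR file.
$ oc project <project_name>
$ oc create -f deploy/crs/broker_activemqartemis_cr.yaml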
Using the OpenShift web console: When you have finished configuring the CR, click Create . 4.6. Specifying a custom Init Container image As described in Section 4.1, "How the Operator generates the broker configuration" , the AMQ Broker Operator uses a default, built-in Init Container to generate the broker configuration. To generate the configuration, the Init Container uses the main Custom Resource (CR) instance for your deployment. The only items that you can specify in the CR are those that are exposed in the main broker Custom Resource Definition (CRD). However, there might be a case where you need to include configuration that is not exposed in the CRD. In this case, in your main CR instance, you can specify a custom Init Container. The custom Init Container can modify or add to the configuration that has already been created by the Operator. For example, you might use a custom Init Container to modify the broker logging settings. Or, you might use a custom Init Container to include extra runtime dependencies (that is, .jar files) in the broker installation directory. When you build a custom Init Container image, you must follow these important guidelines: In the build script (for example, a Docker Dockerfile or Podman Containerfile) that you create for the custom image, the FROM instruction must specify the latest version of the AMQ Broker Operator built-in Init Container as the base image. In your script, include the following line: The custom image must include a script called post-config.sh that you include in a directory called /amq/scripts . The post-config.sh script is where you can modify or add to the initial configuration that the Operator generates. When you specify a custom Init Container, the Operator runs the post-config.sh script after it uses your CR instance to generate a configuration, but before it starts the broker application container. As described in Section 4.1.2, "Directory structure of a broker Pod" , the path to the installation directory used by the Init Container is defined in an environment variable called CONFIG_INSTANCE_DIR . The post-config.sh script should use this environment variable name when referencing the installation directory (for example, ${CONFIG_INSTANCE_DIR}/lib ) and not the actual value of this variable (for example, /amq/init/config/lib ). If you want to include additional resources (for example, .xml or .jar files) in your custom broker configuration, you must ensure that these are included in the custom image and accessible to the post-config.sh script. The following procedure describes how to specify a custom Init Container image. Prerequisites You must be using at least version 7.9.4-opr-3 of the Operator. To learn how to upgrade to the latest Operator version, see Chapter 6, Upgrading an Operator-based broker deployment . You must have built a custom Init Container image that meets the guidelines described above. For a complete example of building and specifying a custom Init Container image for the ArtemisCloud Operator, see custom Init Container image for JDBC-based persistence . To provide a custom Init Container image for the AMQ Broker Operator, you need to be able to add the image to a repository in a container registry such as the Quay container registry . You should understand how the Operator uses an Init Container to generate the broker configuration. For more information, see Section 4.1, "How the Operator generates the broker configuration" . You should be familiar with how to use a CR to create a broker deployment.
For more information, see Section 3.4, "Creating Operator-based broker deployments" . Procedure Start configuring a Custom Resource (CR) instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Open the sample CR file called broker_activemqartemis_cr.yaml that was included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to deploy CRs in the project in which you are creating the deployment. Start a new CR instance based on the main broker CRD. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Click Create ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to configure a CR instance. For a basic broker deployment, a configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR file. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Observe that in the broker_activemqartemis_cr.yaml sample CR file, the image property is set to a default value of placeholder . This value indicates that, by default, the image property does not specify a broker container image to use for the deployment. To learn how the Operator determines the appropriate broker container image to use, see Section 2.4, "How the Operator chooses container images" . In the deploymentPlan section of the CR, add the initImage property. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder initImage: requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true Set the value of the initImage property to the URL of your custom Init Container image. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder initImage: <custom_init_container_image_url> requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true initImage Specifies the full URL for your custom Init Container image, which you must have added to repository in a container registry. Deploy the CR instance. Using the OpenShift command-line interface: Save the CR file. Switch to the project in which you are creating the broker deployment. Create the CR instance. Using the OpenShift web console: When you have finished configuring the CR, click Create . Additional resources For a complete example of building and specifying a custom Init Container image for the ArtemisCloud Operator, see custom Init Container image for JDBC-based persistence . 4.7. Configuring Operator-based broker deployments for client connections 4.7.1. Configuring acceptors To enable client connections to broker Pods in your OpenShift deployment, you define acceptors for your deployment. Acceptors define how a broker Pod accepts connections. You define acceptors in the main Custom Resource (CR) used for your broker deployment. 
When you create an acceptor, you specify information such as the messaging protocols to enable on the acceptor, and the port on the broker Pod to use for these protocols. The following procedure shows how to define a new acceptor in the CR for your broker deployment. Prerequisites To configure acceptors, your broker deployment must be based on version 0.9 or greater of the AMQ Broker Operator. For more information about installing the latest version of the Operator, see Section 3.2, "Installing the Operator using the CLI" . Procedure In the deploy/crs directory of the Operator archive that you downloaded and extracted during your initial installation, open the broker_activemqartemis_cr.yaml Custom Resource (CR) file. In the acceptors element, add a named acceptor. Add the protocols and port parameters. Set values to specify the messaging protocols to be used by the acceptor and the port on each broker Pod to expose for those protocols. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp port: 5672 ... The configured acceptor exposes port 5672 to AMQP clients. The full set of values that you can specify for the protocols parameter is shown in the table. Protocol Value Core Protocol core AMQP amqp OpenWire openwire MQTT mqtt STOMP stomp All supported protocols all Note For each broker Pod in your deployment, the Operator also creates a default acceptor that uses port 61616. This default acceptor is required for broker clustering and has Core Protocol enabled. By default, the AMQ Broker management console uses port 8161 on the broker Pod. Each broker Pod in your deployment has a dedicated Service that provides access to the console. For more information, see Chapter 5, Connecting to AMQ Management Console for an Operator-based broker deployment . To use another protocol on the same acceptor, modify the protocols parameter. Specify a comma-separated list of protocols. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 ... The configured acceptor now exposes port 5672 to AMQP and OpenWire clients. To specify the number of concurrent client connections that the acceptor allows, add the connectionsAllowed parameter and set a value. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 ... By default, an acceptor is exposed only to clients in the same OpenShift cluster as the broker deployment. To also expose the acceptor to clients outside OpenShift, add the expose parameter and set the value to true . spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true ... ... When you expose an acceptor to clients outside OpenShift, the Operator automatically creates a dedicated Service and Route for each broker Pod in the deployment. To enable secure connections to the acceptor from clients outside OpenShift, add the sslEnabled parameter and set the value to true . spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true ... ... When you enable SSL (that is, Secure Sockets Layer) security on an acceptor (or connector), you can add related configuration, such as: The secret name used to store authentication credentials in your OpenShift cluster. A secret is required when you enable SSL on the acceptor. For more information on generating this secret, see Section 4.7.2, "Securing broker-client connections" . 
The Transport Layer Security (TLS) protocols to use for secure network communication. TLS is an updated, more secure version of SSL. You specify the TLS protocols in the enabledProtocols parameter. Whether the acceptor uses two-way TLS, also known as mutual authentication , between the broker and the client. You specify this by setting the value of the needClientAuth parameter to true . Additional resources To learn how to configure TLS to secure broker-client connections, including generating a secret to store authentication credentials, see Section 4.7.2, "Securing broker-client connections" . For a complete Custom Resource configuration reference, including configuration of acceptors and connectors, see Section 8.1, "Custom Resource configuration reference" . 4.7.2. Securing broker-client connections If you have enabled security on your acceptor or connector (that is, by setting sslEnabled to true ), you must configure Transport Layer Security (TLS) to allow certificate-based authentication between the broker and clients. TLS is an updated, more secure version of SSL. There are two primary TLS configurations: One-way TLS Only the broker presents a certificate. The certificate is used by the client to authenticate the broker. This is the most common configuration. Two-way TLS Both the broker and the client present certificates. This is sometimes called mutual authentication . The sections that follow describe: Configuration requirements for the broker certificate used by one-way and two-way TLS How to configure one-way TLS How to configure two-way TLS For both one-way and two-way TLS, you complete the configuration by generating a secret that stores the credentials required for a successful TLS handshake between the broker and the client. This is the secret name that you must specify in the sslSecret parameter of your secured acceptor or connector. The secret must contain a Base64-encoded broker key store (both one-way and two-way TLS), a Base64-encoded broker trust store (two-way TLS only), and the corresponding passwords for these files, also Base64-encoded. The one-way and two-way TLS configuration procedures show how to generate this secret. Note If you do not explicitly specify a secret name in the sslSecret parameter of a secured acceptor or connector, the acceptor or connector assumes a default secret name. The default secret name uses the format <custom_resource_name> - <acceptor_name> -secret or <custom_resource_name> - <connector_name> -secret . For example, my-broker-deployment-my-acceptor-secret . Even if the acceptor or connector assumes a default secret name, you must still generate this secret yourself. It is not automatically created. 4.7.2.1. Configuring a broker certificate for host name verification Note This section describes some requirements for the broker certificate that you must generate when configuring one-way or two-way TLS. When a client tries to connect to a broker Pod in your deployment, the verifyHost option in the client connection URL determines whether the client compares the Common Name (CN) of the broker's certificate to its host name, to verify that they match. The client performs this verification if you specify verifyHost=true or similar in the client connection URL. You might omit this verification in rare cases where you have no concerns about the security of the connection, for example, if the brokers are deployed on an OpenShift cluster in an isolated network.
Otherwise, for a secure connection, it is advisable for a client to perform this verification. In this case, correct configuration of the broker key store certificate is essential to ensure successful client connections. In general, when a client is using host verification, the CN that you specify when generating the broker certificate must match the full host name for the Route on the broker Pod that the client is connecting to. For example, if you have a deployment with a single broker Pod, the CN might look like the following: To ensure that the CN can resolve to any broker Pod in a deployment with multiple brokers, you can specify an asterisk ( * ) wildcard character in place of the ordinal of the broker Pod. For example: The CN shown in the preceding example successfully resolves to any broker Pod in the my-broker-deployment deployment. In addition, the Subject Alternative Name (SAN) that you specify when generating the broker certificate must individually list all broker Pods in the deployment, as a comma-separated list. For example: 4.7.2.2. Configuring one-way TLS The procedure in this section shows how to configure one-way Transport Layer Security (TLS) to secure a broker-client connection. In one-way TLS, only the broker presents a certificate. This certificate is used by the client to authenticate the broker. Prerequisites You should understand the requirements for broker certificate generation when clients use host name verification. For more information, see Section 4.7.2.1, "Configuring a broker certificate for host name verification" . Procedure Generate a self-signed certificate for the broker key store. $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format. For example: $ keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem On the client, create a client trust store that imports the broker certificate. $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem Log in to OpenShift Container Platform as an administrator. For example: $ oc login -u system:admin Switch to the project that contains your broker deployment. For example: $ oc project <my_openshift_project> Create a secret to store the TLS credentials. For example: $ oc create secret generic my-tls-secret \ --from-file=broker.ks=~/broker.ks \ --from-file=client.ts=~/broker.ks \ --from-literal=keyStorePassword=<password> \ --from-literal=trustStorePassword=<password> Note When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts . For one-way TLS between the broker and a client, a trust store is not actually required. However, to successfully generate the secret, you need to specify some valid store file as a value for client.ts . The preceding step provides a "dummy" value for client.ts by reusing the previously-generated broker key store file. This is sufficient to generate a secret with all of the credentials required for one-way TLS. Link the secret to the service account that you created when installing the Operator. For example: $ oc secrets link sa/amq-broker-operator secret/my-tls-secret Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example: spec: ...
acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5 ... 4.7.2.3. Configuring two-way TLS The procedure in this section shows how to configure two-way Transport Layer Security (TLS) to secure a broker-client connection. In two-way TLS, both the broker and client present certificates. The broker and client use these certificates to authenticate each other in a process sometimes called mutual authentication . Prerequisites You should understand the requirements for broker certificate generation when clients use host name verification. For more information, see Section 4.7.2.1, "Configuring a broker certificate for host name verification" . Procedure Generate a self-signed certificate for the broker key store. $ keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks Export the certificate from the broker key store, so that it can be shared with clients. Export the certificate in the Base64-encoded .pem format. For example: $ keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem On the client, create a client trust store that imports the broker certificate. $ keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem On the client, generate a self-signed certificate for the client key store. $ keytool -genkey -alias broker -keyalg RSA -keystore ~/client.ks On the client, export the certificate from the client key store, so that it can be shared with the broker. Export the certificate in the Base64-encoded .pem format. For example: $ keytool -export -alias broker -keystore ~/client.ks -file ~/client_cert.pem Create a broker trust store that imports the client certificate. $ keytool -import -alias broker -keystore ~/broker.ts -file ~/client_cert.pem Log in to OpenShift Container Platform as an administrator. For example: $ oc login -u system:admin Switch to the project that contains your broker deployment. For example: $ oc project <my_openshift_project> Create a secret to store the TLS credentials. For example: $ oc create secret generic my-tls-secret \ --from-file=broker.ks=~/broker.ks \ --from-file=client.ts=~/broker.ts \ --from-literal=keyStorePassword=<password> \ --from-literal=trustStorePassword=<password> Note When generating a secret, OpenShift requires you to specify both a key store and a trust store. The trust store key is generically named client.ts . For two-way TLS between the broker and a client, you must generate a secret that includes the broker trust store, because this holds the client certificate. Therefore, in the preceding step, the value that you specify for the client.ts key is actually the broker trust store file. Link the secret to the service account that you created when installing the Operator. For example: $ oc secrets link sa/amq-broker-operator secret/my-tls-secret Specify the secret name in the sslSecret parameter of your secured acceptor or connector. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5 ... 4.7.3. Networking Services in your broker deployments On the Networking pane of the OpenShift Container Platform web console for your broker deployment, there are two running Services: a headless Service and a ping Service. The default name of the headless Service uses the format <custom_resource_name> -hdls-svc , for example, my-broker-deployment-hdls-svc .
The default name of the ping Service uses the format <custom_resource_name> -ping-svc , for example, my-broker-deployment-ping-svc . The headless Service provides access to ports 8161 and 61616 on each broker Pod. Port 8161 is used by the broker management console, and port 61616 is used for broker clustering. You can also use the headless Service to connect to a broker Pod from an internal client (that is, a client inside the same OpenShift cluster as the broker deployment). The ping Service is used by the brokers for discovery, and enables brokers to form a cluster within the OpenShift environment. Internally, this Service exposes port 8888. Additional resources To learn about using the headless Service to connect to a broker Pod from an internal client, see Section 4.7.4.1, "Connecting to the broker from internal clients" . 4.7.4. Connecting to the broker from internal and external clients The examples in this section show how to connect to the broker from internal clients (that is, clients in the same OpenShift cluster as the broker deployment) and external clients (that is, clients outside the OpenShift cluster). 4.7.4.1. Connecting to the broker from internal clients An internal client can connect to the broker Pod using the headless Service that is running for the broker deployment. To connect to a broker Pod using the headless Service, specify an address in the format <Protocol>://<PodName>.<HeadlessServiceName>.<ProjectName>.svc.cluster.local . For example: OpenShift DNS successfully resolves addresses in this format because the StatefulSets created by Operator-based broker deployments provide stable Pod names. Additional resources For more information about the headless Service that runs by default in a broker deployment, see Section 4.7.3, "Networking Services in your broker deployments" . 4.7.4.2. Connecting to the broker from external clients When you expose an acceptor to external clients (that is, by setting the value of the expose parameter to true ), the Operator automatically creates a dedicated Service and Route for each broker Pod in the deployment. To see the Routes configured on a given broker Pod, select the Pod in the OpenShift Container Platform web console and click the Routes tab. An external client can connect to the broker by specifying the full host name of the Route created for the broker Pod. You can use a basic curl command to test external access to this full host name. For example: The full host name for the Route must resolve to the node that hosts the OpenShift router. The OpenShift router uses the host name to determine where to send the traffic inside the OpenShift internal network. By default, the OpenShift router listens to port 80 for non-secured (that is, non-SSL) traffic and port 443 for secured (that is, SSL-encrypted) traffic. For an HTTP connection, the router automatically directs traffic to port 443 if you specify a secure connection URL (that is, https ), or to port 80 if you specify a non-secure connection URL (that is, http ). For non-HTTP connections: Clients must explicitly specify the port number (for example, port 443) as part of the connection URL. For one-way TLS, the client must specify the path to its trust store and the corresponding password, as part of the connection URL. For two-way TLS, the client must also specify the path to its key store and the corresponding password, as part of the connection URL. Some example client connection URLs, for supported messaging protocols, are shown below.
External Core client, using one-way TLS Note The useTopologyForLoadBalancing key is explicitly set to false in the connection URL because an external Core client cannot use topology information returned by the broker. If this key is set to true or you do not specify a value, it results in a DEBUG log message. External Core client, using two-way TLS External OpenWire client, using one-way TLS External OpenWire client, using two-way TLS External AMQP client, using one-way TLS External AMQP client, using two-way TLS 4.7.4.3. Connecting to the Broker using a NodePort As an alternative to using a Route, an OpenShift administrator can configure a NodePort to connect to a broker Pod from a client outside OpenShift. The NodePort should map to one of the protocol-specifc ports specified by the acceptors configured for the broker. By default, NodePorts are in the range 30000 to 32767, which means that a NodePort typically does not match the intended port on the broker Pod. To connect from a client outside OpenShift to the broker via a NodePort, you specify a URL in the format <protocol> :// <ocp_node_ip> : <node_port_number> . Additional resources For more information about using methods such as Routes and NodePorts for communicating from outside an OpenShift cluster with services running in the cluster, see: Configuring ingress cluster traffic overview (OpenShift Container Platform 4.5) 4.8. Configuring large message handling for AMQP messages Clients might send large AMQP messages that can exceed the size of the broker's internal buffer, causing unexpected errors. To prevent this situation, you can configure the broker to store messages as files when the messages are larger than a specified minimum value. Handling large messages in this way means that the broker does not hold the messages in memory. Instead, the broker stores the messages in a dedicated directory used for storing large message files. For a broker deployment on OpenShift Container Platform, the large messages directory is /opt/ <custom_resource_name> /data/large-messages on the Persistent Volume (PV) used by the broker for message storage. When the broker stores a message as a large message, the queue retains a reference to the file in the large messages directory. Important For Operator-based broker deployments in AMQ Broker 7.9, large message handling is available only for the AMQP protocol. 4.8.1. Configuring AMQP acceptors for large message handling The following procedure shows how to configure an acceptor to handle an AMQP message larger than a specified size as a large message. Prerequisites You should be familiar with how to configure acceptors for Operator-based broker deployments. See Section 4.7.1, "Configuring acceptors" . To store large AMQP messages in a dedicated large messages directory, your broker deployment must be using persistent storage (that is, persistenceEnabled is set to true in the Custom Resource (CR) instance used to create the deployment). For more information about configuring persistent storage, see: Section 2.5, "Operator deployment notes" Section 8.1, "Custom Resource configuration reference" Procedure Open the Custom Resource (CR) instance in which you previously defined an AMQP acceptor. Using the OpenShift command-line interface: USD oc edit -f <path/to/custom_resource_instance> .yaml Using the OpenShift Container Platform web console: In the left navigation menu, click Administration Custom Resource Definitions Click the ActiveMQArtemis CRD. Click the Instances tab. 
Locate the CR instance that corresponds to your project namespace. A previously-configured AMQP acceptor might resemble the following: spec: ... acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true ... Specify the minimum size, in bytes, of an AMQP message that the broker handles as a large message. For example: spec: ... acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true amqpMinLargeMessageSize: 204800 ... ... In the preceding example, the broker is configured to accept AMQP messages on port 5672. Based on the value of amqpMinLargeMessageSize , if the acceptor receives an AMQP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. The broker stores the message in the large messages directory ( /opt/ <custom_resource_name> /data/large-messages , by default) on the persistent volume (PV) used by the broker for message storage. If you do not explicitly specify a value for the amqpMinLargeMessageSize property, the broker uses a default value of 102400 (that is, 100 kilobytes). If you set amqpMinLargeMessageSize to a value of -1 , large message handling for AMQP messages is disabled. 4.9. High availability and message migration 4.9.1. High availability The term high availability refers to a system that can remain operational even when part of that system fails or is shut down. For AMQ Broker on OpenShift Container Platform, this means ensuring the integrity and availability of messaging data if a broker Pod fails, or shuts down due to intentional scaledown of your deployment. To allow high availability for AMQ Broker on OpenShift Container Platform, you run multiple broker Pods in a broker cluster. Each broker Pod writes its message data to an available Persistent Volume (PV) that you have claimed for use with a Persistent Volume Claim (PVC). If a broker Pod fails or is shut down, the message data stored in the PV is migrated to another available broker Pod in the broker cluster. The other broker Pod stores the message data in its own PV. The following figure shows a StatefulSet-based broker deployment. In this case, the two broker Pods in the broker cluster are still running. When a broker Pod shuts down, the AMQ Broker Operator automatically starts a scaledown controller that performs the migration of messages to another broker Pod that is still running in the broker cluster. This message migration process is also known as Pod draining . The section that follows describes message migration. 4.9.2. Message migration Message migration is how you ensure the integrity of messaging data when a broker in a clustered deployment shuts down due to failure or intentional scaledown of the deployment. Also known as Pod draining , this process refers to the removal and redistribution of messages from a broker Pod that has shut down. Note The scaledown controller that performs message migration can operate only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects. To use message migration, you must have a minimum of two brokers in your deployment. A deployment with two or more brokers is clustered by default. For an Operator-based broker deployment, you enable message migration by setting messageMigration to true in the main broker Custom Resource for your deployment.
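For example, the following sketch shows one way to turn on message migration from the command line, assuming a deployment created from a Custom Resource named ex-aao (as in the scaledown example later in this section); you can achieve the same result by editing the CR YAML and reapplying it, which is the approach the later procedure uses.

# Sketch: enable message migration and persistent storage on an existing
# ActiveMQArtemis CR named ex-aao (the CR name and project are assumptions).
oc project <my_openshift_project>
oc patch activemqartemis ex-aao --type merge \
  -p '{"spec":{"deploymentPlan":{"persistenceEnabled":true,"messageMigration":true}}}'
# Confirm that the values were applied.
oc get activemqartemis ex-aao -o jsonpath='{.spec.deploymentPlan.messageMigration}'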
The message migration process follows these steps: When a broker Pod in the deployment shuts down due to failure or intentional scaledown of the deployment, the Operator automatically starts a scaledown controller to prepare for message migration. The scaledown controller runs in the same OpenShift project name as the broker cluster. The scaledown controller registers itself and listens for Kubernetes events that are related to Persistent Volume Claims (PVCs) in the project. To check for Persistent Volumes (PVs) that have been orphaned, the scaledown controller looks at the ordinal on the volume claim. The controller compares the ordinal on the volume claim to that of the broker Pods that are still running in the StatefulSet (that is, the broker cluster) in the project. If the ordinal on the volume claim is higher than the ordinal on any of the broker Pods still running in the broker cluster, the scaledown controller determines that the broker Pod at that ordinal has been shut down and that messaging data must be migrated to another broker Pod. The scaledown controller starts a drainer Pod. The drainer Pod runs the broker and executes the message migration. Then, the drainer Pod identifies an alternative broker Pod to which the orphaned messages can be migrated. Note There must be at least one broker Pod still running in your deployment for message migration to occur. The following figure illustrates how the scaledown controller (also known as a drain controller ) migrates messages to a running broker Pod. After the messages are successfully migrated to an operational broker Pod, the drainer Pod shuts down and the scaledown controller removes the PVC for the orphaned PV. The PV is returned to a "Released" state. Note If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker Pod to which messaging data can be migrated. However, if you scale a deployment down to zero and then back up to a size that is smaller than the original deployment, drainer Pods are started for the brokers that remain shut down. Additional resources For an example of message migration when you scale down a broker deployment, see Migrating messages upon scaledown . 4.9.3. Migrating messages upon scaledown To migrate messages upon scaledown of your broker deployment, use the main broker Custom Resource (CR) to enable message migration. The AMQ Broker Operator automatically runs a dedicated scaledown controller to execute message migration when you scale down a clustered broker deployment. With message migration enabled, the scaledown controller within the Operator detects shutdown of a broker Pod and starts a drainer Pod to execute message migration. The drainer Pod connects to one of the other live broker Pods in the cluster and migrates messages to that live broker Pod. After migration is complete, the scaledown controller shuts down. Note A scaledown controller operates only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects. If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker Pod to which the messaging data can be migrated. However, if you scale a deployment down to zero brokers and then back up to only some of the brokers that were in the original deployment, drainer Pods are started for the brokers that remain shut down. The following example procedure shows the behavior of the scaledown controller. 
Prerequisites You already have a basic broker deployment. See Section 3.4.1, "Deploying a basic broker instance" . You should understand how message migration works. For more information, see Section 4.9.2, "Message migration" . Procedure In the deploy/crs directory of the Operator repository that you originally downloaded and extracted, open the main broker CR, broker_activemqartemis_cr.yaml . In the main broker CR set messageMigration and persistenceEnabled to true . These settings mean that when you later scale down the size of your clustered broker deployment, the Operator automatically starts a scaledown controller and migrates messages to a broker Pod that is still running. In your existing broker deployment, verify which Pods are running. USD oc get pods You see output that looks like the following. The preceding output shows that there are three Pods running; one for the broker Operator itself, and a separate Pod for each broker in the deployment. Log into each Pod and send some messages to each broker. Supposing that Pod ex-aao-ss-0 has a cluster IP address of 172.17.0.6 , run the following command: USD /opt/amq-broker/bin/artemis producer --url tcp://172.17.0.6:61616 --user admin --password admin Supposing that Pod ex-aao-ss-1 has a cluster IP address of 172.17.0.7 , run the following command: USD /opt/amq-broker/bin/artemis producer --url tcp://172.17.0.7:61616 --user admin --password admin The preceding commands create a queue called TEST on each broker and add 1000 messages to each queue. Scale the cluster down from two brokers to one. Open the main broker CR, broker_activemqartemis_cr.yaml . In the CR, set deploymentPlan.size to 1 . At the command line, apply the change: USD oc apply -f deploy/crs/broker_activemqartemis_cr.yaml You see that the Pod ex-aao-ss-1 starts to shut down. The scaledown controller starts a new drainer Pod of the same name. This drainer Pod also shuts down after it migrates all messages from broker Pod ex-aao-ss-1 to the other broker Pod in the cluster, ex-aao-ss-0 . When the drainer Pod is shut down, check the message count on the TEST queue of broker Pod ex-aao-ss-0 . You see that the number of messages in the queue is 2000, indicating that the drainer Pod successfully migrated 1000 messages from the broker Pod that shut down.
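If you want to confirm that count from the command line rather than from the console, a minimal sketch such as the following can be used, assuming the Pod names and admin credentials from the example above; the artemis queue stat subcommand and its --queueName filter come from the standard AMQ Broker CLI and may differ slightly between versions.

# Sketch: check the message count on the TEST queue inside the remaining broker Pod.
# The Pod name, credentials, and broker IP placeholder are assumptions from the example above.
oc exec ex-aao-ss-0 -- /opt/amq-broker/bin/artemis queue stat \
  --url tcp://<broker_pod_cluster_ip>:61616 --user admin --password admin \
  --queueName TEST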
|
[
"<address-settings> <!-- if you define auto-create on certain queues, management has to be auto-create --> <address-setting match=\"activemq.management#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <!-- default for catch all --> <address-setting match=\"#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <address-settings>",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v2alpha2 kind: ActiveMQArtemisAddress metadata: name: myAddressDeployment0 namespace: myProject spec: addressName: myAddress0 queueName: myQueue0 routingType: anycast",
"oc project <project_name>",
"oc create -f <path/to/address_custom_resource_instance> .yaml",
"oc delete -f <path/to/address_custom_resource_instance> .yaml",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v2alpha2 kind: ActiveMQArtemisAddress metadata: name: ex-aaoaddress spec: addressName: myDeadLetterAddress queueName: myDeadLetterQueue routingType: anycast",
"oc project <project_name>",
"oc create -f <path/to/address_custom_resource_instance> .yaml",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting:",
"spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress",
"spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5",
"spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3",
"spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true addressSettings: applyRule: merge_all addressSetting: - match: myAddress deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 5 - match: 'myOtherAddresses*' deadLetterAddress: myDeadLetterAddress maxDeliveryAttempts: 3",
"oc create -f <path/to/broker_custom_resource_instance> .yaml",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v1alpha1 kind: ActiveMQArtemisSecurity metadata: name: ex-prop spec: loginModules: propertiesLoginModules: - name: \"prop-module\" users: - name: \"sam\" password: \"samsecret\" roles: - \"sender\" - name: \"rob\" password: \"robsecret\" roles: - \"receiver\" securityDomains: brokerDomain: name: \"activemq\" loginModules: - name: \"prop-module\" flag: \"sufficient\" securitySettings: broker: - match: \"#\" permissions: - operationType: \"send\" roles: - \"sender\" - operationType: \"createAddress\" roles: - \"sender\" - operationType: \"createDurableQueue\" roles: - \"sender\" - operationType: \"consume\" roles: - \"receiver\"",
"oc project <project_name>",
"oc create -f <path/to/address_custom_resource_instance> .yaml",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true storage: size: 4Gi",
"oc project <project_name>",
"oc create -f <path/to/custom_resource_instance> .yaml",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true resources: limits: cpu: \"500m\" memory: \"1024M\" requests: cpu: \"250m\" memory: \"512M\"",
"oc project <project_name>",
"oc create -f <path/to/custom_resource_instance> .yaml",
"FROM registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:d327d358e6cfccac14becc486bce643e34970ecfc6c4d187a862425867a9ac8a",
"login -u <user> -p <password> --server= <host:port>",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder initImage: requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.9.4 deploymentPlan: size: 1 image: placeholder initImage: <custom_init_container_image_url> requireLogin: false persistenceEnabled: true journalType: nio messageMigration: true",
"oc project <project_name>",
"oc create -f <path/to/custom_resource_instance> .yaml",
"spec: acceptors: - name: my-acceptor protocols: amqp port: 5672",
"spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672",
"spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5",
"spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true",
"spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true",
"CN=my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain",
"CN=my-broker-deployment-*-svc-rte-my-openshift-project.my-openshift-domain",
"\"SAN=DNS:my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain,DNS:my-broker-deployment-1-svc-rte-my-openshift-project.my-openshift-domain,...\"",
"keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks",
"keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem",
"keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem",
"oc login -u system:admin",
"oc project <my_openshift_project>",
"oc create secret generic my-tls-secret --from-file=broker.ks=~/broker.ks --from-file=client.ts=~/broker.ks --from-literal=keyStorePassword= <password> --from-literal=trustStorePassword= <password>",
"oc secrets link sa/amq-broker-operator secret/my-tls-secret",
"spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5",
"keytool -genkey -alias broker -keyalg RSA -keystore ~/broker.ks",
"keytool -export -alias broker -keystore ~/broker.ks -file ~/broker_cert.pem",
"keytool -import -alias broker -keystore ~/client.ts -file ~/broker_cert.pem",
"keytool -genkey -alias broker -keyalg RSA -keystore ~/client.ks",
"keytool -export -alias broker -keystore ~/client.ks -file ~/client_cert.pem",
"keytool -import -alias broker -keystore ~/broker.ts -file ~/client_cert.pem",
"oc login -u system:admin",
"oc project <my_openshift_project>",
"oc create secret generic my-tls-secret --from-file=broker.ks=~/broker.ks --from-file=client.ts=~/broker.ts --from-literal=keyStorePassword= <password> --from-literal=trustStorePassword= <password>",
"oc secrets link sa/amq-broker-operator secret/my-tls-secret",
"spec: acceptors: - name: my-acceptor protocols: amqp,openwire port: 5672 sslEnabled: true sslSecret: my-tls-secret expose: true connectionsAllowed: 5",
"tcp://my-broker-deployment-0.my-broker-deployment-hdls-svc.my-openshift-project.svc.cluster.local",
"curl https://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain",
"tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true &trustStorePath=~/client.ts&trustStorePassword= <password>",
"tcp://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?useTopologyForLoadBalancing=false&sslEnabled=true &keyStorePath=~/client.ks&keyStorePassword= <password> &trustStorePath=~/client.ts&trustStorePassword= <password>",
"ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443\" Also, specify the following JVM flags -Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword= <password>",
"ssl://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443\" Also, specify the following JVM flags -Djavax.net.ssl.keyStore=~/client.ks -Djavax.net.ssl.keyStorePassword= <password> -Djavax.net.ssl.trustStore=~/client.ts -Djavax.net.ssl.trustStorePassword= <password>",
"amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true &transport.trustStoreLocation=~/client.ts&transport.trustStorePassword= <password>",
"amqps://my-broker-deployment-0-svc-rte-my-openshift-project.my-openshift-domain:443?transport.verifyHost=true &transport.keyStoreLocation=~/client.ks&transport.keyStorePassword= <password> &transport.trustStoreLocation=~/client.ts&transport.trustStorePassword= <password>",
"oc edit -f <path/to/custom_resource_instance> .yaml",
"spec: acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true",
"spec: acceptors: - name: my-acceptor protocols: amqp port: 5672 connectionsAllowed: 5 expose: true sslEnabled: true amqpMinLargeMessageSize: 204800",
"oc get pods",
"activemq-artemis-operator-8566d9bf58-9g25l 1/1 Running 0 3m38s ex-aao-ss-0 1/1 Running 0 112s ex-aao-ss-1 1/1 Running 0 8s",
"/opt/amq-broker/bin/artemis producer --url tcp://172.17.0.6:61616 --user admin --password admin",
"/opt/amq-broker/bin/artemis producer --url tcp://172.17.0.7:61616 --user admin --password admin",
"oc apply -f deploy/crs/broker_activemqartemis_cr.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_amq_broker_on_openshift/assembly-br-configuring-operator-based-deployments_broker-ocp
|
Chapter 3. Assigning automation hub administrator permissions
|
Chapter 3. Assigning automation hub administrator permissions Hub administrative users must be assigned the hubadmin role to manage user permissions and groups. You can assign the hubadmin role to a user through the Ansible Automation Platform Central Authentication client. Prerequisites A user storage provider (for example, LDAP) has been added to your central authentication. Procedure Navigate to the ansible-automation-platform realm on your SSO client. From the navigation panel, select User Access Users . Select a user from the list by clicking their ID. Click the Role Mappings tab. From the Client Roles list, select automation-hub . Click hubadmin from the Available Roles field, then click Add selected > . The user is now a hubadmin . Repeat steps 3-6 to assign the hubadmin role to any additional users.
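If you prefer to script this assignment instead of using the web console, the following sketch uses the Keycloak admin CLI (kcadm.sh) that ships with Ansible Automation Platform Central Authentication; the installation path, server URL, and credentials are assumptions for your environment.

# Sketch: assign the hubadmin client role with the Keycloak admin CLI.
# The path to kcadm.sh, the server URL, and the credentials are environment-specific assumptions.
/opt/rh-sso/bin/kcadm.sh config credentials \
  --server https://<central_auth_host>/auth --realm master \
  --user admin --password <admin_password>
/opt/rh-sso/bin/kcadm.sh add-roles -r ansible-automation-platform \
  --uusername <hub_user> --cclientid automation-hub --rolename hubadmin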
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_and_configuring_central_authentication_for_the_ansible_automation_platform/assembly-assign-hub-admin-permissions
|
Chapter 4. Configuring your Logging deployment
|
Chapter 4. Configuring your Logging deployment 4.1. About the Cluster Logging custom resource To configure the logging subsystem for Red Hat OpenShift, you customize the ClusterLogging custom resource (CR). 4.1.1. About the ClusterLogging custom resource To make changes to your logging subsystem environment, create and modify the ClusterLogging custom resource (CR). Instructions for creating or modifying a CR are provided in this documentation as appropriate. The following example shows a typical custom resource for the logging subsystem. Sample ClusterLogging custom resource (CR) apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" 1 namespace: "openshift-logging" 2 spec: managementState: "Managed" 3 logStore: type: "elasticsearch" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: 5 type: "kibana" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: "fluentd" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi 1 The CR name must be instance . 2 The CR must be installed to the openshift-logging namespace. 3 The Red Hat OpenShift Logging Operator management state. When set to Unmanaged , the operator is in an unsupported state and does not receive updates. 4 Settings for the log store, including retention policy, the number of nodes, the resource requests and limits, and the storage class. 5 Settings for the visualizer, including the resource requests and limits, and the number of pod replicas. 6 Settings for the log collector, including the resource requests and limits. 4.2. Configuring the logging collector The logging subsystem for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes . All supported modifications to the log collector can be performed through the spec.collection.logs.fluentd stanza in the ClusterLogging custom resource (CR). 4.2.1. About unsupported configurations The supported way of configuring the logging subsystem for Red Hat OpenShift is by using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator or OpenShift Elasticsearch Operator to Unmanaged . An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed . 4.2.2. Viewing logging collector pods You can view the Fluentd logging collector pods and the corresponding nodes that they are running on.
The Fluentd logging collector pods run only in the openshift-logging project. Procedure Run the following command in the openshift-logging project to view the Fluentd logging collector pods and their details: USD oc get pods --selector component=collector -o wide -n openshift-logging Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none> 4.2.3. Configure log collector CPU and memory limits The log collector allows for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi 1 Specify the CPU and memory limits and requests as needed. The values shown are the default values. 4.2.4. Advanced configuration for the log forwarder The logging subsystem for Red Hat OpenShift includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors: Chunk and chunk buffer sizes Chunk flushing behavior Chunk forwarding retry behavior Fluentd collects log data in a single blob called a chunk . When Fluentd creates a chunk, the chunk is considered to be in the stage , where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue , where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured. By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. These parameters can help you determine the trade-offs between latency and throughput. To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries. You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd. Note These parameters are: Not relevant to most users. The default settings should give good general performance. 
Only for advanced users with detailed knowledge of Fluentd configuration and performance. Only for performance tuning. They have no effect on functional aspects of logging. Table 4.1. Advanced Fluentd Configuration Parameters Parameter Description Default chunkLimitSize The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. 8m totalLimitSize The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. 8G flushInterval The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days). 1s flushMode The method to perform flushes: lazy : Flush chunks based on the timekey parameter. You cannot modify the timekey parameter. interval : Flush chunks based on the flushInterval parameter. immediate : Flush chunks immediately after data is added to a chunk. interval flushThreadCount The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. 2 overflowAction The chunking behavior when the queue is full: throw_exception : Raise an exception to show in the log. block : Stop data chunking until the full buffer issue is resolved. drop_oldest_chunk : Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks. block retryMaxInterval The maximum time in seconds for the exponential_backoff retry method. 300s retryType The retry method when flushing fails: exponential_backoff : Increase the time between flush retries. Fluentd doubles the time it waits until the retry until the retry_max_interval parameter is reached. periodic : Retries flushes periodically, based on the retryWait parameter. exponential_backoff retryTimeOut The maximum time interval to attempt retries before the record is discarded. 60m retryWait The time in seconds before the chunk flush. 1s For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Add or modify any of the following parameters: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: "300s" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9 ... 1 Specify the maximum size of each chunk before it is queued for flushing. 2 Specify the interval between chunk flushes. 3 Specify the method to perform chunk flushes: lazy , interval , or immediate . 4 Specify the number of threads to use for chunk flushes. 5 Specify the chunking behavior when the queue is full: throw_exception , block , or drop_oldest_chunk . 6 Specify the maximum interval in seconds for the exponential_backoff chunk flushing method. 7 Specify the retry type when chunk flushing fails: exponential_backoff or periodic . 8 Specify the time in seconds before the chunk flush. 9 Specify the maximum size of the chunk buffer. 
Verify that the Fluentd pods are redeployed: USD oc get pods -l component=collector -n openshift-logging Check that the new values are in the fluentd config map: USD oc extract configmap/fluentd --confirm Example fluentd.conf <buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer> 4.2.5. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: logs: type: "fluentd" fluentd: {} Verify that the collector pods are redeployed: USD oc get pods -l component=collector -n openshift-logging Additional resources Forwarding logs to third-party systems 4.3. Configuring the log store Logging subsystem for Red Hat OpenShift uses Elasticsearch 6 (ES) to store and organize the log data. You can make modifications to your log store, including: storage for your Elasticsearch cluster shard replication across data nodes in the cluster, from full replication to no replication external access to Elasticsearch data Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16G of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory, up to a maximum of 64G for each Elasticsearch node. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments. 4.3.1. 
Forwarding audit logs to the log store By default, OpenShift Logging does not store audit logs in the internal OpenShift Container Platform Elasticsearch log store. You can send audit logs to this log store so, for example, you can view them in Kibana. To send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API. Important The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. Verify that the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. The logging subsystem for Red Hat OpenShift does not comply with those regulations. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create or edit a YAML file that defines the ClusterLogForwarder CR object: Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources For more information on the Log Forwarding API, see Forwarding logs using the Log Forwarding API . 4.3.2. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. 
The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites The logging subsystem for Red Hat OpenShift and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 4.3.3. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. 
For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available. 4.3.4. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit clusterlogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy, when using 5 or more nodes. You cannot apply this policy on deployments of single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy. 
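After you save the change, you can read the policy back to confirm what the Operator applied. A minimal sketch, assuming the default resource names used throughout this section, might look like the following.

# Sketch: confirm the redundancy policy on the ClusterLogging CR and on the
# Elasticsearch CR that the Operator generates (default names assumed).
oc -n openshift-logging get clusterlogging instance \
  -o jsonpath='{.spec.logStore.elasticsearch.redundancyPolicy}{"\n"}'
oc -n openshift-logging get elasticsearch elasticsearch \
  -o jsonpath='{.spec.redundancyPolicy}{"\n"}'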
Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 4.3.5. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods. 4.3.6. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. 4.3.7. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {} 4.3.8. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs requires a reboot. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. 
Procedure To perform a rolling cluster restart: Change to the openshift-logging project: Get the names of the Elasticsearch pods: Scale down the collector pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}' Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: Example output Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool: For example: Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: For example: Example output A new pod is deployed. After the pod has a ready container, you can move on to the deployment. Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: For example: Example output Check that the Elasticsearch cluster is in a green or yellow state: Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here. For example: 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: For example: Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the collector pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}' 4.3.9. Exposing the log store service as a route By default, the log store that is deployed with the logging subsystem for Red Hat OpenShift is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route, your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . 
Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: USD oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 USD oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: USD oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. You must have access to the project to be able to access to the logs. Procedure To expose the log store externally: Change to the openshift-logging project: USD oc project openshift-logging Extract the CA certificate from the log store and write to the admin-ca file: USD oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certifcate or use the command in the step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes. Run the following command to add the log store CA certificate to the route YAML you created in the step: USD cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: USD oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: USD token=USD(oc whoami -t) Set the elasticsearch route you created as an environment variable. USD routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 4.4. Configuring the log visualizer OpenShift Container Platform uses Kibana to display the log data collected by the logging subsystem. You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. 4.4.1. Configuring CPU and memory limits The logging subsystem components allow for adjustments to both the CPU and memory limits. 
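Before adjusting these values, it can be useful to see what the logging pods currently consume. If cluster metrics are available, a quick check looks like the following; this command is a sketch and is not part of the original procedure: $ oc adm top pods -n openshift-logging Compare the reported usage with the requests and limits you intend to set in the procedure below.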
Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: "fluentd" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 4.4.2. Scaling redundancy for the log visualizer nodes You can scale the pod that hosts the log visualizer for redundancy. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: visualization: type: "kibana" kibana: replicas: 1 1 1 Specify the number of Kibana nodes. 4.5. Configuring logging subsystem storage Elasticsearch is a memory-intensive application. The default logging subsystem installation deploys 16G of memory for both memory requests and memory limits. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments. 4.5.1. Storage considerations for the logging subsystem for Red Hat OpenShift A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/ to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. 
Although the alerts use the same default values, you cannot change these values in the alerts. 4.5.2. Additional resources Configuring persistent storage for the log store 4.6. Configuring CPU and memory limits for logging subsystem components You can configure both the CPU and memory limits for each of the logging subsystem components as needed. 4.6.1. Configuring CPU and memory limits The logging subsystem components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: "fluentd" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 4.7. Using tolerations to control OpenShift Logging pod placement You can use taints and tolerations to ensure that logging subsystem pods run on specific nodes and that no other workload can run on those nodes. Taints and tolerations are simple key:value pair. A taint on a node instructs the node to repel all pods that do not tolerate the taint. The key is any string, up to 253 characters and the value is any string up to 63 characters. The string must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. Sample logging subsystem CR with tolerations apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 tolerations: 1 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: "ZeroRedundancy" visualization: type: "kibana" kibana: tolerations: 2 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: "fluentd" fluentd: tolerations: 3 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi 1 This toleration is added to the Elasticsearch pods. 2 This toleration is added to the Kibana pod. 3 This toleration is added to the logging collector pods. 4.7.1. Using tolerations to control the log store pod placement You can control which nodes the log store pods runs on and prevent other workloads from using those nodes by using tolerations on the pods. 
You apply tolerations to the log store pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the log store pods can run on that node. By default, the log store pods have the following toleration: tolerations: - effect: "NoExecute" key: "node.kubernetes.io/disk-pressure" operator: "Exists" Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the OpenShift Logging pods: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 elasticsearch=node:NoExecute This example places a taint on node1 that has key elasticsearch , value node , and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that match the taint and remove existing pods that do not match. Edit the logstore section of the ClusterLogging CR to configure a toleration for the Elasticsearch pods: logStore: type: "elasticsearch" elasticsearch: nodeCount: 1 tolerations: - key: "elasticsearch" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require a taint with the key elasticsearch to be present on the Node. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration could be scheduled onto node1 . 4.7.2. Using tolerations to control the log visualizer pod placement You can control the node where the log visualizer pod runs and prevent other workloads from using those nodes by using tolerations on the pods. You apply tolerations to the log visualizer pod through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the Kibana pod can run on that node. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the log visualizer pod: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 kibana=node:NoExecute This example places a taint on node1 that has key kibana , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and remove existing pods that do not match. Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod: visualization: type: "kibana" kibana: tolerations: - key: "kibana" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. 
A pod with this toleration would be able to schedule onto node1 . 4.7.3. Using tolerations to control the log collector pod placement You can ensure which nodes the logging collector pods run on and prevent other workloads from using those nodes by using tolerations on the pods. You apply tolerations to logging collector pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. You can use taints and tolerations to ensure the pod does not get evicted for things like memory and CPU issues. By default, the logging collector pods have the following toleration: tolerations: - key: "node-role.kubernetes.io/master" operator: "Exists" effect: "NoExecute" Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Use the following command to add a taint to a node where you want logging collector pods to schedule logging collector pods: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 collector=node:NoExecute This example places a taint on node1 that has key collector , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match. Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods: collection: logs: type: "fluentd" fluentd: tolerations: - key: "collector" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1 . 4.7.4. Additional resources Controlling pod placement using node taints . 4.8. Moving logging subsystem resources with node selectors You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes. 4.8.1. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... 
spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verification To verify that a component has moved, you can use the oc get pod -o wide command. For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification. 
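If you prefer not to open an editor, the same node selector can be applied as a patch. This is a sketch that assumes the default ClusterLogging resource name instance and the infra label shown above: $ oc -n openshift-logging patch clusterlogging/instance --type merge -p '{"spec":{"visualization":{"kibana":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}}}' Either way of making the change triggers the rollout described next.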
After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s 4.9. Configuring systemd-journald and Fluentd Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost because the journal cannot keep up with the logging rate from system services. We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries. 4.9.1. Configuring systemd-journald for OpenShift Logging As you scale up your project, the default logging environment might need some adjustments. For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that OpenShift Logging does not use excessive resources without dropping logs. You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.9.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: "worker" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10 1 Set the permissions for the journal.conf file. It is recommended to set 0644 permissions. 2 Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes . 3 Configure whether to forward log messages. Defaults to no for each. 
Specify: ForwardToConsole to forward logs to the system console. ForwardToKsmg to forward logs to the kernel log buffer. ForwardToSyslog to forward to a syslog daemon. ForwardToWall to forward messages as wall messages to all logged-in users. 4 Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month . 5 Configure rate limiting. If more logs are received than what is specified in RateLimitBurst during the time interval defined by RateLimitIntervalSec , all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000 , which are the defaults. 6 Specify how logs are stored. The default is persistent : volatile to store logs in memory in /var/log/journal/ . persistent to store logs to disk in /var/log/journal/ . systemd creates the directory if it does not exist. auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/systemd/journal . none to not store logs. systemd drops all logs. 7 Specify the timeout before synchronizing journal files to disk for ERR , WARNING , NOTICE , INFO , and DEBUG logs. systemd immediately syncs after receiving a CRIT , ALERT , or EMERG log. The default is 1s . 8 Specify the maximum size the journal can use. The default is 8G . 9 Specify how much disk space systemd must leave free. The default is 20% . 10 Specify the maximum size for individual journal files stored persistently in /var/log/journal . The default is 10M . Note If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as it processes any messages that would have previously been throttled. For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html . The default settings listed on that page might not apply to OpenShift Container Platform. Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config. For example: USD oc apply -f 40-worker-custom-journald.yaml The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Monitor the status of the rollout of the new rendered configuration to each node: USD oc describe machineconfigpool/worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e 4.10. Maintenance and support 4.10.1. About unsupported configurations The supported way of configuring the logging subsystem for Red Hat OpenShift is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. 
The Operators reverse everything to the defined state by default and by design. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator or OpenShift Elasticsearch Operator to Unmanaged . An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed . 4.10.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the unmanaged state to modify the following components: The Elasticsearch CR The Kibana deployment The fluent.conf file The Fluentd daemon set You must set the OpenShift Elasticsearch Operator to the unmanaged state to modify the following component: the Elasticsearch deployment files. Explicitly unsupported cases include: Configuring default log rotation . You cannot modify the default log rotation configuration. Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 4.10.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. 
Reported issues must be reproduced after removing any overrides for support to proceed.
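For reference, an override entry in the ClusterVersion resource has the following general shape. The names in this sketch are placeholders; substitute the CVO-managed object you intend to take control of: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: overrides: - kind: Deployment group: apps name: <component_deployment> namespace: <component_namespace> unmanaged: true Removing the entry returns the component to CVO management and clears the upgrade block.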
|
[
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: managementState: \"Managed\" 3 logStore: type: \"elasticsearch\" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: 5 type: \"kibana\" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: \"fluentd\" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc get pods --selector component=collector -o wide -n openshift-logging",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/fluentd --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer>",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: logs: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc edit clusterlogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch-",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"oc edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 tolerations: 1 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: \"ZeroRedundancy\" visualization: type: \"kibana\" kibana: tolerations: 2 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: tolerations: 3 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"tolerations: - effect: \"NoExecute\" key: \"node.kubernetes.io/disk-pressure\" operator: \"Exists\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 elasticsearch=node:NoExecute",
"logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 1 tolerations: - key: \"elasticsearch\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 kibana=node:NoExecute",
"visualization: type: \"kibana\" kibana: tolerations: - key: \"kibana\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"tolerations: - key: \"node-role.kubernetes.io/master\" operator: \"Exists\" effect: \"NoExecute\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"collection: logs: type: \"fluentd\" fluentd: tolerations: - key: \"collector\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"variant: openshift version: 4.9.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/configuring-your-logging-deployment
|
Chapter 5. Configuring kernel parameters at runtime
|
Chapter 5. Configuring kernel parameters at runtime As a system administrator, you can modify many facets of the Red Hat Enterprise Linux kernel's behavior at runtime. Configure kernel parameters at runtime by using the sysctl command and by modifying the configuration files in the /etc/sysctl.d/ and /proc/sys/ directories. Important Configuring kernel parameters on a production system requires careful planning. Unplanned changes can render the kernel unstable, requiring a system reboot. Verify that you are using valid options before changing any kernel values. For more information about tuning kernel on IBM DB2, see Tuning Red Hat Enterprise Linux for IBM DB2 . 5.1. What are kernel parameters Kernel parameters are tunable values that you can adjust while the system is running. Note that for changes to take effect, you do not need to reboot the system or recompile the kernel. It is possible to address the kernel parameters through: The sysctl command The virtual file system mounted at the /proc/sys/ directory The configuration files in the /etc/sysctl.d/ directory Tunables are divided into classes by the kernel subsystem. Red Hat Enterprise Linux has the following tunable classes: Table 5.1. Table of sysctl classes Tunable class Subsystem abi Execution domains and personalities crypto Cryptographic interfaces debug Kernel debugging interfaces dev Device-specific information fs Global and specific file system tunables kernel Global kernel tunables net Network tunables sunrpc Sun Remote Procedure Call (NFS) user User Namespace limits vm Tuning and management of memory, buffers, and cache Additional resources sysctl(8) , and sysctl.d(5) manual pages 5.2. Configuring kernel parameters temporarily with sysctl Use the sysctl command to temporarily set kernel parameters at runtime. The command is also useful for listing and filtering tunables. Prerequisites Root permissions Procedure List all parameters and their values. Note The # sysctl -a command displays kernel parameters, which can be adjusted at runtime and at boot time. To configure a parameter temporarily, enter: The sample command above changes the parameter value while the system is running. The changes take effect immediately, without a need for restart. Note The changes return back to default after your system reboots. Additional resources The sysctl(8) manual page Using configuration files in /etc/sysctl.d/ to adjust kernel parameters 5.3. Configuring kernel parameters permanently with sysctl Use the sysctl command to permanently set kernel parameters. Prerequisites Root permissions Procedure List all parameters. The command displays all kernel parameters that can be configured at runtime. Configure a parameter permanently: The sample command changes the tunable value and writes it to the /etc/sysctl.conf file, which overrides the default values of kernel parameters. The changes take effect immediately and persistently, without a need for restart. Note To permanently modify kernel parameters, you can also make manual changes to the configuration files in the /etc/sysctl.d/ directory. Additional resources The sysctl(8) and sysctl.conf(5) manual pages Using configuration files in /etc/sysctl.d/ to adjust kernel parameters 5.4. Using configuration files in /etc/sysctl.d/ to adjust kernel parameters You must modify the configuration files in the /etc/sysctl.d/ directory manually to permanently set kernel parameters. Prerequisites You have root permissions. 
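For illustration, the end result of the procedure below is a small drop-in file; the file name and tunables in this sketch are examples only, not recommendations. A drop-in such as /etc/sysctl.d/99-custom.conf might contain: vm.swappiness=10 net.ipv4.ip_forward=1 Any file in /etc/sysctl.d/ that follows this one-parameter-per-line layout is applied at boot.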
Procedure Create a new configuration file in /etc/sysctl.d/ : Include kernel parameters, one per line: Save the configuration file. Reboot the machine for the changes to take effect. Alternatively, apply changes without rebooting: The command enables you to read values from the configuration file, which you created earlier. Additional resources sysctl(8) , sysctl.d(5) manual pages 5.5. Configuring kernel parameters temporarily through /proc/sys/ Set kernel parameters temporarily through the files in the /proc/sys/ virtual file system directory. Prerequisites Root permissions Procedure Identify a kernel parameter you want to configure. The writable files returned by the command can be used to configure the kernel. The files with read-only permissions provide feedback on the current settings. Assign a target value to the kernel parameter. The configuration changes applied by using a command are not permanent and will disappear once the system is restarted. Verification Verify the value of the newly set kernel parameter.
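For example, with an illustrative tunable, temporarily enabling packet forwarding and then reading the value back looks like this: # echo 1 > /proc/sys/net/ipv4/ip_forward # cat /proc/sys/net/ipv4/ip_forward 1 Because the change is made through /proc/sys/ , it disappears at the next reboot, as noted above.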
|
[
"sysctl -a",
"sysctl <TUNABLE_CLASS>.<PARAMETER>=<TARGET_VALUE>",
"sysctl -a",
"sysctl -w <TUNABLE_CLASS>.<PARAMETER>=<TARGET_VALUE> >> /etc/sysctl.conf",
"vim /etc/sysctl.d/< some_file.conf >",
"< TUNABLE_CLASS >.< PARAMETER >=< TARGET_VALUE > < TUNABLE_CLASS >.< PARAMETER >=< TARGET_VALUE >",
"sysctl -p /etc/sysctl.d/< some_file.conf >",
"ls -l /proc/sys/< TUNABLE_CLASS >/",
"echo < TARGET_VALUE > > /proc/sys/< TUNABLE_CLASS >/< PARAMETER >",
"cat /proc/sys/< TUNABLE_CLASS >/< PARAMETER >"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-parameters-at-runtime_managing-monitoring-and-updating-the-kernel
|
Chapter 14. Volume cloning
|
Chapter 14. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 14.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone. The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
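The clone can also be requested declaratively rather than through the web console. The following manifest is a sketch with placeholder names; the storage class and size follow the same rules described in the procedure above, and the clone must be created in the same namespace as the source PVC: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <clone_pvc_name> spec: storageClassName: <rbd_or_cephfs_storage_class> dataSource: kind: PersistentVolumeClaim name: <source_pvc_name> accessModes: - ReadWriteOnce resources: requests: storage: <same_size_as_source> After you create the manifest with oc create -f , wait for the new PVC to reach the Bound state, just as with a console-created clone.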
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-cloning_osp
|
Chapter 3. Configuring the internal OAuth server
|
Chapter 3. Configuring the internal OAuth server 3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 3.2. OAuth token request flows and responses The OAuth server supports standard authorization code grant and the implicit grant OAuth authorization flows. When requesting an OAuth token using the implicit grant flow ( response_type=token ) with a client_id configured to request WWW-Authenticate challenges (like openshift-challenging-client ), these are the possible server responses from /oauth/authorize , and how they should be handled: Status Content Client response 302 Location header containing an access_token parameter in the URL fragment ( RFC 6749 section 4.2.2 ) Use the access_token value as the OAuth token. 302 Location header containing an error query parameter ( RFC 6749 section 4.1.2.1 ) Fail, optionally surfacing the error (and optional error_description ) query values to the user. 302 Other Location header Follow the redirect, and process the result using these rules. 401 WWW-Authenticate header present Respond to challenge if type is recognized (e.g. Basic , Negotiate , etc), resubmit request, and process the result using these rules. 401 WWW-Authenticate header missing No challenge authentication is possible. Fail and show response body (which might contain links or details on alternate methods to obtain an OAuth token). Other Other Fail, optionally surfacing response body to the user. 3.3. Options for the internal OAuth server Several configuration options are available for the internal OAuth server. 3.3.1. OAuth token duration options The internal OAuth server generates two kinds of tokens: Token Description Access tokens Longer-lived tokens that grant access to the API. Authorize codes Short-lived tokens whose only use is to be exchanged for an access token. You can configure the default duration for both types of token. If necessary, you can override the duration of the access token by using an OAuthClient object definition. 3.3.2. OAuth grant options When the OAuth server receives token requests for a client to which the user has not previously granted permission, the action that the OAuth server takes is dependent on the OAuth client's grant strategy. The OAuth client requesting token must provide its own grant strategy. You can apply the following default methods: Grant option Description auto Auto-approve the grant and retry the request. prompt Prompt the user to approve or deny the grant. 3.4. Configuring the internal OAuth server's token duration You can configure default options for the internal OAuth server's token duration. Important By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses. If the default time is insufficient, then this can be modified using the following procedure. Procedure Create a configuration file that contains the token duration options. The following file sets this to 48 hours, twice the default. 
apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1 1 Set accessTokenMaxAgeSeconds to control the lifetime of access tokens. The default lifetime is 24 hours, or 86400 seconds. This attribute cannot be negative. If set to zero, the default lifetime is used. Apply the new configuration file: Note Because you update the existing OAuth server, you must use the oc apply command to apply the change. USD oc apply -f </path/to/file.yaml> Confirm that the changes are in effect: USD oc describe oauth.config.openshift.io/cluster Example output ... Spec: Token Config: Access Token Max Age Seconds: 172800 ... 3.5. Configuring token inactivity timeout for the internal OAuth server You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuth configuration to set a token inactivity timeout. Edit the OAuth object: USD oc edit oauth cluster Add the spec.tokenConfig.accessTokenInactivityTimeout field and set your timeout value: apiVersion: config.openshift.io/v1 kind: OAuth metadata: ... spec: tokenConfig: accessTokenInactivityTimeout: 400s 1 1 Set a value with the appropriate units, for example 400s for 400 seconds, or 30m for 30 minutes. The minimum allowed timeout value is 300s . Save the file to apply the changes. Check that the OAuth server pods have restarted: USD oc get clusteroperators authentication Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 145m Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.14.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Verification Log in to the cluster with an identity from your IDP. Execute a command and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 400 seconds. Try to execute a command from the same identity's session. This command should fail because the token should have expired due to inactivity longer than the configured timeout. Example output error: You must be logged in to the server (Unauthorized) 3.6. Customizing the internal OAuth server URL You can customize the internal OAuth server URL by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Warning If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints. 
For example: USD oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1 1 For self-signed certificates, the ca.crt file must contain the custom CA certificate, otherwise the login will not succeed. The Cluster Authentication Operator publishes the OAuth server's serving certificate in the oauth-serving-cert config map in the openshift-config-managed namespace. You can find the certificate in the data.ca-bundle.crt key of the config map. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 3.7. OAuth server metadata Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For example, they might have to discover what the address of the <namespace_route> is without manual configuration. To aid in this, OpenShift Container Platform implements the IETF OAuth 2.0 Authorization Server Metadata draft specification. Thus, any application running inside the cluster can issue a GET request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information: 1 The authorization server's issuer identifier, which is a URL that uses the https scheme and has no query or fragment components. This is the location where .well-known RFC 5785 resources containing information about the authorization server are published. 2 URL of the authorization server's authorization endpoint. See RFC 6749 . 3 URL of the authorization server's token endpoint. See RFC 6749 . 4 JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server supports. Note that not all supported scope values are advertised. 5 JSON array containing a list of the OAuth 2.0 response_type values that this authorization server supports. The array values used are the same as those used with the response_types parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591 . 6 JSON array containing a list of the OAuth 2.0 grant type values that this authorization server supports. The array values used are the same as those used with the grant_types parameter defined by OAuth 2.0 Dynamic Client Registration Protocol in RFC 7591 . 7 JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this authorization server. Code challenge method values are used in the code_challenge_method parameter defined in Section 4.3 of RFC 7636 . 
The valid code challenge method values are those registered in the IANA PKCE Code Challenge Methods registry. See IANA OAuth Parameters . 3.8. Troubleshooting OAuth API events In some cases the API server returns an unexpected condition error message that is difficult to debug without direct access to the API master log. The underlying reason for the error is purposely obscured in order to avoid providing an unauthenticated user with information about the server's state. A subset of these errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an unexpected condition server error during OAuth, run oc get events to view these events under ServiceAccount . The following example warns of a service account that is missing a proper OAuth redirect URI: USD oc get events | grep ServiceAccount Example output 1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Running oc describe sa/<service_account_name> reports any OAuth events associated with the given service account name. USD oc describe sa/proxy | grep -A5 Events Example output Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> The following is a list of the possible event errors: No redirect URI annotations or an invalid URI is specified Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Invalid route specified Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Invalid reference type specified Reason Message NoSAOAuthRedirectURIs [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Missing SA tokens Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens
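For reference, a short sketch of resolving the NoSAOAuthRedirectURIs warning by annotating the service account; the route name, URL, and annotation suffix used here are placeholders only:

oc annotate serviceaccount proxy -n myproject serviceaccounts.openshift.io/oauth-redirecturi.first=https://proxy-myproject.example.com

Alternatively, reference an existing route dynamically so that the redirect URI tracks the route:

oc annotate serviceaccount proxy -n myproject serviceaccounts.openshift.io/oauth-redirectreference.first='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"proxy"}}'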
|
[
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1",
"oc apply -f </path/to/file.yaml>",
"oc describe oauth.config.openshift.io/cluster",
"Spec: Token Config: Access Token Max Age Seconds: 172800",
"oc edit oauth cluster",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1",
"oc get clusteroperators authentication",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 145m",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.14.0 True False False 145m",
"error: You must be logged in to the server (Unauthorized)",
"oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }",
"oc get events | grep ServiceAccount",
"1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"oc describe sa/proxy | grep -A5 Events",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/configuring-internal-oauth
|
Chapter 6. Proof of concept deployment using SSL/TLS certificates
|
Chapter 6. Proof of concept deployment using SSL/TLS certificates Use the following sections to configure a proof of concept Red Hat Quay deployment with SSL/TLS certificates. 6.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and a primary key file named ssl.cert and ssl.key . 6.1.1. Creating a Certificate Authority Use the following procedure to set up your own CA and use it to issue a server certificate for your domain. This allows you to secure communications with SSL/TLS using your own certificates. Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: Example openssl.cnf file [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf Confirm your created certificates and files by entering the following command: USD ls /path/to/certificates Example output rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr 6.2. Configuring SSL/TLS for standalone Red Hat Quay deployments For standalone Red Hat Quay deployments, SSL/TLS certificates must be configured by using the command-line interface and by updating your config.yaml file manually. 6.2.1. Configuring custom SSL/TLS certificates by using the command line interface SSL/TLS must be configured by using the command-line interface (CLI) and updating your config.yaml file manually. Prerequisites You have created a certificate authority and signed the certificate. 
Procedure Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory Navigate to the configuration directory by entering the following command: USD cd /path/to/configuration_directory Edit the config.yaml file and specify that you want Red Hat Quay to handle SSL/TLS: Example config.yaml file # ... SERVER_HOSTNAME: <quay-server.example.com> ... PREFERRED_URL_SCHEME: https # ... Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop <quay_container_name> Restart the registry by entering the following command: USD sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3 6.3. Testing the SSL/TLS configuration Your SSL/TLS configuration can be tested by using the command-line interface (CLI). Use the following procedure to test your SSL/TLS configuration. 6.3.1. Testing the SSL/TLS configuration using the CLI Your SSL/TLS configuration can be tested by using the command-line interface (CLI). Use the following procedure to test your SSL/TLS configuration using the CLI. Procedure Enter the following command to attempt to log in to the Red Hat Quay registry with SSL/TLS enabled: USD sudo podman login quay-server.example.com Example output Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority Because Podman does not trust self-signed certificates, you must use the --tls-verify=false option: USD sudo podman login --tls-verify=false quay-server.example.com Example output Login Succeeded! In a subsequent section, you will configure Podman to trust the root Certificate Authority. 6.3.2. Testing the SSL/TLS configuration using a browser Use the following procedure to test your SSL/TLS configuration using a browser. Procedure Navigate to your Red Hat Quay registry endpoint, for example, https://quay-server.example.com . If configured correctly, the browser warns of the potential risk: Proceed to the log in screen. The browser notifies you that the connection is not secure. For example: In the following section, you will configure Podman to trust the root Certificate Authority. 6.4. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 6.5. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority.
Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates .
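As an additional check, a brief sketch of verifying the served certificate chain directly with OpenSSL (not part of the official procedure; the hostname matches the example deployment above):

openssl s_client -connect quay-server.example.com:443 -CAfile rootCA.pem < /dev/null | grep 'Verify return code'

A result of Verify return code: 0 (ok) indicates that the certificate presented by the registry validates against the root CA created earlier.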
|
[
"openssl genrsa -out rootCA.key 2048",
"openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com",
"openssl genrsa -out ssl.key 2048",
"openssl req -new -key ssl.key -out ssl.csr",
"Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:",
"[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112",
"openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf",
"ls /path/to/certificates",
"rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr",
"cp ~/ssl.cert ~/ssl.key /path/to/configuration_directory",
"cd /path/to/configuration_directory",
"SERVER_HOSTNAME: <quay-server.example.com> PREFERRED_URL_SCHEME: https",
"cat rootCA.pem >> ssl.cert",
"sudo podman stop <quay_container_name>",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.13.3",
"sudo podman login quay-server.example.com",
"Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority",
"sudo podman login --tls-verify=false quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt",
"sudo podman login quay-server.example.com",
"Login Succeeded!",
"sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"trust list | grep quay label: quay-server.example.com",
"sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem",
"sudo update-ca-trust extract",
"trust list | grep quay"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/proof_of_concept_-_deploying_red_hat_quay/advanced-quay-poc-deployment
|
19.2. TCP Wrappers
|
19.2. TCP Wrappers Many UNIX system administrators are accustomed to using TCP wrappers to manage access to certain network services. Any network services managed by xinetd (as well as any program with built-in support for libwrap ) can use TCP wrappers to manage access. xinetd can use the /etc/hosts.allow and /etc/hosts.deny files to configure access to system services. As the names imply, hosts.allow contains a list of rules that allow clients to access the network services controlled by xinetd , and hosts.deny contains rules to deny access. The hosts.allow file takes precedence over the hosts.deny file. Permissions to grant or deny access can be based on individual IP address (or hostnames) or on a pattern of clients. Refer to the Reference Guide and hosts_access in section 5 of the man pages ( man 5 hosts_access ) for details. 19.2.1. xinetd To control access to Internet services, use xinetd , which is a secure replacement for inetd . The xinetd daemon conserves system resources, provides access control and logging, and can be used to start special-purpose servers. xinetd can be used to provide access only to particular hosts, to deny access to particular hosts, to provide access to a service at certain times, to limit the rate of incoming connections and/or the load created by connections, and more xinetd runs constantly and listens on all ports for the services it manages. When a connection request arrives for one of its managed services, xinetd starts up the appropriate server for that service. The configuration file for xinetd is /etc/xinetd.conf , but the file only contains a few defaults and an instruction to include the /etc/xinetd.d directory. To enable or disable an xinetd service, edit its configuration file in the /etc/xinetd.d directory. If the disable attribute is set to yes , the service is disabled. If the disable attribute is set to no , the service is enabled. You can edit any of the xinetd configuration files or change its enabled status using the Services Configuration Tool , ntsysv , or chkconfig . For a list of network services controlled by xinetd , review the contents of the /etc/xinetd.d directory with the command ls /etc/xinetd.d .
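For example, a minimal sketch of a TCP wrappers policy; the daemon name and network prefix are illustrative, and the service must be linked against libwrap or managed by xinetd :

# /etc/hosts.allow
vsftpd : 192.168.1.

# /etc/hosts.deny
vsftpd : ALL

Because hosts.allow is consulted first, clients in the 192.168.1.0/24 network match the allow rule and are admitted, while all other clients fall through to the deny rule.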
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/controlling_access_to_services-tcp_wrappers
|
21.13. virt-diff: Listing the Differences between Virtual Machine Files
|
21.13. virt-diff: Listing the Differences between Virtual Machine Files The virt-diff command-line tool can be used to list the differences between files in two virtual machine disk images. The output shows the changes to a virtual machine's disk images after it has been running. The command can also be used to show the difference between overlays. Note You can use virt-diff safely on live guest virtual machines, because it only needs read-only access. This tool finds the differences in file names, file sizes, checksums, extended attributes, file content and more between the running virtual machine and the selected image. Note The virt-diff command does not check the boot loader, unused space between partitions or within file systems, or "hidden" sectors. Therefore, it is recommended that you do not use this as a security or forensics tool. To install virt-diff , run one of the following commands: # yum install /usr/bin/virt-diff or # yum install libguestfs-tools-c To specify two guests, you have to use the -a or -d option for the first guest, and the -A or -D option for the second guest. For example: USD virt-diff -a old.img -A new.img You can also use names known to libvirt . For example: USD virt-diff -d oldguest -D newguest The following command options are available to use with virt-diff : Table 21.3. virt-diff options Command Description Example --help Displays a brief help entry about a particular command or about the virt-diff utility. For additional help, see the virt-diff man page. virt-diff --help -a [ file ] or --add [ file ] Adds the specified file , which should be a disk image from the first virtual machine. If the virtual machine has multiple block devices, you must supply all of them with separate -a options. The format of the disk image is auto-detected. To override this and force a particular format, use the --format option. virt-diff --add /dev/vms/original.img -A /dev/vms/new.img -a [ URI ] or --add [ URI ] Adds a remote disk. The URI format is compatible with guestfish. For more information, see Section 21.4.2, "Adding Files with guestfish" . virt-diff -a rbd://example.com[:port]/pool/newdisk -A rbd://example.com[:port]/pool/olddisk --all Same as --extra-stats --times --uids --xattrs . virt-diff --all --atime By default, virt-diff ignores changes in file access times, since those are unlikely to be interesting. Use the --atime option to show access time differences. virt-diff --atime -A [ file ] Adds the specified file or URI , which should be a disk image from the second virtual machine. virt-diff --add /dev/vms/original.img -A /dev/vms/new.img -c [ URI ] or --connect [ URI ] Connects to the given URI, if using libvirt . If omitted, then it connects to the default libvirt hypervisor. If you specify guest block devices directly ( virt-diff -a ), then libvirt is not used at all. virt-diff -c qemu:///system --csv Provides the results in a comma-separated values (CSV) format. This format can be imported easily into databases and spreadsheets. For further information, see Note . virt-diff --csv -d [ guest ] or --domain [ guest ] Adds all the disks from the specified guest virtual machine as the first guest virtual machine. Domain UUIDs can be used instead of domain names. USD virt-diff --domain 90df2f3f-8857-5ba9-2714-7d95907b1c9e -D [ guest ] Adds all the disks from the specified guest virtual machine as the second guest virtual machine. Domain UUIDs can be used instead of domain names.
virt-diff -D 90df2f3f-8857-5ba9-2714-7d95907b1cd4 --extra-stats Displays extra statistics. virt-diff --extra-stats --format or --format=[ raw | qcow2 ] The default for the -a / -A option is to auto-detect the format of the disk image. Using this forces the disk format for -a / -A options that follow on the command line. Using --format auto switches back to auto-detection for subsequent -a options (see the -a command above). virt-diff --format raw -a new.img -A old.img forces raw format (no auto-detection) for new.img and old.img, but virt-diff --format raw -a new.img --format auto -a old.img forces raw format (no auto-detection) for new.img and reverts to auto-detection for old.img . If you have untrusted raw-format guest disk images, you should use this option to specify the disk format. This avoids a possible security problem with malicious guests. -h or --human-readable Displays file sizes in human-readable format. virt-diff -h --time-days Displays time fields for changed files as days before now (negative if in the future). Note that 0 in the output means between 86,399 seconds (23 hours, 59 minutes, and 59 seconds) before now and 86,399 seconds in the future. virt-diff --time-days -v or --verbose Enables verbose messages for debugging purposes. virt-diff --verbose -V or --version Displays the virt-diff version number and exits. virt-diff -V -x Enables tracing of libguestfs API calls. virt-diff -x Note The comma-separated values (CSV) format can be difficult to parse. Therefore, it is recommended that for shell scripts, you should use csvtool and for other languages, use a CSV processing library (such as Text::CSV for Perl or Python's built-in csv library). In addition, most spreadsheets and databases can import CSV directly. For more information, including additional options, see libguestfs.org .
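As a brief illustration of the CSV workflow described in the note above (the image and file names are placeholders), the output can be written to a file and rendered in a readable column layout with csvtool:

virt-diff --csv -a old.img -A new.img > diff.csv
csvtool readable diff.csv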
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Guest_virtual_machine_disk_access_with_offline_tools-Using_virt_diff
|
Chapter 24. Limiting SCHED_OTHER task migration
|
Chapter 24. Limiting SCHED_OTHER task migration You can limit the tasks that SCHED_OTHER migrates to other CPUs using the sched_nr_migrate variable. Prerequisites You have administrator privileges. 24.1. Task migration If a SCHED_OTHER task spawns a large number of other tasks, they will all run on the same CPU. The migration task or softirq will try to balance these tasks so they can run on idle CPUs. The sched_nr_migrate option can be adjusted to specify the number of tasks that will move at a time. Because real-time tasks have a different way to migrate, they are not directly affected by this. However, when softirq moves the tasks, it locks the run queue spinlock, thus disabling interrupts. If there are a large number of tasks that need to be moved, it occurs while interrupts are disabled, so no timer events or wakeups will be allowed to happen simultaneously. This can cause severe latencies for real-time tasks when sched_nr_migrate is set to a large value. 24.2. Limiting SCHED_OTHER task migration using the sched_nr_migrate variable Increasing the sched_nr_migrate variable provides high performance from SCHED_OTHER threads that spawn many tasks at the expense of real-time latency. For low real-time task latency at the expense of SCHED_OTHER task performance, the value must be lowered. The default value is 8 . Procedure To adjust the value of the sched_nr_migrate variable, echo the value directly to /proc/sys/kernel/sched_nr_migrate : Verification View the contents of /proc/sys/kernel/sched_nr_migrate :
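To keep the setting across reboots, one possible approach is a sysctl drop-in file. This is a sketch only: it assumes the tunable is exposed as the kernel.sched_nr_migrate sysctl, matching the /proc/sys/kernel/sched_nr_migrate path used in this procedure, and the file name is arbitrary:

echo 'kernel.sched_nr_migrate = 2' > /etc/sysctl.d/99-sched-nr-migrate.conf
sysctl -p /etc/sysctl.d/99-sched-nr-migrate.conf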
|
[
"echo 2 > /proc/sys/kernel/sched_nr_migrate",
"cat > /proc/sys/kernel/sched_nr_migrate 2"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_limiting-sched_other-task-migration_optimizing-rhel9-for-real-time-for-low-latency-operation
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_devices/providing-feedback-on-red-hat-documentation_rhodf
|
Networking
|
Networking Red Hat Advanced Cluster Management for Kubernetes 2.11 Networking
|
[
"edit submarinerconfig -n <managed-cluster-ns> submariner",
"annotations: submariner.io/control-plane-sg-id: <control-plane-group-id> 1 submariner.io/subnet-id-list: <subnet-id-list> 2 submariner.io/vpc-id: <custom-vpc-id> 3 submariner.io/worker-sg-id: <worker-security-group-id> 4",
"get ManagedClusterSet <cluster-set-name> -o jsonpath=\"{.metadata.annotations['cluster\\.open-cluster-management\\.io/submariner-broker-ns']}\"",
"apiVersion: submariner.io/v1alpha1 kind: Broker metadata: name: submariner-broker 1 namespace: broker-namespace 2 spec: globalnetEnabled: true-or-false 3",
"apply -f submariner-broker.yaml",
"apiVersion: submariner.io/v1 kind: ClusterGlobalEgressIP metadata: name: cluster-egress.submariner.io spec: numberOfIPs: 8",
"tar -C /tmp/ -xf <name>.tar.xz",
"install -m744 /tmp/<version>/<name> /USDHOME/.local/bin/subctl",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: gatewayConfig: gateways: 1",
"az extension add --upgrade -s <path-to-extension>",
"az extension list",
"\"experimental\": false, \"extensionType\": \"whl\", \"name\": \"aro\", \"path\": \"<path-to-extension>\", \"preview\": true, \"version\": \"1.0.x\"",
"az feature registration create --namespace Microsoft.RedHatOpenShift --name AdminKubeconfig",
"az aro get-admin-kubeconfig -g <resource group> -n <cluster resource name>",
"export KUBECONFIG=<path-to-kubeconfig> get nodes",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: loadBalancerEnable: true",
"rosa login login <rosa-cluster-url>:6443 --username cluster-admin --password <password>",
"config view --flatten=true > rosa_kube/kubeconfig",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: loadBalancerEnable: true",
"apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: <managed-cluster-set-name>",
"apiVersion: submariner.io/v1alpha1 kind: Broker metadata: name: submariner-broker namespace: <managed-cluster-set-name>-broker labels: cluster.open-cluster-management.io/backup: submariner spec: globalnetEnabled: <true-or-false>",
"label managedclusters <managed-cluster-name> \"cluster.open-cluster-management.io/clusterset=<managed-cluster-set-name>\" --overwrite",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec:{}",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: submariner namespace: <managed-cluster-name> spec: installNamespace: submariner-operator",
"-n <managed-cluster-name> get managedclusteraddons submariner -oyaml",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds IPSecNATTPort: <NATTPort>",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds gatewayConfig: gateways: <gateways>",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds gatewayConfig: instanceType: <instance-type>",
"apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: cableDriver: vxlan credentialsSecret: name: <managed-cluster-name>-<provider>-creds",
"-n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine -n default expose deployment nginx --port=8080",
"subctl export service --namespace <service-namespace> <service-name>",
"-n default run --generator=run-pod/v1 tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080",
"subctl unexport service --namespace <service-namespace> <service-name>",
"-n <managed-cluster-namespace> delete managedclusteraddon submariner",
"-n <managed-cluster-namespace> delete submarinerconfig submariner",
"delete managedclusterset <managedclusterset>",
"get cluster <CLUSTER_NAME> grep submariner",
"delete resource <RESOURCE_NAME> cluster <CLUSTER_NAME>"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/networking/index
|
Appendix B. Keytool
|
Appendix B. Keytool B.1. Keytool Keytool is an encryption key and certificate management utility. It enables users to create and manage their own public/private key pairs and associated certificates for use in self-authentication, and also to cache public keys (in the form of certificates) belonging to other parties, for securing communication to those parties.
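For example, a short sketch of typical keytool usage; the alias, password, distinguished name, and file names are illustrative. It generates a key pair in a new keystore, exports the public certificate, and imports that certificate into a truststore used by another party:

keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 -dname "CN=server.example.com" -keystore server.jks -storepass changeit
keytool -exportcert -alias server -keystore server.jks -storepass changeit -file server.crt
keytool -importcert -alias server -file server.crt -keystore truststore.jks -storepass changeit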
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/appe-keytool
|
Chapter 11. Finding and cleaning stale subvolumes (Technology Preview)
|
Chapter 11. Finding and cleaning stale subvolumes (Technology Preview) Stale subvolumes are subvolumes that no longer have a corresponding Kubernetes reference attached. These subvolumes serve no purpose and can be deleted. You can find and delete stale subvolumes using the ODF CLI tool. Important Deleting stale subvolumes using the ODF CLI tool is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . Prerequisites Download the ODF CLI tool from the customer portal . Procedure Find the stale subvolumes by using the --stale flag with the subvolumes command: Example output: Delete the stale subvolumes: Replace <subvolumes> with a comma-separated list of subvolumes from the output of the first command. The subvolumes must be of the same filesystem and subvolumegroup. Replace <filesystem> and <subvolumegroup> with the filesystem and subvolumegroup from the output of the first command. For example: Example output:
|
[
"odf subvolume ls --stale",
"Filesystem Subvolume Subvolumegroup State ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale",
"odf subvolume delete <subvolumes> <filesystem> <subvolumegroup>",
"odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi",
"Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/finding-and-cleaning-subvolumes_rhodf
|
Chapter 2. Configuring an IBM Cloud account
|
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud(R) account. Important IBM Power Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. Prerequisites You have an IBM Cloud(R) account with a subscription. You cannot install OpenShift Container Platform on a free or on a trial IBM Cloud(R) account. 2.2. Quotas and limits on IBM Power Virtual Server The OpenShift Container Platform cluster uses several IBM Cloud(R) and IBM Power(R) Virtual Server components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud(R) account. For a comprehensive list of the default IBM Cloud(R) quotas and service limits, see the IBM Cloud(R) documentation for Quotas and service limits . Virtual Private Cloud Each OpenShift Container Platform cluster creates its own Virtual Private Cloud (VPC). The default quota of VPCs per region is 10. If you have 10 VPCs created, you will need to increase your quota before attempting an installation. Application load balancer By default, each cluster creates two application load balancers (ALBs): Internal load balancer for the control plane API server External load balancer for the control plane API server You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Power(R) Virtual Server. Cloud connections There is a limit of two cloud connections per IBM Power(R) Virtual Server instance. It is recommended that you have only one cloud connection in your IBM Power(R) Virtual Server instance to serve your cluster. Note Cloud Connections are no longer supported in dal10 . A transit gateway is used instead. Dynamic Host Configuration Protocol Service There is a limit of one Dynamic Host Configuration Protocol (DHCP) service per IBM Power(R) Virtual Server instance. Networking Due to networking limitations, there is a restriction of one OpenShift cluster installed through IPI per zone per account. This is not configurable. Virtual Server Instances By default, a cluster creates server instances with the following resources : 0.5 CPUs 32 GB RAM System Type: s922 Processor Type: uncapped , shared Storage Tier: Tier-3 The following nodes are created: One bootstrap machine, which is removed after the installation is complete Three control plane nodes Three compute nodes For more information, see Creating a Power Systems Virtual Server in the IBM Cloud(R) documentation. 2.3. 
Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud(R) Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud(R) DNS Services (DNS Services). 2.4. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud(R) Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Log in to IBM Cloud(R) by using the CLI: USD ibmcloud login Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard- 1 1 At a minimum, you require a Standard plan for CIS to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_CRN> 1 1 The instance CRN (Cloud Resource Name). For example: ibmcloud cis instance-set crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460d8c2e4bbfbd893c2c:c09233ac-48a5-4ccb-a051-d1cfb3fc7eb5:: Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud(R) documentation . 2.5. IBM Cloud IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud(R) account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud(R) service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud(R) IAM overview, see the IBM Cloud(R) documentation . 2.5.1. Pre-requisite permissions Table 2.1. Pre-requisite permissions Role Access Viewer, Operator, Editor, Administrator, Reader, Writer, Manager Internet Services service in <resource_group> resource group Viewer, Operator, Editor, Administrator, User API key creator, Service ID creator IAM Identity Service service Viewer, Operator, Administrator, Editor, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service in <resource_group> resource group Viewer Resource Group: Access to view the resource group itself. The resource type should equal Resource group , with a value of <your_resource_group_name>. 2.5.2. Cluster-creation permissions Table 2.2. 
Cluster-creation permissions Role Access Viewer <resource_group> (Resource Group Created for Your Team) Viewer, Operator, Editor, Reader, Writer, Manager All service in Default resource group Viewer, Reader Internet Services service Viewer, Operator, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Editor Cloud Object Storage service Viewer Default resource group: The resource type should equal Resource group , with a value of Default . If your account administrator changed your account's default resource group to something other than Default, use that value instead. Viewer, Operator, Editor, Reader, Manager IBM Power(R) Virtual Server service in <resource_group> resource group Viewer, Operator, Editor, Reader, Writer, Manager, Administrator Internet Services service in <resource_group> resource group: CIS functional scope string equals reliability Viewer, Operator, Editor Direct Link service Viewer, Operator, Editor, Administrator, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service <resource_group> resource group 2.5.3. Access policy assignment In IBM Cloud(R) IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.5.4. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud(R) account. Prerequisites You have assigned the required access policies to your IBM Cloud(R) account. You have attached you IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud(R) API keys, see Understanding API keys . 2.6. Supported IBM Power Virtual Server regions and zones You can deploy an OpenShift Container Platform cluster to the following regions: dal (Dallas, USA) dal10 dal12 us-east (Washington DC, USA) us-east eu-de (Frankfurt, Germany) eu-de-1 eu-de-2 lon (London, UK) lon04 lon06 osa (Osaka, Japan) osa21 sao (Sao Paulo, Brazil) sao01 syd (Sydney, Australia) syd04 tok (Tokyo, Japan) tok04 tor (Toronto, Canada) tor01 You might optionally specify the IBM Cloud(R) region in which the installer will create any VPC components. Supported regions in IBM Cloud(R) are: us-south eu-de eu-gb jp-osa au-syd br-sao ca-tor jp-tok 2.7. steps Creating an IBM Power(R) Virtual Server workspace
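For reference, a brief sketch of creating a service ID and an associated API key with the IBM Cloud(R) CLI, as discussed in the API key section above; the names and descriptions are illustrative:

ibmcloud iam service-id-create ocp-installer -d "Service ID used to install OpenShift Container Platform"
ibmcloud iam service-api-key-create installer-key ocp-installer -d "API key for the OpenShift installer"

To create a user API key instead, use ibmcloud iam api-key-create <key_name>.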
|
[
"ibmcloud plugin install cis",
"ibmcloud login",
"ibmcloud cis instance-create <instance_name> standard-next 1",
"ibmcloud cis instance-set <instance_CRN> 1",
"ibmcloud cis domain-add <domain_name> 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power_virtual_server/installing-ibm-cloud-account-power-vs
|
9.2.2. Finding Packages with Filters
|
9.2.2. Finding Packages with Filters Once the software sources have been updated, it is often beneficial to apply some filters so that PackageKit retrieves the results of our Find queries faster. This is especially helpful when performing many package searches. Four of the filters in the Filters drop-down menu are used to split results by matching or not matching a single criterion. By default when PackageKit starts, these filters are all unapplied ( No filter ), but once you do filter by one of them, that filter remains set until you either change it or close PackageKit. Because you are usually searching for available packages that are not installed on the system, click Filters Installed and select the Only available radio button. Figure 9.5. Filtering out already-installed packages Also, unless you require development files such as C header files, click Filters Development and select the Only end user files radio button. This filters out all of the <package_name> -devel packages we are not interested in. Figure 9.6. Filtering out development packages from the list of Find results The two remaining filters with submenus are: Graphical Narrows the search to either applications which provide a GUI interface ( Only graphical ) or those that do not. This filter is useful when browsing for GUI applications that perform a specific function. Free Search for packages which are considered to be free software. See the Fedora Licensing List for details on approved licenses. The remaining filters can be enabled by selecting the check boxes next to them: Hide subpackages Checking the Hide subpackages check box filters out generally-uninteresting packages that are typically only dependencies of other packages that we want. For example, checking Hide subpackages and searching for <package> would cause the following related packages to be filtered out of the Find results (if they exist): <package> -devel <package> -libs <package> -libs-devel <package> -debuginfo Only newest packages Checking Only newest packages filters out all older versions of the same package from the list of results, which is generally what we want. Note that this filter is often combined with the Only available filter to search for the latest available versions of new (not installed) packages. Only native packages Checking the Only native packages box on a multilib system causes PackageKit to omit listing results for packages compiled for the architecture that runs in compatibility mode . For example, enabling this filter on a 64-bit system with an AMD64 CPU would cause all packages built for the 32-bit x86 CPU architecture not to be shown in the list of results, even though those packages are able to run on an AMD64 machine. Packages which are architecture-agnostic (i.e. noarch packages such as crontabs-1.10-32.1.el6.noarch.rpm ) are never filtered out by checking Only native packages . This filter has no effect on non-multilib systems, such as x86 machines.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-finding_packages_with_filters
|
Chapter 1. Support policy for Cryostat
|
Chapter 1. Support policy for Cryostat Red Hat supports a major version of Cryostat for a minimum of 6 months. Red Hat bases this figure on the time that the product gets released on the Red Hat Customer Portal. You can install and deploy Cryostat on Red Hat OpenShift Container Platform 4.11 or a later version that runs on an x86_64 or ARM64 architecture. Additional resources For more information about the Cryostat life cycle policy, see Red Hat build of Cryostat on the Red Hat OpenShift Container Platform Life Cycle Policy web page.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.4/cryostat-support-policy_cryostat
|
Chapter 8. neutron
|
Chapter 8. neutron The following chapter contains information about the configuration options in the neutron service. 8.1. dhcp_agent.ini This section contains options for the /etc/neutron/dhcp_agent.ini file. 8.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/dhcp_agent.ini file. . Configuration option = Default value Type Description bulk_reload_interval = 0 integer value Time to sleep between reloading the DHCP allocations. This will only be invoked if the value is not 0. If a network has N updates in X seconds then we will reload once with the port changes in the X seconds and not N times. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. dhcp_broadcast_reply = False boolean value Use broadcast in DHCP replies. dhcp_confs = USDstate_path/dhcp string value Location to store DHCP server config files. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq string value The driver used to manage the DHCP server. dhcp_rebinding_time = 0 integer value DHCP rebinding time T2 (in seconds). If set to 0, it will default to 7/8 of the lease time. dhcp_renewal_time = 0 integer value DHCP renewal time T1 (in seconds). If set to 0, it will default to half of the lease time. dnsmasq_base_log_dir = None string value Base log dir for dnsmasq logging. The log contains DHCP and DNS log information and is useful for debugging issues with either DHCP or DNS. If this section is null, disable dnsmasq log. `dnsmasq_config_file = ` string value Override the default dnsmasq settings with this file. dnsmasq_dns_servers = [] list value Comma-separated list of the DNS servers which will be used as forwarders. dnsmasq_enable_addr6_list = False boolean value Enable dhcp-host entry with list of addresses when port has multiple IPv6 addresses in the same subnet. dnsmasq_lease_max = 16777216 integer value Limit number of leases to prevent a denial-of-service. dnsmasq_local_resolv = False boolean value Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. Effectively removes the --no-resolv option from the dnsmasq process arguments. Adding custom DNS resolvers to the dnsmasq_dns_servers option disables this feature. enable_isolated_metadata = False boolean value The DHCP server can assist with providing metadata support on isolated networks. Setting this value to True will cause the DHCP server to append specific host routes to the DHCP request. The metadata service will only be activated when the subnet does not contain any router port. The guest instance must be configured to request host routes via DHCP (Option 121). This option doesn't have any effect when force_metadata is set to True. 
enable_metadata_network = False boolean value Allows for serving metadata requests coming from a dedicated metadata access network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected to a Neutron router from which the VMs send metadata:1 request. In this case DHCP Option 121 will not be injected in VMs, as they will be able to reach 169.254.169.254 through a router. This option requires enable_isolated_metadata = True. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. force_metadata = False boolean value In some cases the Neutron router is not present to provide the metadata IP but the DHCP server can be used to provide this info. Setting this value will force the DHCP server to append specific host routes to the DHCP request. If this option is set, then the metadata service will be activated for all the networks. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. 
Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". num_sync_threads = 4 integer value Number of threads to use during sync process. Should not exceed connection pool size configured on server. ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. resync_interval = 5 integer value The DHCP agent will resync its state with Neutron to recover from any transient notification or RPC errors. The interval is maximum number of seconds between attempts. The resync can be done more often based on the events triggered. resync_throttle = 1 integer value Throttle the number of resync state events between the local DHCP state and Neutron to only once per resync_throttle seconds. The value of throttle introduces a minimum interval between resync state events. Otherwise the resync may end up in a busy-loop. The value must be less than resync_interval. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.1.2. 
agent The following table outlines the options available under the [agent] group in the /etc/neutron/dhcp_agent.ini file. Table 8.1. agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.1.3. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/dhcp_agent.ini file. Table 8.2. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used by ovsdb-client when monitoring and used for the all ovsdb commands when native ovsdb_interface is enabled ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.2. l3_agent.ini This section contains options for the /etc/neutron/l3_agent.ini file. 8.2.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/l3_agent.ini file. . Configuration option = Default value Type Description agent_mode = legacy string value The working mode for the agent. Allowed modes are: legacy - this preserves the existing behavior where the L3 agent is deployed on a centralized networking node to provide L3 services like DNAT, and SNAT. Use this mode if you do not want to adopt DVR. dvr - this mode enables DVR functionality and must be used for an L3 agent that runs on a compute host. dvr_snat - this enables centralized SNAT support in conjunction with DVR. This mode must be used for an L3 agent running on a centralized node (or in single-host deployments, e.g. devstack). dvr_no_external - this mode enables only East/West DVR routing functionality for a L3 agent that runs on a compute host, the North/South functionality such as DNAT and SNAT will be provided by the centralized network node that is running in dvr_snat mode. This mode should be used when there is no external network connectivity on the compute host. 
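Because agent_mode is the option that most strongly shapes the role of an L3 agent, a short example may help; the fragment below is only a sketch of the [DEFAULT] setting for the two common DVR placements described above, and which value applies depends entirely on where the agent runs.
[DEFAULT]
# On a compute host participating in DVR:
agent_mode = dvr

# On the centralized network node that also provides SNAT for DVR routers,
# the same option would instead be set to:
# agent_mode = dvr_snat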
api_workers = None integer value Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best performance, capped by potential RAM usage. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. enable_metadata_proxy = True boolean value Allow running metadata proxy. external_ingress_mark = 0x2 string value Iptables mangle mark used to mark ingress from external network. This mark will be masked with 0xffff so that only the lower 16 bits will be used. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. ha_confs_path = USDstate_path/ha_confs string value Location to store keepalived config files ha_keepalived_state_change_server_threads = <based on operating system> integer value Number of concurrent threads for keepalived server connection requests. More threads create a higher CPU load on the agent node. ha_vrrp_advert_int = 2 integer value The advertisement interval in seconds ha_vrrp_auth_password = None string value VRRP authentication password ha_vrrp_auth_type = PASS string value VRRP authentication type ha_vrrp_garp_master_delay = 5 integer value The delay for second set of gratuitous ARPs after lower priority advert received when MASTER. NOTE: this config option will be available only in OSP13 and OSP16. Future releases will implement a template form to provide the "keepalived" configuration. ha_vrrp_garp_master_repeat = 5 integer value The number of gratuitous ARP messages to send at a time after transition to MASTER. NOTE: this config option will be available only in OSP13 and OSP16. Future releases will implement a template form to provide the "keepalived" configuration. ha_vrrp_health_check_interval = 0 integer value The VRRP health check interval in seconds. Values > 0 enable VRRP health checks. Setting it to 0 disables VRRP health checks. Recommended value is 5. This will cause pings to be sent to the gateway IP address(es) - requires ICMP_ECHO_REQUEST to be enabled on the gateway. If gateway fails, all routers will be reported as master, and master election will be repeated in round-robin fashion, until one of the router restore the gateway connection. handle_internal_only_routers = True boolean value Indicates that this L3 agent should also handle routers that do not have an external network gateway configured. This option should be True only for a single agent in a Neutron deployment, and may be False for all agents if all routers must have an external network gateway. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. 
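The VRRP options above are typically set together when L3 high availability is used. The following fragment is an illustrative sketch only: the password is a placeholder, and the health check interval uses the recommended value of 5 mentioned in the ha_vrrp_health_check_interval description.
[DEFAULT]
# VRRP keepalive and authentication between HA router instances.
ha_vrrp_advert_int = 2
ha_vrrp_auth_type = PASS
ha_vrrp_auth_password = example-secret
# Enable gateway health checks (0 disables them; 5 is the recommended value).
ha_vrrp_health_check_interval = 5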
interface_driver = None string value The driver used to manage the virtual interface. `ipv6_gateway = ` string value With IPv6, the network used for the external gateway does not need to have an associated subnet, since the automatically assigned link-local address (LLA) can be used. However, an IPv6 gateway address is needed for use as the -hop for the default route. If no IPv6 gateway address is configured here, (and only then) the neutron router will be configured to get its default route from router advertisements (RAs) from the upstream router; in which case the upstream router must also be configured to send these RAs. The ipv6_gateway, when configured, should be the LLA of the interface on the upstream router. If a -hop using a global unique address (GUA) is desired, it needs to be done via a subnet allocated to the network and not through this parameter. keepalived_use_no_track = True boolean value If keepalived without support for "no_track" option is used, this should be set to False. Support for this option was introduced in keepalived 2.x log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. 
Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_rtr_adv_interval = 100 integer value MaxRtrAdvInterval setting for radvd.conf metadata_access_mark = 0x1 string value Iptables mangle mark used to mark metadata valid requests. This mark will be masked with 0xffff so that only the lower 16 bits will be used. metadata_port = 9697 port value TCP Port used by Neutron metadata namespace proxy. min_rtr_adv_interval = 30 integer value MinRtrAdvInterval setting for radvd.conf ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. pd_confs = USDstate_path/pd string value Location to store IPv6 PD files. periodic_fuzzy_delay = 5 integer value Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 40 integer value Seconds between running periodic tasks. prefix_delegation_driver = dibbler string value Driver used for ipv6 prefix delegation. This needs to be an entry point defined in the neutron.agent.linux.pd_drivers namespace. See setup.cfg for entry points included with the neutron source. publish_errors = False boolean value Enables or disables publication of error events. ra_confs = USDstate_path/ra string value Location to store IPv6 RA config files `radvd_user = ` string value The username passed to radvd, used to drop root privileges and change user ID to username and group ID to the primary group of username. If no user specified (by default), the user executing the L3 agent will be passed. If "root" specified, because radvd is spawned as root, no "username" parameter will be passed. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. rpc_state_report_workers = 1 integer value Number of RPC worker processes dedicated to state reports queue. rpc_workers = None integer value Number of RPC worker processes for service. If not specified, the default is equal to half the number of API workers. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. 
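The log rotation options listed above (log_rotation_type, log_rotate_interval, max_logfile_size_mb, max_logfile_count) recur in every agent configuration file in this chapter. As a hedged example, a [DEFAULT] fragment that rotates the agent log by size might look like the following; the limits are arbitrary illustrations, not recommendations.
[DEFAULT]
# Rotate the log once it reaches 100 MB and keep at most 10 rotated files.
log_rotation_type = size
max_logfile_size_mb = 100
max_logfile_count = 10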
use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vendor_pen = 8888 string value A decimal value as Vendor's Registered Private Enterprise Number as required by RFC3315 DUID-EN. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.2.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/l3_agent.ini file. Table 8.3. agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node extensions = [] list value Extensions list to use log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.2.3. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/l3_agent.ini file. Table 8.4. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.2.4. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/l3_agent.ini file. Table 8.5. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used by ovsdb-client when monitoring and used for the all ovsdb commands when native ovsdb_interface is enabled ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. 
Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.3. linuxbridge_agent.ini This section contains options for the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. 8.3.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. 
Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.3.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.6. agent Configuration option = Default value Type Description dscp = None integer value The DSCP value to use for outer headers during tunnel encapsulation. dscp_inherit = False boolean value If set to True, the DSCP value of tunnel interfaces is overwritten and set to inherit. 
The DSCP value of the inner header is then copied to the outer header. extensions = [] list value Extensions list to use polling_interval = 2 integer value The number of seconds the agent will wait between polling for local device changes. quitting_rpc_timeout = 10 integer value Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0, rpc timeout won't be changed 8.3.3. linux_bridge The following table outlines the options available under the [linux_bridge] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.7. linux_bridge Configuration option = Default value Type Description bridge_mappings = [] list value List of <physical_network>:<physical_bridge> physical_interface_mappings = [] list value Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical network names to the agent's node-specific physical network interfaces to be used for flat and VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. 8.3.4. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.8. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.3.5. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.9. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.3.6. vxlan The following table outlines the options available under the [vxlan] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.10. vxlan Configuration option = Default value Type Description arp_responder = False boolean value Enable local ARP responder which provides local responses instead of performing ARP broadcast into the overlay. Enabling local ARP responder is not fully compatible with the allowed-address-pairs extension. enable_vxlan = True boolean value Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin using linuxbridge mechanism driver l2_population = False boolean value Extension to use alongside ml2 plugin's l2population mechanism driver. It enables the plugin to populate VXLAN forwarding table. local_ip = None IP address value IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the overlay_ip_version option in the ML2 plug-in configuration file on the neutron server node(s). 
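Pulling together the [linux_bridge], [vxlan], and [securitygroup] options described above, a sketch of /etc/neutron/plugins/ml2/linuxbridge_agent.ini might look like the following. The interface name, local IP address, and firewall driver name are assumptions for illustration only and must be adapted to the deployment.
[linux_bridge]
# Map the flat/VLAN physical network name to this node's interface.
physical_interface_mappings = physnet1:eth1

[vxlan]
enable_vxlan = True
# Overlay (tunnel) endpoint address on one of the host's interfaces.
local_ip = 192.0.2.21
l2_population = True

[securitygroup]
enable_security_group = True
# A commonly used driver name; verify the correct value for your deployment.
firewall_driver = iptables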
multicast_ranges = [] list value Optional comma-separated list of <multicast address>:<vni_min>:<vni_max> triples describing how to assign a multicast address to VXLAN according to its VNI ID. tos = None integer value TOS for vxlan interface protocol packets. This option is deprecated in favor of the dscp option in the AGENT section and will be removed in a future release. To convert the TOS value to DSCP, divide by 4. ttl = None integer value TTL for vxlan interface protocol packets. udp_dstport = None port value The UDP port used for VXLAN communication. By default, the Linux kernel doesn't use the IANA assigned standard value, so if you want to use it, this option must be set to 4789. It is not set by default because of backward compatibility. udp_srcport_max = 0 port value The maximum of the UDP source port range used for VXLAN communication. udp_srcport_min = 0 port value The minimum of the UDP source port range used for VXLAN communication. vxlan_group = 224.0.0.1 string value Multicast group(s) for vxlan interface. A range of group addresses may be specified by using CIDR notation. Specifying a range allows different VNIs to use different group addresses, reducing or eliminating spurious broadcast traffic to the tunnel endpoints. To reserve a unique group for each possible (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on all the agents. 8.4. metadata_agent.ini This section contains options for the /etc/neutron/metadata_agent.ini file. 8.4.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/metadata_agent.ini file. . Configuration option = Default value Type Description auth_ca_cert = None string value Certificate Authority public key (CA cert) file for ssl debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_backlog = 4096 integer value Number of backlog requests to configure the metadata server socket with `metadata_proxy_group = ` string value Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). `metadata_proxy_shared_secret = ` string value When proxying metadata requests, Neutron signs the Instance-ID header with a shared secret to prevent spoofing. You may select any string for a secret, but it must match here and in the configuration used by the Nova Metadata Server. NOTE: Nova uses the same config key, but in [neutron] section. metadata_proxy_socket = USDstate_path/metadata_proxy string value Location for Metadata Proxy UNIX domain socket. metadata_proxy_socket_mode = deduce string value Metadata Proxy UNIX domain socket mode, 4 values allowed: deduce : deduce mode from metadata_proxy_user/group values, user : set metadata proxy socket mode to 0o644, to use when metadata_proxy_user is agent effective user or root, group : set metadata proxy socket mode to 0o664, to use when metadata_proxy_group is agent effective group or root, all : set metadata proxy socket mode to 0o666, to use otherwise. `metadata_proxy_user = ` string value User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). 
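Because metadata_proxy_shared_secret must match the value configured for the Nova metadata server, it is usually the first option set in /etc/neutron/metadata_agent.ini. The sketch below is illustrative only: the secret is a placeholder and the host name is an assumption (nova_metadata_host is described later in this table).
[DEFAULT]
# Must match the shared secret configured in the [neutron] section on the Nova side.
metadata_proxy_shared_secret = example-shared-secret
# Where the agent forwards metadata requests.
nova_metadata_host = controller.example.com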
metadata_workers = <based on operating system> integer value Number of separate worker processes for metadata server (defaults to half of the number of CPUs) `nova_client_cert = ` string value Client certificate for nova metadata api server. `nova_client_priv_key = ` string value Private key of client certificate. nova_metadata_host = 127.0.0.1 host address value IP address or DNS name of Nova metadata server. nova_metadata_insecure = False boolean value Allow to perform insecure SSL (https) requests to nova metadata nova_metadata_port = 8775 port value TCP Port used by Nova metadata server. nova_metadata_protocol = http string value Protocol to access nova metadata, http or https publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.4.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/metadata_agent.ini file. Table 8.11. agent Configuration option = Default value Type Description log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.4.3. cache The following table outlines the options available under the [cache] group in the /etc/neutron/metadata_agent.ini file. Table 8.12. cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. 
For environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enabled = False boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. memcache_dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. tls_enabled = False boolean value Global toggle for TLS usage when communicating with the caching servers. tls_keyfile = None string value Path to a single file containing the client's private key. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. 8.5. metering_agent.ini This section contains options for the /etc/neutron/metering_agent.ini file. 8.5.1.
DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/metering_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver string value Metering driver fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. 
Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". measure_interval = 30 integer value Interval between two metering measures ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. report_interval = 300 integer value Interval between two metering reports rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.5.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/metering_agent.ini file. Table 8.13. agent Configuration option = Default value Type Description log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.5.3. 
ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/metering_agent.ini file. Table 8.14. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used by ovsdb-client when monitoring and used for the all ovsdb commands when native ovsdb_interface is enabled ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.6. ml2_conf.ini This section contains options for the /etc/neutron/plugins/ml2/ml2_conf.ini file. 8.6.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. 
Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. 
This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.6.2. ml2 The following table outlines the options available under the [ml2] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.15. ml2 Configuration option = Default value Type Description extension_drivers = [] list value An ordered list of extension driver entrypoints to be loaded from the neutron.ml2.extension_drivers namespace. For example: extension_drivers = port_security,qos external_network_type = None string value Default network type for external networks when no provider attributes are specified. By default it is None, which means that if provider attributes are not specified while creating external networks then they will have the same type as tenant networks. Allowed values for external_network_type config option depend on the network type values configured in type_drivers config option. mechanism_drivers = [] list value An ordered list of networking mechanism driver entrypoints to be loaded from the neutron.ml2.mechanism_drivers namespace. overlay_ip_version = 4 integer value IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6. path_mtu = 0 integer value Maximum size of an IP packet (MTU) that can traverse the underlying physical network infrastructure without fragmentation when using an overlay/tunnel protocol. This option allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. physical_network_mtus = [] list value A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val>. This mapping allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. tenant_network_types = ['local'] list value Ordered list of network_types to allocate as tenant networks. The default value local is useful for single-box testing but provides no connectivity between hosts. type_drivers = ['local', 'flat', 'vlan', 'gre', 'vxlan', 'geneve'] list value List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace. 8.6.3. ml2_type_flat The following table outlines the options available under the [ml2_type_flat] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.16. ml2_type_flat Configuration option = Default value Type Description flat_networks = * list value List of physical_network names with which flat networks can be created. Use default * to allow flat networks with arbitrary physical_network names. Use an empty list to disable flat networks. 8.6.4. ml2_type_geneve The following table outlines the options available under the [ml2_type_geneve] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.17. 
ml2_type_geneve Configuration option = Default value Type Description max_header_size = 30 integer value Geneve encapsulation header size is dynamic, this value is used to calculate the maximum MTU for the driver. This is the sum of the sizes of the outer ETH + IP + UDP + GENEVE header sizes. The default size for this field is 50, which is the size of the Geneve header without any additional option headers. vni_ranges = [] list value Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of Geneve VNI IDs that are available for tenant network allocation 8.6.5. ml2_type_gre The following table outlines the options available under the [ml2_type_gre] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.18. ml2_type_gre Configuration option = Default value Type Description tunnel_id_ranges = [] list value Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation 8.6.6. ml2_type_vlan The following table outlines the options available under the [ml2_type_vlan] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.19. ml2_type_vlan Configuration option = Default value Type Description network_vlan_ranges = [] list value List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks. 8.6.7. ml2_type_vxlan The following table outlines the options available under the [ml2_type_vxlan] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.20. ml2_type_vxlan Configuration option = Default value Type Description vni_ranges = [] list value Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation vxlan_group = None string value Multicast group for VXLAN. When configured, will enable sending all broadcast traffic to this multicast group. When left unconfigured, will disable multicast VXLAN mode. 8.6.8. ovs_driver The following table outlines the options available under the [ovs_driver] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.21. ovs_driver Configuration option = Default value Type Description vnic_type_blacklist = [] list value Comma-separated list of VNIC types for which support is administratively prohibited by the mechanism driver. Please note that the supported vnic_types depend on your network interface card, on the kernel version of your operating system, and on other factors, like OVS version. In case of ovs mechanism driver the valid vnic types are normal and direct. Note that direct is supported only from kernel 4.8, and from ovs 2.8.0. Bind DIRECT (SR-IOV) port allows to offload the OVS flows using tc to the SR-IOV NIC. This allows to support hardware offload via tc and that allows us to manage the VF by OpenFlow control plane using representor net-device. 8.6.9. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.22. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. 
It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.6.10. sriov_driver The following table outlines the options available under the [sriov_driver] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.23. sriov_driver Configuration option = Default value Type Description vnic_type_blacklist = [] list value Comma-separated list of VNIC types for which support is administratively prohibited by the mechanism driver. Please note that the supported vnic_types depend on your network interface card, on the kernel version of your operating system, and on other factors. In case of sriov mechanism driver the valid VNIC types are direct, macvtap and direct-physical. 8.7. neutron.conf This section contains options for the /etc/neutron/neutron.conf file. 8.7.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/neutron.conf file. . Configuration option = Default value Type Description agent_down_time = 75 integer value Seconds to regard the agent is down; should be at least twice report_interval, to be sure the agent is down for good. allow_automatic_dhcp_failover = True boolean value Automatically remove networks from offline DHCP agents. allow_automatic_l3agent_failover = False boolean value Automatically reschedule routers from offline L3 agents to online L3 agents. allow_bulk = True boolean value Allow the usage of the bulk API allow_overlapping_ips = False boolean value Allow overlapping IP support in Neutron. Attention: the following parameter MUST be set to False if Neutron is being used in conjunction with Nova security groups. allowed_conntrack_helpers = [{'amanda': 'tcp'}, {'ftp': 'tcp'}, {'h323': 'udp'}, {'h323': 'tcp'}, {'irc': 'tcp'}, {'netbios-ns': 'udp'}, {'pptp': 'tcp'}, {'sane': 'tcp'}, {'sip': 'udp'}, {'sip': 'tcp'}, {'snmp': 'udp'}, {'tftp': 'udp'}] list value Defines the allowed conntrack helpers, and conntack helper module protocol constraints. `api_extensions_path = ` string value The path for API extensions. Note that this can be a colon-separated list of paths. For example: api_extensions_path = extensions:/path/to/more/exts:/even/more/exts. The path of neutron.extensions is appended to this, so if your extensions are in there you don't need to specify them here. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service api_workers = None integer value Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best performance, capped by potential RAM usage. auth_strategy = keystone string value The type of authentication to use backlog = 4096 integer value Number of backlog requests to configure the socket with base_mac = fa:16:3e:00:00:00 string value The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. bind_host = 0.0.0.0 host address value The host IP to bind to. bind_port = 9696 port value The port to bind to client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. 
A value of 0 means wait forever. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = neutron string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. core_plugin = None string value The core plugin Neutron will use debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_availability_zones = [] list value Default value of availability zone hints. The availability zone aware schedulers use this when the resources availability_zone_hints is empty. Multiple availability zones can be specified by a comma separated string. This value can be empty. In this case, even if availability_zone_hints for a resource is empty, availability zone is considered for high availability while scheduling the resource. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. dhcp_agent_notification = True boolean value Allow sending resource operation notification to DHCP agent dhcp_agents_per_network = 1 integer value Number of DHCP agents scheduled to host a tenant network. If this number is greater than 1, the scheduler automatically assigns multiple DHCP agents for a given tenant network, providing high availability for DHCP service. dhcp_lease_duration = 86400 integer value DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite lease times. dhcp_load_type = networks string value Representing the resource type whose load is being reported by the agent. This can be "networks", "subnets" or "ports". When specified (Default is networks), the server will extract particular load sent as part of its agent configuration object from the agent report state, which is the number of resources being consumed, at every report_interval.dhcp_load_type can be used in combination with network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler When the network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured to represent the choice for the resource being balanced. Example: dhcp_load_type=networks dns_domain = openstacklocal string value Domain to use for building the hostnames dvr_base_mac = fa:16:3f:00:00:00 string value The base mac address used for unique DVR instances by Neutron. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. The dvr_base_mac must be different from base_mac to avoid mixing them up with MAC's allocated for tenant ports. A 4 octet example would be dvr_base_mac = fa:16:3f:4f:00:00. The default is 3 octet enable_dvr = True boolean value Determine if setup is configured for DVR. If False, DVR API extension will be disabled. 
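For reference, the scheduling and DVR options described above can be combined as in the following sketch; the values are illustrative rather than recommended, core_plugin = ml2 assumes an ML2-based deployment, and the dvr_base_mac value is the 4-octet example given in its description.
[DEFAULT]
# Illustrative values only; tune for your environment.
core_plugin = ml2
# Schedule two DHCP agents per tenant network for high availability.
dhcp_agents_per_network = 2
dhcp_lease_duration = 86400
dhcp_load_type = networks
enable_dvr = True
# dvr_base_mac must differ from base_mac; this 4-octet form is the documented example.
base_mac = fa:16:3e:00:00:00
dvr_base_mac = fa:16:3f:4f:00:00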
enable_new_agents = True boolean value Agent starts with admin_state_up=False when enable_new_agents=False. In the case, user's resources will not be scheduled automatically to the agent until admin changes admin_state_up to True. enable_services_on_agents_with_admin_state_down = False boolean value Enable services on an agent with admin_state_up False. If this option is False, when admin_state_up of an agent is turned False, services on it will be disabled. Agents with admin_state_up False are not selected for automatic scheduling regardless of this option. But manual scheduling to such agents is available if this option is True. enable_snat_by_default = True boolean value Define the default value of enable_snat if not provided in external_gateway_info. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. external_dns_driver = None string value Driver for external DNS integration. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. filter_validation = True boolean value If True, then allow plugins to decide whether to perform validations on filter parameters. Filter validation is enabled if this config is turned on and it is supported by all plugins global_physnet_mtu = 1500 integer value MTU of the underlying physical network. Neutron uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, neutron uses this value without modification. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol overhead from this value. Defaults to 1500, the standard value for Ethernet. host = <based on operating system> host address value Hostname to be used by the Neutron server, agents and services running on this machine. All the agents and services running on this machine must use the same host value. host_dvr_for_dhcp = True boolean value Flag to determine if hosting a DVR local router to the DHCP agent is desired. If False, any L3 function supported by the DHCP agent instance will not be possible, for instance: DNS. http_retries = 3 integer value Number of times client connections (nova, ironic) should be retried on a failed HTTP call. 0 (zero) meansconnection is attempted only once (not retried). Setting to any positive integer means that on failure the connection is retried that many times. For example, setting to 3 means total attempts to connect will be 4. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. ipam_driver = internal string value Neutron IPAM (IP address management) driver to use. By default, the reference implementation of the Neutron IPAM driver is used. ipv6_pd_enabled = False boolean value Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable environment. Users making subnet creation requests for IPv6 subnets without providing a CIDR or subnetpool ID will be given a CIDR via the Prefix Delegation mechanism. Note that enabling PD will override the behavior of the default IPv6 subnetpool. l3_ha = False boolean value Enable HA mode for virtual routers. l3_ha_net_cidr = 169.254.192.0/18 string value Subnet used for the l3 HA admin network. 
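As a sketch of the L3 high-availability options, a hypothetical deployment enabling HA virtual routers might set the following; only the CIDR is a documented default, and the interface_driver value assumes an Open vSwitch environment.
[DEFAULT]
# Hypothetical L3 HA settings; the CIDR is the documented default.
l3_ha = True
l3_ha_net_cidr = 169.254.192.0/18
allow_automatic_l3agent_failover = True
# The openvswitch driver name is an assumption for an OVS-based deployment.
interface_driver = openvswitch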
`l3_ha_network_physical_name = ` string value The physical network name with which the HA network can be created. `l3_ha_network_type = ` string value The network type to use when creating the HA network for an HA router. By default or if empty, the first tenant_network_types is used. This is helpful when the VRRP traffic should use a specific network which is not the default one. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_allowed_address_pair = 10 integer value Maximum number of allowed address pairs max_dns_nameservers = 5 integer value Maximum number of DNS nameservers per subnet max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_l3_agents_per_router = 3 integer value Maximum number of L3 agents on which an HA router will be scheduled. If it is set to 0, then the router will be scheduled on every agent.
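To make the rotation options above concrete, a minimal sketch of size-based log rotation (example values, not recommendations) could look like this:
[DEFAULT]
# Example size-based rotation; these options are ignored if log_config_append is set.
debug = False
use_stderr = False
log_rotation_type = size
max_logfile_size_mb = 200
max_logfile_count = 30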
max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_routes = 30 integer value Maximum number of routes per router max_subnet_host_routes = 20 integer value Maximum number of host routes per subnet `metadata_proxy_group = ` string value Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). metadata_proxy_socket = $state_path/metadata_proxy string value Location for Metadata Proxy UNIX domain socket. `metadata_proxy_user = ` string value User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). network_auto_schedule = True boolean value Allow auto scheduling networks to DHCP agent. network_link_prefix = None string value This string is prepended to the normal URL that is returned in links to the OpenStack Network API. If it is empty (the default), the URLs are returned unchanged. network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler string value Driver to use for scheduling network to DHCP agent notify_nova_on_port_data_changes = True boolean value Send notification to nova when port data (fixed_ips/floatingip) changes so nova can update its cache. notify_nova_on_port_status_changes = True boolean value Send notification to nova when port status changes pagination_max_limit = -1 string value The maximum number of items returned in a single response; a value of 'infinite' or a negative integer means no limit periodic_fuzzy_delay = 5 integer value Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 40 integer value Seconds between running periodic tasks. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. retry_until_window = 30 integer value Number of seconds to keep retrying to listen router_auto_schedule = True boolean value Allow auto scheduling of routers to L3 agent. router_distributed = False boolean value System-wide flag to determine the type of router that tenants can create. Only admin can override. router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler string value Driver to use for scheduling router to a default L3 agent rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. rpc_state_report_workers = 1 integer value Number of RPC worker processes dedicated to the state reports queue. rpc_workers = None integer value Number of RPC worker processes for service. If not specified, the default is equal to half the number of API workers. send_events_interval = 2 integer value Number of seconds between sending events to nova if there are any events to send.
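The notification and RPC options are commonly combined as in the following sketch; the transport_url host and credentials are placeholders in the documented URL format, and the worker counts are examples rather than defaults.
[DEFAULT]
# Placeholder messaging credentials and example worker counts.
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
send_events_interval = 2
rpc_workers = 4
rpc_state_report_workers = 1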
service_plugins = [] list value The service plugins Neutron will use setproctitle = on string value Set process name to match child worker role. Available options are: off - retains the behavior; on - renames processes to neutron-server: role (original string); brief - renames the same as on, but without the original string, such as neutron-server: role. state_path = /var/lib/neutron string value Where to store Neutron state files. This directory must be writable by the agent. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:password@127.0.0.1:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_ssl = False boolean value Enable SSL on the API server use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vlan_transparent = False boolean value If True, then allow plugins that support it to create VLAN transparent networks. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. 8.7.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/neutron.conf file. Table 8.24.
agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node check_child_processes_action = respawn string value Action to be executed when a child process dies check_child_processes_interval = 60 integer value Interval between checks of child process liveness (seconds), use 0 to disable comment_iptables_rules = True boolean value Add comments to iptables rules. Set to false to disallow the addition of comments to generated iptables rules that describe each rule's purpose. System must support the iptables comments module for addition of comments. debug_iptables_rules = False boolean value Duplicate every iptables difference calculation to ensure the format being generated matches the format of iptables-save. This option should not be turned on for production systems because it imposes a performance penalty. kill_scripts_path = /etc/neutron/kill_scripts/ string value Location of scripts used to kill external processes. Names of scripts here must follow the pattern: "<process-name>-kill" where <process-name> is name of the process which should be killed using this script. For example, kill script for dnsmasq process should be named "dnsmasq-kill". If path is set to None, then default "kill" command will be used to stop processes. log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. root_helper = sudo string value Root helper application. Use sudo neutron-rootwrap /etc/neutron/rootwrap.conf to use the real root filter facility. Change to sudo to skip the filtering and just run the command directly. root_helper_daemon = None string value Root helper daemon application to use when possible. Use sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf to run rootwrap in "daemon mode" which has been reported to improve performance at scale. For more information on running rootwrap in "daemon mode", see: https://docs.openstack.org/oslo.rootwrap/latest/user/usage.html#daemon-mode For the agent which needs to execute commands in Dom0 in the hypervisor of XenServer, this option should be set to xenapi_root_helper , so that it will keep a XenAPI session to pass commands to Dom0. use_helper_for_ns_read = True boolean value Use the root helper when listing the namespaces on a system. This may not be required depending on the security configuration. If the root helper is not required, set this to False for a performance improvement. 8.7.3. cors The following table outlines the options available under the [cors] group in the /etc/neutron/neutron.conf file. Table 8.25. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. 
Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID', 'OpenStack-Volume-microversion'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 8.7.4. database The following table outlines the options available under the [database] group in the /etc/neutron/neutron.conf file. Table 8.26. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. `engine = ` string value Database engine for which script will be generated when using offline migration. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 8.7.5. ironic The following table outlines the options available under the [ironic] group in the /etc/neutron/neutron.conf file. Table 8.27. ironic Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. 
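A typical [database] section built from the options in this group might be sketched as follows; the connection URL, including the controller host and the NEUTRON_DBPASS credential, is hypothetical.
[database]
# Hypothetical SQLAlchemy connection URL; substitute real host and credentials.
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
max_pool_size = 5
max_overflow = 50
connection_recycle_time = 3600
max_retries = 10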
certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to enable_notifications = False boolean value Send notification events to ironic. (For example on relevant port status changes.) insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.6. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/neutron/neutron.conf file. Table 8.28. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. 
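The options in this group are usually combined with credential options supplied by the keystoneauth plugin selected through auth_type (auth_url, username, password, project_name, and the domain settings), which are not listed in the table itself. A sketch with placeholder endpoints and credentials:
[keystone_authtoken]
# Placeholder endpoints and credentials.
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS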
certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. 
For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 8.7.7. nova The following table outlines the options available under the [nova] group in the /etc/neutron/neutron.conf file. Table 8.29. nova Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint_type = public string value Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file live_migration_events = False boolean value When this option is enabled, during the live migration, the OVS agent will only send the "vif-plugged-event" when the destination host interface is bound. This option also disables any other agent (like DHCP) to send to Nova this event when the port is provisioned.This option can be enabled if Nova patch https://review.opendev.org/c/openstack/nova/+/767368 is in place.This option is temporary and will be removed in Y and the behavior will be "True". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region_name = None string value Name of nova region to use. Useful if keystone manages more than one region. 
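A sketch of the [nova] section used for these notifications follows; the endpoints and credentials are placeholders, and the options are written with the underscore spellings commonly used in configuration files (the table lists the equivalent dashed forms such as auth-url and project-name).
[nova]
# Placeholder identity settings used when notifying nova of port changes.
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
endpoint_type = public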
split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.8. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/neutron/neutron.conf file. Table 8.30. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 8.7.9. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/neutron/neutron.conf file. Table 8.31. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. 
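As a small illustration of the [oslo_concurrency] group, the lock path is commonly pointed at a directory writable by the neutron services; the path shown is an example, not a default.
[oslo_concurrency]
# Example lock directory; it must be writable by the user running the neutron processes.
lock_path = /var/lib/neutron/tmp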
group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 8.7.10. 
oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/neutron/neutron.conf file. Table 8.32. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 8.7.11. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/neutron/neutron.conf file. Table 8.33. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 8.7.12. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/neutron/neutron.conf file. Table 8.34. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. 
The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore. enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native python thread. By default, if this option isn't provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold the heartbeat is checked. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23.
SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 8.7.13. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/neutron/neutron.conf file. Table 8.35. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 8.7.14. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/neutron/neutron.conf file. Table 8.36. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 8.7.15. privsep The following table outlines the options available under the [privsep] group in the /etc/neutron/neutron.conf file. Table 8.37. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. user = None string value User that the privsep daemon should run as. 8.7.16. quotas The following table outlines the options available under the [quotas] group in the /etc/neutron/neutron.conf file. Table 8.38. quotas Configuration option = Default value Type Description default_quota = -1 integer value Default number of resource allowed per tenant. 
A negative value means unlimited. quota_driver = neutron.db.quota.driver.DbQuotaDriver string value Default driver to use for quota checks. quota_floatingip = 50 integer value Number of floating IPs allowed per tenant. A negative value means unlimited. quota_network = 100 integer value Number of networks allowed per tenant. A negative value means unlimited. quota_port = 500 integer value Number of ports allowed per tenant. A negative value means unlimited. quota_router = 10 integer value Number of routers allowed per tenant. A negative value means unlimited. quota_security_group = 10 integer value Number of security groups allowed per tenant. A negative value means unlimited. quota_security_group_rule = 100 integer value Number of security rules allowed per tenant. A negative value means unlimited. quota_subnet = 100 integer value Number of subnets allowed per tenant, A negative value means unlimited. track_quota_usage = True boolean value Keep in track in the database of current resource quota usage. Plugins which do not leverage the neutron database should set this flag to False. 8.7.17. ssl The following table outlines the options available under the [ssl] group in the /etc/neutron/neutron.conf file. Table 8.39. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 8.8. openvswitch_agent.ini This section contains options for the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. 8.8.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. 
Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. 
Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.8.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.40. agent Configuration option = Default value Type Description agent_type = Open vSwitch agent string value Selects the Agent Type reported. arp_responder = False boolean value Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2 l2population driver. Allows the switch (when supporting an overlay) to respond to an ARP request locally without performing a costly ARP broadcast into the overlay. NOTE: If enable_distributed_routing is set to True then arp_responder will automatically be set to True in the agent, regardless of the setting in the config file. baremetal_smartnic = False boolean value Enable the agent to process Smart NIC ports. dont_fragment = True boolean value Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying GRE/VXLAN tunnel. drop_flows_on_start = False boolean value Reset flow table on start. Setting this to True will cause brief traffic interruption. enable_distributed_routing = False boolean value Make the l2 agent run in DVR mode. explicitly_egress_direct = False boolean value When set to True, the accepted egress unicast traffic will not use action NORMAL. The accepted egress packets will be taken care of in the final egress tables direct output flows for unicast traffic. extensions = [] list value Extensions list to use l2_population = False boolean value Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve tunnel scalability. minimize_polling = True boolean value Minimize polling by monitoring ovsdb for interface changes. ovsdb_monitor_respawn_interval = 30 integer value The number of seconds to wait before respawning the ovsdb monitor after losing communication with it. tunnel_csum = False boolean value Set or un-set the tunnel header checksum on outgoing IP packet carrying GRE/VXLAN tunnel. tunnel_types = [] list value Network types supported by the agent (gre, vxlan and/or geneve). veth_mtu = 9000 integer value MTU size of veth interfaces vxlan_udp_port = 4789 port value The UDP port to use for VXLAN tunnels. 8.8.3. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.41. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. 
local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.8.4. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.42. ovs Configuration option = Default value Type Description bridge_mappings = [] list value Comma-separated list of <physical_network>:<bridge> tuples mapping physical network names to the agent's node-specific Open vSwitch bridge names to be used for flat and VLAN networks. The length of bridge names should be no more than 11. Each bridge must exist, and should have a physical network interface configured as a port. All physical networks configured on the server should have mappings to appropriate bridges on each agent. Note: If you remove a bridge from this mapping, make sure to disconnect it from the integration bridge as it won't be managed by the agent anymore. datapath_type = system string value OVS datapath to use. system is the default value and corresponds to the kernel datapath. To enable the userspace datapath set this value to netdev . int_peer_patch_port = patch-tun string value Peer patch port in integration bridge for tunnel bridge. integration_bridge = br-int string value Integration bridge to use. Do not change this parameter unless you have a good reason to. This is the name of the OVS integration bridge. There is one per hypervisor. The integration bridge acts as a virtual patch bay . All VM VIFs are attached to this bridge and then patched according to their network connectivity. local_ip = None IP address value IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the overlay_ip_version option in the ML2 plug-in configuration file on the neutron server node(s). of_connect_timeout = 300 integer value Timeout in seconds to wait for the local switch connecting the controller. of_inactivity_probe = 10 integer value The inactivity_probe interval in seconds for the local switch connection to the controller. A value of 0 disables inactivity probes. of_listen_address = 127.0.0.1 IP address value Address to listen on for OpenFlow connections. of_listen_port = 6633 port value Port to listen on for OpenFlow connections. of_request_timeout = 300 integer value Timeout in seconds to wait for a single OpenFlow request. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used by ovsdb-client when monitoring and used for the all ovsdb commands when native ovsdb_interface is enabled ovsdb_debug = False boolean value Enable OVSDB debug logs resource_provider_bandwidths = [] list value Comma-separated list of <bridge>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given bridge in the given direction. The direction is meant from VM perspective. Bandwidth is measured in kilobits per second (kbps). The bridge must appear in bridge_mappings as the value. But not all bridges in bridge_mappings must be listed here. For a bridge not listed here we neither create a resource provider in placement nor report inventories against. An omitted direction means we do not report an inventory for the corresponding class. resource_provider_hypervisors = {} dict value Mapping of bridges to hypervisors: <bridge>:<hypervisor>,... 
hypervisor name is used to locate the parent of the resource provider tree. Only needs to be set in the rare case when the hypervisor name is different from the DEFAULT.host config option value as known by the nova-compute managing that hypervisor. resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'reserved': 0, 'step_size': 1} dict value Key:value pairs to specify defaults used while reporting resource provider inventories. Possible keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int, See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection tun_peer_patch_port = patch-int string value Peer patch port in tunnel bridge for integration bridge. tunnel_bridge = br-tun string value Tunnel bridge to use. use_veth_interconnection = False boolean value Use veths instead of patch ports to interconnect the integration bridge to physical networks. Support kernel without Open vSwitch patch port support so long as it is set to True. vhostuser_socket_dir = /var/run/openvswitch string value OVS vhost-user socket directory. 8.8.5. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.43. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.8.6. xenapi The following table outlines the options available under the [xenapi] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.44. xenapi Configuration option = Default value Type Description connection_password = None string value Password for connection to XenServer/Xen Cloud Platform. connection_url = None string value URL for connection to XenServer/Xen Cloud Platform. connection_username = None string value Username for connection to XenServer/Xen Cloud Platform. 8.9. sriov_agent.ini This section contains options for the /etc/neutron/plugins/ml2/sriov_agent.ini file. 8.9.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. 
default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. 
Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.9.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. Table 8.45. agent Configuration option = Default value Type Description extensions = [] list value Extensions list to use 8.9.3. sriov_nic The following table outlines the options available under the [sriov_nic] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. Table 8.46. sriov_nic Configuration option = Default value Type Description exclude_devices = [] list value Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping network_device to the agent's node-specific list of virtual functions that should not be used for virtual networking. vfs_to_exclude is a semicolon-separated list of virtual functions to exclude from network_device. The network_device in the mapping should appear in the physical_device_mappings list. physical_device_mappings = [] list value Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent's node-specific physical network device interfaces of SR-IOV physical function to be used for VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. 
resource_provider_bandwidths = [] list value Comma-separated list of <network_device>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given device in the given direction. The direction is meant from VM perspective. Bandwidth is measured in kilobits per second (kbps). The device must appear in physical_device_mappings as the value. But not all devices in physical_device_mappings must be listed here. For a device not listed here we neither create a resource provider in placement nor report inventories against. An omitted direction means we do not report an inventory for the corresponding class. resource_provider_hypervisors = {} dict value Mapping of network devices to hypervisors: <network_device>:<hypervisor>,... hypervisor name is used to locate the parent of the resource provider tree. Only needs to be set in the rare case when the hypervisor name is different from the DEFAULT.host config option value as known by the nova-compute managing that hypervisor. resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'reserved': 0, 'step_size': 1} dict value Key:value pairs to specify defaults used while reporting resource provider inventories. Possible keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int, See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories
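To make the [sriov_nic] and [agent] options above more concrete, the following is a minimal sketch of an /etc/neutron/plugins/ml2/sriov_agent.ini file. The physical network name, interface name, PCI addresses, and enabled extension are illustrative assumptions, not values taken from this reference.
[sriov_nic]
# Map physical network "physnet2" to the SR-IOV physical function ens1f0 (illustrative names).
physical_device_mappings = physnet2:ens1f0
# Exclude two virtual functions of ens1f0 from use by the agent (illustrative PCI addresses).
exclude_devices = ens1f0:0000:07:00.2;0000:07:00.3
[agent]
# Load an optional agent extension (illustrative choice).
extensions = qos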
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/neutron_2
|
Chapter 6. Content distribution with Red Hat Quay
|
Chapter 6. Content distribution with Red Hat Quay Content distribution features in Red Hat Quay include: Repository mirroring Geo-replication Deployment in air-gapped environments 6.1. Repository mirroring Red Hat Quay repository mirroring lets you mirror images from external container registries, or another local registry, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags. From your Red Hat Quay cluster with repository mirroring enabled, you can perform the following: Choose a repository from an external registry to mirror Add credentials to access the external registry Identify specific container image repository names and tags to sync Set intervals at which a repository is synced Check the current state of synchronization To use the mirroring functionality, you need to perform the following actions: Enable repository mirroring in the Red Hat Quay configuration file Run a repository mirroring worker Create mirrored repositories All repository mirroring configurations can be performed using the configuration tool UI or by the Red Hat Quay API. 6.1.1. Using repository mirroring The following list shows features and limitations of Red Hat Quay repository mirroring: With repository mirroring, you can mirror an entire repository or selectively limit which images are synced. Filters can be based on a comma-separated list of tags, a range of tags, or other means of identifying tags through Unix shell-style wildcards. For more information, see the documentation for wildcards . When a repository is set as mirrored, you cannot manually add other images to that repository. Because the mirrored repository is based on the repository and tags you set, it will hold only the content represented by the repository and tag pair. For example, if you change the tag so that some images in the repository no longer match, those images will be deleted. Only the designated robot can push images to a mirrored repository, superseding any role-based access control permissions set on the repository. Mirroring can be configured to roll back on failure, or to run on a best-effort basis. With a mirrored repository, a user with read permissions can pull images from the repository but cannot push images to the repository. Changing settings on your mirrored repository can be performed in the Red Hat Quay user interface, using the Repositories Mirrors tab for the mirrored repository you create. Images are synced at set intervals, but can also be synced on demand. 6.1.2. Repository mirroring recommendations Best practices for repository mirroring include the following: Repository mirroring pods can run on any node. This means that you can run mirroring on nodes where Red Hat Quay is already running. Repository mirroring is scheduled in the database and runs in batches. As a result, repository mirror workers check each repository mirror configuration file and read when the next sync needs to occur. More mirror workers means more repositories can be mirrored at the same time. For example, running 10 mirror workers means that a user can run 10 mirroring operations in parallel. If a user only has 2 workers with 10 mirror configurations, only 2 operations can be performed.
The optimal number of mirroring pods depends on the following conditions: The total number of repositories to be mirrored The number of images and tags in the repositories and the frequency of changes Parallel batching For example, if a user is mirroring a repository that has 100 tags, the mirror will be completed by one worker. Users must consider how many repositories they want to mirror in parallel, and base the number of workers around that. Multiple tags in the same repository cannot be mirrored in parallel. 6.1.3. Event notifications for mirroring There are three notification events for repository mirroring: Repository Mirror Started Repository Mirror Success Repository Mirror Unsuccessful The events can be configured inside of the Settings tab for each repository, and all existing notification methods such as email, Slack, Quay UI, and webhooks are supported. 6.1.4. Mirroring API You can use the Red Hat Quay API to configure repository mirroring: Mirroring API More information is available in the Red Hat Quay API Guide 6.2. Geo-replication Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication are supported on standalone and Operator deployments. 6.2.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to other storage engines. The list of replication locations is configurable and those can be different storage backends. An image pull will always use the closest available storage engine, to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 6.2.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other regions' object storage. Object storage must be geographically accessible by all other regions. In case of an object storage system failure of one geo-replicating site, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. Users must configure a global load balancer (LB) to monitor the health of the distributed system and to route traffic to different sites based on their storage status. To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites. Geo-replication is asynchronous.
The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not fail over to another database. A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. It does not work with local storage. Each region must be able to access every storage engine in each region, which requires a network path. Alternatively, the storage proxy option can be used. The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged . An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration . Geo-replication requires SSL/TLS certificates and keys. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions. 6.2.3. Geo-replication using standalone Red Hat Quay In the following image, Red Hat Quay is running standalone in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Red Hat Quay instance, and will then be replicated, in the background, to the other storage engines. Note If Clair fails in one cluster, for example, the US cluster, US users would not see vulnerability reports in Red Hat Quay for the second cluster (EU). This is because all Clair instances have the same state. When Clair fails, it is usually because of a problem within the cluster. Geo-replication architecture 6.2.4. Geo-replication using the Red Hat Quay Operator In the example shown above, the Red Hat Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines. Because the Operator now manages the Clair security scanner and its database separately, geo-replication setups can be leveraged so that they do not manage the Clair database. Instead, an external shared database would be used.
Red Hat Quay and Clair support several providers and vendors of PostgreSQL, which can be found in the Red Hat Quay 3.x test matrix . Additionally, the Operator also supports custom Clair configurations that can be injected into the deployment, which allows users to configure Clair with the connection credentials for the external database. 6.2.5. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication. 6.3. Repository mirroring compared to geo-replication Red Hat Quay geo-replication mirrors the entire image storage backend data between 2 or more different storage backends while the database is shared, for example, one Red Hat Quay registry with two different blob storage endpoints. The primary use cases for geo-replication include the following: Speeding up access to the binary blobs for geographically dispersed setups Guaranteeing that the image content is the same across regions Repository mirroring synchronizes selected repositories, or subsets of repositories, from one registry to another. The registries are distinct, with each registry having a separate database and separate image storage. The primary use cases for mirroring include the following: Independent registry deployments in different data centers or regions, where a certain subset of the overall content is supposed to be shared across the data centers and regions Automatic synchronization or mirroring of selected (allowlisted) upstream repositories from external registries into a local Red Hat Quay deployment Note Repository mirroring and geo-replication can be used simultaneously. Table 6.1. Red Hat Quay Repository mirroring and geo-replication comparison Feature / Capability Geo-replication Repository mirroring What is the feature designed to do? A shared, global registry Distinct, different registries What happens if replication or mirroring has not been completed yet? The remote copy is used (slower) No image is served Is access to all storage backends in both regions required? Yes (all Red Hat Quay nodes) No (distinct storage) Can users push images from both sites to the same repository? Yes No Is all registry content and configuration identical across all regions (shared database)? Yes No Can users select individual namespaces or repositories to be mirrored? No Yes Can users apply filters to synchronization rules? No Yes Are individual / different role-base access control configurations allowed in each region No Yes 6.4. 
Air-gapped or disconnected deployments In the following diagram, the upper deployment shows Red Hat Quay and Clair connected to the internet, with an air-gapped OpenShift Container Platform cluster accessing the Red Hat Quay registry through an explicit, allowlisted hole in the firewall. The lower deployment shows Red Hat Quay and Clair running inside of the firewall, with image and CVE data transferred to the target system using offline media. The data is exported from a separate Red Hat Quay and Clair deployment that is connected to the internet. The following diagram shows how Red Hat Quay and Clair can be deployed in air-gapped or disconnected environments: Red Hat Quay and Clair in disconnected, or air-gapped, environments
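Pulling the mirroring and geo-replication settings discussed in this chapter together, the following config.yaml fragment is a minimal sketch for a standalone deployment; the location names usstorage and eustorage are assumptions, and the per-location storage engine definitions under DISTRIBUTED_STORAGE_CONFIG are omitted because they depend on the object storage provider in use.
# Enable the repository mirroring feature; a repository mirror worker must also be run.
FEATURE_REPO_MIRROR: true
# Geo-replication: list the storage locations defined in DISTRIBUTED_STORAGE_CONFIG.
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - usstorage
  - eustorage
Each site then selects its nearest location through the environment variable mentioned above, for example QUAY_DISTRIBUTED_STORAGE_PREFERENCE=usstorage on the US site and QUAY_DISTRIBUTED_STORAGE_PREFERENCE=eustorage on the EU site.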
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_architecture/content-distrib-intro
|
4.29. VMware over SOAP API
|
4.29. VMware over SOAP API Table 4.30, "VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)" lists the fence device parameters used by fence_vmware_soap , the fence agent for VMware over SOAP API. Table 4.30. VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later) luci Field cluster.conf Attribute Description Name name Name of the virtual machine fencing device. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use for connection with the device. The default port is 80, or 443 if Use SSL is selected. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. VM name port Name of virtual machine in inventory path format (for example, /datacenter/vm/Discovered_virtual_machine/myMachine). VM UUID uuid The UUID of the virtual machine to fence. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSL ssl Use SSL connections to communicate with the device. Figure 4.22, "VMware over SOAP Fencing" shows the configuration screen for adding a VMware over SOAP fence device Figure 4.22. VMware over SOAP Fencing The following command creates a fence device instance for a VMware over SOAP fence device: The following is the cluster.conf entry for the fence_vmware_soap device:
|
[
"ccs -f cluster.conf --addfencedev vmwaresoaptest1 agent=fence_vmware_soap login=root passwd=password123 power_wait=60 separator=,",
"<fencedevices> <fencedevice agent=\"fence_vmware_soap\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"vmwaresoaptest1\" passwd=\"password123\" power_wait=\"60\" separator=\".\"/> </fencedevices>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-vmware-soap-ca
|
7.139. ntp
|
7.139. ntp 7.139.1. RHSA-2015:1459 - Moderate: ntp security, bug fix, and enhancement update Updated ntp packages that fix multiple security issues, several bugs, and add two enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The Network Time Protocol (NTP) is used to synchronize a computer's time with another referenced time source. Security Fixes CVE-2014-9298 It was found that because NTP's access control was based on a source IP address, an attacker could bypass source IP restrictions and send malicious control and configuration packets by spoofing ::1 addresses. CVE-2015-1799 A denial of service flaw was found in the way NTP hosts that were peering with each other authenticated themselves before updating their internal state variables. An attacker could send packets to one peer host, which could cascade to other peers, and stop the synchronization process among the reached peers. CVE-2015-3405 A flaw was found in the way the ntp-keygen utility generated MD5 symmetric keys on big-endian systems. An attacker could possibly use this flaw to guess generated MD5 keys, which could then be used to spoof an NTP client or server. CVE-2014-9297 A stack-based buffer overflow was found in the way the NTP autokey protocol was implemented. When an NTP client decrypted a secret received from an NTP server, it could cause that client to crash. CVE-2015-1798 It was found that ntpd did not check whether a Message Authentication Code (MAC) was present in a received packet when ntpd was configured to use symmetric cryptographic keys. A man-in-the-middle attacker could use this flaw to send crafted packets that would be accepted by a client or a peer without the attacker knowing the symmetric key. The CVE-2015-1798 and CVE-2015-1799 issues were discovered by Miroslav Lichvar of Red Hat. Bug Fixes BZ# 1053551 The ntpd daemon truncated symmetric keys specified in the key file to 20 bytes. As a consequence, it was impossible to configure NTP authentication to work with peers that use longer keys. The maximum length of keys has now been changed to 32 bytes. BZ# 1184421 The ntp-keygen utility used the exponent of 3 when generating RSA keys, and generating RSA keys failed when FIPS mode was enabled. ntp-keygen has been modified to use the exponent of 65537, and generating keys in FIPS mode now works as expected. BZ# 1045376 The ntpd daemon included a root delay when calculating its root dispersion. Consequently, the NTP server reported larger root dispersion than it should have and clients could reject the source when its distance reached the maximum synchronization distance (1.5 seconds by default). Calculation of root dispersion has been fixed, the root dispersion is now reported correctly, and clients no longer reject the server due to a large synchronization distance. BZ# 1171630 The ntpd daemon dropped incoming NTP packets if their source port was lower than 123 (the NTP port). Clients behind Network Address Translation (NAT) were unable to synchronize with the server if their source port was translated to ports below 123. With this update, ntpd no longer checks the source port number. Enhancements BZ# 1122015 This update introduces configurable access of memory segments used for Shared Memory Driver (SHM) reference clocks. 
Previously, only the first two memory segments were created with owner-only access, allowing just two SHM reference clocks to be used securely on a system. Now, the owner-only access to SHM is configurable with the "mode" option, and it is therefore possible to use more SHM reference clocks securely. BZ# 1117704 Support for nanosecond resolution has been added to the SHM reference clock. Prior to this update, when a Precision Time Protocol (PTP) hardware clock was used as a time source to synchronize the system clock (for example, with the timemaster service from the linuxptp package), the accuracy of the synchronization was limited due to the microsecond resolution of the SHM protocol. The nanosecond extension in the SHM protocol now enables sub-microsecond synchronization of the system clock. All users of ntp are advised to upgrade to these updated packages, which correct these issues and add these enhancements.
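Because this advisory only ships updated packages, applying it is a routine package update. The commands below are a generic sketch rather than part of the advisory text.
# Update the ntp packages and restart the daemon so the fixes take effect:
yum update ntp
service ntpd restart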
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-ntp
|
25.9. Removing a Storage Device
|
25.9. Removing a Storage Device Before removing access to the storage device itself, it is advisable to back up data from the device first. Afterwards, flush I/O and remove all operating system references to the device (as described below). If the device uses multipathing, then do this for the multipath "pseudo device" ( Section 25.8.2, "World Wide Identifier (WWID)" ) and each of the identifiers that represent a path to the device. If you are only removing a path to a multipath device, and other paths will remain, then the procedure is simpler, as described in Section 25.11, "Adding a Storage Device or Path" . Removal of a storage device is not recommended when the system is under memory pressure, since the I/O flush will add to the load. To determine the level of memory pressure, run the command vmstat 1 100 ; device removal is not recommended if: Free memory is less than 5% of the total memory in more than 10 samples per 100 (the command free can also be used to display the total memory). Swapping is active (non-zero si and so columns in the vmstat output). The general procedure for removing all access to a device is as follows: Procedure 25.11. Ensuring a Clean Device Removal Close all users of the device and back up device data as needed. Use umount to unmount any file systems that mounted the device. Remove the device from any md and LVM volume using it. If the device is a member of an LVM Volume group, then it may be necessary to move data off the device using the pvmove command, then use the vgreduce command to remove the physical volume, and (optionally) pvremove to remove the LVM metadata from the disk. Run the multipath -l command to find the list of devices that are configured as multipath devices. If the device is configured as a multipath device, run the multipath -f device command to flush any outstanding I/O and to remove the multipath device. Flush any outstanding I/O to the used paths. This is important for raw devices, where there is no umount or vgreduce operation to cause an I/O flush. You need to do this step only if: the device is not configured as a multipath device, or the device is configured as a multipath device and I/O has been issued directly to its individual paths at some point in the past. Use the following command to flush any outstanding I/O: Remove any reference to the device's path-based name, like /dev/sd , /dev/disk/by-path or the major:minor number, in applications, scripts, or utilities on the system. This is important in ensuring that different devices added in the future will not be mistaken for the current device. Finally, remove each path to the device from the SCSI subsystem. To do so, use the command echo 1 > /sys/block/ device-name /device/delete where device-name may be sde , for example. Another variation of this operation is echo 1 > /sys/class/scsi_device/ h : c : t : l /device/delete , where h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN. Note The older form of these commands, echo "scsi remove-single-device 0 0 0 0" > /proc/scsi/scsi , is deprecated. You can determine the device-name , HBA number, HBA channel, SCSI target ID and LUN for a device from various commands, such as lsscsi , scsi_id , multipath -l , and ls -l /dev/disk/by-* . After performing Procedure 25.11, "Ensuring a Clean Device Removal" , a device can be physically removed safely from a running system. It is not necessary to stop I/O to other devices while doing so.
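As a sketch of the procedure above for a single, non-multipath SCSI disk, the commands might look like the following; the device name sdc, the mount point, and the volume group name are illustrative assumptions, not values from this guide.
umount /mnt/data                        # unmount any file system on the device
pvmove /dev/sdc1                        # if the device is an LVM physical volume, migrate its data
vgreduce myvg /dev/sdc1                 # remove the physical volume from its volume group
pvremove /dev/sdc1                      # optionally wipe the LVM metadata
multipath -l                            # confirm the device is not part of a multipath device
blockdev --flushbufs /dev/sdc           # flush any outstanding I/O
echo 1 > /sys/block/sdc/device/delete   # remove the path from the SCSI subsystem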
Other procedures, such as the physical removal of the device, followed by a rescan of the SCSI bus (as described in Section 25.12, "Scanning Storage Interconnects" ) to cause the operating system state to be updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and devices may be removed unexpectedly. If it is necessary to perform a rescan of an interconnect, it must be done while I/O is paused, as described in Section 25.12, "Scanning Storage Interconnects" .
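The following sequence is a minimal sketch of the procedure above for a hypothetical multipath device mpatha with a single remaining path sde; device names, mount points, and volume membership will differ on a real system:
umount /mnt/data                        # unmount any file system that uses the device
multipath -l                            # confirm whether the device is configured as a multipath device
multipath -f mpatha                     # flush outstanding I/O and remove the multipath device
blockdev --flushbufs /dev/sde           # flush I/O to the path (needed for raw or non-multipath use)
echo 1 > /sys/block/sde/device/delete   # remove the path from the SCSI subsystem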
|
[
"blockdev --flushbufs device"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/removing_devices
|
probe::linuxmib.TCPMemoryPressures
|
probe::linuxmib.TCPMemoryPressures Name probe::linuxmib.TCPMemoryPressures - Count of times memory pressure was used Synopsis linuxmib.TCPMemoryPressures Values sk Pointer to the struct sock being acted on op Value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function linuxmib_filter_key . If the packet passes the filter, it is counted in the global TCPMemoryPressures (equivalent to SNMP's MIB LINUX_MIB_TCPMEMORYPRESSURES)
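As a brief, hedged illustration, the following SystemTap script uses this probe to total the memory pressure events observed over ten seconds; the variable name and interval are arbitrary choices for the example (run it with stap -v script.stp):
global pressures
probe linuxmib.TCPMemoryPressures { pressures += op }   # op defaults to 1 per event
probe timer.s(10) { printf("TCP memory pressure events: %d\n", pressures); exit() }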
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-linuxmib-tcpmemorypressures
|
Admin Portal Guide
|
Admin Portal Guide Red Hat 3scale API Management 2.15 Manage aspects related to Red Hat 3scale API Management. Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/index
|
Chapter 64. System and Subscription Management
|
Chapter 64. System and Subscription Management System upgrade may cause Yum to install unneeded 32-bit packages if rdma-core is installed In Red Hat Enterprise Linux 7.4, the rdma-core.noarch packages are obsoleted by rdma-core.i686 and rdma-core.x86_64 . During a system upgrade, Yum replaces the original package with both of the new packages, and installs any required dependencies. This means that the 32-bit package, as well as a potentially large number of its 32-bit dependencies, is installed by default, even if not required. To work around this problem, you can either use the yum update command with the --exclude=\*.i686 option, or you can use yum remove rdma-core.i686 after the upgrade to remove the 32-bit package. (BZ#1458338)
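For reference, the two workarounds described above correspond to the following commands; choose the one that fits your update workflow:
yum update --exclude=\*.i686     # exclude 32-bit packages during the upgrade
yum remove rdma-core.i686        # or remove the 32-bit package after the upgrade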
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/known_issues_system_and_subscription_management
|
Chapter 5. Access Control Lists
|
Chapter 5. Access Control Lists Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users for the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented. The Red Hat Enterprise Linux kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. The cp and mv commands copy or move any ACLs associated with files and directories. 5.1. Mounting File Systems Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command: mount -t ext3 -o acl device-name partition For example: mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option: If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs are recognized because Samba has been compiled with the --with-acl-support option. No special flags are required when accessing or mounting a Samba share. 5.1.1. NFS By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, ACLs are utilized by the client system. To disable ACLs on NFS shares when configuring the server, include the no_acl option in the /etc/exports file. To disable ACLs on an NFS share when mounting it on a client, mount it with the no_acl option via the command line or the /etc/fstab file. 5.2. Setting Access ACLs There are two types of ACLs: access ACLs and default ACLs . An access ACL is the access control list for a specific file or directory. A default ACL can only be associated with a directory; if a file within the directory does not have an access ACL, it uses the rules of the default ACL for the directory. Default ACLs are optional. ACLs can be configured: Per user Per group Via the effective rights mask For users not in the user group for the file The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory: Rules ( rules ) must be specified in the following formats. Multiple rules can be specified in the same command if they are separated by commas. u: uid : perms Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system. g: gid : perms Sets the access ACL for a group. The group name or GID may be specified. The group may be any valid group on the system. m: perms Sets the effective rights mask. The mask is the union of all permissions of the owning group and all of the user and group entries. o: perms Sets the access ACL for users other than the ones in the group for the file. Permissions ( perms ) must be a combination of the characters r , w , and x for read, write, and execute. If a file or directory already has an ACL, and the setfacl command is used, the additional rules are added to the existing ACL or the existing rule is modified. Example 5.1. 
Give read and write permissions For example, to give read and write permissions to user andrius: To remove all the permissions for a user, group, or others, use the -x option and do not specify any permissions: Example 5.2. Remove all permissions For example, to remove all permissions from the user with UID 500: 5.3. Setting Default ACLs To set a default ACL, add d: before the rule and specify a directory instead of a file name. Example 5.3. Setting default ACLs For example, to set the default ACL for the /share/ directory to read and execute for users not in the user group (an access ACL for an individual file can override it): 5.4. Retrieving ACLs To determine the existing ACLs for a file or directory, use the getfacl command. In the example below, the getfacl is used to determine the existing ACLs for a file. Example 5.4. Retrieving ACLs The above command returns the following output: If a directory with a default ACL is specified, the default ACL is also displayed as illustrated below. For example, getfacl home/sales/ will display similar output: 5.5. Archiving File Systems With ACLs By default, the dump command now preserves ACLs during a backup operation. When archiving a file or file system with tar , use the --acls option to preserve ACLs. Similarly, when using cp to copy files with ACLs, include the --preserve=mode option to ensure that ACLs are copied across too. In addition, the -a option (equivalent to -dR --preserve=all ) of cp also preserves ACLs during a backup along with other information such as timestamps, SELinux contexts, and the like. For more information about dump , tar , or cp , refer to their respective man pages. The star utility is similar to the tar utility in that it can be used to generate archives of files; however, some of its options are different. Refer to Table 5.1, "Command Line Options for star " for a listing of more commonly used options. For all available options, refer to man star . The star package is required to use this utility. Table 5.1. Command Line Options for star Option Description -c Creates an archive file. -n Do not extract the files; use in conjunction with -x to show what extracting the files does. -r Replaces files in the archive. The files are written to the end of the archive file, replacing any files with the same path and file name. -t Displays the contents of the archive file. -u Updates the archive file. The files are written to the end of the archive if they do not exist in the archive, or if the files are newer than the files of the same name in the archive. This option only works if the archive is a file or an unblocked tape that may backspace. -x Extracts the files from the archive. If used with -U and a file in the archive is older than the corresponding file on the file system, the file is not extracted. -help Displays the most important options. -xhelp Displays the least important options. -/ Do not strip leading slashes from file names when extracting the files from an archive. By default, they are stripped when files are extracted. -acl When creating or extracting, archives or restores any ACLs associated with the files and directories. 5.6. Compatibility with Older Systems If an ACL has been set on any file on a given file system, that file system has the ext_attr attribute. This attribute can be seen using the following command: A file system that has acquired the ext_attr attribute can be mounted with older kernels, but those kernels do not enforce any ACLs which have been set. 
Versions of the e2fsck utility included in version 1.22 and higher of the e2fsprogs package (including the versions in Red Hat Enterprise Linux 2.1 and 4) can check a file system with the ext_attr attribute. Older versions refuse to check it. 5.7. ACL References Refer to the following man pages for more information. man acl - Description of ACLs man getfacl - Discusses how to get file access control lists man setfacl - Explains how to set file access control lists man star - Explains more about the star utility and its many options
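As a short illustration of the archiving options described above, the following commands create a tar archive that preserves ACLs and copy a file together with its ACL; the paths are placeholders:
tar --acls -cf /backup/project.tar /project     # preserve ACLs in the archive
cp --preserve=mode /project/somefile /backup/   # copy the file along with its ACL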
|
[
"LABEL=/work /work ext3 acl 1 2",
"setfacl -m rules files",
"setfacl -m u:andrius:rw /project/somefile",
"setfacl -x rules files",
"setfacl -x u:500 /project/somefile",
"setfacl -m d:o:rx /share",
"getfacl home/john/picture.png",
"file: home/john/picture.png owner: john group: john user::rw- group::r-- other::r--",
"file: home/sales/ owner: john group: john user::rw- user:barryg:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:john:rwx default:group::r-x default:mask::rwx default:other::r-x",
"tune2fs -l filesystem-device"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-access_control_lists
|
Chapter 2. Configuration recommendations for the Telemetry service
|
Chapter 2. Configuration recommendations for the Telemetry service Because the Red Hat OpenStack Platform (RHOSP) Telemetry service is CPU-intensive, telemetry is not enabled by default in RHOSP 16.0. However, by following these deployment recommendations, you can avoid performance degradation if you enable telemetry. These procedures, one for a small test overcloud and one for a large production overcloud, contain recommendations that maximize Telemetry service performance. 2.1. Configuring the Telemetry service on a small, test overcloud When you enable the Red Hat OpenStack Platform (RHOSP) Telemetry service on small, test overclouds, you can improve its performance by using a file back end. Prerequisites The overcloud deployment on which you are configuring the Telemetry service is not a production system. The overcloud is a small deployment that supports fewer than 100 instances, with a maximum of 12 physical cores on each Controller node, or 24 cores with hyperthreading enabled. The overcloud deployment has high availability disabled . Procedure Add the following to parameter_defaults in your /usr/share/openstack-tripleo-heat-templates/environments/enable-legacy-telemetry.yaml environment file and replace <FILE> with the name of the gnocchi configuration file: Add the enable-legacy-telemetry.yaml file to your openstack overcloud deploy command: Additional resources Modifying the Overcloud Environment in the Director Installation and Usage guide 2.2. Configuring the Telemetry service on a large, production overcloud When you enable the Red Hat OpenStack Platform (RHOSP) Telemetry service on a large production overcloud, you can improve its performance by deploying the Telemetry service on a dedicated node. The Telemetry service uses whichever RHOSP object store has been chosen as its storage back end. If you do not enable Red Hat Ceph Storage, the Telemetry service uses the RHOSP Object Storage service (swift). By default, RHOSP director colocates the Object Storage service with the Telemetry service on the Controller. Prerequisites The overcloud on which you are deploying the Telemetry service is a large, production overcloud. Procedure To set dedicated telemetry nodes, remove the telemetry services from the Controller role. Create an Orchestration service (heat) custom environment file by copying /usr/share/openstack-tripleo-heat-templates/roles_data.yaml to /home/stack/templates/roles_data.yaml . In /home/stack/templates/roles_data.yaml , remove the following lines from the ServicesDefault list of the Controller role: Add the following snippet, and save roles_data.yaml : In the /home/stack/storage-environment.yaml file, set the number of dedicated nodes for the Telemetry service. For example, add TelemetryCount: 3 to the parameter_defaults to deploy three dedicated telemetry nodes: You now have a custom telemetry role. With this role, you can define a new flavor to tag and assign specific telemetry nodes. When you deploy your overcloud, include roles_data.yaml and storage-environment.yaml in the list of templates and environment files that the openstack overcloud deploy command uses: If you cannot allocate dedicated nodes to the Telemetry service, and you still need to use the Object Storage service as its back end, configure the Object Storage service on the Controller node. Locating the Object Storage service on the Controller lowers the overall storage I/O. 
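The TelemetryCount setting described in the procedure above appears in /home/stack/storage-environment.yaml similar to the following sketch:
parameter_defaults:
  TelemetryCount: 3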
Additional resources Creating a New Role in the Advanced Overcloud Customization guide Configuration recommendations for the Object Storage service (swift) Modifying the Overcloud Environment in the Director Installation and Usage guide
|
[
"parameter_defaults: GnocchiBackend: <FILE>",
"openstack overcloud deploy -e /home/stack/environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/enable-legacy-telemetry.yaml [...]",
"- OS::TripleO::Services::CeilometerAgentCentral - OS::TripleO::Services::CeilometerAgentNotification - OS::TripleO::Services::GnocchiApi - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::AodhApi - OS::TripleO::Services::AodhEvaluator - OS::TripleO::Services::AodhNotifier - OS::TripleO::Services::AodhListener - OS::TripleO::Services::PankoApi - OS::TripleO::Services::CeilometerAgentIpmi",
"- name: Telemetry ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Kernel - OS::TripleO::Services::Ntp - OS::TripleO::Services::Timezone - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Securetty - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::SensuClient - OS::TripleO::Services::FluentdClient - OS::TripleO::Services::AuditD - OS::TripleO::Services::Collectd - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::Docker - OS::TripleO::Services::CeilometerAgentCentral - OS::TripleO::Services::CeilometerAgentNotification - OS::TripleO::Services::GnocchiApi - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::AodhApi - OS::TripleO::Services::AodhEvaluator - OS::TripleO::Services::AodhNotifier - OS::TripleO::Services::AodhListener - OS::TripleO::Services::PankoApi - OS::TripleO::Services::CeilometerAgentIpmi",
"parameter_defaults: TelemetryCount: *3*",
"openstack overcloud deploy -r /home/stack/templates/roles_data.yaml -e /home/stack/templates/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/enable-legacy-telemetry.yaml [...]"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deployment_recommendations_for_specific_red_hat_openstack_platform_services/config-recommend-telemetry_config-recommend-telemetry
|
Chapter 1. Machine APIs
|
Chapter 1. Machine APIs 1.1. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1] Description ContainerRuntimeConfig describes a customized Container Runtime configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ControllerConfig [machineconfiguration.openshift.io/v1] Description ControllerConfig describes configuration for MachineConfigController. This is currently only used to drive the MachineConfig objects generated by the TemplateController. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ControlPlaneMachineSet [machine.openshift.io/v1] Description ControlPlaneMachineSet ensures that a specified number of control plane machine replicas are running at any given time. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. KubeletConfig [machineconfiguration.openshift.io/v1] Description KubeletConfig describes a customized Kubelet configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. MachineConfig [machineconfiguration.openshift.io/v1] Description MachineConfig defines the configuration for a machine Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. MachineHealthCheck [machine.openshift.io/v1beta1] Description MachineHealthCheck is the Schema for the machinehealthchecks API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.8. Machine [machine.openshift.io/v1beta1] Description Machine is the Schema for the machines API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.9. MachineSet [machine.openshift.io/v1beta1] Description MachineSet ensures that a specified number of machines replicas are running at any given time. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object
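As a hedged illustration of how one of these resources is typically expressed, the following minimal KubeletConfig manifest targets the worker machine config pool; the metadata name and maxPods value are placeholders, and the exact spec fields should be confirmed against the API reference for your cluster version:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: example-kubelet-config
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 250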
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_apis/machine-apis
|
Chapter 67. Persistence and transactions in the process engine
|
Chapter 67. Persistence and transactions in the process engine The process engine implements persistence for process states. The implementation uses the JPA framework with an SQL database backend. It can also store audit log information in the database. The process engine also enables transactional execution of processes using the JTA framework, relying on the persistence backend to support the transactions. 67.1. Persistence of process runtime states The process engine supports persistent storage of the runtime state of running process instances. Because it stores the runtime states, it can continue execution of a process instance if the process engine is stopped or encounters a problem at any point. The process engine also persistently stores the process definitions and the history logs of current and previous process states. You can use the persistence.xml file, specified by the JPA framework, to configure persistence in an SQL database. You can plug in different persistence strategies. For more information about the persistence.xml file, see Section 67.4.1, "Configuration in the persistence.xml file" . By default, if you do not configure persistence in the process engine, process information, including process instance states, is not made persistent. When the process engine starts a process, it creates a process instance , which represents the execution of the process in that specific context. For example, when executing a process that processes a sales order, one process instance is created for each sales request. The process instance contains the current runtime state and context of a process, including current values of any process variables. However, it does not include information about the history of past states of the process, as this information is not required for ongoing execution of a process. When the runtime state of process instances is made persistent, you can restore the state of execution of all running processes in case the process engine fails or is stopped. You can also remove a particular process instance from memory and then restore it at a later time. If you configure the process engine to use persistence, it automatically stores the runtime state into the database. You do not need to trigger persistence in the code. When you restore the state of the process engine from a database, all instances are automatically restored to their last recorded state. Process instances automatically resume execution if they are triggered, for example, by an expired timer, the completion of a task that was requested by the process instance, or a signal being sent to the process instance. You do not need to load separate instances and trigger their execution manually. The process engine also automatically reloads process instances on demand. 67.1.1. Safe points for persistence The process engine saves the state of a process instance to persistent storage at safe points during the execution of the process. When a process instance is started or resumes execution from a wait state, the process engine continues the execution until no more actions can be performed. If no more actions can be performed, it means that the process has completed or else has reached a wait state. If the process contains several parallel paths, all the paths must reach a wait state. This point in the execution of the process is considered a safe point. At this point, the process engine stores the state of the process instance, and of any other process instances that were affected by the execution, to persistent storage. 
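As a minimal sketch, assuming a persistence-enabled KIE session ksession and a hypothetical process com.sample.approval that reaches a human-task wait state, the wait state is the safe point at which the engine persists the instance:
// execution runs until the wait state (a safe point), where the engine stores the instance state
ProcessInstance processInstance = ksession.startProcess("com.sample.approval");
long processInstanceId = processInstance.getId();
// the session can now be disposed; the instance can later be reloaded by its ID and resumed
ksession.dispose();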
67.2. The persistent audit log The process engine can store information about the execution of process instances, including the successive historical states of the instances. This information can be useful in many cases. For example, you might want to verify which actions have been executed for a particular process instance or to monitor and analyze the efficiency of a particular process. However, storing history information in the runtime database would result in the database rapidly increasing in size and would also affect the performance of the persistence layer. Therefore, history log information is stored separately. The process engine creates a log based on events that it generates during execution of processes. It uses the event listener mechanism to receive events and extract the necessary information, then persists this information to a database. The jbpm-audit module contains an event listener that stores process-related information in a database using JPA. You can use filters to limit the scope of the logged information. 67.2.1. The process engine audit log data model You can query process engine audit log information to use it in different scenarios, for example, creating a history log for one specific process instance or analyzing the performance of all instances of a specific process. The audit log data model is a default implementation. Depending on your use cases, you might also define your own data model for storing the information you require. You can use process event listeners to extract the information. The data model contains three entities: one for process instance information, one for node instance information, and one for process variable instance information. The ProcessInstanceLog table contains the basic log information about a process instance. Table 67.1. ProcessInstanceLog table fields Field Description Nullable id The primary key and ID of the log entity NOT NULL correlationKey The correlation of this process instance duration Actual duration of this process instance since its start date end_date When applicable, the end date of the process instance externalId Optional external identifier used to correlate to some elements, for example, a deployment ID user_identity Optional identifier of the user who started the process instance outcome The outcome of the process instance. This field contains the error code if the process instance was finished with an error event. parentProcessInstanceId The process instance ID of the parent process instance, if applicable processid The ID of the process processinstanceid The process instance ID NOT NULL processname The name of the process processtype The type of the instance (process or case) processversion The version of the process sla_due_date The due date of the process according to the service level agreement (SLA) slaCompliance The level of compliance with the SLA start_date The start date of the process instance status The status of the process instance that maps to the process instance state The NodeInstanceLog table contains more information about which nodes were executed inside each process instance. Whenever a node instance is entered from one of its incoming connections or is exited through one of its outgoing connections, information about the event is stored in this table. Table 67.2. 
NodeInstanceLog table fields Field Description Nullable id The primary key and ID of the log entity NOT NULL connection Actual identifier of the sequence flow that led to this node instance log_date The date of the event externalId Optional external identifier used to correlate to some elements, for example, a deployment ID nodeid The node ID of the corresponding node in the process definition nodeinstanceid The node instance ID nodename The name of the node nodetype The type of the node processid The ID of the process that the process instance is executing processinstanceid The process instance ID NOT NULL sla_due_date The due date of the node according to the service level agreement (SLA) slaCompliance The level of compliance with the SLA type The type of the event (0 = enter, 1 = exit) NOT NULL workItemId (Optional, only for certain node types) The identifier of the work item nodeContainerId The identifier of the container, if the node is inside an embedded sub-process node referenceId The reference identifier observation The original node instance ID and job ID, if the node is of the scheduled event type. You can use this information to trigger the job again. The VariableInstanceLog table contains information about changes in variable instances. By default, the process engine generates log entries after a variable changes its value. The process engine can also log entries before the changes. Table 67.3. VariableInstanceLog table fields Field Description Nullable id The primary key and ID of the log entity NOT NULL externalId Optional external identifier used to correlate to some elements, for example, a deployment ID log_date The date of the event processid The ID of the process that the process instance is executing processinstanceid The process instance ID NOT NULL oldvalue The value of the variable at the time that the log is made value The value of the variable at the time that the log is made variableid The variable ID in the process definition variableinstanceid The ID of the variable instance The AuditTaskImpl table contains information about user tasks. Table 67.4. AuditTaskImpl table fields Field Description Nullable id The primary key and ID of the task log entity activationTime Time when this task was activated actualOwner Actual owner assigned to this task. This value is set only when the owner claims the task. createdBy User who created this task createdOn Date when the task was created deploymentId The ID of the deployment of which this task is a part description Description of the task dueDate Due date set on this task name Name of the task parentId Parent task ID priority Priority of the task processId Process definition ID to which this task belongs processInstanceId Process instance ID with which this task is associated processSessionId KIE session ID used to create this task status Current status of the task taskId Identifier of the task workItemId Identifier of the work item assigned on the process side to this task ID lastModificationDate The date and time when the process instance state was last recorded in the persistence database The BAMTaskSummary table collects information about tasks that is used by the BAM engine to build charts and dashboards. Table 67.5. 
BAMTaskSummary table fields Field Description Nullable pk The primary key and ID of the log entity NOT NULL createdDate Date when the task was created duration Duration since the task was created endDate Date when the task reached an end state (complete, exit, fail, skip) processinstanceid The process instance ID startDate Date when the task was started status Current status of the task taskId Identifier of the task taskName Name of the task userId User ID assigned to the task optlock The version field that serves as its optimistic lock value The TaskVariableImpl table contains information about task variable instances. Table 67.6. TaskVariableImpl table fields Field Description Nullable id The primary key and ID of the log entity NOT NULL modificationDate Date when the variable was modified most recently name Name of the task processid The ID of the process that the process instance is executing processinstanceid The process instance ID taskId Identifier of the task type Type of the variable: either input or output of the task value Variable value The TaskEvent table contains information about changes in task instances. Operations such as claim , start , and stop are stored in this table to provide a timeline view of events that happened to the given task. Table 67.7. TaskEvent table fields Field Description Nullable id The primary key and ID of the log entity NOT NULL logTime Date when this event was saved message Log event message processinstanceid The process instance ID taskId Identifier of the task type Type of the event. Types correspond to life cycle phases of the task userId User ID assigned to the task workItemId Identifier of the work item to which the task is assigned optlock The version field that serves as its optimistic lock value correlationKey Correlation key of the process instance processType Type of the process instance (process or case) currentOwner The current owner of the task 67.2.2. Configuration for storing the process events log in a database To log process history information in a database with a default data model, you must register the logger on your session. Registering the logger on your KIE session KieSession ksession = ...; ksession.addProcessEventListener(AuditLoggerFactory.newInstance(Type.JPA, ksession, null)); // invoke methods for your session here To specify the database for storing the information, you must modify the persistence.xml file to include the audit log classes: ProcessInstanceLog , NodeInstanceLog , and VariableInstanceLog . 
Modified persistence.xml file that includes the audit log classes <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <persistence version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:orm="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <jta-data-source>jdbc/jbpm-ds</jta-data-source> <mapping-file>META-INF/JBPMorm.xml</mapping-file> <class>org.drools.persistence.info.SessionInfo</class> <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class> <class>org.drools.persistence.info.WorkItemInfo</class> <class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class> <class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class> <class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class> <class>org.jbpm.process.audit.ProcessInstanceLog</class> <class>org.jbpm.process.audit.NodeInstanceLog</class> <class>org.jbpm.process.audit.VariableInstanceLog</class> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/> <property name="hibernate.max_fetch_depth" value="3"/> <property name="hibernate.hbm2ddl.auto" value="update"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.connection.release_mode" value="after_transaction"/> <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform"/> </properties> </persistence-unit> </persistence> 67.2.3. Configuration for sending the process events log to a JMS queue When the process engine stores events in the database with the default audit log implementation, the database operation is completed synchronously, within the same transaction as the actual execution of the process instance. This operation takes time, and on highly loaded systems it might have some impact on database performance, especially when both the history log and the runtime data are stored in the same database. As an alternative, you can use the JMS-based logger that the process engine provides. You can configure this logger to submit process log entries as messages to a JMS queue, instead of directly persisting them in the database. You can configure the JMS logger to be transactional, in order to avoid data inconsistencies if a process engine transaction is rolled back. Using the JMS audit logger ConnectionFactory factory = ...; Queue queue = ...; StatefulKnowledgeSession ksession = ...; Map<String, Object> jmsProps = new HashMap<String, Object>(); jmsProps.put("jbpm.audit.jms.transacted", true); jmsProps.put("jbpm.audit.jms.connection.factory", factory); jmsProps.put("jbpm.audit.jms.queue", queue); ksession.addProcessEventListener(AuditLoggerFactory.newInstance(Type.JMS, ksession, jmsProps)); // invoke methods on your session here This is just one of the possible ways to configure the JMS audit logger. You can use the AuditLoggerFactory class to set additional configuration parameters. 67.2.4. Auditing of variables By default, values of process and task variables are stored in audit tables as string representations. 
To create string representations of non-string variable types, the process engine calls the variable.toString() method. If you use a custom class for a variable, you can implement this method for the class. In many cases this representation is sufficient. However, sometimes a string representation in the logs might not be sufficient, especially when there is a need for efficient queries by process or task variables. For example, a Person object, used as a value for a variable, might have the following structure: Example Person object, used as a process or task variable value public class Person implements Serializable { private static final long serialVersionUID = -5172443495317321032L; private String name; private int age; public Person(String name, int age) { this.name = name; this.age = age; } public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } @Override public String toString() { return "Person [name=" + name + ", age=" + age + "]"; } } The toString() method provides a human-readable format. However, it might not be sufficient for a search. A sample string value is Person [name="john", age="34"] . Searching through a large number of such strings to find people of age 34 would make a database query inefficient. To enable more efficient searching, you can audit variables using VariableIndexer objects, which extract relevant parts of the variable for storage in the audit log. Definition of the VariableIndexer interface /** * Variable indexer that transforms a variable instance into another representation (usually string) * for use in log queries. * * @param <V> type of the object that will represent the indexed variable */ public interface VariableIndexer<V> { /** * Tests if this indexer can index a given variable * * NOTE: only one indexer can be used for a given variable * * @param variable variable to be indexed * @return true if the variable should be indexed with this indexer */ boolean accept(Object variable); /** * Performs an index/transform operation on the variable. The result of this operation can be * either a single value or a list of values, to support complex type separation. * For example, when the variable is of the type Person that has name, address, and phone fields, * the indexer could build three entries out of it to represent individual fields: * person = person.name * address = person.address.street * phone = person.phone * this configuration allows advanced queries for finding relevant entries. * @param name name of the variable * @param variable actual variable value * @return */ List<V> index(String name, Object variable); } The default indexer uses the toString() method to produce a single audit entry for a single variable. Other indexers can return a list of objects from indexing a single variable. To enable efficient queries for the Person type, you can build a custom indexer that indexes a Person instance into separate audit entries, one representing the name and another representing the age. 
Sample indexer for the Person type public class PersonTaskVariablesIndexer implements TaskVariableIndexer { @Override public boolean accept(Object variable) { if (variable instanceof Person) { return true; } return false; } @Override public List<TaskVariable> index(String name, Object variable) { Person person = (Person) variable; List<TaskVariable> indexed = new ArrayList<TaskVariable>(); TaskVariableImpl personNameVar = new TaskVariableImpl(); personNameVar.setName("person.name"); personNameVar.setValue(person.getName()); indexed.add(personNameVar); TaskVariableImpl personAgeVar = new TaskVariableImpl(); personAgeVar.setName("person.age"); personAgeVar.setValue(person.getAge()+""); indexed.add(personAgeVar); return indexed; } } The process engine can use this indexer to index values when they are of the Person type, while all other variables are indexed with the default toString() method. Now, to query for process instances or tasks that refer to a person with age 34, you can use the following query: variable name: person.age variable value: 34 As a LIKE type query is not used, the database server can optimize the query and make it efficient on a large set of data. Custom indexers The process engine supports indexers for both process and task variables. However, it uses different interfaces for the indexers, because they must produce different types of objects that represent an audit view of the variable. You must implement the following interfaces to build custom indexers: For process variables: org.kie.internal.process.ProcessVariableIndexer For task variables: org.kie.internal.task.api.TaskVariableIndexer You must implement two methods for either of the interfaces: accept : Indicates whether a type is handled by this indexer. The process engine expects that only one indexer can index a given variable value, so it uses the first indexer that accepts the type. index : Indexes a value, producing an object or list of objects (usually strings) for inclusion in the audit log. After implementing the interface, you must package this implementation as a JAR file and list the implementation in one of the following files: For process variables, the META-INF/services/org.kie.internal.process.ProcessVariableIndexer file, which lists fully qualified class names of process variable indexers (single class name per line) For task variables, the META-INF/services/org.kie.internal.task.api.TaskVariableIndexer file, which lists fully qualified class names of task variable indexers (single class name per line) The ServiceLoader mechanism discovers the indexers using these files. When indexing a process or task variable, the process engine examines the registered indexers to find any indexer that accepts the value of the variable. If no other indexer accepts the value, the process engine applies the default indexer that uses the toString() method. 67.3. Transactions in the process engine The process engine supports Java Transaction API (JTA) transactions. The current version of the process engine does not support pure local transactions. If you do not provide transaction boundaries inside your application, the process engine automatically executes each method invocation on the process engine in a separate transaction. Optionally, you can specify the transaction boundaries in the application code, for example, to combine multiple commands into one transaction. 67.3.1. Registration of a transaction manager You must register a transaction manager in the environment to use user-defined transactions. 
The following sample code registers the transaction manager and uses JTA calls to specify transaction boundaries. Registering a transaction manager and using transactions // Create the entity manager factory EntityManagerFactory emf = EntityManagerFactoryManager.get().getOrCreate("org.jbpm.persistence.jpa"); TransactionManager tm = TransactionManagerServices.getTransactionManager(); // Set up the runtime environment RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get() .newDefaultBuilder() .addAsset(ResourceFactory.newClassPathResource("MyProcessDefinition.bpmn2"), ResourceType.BPMN2) .addEnvironmentEntry(EnvironmentName.TRANSACTION_MANAGER, tm) .get(); // Get the KIE session RuntimeManager manager = RuntimeManagerFactory.Factory.get().newPerRequestRuntimeManager(environment); RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get()); KieSession ksession = runtime.getKieSession(); // Start the transaction UserTransaction ut = InitialContext.doLookup("java:comp/UserTransaction"); ut.begin(); // Perform multiple commands inside one transaction ksession.insert( new Person( "John Doe" ) ); ksession.startProcess("MyProcess"); // Commit the transaction ut.commit(); You must provide a jndi.properties file in your root class path to create a JNDI InitialContextFactory object, because transaction-related objects like UserTransaction , TransactionManager , and TransactionSynchronizationRegistry are registered in JNDI. If your project includes the jbpm-test module, this file is already included by default. Otherwise, you must create the jndi.properties file with the following content: Content of the jndi.properties file java.naming.factory.initial=org.jbpm.test.util.CloseSafeMemoryContextFactory org.osjava.sj.root=target/test-classes/config org.osjava.jndi.delimiter=/ org.osjava.sj.jndi.shared=true This configuration assumes that the simple-jndi:simple-jndi artifact is present in the class path of your project. You can also use a different JNDI implementation. By default, the Narayana JTA transaction manager is used. If you want to use a different JTA transaction manager, you can change the persistence.xml file to use the required transaction manager. For example, if your application runs on Red Hat JBoss EAP version 7 or later, you can use the JBoss transaction manager. In this case, change the transaction manager property in the persistence.xml file: Transaction manager property in the persistence.xml file for the JBoss transaction manager <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform" /> Warning Using the Singleton strategy of the RuntimeManager class with JTA transactions ( UserTransaction or CMT) creates a race condition. This race condition can result in an IllegalStateException exception with a message similar to Process instance XXX is disconnected . To avoid this race condition, explicitly synchronize around the KieSession instance when invoking the transaction in the user application code. synchronized (ksession) { try { tx.begin(); // use ksession // application logic tx.commit(); } catch (Exception e) { //... } } 67.3.2. Configuring container-managed transactions If you embed the process engine in an application that executes in container-managed transaction (CMT) mode, for example, EJB beans, you must complete additional configuration. 
This configuration is especially important if the application runs on an application server that does not allow a CMT application to access a UserTransaction instance from JNDI, for example, WebSphere Application Server. The default transaction manager implementation in the process engine relies on UserTransaction to query transaction status and then uses the status to determine whether to start a transaction. In environments that prevent access to a UserTransaction instance, this implementation fails. To enable proper execution in CMT environments, the process engine provides a dedicated transaction manager implementation: org.jbpm.persistence.jta.ContainerManagedTransactionManager . This transaction manager expects that the transaction is active and always returns ACTIVE when the getStatus() method is invoked. Operations such as begin , commit , and rollback are no-op methods, because the transaction manager cannot affect these operations in container-managed transaction mode. Note During process execution your code must propagate any exceptions thrown by the engine to the container to ensure that the container rolls transactions back when necessary. To configure this transaction manager, complete the steps in this procedure. Procedure In your code, insert the transaction manager and persistence context manager into the environment before creating or loading a session: Inserting the transaction manager and persistence context manager into the environment Environment env = EnvironmentFactory.newEnvironment(); env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf); env.set(EnvironmentName.TRANSACTION_MANAGER, new ContainerManagedTransactionManager()); env.set(EnvironmentName.PERSISTENCE_CONTEXT_MANAGER, new JpaProcessPersistenceContextManager(env)); env.set(EnvironmentName.TASK_PERSISTENCE_CONTEXT_MANAGER, new JPATaskPersistenceContextManager(env)); In the persistence.xml file, configure the JPA provider. The following example uses Hibernate and WebSphere Application Server. Configuring the JPA provider in the persistence.xml file <property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.CMTTransactionFactory"/> <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.WebSphereJtaPlatform"/> To dispose a KIE session, do not dispose it directly. Instead, execute the org.jbpm.persistence.jta.ContainerManagedTransactionDisposeCommand command. This command ensures that the session is disposed at the completion of the current transaction. In the following example, ksession is the KieSession object that you want to dispose. Disposing a KIE session using the ContainerManagedTransactionDisposeCommand command ksession.execute(new ContainerManagedTransactionDisposeCommand()); Directly disposing the session causes an exception at the completion of the transaction, because the process engine registers transaction synchronization to clean up the session state. 67.3.3. Transaction retries When the process engine commits a transaction, sometimes the commit operation fails because another transaction is being committed at the same time. In this case, the process engine must retry the transaction. If several retries fail, the transaction fails permanently. You can use JVM system properties to control the retrying process. Table 67.8. 
System properties for retrying committing transactions Property Values Default Description org.kie.optlock.retries Integer 5 This property describes how many times the process engine retries a transaction before failing permanently. org.kie.optlock.delay Integer 50 The delay time before the first retry, in milliseconds. org.kie.optlock.delayFactor Integer 4 The multiplier for increasing the delay time for each subsequent retry. With the default values, the process engine waits 50 milliseconds before the first retry, 200 milliseconds before the second retry, 800 milliseconds before the third retry, and so on. 67.4. Configuration of persistence in the process engine If you use the process engine without configuring any persistence, it does not save runtime data to any database; no in-memory database is available by default. You can use this mode if it is required for performance reasons or when you want to manage persistence yourself. To use JPA persistence in the process engine, you must configure it. Configuration usually requires adding the necessary dependencies, configuring a data source, and creating the process engine classes with persistence configured. 67.4.1. Configuration in the persistence.xml file To use JPA persistence, you must add a persistence.xml persistence configuration to your class path to configure JPA to use Hibernate and the H2 database (or any other database that you prefer). Place this file in the META-INF directory of your project. Sample persistence.xml file <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <persistence version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:orm="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <jta-data-source>jdbc/jbpm-ds</jta-data-source> <mapping-file>META-INF/JBPMorm.xml</mapping-file> <class>org.drools.persistence.info.SessionInfo</class> <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class> <class>org.drools.persistence.info.WorkItemInfo</class> <class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class> <class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class> <class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/> <property name="hibernate.max_fetch_depth" value="3"/> <property name="hibernate.hbm2ddl.auto" value="update"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.connection.release_mode" value="after_transaction"/> <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform"/> </properties> </persistence-unit> </persistence> The example refers to a jdbc/jbpm-ds data source. For instructions about configuring a data source, see Section 67.4.2, "Configuration of data sources for process engine persistence" . 67.4.2. Configuration of data sources for process engine persistence To configure JPA persistence in the process engine, you must provide a data source, which represents a database backend. 
If you run your application in an application server, such as Red Hat JBoss EAP, you can use the application server to set up data sources, for example, by adding a data source configuration file in the deploy directory. For instructions about creating data sources, see the documentation for the application server. If you deploy your application to Red Hat JBoss EAP, you can create a data source by creating a configuration file in the deploy directory: Example data source configuration file for Red Hat JBoss EAP <?xml version="1.0" encoding="UTF-8"?> <datasources> <local-tx-datasource> <jndi-name>jdbc/jbpm-ds</jndi-name> <connection-url>jdbc:h2:tcp://localhost/~/test</connection-url> <driver-class>org.h2.jdbcx.JdbcDataSource</driver-class> <user-name>sa</user-name> <password></password> </local-tx-datasource> </datasources> If your application runs in a plain Java environment, you can use Narayana and Tomcat DBCP by using the DataSourceFactory class from the kie-test-util module supplied by Red Hat Process Automation Manager. See the following code fragment. This example uses the H2 in-memory database in combination with Narayana and Tomcat DBCP. Example code configuring an H2 in-memory database data source Properties driverProperties = new Properties(); driverProperties.put("user", "sa"); driverProperties.put("password", "sa"); driverProperties.put("url", "jdbc:h2:mem:jbpm-db;MVCC=true"); driverProperties.put("driverClassName", "org.h2.Driver"); driverProperties.put("className", "org.h2.jdbcx.JdbcDataSource"); PoolingDataSourceWrapper pdsw = DataSourceFactory.setupPoolingDataSource("jdbc/jbpm-ds", driverProperties); 67.4.3. Dependencies for persistence Persistence requires certain JAR artifact dependencies. The jbpm-persistence-jpa.jar file is always required. This file contains the code for saving the runtime state whenever necessary. Depending on the persistence solution and database you are using, you might need additional dependencies. The default configuration combination includes the following components: Hibernate as the JPA persistence provider H2 in-memory database Narayana for JTA-based transaction management Tomcat DBCP for connection pooling capabilities This configuration requires the following additional dependencies: jbpm-persistence-jpa ( org.jbpm ) drools-persistence-jpa ( org.drools ) persistence-api ( javax.persistence ) hibernate-entitymanager ( org.hibernate ) hibernate-annotations ( org.hibernate ) hibernate-commons-annotations ( org.hibernate ) hibernate-core ( org.hibernate ) commons-collections ( commons-collections ) dom4j ( org.dom4j ) jta ( javax.transaction ) narayana-jta ( org.jboss.narayana.jta ) tomcat-dbcp ( org.apache.tomcat ) jboss-transaction-api_1.2_spec ( org.jboss.spec.javax.transaction ) javassist ( javassist ) slf4j-api ( org.slf4j ) slf4j-jdk14 ( org.slf4j ) simple-jndi ( simple-jndi ) h2 ( com.h2database ) jbpm-test ( org.jbpm ) only for testing, do not include this artifact in the production application 67.4.4. Creating a KIE session with persistence If your code creates KIE sessions directly, you can use the JPAKnowledgeService class to create your KIE session. This approach provides full access to the underlying configuration. Procedure Create a KIE session using the JPAKnowledgeService class, based on a KIE base, a KIE session configuration (if necessary), and an environment. The environment must contain a reference to the Entity Manager Factory that you use for persistence. 
Creating a KIE session with persistence // create the entity manager factory and register it in the environment EntityManagerFactory emf = Persistence.createEntityManagerFactory( "org.jbpm.persistence.jpa" ); Environment env = KnowledgeBaseFactory.newEnvironment(); env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf ); // create a new KIE session that uses JPA to store the runtime state StatefulKnowledgeSession ksession = JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env ); int sessionId = ksession.getId(); // invoke methods on your session here ksession.startProcess( "MyProcess" ); ksession.dispose(); To re-create a session from the database based on a specific session ID, use the JPAKnowledgeService.loadStatefulKnowledgeSession() method: Re-creating a KIE session from the persistence database // re-create the session from database using the sessionId ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env ); 67.4.5. Persistence in the runtime manager If your code uses the RuntimeManager class, use the RuntimeEnvironmentBuilder class to configure the environment for persistence. By default, the runtime manager searches for the org.jbpm.persistence.jpa persistence unit. The following example creates a KieSession with an empty context. Creating a KIE session with an empty context using the runtime manager RuntimeEnvironmentBuilder builder = RuntimeEnvironmentBuilder.Factory.get() .newDefaultBuilder() .knowledgeBase(kbase); RuntimeManager manager = RuntimeManagerFactory.Factory.get() .newSingletonRuntimeManager(builder.get(), "com.sample:example:1.0"); RuntimeEngine engine = manager.getRuntimeEngine(EmptyContext.get()); KieSession ksession = engine.getKieSession(); The example requires a KIE base as the kbase parameter. You can use a kmodule.xml KJAR descriptor on the class path to build the KIE base. Building a KIE base from a kmodule.xml KJAR descriptor KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieBase kbase = kContainer.getKieBase("kbase"); A kmodule.xml descriptor file can include an attribute for resource packages to scan to find and deploy process engine workflows. Sample kmodule.xml descriptor file <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule"> <kbase name="kbase" packages="com.sample"/> </kmodule> To control the persistence, you can use the RuntimeEnvironmentBuilder::entityManagerFactory methods. Controlling configuration of persistence in the runtime manager EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa"); RuntimeEnvironment runtimeEnv = RuntimeEnvironmentBuilder.Factory .get() .newDefaultBuilder() .entityManagerFactory(emf) .knowledgeBase(kbase) .get(); StatefulKnowledgeSession ksession = (StatefulKnowledgeSession) RuntimeManagerFactory.Factory.get() .newSingletonRuntimeManager(runtimeEnv) .getRuntimeEngine(EmptyContext.get()) .getKieSession(); After creating the ksession KIE session in this example, you can call methods in ksession , for example, startProcess() . The process engine persists the runtime state in the configured data source. You can restore a process instance from persistent storage by using the process instance ID. The runtime manager automatically re-creates the required session. 
Re-creating a KIE session from the persistence database using a process instance ID RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get(processInstanceId)); KieSession session = runtime.getKieSession(); 67.5. Persisting process variables in a separate database schema in Red Hat Process Automation Manager When you create process variables to use within the processes that you define, Red Hat Process Automation Manager stores those process variables as binary data in a default database schema. You can persist process variables in a separate database schema for greater flexibility in maintaining and implementing your process data. For example, persisting your process variables in a separate database schema can help you perform the following tasks: Maintain process variables in human-readable format Make the variables available to services outside of Red Hat Process Automation Manager Clear the log of the default database tables in Red Hat Process Automation Manager without losing process variable data Note This procedure applies to process variables only. This procedure does not apply to case variables. Prerequisites You have defined processes in Red Hat Process Automation Manager for which you want to implement variables. If you want to persist variables in a database schema outside of Red Hat Process Automation Manager, you have created a data source and the separate database schema that you want to use. For information about creating data sources, see Configuring Business Central settings and properties . Procedure In the data object file that you use as a process variable, add the following elements to configure variable persistence: Example Person.java object configured for variable persistence @javax.persistence.Entity 1 @javax.persistence.Table(name = "Person") 2 public class Person extends org.drools.persistence.jpa.marshaller.VariableEntity 3 implements java.io.Serializable { 4 static final long serialVersionUID = 1L; @javax.persistence.GeneratedValue(strategy = javax.persistence.GenerationType.AUTO, generator = "PERSON_ID_GENERATOR") @javax.persistence.Id 5 @javax.persistence.SequenceGenerator(name = "PERSON_ID_GENERATOR", sequenceName = "PERSON_ID_SEQ") private java.lang.Long id; private java.lang.String name; private java.lang.Integer age; public Person() { } public java.lang.Long getId() { return this.id; } public void setId(java.lang.Long id) { this.id = id; } public java.lang.String getName() { return this.name; } public void setName(java.lang.String name) { this.name = name; } public java.lang.Integer getAge() { return this.age; } public void setAge(java.lang.Integer age) { this.age = age; } public Person(java.lang.Long id, java.lang.String name, java.lang.Integer age) { this.id = id; this.name = name; this.age = age; } } 1 Configures the data object as a persistence entity. 2 Defines the database table name used for the data object. 3 Creates a separate MappedVariable mapping table that maintains the relationship between this data object and the associated process instance. If you do not need this relationship maintained, you do not need to extend the VariableEntity class. Without this extension, the data object is still persisted, but contains no additional data. 4 Configures the data object as a serializable object. 5 Sets a persistence ID for the object. 
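After the data object is configured for persistence, you pass it to a process like any other variable, and the JPA marshalling strategy that you configure later in this procedure stores it in the separate schema instead of as binary data. The following is a minimal sketch of starting a process with such a variable; the process ID com.sample.evaluation and the variable name person are assumptions for illustration:
Map<String, Object> params = new HashMap<>();
// the Person entity is stored through the JPA marshalling strategy
params.put("person", new Person(null, "John", 35));
ksession.startProcess("com.sample.evaluation", params);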
To make the data object persistable using Business Central, navigate to the data object file in your project, click the Persistence icon in the upper-right corner of the window, and configure the persistence behavior: Figure 67.1. Persistence configuration in Business Central In the pom.xml file of your project, add the following dependency for persistence support. This dependency contains the VariableEntity class that you configured in your data object. Project dependency for persistence <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> <version>${rhpam.version}</version> <scope>provided</scope> </dependency> In the ~/META-INF/kie-deployment-descriptor.xml file of your project, configure the JPA marshalling strategy and a persistence unit to be used with the marshaller. The JPA marshalling strategy and persistence unit are required for objects defined as entities. JPA marshaller and persistence unit configured in the kie-deployment-descriptor.xml file <marshalling-strategy> <resolver>mvel</resolver> <identifier>new org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy("myPersistenceUnit", classLoader)</identifier> <parameters/> </marshalling-strategy> In the ~/META-INF directory of your project, create a persistence.xml file that specifies in which data source you want to persist the process variable: Example persistence.xml file with data source configuration <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:orm="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"> <persistence-unit name="myPersistenceUnit" transaction-type="JTA"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> 1 <class>org.space.example.Person</class> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/> <property name="hibernate.max_fetch_depth" value="3"/> <property name="hibernate.hbm2ddl.auto" value="update"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.id.new_generator_mappings" value="false"/> <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/> </properties> </persistence-unit> </persistence> 1 Sets the data source in which the process variable is persisted. To configure the marshalling strategy, persistence unit, and data source using Business Central, navigate to project Settings → Deployments → Marshalling Strategies and to project Settings → Persistence : Figure 67.2. JPA marshaller configuration in Business Central Figure 67.3. Persistence unit and data source configuration in Business Central
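Because the variables are now stored as ordinary rows in the configured data source, services outside of Red Hat Process Automation Manager can read them with plain JPA or SQL. The following is a minimal sketch of such an external lookup; it assumes the external application defines its own persistence unit (named myPersistenceUnit here, to match the example above) that maps the same Person entity and has the Person class, and its VariableEntity parent from drools-persistence-jpa, on its class path:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit");
EntityManager em = emf.createEntityManager();
// read the persisted process variables as ordinary JPA entities
List<Person> people = em.createQuery("SELECT p FROM Person p", Person.class).getResultList();
em.close();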
|
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/persistence-con_process-engine
|
Chapter 7. Planning storage and shared file systems
|
Chapter 7. Planning storage and shared file systems Red Hat OpenStack Services on OpenShift (RHOSO) uses ephemeral and persistent storage to service the storage needs of the deployment. Ephemeral storage is associated with a specific Compute instance. When this instance is terminated, so is the associated ephemeral storage. Ephemeral storage is useful for runtime requirements, such as storing the operating system of an instance. Persistent storage is independent of any running instance. Persistent storage is useful for storing reusable data, such as data volumes, disk images, and shareable file systems. The storage requirements of the deployment should be taken into consideration and carefully planned before beginning the deployment. This includes considerations such as: Supported features and topologies Storage technologies Networking Scalability Accessibility Performances Costs Security Redundancy and disaster recovery Storage management 7.1. Supported storage features and topologies RHOSO supports the following storage and networking features: Red Hat Ceph Storage integration: Ceph Block Device (RBD) with the Block Storage service (cinder) for persistent storage, the Image service (glance), and the Compute service (nova) for ephemeral storage. Ceph File System (Native CephFS or CephFS via NFS) with the Shared File Systems service (manila). Object Storage service integration with Ceph Object Gateway (RGW) Hyperconverged infrastructure (HCI): Hyperconverged infrastructures consist of hyperconverged nodes. Hyperconverged nodes are external data plane nodes with Compute and Red Hat Ceph Storage services colocated on the same nodes for optimized hardware footprint. Transport protocols for the Block Storage service with appropriate configuration and drivers: NVMe over TCP RBD NFS FC Note You must install host bus adapters (HBAs) on all Compute and OCP workers nodes in any deployment that uses the Block Storage service and a Fibre Channel (FC) back end. iSCSI Multipathing with iSCSI, FC, and NVMe over TCP is available on the control plane with the appropriate RHOCP MachineConfig. Transport protocols for the Shared File Systems service with appropriate configuration and drivers: CephFS NFS CIFS Object Storage through native Swift or Amazon S3 compatible API RHOSO supports the following storage services. Service Back ends Image service (glance) Red Hat Ceph Storage RBD Block Storage (cinder) Object Storage (swift) NFS Compute service (nova) local file storage Red Hat Ceph Storage RBD Block Storage service (cinder) Red Hat Ceph Storage RBD Fiber Channel iSCSI NFS NVMe over TCP Note Support is provided through third party drivers. Shared File Systems service (manila) Red Hat Ceph Storage CephFS Red Hat Ceph Storage CephFS-NFS NFS or CIFS through third party vendor storage systems Object Storage service (swift) disks on external data plane nodes PersistentVolumes (PVs) on OpenShift nodes (default) Integration with Ceph RGW To manage the consumption of system resources by projects, you can configure quotas for the Block Storage service (cinder) and the Shared File Systems service (manila). You can override the default quotas so that individual projects have different consumption limits. 7.2. Storage technologies RHOSO supports a number of storage technologies that can act separately or in combination to provide the storage solution for your deployment. 7.2.1. Red Hat Ceph Storage Red Hat Ceph Storage is a distributed data object store designed for performance, reliability, and scalability. 
Distributed object stores use unstructured data to simultaneously service modern and legacy object interfaces. It provides access to block, file, and object storage. Red Hat Ceph Storage is deployed as a cluster. A cluster consists of two primary types of daemons: Ceph Object Storage Daemon (CephOSD) - The CephOSD performs data storage, data replication, rebalancing, recovery, monitoring, and reporting tasks. Ceph Monitor (CephMon) - The CephMon maintains the primary copy of the cluster map with the current state of the cluster. RHOSO supports Red Hat Ceph Storage 7 in the following deployment scenarios: Integration with an externally deployed Red Hat Ceph Storage 7 cluster. A hyperconverged infrastructure (HCI) environment that consists of external data plane nodes that have Compute and Red Hat Ceph Storage services colocated on the same nodes for optimized resource use. Note Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports erasure coding with Red Hat Ceph Storage Object Gateway (RGW). Erasure coding with the Red Hat Ceph Storage Block Device (RDB) is not currently supported. For more information about Red Hat Ceph Storage architecture, see the Red Hat Ceph Storage 7 Architecture Guide . 7.2.2. Block storage (cinder) The Block Storage service (cinder) allows users to provision block storage volumes on back ends. Users can attach volumes to instances to augment their ephemeral storage with general-purpose persistent storage. You can detach and re-attach volumes to instances, but you can only access these volumes through the attached instance. You can also configure instances so that they do not use ephemeral storage. Instead of using ephemeral storage, you can configure the Block Storage service to write images to a volume. You can then use the volume as a bootable root volume for an instance. Volumes also provide inherent redundancy and disaster recovery through backups and snapshots. However, backups are only provided if you deploy the optional Block Storage backup service. In addition, you can encrypt volumes for added security. 7.2.3. Images (glance) The Image service (glance) provides discovery, registration, and delivery services for instance images. It also provides the ability to store snapshots of instances ephemeral disks for cloning or restore purposes. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services. 7.2.4. Object Storage (swift) The Object Storage service (swift) provides a fully-distributed storage solution that you can use to store any kind of static data or binary object; such as media files, large datasets, and disk images. The Object Storage service organizes objects by using object containers, which are similar to directories in a file system, but they cannot be nested. You can use the Object Storage service as a repository for nearly every service in the cloud. Red Hat Ceph Storage RGW can be used as an alternative to the Object Storage service. 7.2.5. Shared File Systems (manila) The Shared File Systems service (manila) provides the means to provision remote, shareable file systems. These are known as shares. Shares allow projects in the cloud to share POSIX compliant storage, and they can be consumed by multiple instances simultaneously. Shares are used for instance consumption, and they can be consumed by multiple instances at the same time with read/write access mode. 7.3. 
Storage networks Two default storage-related networks are configured during the RHOSO installation: the Storage and Storage Management networks. These isolated networks follow best practices for network connectivity between storage components and the deployments. The Storage network is used for data storage access and retrieval. The Storage Management network is used by RHOSO services to have access to specific interfaces in the storage solution that allows access to the management consoles. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data. The following table lists the properties of the default storage-related networks. Network name VLAN CIDR NetConfig allocationRange MetalLB IPAddressPool range nad ipam range OCP worker nncp range storage 21 172.18.0.0/24 172.18.0.100 - 172.18.0.250 N/A 172.18.0.30 - 172.18.0.70 172.18.0.10 - 172.18.0.20 storageMgmt 23 172.20.0.0/24 172.20.0.100 - 172.20.0.250 N/A 172.20.0.30 - 172.20.0.70 172.20.0.10 - 172.20.0.20 Your storage solution may require additional network configurations. These defaults provide a basis for building a full deployment. All Block Storage services with back ends ( cinder-volume and cinder-backup ) require access to all the storage networks, which may not include the storage management network depending on the back end. Block Storage services with back ends require access only to their storage management network. In most deployments there's a single management network, but if there are multiple storage management networks, each service-back end pair only needs access to their respective management network. You must install host bus adapters (HBAs) on all OCP worker nodes in any deployment that uses the Block Storage service and a Fibre Channel (FC) back end. 7.3.1. Planning networking for the Block Storage service Storage best practices recommend using two different networks: One network for data I/O One network for storage management These networks are referred to as storage and storageMgmt . If your deployment diverges from the architecture of two networks, adapt the documented examples as necessary. For example, if the management interface for the storage system is available on the storage network, replace storageMgmt with storage when there is only one network, and remove storageMgmt when the storage network is already present. The storage services in Red Hat OpenStack Services on OpenShift (RHOSO), with the exception of the Object Storage service (swift), require access to the storage and storageMgmt networks. You can configure the storage and storageMgmt networks in the networkAttachments field of your OpenStackControlPlane CR. The networkAttachments field accepts a list of strings with all the networks the component requires access to. Different components can have different network requirements, for example, the Block Storage service (cinder) API component does not require access to any of the storage networks. The following example shows the networkAttachments for Block Storage volumes: 7.3.2. Planning networking for the Shared File Systems service Plan the networking on your cloud to ensure that cloud users can connect their shares to workloads that run on Red Hat OpenStack Services on OpenShift (RHOSO) virtual machines, bare-metal servers, and containers. Depending on the level of security and isolation required for cloud users, you can set the driver_handles_share_servers parameter (DHSS) to true or false . 
7.3.2.1. Setting DHSS to true If you set the DHSS parameter to true , you can use the Shared File Systems service to export shares to end-user defined share networks with isolated share servers. Users can provision their workloads on self-service share networks to ensure that isolated NAS file servers on dedicated network segments export their shares. As a project administrator, you must ensure that the physical network to which you map the isolated networks extends to your storage infrastructure. You must also ensure that the storage system that you are using supports network segments. Storage systems, such as NetApp ONTAP and Dell EMC PowerMax, Unity, and VNX, do not support virtual overlay segmentation styles such as GENEVE or VXLAN. As an alternative to overlay networking, you can do any of the following: Use VLAN networking for your project networks. Allow VLAN segments on shared provider networks. Provide access to a pre-existing segmented network that is already connected to your storage system. 7.3.2.2. Setting DHSS to false If you set the DHSS parameter to false , cloud users cannot create shares on their own share networks. You can create a dedicated shared storage network, and cloud users must connect their clients to the configured network to access their shares. Not all Shared File System storage drivers support both DHSS=true and DHSS=false . Both DHSS=true and DHSS=false ensure data path multi-tenancy isolation. However, if you require network path multi-tenancy isolation for tenant workloads as part of a self-service model, you must deploy the Shared File Systems service (manila) with back ends that support DHSS=true . 7.3.2.3. Ensuring network connectivity to the share To connect to a file share, clients must have network connectivity to one or more of the export locations for that share. When administrators set the driver_handles_share_servers parameter (DHSS) for a share type to true , cloud users can create a share network with the details of a network to which the Compute instance attaches. Cloud users can then reference the share network when creating shares. When administrators set the DHSS parameter for a share type to false , cloud users must connect their Compute instance to the shared storage network that has been configured for their Red Hat OpenStack Services on OpenShift (RHOSO) deployment. For more information about how to configure and validate network connectivity to a shared network, see Connecting to a shared network to access shares in Performing storage operations . 7.4. Scalability and back-end storage In general, a clustered storage solution provides greater back end scalability and resiliency. For example, when you use Red Hat Ceph Storage as a Block Storage (cinder) back end, you can scale storage capacity and redundancy by adding more Ceph Object Storage Daemon (OSD) nodes. Block Storage, Object Storage (swift), and Shared File Systems Storage (manila) services support Red Hat Ceph Storage as a back end. The Block Storage service can use multiple storage solutions as discrete back ends. At the service level, you can scale capacity by adding more back ends. By default, the Object Storage service consumes space by allocating persistent volumes in the OpenShift underlying infrastructure. It can be configured to use a file system on dedicated storage nodes, and it can use as much space as is available. 
The Object Storage service supports the XFS and ext4 file systems, and you can scale both file systems to consume as much underlying block storage as is available. You can also scale capacity by adding more storage devices to the storage node. The Shared File Systems service provisions file shares from designated storage pools that are managed by Red Hat Ceph Storage or other back-end storage systems. You can scale this shared storage by increasing the size or number of storage pools available to the service or by adding more back-end storage systems to the deployment. Each back-end storage system is integrated with a dedicated service to interact with and manage the storage system. 7.5. Storage accessibility and administration Volumes are consumed only through instances. Users can extend volumes, create snapshots of volumes, and use the snapshots to clone a volume or restore it to a previous state. You can use the Block Storage service (cinder) to create volume types, which aggregate volume settings. You can associate volume types with encryption and Quality of Service (QoS) specifications to provide different levels of performance for your cloud users. Your cloud users can specify the volume type they require when creating new volumes. For example, volumes that use higher performance QoS specifications could provide your users with more IOPS, or your users could assign lighter workloads to volumes that use lower performance QoS specifications to conserve resources. Shares can be consumed simultaneously by one or more instances, bare metal nodes or containers. The Shared File Systems service (manila) also supports share resize, snapshots and cloning, and administrators can create share types to aggregate settings. Users can access objects in a container by using the Object Storage service (swift) API, and administrators can make objects accessible to instances and services in the cloud. This accessibility makes objects ideal as repositories for services; for example, you can store Image service (glance) images in containers that are managed by the Object Storage service. 7.6. Storage security The Block Storage service provides data security through the Key Manager service (barbican). The Block Storage service uses a one-to-one key-to-volume mapping, with the key managed by the Key Manager service. The encryption type is defined when configuring the volume type. Security can also be improved at the back-end level by encrypting control and data traffic. For example, with Red Hat Ceph Storage this can be achieved by enabling messenger v2 secure mode, so that network traffic among Ceph services, as well as traffic from OpenStack Compute nodes, is encrypted. You configure object and container security at the service and node level. The Object Storage service (swift) provides no native encryption for containers and objects. However, with the Key Manager service enabled, the Object Storage service can transparently encrypt and decrypt your stored (at-rest) objects. At-rest encryption is distinct from in-transit encryption in that it refers to the objects being encrypted while being stored on disk. The Shared File Systems service (manila) can secure shares through access restriction, whether by instance IP, user or group, or TLS certificate. Some Shared File Systems service deployments can feature separate share servers to manage the relationship between share networks and shares. Some share servers support, or even require, additional network security.
For example, a CIFS share server requires the deployment of an LDAP, Active Directory, or Kerberos authentication service. Some backends also support encrypting the data AT REST. This enables extra security by encrypting the backend disks themselves, preventing physical security threats such as theft or unwiped recycled disks. For more information about configuring security options for the Block Storage service, Object Storage service, and Shared File Systems service, see Configuring security services . 7.7. Storage redundancy and disaster recovery If you deploy the optional Block Storage backup service, then the Block Storage service (cinder) provides volume backup and restoration for basic disaster recovery of user storage. You can use backups to protect volume contents. The Block Storage service also supports snapshots. In addition to cloning, you can use snapshots to restore a volume to a state. If your environment includes multiple back ends, you can also migrate volumes between these back ends. This is useful if you need to take a back end offline for maintenance. Backups are typically stored in a storage back end separate from their source volumes to help protect the data. This is not possible with snapshots because snapshots are dependent on their source volumes. The Block Storage service also supports the creation of consistency groups to group volumes together for simultaneous snapshot creation. This provides a greater level of data consistency across multiple volumes. Note Red Hat does not currently support Block Storage service replication. The Object Storage service (swift) provides no built-in backup features. You must perform all backups at the file system or node level. However, the Object Storage service features robust redundancy and fault tolerance. Even the most basic deployment of the Object Storage service replicates objects multiple times. You can use failover features like device mapper multipathing (DM Multipath) to enhance redundancy. The Shared File Systems service (manila) provides no built-in backup features for shares, but you can create snapshots for cloning and restoration. 7.8. Managing the storage solution You can manage your RHOSO configuration using the RHOSO Dashboard (horizon) or the RHOSO command line interface (CLI). You can perform most procedures using either method but some advanced procedures can only be completed using the CLI. You can manage your storage solution configuration using the dedicated management interface provided by the storage vendor. 7.9. Sizing Red Hat OpenShift storage The Image and Object Storage services can be configured to allocate space in the Red Hat OpenShift backing storage. In this scenario, the Red Hat OpenShift storage sizing should be estimated based on the expected use of these services. 7.9.1. Image service considerations The Image service (glance) requires a staging area to manipulate data during an import operation. It is possible to copy image data into multiple stores so some persistence is required for the Image service. Although PVCs represent the main storage model for the Image service, an External model can also be chosen. External model If External is chosen, no PVCs are created and the Image service acts like a stateless instance with no persistence provided. In this instance, persistence must be provided using extraMounts . NFS is often used to provide persistence. It can be mapped to /var/lib/glance : Replace <nfs_export_path> with the export path of your NFS share. 
Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service. It should be noted that the configuration sample conflicts with the distributed image import feature. Distributed image import requires RWO storage plugged into a particular instance; it owns the data and receives requests in case its staged data is required for an upload operation. When the External model is adopted, if Red Hat Ceph Storage is used as a backend, and an image conversion operation is run in one of the existing replicas, the glance-operator does not have to make any assumption about the underlying storage that is tied to the staging area, and the conversion operation that uses the os_glance_staging_store directory (within the Pod) interacts with the RWX NFS backend provided via extraMounts . With this scenario, no image-cache PVC can be requested and mounted to a subPath, because it should be the administrator's responsibility to plan for persistence using extraMounts . PVC model The PVC model is the default. When a GlanceAPI instance is deployed, a PVC is created and bound to /var/lib/glance according to the storageClass and storageRequest passed as input. In this model, if Red Hat Ceph Storage is set as a backend, no dedicated image conversion PVC is created. The administrator must think about the PVC sizing in advance; the size of the PVC should be at least up to the largest converted image size. Concurrent conversions within the same Pod might be problematic in terms of PVC size; a conversion will fail or cannot take place if the PVC is full and there's not enough space. The upload should be retried after the conversion is over and the staging area space is released. However, concurrent conversion operations might happen in different Pods. You should deploy at least 3 replicas for a particular glanceAPI . This helps to handle heavy operations like image conversion. For a PVC-based layout, the scale out of a glanceAPI in terms of replicas is limited by the available storage provided by the storageClass , and depends on the storageRequest . The storageRequest is a critical parameter, it can be globally defined for all the glanceAPI , or defined with a different value for each API. It will influence the scale out operations for each of them. Other than a local PVC required for the staging area, it is possible to enable image cache, which is translated into an additional PVC bound to each glanceAPI instance. A glance-cache PVC is bound to /var/lib/glance/image-cache . The glance-operator configures the glanceAPI instance accordingly, setting both image_cache_max_size and the image_cache_dir parameters. The number of image cache PVCs follows the same rules described for the local PVC, and the number of requested PVCs is proportional to the number of replicas. 7.9.2. Object Storage service considerations The Object Storage service requires storage devices for data. These devices must be accessible using the same hostname or IP address during their lifetime. The configuration of a StatefulSet with a Headless Service is how this is achieved. If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed. 
The Object Storage service requires quite a few services to access these PVs, and all of them are running in a single pod. Additionally, volumes are not deleted if the StatefulSet is deleted. An unwanted removal of the StatefulSet (or the whole deployment) will not immediately result in a catastrophic data loss, and can be recovered from with administrator interaction. The Headless Service makes it possible to access the storage pod directly by using a DNS name. For example, if the pod name is swift-storage-0 and the SwiftStorage instance is named swift-storage , it becomes accessible using swift-storage-0.swift-storage . This makes it easily usable within the Object Storage service rings, and IP changes are now transparent and don't require an update of the rings. Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and to not wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod. This option only affects the behavior for scaling operations. Updates are not affected. This is required to scale by more than one pod at a time, including new deployments with more than one replica. All pods must be created at the same time; otherwise, there will be PVCs that are not bound and the Object Storage service rings cannot be created, eventually blocking the start of these pods. Storage pods should be distributed to different nodes to avoid single points of failure. A podAntiAffinity rule with preferredDuringSchedulingIgnoredDuringExecution is used to distribute pods to different nodes if possible. Using a separate storageClass and PersistentVolumes that are located on different nodes can be used to enforce further distribution. Object Storage service backend services must only be accessible by other backend services and the Object Storage service proxy. To limit access, a NetworkPolicy is added to allow only traffic between these pods. The NetworkPolicy itself depends on labels, and these must match to allow traffic. Therefore labels must not be unique; instead all pods must use the same label to allow access. This is also the reason why the swift-operator is not using labels from lib-common . Object Storage service rings require information about the disks to use, and this includes sizes and hostnames or IPs. Sizes are not known when starting the StatefulSet using PVCs; the size requirement is a lower limit, but the actual PVs might be much bigger. However, StatefulSets do create PVCs before the ConfigMaps are available and simply wait to start the pods until these become available. The SwiftRing reconciler is watching the SwiftStorage instances and iterates over PVCs to get actual information about the used disks. Once these are bound, the size is known and the swift-ring-rebalance job creates the Swift rings and eventually the ConfigMap . After the ConfigMap becomes available, StatefulSets will start the service pods. Rings are stored in a ConfigMap mounted by the SwiftProxy and SwiftStorage instances using projected volumes. This makes it possible to mount all required files at the same place, without merging these from other places. Updated ConfigMaps will update these files, and these changes are detected by the Swift services, which eventually reload them. Some operators are using the customServiceConfig option to customize settings. However, the SwiftRing instance deploys multiple backend services, and each of these requires specific files to be customized.
Therefore only defaultConfigOverwrite using specific keys as filenames is supported when using the swift-operator .
|
[
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderVolumes: iscsi: networkAttachments: - storage - storageMgmt",
"default: storage: external: true extraMounts: - extraVol: - extraVolType: NFS mounts: - mountPath: /var/lib/glance/os_glance_staging_store name: nfs volumes: - name: nfs nfs: path: <nfs_export_path> server: <nfs_ip_address>",
"default: replicas: 3 storage: storageRequest: 10G"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/assembly_planning-storage
|
Chapter 13. PMML model execution
|
Chapter 13. PMML model execution You can import PMML files into your Red Hat Process Automation Manager project using Business Central ( Menu → Design → Projects → Import Asset ) or package the PMML files as part of your project knowledge JAR (KJAR) file without Business Central. After you implement your PMML files in your Red Hat Process Automation Manager project, you can execute the PMML-based decision service by embedding PMML calls directly in your Java application or by sending an ApplyPmmlModelCommand command to a configured KIE Server. For more information about including PMML assets with your project packaging and deployment method, see Packaging and deploying an Red Hat Process Automation Manager project . Note You can also include a PMML model as part of a Decision Model and Notation (DMN) service in Business Central. When you include a PMML model within a DMN file, you can invoke that PMML model as a boxed function expression for a DMN decision node or business knowledge model node. For more information about including PMML models in a DMN service, see Designing a decision service using DMN models . 13.1. Embedding a PMML trusty call directly in a Java application A KIE container is local when the knowledge assets are either embedded directly into the calling program or are physically pulled in using Maven dependencies for the KJAR. You embed knowledge assets directly into a project if there is a tight relationship between the version of the code and the version of the PMML definition. Any changes to the decision take effect after you have intentionally updated and redeployed the application. A benefit of this approach is that proper operation does not rely on any external dependencies to the run time, which can be a limitation of locked-down environments. Prerequisites A KJAR containing the PMML model to execute has been created. For more information about project packaging, see Packaging and deploying an Red Hat Process Automation Manager project . Procedure In your client application, add the following dependencies to the relevant classpath of your Java project: <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>${rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Process Automation Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Process Automation Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project.
Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHPAM product and maven library version? . Create a KIE container from classpath or ReleaseId : KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); Alternative option: KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer(); Create an instance of the PMMLRuntime that is used to execute the model: PMMLRuntime pmmlRuntime = KieRuntimeFactory.of(kieContainer.getKieBase()).get(PMMLRuntime.class); Create an instance of the PMMLRequestData class that applies your PMML model to a data set: PMMLRequestData pmmlRequestData = new PMMLRequestData({correlation_id}, {model_name}); pmmlRequestData.addRequestParam({parameter_name}, {parameter_value}) ... Create an instance of the PMMLContext class that contains the input data: PMMLContext pmmlContext = new PMMLContextImpl(pmmlRequestData); Retrieve the PMML4Result while executing the PMML model with the required PMML class instances that you created: PMML4Result pmml4Result = pmmlRuntime.evaluate({model_name}, pmmlContext); 13.2. Embedding a PMML legacy call directly in a Java application A KIE container is local when the knowledge assets are either embedded directly into the calling program or are physically pulled in using Maven dependencies for the KJAR. You embed knowledge assets directly into a project if there is a tight relationship between the version of the code and the version of the PMML definition. Any changes to the decision take effect after you have intentionally updated and redeployed the application. A benefit of this approach is that proper operation does not rely on any external dependencies to the run time, which can be a limitation of locked-down environments. Using Maven dependencies enables further flexibility because the specific version of the decision can dynamically change (for example, by using a system property), and it can be periodically scanned for updates and automatically updated. This introduces an external dependency on the deploy time of the service, but executes the decision locally, reducing reliance on an external service being available during run time. Prerequisites A KJAR containing the PMML model to execute has been created. For more information about project packaging, see Packaging and deploying an Red Hat Process Automation Manager project . 
Procedure In your client application, add the following dependencies to the relevant classpath of your Java project: <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>${rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Process Automation Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Process Automation Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHPAM product and maven library version? . Important To use the legacy implementation, ensure that the kie-pmml-implementation system property is set to legacy . Create a KIE container from classpath or ReleaseId : KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); Alternative option: KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer(); Create an instance of the PMMLRequestData class, which applies your PMML model to a set of data: public class PMMLRequestData { private String correlationId; 1 private String modelName; 2 private String source; 3 private List<ParameterInfo<?>> requestParams; 4 ... } 1 Identifies data that is associated with a particular request or result 2 The name of the model that should be applied to the request data 3 Used by internally generated PMMLRequestData objects to identify the segment that generated the request 4 The default mechanism for sending input data points Create an instance of the PMML4Result class, which holds the output information that is the result of applying the PMML-based rules to the input data: public class PMML4Result { private String correlationId; private String segmentationId; 1 private String segmentId; 2 private int segmentIndex; 3 private String resultCode; 4 private Map<String, Object> resultVariables; 5 ... } 1 Used when the model type is MiningModel . The segmentationId is used to differentiate between multiple segmentations. 2 Used in conjunction with the segmentationId to identify which segment generated the results. 3 Used to maintain the order of segments. 4 Used to determine whether the model was successfully applied, where OK indicates success. 5 Contains the name of a resultant variable and its associated value.
In addition to the normal getter methods, the PMML4Result class also supports the following methods for directly retrieving the values for result variables: public <T> Optional<T> getResultValue(String objName, String objField, Class<T> clazz, Object...params) public Object getResultValue(String objName, String objField, Object...params) Create an instance of the ParameterInfo class, which serves as a wrapper for basic data type objects used as part of the PMMLRequestData class: public class ParameterInfo<T> { 1 private String correlationId; private String name; 2 private String capitalizedName; private Class<T> type; 3 private T value; 4 ... } 1 The parameterized class to handle many different types 2 The name of the variable that is expected as input for the model 3 The class that is the actual type of the variable 4 The actual value of the variable Execute the PMML model based on the required PMML class instances that you have created: public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource("request"); DataSource<PMML4Result> resultData = executor.newDataSource("results"); DataSource<PMMLData> internalData = executor.newDataSource("pmmlData"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit("RuleUnitIndicator", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( "OK".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll("\\s",""); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + "." + javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+"."+javaModelId; packageNames.add(basePkgName); return packageNames; } Rules are executed by the RuleUnitExecutor class. The RuleUnitExecutor class creates KIE sessions and adds the required DataSource objects to those sessions, and then executes the rules based on the RuleUnit that is passed as a parameter to the run() method. 
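For example, a hypothetical caller might wire the pieces together as shown in the following sketch. The KIE base name, model name, package name, and input values are illustrative placeholders, not values from the product documentation:
// Hypothetical caller for the executeModel(...) example above.
// "rules", "SampleScorecard", "com.sample.pmml", and the input values are placeholders.
KieServices kieServices = KieServices.Factory.get();
KieContainer kieContainer = kieServices.getKieClasspathContainer();
KieBase kbase = kieContainer.getKieBase("rules");

Map<String, Object> inputs = new HashMap<>();
inputs.put("age", 33);
inputs.put("occupation", "SKYDIVER");

// The correlation ID ties the request to its PMML4Result; the package name narrows the RuleUnit lookup.
executeModel(kbase, inputs, "SampleScorecard", "request-001", "com.sample.pmml");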
The calculatePossiblePackageNames and the getStartingRuleUnit methods determine the fully qualified name of the RuleUnit class that is passed to the run() method. To facilitate your PMML model execution, you can also use a PMML4ExecutionHelper class supported in Red Hat Process Automation Manager. For more information about the PMML helper class, see Section 13.2.1, "PMML execution helper class" . 13.2.1. PMML execution helper class Red Hat Process Automation Manager provides a PMML4ExecutionHelper class that helps create the PMMLRequestData class required for PMML model execution and that helps execute rules using the RuleUnitExecutor class. The following are examples of a PMML model execution without and with the PMML4ExecutionHelper class, as a comparison: Example PMML model execution without using PMML4ExecutionHelper public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource("request"); DataSource<PMML4Result> resultData = executor.newDataSource("results"); DataSource<PMMLData> internalData = executor.newDataSource("pmmlData"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit("RuleUnitIndicator", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( "OK".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll("\\s",""); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + "." 
+ javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+"."+javaModelId; packageNames.add(basePkgName); return packageNames; } Example PMML model execution using PMML4ExecutionHelper public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String modelPkgName, String correlationId) { PMML4ExecutionHelper helper = PMML4ExecutionHelperFactory.getExecutionHelper(modelName, kbase); helper.addPossiblePackageName(modelPkgName); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); variables.entrySet().forEach(entry -> { request.addRequestParam(entry.getKey(), entry.getValue()); }); PMML4Result resultHolder = helper.submitRequest(request); if ("OK".equals(resultHolder.getResultCode())) { // extract result variables here } } When you use the PMML4ExecutionHelper , you do not need to specify the possible package names or the RuleUnit class as you would in a typical PMML model execution. To construct a PMML4ExecutionHelper class, you use the PMML4ExecutionHelperFactory class to determine how instances of PMML4ExecutionHelper are retrieved. The following are the available PMML4ExecutionHelperFactory class methods for constructing a PMML4ExecutionHelper class: PMML4ExecutionHelperFactory methods for PMML assets in a KIE base Use these methods when PMML assets have already been compiled and are being used from an existing KIE base: public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase) public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase, boolean includeMiningDataSources) PMML4ExecutionHelperFactory methods for PMML assets on the project classpath Use these methods when PMML assets are on the project classpath. The classPath argument is the project classpath location of the PMML file: public static PMML4ExecutionHelper getExecutionHelper(String modelName, String classPath, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName,String classPath, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources) PMML4ExecutionHelperFactory methods for PMML assets in a byte array Use these methods when PMML assets are in the form of a byte array: public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources) PMML4ExecutionHelperFactory methods for PMML assets in a Resource Use these methods when PMML assets are in the form of an org.kie.api.io.Resource object: public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources) Note The classpath, byte array, and resource PMML4ExecutionHelperFactory methods create a KIE container for the generated rules and Java classes. The container is used as the source of the KIE base that the RuleUnitExecutor uses. The container is not persisted. The PMML4ExecutionHelperFactory method for PMML assets that are already in a KIE base does not create a KIE container in this way. 13.3. Executing a PMML model using KIE Server You can execute PMML models that have been deployed to KIE Server by sending the ApplyPmmlModelCommand command to the configured KIE Server.
When you use this command, a PMMLRequestData object is sent to KIE Server and a PMML4Result result object is received as a reply. You can send PMML requests to KIE Server through the KIE Server REST API from a configured Java class or directly from a REST client. Prerequisites KIE Server is installed and configured, including a known user name and credentials for a user with the kie-server role. For installation options, see Planning a Red Hat Process Automation Manager installation . A KIE container is deployed in KIE Server in the form of a KJAR that includes the PMML model. For more information about project packaging, see Packaging and deploying an Red Hat Process Automation Manager project . You have the container ID of the KIE container containing the PMML model. Procedure In your client application, add the following dependencies to the relevant classpath of your Java project: Example of legacy implementation <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>${rhpam.version}</version> </dependency> Important To use the legacy implementation, ensure that the kie-pmml-implementation system property is set to legacy . Example of trusty implementation <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>${rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>${rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Process Automation Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Process Automation Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHPAM product and maven library version? .
Create a KIE container from classpath or ReleaseId : KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); Alternative option: KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer(); Create a class for sending requests to KIE Server and receiving responses: public class ApplyScorecardModel { private static final ReleaseId releaseId = new ReleaseId("org.acme","my-kjar","1.0.0"); private static final String containerId = "SampleModelContainer"; private static KieCommands commandFactory; private static ClassLoader kjarClassLoader; 1 private RuleServicesClient serviceClient; 2 // Attributes specific to your class instance private String rankedFirstCode; private Double score; // Initialization of non-final static attributes static { commandFactory = KieServices.Factory.get().getCommands(); // Specifications for kjarClassLoader, if used KieMavenRepository kmp = KieMavenRepository.getMavenRepository(); File artifactFile = kmp.resolveArtifact(releaseId).getFile(); if (artifactFile != null) { URL urls[] = new URL[1]; try { urls[0] = artifactFile.toURI().toURL(); kjarClassLoader = new KieURLClassLoader(urls,PMML4Result.class.getClassLoader()); } catch (MalformedURLException e) { logger.error("Error getting classLoader for "+containerId); logger.error(e.getMessage()); } } else { logger.warn("Did not find the artifact file for "+releaseId.toString()); } } public ApplyScorecardModel(KieServicesConfiguration kieConfig) { KieServicesClient clientFactory = KieServicesFactory.newKieServicesClient(kieConfig); serviceClient = clientFactory.getServicesClient(RuleServicesClient.class); } ... // Getters and setters ... // Method for executing the PMML model on KIE Server public void applyModel(String occupation, int age) { PMMLRequestData input = new PMMLRequestData("1234","SampleModelName"); 3 input.addRequestParam(new ParameterInfo("1234","occupation",String.class,occupation)); input.addRequestParam(new ParameterInfo("1234","age",Integer.class,age)); CommandFactoryServiceImpl cf = (CommandFactoryServiceImpl)commandFactory; ApplyPmmlModelCommand command = (ApplyPmmlModelCommand) cf.newApplyPmmlModel(input); 4 ServiceResponse<ExecutionResults> results = serviceClient.executeCommandsWithResults(containerId, command); 5 if (results != null) { 6 PMML4Result resultHolder = (PMML4Result)results.getResult().getValue("results"); if (resultHolder != null && "OK".equals(resultHolder.getResultCode())) { this.score = resultHolder.getResultValue("ScoreCard","score",Double.class).get(); Map<String,Object> rankingMap = (Map<String,Object>)resultHolder.getResultValue("ScoreCard","ranking"); if (rankingMap != null && !rankingMap.isEmpty()) { this.rankedFirstCode = rankingMap.keySet().iterator().next(); } } } } } 1 Defines the class loader if you did not include the KJAR in your client project dependencies 2 Identifies the service client as defined in the configuration settings, including KIE Server REST API access credentials 3 Initializes a PMMLRequestData object 4 Creates an instance of the ApplyPmmlModelCommand 5 Sends the command using the service client 6 Retrieves the results of the executed PMML model Execute the class instance to send the PMML invocation request to KIE Server. Alternatively, you can use JMS and REST interfaces to send the ApplyPmmlModelCommand command to KIE Server.
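Before turning to the JMS and REST interfaces, the following sketch shows how the class above might be executed from Java. The server URL, credentials, and getter names are assumptions for illustration and must match your own KIE Server setup and class implementation:
// Hypothetical usage of the ApplyScorecardModel class above.
// The server URL, user name, and password are placeholders for your KIE Server instance.
KieServicesConfiguration kieConfig = KieServicesFactory.newRestConfiguration(
        "http://localhost:8080/kie-server/services/rest/server",
        "kieServerUser", "kieServerPassword");
kieConfig.setMarshallingFormat(MarshallingFormat.JSON);

ApplyScorecardModel scorecard = new ApplyScorecardModel(kieConfig);
scorecard.applyModel("SKYDIVER", 33); // sends the ApplyPmmlModelCommand and populates score and rankedFirstCode

// Assumes getters exist for the score and rankedFirstCode attributes, as implied by the class skeleton.
System.out.println("Score: " + scorecard.getScore());
System.out.println("Top reason code: " + scorecard.getRankedFirstCode());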
For REST requests, you use the ApplyPmmlModelCommand command as a POST request to http://SERVER:PORT/kie-server/services/rest/server/containers/instances/{containerId} in JSON, JAXB, or XStream request format. Example POST endpoint Example JSON request body { "commands": [ { "apply-pmml-model-command": { "outIdentifier": null, "packageName": null, "hasMining": false, "requestData": { "correlationId": "123", "modelName": "SimpleScorecard", "source": null, "requestParams": [ { "correlationId": "123", "name": "param1", "type": "java.lang.Double", "value": "10.0" }, { "correlationId": "123", "name": "param2", "type": "java.lang.Double", "value": "15.0" } ] } } } ] } Example curl request with endpoint and body Example JSON response { "results" : [ { "value" : {"org.kie.api.pmml.DoubleFieldOutput":{ "value" : 40.8, "correlationId" : "123", "segmentationId" : null, "segmentId" : null, "name" : "OverallScore", "displayValue" : "OverallScore", "weight" : 1.0 }}, "key" : "OverallScore" }, { "value" : {"org.kie.api.pmml.PMML4Result":{ "resultVariables" : { "OverallScore" : { "value" : 40.8, "correlationId" : "123", "segmentationId" : null, "segmentId" : null, "name" : "OverallScore", "displayValue" : "OverallScore", "weight" : 1.0 }, "ScoreCard" : { "modelName" : "SimpleScorecard", "score" : 40.8, "holder" : { "modelName" : "SimpleScorecard", "correlationId" : "123", "voverallScore" : null, "moverallScore" : true, "vparam1" : 10.0, "mparam1" : false, "vparam2" : 15.0, "mparam2" : false }, "enableRC" : true, "pointsBelow" : true, "ranking" : { "reasonCh1" : 5.0, "reasonCh2" : -6.0 } } }, "correlationId" : "123", "segmentationId" : null, "segmentId" : null, "segmentIndex" : 0, "resultCode" : "OK", "resultObjectName" : null }}, "key" : "results" } ], "facts" : [ ] }
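If you prefer to call the REST endpoint without the KIE Server Java client, any HTTP client can send the same JSON body. The following sketch uses the java.net.http API; the URL, credentials, and JSON file name are placeholders that mirror the example request above, and exception handling is omitted:
// Minimal sketch: POST the apply-pmml-model-command JSON shown above with plain HTTP.
// URL, credentials, and the JSON file name are placeholders.
String url = "http://localhost:8080/kie-server/services/rest/server/containers/instances/SampleModelContainer";
String credentials = Base64.getEncoder().encodeToString("kieServerUser:kieServerPassword".getBytes(StandardCharsets.UTF_8));
String body = Files.readString(Path.of("apply-pmml-model-command.json")); // the JSON request body shown above

HttpRequest request = HttpRequest.newBuilder(URI.create(url))
        .header("Content-Type", "application/json")
        .header("Accept", "application/json")
        .header("Authorization", "Basic " + credentials)
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

HttpResponse<String> response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body()); // contains the PMML4Result; resultCode "OK" indicates success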
|
[
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( \"org.acme\", \"my-kjar\", \"1.0.0\" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId );",
"KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer();",
"PMMLRuntime pmmlRuntime = KieRuntimeFactory.of(kieContainer.getKieBase()).get(PMMLRuntime.class);",
"PMMLRequestData pmmlRequestData = new PMMLRequestData({correlation_id}, {model_name}); pmmlRequestData.addRequestParam({parameter_name}, {parameter_value})",
"PMMLContext pmmlContext = new PMMLContextImpl(pmmlRequestData);",
"PMML4Result pmml4Result = pmmlRuntime.evaluate({model_name}, pmmlContext);",
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( \"org.acme\", \"my-kjar\", \"1.0.0\" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId );",
"KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer();",
"public class PMMLRequestData { private String correlationId; 1 private String modelName; 2 private String source; 3 private List<ParameterInfo<?>> requestParams; 4 }",
"public class PMML4Result { private String correlationId; private String segmentationId; 1 private String segmentId; 2 private int segmentIndex; 3 private String resultCode; 4 private Map<String, Object> resultVariables; 5 }",
"public <T> Optional<T> getResultValue(String objName, String objField, Class<T> clazz, Object...params) public Object getResultValue(String objName, String objField, Object...params)",
"public class ParameterInfo<T> { 1 private String correlationId; private String name; 2 private String capitalizedName; private Class<T> type; 3 private T value; 4 }",
"public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource(\"request\"); DataSource<PMML4Result> resultData = executor.newDataSource(\"results\"); DataSource<PMMLData> internalData = executor.newDataSource(\"pmmlData\"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit(\"RuleUnitIndicator\", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( \"OK\".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll(\"\\\\s\",\"\"); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + \".\" + javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+\".\"+javaModelId; packageNames.add(basePkgName); return packageNames; }",
"public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource(\"request\"); DataSource<PMML4Result> resultData = executor.newDataSource(\"results\"); DataSource<PMMLData> internalData = executor.newDataSource(\"pmmlData\"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit(\"RuleUnitIndicator\", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( \"OK\".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll(\"\\\\s\",\"\"); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + \".\" + javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+\".\"+javaModelId; packageNames.add(basePkgName); return packageNames; }",
"public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String modelPkgName, String correlationId) { PMML4ExecutionHelper helper = PMML4ExecutionHelperFactory.getExecutionHelper(modelName, kbase); helper.addPossiblePackageName(modelPkgName); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); variables.entrySet().forEach(entry -> { request.addRequestParam(entry.getKey(), entry.getValue); }); PMML4Result resultHolder = helper.submitRequest(request); if (\"OK\".equals(resultHolder.getResultCode)) { // extract result variables here } }",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase) public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase, boolean includeMiningDataSources)",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, String classPath, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName,String classPath, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)",
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( \"org.acme\", \"my-kjar\", \"1.0.0\" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId );",
"KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer();",
"public class ApplyScorecardModel { private static final ReleaseId releaseId = new ReleaseId(\"org.acme\",\"my-kjar\",\"1.0.0\"); private static final String containerId = \"SampleModelContainer\"; private static KieCommands commandFactory; private static ClassLoader kjarClassLoader; 1 private RuleServicesClient serviceClient; 2 // Attributes specific to your class instance private String rankedFirstCode; private Double score; // Initialization of non-final static attributes static { commandFactory = KieServices.Factory.get().getCommands(); // Specifications for kjarClassLoader, if used KieMavenRepository kmp = KieMavenRepository.getMavenRepository(); File artifactFile = kmp.resolveArtifact(releaseId).getFile(); if (artifactFile != null) { URL urls[] = new URL[1]; try { urls[0] = artifactFile.toURI().toURL(); classLoader = new KieURLClassLoader(urls,PMML4Result.class.getClassLoader()); } catch (MalformedURLException e) { logger.error(\"Error getting classLoader for \"+containerId); logger.error(e.getMessage()); } } else { logger.warn(\"Did not find the artifact file for \"+releaseId.toString()); } } public ApplyScorecardModel(KieServicesConfiguration kieConfig) { KieServicesClient clientFactory = KieServicesFactory.newKieServicesClient(kieConfig); serviceClient = clientFactory.getServicesClient(RuleServicesClient.class); } // Getters and setters // Method for executing the PMML model on KIE Server public void applyModel(String occupation, int age) { PMMLRequestData input = new PMMLRequestData(\"1234\",\"SampleModelName\"); 3 input.addRequestParam(new ParameterInfo(\"1234\",\"occupation\",String.class,occupation)); input.addRequestParam(new ParameterInfo(\"1234\",\"age\",Integer.class,age)); CommandFactoryServiceImpl cf = (CommandFactoryServiceImpl)commandFactory; ApplyPmmlModelCommand command = (ApplyPmmlModelCommand) cf.newApplyPmmlModel(request); 4 ServiceResponse<ExecutionResults> results = ruleClient.executeCommandsWithResults(CONTAINER_ID, command); 5 if (results != null) { 6 PMML4Result resultHolder = (PMML4Result)results.getResult().getValue(\"results\"); if (resultHolder != null && \"OK\".equals(resultHolder.getResultCode())) { this.score = resultHolder.getResultValue(\"ScoreCard\",\"score\",Double.class).get(); Map<String,Object> rankingMap = (Map<String,Object>)resultHolder.getResultValue(\"ScoreCard\",\"ranking\"); if (rankingMap != null && !rankingMap.isEmpty()) { this.rankedFirstCode = rankingMap.keySet().iterator().next(); } } } } }",
"http://localhost:8080/kie-server/services/rest/server/containers/instances/SampleModelContainer",
"{ \"commands\": [ { \"apply-pmml-model-command\": { \"outIdentifier\": null, \"packageName\": null, \"hasMining\": false, \"requestData\": { \"correlationId\": \"123\", \"modelName\": \"SimpleScorecard\", \"source\": null, \"requestParams\": [ { \"correlationId\": \"123\", \"name\": \"param1\", \"type\": \"java.lang.Double\", \"value\": \"10.0\" }, { \"correlationId\": \"123\", \"name\": \"param2\", \"type\": \"java.lang.Double\", \"value\": \"15.0\" } ] } } } ] }",
"curl -X POST \"http://localhost:8080/kie-server/services/rest/server/containers/instances/SampleModelContainer\" -H \"accept: application/json\" -H \"content-type: application/json\" -d \"{ \\\"commands\\\": [ { \\\"apply-pmml-model-command\\\": { \\\"outIdentifier\\\": null, \\\"packageName\\\": null, \\\"hasMining\\\": false, \\\"requestData\\\": { \\\"correlationId\\\": \\\"123\\\", \\\"modelName\\\": \\\"SimpleScorecard\\\", \\\"source\\\": null, \\\"requestParams\\\": [ { \\\"correlationId\\\": \\\"123\\\", \\\"name\\\": \\\"param1\\\", \\\"type\\\": \\\"java.lang.Double\\\", \\\"value\\\": \\\"10.0\\\" }, { \\\"correlationId\\\": \\\"123\\\", \\\"name\\\": \\\"param2\\\", \\\"type\\\": \\\"java.lang.Double\\\", \\\"value\\\": \\\"15.0\\\" } ] } } } ]}\"",
"{ \"results\" : [ { \"value\" : {\"org.kie.api.pmml.DoubleFieldOutput\":{ \"value\" : 40.8, \"correlationId\" : \"123\", \"segmentationId\" : null, \"segmentId\" : null, \"name\" : \"OverallScore\", \"displayValue\" : \"OverallScore\", \"weight\" : 1.0 }}, \"key\" : \"OverallScore\" }, { \"value\" : {\"org.kie.api.pmml.PMML4Result\":{ \"resultVariables\" : { \"OverallScore\" : { \"value\" : 40.8, \"correlationId\" : \"123\", \"segmentationId\" : null, \"segmentId\" : null, \"name\" : \"OverallScore\", \"displayValue\" : \"OverallScore\", \"weight\" : 1.0 }, \"ScoreCard\" : { \"modelName\" : \"SimpleScorecard\", \"score\" : 40.8, \"holder\" : { \"modelName\" : \"SimpleScorecard\", \"correlationId\" : \"123\", \"voverallScore\" : null, \"moverallScore\" : true, \"vparam1\" : 10.0, \"mparam1\" : false, \"vparam2\" : 15.0, \"mparam2\" : false }, \"enableRC\" : true, \"pointsBelow\" : true, \"ranking\" : { \"reasonCh1\" : 5.0, \"reasonCh2\" : -6.0 } } }, \"correlationId\" : \"123\", \"segmentationId\" : null, \"segmentId\" : null, \"segmentIndex\" : 0, \"resultCode\" : \"OK\", \"resultObjectName\" : null }}, \"key\" : \"results\" } ], \"facts\" : [ ] }"
] |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/pmml-invocation-options-con_pmml-models
|
Managing AMQ Broker
|
Managing AMQ Broker Red Hat AMQ 2020.Q4 For Use with AMQ Broker 7.8
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/managing_amq_broker/index
|
2.4.3. Sharing files between services
|
2.4.3. Sharing files between services Type Enforcement helps prevent processes from accessing files intended for use by another process. For example, by default, Samba cannot read files labeled with the httpd_sys_content_t type, which are intended for use by the Apache HTTP Server. Files can be shared between the Apache HTTP Server, FTP, rsync, and Samba, if the desired files are labeled with the public_content_t or public_content_rw_t type. The following example creates a directory and files, and allows that directory and files to be shared (read only) through the Apache HTTP Server, FTP, rsync, and Samba: Run the mkdir /shares command as the root user to create a new top-level directory to share files between multiple services. Files and directories that do not match a pattern in file-context configuration may be labeled with the default_t type. This type is inaccessible to confined services: As the root user, create a /shares/index.html file. Copy and paste the following content into /shares/index.html : Labeling /shares/ with the public_content_t type allows read-only access by the Apache HTTP Server, FTP, rsync, and Samba. Run the following command as the root user to add the label change to file-context configuration: Run the restorecon -R -v /shares/ command as the root user to apply the label changes: To share /shares/ through Samba: Run the rpm -q samba samba-common samba-client command to confirm the samba , samba-common , and samba-client packages are installed (version numbers may differ): If any of these packages are not installed, install them by running the yum install package-name command as the root user. Edit /etc/samba/smb.conf as the root user. Add the following entry to the bottom of this file to share the /shares/ directory through Samba: A Samba account is required to mount a Samba file system. Run the smbpasswd -a username command as the root user to create a Samba account, where username is an existing Linux user. For example, smbpasswd -a testuser creates a Samba account for the Linux testuser user: Running smbpasswd -a username , where username is the user name of a Linux account that does not exist on the system, causes a Cannot locate Unix account for ' username '! error. Run the service smb start command as the root user to start the Samba service: Run the smbclient -U username -L localhost command to list the available shares, where username is the Samba account added in step 3. When prompted for a password, enter the password assigned to the Samba account in step 3 (version numbers may differ): Run the mkdir /test/ command as the root user to create a new directory. This directory will be used to mount the shares Samba share. Run the following command as the root user to mount the shares Samba share to /test/ , replacing username with the user name from step 3: Enter the password for username , which was configured in step 3. Run the cat /test/index.html command to view the file, which is being shared through Samba: To share /shares/ through the Apache HTTP Server: Run the rpm -q httpd command to confirm the httpd package is installed (version number may differ): If this package is not installed, run the yum install httpd command as the root user to install it. Change into the /var/www/html/ directory. Run the following command as the root user to create a link (named shares ) to the /shares/ directory: Run the service httpd start command as the root user to start the Apache HTTP Server: Use a web browser to navigate to http://localhost/shares . 
The /shares/index.html file is displayed. By default, the Apache HTTP Server reads an index.html file if it exists. If /shares/ did not have index.html , and instead had file1 , file2 , and file3 , a directory listing would occur when accessing http://localhost/shares : Run the rm -i /shares/index.html command as the root user to remove the index.html file. Run the touch /shares/file{1,2,3} command as the root user to create three files in /shares/ : Run the service httpd status command as the root user to see the status of the Apache HTTP Server. If the server is stopped, run service httpd start as the root user to start it. Use a web browser to navigate to http://localhost/shares . A directory listing is displayed:
|
[
"~]USD ls -dZ /shares drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /shares",
"<html> <body> <p>Hello</p> </body> </html>",
"~]# semanage fcontext -a -t public_content_t \"/shares(/.*)?\"",
"~]# restorecon -R -v /shares/ restorecon reset /shares context unconfined_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0 restorecon reset /shares/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0",
"~]USD rpm -q samba samba-common samba-client samba-3.4.0-0.41.el6.3.i686 samba-common-3.4.0-0.41.el6.3.i686 samba-client-3.4.0-0.41.el6.3.i686",
"[shares] comment = Documents for Apache HTTP Server, FTP, rsync, and Samba path = /shares public = yes writable = no",
"~]# smbpasswd -a testuser New SMB password: Enter a password Retype new SMB password: Enter the same password again Added user testuser.",
"~]# service smb start Starting SMB services: [ OK ]",
"~]USD smbclient -U username -L localhost Enter username 's password: Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Sharename Type Comment --------- ---- ------- shares Disk Documents for Apache HTTP Server, FTP, rsync, and Samba IPCUSD IPC IPC Service (Samba Server Version 3.4.0-0.41.el6) username Disk Home Directories Domain=[ HOSTNAME ] OS=[Unix] Server=[Samba 3.4.0-0.41.el6] Server Comment --------- ------- Workgroup Master --------- -------",
"~]# mount //localhost/shares /test/ -o user= username",
"~]USD cat /test/index.html <html> <body> <p>Hello</p> </body> </html>",
"~]USD rpm -q httpd httpd-2.2.11-6.i386",
"~]# ln -s /shares/ shares",
"~]# service httpd start Starting httpd: [ OK ]",
"~]# touch /shares/file{1,2,3} ~]# ls -Z /shares/ -rw-r--r-- root root system_u:object_r:public_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:public_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:public_content_t:s0 file3"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-configuration_examples-sharing_files_between_services
|
Chapter 2. Differences from upstream OpenJDK 11
|
Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies .
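As an informal way to observe the FIPS and cryptographic policy behavior described above on a running JVM, the following sketch, which is not part of the product documentation, prints the installed security providers and the TLS protocols enabled by the active policy:
import java.security.Provider;
import java.security.Security;
import javax.net.ssl.SSLContext;

// Informal check of the active security configuration; on a FIPS-enabled RHEL host
// you would typically expect a PKCS#11/NSS-based provider near the top of the list.
public class ShowSecurityConfig {
    public static void main(String[] args) throws Exception {
        for (Provider provider : Security.getProviders()) {
            System.out.println(provider.getName() + " " + provider.getVersionStr());
        }
        SSLContext context = SSLContext.getDefault();
        System.out.println("Enabled TLS protocols: "
                + String.join(", ", context.getDefaultSSLParameters().getProtocols()));
    }
}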
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.15/rn-openjdk-diff-from-upstream
|
8.2.2. Backup Software: Buy Versus Build
|
8.2.2. Backup Software: Buy Versus Build In order to perform backups, it is first necessary to have the proper software. This software must not only be able to perform the basic task of making copies of bits onto backup media, it must also interface cleanly with your organization's personnel and business needs. Some of the features to consider when reviewing backup software include: Schedules backups to run at the proper time Manages the location, rotation, and usage of backup media Works with operators (and/or robotic media changers) to ensure that the proper media is available Assists operators in locating the media containing a specific backup of a given file As you can see, a real-world backup solution entails much more than just scribbling bits onto your backup media. Most system administrators at this point look at one of two solutions: Purchase a commercially-developed solution Create an in-house developed backup system from scratch (possibly integrating one or more open source technologies) Each approach has its good and bad points. Given the complexity of the task, an in-house solution is not likely to handle some aspects (such as media management, or have comprehensive documentation and technical support) very well. However, for some organizations, this might not be a shortcoming. A commercially-developed solution is more likely to be highly functional, but may also be overly-complex for the organization's present needs. That said, the complexity might make it possible to stick with one solution even as the organization grows. As you can see, there is no clear-cut method for deciding on a backup system. The only guidance that can be offered is to ask you to consider these points: Changing backup software is difficult; once implemented, you will be using the backup software for a long time. After all, you will have long-term archive backups that you must be able to read. Changing backup software means you must either keep the original software around (to access the archive backups), or you must convert your archive backups to be compatible with the new software. Depending on the backup software, the effort involved in converting archive backups may be as straightforward (though time-consuming) as running the backups through an already-existing conversion program, or it may require reverse-engineering the backup format and writing custom software to perform the task. The software must be 100% reliable -- it must back up what it is supposed to, when it is supposed to. When the time comes to restore any data -- whether a single file or an entire file system -- the backup software must be 100% reliable.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-disaster-backups-buybuild
|