title | content | commands | url
---|---|---|---|
Chapter 2. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
|
Chapter 2. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster. 2.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services: The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage. The cinder-api service has high network usage due to resource listing requests. The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image. The cinder-backup service has high memory, network, and CPU requirements. Therefore, you can pin the cinder-api , cinder-volume , and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity. Additional resources Placing pods on specific nodes using node selectors Machine configuration overview Node Feature Discovery Operator 2.2. Creating a storage class You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. Use the Logical Volume Manager (LVM) Storage storage class with RHOSO. You specify this storage class as the cluster storage back end for the RHOSO deployment. Use a storage back end based on SSD or NVMe drives for the storage class. If you are using LVM, you must wait until the LVM Storage Operator announces that the storage is available before creating the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete through the annotation for the volume group to the worker node object. If you deploy pods before all the control plane nodes are ready, then multiple PVCs and pods are scheduled on the same nodes. To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage": The storage is ready when this command returns three non-zero values For more information about how to configure the LVM Storage storage class, see Persistent storage using Logical Volume Manager Storage in the RHOCP Storage guide. 2.3. 
Creating the openstack namespace You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment. Prerequisites You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. Procedure Create the openstack project for the deployed RHOSO environment: Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators: If the security context constraint (SCC) is not "privileged", use the following commands to change it: Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack : 2.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. Warning You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage. Procedure Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml . Add the following initial configuration to openstack_service_secret.yaml : Replace <base64_password> with a 32-character key that is base64 encoded. You can use the following command to manually generate a base64 encoded password: Alternatively, if you are using a Linux workstation and you are generating the Secret CR definition file by using a Bash command such as cat , you can replace <base64_password> with the following command to auto-generate random passwords for each service: Replace the <base64_fernet_key> with a fernet key that is base64 encoded. You can use the following command to manually generate the fernet key: Note The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32. Create the Secret CR in the cluster: Verify that the Secret CR is created:
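The node placement guidance in section 2.1 can be expressed directly in the control plane CR. The following is a minimal, illustrative sketch only: the node name worker-3 and the label openstack.org/service-tier=storage are hypothetical, and the nesting of the per-service nodeSelector fields (for example, under cinderVolumes) should be verified against the OpenStackControlPlane CRD installed in your cluster.
# Hypothetical sketch: label a node first, for example
#   oc label node worker-3 openstack.org/service-tier=storage
# then reference the labels from the OpenStackControlPlane CR.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  # Default placement for all control plane services: any RHOCP worker node.
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  cinder:
    template:
      cinderVolumes:
        volume1:
          # Override: pin cinder-volume to nodes carrying the hypothetical storage label.
          nodeSelector:
            openstack.org/service-tier: storage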
|
[
"oc get node -l \"topology.topolvm.io/node in (USD(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\\n' ',' | sed 's/.\\{1\\}USD//'))\" -o=jsonpath='{.items[*].metadata.annotations.capacity\\.topolvm\\.io/local-storage}' | tr ' ' '\\n'",
"oc new-project openstack",
"oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq { \"kubernetes.io/metadata.name\": \"openstack\", \"pod-security.kubernetes.io/enforce\": \"privileged\", \"security.openshift.io/scc.podSecurityLabelSync\": \"false\" }",
"oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite",
"oc project openstack",
"apiVersion: v1 data: AdminPassword: <base64_password> AodhPassword: <base64_password> AodhDatabasePassword: <base64_password> BarbicanDatabasePassword: <base64_password> BarbicanPassword: <base64_password> BarbicanSimpleCryptoKEK: <base64_fernet_key> CeilometerPassword: <base64_password> CinderDatabasePassword: <base64_password> CinderPassword: <base64_password> DatabasePassword: <base64_password> DbRootPassword: <base64_password> DesignateDatabasePassword: <base64_password> DesignatePassword: <base64_password> GlanceDatabasePassword: <base64_password> GlancePassword: <base64_password> HeatAuthEncryptionKey: <base64_password> HeatDatabasePassword: <base64_password> HeatPassword: <base64_password> IronicDatabasePassword: <base64_password> IronicInspectorDatabasePassword: <base64_password> IronicInspectorPassword: <base64_password> IronicPassword: <base64_password> KeystoneDatabasePassword: <base64_password> ManilaDatabasePassword: <base64_password> ManilaPassword: <base64_password> MetadataSecret: <base64_password> NeutronDatabasePassword: <base64_password> NeutronPassword: <base64_password> NovaAPIDatabasePassword: <base64_password> NovaAPIMessageBusPassword: <base64_password> NovaCell0DatabasePassword: <base64_password> NovaCell0MessageBusPassword: <base64_password> NovaCell1DatabasePassword: <base64_password> NovaCell1MessageBusPassword: <base64_password> NovaPassword: <base64_password> OctaviaDatabasePassword: <base64_password> OctaviaPassword: <base64_password> PlacementDatabasePassword: <base64_password> PlacementPassword: <base64_password> SwiftPassword: <base64_password> kind: Secret metadata: name: osp-secret namespace: openstack type: Opaque",
"echo -n <password> | base64",
"USD(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)",
"python3 -c \"from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))\" | base64",
"oc create -f openstack_service_secret.yaml -n openstack",
"oc describe secret osp-secret -n openstack"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_red_hat_openstack_services_on_openshift/assembly_preparing-RHOCP-for-RHOSO
|
Release notes
|
Release notes OpenShift Container Platform 4.17 Highlights of what is new and what has changed with this OpenShift Container Platform release Red Hat OpenShift Documentation Team
|
[
"conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules",
"The kube-controller-manager logs might contain more details.",
"The cloud-controller-manager logs may contain more details.",
"apiVersion: v1 data: enable-nodeip-debug: \"true\" kind: ConfigMap metadata: name: logging namespace: openshift-vsphere-infra",
"2024/08/02 12:18:03 [ERROR]: [OperatorImageCollector] pinging container registry localhost:55000: Get \"https://localhost:55000/v2/\": http: server gave HTTP response to HTTPS client.",
"[ERROR] : [OperatorImageCollector] pinging container registry registry.redhat.io: Get \"http://registry.redhat.io/v2/\": dial tcp 23.217.255.152:80: i/o timeout",
"[ERROR]: Detected a v2 ImageSetConfiguration, please use --v2 instead of -v2.",
"controlPlane: platform: azure: {}",
"oc adm release info 4.17.20 --pullspecs",
"oc adm release info 4.17.19 --pullspecs",
"oc adm release info 4.17.18 --pullspecs",
"oc adm release info 4.17.17 --pullspecs",
"oc adm release info 4.17.16 --pullspecs",
"oc adm release info 4.17.15 --pullspecs",
"oc adm release info 4.17.14 --pullspecs",
"oc adm release info 4.17.12 --pullspecs",
"oc adm release info 4.17.11 --pullspecs",
"oc adm release info 4.17.10 --pullspecs",
"oc adm release info 4.17.9 --pullspecs",
"oc adm release info 4.17.8 --pullspecs",
"oc adm release info 4.17.7 --pullspecs",
"oc adm release info 4.17.6 --pullspecs",
"oc adm release info 4.17.5 --pullspecs",
"oc adm release info 4.17.4 --pullspecs",
"oc adm release info 4.17.3 --pullspecs",
"oc adm release info 4.17.2 --pullspecs",
"Invalid log bundle or the bootstrap machine could not be reached and bootstrap logs were not collected",
"oc adm release info 4.17.1 --pullspecs",
"oc adm release info 4.17.0 --pullspecs"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/release_notes/index
|
probe::stap.cache_get
|
probe::stap.cache_get Name probe::stap.cache_get - Found item in stap cache Synopsis stap.cache_get Values module_path the path of the .ko kernel module file source_path the path of the .c source file Description Fires just before the return of get_from_cache, when the cache grab is successful.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-cache-get
|
3.6. Software Collection .pc Files Support
|
3.6. Software Collection .pc Files Support The .pc files are special metadata files used by the pkg-config program to store information about libraries available on the system. In case you distribute .pc files that you intend to use only in the Software Collection environment or in addition to the .pc files installed on the system, update the PKG_CONFIG_PATH environment variable. Depending on what is defined in your .pc files, update the PKG_CONFIG_PATH environment variable for the %{_libdir} macro (which expands to the library directory, typically /usr/lib/ or /usr/lib64/ ), or for the %{_datadir} macro (which expands to the share directory, typically /usr/share/ ). If the library directory is defined in your .pc files, update the PKG_CONFIG_PATH environment variable by adjusting the %install section of the Software Collection spec file as follows: %install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PKG_CONFIG_PATH="%{_libdir}/pkgconfig\USD{PKG_CONFIG_PATH:+:\USD{PKG_CONFIG_PATH}}" EOF If the share directory is defined in your .pc files, update the PKG_CONFIG_PATH environment variable by adjusting the %install section of the Software Collection spec file as follows: %install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PKG_CONFIG_PATH="%{_datadir}/pkgconfig\USD{PKG_CONFIG_PATH:+:\USD{PKG_CONFIG_PATH}}" EOF The two examples above both configure the enable scriptlet so that it ensures that the .pc files in the Software Collection are preferred over the .pc files available on the system if the Software Collection is enabled. The Software Collection can provide a wrapper script that is visible to the system to enable the Software Collection, for example in the /usr/bin/ directory. In this case, ensure that the .pc files are visible to the system even if the Software Collection is disabled. To allow your system to use .pc files from the disabled Software Collection, update the PKG_CONFIG_PATH environment variable with the paths to the .pc files associated with the Software Collection. Depending on what is defined in your .pc files, update the PKG_CONFIG_PATH environment variable for the %{_libdir} macro (which expands to the library directory), or for the %{_datadir} macro (which expands to the share directory). Procedure 3.5. Updating the PKG_CONFIG_PATH environment variable for %{_libdir} To update the PKG_CONFIG_PATH environment variable for the %{_libdir} macro, create a custom script /etc/profile.d/ name.sh . The script is preloaded when a shell is started on the system. For example, create the following file: Use the pc-libdir.sh short script that modifies the PKG_CONFIG_PATH variable to refer to your .pc files: Add the file to your Software Collection package's spec file: SOURCE2: %{?scl_prefix}pc-libdir.sh Install this file into the system /etc/profile.d/ directory by adjusting the %install section of the Software Collection package's spec file: %install install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/ Procedure 3.6. Updating the PKG_CONFIG_PATH environment variable for %{_datadir} To update the PKG_CONFIG_PATH environment variable for the %{_datadir} macro, create a custom script /etc/profile.d/ name.sh . The script is preloaded when a shell is started on the system. 
For example, create the following file: Use the pc-datadir.sh short script that modifies the PKG_CONFIG_PATH variable to refer to your .pc files: Add the file to your Software Collection package's spec file: SOURCE2: %{?scl_prefix}pc-datadir.sh Install this file into the system /etc/profile.d/ directory by adjusting the %install section of the Software Collection package's spec file: %install install -p -c -m 644 %{SOURCE2} $RPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/
|
[
"%install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PKG_CONFIG_PATH=\"%{_libdir}/pkgconfig\\USD{PKG_CONFIG_PATH:+:\\USD{PKG_CONFIG_PATH}}\" EOF",
"%install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PKG_CONFIG_PATH=\"%{_datadir}/pkgconfig\\USD{PKG_CONFIG_PATH:+:\\USD{PKG_CONFIG_PATH}}\" EOF",
"%{?scl_prefix}pc-libdir.sh",
"export PKG_CONFIG_PATH=\"%{_libdir}/pkgconfig:/opt/ provider / software_collection/path/to/your/pc_files \"",
"SOURCE2: %{?scl_prefix}pc-libdir.sh",
"%install install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/",
"%{?scl_prefix}pc-datadir.sh",
"export PKG_CONFIG_PATH=\"%{_datadir}/pkgconfig:/opt/ provider / software_collection/path/to/your/pc_files \"",
"SOURCE2: %{?scl_prefix}pc-datadir.sh",
"%install install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/"
] |
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-Software_Collection_pc_Files_Support
|
Chapter 1. Getting started with AMQ Interconnect on OpenShift Container Platform
|
Chapter 1. Getting started with AMQ Interconnect on OpenShift Container Platform AMQ Interconnect is a lightweight AMQP 1.0 message router for building large, highly resilient messaging networks for hybrid cloud and IoT/edge deployments. AMQ Interconnect automatically learns the addresses of messaging endpoints (such as clients, servers, and message brokers) and flexibly routes messages between them. This document describes how to deploy AMQ Interconnect on OpenShift Container Platform by using the AMQ Interconnect Operator and the Interconnect Custom Resource Definition (CRD) that it provides. The CRD defines an AMQ Interconnect deployment, and the Operator creates and manages the deployment in OpenShift Container Platform. 1.1. What Operators are Operators are a method of packaging, deploying, and managing a Kubernetes application. They take human operational knowledge and encode it into software that is more easily shared with consumers to automate common or complex tasks. In OpenShift Container Platform 4.0, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the life cycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way. The OLM runs by default in OpenShift Container Platform 4.0, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. OperatorHub is the graphical interface that OpenShift Container Platform cluster administrators use to discover, install, and upgrade Operators. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments. Additional resources For more information about Operators, see the OpenShift documentation . 1.2. Provided Custom Resources The AMQ Interconnect Operator provides the Interconnect Custom Resource Definition (CRD), which allows you to interact with an AMQ Interconnect deployment running on OpenShift Container Platform just like other OpenShift Container Platform API objects. The Interconnect CRD represents a deployment of AMQ Interconnect routers. The CRD provides elements for defining many different aspects of a router deployment in OpenShift Container Platform such as: Number of AMQ Interconnect routers Deployment topology Connectivity Address semantics
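To illustrate how the elements listed above map to an actual deployment, the following is a minimal sketch of an Interconnect CR. The resource name and the two-router interior topology are placeholders chosen for illustration; check the CRD installed by your Operator version for the complete set of spec fields.
apiVersion: interconnectedcloud.github.io/v1alpha1
kind: Interconnect
metadata:
  name: example-interconnect
spec:
  deploymentPlan:
    # Number of AMQ Interconnect routers and their role in the topology.
    size: 2
    role: interior
    placement: Any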
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/deploying_amq_interconnect_on_openshift/getting-started-router-openshift-router-ocp
|
Chapter 2. Planning a deployment of AMQ Broker on OpenShift Container Platform
|
Chapter 2. Planning a deployment of AMQ Broker on OpenShift Container Platform This section describes how to plan an Operator-based deployment. Operators are programs that enable you to package, deploy, and manage OpenShift applications. Often, Operators automate common or complex tasks. Commonly, Operators are intended to provide: Consistent, repeatable installations Health checks of system components Over-the-air (OTA) updates Managed upgrades Operators enable you to make changes while your broker instances are running, because they are always listening for changes to the Custom Resource (CR) instances that you used to configure your deployment. When you make changes to a CR, the Operator reconciles the changes with the existing broker deployment and updates the deployment to reflect the changes. In addition, the Operator provides a message migration capability, which ensures the integrity of messaging data. When a broker in a clustered deployment shuts down due to an intentional scaledown of the deployment, this capability migrates messages to a broker Pod that is still running in the same broker cluster. 2.1. Overview of high availability (HA) The term high availability refers to a system that can remain operational even when part of that system fails or is shut down. For AMQ Broker on OpenShift Container Platform, this means ensuring the integrity and availability of messaging data if a broker pod, node on which a pod is running, or cluster fails. AMQ Broker uses the HA capabilities provided in OpenShift Container Platform to mitigate pod and node failures: If persistent storage is enabled on AMQ Broker, each broker pod writes its data to a Persistent Volume (PV) that was claimed by using a Persistent Volume Claim (PVC). A PV remains available even after a pod is deleted. If a broker pod fails, OpenShift restarts the pod with the same name and uses the existing PV that contains the messaging data. You can run multiple broker pods in a cluster and distribute pods on separate nodes to recover from a node failure. Each broker pod writes its message data to its own PV which is then available to that broker pod if it is restarted on a different node. If the mean time to repair (MTTR) to recover from a node failure on your Openshift cluster does not meet the service availability requirements for AMQ Broker, you can create leader-follower deployments to provide faster recovery. You can also use leader-follower deployments to protect against a cluster or wider data center outage. For more information, see Section 4.23, "Configuring leader-follower broker deployments for high availability" . Additional resources For information on how to use persistent storage, see Section 2.9, "Operator deployment notes" . For information on how to distribute broker pods on separate nodes, see Section 4.17.2, "Controlling pod placement using tolerations" . 2.2. Overview of the AMQ Broker Operator Custom Resource Definitions In general, a Custom Resource Definition (CRD) is a schema of configuration items that you can modify for a custom OpenShift object deployed with an Operator. By creating a corresponding Custom Resource (CR) instance, you can specify values for configuration items in the CRD. If you are an Operator developer, what you expose through a CRD essentially becomes the API for how a deployed object is configured and used. You can directly access the CRD through regular HTTP curl commands, because the CRD gets exposed automatically through Kubernetes. 
You can install the AMQ Broker Operator using either the OpenShift command-line interface (CLI), or the Operator Lifecycle Manager, through the OperatorHub graphical interface. In either case, the AMQ Broker Operator includes the CRDs described below. Main broker CRD You deploy a CR instance based on this CRD to create and configure a broker deployment. Based on how you install the Operator, this CRD is: The broker_activemqartemis_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemis CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method) Address CRD You deploy a CR instance based on this CRD to create addresses and queues for a broker deployment. Based on how you install the Operator, this CRD is: The broker_activemqartemisaddress_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemisAddresss CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method) Note The address CRD is deprecated in 7.12. You can use the brokerProperties attribute in an ActiveMQArtemis CR instance instead of creating a CR instance based on the addresss CRD. Security CRD You deploy a CR instance based on this CRD to create users and associate those users with security contexts. Based on how you install the Operator, this CRD is: The broker_activemqartemissecurity_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemisSecurity CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method). Note The security CRD is deprecated in 7.12. You can use the brokerProperties attribute in an ActiveMQArtemis CR instance instead of creating a CR instance based on the security CRD. Scaledown CRD The Operator automatically creates a CR instance based on this CRD when instantiating a scaledown controller for message migration. Based on how you install the Operator, this CRD is: The broker_activemqartemisscaledown_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemisScaledown CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method). Note The scaledown CRD is deprecated in 7.12 and is not required to scale down a cluster. Additional resources To learn how to install the AMQ Broker Operator (and all included CRDs) using: The OpenShift CLI, see Section 3.2, "Installing the Operator using the CLI" The Operator Lifecycle Manager and OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . For complete configuration references to use when creating CR instances based on the main broker and address CRDs, see: Section 8.1.1, "Broker Custom Resource configuration reference" Section 8.1.2, "Address Custom Resource configuration reference" 2.3. Overview of the AMQ Broker Operator sample Custom Resources The AMQ Broker Operator archive that you download and extract during installation includes sample Custom Resource (CR) files in the deploy/crs directory. These sample CR files enable you to: Deploy a minimal broker without SSL or clustering. Define addresses. 
The broker Operator archive that you download and extract also includes CRs for example deployments in the deploy/examples/address and deploy/examples/artemis directories, as listed below. address_queue.yaml Deploys an address and queue with different names. Deletes the queue when the CR is undeployed. address_topic.yaml Deploys an address with a multicast routing type. Deletes the address when the CR is undeployed. artemis_address_settings.yaml Deploys a broker with specific address settings. artemis_cluster_persistence.yaml Deploys clustered brokers with persistent storage. artemis_enable_metrics_plugin.yaml Enables the Prometheus metrics plugin to collect metrics. artemis_resources.yaml Sets CPU and memory resource limits for the broker. artemis_single.yaml Deploys a single broker. 2.4. Configuring items not exposed in a custom resource definition (CRD) You can use the brokerProperties attribute in an ActiveMQArtemis custom resource to configure any configuration setting for a broker. Using brokerProperties is particularly useful if you want to configure settings that: are not exposed in the ActiveMQArtemis CRD are exposed in the ActiveMQArtemisAddress and ActiveMQArtemisSecurity CRDs. Note Both the ActiveMQArtemisAddress and ActiveMQArtemisSecurity CRDs are deprecated starting in AMQ Broker 7.12 Configuration settings added under a brokerProperties attribute are stored in a secret. This secret is mounted as a properties file on the broker pod. At startup, the properties file is applied directly to the internal java configuration bean after the XML configuration is applied. Examples In the following example, a single property is applied to the configuration bean. spec: ... brokerProperties: - globalMaxSize=500m ... In the following example, multiple properties are applied to nested collections of configuration beans to create a broker connection named target that mirror messages with another broker. spec: ... brokerProperties - "AMQPConnections.target.uri=tcp://< hostname >:< port >" - "AMQPConnections.target.connectionElements.mirror.type=MIRROR" - "AMQPConnections.target.connectionElements.mirror.messageAcknowledgements=true" - "AMQPConnections.target.connectionElements.mirror.queueCreation=true" - "AMQPConnections.target.connectionElements.mirror.queueRemoval=true" ... Important Using the brokerProperties attribute provides access to many configuration items that you cannot otherwise configure for AMQ Broker on OpenShift Container Platform. If used incorrectly, some properties can have serious consequences for your deployment. Always exercise caution when configuring the broker using this method. Status reporting for brokerProperties The status of items configured in a brokerProperties attribute is provided in the BrokerPropertiesApplied status section of the ActiveMQArtemis CR. For example: - lastTransitionTime: "2023-02-06T20:50:01Z" message: "" reason: Applied status: "True" type: BrokerPropertiesApplied The reason field contains one of the following values to show the status of the items configured in a brokerProperties attribute: Applied OpenShift Container Platform propagated the updated secret to the properties file on each broker pod. AppliedWithError OpenShift Container Platform propagated the updated secret to the properties file on each broker pod. However, an error was found in the brokerProperties configuration. In the status section of the CR, check the message field to identify the invalid property and correct it in the CR. 
OutOfSync OpenShift Container Platform has not yet propagated the updated secret to the properties file on each broker pod. When OpenShift Container Platform propagates the updated secret to each pod, the reason field value changes to Applied . Note The broker checks periodically for configuration changes, including updates to the properties file that is mounted on the pod, and reloads the configuration if it detects any changes. However, updates to properties that are read only when the broker starts, for example, JVM settings, are not reloaded until you restart the broker. For more information about which properties are reloaded, see Reloading configuration updates in Configuring AMQ Broker . Additional Information For a list of properties that you can configure in the brokerProperties element in a CR, see Broker Properties in Configuring AMQ Broker . 2.5. Watch options for a Cluster Operator deployment When the Cluster Operator is running, it starts to watch for updates of AMQ Broker custom resources (CRs). You can choose to deploy the Cluster Operator to watch CRs from: A single namespace (the same namespace containing the Operator) All namespaces Note If you have already installed a version of the AMQ Broker Operator in a namespace on your cluster, Red Hat recommends that you do not install the AMQ Broker Operator 7.12 version to watch that namespace to avoid potential conflicts. 2.6. How the Operator determines the configuration to use to deploy images In the ActiveMQArtemis CR, you can use any of the following configurations to deploy container images: Specify a version number in the spec.version attribute and allow the Operator to choose the broker and init container images to deploy for that version number. Specify the registry URLs of the specific broker and init container images that you want the Operator to deploy in the spec.deploymentPlan.image and spec.deploymentPlan.initImage attributes. Set the value of the spec.deploymentPlan.image attribute to placeholder , which means that the Operator chooses the latest broker and init container images that are known to the Operator version. Note If you do not use any of these configurations to deploy container images, the Operator chooses the latest broker and init container images that are known to the Operator version. After you save a CR, the Operator performs the following validation to determine the configuration to use. The Operator checks if the CR contains a spec.version attribute. If the CR does not contain a spec.version attribute, the Operator checks if the CR contains a spec.deploymentPlan.image and a spec.deploymentPlan.initImage attribute. If the CR contains a spec.deploymentPlan.image and a spec.deploymentPlan.initImage attribute, the Operator deploys the container images that are identified by their registry URLs. If the CR does not contain a spec.deploymentPlan.image and a spec.deploymentPlan.initImage attribute, the Operator chooses the container images to deploy. For more information, see Section 2.7, "How the Operator chooses container images" . If the CR contains a spec.version attribute, the Operator verifies that the version number specified is within the valid range of versions that the Operator supports. If the value of the spec.version attribute is not valid, the Operator stops the deployment. If the value of the spec.version attribute is valid, the Operator checks if the CR contains a spec.deploymentPlan.image and a spec.deploymentPlan.initImage attribute. 
If the CR contains a spec.deploymentPlan.image and a spec.deploymentPlan.initImage attribute, the Operator deploys the container images that are identified by their registry URLs. If the CR does not contain a spec.deploymentPlan.image and a spec.deploymentPlan.initImage attribute, the Operator chooses the container images to deploy. For more information, see Section 2.7, "How the Operator chooses container images" . Note If the CR contains only one of the spec.deploymentPlan.image and the spec.deploymentPlan.initImage attributes, the Operator uses the spec.version attribute to choose an image for the attribute that is not in the CR, or chooses the latest known image for that attribute if the spec.version attribute is not in the CR. Red Hat recommends that you do not specify the spec.deploymentPlan.image attribute without the spec.deploymentPlan.initImage attribute, or vice versa, to prevent mismatched versions of broker and init container images from being deployed. 2.7. How the Operator chooses container images If a CR does not contain a spec.deploymentPlan.image and a spec.deploymentPlan.initImage attribute, which specify the registry URLs of specific container images the Operator must deploy, the Operator automatically chooses the appropriate container images to deploy. Note If you install the Operator using the OpenShift command-line interface, the Operator installation archive includes a sample CR file called broker_activemqartemis_cr.yaml . In the sample CR, the spec.deploymentPlan.image property is included and set to its default value of placeholder . This value indicates that the Operator does not choose a broker container image until you deploy the CR. The spec.deploymentPlan.initImage property, which specifies the Init Container image, is not included in the broker_activemqartemis_cr.yaml sample CR file. If you do not explicitly include the spec.deploymentPlan.initImage property in your CR and specify a value, the Operator chooses a built-in Init Container image that matches the version of the Operator container image chosen. To choose broker and Init Container images, the Operator first determines an AMQ Broker version of the images that is required. The Operator gets the version from the value of the spec.version property. If the spec.version property is not set, the Operator uses the latest version of the images for AMQ Broker. The Operator then detects your container platform. The AMQ Broker Operator can run on the following container platforms: OpenShift Container Platform (x86_64) OpenShift Container Platform on IBM Z (s390x) OpenShift Container Platform on IBM Power Systems (ppc64le) Based on the version of AMQ Broker and your container platform, the Operator then references two sets of environment variables in the operator.yaml configuration file. These sets of environment variables specify broker and Init Container images for various versions of AMQ Broker, as described in the following section. 2.7.1. Environment variables for broker and init container images The environment variables included in the operator.yaml have the following naming convention. Table 2.1. 
Naming conventions for environment variables Container platform Naming convention OpenShift Container Platform RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_< AMQ_Broker_version > OpenShift Container Platform on IBM Z RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_< AMQ_Broker_version >_s390x OpenShift Container Platform on IBM Power Systems RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_< AMQ_Broker_version >_ppc64le The following are examples of environment variable names for broker and init container images for each supported container platform. Table 2.2. Example environment variable names Container platform Environment variable names OpenShift Container Platform RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_7123 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_7123 OpenShift Container Platform on IBM Z RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_7123_s390x RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_s390x_7123 OpenShift Container Platform on IBM Power Systems RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_7123_ppc64le RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_ppc64le_7123 The value of each environment variable specifies the address of a container image that is available from Red Hat. The image name is represented by a Secure Hash Algorithm (SHA) value. For example: - name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_7123 value: registry.redhat.io/amq7/amq-broker-rhel8@sha256:55ae4e28b100534d63c34ab86f69230d274c999d46d1493f26fe3e75ba7a0cec Therefore, based on an AMQ Broker version and your container platform, the Operator determines the applicable environment variable names for the broker and init container. The Operator uses the corresponding image values when starting the broker container. Additional resources To learn how to use the AMQ Broker Operator to create a broker deployment, see Chapter 3, Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator . For more information about how the Operator uses an Init Container to generate the broker configuration, see Section 4.1, "How the Operator generates the broker configuration" . To learn how to build and specify a custom Init Container image, see Section 4.11, "Specifying a custom Init Container image" . 2.8. Validation of image and version configuration in a custom resource (CR) After you save a CR, the Operator performs the following validation of the CR configuration and provides a status in the CR. Table 2.3. Operator validation of CR configuration Validation Purpose of validation Status reported in CR Does the CR contain a spec.deploymentPlan.image attribute without a spec.version attribute. A spec.deploymentPlan.image attribute without a spec.version attribute causes the Operator to restart the broker pods each time the Operator is upgraded. Pod restarts are required because the new Operator updates a label in the StatefulSet with the latest supported broker version unless a version number is explicitly set in the spec.version attribute. The Valid condition is Unknown and the following status message is displayed: Unknown image version, set a supported broker version in spec.version when images are specified . Does the CR contain a spec.deploymentPlan.image attribute without a spec.deploymentPlan.initImage attribute or vice versa. With this configuration, different versions of the broker and init container images could be deployed, which might prevent your broker from starting. 
The`Valid` condition is Unknown and the following status message is displayed: Init image and broker image must both be configured as an interdependent pair . If the CR contain a spec.version attribute, is the version specified within the range of versions that the Operator supports. If the value of the spec.version attribute is a broker version that is not supported by the Operator, the Operator does not proceed with the deployment of broker pods. The Valid condition is False and the following status message is displayed: Spec.Version does not resolve to a supported broker version, reason did not find a matching broker in the supported list for < version > . Does the version of the broker image deployed, based on the URL of a container image in the spec.deploymentPlan.image attribute, match the broker version in the spec.version attribute. Flags a mismatch between the actual broker version deployed and the version shown in the spec.version attribute if both attributes are configured in the CR. This is for information purposes to highlight that the version shown in the spec.version attribute is not the version deployed. The status of the BrokerVersionAligned condition is Unknown and the following message is displayed: broker version non aligned on pod < pod name >, the detected version < version > doesn't match the spec.version < version > resolved as < version > . Additional resources For more information on viewing status information in a CR, see Viewing status information for your broker deployment . 2.9. Operator deployment notes This section describes some important considerations when planning an Operator-based deployment Deploying the Custom Resource Definitions (CRDs) that accompany the AMQ Broker Operator requires cluster administrator privileges for your OpenShift cluster. When the Operator is deployed, non-administrator users can create broker instances via corresponding Custom Resources (CRs). To enable regular users to deploy CRs, the cluster administrator must first assign roles and permissions to the CRDs. For more information, see Creating cluster roles for Custom Resource Definitions in the OpenShift Container Platform documentation. When you update your cluster with the CRDs for the latest Operator version, this update affects all projects in the cluster. Any broker pods deployed from versions of the Operator might become unable to update their status. When you click the Logs tab of a running broker pod in the OpenShift Container Platform web console, you see messages indicating that 'UpdatePodStatus' has failed. However, the broker pods and Operator in that project continue to work as expected. To fix this issue for an affected project, you must also upgrade that project to use the latest version of the Operator. While you can create more than one broker deployment in a given OpenShift project by deploying multiple Custom Resource (CR) instances, typically, you create a single broker deployment in a project, and then deploy multiple CR instances for addresses. Red Hat recommends you create broker deployments in separate projects. If you intend to deploy brokers with persistent storage and do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that these are available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage (that is, by setting persistenceEnabled=true in your CR), you need to have two persistent volumes available. 
By default, each broker instance requires storage of 2 GiB. If you specify persistenceEnabled=false in your CR, the deployed brokers uses ephemeral storage. Ephemeral storage means that that every time you restart the broker pods, any existing data is lost. For more information about provisioning persistent storage in OpenShift Container Platform, see: Understanding persistent storage You must add configuration for the items listed below to the main broker CR instance before deploying the CR for the first time. You cannot add configuration for these items to a broker deployment that is already running. The size and storage class of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage Limits and requests for memory and CPU for each broker in a deployment If you update a parameter in your CR that the Operator is unable to dynamically update in the StatefulSet, the Operator deletes the StatefulSet and recreates it with the updated parameter value. Deleting the StatefulSet causes all pods to be deleted and recreated, which causes a temporary broker outage. An example of a CR update that the Operator cannot dynamically update in the StatefulSet is if you change persistenceEnabled=false to persistenceEnabled=true . 2.10. Identifying namespaces watched by existing Operators If the cluster already contains installed Operators for AMQ Broker, and you want a new Operator to watch all or multiple namespaces, you must ensure that the new Operator does not watch any of the same namespaces as existing Operators. Use the following procedure to identify the namespaces watched by existing Operators. Procedure In the left pane of the OpenShift Container Platform web console, click Workloads Deployments . In the Project drop-down list, select All Projects . In the Filter Name box, specify a string, for example, amq , to display the Operators for AMQ Broker that are installed on the cluster. Note The namespace column displays the namespace where each operator is deployed . Check the namespaces that each installed Operator for AMQ Broker is configured to watch . Click the Operator name to display the Operator details and click the YAML tab. Search for WATCH_NAMESPACE and note the namespaces that the Operator watches. If the WATCH_NAMESPACE section has a fieldPath field that has a value of metadata.namespace , the Operator is watching the namespace where it is deployed. If the WATCH_NAMESPACE section has a value field that has list of namespaces, the Operator is watching the specified namespaces. For example: - name: WATCH_NAMESPACE value: "namespace1, namespace2" If the WATCH_NAMESPACE section has a value field that is empty or has an asterisk, the Operator is watching all the namespaces on the cluster. For example: - name: WATCH_NAMESPACE value: "" In this case, before you deploy the new Operator, you must either uninstall the existing Operator or reconfigure it to watch specific namespaces. The procedures in the section show you how to install the Operator and use Custom Resources (CRs) to create broker deployments on OpenShift Container Platform. After you complete the procedures, the Operator runs in an individual Pod and each broker instance that you create runs as an individual Pod in a StatefulSet in the same project as the Operator. Later, you will see how to use a dedicated addressing CR to define addresses in your broker deployment.
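Several of the notes above come together in a single ActiveMQArtemis CR: specifying spec.version instead of pinning image URLs, and setting the persistence, PVC size, and resource limits that must be in place before the CR is deployed for the first time. The following sketch is illustrative only; the version, storage size, and resource values are placeholders, and the field names should be checked against the CRD shipped with your Operator.
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  # Let the Operator choose matching broker and init container images for this version.
  version: '7.12.3'
  deploymentPlan:
    size: 2
    # Settings that cannot be added to a deployment that is already running:
    persistenceEnabled: true
    storage:
      size: 4Gi
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 2Gi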
|
[
"spec: brokerProperties: - globalMaxSize=500m",
"spec: brokerProperties - \"AMQPConnections.target.uri=tcp://< hostname >:< port >\" - \"AMQPConnections.target.connectionElements.mirror.type=MIRROR\" - \"AMQPConnections.target.connectionElements.mirror.messageAcknowledgements=true\" - \"AMQPConnections.target.connectionElements.mirror.queueCreation=true\" - \"AMQPConnections.target.connectionElements.mirror.queueRemoval=true\"",
"- lastTransitionTime: \"2023-02-06T20:50:01Z\" message: \"\" reason: Applied status: \"True\" type: BrokerPropertiesApplied",
"- name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_7123 value: registry.redhat.io/amq7/amq-broker-rhel8@sha256:55ae4e28b100534d63c34ab86f69230d274c999d46d1493f26fe3e75ba7a0cec",
"- name: WATCH_NAMESPACE value: \"namespace1, namespace2\"",
"- name: WATCH_NAMESPACE value: \"\""
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/deploying_amq_broker_on_openshift/assembly-br-planning-a-deployment_broker-ocp
|
Getting Started Guide
|
Getting Started Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
|
[
"bin/kc.sh start-dev",
"bin\\kc.bat start-dev"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/getting_started_guide/index
|
Chapter 51. DeploymentTemplate schema reference
|
Chapter 51. DeploymentTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate Full list of DeploymentTemplate schema properties Use deploymentStrategy to specify the strategy used to replace old pods with new ones when deployment configuration changes. Use one of the following values: RollingUpdate : Pods are restarted with zero downtime. Recreate : Pods are terminated before new ones are created. Using the Recreate deployment strategy has the advantage of not requiring spare resources, but the disadvantage is the application downtime. Example showing the deployment strategy set to Recreate . # ... template: deployment: deploymentStrategy: Recreate # ... This configuration change does not cause a rolling update. 51.1. DeploymentTemplate schema properties Property Description metadata Metadata applied to the resource. MetadataTemplate deploymentStrategy Pod replacement strategy for deployment configuration changes. Valid values are RollingUpdate and Recreate . Defaults to RollingUpdate . string (one of [RollingUpdate, Recreate])
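For context, the template shown above sits inside the resource that owns the Deployment. The following sketch places the deploymentStrategy setting in a KafkaConnect resource; the cluster name, replica count, and bootstrap address are placeholders, and the same template.deployment block can be used in the other resources listed at the start of this chapter.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  template:
    deployment:
      # Recreate terminates old pods before starting new ones, so no spare
      # resources are needed, at the cost of brief downtime.
      deploymentStrategy: Recreate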
|
[
"template: deployment: deploymentStrategy: Recreate"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-DeploymentTemplate-reference
|
Chapter 62. VulnerabilityRequestService
|
Chapter 62. VulnerabilityRequestService 62.1. DeferVulnerability POST /v1/cve/requests/defer DeferVulnerability starts the deferral process for the specified vulnerability. 62.1.1. Description 62.1.2. Parameters 62.1.2.1. Body Parameter Name Description Required Default Pattern body V1DeferVulnRequest X 62.1.3. Return Type V1DeferVulnResponse 62.1.4. Content Type application/json 62.1.5. Responses Table 62.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1DeferVulnResponse 0 An unexpected error response. GooglerpcStatus 62.1.6. Samples 62.1.7. Common object reference 62.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.1.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.1.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.1.7.5. 
StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.1.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.1.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.1.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.1.7.9. StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE, 62.1.7.10. StorageRequestStatus Indicates the status of a request. Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs if it is turned on). PENDING: Default request state. It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE 62.1.7.11. StorageRequester Field Name Required Nullable Type Description Format id String name String 62.1.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 62.1.7.13. StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as deferral request and false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting cve observation. requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate 62.1.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object 62.1.7.15. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 62.1.7.16. V1DeferVulnRequest Field Name Required Nullable Type Description Format cve String This field indicates the CVEs requested to be deferred. comment String scope StorageVulnerabilityRequestScope expiresWhenFixed Boolean expiresOn Date date-time 62.1.7.17. V1DeferVulnResponse Field Name Required Nullable Type Description Format requestInfo StorageVulnerabilityRequest 62.1.7.18. 
VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 62.1.7.19. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String 62.2. FalsePositiveVulnerability POST /v1/cve/requests/false-positive FalsePositiveVulnerability starts the process to mark the specified vulnerability as false-positive. 62.2.1. Description 62.2.2. Parameters 62.2.2.1. Body Parameter Name Description Required Default Pattern body V1FalsePositiveVulnRequest X 62.2.3. Return Type V1FalsePositiveVulnResponse 62.2.4. Content Type application/json 62.2.5. Responses Table 62.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1FalsePositiveVulnResponse 0 An unexpected error response. GooglerpcStatus 62.2.6. Samples 62.2.7. Common object reference 62.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.2.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.2.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.2.7.5. StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.2.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.2.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.2.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.2.7.9. StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE, 62.2.7.10. StorageRequestStatus Indicates the status of a request. Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs if it is turned on). PENDING: Default request state. It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE 62.2.7.11. StorageRequester Field Name Required Nullable Type Description Format id String name String 62.2.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 62.2.7.13. StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as deferral request and false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting cve observation. requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate 62.2.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object 62.2.7.15. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 62.2.7.16. 
V1FalsePositiveVulnRequest Field Name Required Nullable Type Description Format cve String This field indicates the CVE requested to be marked as false-positive. scope StorageVulnerabilityRequestScope comment String 62.2.7.17. V1FalsePositiveVulnResponse Field Name Required Nullable Type Description Format requestInfo StorageVulnerabilityRequest 62.2.7.18. VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 62.2.7.19. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String 62.3. ListVulnerabilityRequests GET /v1/cve/requests ListVulnerabilityRequests returns the list of vulnerability requests. 62.3.1. Description 62.3.2. Parameters 62.3.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 62.3.3. Return Type V1ListVulnerabilityRequestsResponse 62.3.4. Content Type application/json 62.3.5. Responses Table 62.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListVulnerabilityRequestsResponse 0 An unexpected error response. GooglerpcStatus 62.3.6. Samples 62.3.7. Common object reference 62.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.3.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.3.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.3.7.5. StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.3.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.3.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.3.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.3.7.9. StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE, 62.3.7.10. StorageRequestStatus Indicates the status of a request. Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs if it is turned on). PENDING: Default request state. It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE 62.3.7.11. StorageRequester Field Name Required Nullable Type Description Format id String name String 62.3.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 62.3.7.13. StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as deferral request and false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting cve observation. 
requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate 62.3.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object 62.3.7.15. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 62.3.7.16. V1ListVulnerabilityRequestsResponse Field Name Required Nullable Type Description Format requestInfos List of StorageVulnerabilityRequest 62.3.7.17. VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 62.3.7.18. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String 62.4. ApproveVulnerabilityRequest POST /v1/cve/requests/{id}/approve ApproveVulnRequest approve a vulnerability request. If it is an unwatch vulnerability request then the associated vulnerabilities are not watched in workflows such as policy detection, risk, etc. 62.4.1. Description 62.4.2. Parameters 62.4.2.1. Path Parameters Name Description Required Default Pattern id X null 62.4.2.2. Body Parameter Name Description Required Default Pattern body VulnerabilityRequestServiceApproveVulnerabilityRequestBody X 62.4.3. Return Type V1ApproveVulnRequestResponse 62.4.4. Content Type application/json 62.4.5. Responses Table 62.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1ApproveVulnRequestResponse 0 An unexpected error response. GooglerpcStatus 62.4.6. Samples 62.4.7. Common object reference 62.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.4.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.4.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.4.7.5. StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.4.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.4.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.4.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.4.7.9. StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE, 62.4.7.10. StorageRequestStatus Indicates the status of a request. Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs if it is turned on). PENDING: Default request state. It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE 62.4.7.11. StorageRequester Field Name Required Nullable Type Description Format id String name String 62.4.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 62.4.7.13. 
StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as deferral request and false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting cve observation. requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate 62.4.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object 62.4.7.15. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 62.4.7.16. V1ApproveVulnRequestResponse Field Name Required Nullable Type Description Format requestInfo StorageVulnerabilityRequest 62.4.7.17. VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 62.4.7.18. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String 62.4.7.19. VulnerabilityRequestServiceApproveVulnerabilityRequestBody Field Name Required Nullable Type Description Format comment String 62.5. DeleteVulnerabilityRequest DELETE /v1/cve/requests/{id} DeleteVulnerabilityRequest deletes a vulnerability request. 62.5.1. Description 62.5.2. Parameters 62.5.2.1. Path Parameters Name Description Required Default Pattern id X null 62.5.3. Return Type Object 62.5.4. Content Type application/json 62.5.5. Responses Table 62.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 62.5.6. Samples 62.5.7. Common object reference 62.5.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.6. DenyVulnerabilityRequest POST /v1/cve/requests/{id}/deny DenyVulnRequest denies a vulnerability request. 62.6.1. Description 62.6.2. Parameters 62.6.2.1. Path Parameters Name Description Required Default Pattern id X null 62.6.2.2. Body Parameter Name Description Required Default Pattern body VulnerabilityRequestServiceDenyVulnerabilityRequestBody X 62.6.3. Return Type V1DenyVulnRequestResponse 62.6.4. Content Type application/json 62.6.5. Responses Table 62.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1DenyVulnRequestResponse 0 An unexpected error response. GooglerpcStatus 62.6.6. Samples 62.6.7. Common object reference 62.6.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.6.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.6.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.6.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.6.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.6.7.5. StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.6.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.6.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.6.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.6.7.9. StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE, 62.6.7.10. StorageRequestStatus Indicates the status of a request. Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs if it is turned on). PENDING: Default request state. It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE 62.6.7.11. 
StorageRequester Field Name Required Nullable Type Description Format id String name String 62.6.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 62.6.7.13. StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as deferral request and false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting cve observation. requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate 62.6.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object 62.6.7.15. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 62.6.7.16. V1DenyVulnRequestResponse Field Name Required Nullable Type Description Format requestInfo StorageVulnerabilityRequest 62.6.7.17. VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 62.6.7.18. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String 62.6.7.19. VulnerabilityRequestServiceDenyVulnerabilityRequestBody Field Name Required Nullable Type Description Format comment String 62.7. GetVulnerabilityRequest GET /v1/cve/requests/{id} GetVulnerabilityRequest returns the requested vulnerability request by ID. 62.7.1. Description 62.7.2. Parameters 62.7.2.1. Path Parameters Name Description Required Default Pattern id X null 62.7.3. Return Type V1GetVulnerabilityRequestResponse 62.7.4. Content Type application/json 62.7.5. Responses Table 62.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetVulnerabilityRequestResponse 0 An unexpected error response. GooglerpcStatus 62.7.6. Samples 62.7.7. Common object reference 62.7.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.7.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.7.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.7.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.7.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.7.7.5. StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.7.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.7.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.7.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.7.7.9. StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE, 62.7.7.10. StorageRequestStatus Indicates the status of a request. Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs if it is turned on). PENDING: Default request state. 
It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE 62.7.7.11. StorageRequester Field Name Required Nullable Type Description Format id String name String 62.7.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 62.7.7.13. StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as deferral request and false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting cve observation. requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate 62.7.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object 62.7.7.15. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 62.7.7.16. V1GetVulnerabilityRequestResponse Field Name Required Nullable Type Description Format requestInfo StorageVulnerabilityRequest 62.7.7.17. VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 62.7.7.18. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String 62.8. UndoVulnerabilityRequest POST /v1/cve/requests/{id}/undo UndoVulnerabilityRequest undoes a vulnerability request. 62.8.1. Description 62.8.2. Parameters 62.8.2.1. Path Parameters Name Description Required Default Pattern id X null 62.8.3. Return Type V1UndoVulnRequestResponse 62.8.4. Content Type application/json 62.8.5. Responses Table 62.8. HTTP Response Codes Code Message Datatype 200 A successful response. V1UndoVulnRequestResponse 0 An unexpected error response. GooglerpcStatus 62.8.6. Samples 62.8.7. Common object reference 62.8.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.8.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. 
Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.8.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.8.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.8.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.8.7.5. StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.8.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.8.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.8.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.8.7.9. StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE, 62.8.7.10. StorageRequestStatus Indicates the status of a request. 
Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs if it is turned on). PENDING: Default request state. It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE 62.8.7.11. StorageRequester Field Name Required Nullable Type Description Format id String name String 62.8.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String 62.8.7.13. StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as deferral request and false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting cve observation. requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate 62.8.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object 62.8.7.15. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 62.8.7.16. V1UndoVulnRequestResponse Field Name Required Nullable Type Description Format requestInfo StorageVulnerabilityRequest 62.8.7.17. VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 62.8.7.18. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String 62.9. UpdateVulnerabilityRequest POST /v1/cve/requests/{id}/update UpdateVulnerabilityRequest updates an existing vulnerability request. Currently only deferral expiration time can be updated. 62.9.1. Description 62.9.2. Parameters 62.9.2.1. Path Parameters Name Description Required Default Pattern id X null 62.9.2.2. Body Parameter Name Description Required Default Pattern body VulnerabilityRequestServiceUpdateVulnerabilityRequestBody X 62.9.3. Return Type V1UpdateVulnRequestResponse 62.9.4. Content Type application/json 62.9.5. Responses Table 62.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1UpdateVulnRequestResponse 0 An unexpected error response. GooglerpcStatus 62.9.6. Samples 62.9.7. Common object reference 62.9.7.1. 
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 62.9.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 62.9.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 62.9.7.3. RequestExpiryExpiryType Enum Values TIME ALL_CVE_FIXABLE ANY_CVE_FIXABLE 62.9.7.4. StorageApprover Field Name Required Nullable Type Description Format id String name String 62.9.7.5. StorageDeferralRequest Field Name Required Nullable Type Description Format expiry StorageRequestExpiry 62.9.7.6. StorageDeferralUpdate Field Name Required Nullable Type Description Format CVEs List of string expiry StorageRequestExpiry 62.9.7.7. StorageFalsePositiveUpdate Field Name Required Nullable Type Description Format CVEs List of string 62.9.7.8. StorageRequestComment Field Name Required Nullable Type Description Format id String message String user StorageSlimUser createdAt Date date-time 62.9.7.9. 
StorageRequestExpiry Field Name Required Nullable Type Description Format expiresWhenFixed Boolean Indicates that this request expires when the associated vulnerability is fixed. expiresOn Date Indicates the timestamp when this request expires. date-time expiryType RequestExpiryExpiryType TIME, ALL_CVE_FIXABLE, ANY_CVE_FIXABLE,
62.9.7.10. StorageRequestStatus Indicates the status of a request. Requests canceled by the user before they are acted upon by the approver are not tracked/persisted (with the exception of audit logs, if audit logging is turned on). PENDING: Default request state. It indicates that the request has not been fulfilled and that an action (approve/deny) is required. APPROVED: Indicates that the request has been approved by the approver. DENIED: Indicates that the request has been denied by the approver. APPROVED_PENDING_UPDATE: Indicates that the original request was approved, but an update is still pending an approval or denial. Enum Values PENDING APPROVED DENIED APPROVED_PENDING_UPDATE
62.9.7.11. StorageRequester Field Name Required Nullable Type Description Format id String name String
62.9.7.12. StorageSlimUser Field Name Required Nullable Type Description Format id String name String
62.9.7.13. StorageVulnerabilityRequest available tag: 30 VulnerabilityRequest encapsulates a request such as a deferral request or a false-positive request. Field Name Required Nullable Type Description Format id String name String targetState StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, status StorageRequestStatus PENDING, APPROVED, DENIED, APPROVED_PENDING_UPDATE, expired Boolean Indicates if this request is a historical request that is no longer in effect due to deferral expiry, cancellation, or restarting CVE observation. requestor StorageSlimUser approvers List of StorageSlimUser createdAt Date date-time lastUpdated Date date-time comments List of StorageRequestComment scope StorageVulnerabilityRequestScope requesterV2 StorageRequester approversV2 List of StorageApprover deferralReq StorageDeferralRequest fpRequest Object cves VulnerabilityRequestCVEs updatedDeferralReq StorageDeferralRequest deferralUpdate StorageDeferralUpdate falsePositiveUpdate StorageFalsePositiveUpdate
62.9.7.14. StorageVulnerabilityRequestScope Field Name Required Nullable Type Description Format imageScope VulnerabilityRequestScopeImage globalScope Object
62.9.7.15. StorageVulnerabilityState VulnerabilityState indicates whether a vulnerability is being observed or deferred (suppressed). By default, vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE
62.9.7.16. V1UpdateVulnRequestResponse Field Name Required Nullable Type Description Format requestInfo StorageVulnerabilityRequest
62.9.7.17. VulnerabilityRequestCVEs Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, the cve field of storage.CVE, and not the id field. For example, CVE-2021-44832.
62.9.7.18. VulnerabilityRequestScopeImage Field Name Required Nullable Type Description Format registry String remote String tag String
62.9.7.19. VulnerabilityRequestServiceUpdateVulnerabilityRequestBody Field Name Required Nullable Type Description Format comment String expiry StorageRequestExpiry
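As an illustration of how the endpoints in this chapter fit together, the following Python sketch defers a CVE for one image scope, approves the resulting request, and then updates the deferral expiry. It is a minimal sketch, not a supported client: the Central address, API token, timestamps, and image coordinates are placeholders (assumptions), while the endpoint paths and JSON field names come from the request and response tables above.

# Illustrative sketch only: host, token, dates, and image coordinates are placeholders;
# endpoint paths and field names are taken from this chapter's tables.
import requests

CENTRAL = "https://central.example.com"            # assumed Central address (placeholder)
HEADERS = {"Authorization": "Bearer <api-token>"}  # assumed API token (placeholder)

# 1. DeferVulnerability: V1DeferVulnRequest with an image scope, expiring when fixed.
defer_body = {
    "cve": "CVE-2021-44832",
    "comment": "Defer until a fixed base image is available",
    "scope": {"imageScope": {"registry": "quay.io", "remote": "example/app", "tag": "1.0"}},
    "expiresWhenFixed": True,
}
resp = requests.post(f"{CENTRAL}/v1/cve/requests/defer", json=defer_body, headers=HEADERS)
resp.raise_for_status()
request_id = resp.json()["requestInfo"]["id"]      # V1DeferVulnResponse.requestInfo

# 2. ApproveVulnerabilityRequest: approve the pending request with a comment.
requests.post(f"{CENTRAL}/v1/cve/requests/{request_id}/approve",
              json={"comment": "Approved by the security team"},
              headers=HEADERS).raise_for_status()

# 3. UpdateVulnerabilityRequest: only the deferral expiry can be updated; switch it
#    to a fixed timestamp (StorageRequestExpiry carried in the request body).
update_body = {
    "comment": "Limit the deferral to a fixed date",
    "expiry": {"expiresOn": "2024-07-01T00:00:00Z", "expiryType": "TIME"},
}
requests.post(f"{CENTRAL}/v1/cve/requests/{request_id}/update",
              json=update_body, headers=HEADERS).raise_for_status()

After these calls, ListVulnerabilityRequests ( GET /v1/cve/requests ) or GetVulnerabilityRequest ( GET /v1/cve/requests/{id} ) can be used to inspect the request, and UndoVulnerabilityRequest or DeleteVulnerabilityRequest to withdraw it.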
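Every endpoint in this chapter reports non-200 responses as a GooglerpcStatus, whose details list carries ProtobufAny values identified by an @type URL. The following Python sketch is likewise illustrative only; it assumes the same placeholder host and token, and the concrete @type values returned by a deployment are not defined by this reference.

# Illustrative sketch: generic handling of an error response (GooglerpcStatus).
# Host, token, and request ID are placeholders; the field names (code, message,
# details, @type) match the GooglerpcStatus and ProtobufAny tables in this chapter.
import requests

CENTRAL = "https://central.example.com"            # assumed Central address (placeholder)
HEADERS = {"Authorization": "Bearer <api-token>"}  # assumed API token (placeholder)

# GetVulnerabilityRequest with an ID that does not exist, to show error decoding.
resp = requests.get(f"{CENTRAL}/v1/cve/requests/nonexistent-id", headers=HEADERS)
if resp.status_code == 200:
    print(resp.json()["requestInfo"]["status"])     # V1GetVulnerabilityRequestResponse
else:
    status = resp.json()                            # GooglerpcStatus
    print("code:", status.get("code"))              # int32 code
    print("message:", status.get("message"))
    for detail in status.get("details", []):        # each detail is a ProtobufAny
        # The JSON form of Any carries its type URL in "@type"; the remaining
        # keys are the embedded message's own JSON representation.
        print("detail type:", detail.get("@type"))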
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"next available tag: 6",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/vulnerabilityrequestservice
|
4.60. file
|
4.60. file 4.60.1. RHBA-2011:0934 - file bug fix update Updated file packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The file command is used to identify a particular file according to the type of data contained in the file. [Updated 7 September 2011] This update fixes a bug in which the file utility did not parse ELF (Executable and Linkable Format) binary files correctly. If an entry in the program header table contained a file offset beyond the end of file, dynamically linked files were reported as being linked statically. The file utility now recognizes files in the described scenario correctly. (BZ#730336) Bug Fixes BZ#676045, BZ#712992, BZ#712988 Prior to this update, the file utility could fail to recognize RPM files for certain supported architectures. This update improves file type recognition, and RPM files for all supported architectures are now identified correctly. BZ#688700 Prior to this update, the file utility did not correctly recognize IBM System z kernel images. This problem has been corrected so that IBM System z kernel images are now recognized correctly. BZ#692098 Prior to this update, the file utility attempted to show information related to core dumps for binary files that were not core dumps. This undesired behavior has been fixed in this update so that information related to core dumps is now shown only for actual core dumps. BZ#675691 Prior to this update, file patterns for LaTeX checked only the first 400 bytes of a file to determine the pattern type. This caused incorrect pattern type recognition when a file contained a large number of comments at its beginning. Furthermore, file patterns that matched a Python script were tried before the LaTeX patterns, which could cause incorrect recognition of LaTeX files that included source code written in Python. With this update, these problems have been fixed by increasing the number of initial bytes checked for a LaTeX file to 4096, and by trying the LaTeX patterns before the Python patterns. BZ#690801 Prior to this update, the magic(5) manual page contained several spelling mistakes. This update corrects the spelling mistakes in that manual page. BZ#716665 Prior to this update, the file utility treated MP3 files as text files and was therefore unable to recognize them. With this update, the file utility treats MP3 files as binary files and recognizes them properly. All users of file are advised to upgrade to these updated packages, which fix these bugs.
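A quick way to confirm the ELF fix described above is to run file against a dynamically linked binary; a correctly working utility reports it as dynamically linked rather than statically linked. The path and output below are illustrative only and will vary by system:
~]$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped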
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/file
|
Part II. Managing developer accounts
|
Part II. Managing developer accounts
| null |
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/managing_developer_accounts
|
2.7.4. Site-to-Site VPN Using Libreswan
|
2.7.4. Site-to-Site VPN Using Libreswan To create a site-to-site IPsec VPN, joining together two networks, an IPsec tunnel is created between two hosts, endpoints, which are configured to permit traffic from one or more subnets to pass through. They can therefore be thought of as gateways to the remote portion of the network. The configuration of the site-to-site VPN only differs from the host-to-host VPN in that one or more networks or subnets must be specified in the configuration file. To configure Libreswan to create a site-to-site IPsec VPN, first configure a host-to-host IPsec VPN as described in Section 2.7.3, "Host-To-Host VPN Using Libreswan" and then copy or move the file to a file with a suitable name, such as /etc/ipsec.d/my_site-to-site.conf . Using an editor running as root, edit the custom configuration file /etc/ipsec.d/my_site-to-site.conf as follows: To bring the tunnels up, restart Libreswan or manually load and initiate all the connections using the following commands as root: 2.7.4.1. Verify Site-to-Site VPN Using Libreswan Verifying that packets are being sent via the VPN tunnel is the same procedure as explained in Section 2.7.3.1, "Verify Host-To-Host VPN Using Libreswan".
|
[
"conn mysubnet also=mytunnel leftsubnet=192.0.1.0/24 rightsubnet=192.0.2.0/24 conn mysubnet6 also=mytunnel connaddrfamily=ipv6 leftsubnet=2001:db8:0:1::/64 rightsubnet=2001:db8:0:2::/64 conn mytunnel [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig",
"~]# ipsec auto --add mysubnet",
"~]# ipsec auto --add mysubnet6",
"~]# ipsec auto --add mytunnel",
"~]# ipsec auto --up mysubnet 104 \"mysubnet\" #1: STATE_MAIN_I1: initiate 003 \"mysubnet\" #1: received Vendor ID payload [Dead Peer Detection] 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 106 \"mysubnet\" #1: STATE_MAIN_I2: sent MI2, expecting MR2 108 \"mysubnet\" #1: STATE_MAIN_I3: sent MI3, expecting MR3 003 \"mysubnet\" #1: received Vendor ID payload [CAN-IKEv2] 004 \"mysubnet\" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_RSA_SIG cipher=aes_128 prf=oakley_sha group=modp2048} 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x9414a615 <0x1a8eb4ef xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"~]# ipsec auto --up mysubnet6 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x06fe2099 <0x75eaa862 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"~]# ipsec auto --up mytunnel 104 \"mytunnel\" #1: STATE_MAIN_I1: initiate 003 \"mytunnel\" #1: received Vendor ID payload [Dead Peer Detection] 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 106 \"mytunnel\" #1: STATE_MAIN_I2: sent MI2, expecting MR2 108 \"mytunnel\" #1: STATE_MAIN_I3: sent MI3, expecting MR3 003 \"mytunnel\" #1: received Vendor ID payload [CAN-IKEv2] 004 \"mytunnel\" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_RSA_SIG cipher=aes_128 prf=oakley_sha group=modp2048} 117 \"mytunnel\" #2: STATE_QUICK_I1: initiate 004 \"mytunnel\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x16bca4f7 >0x9c2ae273 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/site-to-site_vpn_using_libreswan
|
Release Notes
|
Release Notes Red Hat Trusted Profile Analyzer 1.2 Release notes for Red Hat Trusted Profile Analyzer 1.2.2 Red Hat Trusted Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1.2/html/release_notes/index
|
Chapter 14. Log Record Fields
|
Chapter 14. Log Record Fields The following fields can be present in log records exported by OpenShift Logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings. To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch /_search URL, to look for a Kubernetes pod name, use /_search?q=kubernetes.pod_name:name-of-my-pod. The top level fields may be present in every record.
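A search like the one referenced above can also be issued directly against the Elasticsearch API. The following is a sketch only: the Elasticsearch route, the app-* index pattern, and the use of oc whoami -t for the bearer token are assumptions that depend on how your logging stack is exposed; the query string itself uses the dotted field name described above.
# NOTE: <elasticsearch-route> and the app-* index pattern are placeholders; adjust to your environment.
curl -G -k \
  -H "Authorization: Bearer $(oc whoami -t)" \
  "https://<elasticsearch-route>/app-*/_search" \
  --data-urlencode "q=kubernetes.pod_name:name-of-my-pod"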
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/cluster-logging-exported-fields
|
29.2. Examples for Using migrate-ds
|
29.2. Examples for Using migrate-ds The data migration is performed with the ipa migrate-ds command. At its simplest, the command takes the LDAP URL of the directory to migrate and exports the data based on common default settings. It is possible to customize how the migrate-ds command identifies and exports data. This is useful if the original directory tree has a unique structure or if some entries or attributes within entries should be excluded from migration. 29.2.1. Migrating Specific Subtrees The default directory structure places person entries in the ou=People subtree and group entries in the ou=Groups subtree. These subtrees are container entries for those different types of directory data. If no options are passed with the migrate-ds command, then the utility assumes that the given LDAP directory uses the ou=People and ou=Groups structure. Many deployments may have an entirely different directory structure (or may only want to export certain parts of the directory tree). There are two options that allow administrators to give the RDN of a different user or group subtree: --user-container --group-container Note In both cases, the subtree must be the RDN only and must be relative to the base DN. For example, the ou=Employees,dc=example,dc=com subtree can be migrated using --user-container=ou=Employees, but ou=Employees,ou=People,dc=example,dc=com cannot be migrated with that option because ou=Employees is not a direct child of the base DN. For example: There is a third option that allows administrators to set a base DN for migration: --base-dn. With this option, it is possible to change the target for container subtrees. For example: Now, the ou=Employees user subtree can be migrated from within the larger ou=People subtree without migrating every people-related subtree. 29.2.2. Specifically Including or Excluding Entries By default, the migrate-ds script exports every user entry with the person object class and every group entry within the given user and group subtrees. In some migration paths, only specific types of users and groups may need to be exported, or, conversely, specific users and groups may need to be excluded. One option is to positively specify which types of users and groups to include. This is done by setting which object classes to search for when looking for user or group entries. This is a particularly useful option when custom object classes are used in an environment for different user types. For example, this migrates only users with the custom fullTimeEmployee object class: Because of the different types of groups, this is also very useful for migrating only certain types of groups (such as user groups) while excluding other types of groups, like certificate groups. For example: Positively specifying users and groups to migrate based on object class implicitly excludes all other users and groups from migration. Alternatively, it can be useful to migrate all user and group entries except for just a small handful of entries. Specific user or group accounts can be excluded while all others of that type are migrated. For example, this excludes a hobbies group and two users: Specifying an object class to migrate can be used together with excluding specific entries. For example, this specifically includes users with the fullTimeEmployee object class, yet excludes three managers: 29.2.3. Excluding Entry Attributes By default, every attribute and object class for a user or group entry is migrated.
There are some cases where that may not be realistic, either because of bandwidth and network constraints or because the attribute data are no longer relevant. For example, if users are going to be assigned new user certificates as they join the IdM domain, then there is no reason to migrate the userCertificate attribute. Specific object classes and attributes can be ignored by the migrate-ds command using any of several different options: --user-ignore-objectclass --user-ignore-attribute --group-ignore-objectclass --group-ignore-attribute For example, to exclude the userCertificate attribute and strongAuthenticationUser object class for users and the groupOfCertificates object class for groups: Note Make sure not to ignore any required attributes. Also, when excluding object classes, make sure to exclude any attributes that are supported only by that object class. 29.2.4. Setting the Schema to Use By default, Identity Management uses the RFC2307bis schema to define user, host, hostgroup, and other network identities. This schema option can be reset to use the RFC2307 schema instead:
|
[
"ipa migrate-ds ldap://ldap.example.com:389",
"ipa migrate-ds --user-container=ou=employees --group-container=\"ou=employee groups\" ldap://ldap.example.com:389",
"ipa migrate-ds --user-container=ou=employees --base-dn=\"ou=people,dc=example,dc=com\" ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee ldap://ldap.example.com:389",
"ipa migrate-ds --group-objectclass=groupOfNames,groupOfUniqueNames ldap://ldap.example.com:389",
"ipa migrate-ds --exclude-groups=\"Golfers Group\" --exclude-users=jsmith,bjensen ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee --exclude-users=jsmith,bjensen,mreynolds ldap://ldap.example.com:389",
"ipa migrate-ds --user-ignore-attribute=userCertificate --user-ignore-objectclass=strongAuthenticationUser --group-ignore-objectclass=groupOfCertificates ldap://ldap.example.com:389",
"ipa migrate-ds --schema=RFC2307 ldap://ldap.example.com:389"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/using-migrate-ds
|
Chapter 13. Job [batch/v1]
|
Chapter 13. Job [batch/v1] Description Job represents the configuration of a single job. Type object 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object JobSpec describes how the job execution will look like. status object JobStatus represents the current state of a Job. 13.1.1. .spec Description JobSpec describes how the job execution will look like. Type object Required template Property Type Description activeDeadlineSeconds integer Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it; value must be positive integer. If a Job is suspended (at creation or through an update), this timer will effectively be stopped and reset when the Job is resumed again. backoffLimit integer Specifies the number of retries before marking this job failed. Defaults to 6 completionMode string CompletionMode specifies how Pod completions are tracked. It can be NonIndexed (default) or Indexed . NonIndexed means that the Job is considered complete when there have been .spec.completions successfully completed Pods. Each Pod completion is homologous to each other. Indexed means that the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1), available in the annotation batch.kubernetes.io/job-completion-index. The Job is considered complete when there is one successfully completed Pod for each index. When value is Indexed , .spec.completions must be specified and .spec.parallelism must be less than or equal to 10^5. In addition, The Pod name takes the form USD(job-name)-USD(index)-USD(random-string) , the Pod hostname takes the form USD(job-name)-USD(index) . More completion modes can be added in the future. If the Job controller observes a mode that it doesn't recognize, which is possible during upgrades due to version skew, the controller skips updates for the Job. completions integer Specifies the desired number of successfully finished pods the job should be run with. Setting to nil means that the success of any pod signals the success of all pods, and allows parallelism to have any positive value. Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ manualSelector boolean manualSelector controls generation of pod labels and pod selectors. Leave manualSelector unset unless you are certain what you are doing. When false or unset, the system pick labels unique to this job and appends those labels to the pod template. When true, the user is responsible for picking unique labels and specifying the selector. Failure to pick a unique label may cause this and other jobs to not function correctly. 
However, You may see manualSelector=true in jobs that were created with the old extensions/v1beta1 API. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#specifying-your-own-pod-selector parallelism integer Specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ podFailurePolicy object PodFailurePolicy describes how failed pods influence the backoffLimit. selector LabelSelector A label query over pods that should match the pod count. Normally, the system sets this field for you. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors suspend boolean Suspend specifies whether the Job controller should create Pods or not. If a Job is created with suspend set to true, no Pods are created by the Job controller. If a Job is suspended after creation (i.e. the flag goes from false to true), the Job controller will delete all active Pods associated with this Job. Users must design their workload to gracefully handle this. Suspending a Job will reset the StartTime field of the Job, effectively resetting the ActiveDeadlineSeconds timer too. Defaults to false. template PodTemplateSpec Describes the pod that will be created when executing a job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ ttlSecondsAfterFinished integer ttlSecondsAfterFinished limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, ttlSecondsAfterFinished after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the Job won't be automatically deleted. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes. 13.1.2. .spec.podFailurePolicy Description PodFailurePolicy describes how failed pods influence the backoffLimit. Type object Required rules Property Type Description rules array A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. rules[] object PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of OnExitCodes and onPodConditions, but not both, can be used in each rule. 13.1.3. .spec.podFailurePolicy.rules Description A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. Type array 13.1.4. .spec.podFailurePolicy.rules[] Description PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of OnExitCodes and onPodConditions, but not both, can be used in each rule. 
Type object Required action Property Type Description action string Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: - FailJob: indicates that the pod's job is marked as Failed and all running pods are terminated. - Ignore: indicates that the counter towards the .backoffLimit is not incremented and a replacement pod is created. - Count: indicates that the pod is handled in the default way - the counter towards the .backoffLimit is incremented. Additional values are considered to be added in the future. Clients should react to an unknown action by skipping the rule. Possible enum values: - "Count" This is an action which might be taken on a pod failure - the pod failure is handled in the default way - the counter towards .backoffLimit, represented by the job's .status.failed field, is incremented. - "FailJob" This is an action which might be taken on a pod failure - mark the pod's job as Failed and terminate all running pods. - "Ignore" This is an action which might be taken on a pod failure - the counter towards .backoffLimit, represented by the job's .status.failed field, is not incremented and a replacement pod is created. onExitCodes object PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. onPodConditions array Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. onPodConditions[] object PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. 13.1.5. .spec.podFailurePolicy.rules[].onExitCodes Description PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. Type object Required operator values Property Type Description containerName string Restricts the check for exit codes to the container with the specified name. When null, the rule applies to all containers. When specified, it should match one the container or initContainer names in the pod template. operator string Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: - In: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is in the set of specified values. - NotIn: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is not in the set of specified values. Additional values are considered to be added in the future. 
Clients should react to an unknown operator by assuming the requirement is not satisfied. Possible enum values: - "In" - "NotIn" values array (integer) Specifies the set of values. Each returned container exit code (might be multiple in case of multiple containers) is checked against this set of values with respect to the operator. The list of values must be ordered and must not contain duplicates. Value '0' cannot be used for the In operator. At least one element is required. At most 255 elements are allowed. 13.1.6. .spec.podFailurePolicy.rules[].onPodConditions Description Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. Type array 13.1.7. .spec.podFailurePolicy.rules[].onPodConditions[] Description PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. Type object Required type status Property Type Description status string Specifies the required Pod condition status. To match a pod condition it is required that the specified status equals the pod condition status. Defaults to True. type string Specifies the required Pod condition type. To match a pod condition it is required that specified type equals the pod condition type. 13.1.8. .status Description JobStatus represents the current state of a Job. Type object Property Type Description active integer The number of pending and running pods. completedIndexes string CompletedIndexes holds the completed indexes when .spec.completionMode = "Indexed" in a text format. The indexes are represented as decimal integers separated by commas. The numbers are listed in increasing order. Three or more consecutive numbers are compressed and represented by the first and last element of the series, separated by a hyphen. For example, if the completed indexes are 1, 3, 4, 5 and 7, they are represented as "1,3-5,7". completionTime Time Represents time when the job was completed. It is not guaranteed to be set in happens-before order across separate operations. It is represented in RFC3339 form and is in UTC. The completion time is only set when the job finishes successfully. conditions array The latest available observations of an object's current state. When a Job fails, one of the conditions will have type "Failed" and status true. When a Job is suspended, one of the conditions will have type "Suspended" and status true; when the Job is resumed, the status of this condition will become false. When a Job is completed, one of the conditions will have type "Complete" and status true. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ conditions[] object JobCondition describes current state of a job. failed integer The number of pods which reached phase Failed. ready integer The number of pods which have a Ready condition. This field is beta-level. The job controller populates the field when the feature gate JobReadyPods is enabled (enabled by default). startTime Time Represents time when the job controller started processing a job. When a Job is created in the suspended state, this field is not set until the first time it is resumed. This field is reset every time a Job is resumed from suspension. It is represented in RFC3339 form and is in UTC. succeeded integer The number of pods which reached phase Succeeded. 
uncountedTerminatedPods object UncountedTerminatedPods holds UIDs of Pods that have terminated but haven't been accounted in Job status counters. 13.1.9. .status.conditions Description The latest available observations of an object's current state. When a Job fails, one of the conditions will have type "Failed" and status true. When a Job is suspended, one of the conditions will have type "Suspended" and status true; when the Job is resumed, the status of this condition will become false. When a Job is completed, one of the conditions will have type "Complete" and status true. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ Type array 13.1.10. .status.conditions[] Description JobCondition describes current state of a job. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of job condition, Complete or Failed. 13.1.11. .status.uncountedTerminatedPods Description UncountedTerminatedPods holds UIDs of Pods that have terminated but haven't been accounted in Job status counters. Type object Property Type Description failed array (string) Failed holds UIDs of failed Pods. succeeded array (string) Succeeded holds UIDs of succeeded Pods. 13.2. API endpoints The following API endpoints are available: /apis/batch/v1/jobs GET : list or watch objects of kind Job /apis/batch/v1/watch/jobs GET : watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/jobs DELETE : delete collection of Job GET : list or watch objects of kind Job POST : create a Job /apis/batch/v1/watch/namespaces/{namespace}/jobs GET : watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/jobs/{name} DELETE : delete a Job GET : read the specified Job PATCH : partially update the specified Job PUT : replace the specified Job /apis/batch/v1/watch/namespaces/{namespace}/jobs/{name} GET : watch changes to an object of kind Job. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status GET : read status of the specified Job PATCH : partially update status of the specified Job PUT : replace status of the specified Job 13.2.1. /apis/batch/v1/jobs Table 13.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Job Table 13.2. HTTP responses HTTP code Reponse body 200 - OK JobList schema 401 - Unauthorized Empty 13.2.2. /apis/batch/v1/watch/jobs Table 13.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. Table 13.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.3. /apis/batch/v1/namespaces/{namespace}/jobs Table 13.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 13.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Job Table 13.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. 
If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 13.8. Body parameters Parameter Type Description body DeleteOptions schema Table 13.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Job Table 13.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.11. HTTP responses HTTP code Reponse body 200 - OK JobList schema 401 - Unauthorized Empty HTTP method POST Description create a Job Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body Job schema Table 13.14. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 202 - Accepted Job schema 401 - Unauthorized Empty 13.2.4. /apis/batch/v1/watch/namespaces/{namespace}/jobs Table 13.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 13.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. Table 13.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.5. /apis/batch/v1/namespaces/{namespace}/jobs/{name} Table 13.18. Global path parameters Parameter Type Description name string name of the Job namespace string object name and auth scope, such as for teams and projects Table 13.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Job Table 13.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.21. Body parameters Parameter Type Description body DeleteOptions schema Table 13.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Job Table 13.23. HTTP responses HTTP code Reponse body 200 - OK Job schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Job Table 13.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.25. Body parameters Parameter Type Description body Patch schema Table 13.26. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Job Table 13.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.28. Body parameters Parameter Type Description body Job schema Table 13.29. 
HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty 13.2.6. /apis/batch/v1/watch/namespaces/{namespace}/jobs/{name} Table 13.30. Global path parameters Parameter Type Description name string name of the Job namespace string object name and auth scope, such as for teams and projects Table 13.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Job. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 13.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.7. /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status Table 13.33. Global path parameters Parameter Type Description name string name of the Job namespace string object name and auth scope, such as for teams and projects Table 13.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Job Table 13.35. HTTP responses HTTP code Reponse body 200 - OK Job schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Job Table 13.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.37. Body parameters Parameter Type Description body Patch schema Table 13.38. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Job Table 13.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.40. Body parameters Parameter Type Description body Job schema Table 13.41. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty
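As a usage sketch of the list pagination described in the query parameters above, the following standalone Java program pages through Jobs with the limit and continue query parameters of the /apis/batch/v1/namespaces/{namespace}/jobs endpoint. The API server URL, namespace, and bearer token are illustrative assumptions, TLS trust configuration is omitted, and the continue token would normally be read from the metadata.continue field of the previous JobList response (shown here only as a placeholder) rather than hard-coded.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class JobListPager {
    public static void main(String[] args) throws Exception {
        // Illustrative assumptions: replace with your API server, namespace, and token.
        String apiServer = "https://api.example.com:6443";
        String namespace = "default";
        String token = System.getenv("TOKEN");

        HttpClient client = HttpClient.newHttpClient();

        // First page: ask the server for at most 50 Jobs.
        HttpRequest firstPage = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/batch/v1/namespaces/" + namespace + "/jobs?limit=50"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = client.send(firstPage, HttpResponse.BodyHandlers.ofString());
        System.out.println("First page HTTP status: " + response.statusCode());

        // If more results exist, the JobList response carries metadata.continue.
        // Parse it with a JSON library of your choice; a placeholder value is used here.
        String continueToken = "<metadata.continue from the previous response>";
        HttpRequest nextPage = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/batch/v1/namespaces/" + namespace
                        + "/jobs?limit=50&continue=" + URLEncoder.encode(continueToken, StandardCharsets.UTF_8)))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        System.out.println("Next page HTTP status: "
                + client.send(nextPage, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}

Because the server answers continued requests from the snapshot taken at the first request, repeating the second call until metadata.continue is empty yields a consistent view of the full list.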
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/workloads_apis/job-batch-v1
Deploying and managing RHEL systems in hybrid clouds
Red Hat Insights 1-latest. Deploying and managing your customized RHEL system images in hybrid clouds. Red Hat Customer Content Services.
[
"ipa-hcc register <registration token> Domain information: realm name: <REALM_NAME> domain name: <domain_name> dns domains: <dns_domains>",
"Proceed with registration? Yes/No (default No): <Yes>",
"dnf remove ipa-hcc-server",
"chmod 400 <your-instance-name.pem>",
"ssh -i \"<_your-instance-name.pem_> ec2-user@<_your-instance-IP-address_>\"",
"{ \"image_status\": { \"status\": \"success\", \"upload_status\": { \"options\": { \"image_name\": \"composer-api-03f0e19c-0050-4c8a-a69e-88790219b086\", \"project_id\": \"red-hat-image-builder\" }, \"status\": \"success\", \"type\": \"gcp\" } } }",
"gcloud config set project PROJECT_ID",
"gcloud compute instances create INSTANCE_NAME --image-project PROJECT_ID_FROM_RESPONSE --image IMAGE_NAME --zone GCP_ZONE",
"gcloud compute instances describe INSTANCE_NAME",
"gcloud compute ssh --project= PROJECT_ID --zone= ZONE INSTANCE_NAME",
"{ \"image_status\": { \"status\": \"success\", \"upload_status\": { \"options\": { \"image_name\": \"composer-api-03f0e19c-0050-4c8a-a69e-88790219b086\", \"project_id\": \"red-hat-image-builder\" }, \"status\": \"success\", \"type\": \"gcp\" } } }",
"\"image_name\": \"composer-api-03f0e19c-0050-4c8a-a69e-88790219b086\", \"project_id\": \"red-hat-image-builder\"",
"gcloud config set project PROJECT_ID",
"gcloud compute images create MY_IMAGE_NAME --source-image-project red-hat-image-builder --source-image IMAGE_NAME",
"gcloud compute images list --no-standard-images",
"instance-id: cloud-vm local-hostname: vmname",
"#cloud-config users: - name: admin sudo: \"ALL=(ALL) NOPASSWD:ALL\" ssh_authorized_keys: - ssh-rsa AAA...fhHQ== [email protected]",
"request.json { \"image_name\": \"ova_image_name\", \"distribution\": \"rhel-94\", \"image_requests\": [ { \"architecture\": \"x86_64\", \"image_type\": \"vsphere-ova\", \"upload_request\": { \"type\": \"vmdk\", \"options\": {} } } ], \"customizations\": { \"users\": [ { \"name\": \"user-name\", \"ssh_key\": \"ssh-rsa AAAAB...qfGI+vk\", \"password\": \"password\" } ] } }",
"curl --silent --request POST --header \"Authorization: Bearer USDaccess_token\" --header \"Content-Type: application/json\" --data @request.json https://console.redhat.com/api/image-builder/v1/compose",
"{\"id\":\"fd4ecf3c-f0ce-43dd-9fcc-6ad11208b939\"}",
"curl --silent --header \"Authorization: Bearer USDaccess_token\" \"https://console.redhat.com/api/image-builder/v1/composes/USDcompose_id\" | image_ID.",
"{\"id\":\"fd4ecf3c-f0ce-43dd-9fcc-6ad11208b939\"}",
"{ \"image_status\": { \"status\": \"success\", \"upload_status\": { \"options\": { \"url\": \"https://image-builder-service-production.s3.amazonaws.com/composer-api-76...-disk.ova?e42...\" }, \"status\": \"success\", \"type\": \"aws.s3\" } } }",
"curl --location --output vsphere-ova.vmdk \"https://image-builder-service-production.s3.amazonaws.com/composer-api-76...-disk.ova?e42...\"",
"GOVC_URL GOVC_DATACENTER GOVC_FOLDER GOVC_DATASTORE GOVC_RESOURCE_POOL GOVC_NETWORK",
"export METADATA=USD(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) USERDATA=USD(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })",
"govc import.vmdk ./composer-api.vmdk foldername",
"govc vm.create -net.adapter=vmxnet3 -m=4096 -c=2 -g=rhel8_64Guest -firmware=bios -disk=\" foldername /composer-api.vmdk\" -disk.controller=ide -on=false vmname",
"govc vm.change -vm vmname -e guestinfo.metadata=\"USD{METADATA}\" -e guestinfo.metadata.encoding=\"gzip+base64\" -e guestinfo.userdata=\"USD{USERDATA}\" -e guestinfo.userdata.encoding=\"gzip+base64\"",
"govc vm.power -on vmname",
"HOST=USD(govc vm.ip vmname )",
"ssh admin@HOST",
"instance-id: nocloud local-hostname: vmname",
"#cloud-config user: admin password: password chpasswd: {expire: False} ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AAA...fhHQ== [email protected]",
"genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data I: -input-charset not specified, using utf-8 (detected in locale settings) Total translation table size: 0 Total rockridge attributes bytes: 331 Total directory bytes: 0 Path table size(bytes): 10 Max brk space used 0 183 extents written (0 MB)",
"virt-install --memory 4096 --vcpus 4 --name myvm --disk composer-api.qcow2,device=disk,bus=virtio,format=qcow2 --disk cloud-init.iso,device=cdrom --os-variant rhel1-latest --virt-type kvm --graphics none --import",
"Starting install Connected to domain myvm [ OK ] Started Execute cloud user/final scripts. [ OK ] Reached target Cloud-init target. Red Hat Enterprise Linux 1-latest (Ootpa) Kernel 4.18.0-221.el8.x86_64 on an x86_64",
"subscription-manager refresh",
"CLIENT_ID=\" YOUR_CLIENT_ID \" CLIENT_SECRET=\" YOUR_CLIENT_SECRET \" ACCESS_TOKEN=USD( curl -d \"client_id=USDCLIENT_ID\" -d \"client_secret=USDCLIENT_SECRET\" -d \"grant_type=client_credentials\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" -d \"scope=api.console\" | jq -r .access_token )"
] |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html-single/deploying_and_managing_rhel_systems_in_hybrid_clouds/index
Chapter 18. Red Hat Build of OptaPlanner and Java: a school timetable quickstart guide
Chapter 18. Red Hat Build of OptaPlanner and Java: a school timetable quickstart guide This guide walks you through the process of creating a simple Java application with the OptaPlanner constraint-solving artificial intelligence (AI). You will build a command-line application that optimizes a school timetable for students and teachers: Your application will assign Lesson instances to Timeslot and Room instances automatically by using AI to adhere to hard and soft scheduling constraints, for example: A room can have at most one lesson at the same time. A teacher can teach at most one lesson at the same time. A student can attend at most one lesson at the same time. A teacher prefers to teach all lessons in the same room. A teacher prefers to teach sequential lessons and dislikes gaps between lessons. A student dislikes sequential lessons on the same subject. Mathematically speaking, school timetabling is an NP-hard problem. This means it is difficult to scale. Simply brute-force iterating through all possible combinations takes millions of years for a non-trivial data set, even on a supercomputer. Fortunately, AI constraint solvers such as OptaPlanner have advanced algorithms that deliver a near-optimal solution in a reasonable amount of time. Prerequisites OpenJDK (JDK) 11 is installed. Red Hat build of OpenJDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. An IDE, such as IntelliJ IDEA, VSCode, or Eclipse, is installed. 18.1. Creating the Maven or Gradle build file and adding dependencies You can use Maven or Gradle for the OptaPlanner school timetable application. After you create the build file, add the following dependencies: optaplanner-core (compile scope) to solve the school timetable problem optaplanner-test (test scope) to unit test the school timetabling constraints with JUnit A logging implementation such as logback-classic (runtime scope) to view the steps that OptaPlanner takes Procedure Create the Maven or Gradle build file. Add the optaplanner-core, optaplanner-test, and logback-classic dependencies to your build file. For Maven, add the dependencies to the pom.xml file, as shown in the dependency snippet and the complete pom.xml file below.
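The following is a minimal sketch of only the dependency-related entries, extracted from the complete pom.xml file that follows (the optaplanner-bom import pins the OptaPlanner and Logback versions):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.optaplanner</groupId>
      <artifactId>optaplanner-bom</artifactId>
      <version>${version.org.optaplanner}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>${version.org.logback}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.optaplanner</groupId>
    <artifactId>optaplanner-core</artifactId>
  </dependency>
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.optaplanner</groupId>
    <artifactId>optaplanner-test</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>

The following example shows the complete pom.xml file.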
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.acme</groupId> <artifactId>optaplanner-hello-world-school-timetabling-quickstart</artifactId> <version>1.0-SNAPSHOT</version> <properties> <maven.compiler.release>11</maven.compiler.release> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <version.org.optaplanner>8.38.0.Final-redhat-00004</version.org.optaplanner> <version.org.logback>1.2.3</version.org.logback> <version.compiler.plugin>3.8.1</version.compiler.plugin> <version.surefire.plugin>3.0.0-M5</version.surefire.plugin> <version.exec.plugin>3.0.0</version.exec.plugin> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-bom</artifactId> <version>USD{version.org.optaplanner}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{version.org.logback}</version> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-core</artifactId> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <scope>runtime</scope> </dependency> <!-- Testing --> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-test</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{version.compiler.plugin}</version> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{version.surefire.plugin}</version> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>USD{version.exec.plugin}</version> <configuration> <mainClass>org.acme.schooltimetabling.TimeTableApp</mainClass> </configuration> </plugin> </plugins> </build> <repositories> <repository> <id>jboss-public-repository-group</id> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <!-- Get releases only from Maven Central which is faster. --> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> </project> For Gradle, add the following dependencies to the gradle.build file: The following example shows the completed gradle.build file. plugins { id "java" id "application" } def optaplannerVersion = "{optaplanner-version}" def logbackVersion = "1.2.9" group = "org.acme" version = "1.0-SNAPSHOT" repositories { mavenCentral() } dependencies { implementation platform("org.optaplanner:optaplanner-bom:USD{optaplannerVersion}") implementation "org.optaplanner:optaplanner-core" testImplementation "org.optaplanner:optaplanner-test" runtimeOnly "ch.qos.logback:logback-classic:USD{logbackVersion}" } java { sourceCompatibility = JavaVersion.VERSION_11 targetCompatibility = JavaVersion.VERSION_11 } compileJava { options.encoding = "UTF-8" options.compilerArgs << "-parameters" } compileTestJava { options.encoding = "UTF-8" } application { mainClass = "org.acme.schooltimetabling.TimeTableApp" } test { // Log the test execution results. testLogging { events "passed", "skipped", "failed" } } 18.2. 
Model the domain objects The goal of the Red Hat Build of OptaPlanner timetable project is to assign each lesson to a time slot and a room. To do this, add three classes, Timeslot , Lesson , and Room , as shown in the following diagram: Timeslot The Timeslot class represents a time interval when lessons are taught, for example, Monday 10:30 - 11:30 or Tuesday 13:30 - 14:30 . In this example, all time slots have the same duration and there are no time slots during lunch or other breaks. A time slot has no date because a high school schedule just repeats every week. There is no need for continuous planning . A timeslot is called a problem fact because no Timeslot instances change during solving. Such classes do not require any OptaPlanner-specific annotations. Room The Room class represents a location where lessons are taught, for example, Room A or Room B . In this example, all rooms are without capacity limits and they can accommodate all lessons. Room instances do not change during solving so Room is also a problem fact . Lesson During a lesson, represented by the Lesson class, a teacher teaches a subject to a group of students, for example, Math by A.Turing for 9th grade or Chemistry by M.Curie for 10th grade . If a subject is taught multiple times each week by the same teacher to the same student group, there are multiple Lesson instances that are only distinguishable by id . For example, the 9th grade has six math lessons a week. During solving, OptaPlanner changes the timeslot and room fields of the Lesson class to assign each lesson to a time slot and a room. Because OptaPlanner changes these fields, Lesson is a planning entity : Most of the fields in the diagram contain input data, except for the orange fields. A lesson's timeslot and room fields are unassigned ( null ) in the input data and assigned (not null ) in the output data. OptaPlanner changes these fields during solving. Such fields are called planning variables. In order for OptaPlanner to recognize them, both the timeslot and room fields require an @PlanningVariable annotation. Their containing class, Lesson , requires an @PlanningEntity annotation. Procedure Create the src/main/java/com/example/domain/Timeslot.java class: package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + " " + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } } Notice the toString() method keeps the output short so it is easier to read OptaPlanner's DEBUG or TRACE log, as shown later. 
Create the src/main/java/com/example/domain/Room.java class: package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } } Create the src/main/java/com/example/domain/Lesson.java class: package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = "timeslotRange") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = "roomRange") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + "(" + id + ")"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } } The Lesson class has an @PlanningEntity annotation, so OptaPlanner knows that this class changes during solving because it contains one or more planning variables. The timeslot field has an @PlanningVariable annotation, so OptaPlanner knows that it can change its value. In order to find potential Timeslot instances to assign to this field, OptaPlanner uses the valueRangeProviderRefs property to connect to a value range provider that provides a List<Timeslot> to pick from. See Section 18.4, "Gather the domain objects in a planning solution" for information about value range providers. The room field also has an @PlanningVariable annotation for the same reasons. 18.3. Define the constraints and calculate the score When solving a problem, a score represents the quality of a specific solution. The higher the score the better. Red Hat Build of OptaPlanner looks for the best solution, which is the solution with the highest score found in the available time. It might be the optimal solution. Because the timetable example use case has hard and soft constraints, use the HardSoftScore class to represent the score: Hard constraints must not be broken. For example: A room can have at most one lesson at the same time. Soft constraints should not be broken. For example: A teacher prefers to teach in a single room. Hard constraints are weighted against other hard constraints. Soft constraints are weighted against other soft constraints. Hard constraints always outweigh soft constraints, regardless of their respective weights. 
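As a small illustration of these scoring rules (a hypothetical sketch, not one of the quickstart classes), the HardSoftScore API can be exercised directly: a score is feasible only when its hard part is not negative, and comparison always considers the hard part first.

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

public class HardSoftScoreExample {
    public static void main(String[] args) {
        HardSoftScore infeasible = HardSoftScore.of(-2, -3); // two broken hard constraints
        HardSoftScore feasible = HardSoftScore.of(0, -7);    // only soft constraints broken
        System.out.println(infeasible + " feasible? " + infeasible.isFeasible()); // -2hard/-3soft feasible? false
        System.out.println(feasible + " feasible? " + feasible.isFeasible());     // 0hard/-7soft feasible? true
        // Hard outweighs soft: breaking 100 soft constraints still beats breaking 1 hard constraint.
        System.out.println(HardSoftScore.of(0, -100).compareTo(HardSoftScore.of(-1, 0)) > 0); // true
    }
}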
To calculate the score, you could implement an EasyScoreCalculator class: public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the "complete" implementation return HardSoftScore.of(hardScore, softScore); } } Unfortunately, this solution does not scale well because it is non-incremental: every time a lesson is assigned to a different time slot or room, all lessons are re-evaluated to calculate the new score. A better solution is to create a src/main/java/com/example/solver/TimeTableConstraintProvider.java class to perform incremental score calculation. This class uses OptaPlanner's ConstraintStream API which is inspired by Java 8 Streams and SQL. The ConstraintProvider scales an order of magnitude better than the EasyScoreCalculator : O (n) instead of O (n2). Procedure Create the following src/main/java/com/example/solver/TimeTableConstraintProvider.java class: package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the "complete" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson ... return constraintFactory.forEach(Lesson.class) // ... and pair it with another lesson ... .join(Lesson.class, // ... in the same timeslot ... Joiners.equal(Lesson::getTimeslot), // ... in the same room ... Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize(HardSoftScore.ONE_HARD) .asConstraint("Room conflict"); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint("Teacher conflict"); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. 
return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint("Student group conflict"); } } 18.4. Gather the domain objects in a planning solution A TimeTable instance wraps all Timeslot , Room , and Lesson instances of a single dataset. Furthermore, because it contains all lessons, each with a specific planning variable state, it is a planning solution and it has a score: If lessons are still unassigned, then it is an uninitialized solution, for example, a solution with the score -4init/0hard/0soft . If it breaks hard constraints, then it is an infeasible solution, for example, a solution with the score -2hard/-3soft . If it adheres to all hard constraints, then it is a feasible solution, for example, a solution with the score 0hard/-7soft . The TimeTable class has an @PlanningSolution annotation, so Red Hat Build of OptaPlanner knows that this class contains all of the input and output data. Specifically, this class is the input of the problem: A timeslotList field with all time slots This is a list of problem facts, because they do not change during solving. A roomList field with all rooms This is a list of problem facts, because they do not change during solving. A lessonList field with all lessons This is a list of planning entities because they change during solving. Of each Lesson : The values of the timeslot and room fields are typically still null , so unassigned. They are planning variables. The other fields, such as subject , teacher and studentGroup , are filled in. These fields are problem properties. However, this class is also the output of the solution: A lessonList field for which each Lesson instance has non-null timeslot and room fields after solving A score field that represents the quality of the output solution, for example, 0hard/-5soft Procedure Create the src/main/java/com/example/domain/TimeTable.java class: package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = "timeslotRange") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = "roomRange") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } } The value range providers The timeslotList field is a value range provider. 
It holds the Timeslot instances which OptaPlanner can pick from to assign to the timeslot field of Lesson instances. The timeslotList field has an @ValueRangeProvider annotation to connect those two, by matching the id with the valueRangeProviderRefs of the @PlanningVariable in the Lesson . Following the same logic, the roomList field also has an @ValueRangeProvider annotation. The problem fact and planning entity properties Furthermore, OptaPlanner needs to know which Lesson instances it can change as well as how to retrieve the Timeslot and Room instances used for score calculation by your TimeTableConstraintProvider . The timeslotList and roomList fields have an @ProblemFactCollectionProperty annotation, so your TimeTableConstraintProvider can select from those instances. The lessonList has an @PlanningEntityCollectionProperty annotation, so OptaPlanner can change them during solving and your TimeTableConstraintProvider can select from those too. 18.5. The TimeTableApp.java class After you have created all of the components of the school timetable application, you will put them all together in the TimeTableApp.java class. The main() method performs the following tasks: Creates the SolverFactory to build a Solver for each data set. Loads a data set. Solves it with Solver.solve() . Visualizes the solution for that data set. Typically, an application has a single SolverFactory to build a new Solver instance for each problem data set to solve. A SolverFactory is thread-safe, but a Solver is not. For the school timetable application, there is only one data set, so only one Solver instance. Here is the completed TimeTableApp.java class: package org.acme.schooltimetabling; import java.time.DayOfWeek; import java.time.Duration; import java.time.LocalTime; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.stream.Collectors; import org.acme.schooltimetabling.domain.Lesson; import org.acme.schooltimetabling.domain.Room; import org.acme.schooltimetabling.domain.TimeTable; import org.acme.schooltimetabling.domain.Timeslot; import org.acme.schooltimetabling.solver.TimeTableConstraintProvider; import org.optaplanner.core.api.solver.Solver; import org.optaplanner.core.api.solver.SolverFactory; import org.optaplanner.core.config.solver.SolverConfig; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TimeTableApp { private static final Logger LOGGER = LoggerFactory.getLogger(TimeTableApp.class); public static void main(String[] args) { SolverFactory<TimeTable> solverFactory = SolverFactory.create(new SolverConfig() .withSolutionClass(TimeTable.class) .withEntityClasses(Lesson.class) .withConstraintProviderClass(TimeTableConstraintProvider.class) // The solver runs only for 5 seconds on this small data set. // It's recommended to run for at least 5 minutes ("5m") otherwise. 
.withTerminationSpentLimit(Duration.ofSeconds(5))); // Load the problem TimeTable problem = generateDemoData(); // Solve the problem Solver<TimeTable> solver = solverFactory.buildSolver(); TimeTable solution = solver.solve(problem); // Visualize the solution printTimetable(solution); } public static TimeTable generateDemoData() { List<Timeslot> timeslotList = new ArrayList<>(10); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(3); roomList.add(new Room("Room A")); roomList.add(new Room("Room B")); roomList.add(new Room("Room C")); List<Lesson> lessonList = new ArrayList<>(); long id = 0; lessonList.add(new Lesson(id++, "Math", "A. Turing", "9th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "9th grade")); lessonList.add(new Lesson(id++, "Physics", "M. Curie", "9th grade")); lessonList.add(new Lesson(id++, "Chemistry", "M. Curie", "9th grade")); lessonList.add(new Lesson(id++, "Biology", "C. Darwin", "9th grade")); lessonList.add(new Lesson(id++, "History", "I. Jones", "9th grade")); lessonList.add(new Lesson(id++, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(id++, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(id++, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(id++, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "10th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "10th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "10th grade")); lessonList.add(new Lesson(id++, "Physics", "M. Curie", "10th grade")); lessonList.add(new Lesson(id++, "Chemistry", "M. Curie", "10th grade")); lessonList.add(new Lesson(id++, "French", "M. Curie", "10th grade")); lessonList.add(new Lesson(id++, "Geography", "C. Darwin", "10th grade")); lessonList.add(new Lesson(id++, "History", "I. Jones", "10th grade")); lessonList.add(new Lesson(id++, "English", "P. Cruz", "10th grade")); lessonList.add(new Lesson(id++, "Spanish", "P. 
Cruz", "10th grade")); return new TimeTable(timeslotList, roomList, lessonList); } private static void printTimetable(TimeTable timeTable) { LOGGER.info(""); List<Room> roomList = timeTable.getRoomList(); List<Lesson> lessonList = timeTable.getLessonList(); Map<Timeslot, Map<Room, List<Lesson>>> lessonMap = lessonList.stream() .filter(lesson -> lesson.getTimeslot() != null && lesson.getRoom() != null) .collect(Collectors.groupingBy(Lesson::getTimeslot, Collectors.groupingBy(Lesson::getRoom))); LOGGER.info("| | " + roomList.stream() .map(room -> String.format("%-10s", room.getName())).collect(Collectors.joining(" | ")) + " |"); LOGGER.info("|" + "------------|".repeat(roomList.size() + 1)); for (Timeslot timeslot : timeTable.getTimeslotList()) { List<List<Lesson>> cellList = roomList.stream() .map(room -> { Map<Room, List<Lesson>> byRoomMap = lessonMap.get(timeslot); if (byRoomMap == null) { return Collections.<Lesson>emptyList(); } List<Lesson> cellLessonList = byRoomMap.get(room); if (cellLessonList == null) { return Collections.<Lesson>emptyList(); } return cellLessonList; }) .collect(Collectors.toList()); LOGGER.info("| " + String.format("%-10s", timeslot.getDayOfWeek().toString().substring(0, 3) + " " + timeslot.getStartTime()) + " | " + cellList.stream().map(cellLessonList -> String.format("%-10s", cellLessonList.stream().map(Lesson::getSubject).collect(Collectors.joining(", ")))) .collect(Collectors.joining(" | ")) + " |"); LOGGER.info("| | " + cellList.stream().map(cellLessonList -> String.format("%-10s", cellLessonList.stream().map(Lesson::getTeacher).collect(Collectors.joining(", ")))) .collect(Collectors.joining(" | ")) + " |"); LOGGER.info("| | " + cellList.stream().map(cellLessonList -> String.format("%-10s", cellLessonList.stream().map(Lesson::getStudentGroup).collect(Collectors.joining(", ")))) .collect(Collectors.joining(" | ")) + " |"); LOGGER.info("|" + "------------|".repeat(roomList.size() + 1)); } List<Lesson> unassignedLessons = lessonList.stream() .filter(lesson -> lesson.getTimeslot() == null || lesson.getRoom() == null) .collect(Collectors.toList()); if (!unassignedLessons.isEmpty()) { LOGGER.info(""); LOGGER.info("Unassigned lessons"); for (Lesson lesson : unassignedLessons) { LOGGER.info(" " + lesson.getSubject() + " - " + lesson.getTeacher() + " - " + lesson.getStudentGroup()); } } } } The main() method first creates the SolverFactory : SolverFactory<TimeTable> solverFactory = SolverFactory.create(new SolverConfig() .withSolutionClass(TimeTable.class) .withEntityClasses(Lesson.class) .withConstraintProviderClass(TimeTableConstraintProvider.class) // The solver runs only for 5 seconds on this small data set. // It's recommended to run for at least 5 minutes ("5m") otherwise. .withTerminationSpentLimit(Duration.ofSeconds(5))); The SolverFactory creation registers the @PlanningSolution class, the @PlanningEntity classes, and the ConstraintProvider class, all of which you created earlier. Without a termination setting or a terminationEarly() event, the solver runs forever. To avoid that, the solver limits the solving time to five seconds. After five seconds, the main() method loads the problem, solves it, and prints the solution: // Load the problem TimeTable problem = generateDemoData(); // Solve the problem Solver<TimeTable> solver = solverFactory.buildSolver(); TimeTable solution = solver.solve(problem); // Visualize the solution printTimetable(solution); The solve() method doesn't return instantly. It runs for five seconds before returning the best solution. 
OptaPlanner returns the best solution found in the available termination time. Due to the nature of NP-hard problems, the best solution might not be optimal, especially for larger data sets. Increase the termination time to potentially find a better solution. The generateDemoData() method generates the school timetable problem to solve. The printTimetable() method prettyprints the timetable to the console, so it's easy to determine visually whether or not it's a good schedule. 18.6. Creating and running the school timetable application Now that you have completed all of the components of the school timetable Java application, you are ready to put them all together in the TimeTableApp.java class and run it. Prerequisites You have created all of the required components of the school timetable application. Procedure Create the src/main/java/org/acme/schooltimetabling/TimeTableApp.java class: package org.acme.schooltimetabling; import java.time.DayOfWeek; import java.time.Duration; import java.time.LocalTime; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.stream.Collectors; import org.acme.schooltimetabling.domain.Lesson; import org.acme.schooltimetabling.domain.Room; import org.acme.schooltimetabling.domain.TimeTable; import org.acme.schooltimetabling.domain.Timeslot; import org.acme.schooltimetabling.solver.TimeTableConstraintProvider; import org.optaplanner.core.api.solver.Solver; import org.optaplanner.core.api.solver.SolverFactory; import org.optaplanner.core.config.solver.SolverConfig; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TimeTableApp { private static final Logger LOGGER = LoggerFactory.getLogger(TimeTableApp.class); public static void main(String[] args) { SolverFactory<TimeTable> solverFactory = SolverFactory.create(new SolverConfig() .withSolutionClass(TimeTable.class) .withEntityClasses(Lesson.class) .withConstraintProviderClass(TimeTableConstraintProvider.class) // The solver runs only for 5 seconds on this small data set. // It's recommended to run for at least 5 minutes ("5m") otherwise. 
.withTerminationSpentLimit(Duration.ofSeconds(5))); // Load the problem TimeTable problem = generateDemoData(); // Solve the problem Solver<TimeTable> solver = solverFactory.buildSolver(); TimeTable solution = solver.solve(problem); // Visualize the solution printTimetable(solution); } public static TimeTable generateDemoData() { List<Timeslot> timeslotList = new ArrayList<>(10); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(3); roomList.add(new Room("Room A")); roomList.add(new Room("Room B")); roomList.add(new Room("Room C")); List<Lesson> lessonList = new ArrayList<>(); long id = 0; lessonList.add(new Lesson(id++, "Math", "A. Turing", "9th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "9th grade")); lessonList.add(new Lesson(id++, "Physics", "M. Curie", "9th grade")); lessonList.add(new Lesson(id++, "Chemistry", "M. Curie", "9th grade")); lessonList.add(new Lesson(id++, "Biology", "C. Darwin", "9th grade")); lessonList.add(new Lesson(id++, "History", "I. Jones", "9th grade")); lessonList.add(new Lesson(id++, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(id++, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(id++, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(id++, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "10th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "10th grade")); lessonList.add(new Lesson(id++, "Math", "A. Turing", "10th grade")); lessonList.add(new Lesson(id++, "Physics", "M. Curie", "10th grade")); lessonList.add(new Lesson(id++, "Chemistry", "M. Curie", "10th grade")); lessonList.add(new Lesson(id++, "French", "M. Curie", "10th grade")); lessonList.add(new Lesson(id++, "Geography", "C. Darwin", "10th grade")); lessonList.add(new Lesson(id++, "History", "I. Jones", "10th grade")); lessonList.add(new Lesson(id++, "English", "P. Cruz", "10th grade")); lessonList.add(new Lesson(id++, "Spanish", "P. 
Cruz", "10th grade")); return new TimeTable(timeslotList, roomList, lessonList); } private static void printTimetable(TimeTable timeTable) { LOGGER.info(""); List<Room> roomList = timeTable.getRoomList(); List<Lesson> lessonList = timeTable.getLessonList(); Map<Timeslot, Map<Room, List<Lesson>>> lessonMap = lessonList.stream() .filter(lesson -> lesson.getTimeslot() != null && lesson.getRoom() != null) .collect(Collectors.groupingBy(Lesson::getTimeslot, Collectors.groupingBy(Lesson::getRoom))); LOGGER.info("| | " + roomList.stream() .map(room -> String.format("%-10s", room.getName())).collect(Collectors.joining(" | ")) + " |"); LOGGER.info("|" + "------------|".repeat(roomList.size() + 1)); for (Timeslot timeslot : timeTable.getTimeslotList()) { List<List<Lesson>> cellList = roomList.stream() .map(room -> { Map<Room, List<Lesson>> byRoomMap = lessonMap.get(timeslot); if (byRoomMap == null) { return Collections.<Lesson>emptyList(); } List<Lesson> cellLessonList = byRoomMap.get(room); if (cellLessonList == null) { return Collections.<Lesson>emptyList(); } return cellLessonList; }) .collect(Collectors.toList()); LOGGER.info("| " + String.format("%-10s", timeslot.getDayOfWeek().toString().substring(0, 3) + " " + timeslot.getStartTime()) + " | " + cellList.stream().map(cellLessonList -> String.format("%-10s", cellLessonList.stream().map(Lesson::getSubject).collect(Collectors.joining(", ")))) .collect(Collectors.joining(" | ")) + " |"); LOGGER.info("| | " + cellList.stream().map(cellLessonList -> String.format("%-10s", cellLessonList.stream().map(Lesson::getTeacher).collect(Collectors.joining(", ")))) .collect(Collectors.joining(" | ")) + " |"); LOGGER.info("| | " + cellList.stream().map(cellLessonList -> String.format("%-10s", cellLessonList.stream().map(Lesson::getStudentGroup).collect(Collectors.joining(", ")))) .collect(Collectors.joining(" | ")) + " |"); LOGGER.info("|" + "------------|".repeat(roomList.size() + 1)); } List<Lesson> unassignedLessons = lessonList.stream() .filter(lesson -> lesson.getTimeslot() == null || lesson.getRoom() == null) .collect(Collectors.toList()); if (!unassignedLessons.isEmpty()) { LOGGER.info(""); LOGGER.info("Unassigned lessons"); for (Lesson lesson : unassignedLessons) { LOGGER.info(" " + lesson.getSubject() + " - " + lesson.getTeacher() + " - " + lesson.getStudentGroup()); } } } } Run the TimeTableApp class as the main class of a normal Java application. The following output should result: Verify the console output. Does it conform to all hard constraints? What happens if you comment out the roomConflict constraint in TimeTableConstraintProvider ? The info log shows what OptaPlanner did in those five seconds: 18.7. Testing the application A good application includes test coverage. Test the constraints and the solver in your timetable project. 18.7.1. Test the school timetable constraints To test each constraint of the timetable project in isolation, use a ConstraintVerifier in unit tests. This tests each constraint's corner cases in isolation from the other tests, which lowers maintenance when adding a new constraint with proper test coverage. This test verifies that the constraint TimeTableConstraintProvider::roomConflict , when given three lessons in the same room and two of the lessons have the same timeslot, penalizes with a match weight of 1. So if the constraint weight is 10hard it reduces the score by -10hard . 
Procedure Create the src/test/java/org/acme/optaplanner/solver/TimeTableConstraintProviderTest.java class: package org.acme.optaplanner.solver; import java.time.DayOfWeek; import java.time.LocalTime; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.junit.jupiter.api.Test; import org.optaplanner.test.api.score.stream.ConstraintVerifier; @QuarkusTest class TimeTableConstraintProviderTest { private static final Room ROOM = new Room("Room1"); private static final Timeslot TIMESLOT1 = new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9,0), LocalTime.NOON); private static final Timeslot TIMESLOT2 = new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9,0), LocalTime.NOON); @Inject ConstraintVerifier<TimeTableConstraintProvider, TimeTable> constraintVerifier; @Test void roomConflict() { Lesson firstLesson = new Lesson(1, "Subject1", "Teacher1", "Group1"); Lesson conflictingLesson = new Lesson(2, "Subject2", "Teacher2", "Group2"); Lesson nonConflictingLesson = new Lesson(3, "Subject3", "Teacher3", "Group3"); firstLesson.setRoom(ROOM); firstLesson.setTimeslot(TIMESLOT1); conflictingLesson.setRoom(ROOM); conflictingLesson.setTimeslot(TIMESLOT1); nonConflictingLesson.setRoom(ROOM); nonConflictingLesson.setTimeslot(TIMESLOT2); constraintVerifier.verifyThat(TimeTableConstraintProvider::roomConflict) .given(firstLesson, conflictingLesson, nonConflictingLesson) .penalizesBy(1); } } Notice how ConstraintVerifier ignores the constraint weight during testing even if those constraint weights are hardcoded in the ConstraintProvider . This is because constraint weights change regularly before going into production. This way, constraint weight tweaking does not break the unit tests. 18.7.2. Test the school timetable solver This example tests the Red Hat Build of OptaPlanner school timetable project on the Red Hat build of Quarkus platform. It uses a JUnit test to generate a test data set and send it to the TimeTableController to solve. 
Procedure Create the src/test/java/com/example/rest/TimeTableResourceTest.java class with the following content: package com.exmaple.optaplanner.rest; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import com.exmaple.optaplanner.domain.Room; import com.exmaple.optaplanner.domain.Timeslot; import com.exmaple.optaplanner.domain.Lesson; import com.exmaple.optaplanner.domain.TimeTable; import com.exmaple.optaplanner.rest.TimeTableResource; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableResource.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room("Room A")); roomList.add(new Room("Room B")); roomList.add(new Room("Room C")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, "Math", "B. May", "9th grade")); lessonList.add(new Lesson(102L, "Physics", "M. Curie", "9th grade")); lessonList.add(new Lesson(103L, "Geography", "M. Polo", "9th grade")); lessonList.add(new Lesson(104L, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(105L, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(201L, "Math", "B. May", "10th grade")); lessonList.add(new Lesson(202L, "Chemistry", "M. Curie", "10th grade")); lessonList.add(new Lesson(203L, "History", "I. Jones", "10th grade")); lessonList.add(new Lesson(204L, "English", "P. Cruz", "10th grade")); lessonList.add(new Lesson(205L, "French", "M. Curie", "10th grade")); return new TimeTable(timeslotList, roomList, lessonList); } } This test verifies that after solving, all lessons are assigned to a time slot and a room. It also verifies that it found a feasible solution (no hard constraints broken). Add test properties to the src/main/resources/application.properties file: Normally, the solver finds a feasible solution in less than 200 milliseconds. Notice how the application.properties file overwrites the solver termination during tests to terminate as soon as a feasible solution (0hard/*soft) is found. This avoids hard coding a solver time, because the unit test might run on arbitrary hardware. This approach ensures that the test runs long enough to find a feasible solution, even on slow systems. But it does not run a millisecond longer than it strictly must, even on fast systems. 18.8. 
Logging After you complete the Red Hat Build of OptaPlanner school timetable project, you can use logging information to help you fine-tune the constraints in the ConstraintProvider . Review the score calculation speed in the info log file to assess the impact of changes to your constraints. Run the application in debug mode to show every step that your application takes, or use trace logging to log every step and every move. Procedure Run the school timetable application for a fixed amount of time, for example, five minutes. Review the score calculation speed in the log file as shown in the following example: Change a constraint, run the planning application again for the same amount of time, and review the score calculation speed recorded in the log file. Run the application in debug mode to log every step that the application makes: To run debug mode from the command line, use the -D system property. To permanently enable debug mode, add the following line to the application.properties file: quarkus.log.category."org.optaplanner".level=debug The following example shows output in the log file in debug mode: Use trace logging to show every step and every move for each step. 18.9. Using Micrometer and Prometheus to monitor your school timetable OptaPlanner Java application OptaPlanner exposes metrics through Micrometer , a metrics instrumentation library for Java applications. You can use Micrometer with Prometheus to monitor the OptaPlanner solver in the school timetable application. Prerequisites You have created the OptaPlanner school timetable application with Java. Prometheus is installed. For information about installing Prometheus, see the Prometheus website. Procedure Add the Micrometer Prometheus dependencies to the school timetable pom.xml file, where <MICROMETER_VERSION> is the version of Micrometer that you installed: Note The micrometer-core dependency is also required. However, this dependency is contained in the optaplanner-core dependency, so you do not need to add it to the pom.xml file. Add the following import statements to the TimeTableApp.java class: Add the following lines to the top of the main method of the TimeTableApp.java class so that Prometheus can scrape data from com.sun.net.httpserver.HttpServer before solving starts: PrometheusMeterRegistry prometheusRegistry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT); try { HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0); server.createContext("/prometheus", httpExchange -> { String response = prometheusRegistry.scrape(); httpExchange.sendResponseHeaders(200, response.getBytes().length); try (OutputStream os = httpExchange.getResponseBody()) { os.write(response.getBytes()); } }); new Thread(server::start).start(); } catch (IOException e) { throw new RuntimeException(e); } Metrics.addRegistry(prometheusRegistry); solve(); } Add the following line to control the solving time. By adjusting the solving time, you can see how the metrics change based on the time spent solving. Start the school timetable application. Open http://localhost:8080/prometheus in a web browser to view the timetable application in Prometheus. Open your monitoring system to view the metrics for your OptaPlanner project. The following metrics are exposed: optaplanner_solver_errors_total : the total number of errors that occurred while solving since the start of measuring. optaplanner_solver_solve_duration_seconds_active_count : the number of solvers currently solving.
optaplanner_solver_solve_duration_seconds_max : run time of the longest-running currently active solver. optaplanner_solver_solve_duration_seconds_duration_sum : the sum of each active solver's solve duration. For example, if there are two active solvers, one running for three minutes and the other for one minute, the total solve time is four minutes.
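If you want to check the scrape endpoint from code instead of a web browser, for example in a quick smoke test, the following is a minimal sketch. It assumes the HttpServer shown above is listening on localhost:8080, and the SolverMetricsCheck class name is purely illustrative; it uses only the standard java.net.http client available in Java 11 and later.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: fetch the Prometheus scrape endpoint exposed by the timetable
// application and print only the OptaPlanner solver metrics.
public class SolverMetricsCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/prometheus"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The scrape output is plain text; keep only the optaplanner_solver_* lines.
        response.body().lines()
                .filter(line -> line.startsWith("optaplanner_solver"))
                .forEach(System.out::println);
    }
}

The filter keeps only the optaplanner_solver_* lines of the plain-text scrape output, which correspond to the metrics listed above.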
|
[
"INFO Solving ended: time spent (5000), best score (0hard/9soft), INFO INFO | | Room A | Room B | Room C | INFO |------------|------------|------------|------------| INFO | MON 08:30 | English | Math | | INFO | | I. Jones | A. Turing | | INFO | | 9th grade | 10th grade | | INFO |------------|------------|------------|------------| INFO | MON 09:30 | History | Physics | | INFO | | I. Jones | M. Curie | | INFO | | 9th grade | 10th grade | | INFO |------------|------------|------------|------------| INFO | MON 10:30 | History | Physics | | INFO | | I. Jones | M. Curie | | INFO | | 10th grade | 9th grade | | INFO |------------|------------|------------|------------| INFO |------------|------------|------------|------------|",
"<dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-core</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>1.2.3</version> </dependency>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>org.acme</groupId> <artifactId>optaplanner-hello-world-school-timetabling-quickstart</artifactId> <version>1.0-SNAPSHOT</version> <properties> <maven.compiler.release>11</maven.compiler.release> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <version.org.optaplanner>8.38.0.Final-redhat-00004</version.org.optaplanner> <version.org.logback>1.2.3</version.org.logback> <version.compiler.plugin>3.8.1</version.compiler.plugin> <version.surefire.plugin>3.0.0-M5</version.surefire.plugin> <version.exec.plugin>3.0.0</version.exec.plugin> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-bom</artifactId> <version>USD{version.org.optaplanner}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{version.org.logback}</version> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-core</artifactId> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <scope>runtime</scope> </dependency> <!-- Testing --> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-test</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{version.compiler.plugin}</version> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{version.surefire.plugin}</version> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>USD{version.exec.plugin}</version> <configuration> <mainClass>org.acme.schooltimetabling.TimeTableApp</mainClass> </configuration> </plugin> </plugins> </build> <repositories> <repository> <id>jboss-public-repository-group</id> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <!-- Get releases only from Maven Central which is faster. --> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> </project>",
"dependencies { implementation platform(\"org.optaplanner:optaplanner-bom:USD{optaplannerVersion}\") implementation \"org.optaplanner:optaplanner-core\" testImplementation \"org.optaplanner:optaplanner-test\" runtimeOnly \"ch.qos.logback:logback-classic:USD{logbackVersion}\" }",
"plugins { id \"java\" id \"application\" } def optaplannerVersion = \"{optaplanner-version}\" def logbackVersion = \"1.2.9\" group = \"org.acme\" version = \"1.0-SNAPSHOT\" repositories { mavenCentral() } dependencies { implementation platform(\"org.optaplanner:optaplanner-bom:USD{optaplannerVersion}\") implementation \"org.optaplanner:optaplanner-core\" testImplementation \"org.optaplanner:optaplanner-test\" runtimeOnly \"ch.qos.logback:logback-classic:USD{logbackVersion}\" } java { sourceCompatibility = JavaVersion.VERSION_11 targetCompatibility = JavaVersion.VERSION_11 } compileJava { options.encoding = \"UTF-8\" options.compilerArgs << \"-parameters\" } compileTestJava { options.encoding = \"UTF-8\" } application { mainClass = \"org.acme.schooltimetabling.TimeTableApp\" } test { // Log the test execution results. testLogging { events \"passed\", \"skipped\", \"failed\" } }",
"package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + \" \" + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } }",
"package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } }",
"package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = \"timeslotRange\") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = \"roomRange\") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + \"(\" + id + \")\"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } }",
"public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the \"complete\" implementation return HardSoftScore.of(hardScore, softScore); } }",
"package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the \"complete\" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson return constraintFactory.forEach(Lesson.class) // ... and pair it with another lesson .join(Lesson.class, // ... in the same timeslot Joiners.equal(Lesson::getTimeslot), // ... in the same room Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize(HardSoftScore.ONE_HARD) .asConstraint(\"Room conflict\"); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint(\"Teacher conflict\"); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint(\"Student group conflict\"); } }",
"package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = \"timeslotRange\") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = \"roomRange\") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } }",
"package org.acme.schooltimetabling; import java.time.DayOfWeek; import java.time.Duration; import java.time.LocalTime; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.stream.Collectors; import org.acme.schooltimetabling.domain.Lesson; import org.acme.schooltimetabling.domain.Room; import org.acme.schooltimetabling.domain.TimeTable; import org.acme.schooltimetabling.domain.Timeslot; import org.acme.schooltimetabling.solver.TimeTableConstraintProvider; import org.optaplanner.core.api.solver.Solver; import org.optaplanner.core.api.solver.SolverFactory; import org.optaplanner.core.config.solver.SolverConfig; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TimeTableApp { private static final Logger LOGGER = LoggerFactory.getLogger(TimeTableApp.class); public static void main(String[] args) { SolverFactory<TimeTable> solverFactory = SolverFactory.create(new SolverConfig() .withSolutionClass(TimeTable.class) .withEntityClasses(Lesson.class) .withConstraintProviderClass(TimeTableConstraintProvider.class) // The solver runs only for 5 seconds on this small data set. // It's recommended to run for at least 5 minutes (\"5m\") otherwise. .withTerminationSpentLimit(Duration.ofSeconds(5))); // Load the problem TimeTable problem = generateDemoData(); // Solve the problem Solver<TimeTable> solver = solverFactory.buildSolver(); TimeTable solution = solver.solve(problem); // Visualize the solution printTimetable(solution); } public static TimeTable generateDemoData() { List<Timeslot> timeslotList = new ArrayList<>(10); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(3); roomList.add(new Room(\"Room A\")); roomList.add(new Room(\"Room B\")); roomList.add(new Room(\"Room C\")); List<Lesson> lessonList = new ArrayList<>(); long id = 0; lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Physics\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Chemistry\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Biology\", \"C. Darwin\", \"9th grade\")); lessonList.add(new Lesson(id++, \"History\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(id++, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(id++, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Spanish\", \"P. Cruz\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Spanish\", \"P. 
Cruz\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Physics\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Chemistry\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(id++, \"French\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Geography\", \"C. Darwin\", \"10th grade\")); lessonList.add(new Lesson(id++, \"History\", \"I. Jones\", \"10th grade\")); lessonList.add(new Lesson(id++, \"English\", \"P. Cruz\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Spanish\", \"P. Cruz\", \"10th grade\")); return new TimeTable(timeslotList, roomList, lessonList); } private static void printTimetable(TimeTable timeTable) { LOGGER.info(\"\"); List<Room> roomList = timeTable.getRoomList(); List<Lesson> lessonList = timeTable.getLessonList(); Map<Timeslot, Map<Room, List<Lesson>>> lessonMap = lessonList.stream() .filter(lesson -> lesson.getTimeslot() != null && lesson.getRoom() != null) .collect(Collectors.groupingBy(Lesson::getTimeslot, Collectors.groupingBy(Lesson::getRoom))); LOGGER.info(\"| | \" + roomList.stream() .map(room -> String.format(\"%-10s\", room.getName())).collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"|\" + \"------------|\".repeat(roomList.size() + 1)); for (Timeslot timeslot : timeTable.getTimeslotList()) { List<List<Lesson>> cellList = roomList.stream() .map(room -> { Map<Room, List<Lesson>> byRoomMap = lessonMap.get(timeslot); if (byRoomMap == null) { return Collections.<Lesson>emptyList(); } List<Lesson> cellLessonList = byRoomMap.get(room); if (cellLessonList == null) { return Collections.<Lesson>emptyList(); } return cellLessonList; }) .collect(Collectors.toList()); LOGGER.info(\"| \" + String.format(\"%-10s\", timeslot.getDayOfWeek().toString().substring(0, 3) + \" \" + timeslot.getStartTime()) + \" | \" + cellList.stream().map(cellLessonList -> String.format(\"%-10s\", cellLessonList.stream().map(Lesson::getSubject).collect(Collectors.joining(\", \")))) .collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"| | \" + cellList.stream().map(cellLessonList -> String.format(\"%-10s\", cellLessonList.stream().map(Lesson::getTeacher).collect(Collectors.joining(\", \")))) .collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"| | \" + cellList.stream().map(cellLessonList -> String.format(\"%-10s\", cellLessonList.stream().map(Lesson::getStudentGroup).collect(Collectors.joining(\", \")))) .collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"|\" + \"------------|\".repeat(roomList.size() + 1)); } List<Lesson> unassignedLessons = lessonList.stream() .filter(lesson -> lesson.getTimeslot() == null || lesson.getRoom() == null) .collect(Collectors.toList()); if (!unassignedLessons.isEmpty()) { LOGGER.info(\"\"); LOGGER.info(\"Unassigned lessons\"); for (Lesson lesson : unassignedLessons) { LOGGER.info(\" \" + lesson.getSubject() + \" - \" + lesson.getTeacher() + \" - \" + lesson.getStudentGroup()); } } } }",
"SolverFactory<TimeTable> solverFactory = SolverFactory.create(new SolverConfig() .withSolutionClass(TimeTable.class) .withEntityClasses(Lesson.class) .withConstraintProviderClass(TimeTableConstraintProvider.class) // The solver runs only for 5 seconds on this small data set. // It's recommended to run for at least 5 minutes (\"5m\") otherwise. .withTerminationSpentLimit(Duration.ofSeconds(5)));",
"// Load the problem TimeTable problem = generateDemoData(); // Solve the problem Solver<TimeTable> solver = solverFactory.buildSolver(); TimeTable solution = solver.solve(problem); // Visualize the solution printTimetable(solution);",
"package org.acme.schooltimetabling; import java.time.DayOfWeek; import java.time.Duration; import java.time.LocalTime; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.stream.Collectors; import org.acme.schooltimetabling.domain.Lesson; import org.acme.schooltimetabling.domain.Room; import org.acme.schooltimetabling.domain.TimeTable; import org.acme.schooltimetabling.domain.Timeslot; import org.acme.schooltimetabling.solver.TimeTableConstraintProvider; import org.optaplanner.core.api.solver.Solver; import org.optaplanner.core.api.solver.SolverFactory; import org.optaplanner.core.config.solver.SolverConfig; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TimeTableApp { private static final Logger LOGGER = LoggerFactory.getLogger(TimeTableApp.class); public static void main(String[] args) { SolverFactory<TimeTable> solverFactory = SolverFactory.create(new SolverConfig() .withSolutionClass(TimeTable.class) .withEntityClasses(Lesson.class) .withConstraintProviderClass(TimeTableConstraintProvider.class) // The solver runs only for 5 seconds on this small data set. // It's recommended to run for at least 5 minutes (\"5m\") otherwise. .withTerminationSpentLimit(Duration.ofSeconds(5))); // Load the problem TimeTable problem = generateDemoData(); // Solve the problem Solver<TimeTable> solver = solverFactory.buildSolver(); TimeTable solution = solver.solve(problem); // Visualize the solution printTimetable(solution); } public static TimeTable generateDemoData() { List<Timeslot> timeslotList = new ArrayList<>(10); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(3); roomList.add(new Room(\"Room A\")); roomList.add(new Room(\"Room B\")); roomList.add(new Room(\"Room C\")); List<Lesson> lessonList = new ArrayList<>(); long id = 0; lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Physics\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Chemistry\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Biology\", \"C. Darwin\", \"9th grade\")); lessonList.add(new Lesson(id++, \"History\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(id++, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(id++, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Spanish\", \"P. Cruz\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Spanish\", \"P. 
Cruz\", \"9th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Math\", \"A. Turing\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Physics\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Chemistry\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(id++, \"French\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Geography\", \"C. Darwin\", \"10th grade\")); lessonList.add(new Lesson(id++, \"History\", \"I. Jones\", \"10th grade\")); lessonList.add(new Lesson(id++, \"English\", \"P. Cruz\", \"10th grade\")); lessonList.add(new Lesson(id++, \"Spanish\", \"P. Cruz\", \"10th grade\")); return new TimeTable(timeslotList, roomList, lessonList); } private static void printTimetable(TimeTable timeTable) { LOGGER.info(\"\"); List<Room> roomList = timeTable.getRoomList(); List<Lesson> lessonList = timeTable.getLessonList(); Map<Timeslot, Map<Room, List<Lesson>>> lessonMap = lessonList.stream() .filter(lesson -> lesson.getTimeslot() != null && lesson.getRoom() != null) .collect(Collectors.groupingBy(Lesson::getTimeslot, Collectors.groupingBy(Lesson::getRoom))); LOGGER.info(\"| | \" + roomList.stream() .map(room -> String.format(\"%-10s\", room.getName())).collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"|\" + \"------------|\".repeat(roomList.size() + 1)); for (Timeslot timeslot : timeTable.getTimeslotList()) { List<List<Lesson>> cellList = roomList.stream() .map(room -> { Map<Room, List<Lesson>> byRoomMap = lessonMap.get(timeslot); if (byRoomMap == null) { return Collections.<Lesson>emptyList(); } List<Lesson> cellLessonList = byRoomMap.get(room); if (cellLessonList == null) { return Collections.<Lesson>emptyList(); } return cellLessonList; }) .collect(Collectors.toList()); LOGGER.info(\"| \" + String.format(\"%-10s\", timeslot.getDayOfWeek().toString().substring(0, 3) + \" \" + timeslot.getStartTime()) + \" | \" + cellList.stream().map(cellLessonList -> String.format(\"%-10s\", cellLessonList.stream().map(Lesson::getSubject).collect(Collectors.joining(\", \")))) .collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"| | \" + cellList.stream().map(cellLessonList -> String.format(\"%-10s\", cellLessonList.stream().map(Lesson::getTeacher).collect(Collectors.joining(\", \")))) .collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"| | \" + cellList.stream().map(cellLessonList -> String.format(\"%-10s\", cellLessonList.stream().map(Lesson::getStudentGroup).collect(Collectors.joining(\", \")))) .collect(Collectors.joining(\" | \")) + \" |\"); LOGGER.info(\"|\" + \"------------|\".repeat(roomList.size() + 1)); } List<Lesson> unassignedLessons = lessonList.stream() .filter(lesson -> lesson.getTimeslot() == null || lesson.getRoom() == null) .collect(Collectors.toList()); if (!unassignedLessons.isEmpty()) { LOGGER.info(\"\"); LOGGER.info(\"Unassigned lessons\"); for (Lesson lesson : unassignedLessons) { LOGGER.info(\" \" + lesson.getSubject() + \" - \" + lesson.getTeacher() + \" - \" + lesson.getStudentGroup()); } } } }",
"INFO | | Room A | Room B | Room C | INFO |------------|------------|------------|------------| INFO | MON 08:30 | English | Math | | INFO | | I. Jones | A. Turing | | INFO | | 9th grade | 10th grade | | INFO |------------|------------|------------|------------| INFO | MON 09:30 | History | Physics | | INFO | | I. Jones | M. Curie | | INFO | | 9th grade | 10th grade | |",
"... Solving started: time spent (33), best score (-8init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... Construction Heuristic phase (0) ended: time spent (73), best score (0hard/0soft), score calculation speed (459/sec), step total (4). ... Local Search phase (1) ended: time spent (5000), best score (0hard/0soft), score calculation speed (28949/sec), step total (28398). ... Solving ended: time spent (5000), best score (0hard/0soft), score calculation speed (28524/sec), phase total (2), environment mode (REPRODUCIBLE).",
"package org.acme.optaplanner.solver; import java.time.DayOfWeek; import java.time.LocalTime; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import org.acme.optaplanner.domain.Lesson; import org.acme.optaplanner.domain.Room; import org.acme.optaplanner.domain.TimeTable; import org.acme.optaplanner.domain.Timeslot; import org.junit.jupiter.api.Test; import org.optaplanner.test.api.score.stream.ConstraintVerifier; @QuarkusTest class TimeTableConstraintProviderTest { private static final Room ROOM = new Room(\"Room1\"); private static final Timeslot TIMESLOT1 = new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9,0), LocalTime.NOON); private static final Timeslot TIMESLOT2 = new Timeslot(DayOfWeek.TUESDAY, LocalTime.of(9,0), LocalTime.NOON); @Inject ConstraintVerifier<TimeTableConstraintProvider, TimeTable> constraintVerifier; @Test void roomConflict() { Lesson firstLesson = new Lesson(1, \"Subject1\", \"Teacher1\", \"Group1\"); Lesson conflictingLesson = new Lesson(2, \"Subject2\", \"Teacher2\", \"Group2\"); Lesson nonConflictingLesson = new Lesson(3, \"Subject3\", \"Teacher3\", \"Group3\"); firstLesson.setRoom(ROOM); firstLesson.setTimeslot(TIMESLOT1); conflictingLesson.setRoom(ROOM); conflictingLesson.setTimeslot(TIMESLOT1); nonConflictingLesson.setRoom(ROOM); nonConflictingLesson.setTimeslot(TIMESLOT2); constraintVerifier.verifyThat(TimeTableConstraintProvider::roomConflict) .given(firstLesson, conflictingLesson, nonConflictingLesson) .penalizesBy(1); } }",
"package com.exmaple.optaplanner.rest; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import javax.inject.Inject; import io.quarkus.test.junit.QuarkusTest; import com.exmaple.optaplanner.domain.Room; import com.exmaple.optaplanner.domain.Timeslot; import com.exmaple.optaplanner.domain.Lesson; import com.exmaple.optaplanner.domain.TimeTable; import com.exmaple.optaplanner.rest.TimeTableResource; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @QuarkusTest public class TimeTableResourceTest { @Inject TimeTableResource timeTableResource; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableResource.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room(\"Room A\")); roomList.add(new Room(\"Room B\")); roomList.add(new Room(\"Room C\")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, \"Math\", \"B. May\", \"9th grade\")); lessonList.add(new Lesson(102L, \"Physics\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(103L, \"Geography\", \"M. Polo\", \"9th grade\")); lessonList.add(new Lesson(104L, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(105L, \"Spanish\", \"P. Cruz\", \"9th grade\")); lessonList.add(new Lesson(201L, \"Math\", \"B. May\", \"10th grade\")); lessonList.add(new Lesson(202L, \"Chemistry\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(203L, \"History\", \"I. Jones\", \"10th grade\")); lessonList.add(new Lesson(204L, \"English\", \"P. Cruz\", \"10th grade\")); lessonList.add(new Lesson(205L, \"French\", \"M. Curie\", \"10th grade\")); return new TimeTable(timeslotList, roomList, lessonList); } }",
"The solver runs only for 5 seconds to avoid a HTTP timeout in this simple implementation. It's recommended to run for at least 5 minutes (\"5m\") otherwise. quarkus.optaplanner.solver.termination.spent-limit=5s Effectively disable this termination in favor of the best-score-limit %test.quarkus.optaplanner.solver.termination.spent-limit=1h %test.quarkus.optaplanner.solver.termination.best-score-limit=0hard/*soft",
"... Solving ended: ..., score calculation speed (29455/sec),",
"quarkus.log.category.\"org.optaplanner\".level=debug",
"... Solving started: time spent (67), best score (-20init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... CH step (0), time spent (128), score (-18init/0hard/0soft), selected move count (15), picked move ([Math(101) {null -> Room A}, Math(101) {null -> MONDAY 08:30}]). ... CH step (1), time spent (145), score (-16init/0hard/0soft), selected move count (15), picked move ([Physics(102) {null -> Room A}, Physics(102) {null -> MONDAY 09:30}]).",
"<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> <version><MICROMETER_VERSION></version> </dependency>",
"import io.micrometer.core.instrument.Metrics; import io.micrometer.prometheus.PrometheusConfig; import io.micrometer.prometheus.PrometheusMeterRegistry;",
"PrometheusMeterRegistry prometheusRegistry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT); try { HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0); server.createContext(\"/prometheus\", httpExchange -> { String response = prometheusRegistry.scrape(); httpExchange.sendResponseHeaders(200, response.getBytes().length); try (OutputStream os = httpExchange.getResponseBody()) { os.write(response.getBytes()); } }); new Thread(server::start).start(); } catch (IOException e) { throw new RuntimeException(e); } Metrics.addRegistry(prometheusRegistry); solve(); }",
"withTerminationSpentLimit(Duration.ofMinutes(5)));"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/assembly-planner-java
|
2.8. Programmatic Control
|
2.8. Programmatic Control Red Hat JBoss Data Virtualization exposes a bean that implements the org.teiid.events.EventDistributor interface. You can find it in JNDI under the name teiid/event-distributor-factory. The EventDistributor exposes methods such as dataModification (which affects result set caching) or updateMatViewRow (which affects internal materialization) to alert the Teiid engine that the underlying source data has been modified. These operations, which work cluster-wide, invalidate the affected cache entries appropriately and reload the new cache contents. Note If your source system has built-in change data capture facilities that can scrape logs, install triggers, and so forth to capture data change events, those events can be captured and propagated to the Red Hat JBoss Data Virtualization engine through a POJO bean, MDB, or session bean. This code shows how you can use the EventDistributor interface in your own code that is deployed in the same JBoss EAP virtual machine as a POJO, MDB, or session bean: Important The EventDistributor interface also exposes many methods that can be used to update the costing information on your source models for optimized query planning. Note that these values are volatile and are lost during a cluster restart, as there is no repository to persist them.
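If your change data capture events arrive through JMS, a message-driven bean can forward them to the engine. The following is a minimal sketch under stated assumptions: the queue name and the message properties that carry the schema and table names are illustrative only, and the factory package name matches the EventDistributor interface described above, while the JNDI lookup and the dataModification call mirror the example code for this section.

import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.naming.InitialContext;

import org.teiid.events.EventDistributor;
import org.teiid.events.EventDistributorFactory;

// Minimal sketch of an MDB that propagates change data capture events to the engine.
// The queue name and message properties are assumptions for illustration.
@MessageDriven(mappedName = "jms/queue/cdcEvents")
public class ChangeDataCaptureListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            InitialContext ic = new InitialContext();
            EventDistributor ed = ((EventDistributorFactory)
                    ic.lookup("teiid/event-distributor-factory")).getEventDistributor();
            // Assumed message properties that identify the changed table.
            String schema = message.getStringProperty("schema");
            String table = message.getStringProperty("table");
            // Alert the engine that the table has changed so that the related
            // cache entries are invalidated and reloaded.
            ed.dataModification("vdb-name", "version", schema, table);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}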
|
[
"public class ChanageDataCapture { public void invalidate() { InitialContext ic = new InitialContext(); EventDistributor ed = ((EventDistributorFactory)ic.lookup(\"teiid/event-distributor-factory\")).getEventDistributor(); // this below line indicates that Customer table in the \"model-name\" schema has been changed. // this result in cache reload. ed.dataModification(\"vdb-name\", \"version\", \"model-name\", \"Customer\"); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/programmatic_control
|
Monitoring OpenShift Data Foundation
|
Monitoring OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.18 View cluster health and metrics, or set alerts. Red Hat Storage Documentation Team Abstract Read this document for instructions on monitoring Red Hat OpenShift Data Foundation using the Block and File, and Object dashboards.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/monitoring_openshift_data_foundation/index
|
Chapter 5. Device Drivers
|
Chapter 5. Device Drivers This chapter provides a comprehensive listing of all device drivers that are new or have been updated in Red Hat Enterprise Linux 7. 5.1. New Drivers Graphics Drivers and Miscellaneous Drivers halt poll cpuidle driver (cpuidle-haltpoll.ko.xz) Intel(R) Trace Hub controller driver (intel_th.ko.xz) Intel(R) Trace Hub ACPI controller driver (intel_th_acpi.ko.xz) Intel(R) Trace Hub Global Trace Hub driver (intel_th_gth.ko.xz) Intel(R) Trace Hub Memory Storage Unit driver (intel_th_msu.ko.xz) Intel(R) Trace Hub PCI controller driver (intel_th_pci.ko.xz) Intel(R) Trace Hub PTI/LPP output driver (intel_th_pti.ko.xz) Intel(R) Trace Hub Software Trace Hub driver (intel_th_sth.ko.xz) dummy_stm device (dummy_stm.ko.xz) stm_console driver (stm_console.ko.xz) System Trace Module device class (stm_core.ko.xz) stm_ftrace driver (stm_ftrace.ko.xz) stm_heartbeat driver (stm_heartbeat.ko.xz) Basic STM framing protocol driver (stm_p_basic.ko.xz) MIPI SyS-T STM framing protocol driver (stm_p_sys-t.ko.xz) Network Drivers gVNIC Driver (gve.ko.xz): 1.0.0. Failover driver for Paravirtual drivers (net_failover.ko.xz) 5.2. Updated Drivers Network Driver Updates Emulex OneConnect NIC Driver (be2net.ko.xz) has been updated to version 12.0.0.0r. Intel(R) Ethernet Connection XL710 Network Driver (i40e.ko.xz) has been updated to version 2.8.20-k. The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 3.10.0-1122.el7.x86_64. Storage Driver Updates QLogic FCoE Driver (bnx2fc.ko.xz) has been updated to version 2.12.10. Driver for HP Smart Array Controller version (hpsa.ko.xz) has been updated to version 3.4.20-170-RH4. Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:12.0.0.13. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.710.50.00-rh1. LSI MPT Fusion SAS 3.0 Device Driver (mpt3sas.ko.xz) has been updated to version 31.100.01.00. QLogic QEDF 25/40/50/100Gb FCoE Driver (qedf.ko.xz) has been updated to version 8.37.25.20. QLogic FastLinQ 4xxxx iSCSI Module (qedi.ko.xz) has been updated to version 8.37.0.20. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.20.07.8-k.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.8_release_notes/device_drivers
|
Chapter 26. Ensuring the presence and absence of services in IdM using Ansible
|
Chapter 26. Ensuring the presence and absence of services in IdM using Ansible With the Ansible service module, an Identity Management (IdM) administrator can ensure that specific services that are not native to IdM are present or absent in IdM. For example, you can use the service module to: Check that a manually installed service is present on an IdM client and automatically install that service if it is absent. For details, see: Ensuring the presence of an HTTP service in IdM on an IdM client. Ensuring the presence of an HTTP service in IdM on a non-IdM client. Ensuring the presence of an HTTP service on an IdM client without DNS. Check that a service enrolled in IdM has a certificate attached and automatically install that certificate if it is absent. For details, see: Ensuring the presence of an externally-signed certificate in an IdM service entry. Allow IdM users and hosts to retrieve and create the service keytab. For details, see: Allowing IdM users, groups, hosts, or host groups to create a keytab of a service. Allowing IdM users, groups, hosts, or host groups to retrieve a keytab of a service. Allow IdM users and hosts to add a Kerberos alias to a service. For details, see: Ensuring the presence of a Kerberos principal alias for a service. Check that a service is not present on an IdM client and automatically remove that service if it is present. For details, see: Ensuring the absence of an HTTP service in IdM on an IdM client. 26.1. Ensuring the presence of an HTTP service in IdM using an Ansible playbook Follow this procedure to ensure the presence of an HTTP server in IdM using an Ansible playbook. Prerequisites The system to host the HTTP service is an IdM client. You have the IdM administrator password. Procedure Create an inventory file, for example inventory.file : Open the inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present.yml Ansible playbook file. For example: Open the /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-copy.yml Ansible playbook file for editing: Adapt the file: Change the IdM administrator password defined by the ipaadmin_password variable. Change the name of your IdM client on which the HTTP service is running, as defined by the name variable of the ipaservice task. Save and exit the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log into the IdM Web UI as IdM administrator. Navigate to Identity → Services . If HTTP/[email protected] is listed in the Services list, the Ansible playbook has successfully added the service to IdM. Additional resources To secure the communication between the HTTP server and browser clients, see adding TLS encryption to an Apache HTTP Server . To request a certificate for the HTTP service, see the procedure described in Obtaining an IdM certificate for a service using certmonger . 26.2. Ensuring the presence of multiple services in IdM on an IdM client using a single Ansible task You can use the ansible-freeipa ipaservice module to add, modify, and delete multiple Identity Management (IdM) services with a single Ansible task. To do that, use the services option of the ipaservice module. Using the services option, you can also specify multiple service variables that only apply to a particular service.
Define this service by the name variable, which is the only mandatory variable for the services option. Complete this procedure to ensure the presence of the HTTP/[email protected] and the ftp/[email protected] services in IdM with a single task. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You are using RHEL 8.9 or later. You have stored your ipaadmin_password in the secret.yml Ansible vault. Procedure Create your Ansible playbook file add-http-and-ftp-services.yml with the following content: Run the playbook: Additional resources The service module in ansible-freeipa upstream docs 26.3. Ensuring the presence of an HTTP service in IdM on a non-IdM client using an Ansible playbook Follow this procedure to ensure the presence of an HTTP server in IdM on a host that is not an IdM client using an Ansible playbook. By adding the HTTP server to IdM, you are also adding the host to IdM. Prerequisites You have installed an HTTP service on your host. The host on which you have set up HTTP is not an IdM client. Otherwise, follow the steps in Ensuring the presence of an HTTP service in IdM using an Ansible playbook. You have the IdM administrator password. The DNS A record - or the AAAA record if IPv6 is used - for the host is available. Procedure Create an inventory file, for example inventory.file : Open the inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-without-host-check.yml Ansible playbook file. For example: Open the copied file, /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-without-host-check-copy.yml , for editing. Locate the ipaadmin_password and name variables in the ipaservice task: Adapt the file: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to the name of the host on which the HTTP service is running. Save and exit the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log into the IdM Web UI as IdM administrator. Navigate to Identity → Services . You can now see HTTP/[email protected] listed in the Services list. Additional resources To secure the communication, see adding TLS encryption to an Apache HTTP Server .
For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-with-host-force.yml Ansible playbook file. For example: Open the copied file, /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-with-host-force-copy.yml , for editing. Locate the ipaadmin_password and name variables in the ipaservice task: Adapt the file: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to the name of the host on which the HTTP service is running. Save and exit the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log into the IdM Web UI as IdM administrator. Navigate to Identity Services . You can now see HTTP/[email protected] listed in the Services list. Additional resources To secure the communication, see adding TLS encryption to an Apache HTTP Server . 26.5. Ensuring the presence of an externally signed certificate in an IdM service entry using an Ansible playbook Follow this procedure to use the ansible-freeipa service module to ensure that a certificate issued by an external certificate authority (CA) is attached to the IdM entry of the HTTP service. Having the certificate of an HTTP service signed by an external CA rather than the IdM CA is particularly useful if your IdM CA uses a self-signed certificate. Prerequisites You have installed an HTTP service on your host. You have enrolled the HTTP service into IdM. You have the IdM administrator password. You have an externally signed certificate whose Subject corresponds to the principal of the HTTP service. Procedure Create an inventory file, for example inventory.file : Open the inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-member-certificate-present.yml file, for example: Optional: If the certificate is in the Privacy Enhanced Mail (PEM) format, convert the certificate to the Distinguished Encoding Rules (DER) format for easier handling through the command line (CLI): Decode the DER file to standard output using the base64 command. Use the -w0 option to disable wrapping: Copy the certificate from the standard output to the clipboard. Open the /usr/share/doc/ansible-freeipa/playbooks/service/service-member-certificate-present-copy.yml file for editing and view its contents: Adapt the file: Replace the certificate, defined using the certificate variable, with the certificate you copied from the CLI. Note that if you use the certificate: variable with the "|" pipe character as indicated, you can enter the certificate THIS WAY rather than having it to enter it in a single line. This makes reading the certificate easier. Change the IdM administrator password, defined by the ipaadmin_password variable. Change the name of your IdM client on which the HTTP service is running, defined by the name variable. Change any other relevant variables. Save and exit the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log into the IdM Web UI as IdM administrator. Navigate to Identity Services . 
Click the name of the service with the newly added certificate, for example HTTP/client.idm.example.com . In the Service Certificate section on the right, you can now see the newly added certificate. 26.6. Using an Ansible playbook to allow IdM users, groups, hosts, or host groups to create a keytab of a service A keytab is a file containing pairs of Kerberos principals and encrypted keys. Keytab files are commonly used to allow scripts to automatically authenticate using Kerberos, without requiring human interaction or access to password stored in a plain-text file. The script is then able to use the acquired credentials to access files stored on a remote system. As an Identity Management (IdM) administrator, you can allow other users to retrieve or even create a keytab for a service running in IdM. By allowing specific users and user groups to create keytabs, you can delegate the administration of the service to them without sharing the IdM administrator password. This delegation provides a more fine-grained system administration. Follow this procedure to allow specific IdM users, user groups, hosts, and host groups to create a keytab for the HTTP service running on an IdM client. Specifically, it describes how you can allow the user01 IdM user to create a keytab for the HTTP service running on an IdM client named client.idm.example.com . Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You have enrolled the HTTP service into IdM. The system to host the HTTP service is an IdM client. The IdM users and user groups that you want to allow to create the keytab exist in IdM. The IdM hosts and host groups that you want to allow to create the keytab exist in IdM. Procedure Create an inventory file, for example inventory.file : Open the inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_create_keytab-present.yml Ansible playbook file. For example: Open the /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_create_keytab-present-copy.yml Ansible playbook file for editing. Adapt the file by changing the following: The IdM administrator password specified by the ipaadmin_password variable. The name of your IdM client on which the HTTP service is running. In the current example, it is HTTP/client.idm.example.com The names of IdM users that are listed in the allow_create_keytab_user: section. In the current example, it is user01 . The names of IdM user groups that are listed in the allow_create_keytab_group: section. The names of IdM hosts that are listed in the allow_create_keytab_host: section. The names of IdM host groups that are listed in the allow_create_keytab_hostgroup: section. The name of the task specified by the name variable in the tasks section. After being adapted for the current example, the copied file looks like this: Save the file. Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification SSH to an IdM server as an IdM user that has the privilege to create a keytab for the particular HTTP service: Use the ipa-getkeytab command to generate the new keytab for the HTTP service: The -s option specifies a Key Distribution Center (KDC) server to generate the keytab. The -p option specifies the principal whose keytab you want to create. The -k option specifies the keytab file to append the new key to. The file will be created if it does not exist. If the command does not result in an error, you have successfully created a keytab of HTTP/client.idm.example.com as user01 . 26.7. Using an Ansible playbook to allow IdM users, groups, hosts, or host groups to retrieve a keytab of a service A keytab is a file containing pairs of Kerberos principals and encrypted keys. Keytab files are commonly used to allow scripts to automatically authenticate using Kerberos, without requiring human interaction or access to a password stored in a plain-text file. The script is then able to use the acquired credentials to access files stored on a remote system. As an IdM administrator, you can allow other users to retrieve or even create a keytab for a service running in IdM. Follow this procedure to allow specific IdM users, user groups, hosts, and host groups to retrieve a keytab for the HTTP service running on an IdM client. Specifically, it describes how to allow the user01 IdM user to retrieve the keytab of the HTTP service running on client.idm.example.com . Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You have enrolled the HTTP service into IdM. The IdM users and user groups that you want to allow to retrieve the keytab exist in IdM. The IdM hosts and host groups that you want to allow to retrieve the keytab exist in IdM. Procedure Create an inventory file, for example inventory.file : Open the inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_retrieve_keytab-present.yml Ansible playbook file. For example: Open the copied file, /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_retrieve_keytab-present-copy.yml , for editing: Adapt the file: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable of the ipaservice task to the principal of the HTTP service. In the current example, it is HTTP/client.idm.example.com . Specify the names of IdM users in the allow_retrieve_keytab_user: section. In the current example, it is user01 . Specify the names of IdM user groups in the allow_retrieve_keytab_group: section. Specify the names of IdM hosts in the allow_retrieve_keytab_host: section. Specify the names of IdM host groups in the allow_retrieve_keytab_hostgroup: section. Specify the name of the task using the name variable in the tasks section.
After being adapted for the current example, the copied file looks like this: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification SSH to an IdM server as an IdM user with the privilege to retrieve a keytab for the HTTP service: Use the ipa-getkeytab command with the -r option to retrieve the keytab: The -s option specifies a Key Distribution Center (KDC) server from which you want to retrieve the keytab. The -p option specifies the principal whose keytab you want to retrieve. The -k option specifies the keytab file to which you want to append the retrieved key. The file will be created if it does not exist. If the command does not result in an error, you have successfully retrieved a keytab of HTTP/client.idm.example.com as user01 . 26.8. Ensuring the presence of a Kerberos principal alias of a service using an Ansible playbook In some scenarios, it is beneficial for the IdM administrator to enable IdM users, hosts, or services to authenticate against Kerberos applications using a Kerberos principal alias. These scenarios include: The user name changed, but the user should be able to log into the system using both the previous and new user names. The user needs to log in using the email address even if the IdM Kerberos realm differs from the email domain. Follow this procedure to create the principal alias of host/mycompany.idm.example.com for the HTTP service running on client.idm.example.com . Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You have set up an HTTP service on your host. You have enrolled the HTTP service into IdM. The host on which you have set up HTTP is an IdM client. Procedure Create an inventory file, for example inventory.file : Open the inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-member-principal-present.yml Ansible playbook file. For example: Open the /usr/share/doc/ansible-freeipa/playbooks/service/service-member-principal-present-copy.yml Ansible playbook file for editing. Adapt the file by changing the following: The IdM administrator password specified by the ipaadmin_password variable. The name of the service specified by the name variable. This is the canonical principal name of the service. In the current example, it is HTTP/client.idm.example.com . The Kerberos principal alias specified by the principal variable. This is the alias you want to add to the service defined by the name variable. In the current example, it is host/mycompany.idm.example.com . The name of the task specified by the name variable in the tasks section. After being adapted for the current example, the copied file looks like this: Save the file. Run the Ansible playbook.
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: If running the playbook results in 0 unreachable and 0 failed tasks, you have successfully created the host/mycompany.idm.example.com Kerberos principal for the HTTP/client.idm.example.com service. Additional resources Managing Kerberos principal aliases for users, hosts, and services 26.9. Ensuring the absence of an HTTP service in IdM using an Ansible playbook Follow this procedure to unenroll a service from IdM. More specifically, it describes how to use an Ansible playbook to ensure the absence of an HTTP server named HTTP/client.idm.example.com in IdM. Prerequisites You have the IdM administrator password. Procedure Create an inventory file, for example inventory.file : Open the inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the /usr/share/doc/ansible-freeipa/playbooks/service/service-is-absent.yml Ansible playbook file. For example: Open the /usr/share/doc/ansible-freeipa/playbooks/service/service-is-absent-copy.yml Ansible playbook file for editing. Adapt the file by changing the following: The IdM administrator password defined by the ipaadmin_password variable. The Kerberos principal of the HTTP service, as defined by the name variable of the ipaservice task. After being adapted for the current example, the copied file looks like this: Save and exit the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log into the IdM Web UI as IdM administrator. Navigate to Identity Services . If you cannot see the HTTP/[email protected] service in the Services list, you have successfully ensured its absence in IdM. 26.10. Additional resources See the README-service.md Markdown file in the /usr/share/doc/ansible-freeipa/ directory. See sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/config directory.
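The states shown in this chapter can also be combined in a single playbook when you manage several aspects of one service together. The following is a minimal sketch, assuming the HTTP/client.idm.example.com service, the user01 user, and the secret.yml vault from the previous examples; it ensures the service is present, allows user01 to create its keytab, and adds a Kerberos principal alias in one run. Check the option names against the README-service.md file referenced above before using it.

---
- name: Combined ipaservice example (sketch)
  hosts: ipaserver
  gather_facts: false
  vars_files:
  - /home/user_name/MyPlaybooks/secret.yml
  tasks:
  # Ensure the service entry exists in IdM
  - name: Ensure HTTP/client.idm.example.com is present
    ipaservice:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: HTTP/client.idm.example.com
  # Allow the user01 IdM user to create the service keytab
  - name: Allow user01 to create the keytab
    ipaservice:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: HTTP/client.idm.example.com
      allow_create_keytab_user:
      - user01
      action: member
  # Add a Kerberos principal alias to the service
  - name: Ensure the host/mycompany.idm.example.com alias is present
    ipaservice:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: HTTP/client.idm.example.com
      principal:
      - host/mycompany.idm.example.com
      action: member

Running this sketch with the same ansible-playbook invocation used throughout this chapter applies all three tasks against the server defined in your inventory file.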
|
[
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-copy.yml",
"--- - name: Playbook to manage IPA service. hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure service is present - ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/client.idm.example.com",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-copy.yml",
"--- - name: Playbook to add multiple services in a single task hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add HTTP and ftp services ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" services: - name: HTTP/[email protected] - name: ftp/[email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-http-and-ftp-services.yml",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-without-host-check.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-without-host-check-copy.yml",
"--- - name: Playbook to manage IPA service. hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure service is present - ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/www2.example.com skip_host_check: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-without-host-check-copy.yml",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-with-host-force.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-with-host-force-copy.yml",
"--- - name: Playbook to manage IPA service. hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure service is present - ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/ihavenodns.info force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-is-present-with-host-force-copy.yml",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-member-certificate-present.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-member-certificate-present-copy.yml",
"openssl x509 -outform der -in cert1.pem -out cert1.der",
"base64 cert1.der -w0 MIIC/zCCAeegAwIBAgIUV74O+4kXeg21o4vxfRRtyJm",
"--- - name: Service certificate present. hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure service certificate is present - ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/client.idm.example.com certificate: | - MIICBjCCAW8CFHnm32VcXaUDGfEGdDL/ [...] action: member state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-member-certificate-present-copy.yml",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_create_keytab-present.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_create_keytab-present-copy.yml",
"--- - name: Service member allow_create_keytab present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Service HTTP/client.idm.example.com members allow_create_keytab present for user01 ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/client.idm.example.com allow_create_keytab_user: - user01 action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_create_keytab-present-copy.yml",
"ssh [email protected] Password:",
"ipa-getkeytab -s server.idm.example.com -p HTTP/client.idm.example.com -k /etc/httpd/conf/krb5.keytab",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_retrieve_keytab-present.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_retrieve_keytab-present-copy.yml",
"--- - name: Service member allow_retrieve_keytab present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Service HTTP/client.idm.example.com members allow_retrieve_keytab present for user01 ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/client.idm.example.com allow_retrieve_keytab_user: - user01 action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-member-allow_retrieve_keytab-present-copy.yml",
"ssh [email protected] Password:",
"ipa-getkeytab -r -s server.idm.example.com -p HTTP/client.idm.example.com -k /etc/httpd/conf/krb5.keytab",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-member-principal-present.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-member-principal-present-copy.yml",
"--- - name: Service member principal present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Service HTTP/client.idm.example.com member principals host/mycompany.idm.exmaple.com present ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/client.idm.example.com principal: - host/mycompany.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-member-principal-present-copy.yml",
"touch inventory.file",
"[ipaserver] server.idm.example.com",
"cp /usr/share/doc/ansible-freeipa/playbooks/service/service-is-absent.yml /usr/share/doc/ansible-freeipa/playbooks/service/service-is-absent-copy.yml",
"--- - name: Playbook to manage IPA service. hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure service is absent - ipaservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: HTTP/client.idm.example.com state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file /usr/share/doc/ansible-freeipa/playbooks/service/service-is-absent-copy.yml"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/ensuring-the-presence-and-absence-of-services-in-idm-using-ansible_using-ansible-to-install-and-manage-idm
|
Chapter 7. Installing a private cluster on IBM Cloud VPC
|
Chapter 7. Installing a private cluster on IBM Cloud VPC In OpenShift Container Platform version 4.13, you can install a private cluster into an existing VPC. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 7.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Create a DNS zone using IBM Cloud DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 7.3. Private clusters in IBM Cloud VPC To create a private cluster on IBM Cloud VPC, you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud VPC APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 7.3.1. Limitations Private clusters on IBM Cloud VPC are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 
7.4. About using a custom VPC In OpenShift Container Platform 4.13, you can deploy a cluster into the subnets of an existing IBM Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 7.4.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.4.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The subnets for control plane machines and compute machines To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 7.4.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. 
Before you update the cluster, you update the content of the mirror registry. 7.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a bastion host on your cloud network or a machine that has access to the to the network through a VPN. For more information about private cluster installation requirements, see "Private clusters". Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. 
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 7.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . 
An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. 
Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. 
imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 7.9.1.4. Additional IBM Cloud VPC configuration parameters Additional IBM Cloud VPC configuration parameters are described in the following table: Table 7.4. Additional IBM Cloud VPC parameters Parameter Description Values platform.ibmcloud.resourceGroupName The name of an existing resource group. By default, an installer-provisioned VPC and cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. If you are deploying the cluster into an existing VPC, the installer-provisioned cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. The VPC resources that you have provisioned must exist in a resource group that you specify using the networkResourceGroupName parameter. In either case, this resource group must only be used for a single cluster installation, as the cluster components assume ownership of all of the resources in the resource group. [ 1 ] String, for example existing_resource_group . platform.ibmcloud.networkResourceGroupName The name of an existing resource group. This resource contains the existing VPC and subnets to which the cluster will be deployed. This parameter is required when deploying the cluster to a VPC that you have provisioned. String, for example existing_network_resource_group . platform.ibmcloud.dedicatedHosts.profile The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud VPC dedicated host profile, such as cx2-host-152x304 . [ 2 ] platform.ibmcloud.dedicatedHosts.name An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . platform.ibmcloud.type The instance type for all IBM Cloud VPC machines. Valid IBM Cloud VPC instance type, such as bx2-8x32 . [ 2 ] platform.ibmcloud.vpcName The name of the existing VPC that you want to deploy your cluster to. String. platform.ibmcloud.controlPlaneSubnets The name(s) of the existing subnet(s) in your VPC that you want to deploy your control plane machines to. Specify a subnet for each availability zone. String array platform.ibmcloud.computeSubnets The name(s) of the existing subnet(s) in your VPC that you want to deploy your compute machines to. Specify a subnet for each availability zone. Subnet IDs are not supported. 
String array Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer-provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM documentation. 7.9.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.5. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.9.3. Sample customized install-config.yaml file for IBM Cloud VPC You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-network-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 1 8 12 19 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 13 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 14 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 15 Specify the name of an existing VPC. 16 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 17 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. The default value is External . 20 Enables or disables FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 21 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.9.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud VPC resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . 
Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, obtain the OpenShift Container Platform release image that your openshift-install binary is built to use: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=<provider_name> \ 1 --to=<path_to_credential_requests_directory> 2 1 The name of the provider. For example: ibmcloud or powervs . 2 The directory where the credential requests will be stored. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.13 on IBM Cloud VPC 0000_26_cloud-controller-manager-operator_15_credentialsrequest-ibm.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request-ibmcos.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-storage-operator_03_credentials_request_ibm.yaml 5 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Storage Operator CR is an optional component and might be disabled in your cluster. Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ 1 --name <cluster_name> \ 2 --output-dir <installation_directory> \ --resource-group-name <resource_group_name> 3 1 The directory where the credential requests are stored. 2 The name of the OpenShift Container Platform cluster. 3 Optional: The name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. 
Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
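As an optional sanity check after logging in (this is not part of the documented installation procedure), you can confirm that the cluster has finished rolling out before you start customizing it. The following commands are standard oc queries and are not specific to IBM Cloud VPC:
# Verify that the cluster version has finished reconciling
oc get clusterversion
# Confirm that all nodes are Ready
oc get nodes
# Check that no cluster Operators are degraded or still progressing
oc get clusteroperators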
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-network-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=<provider_name> \\ 1 --to=<path_to_credential_requests_directory> 2",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"0000_26_cloud-controller-manager-operator_15_credentialsrequest-ibm.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request-ibmcos.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-storage-operator_03_credentials_request_ibm.yaml 5",
"ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_credential_requests_directory> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_cloud_vpc/installing-ibm-cloud-private
|
Chapter 2. Program policies
|
Chapter 2. Program policies 2.1. Cloud Instance Type An Instance Type is a specific hardware configuration known by a unique name that will be made available to customers as part of a CCSP solution. The hardware defined in the specification can be physical, virtual, complete, or partitioned. An Instance Type may also be available in multiple configuration specifications where each configuration includes a unique name within a common naming convention. These configurations (sizes) may be certified and listed together in the Red Hat Ecosystem Certification Catalog. For more information, see SuperSet Instance Type certification. If an Instance Type size configuration is above or below the limits of RHEL, the other sizes within the Instance Type may still be certified. 2.2. Policy Changes Typically, Red Hat limits major revisions in the certification tests and criteria to major releases of RHEL. Red Hat may also release updates to the CCSP Instance Type policy for the following: policy and/or workflow changes; criteria changes; minor OS releases where new hardware support features are introduced; or at any other point as deemed necessary. Only a single version of the policy is active at any one time. The current policy is effective upon its release and supersedes all previous versions. Note The CCSP Instance Policy Guide version applied during the certification process will be recorded in certifications upon successful completion. Changes to the workflow guide will be documented in the workflow guide errata notification and package changelog. 2.3. Original Provider Partner support of certified Instance Types is a fundamental part of Red Hat CCSP Instance Type certification. All requests and information about the Instance Type to be certified, including details for the physical and virtual hardware contained within, must be submitted to Red Hat by the original provider of the Instance Type. You may choose to use your own internal or outside Partners for any portion of the testing; however, any such arrangements are the exclusive responsibility of the CCSP Partner. Red Hat will only interact with the CCSP Partner who is responsible for submitting the certification request, and Red Hat will only post original certifications, which will be easily identifiable. 2.4. Submission Window New Instance Type certifications for a given major release of RHEL can typically be submitted until the 2nd subsequent major version of RHEL is released. Certification requests that fall outside of the window must be raised with your EPM or SA. These requests are reviewed on a case-by-case basis.
| null |
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_policy_guide/assembly_program-policies_cloud-instance-pol-introduction
|
Chapter 85. undercloud
|
Chapter 85. undercloud This chapter describes the commands under the undercloud command. 85.1. undercloud backup Backup the undercloud Usage: Table 85.1. Command arguments Value Summary --init [INIT] Initialize environment for backup, using rear or nfs as args, which will check for package install and configured ReaR or NFS server. Defaults to: rear. i.e. --init rear. WARNING: This flag will be deprecated and replaced by --setup-rear and --setup-nfs . --setup-nfs Set up the NFS server on the backup node, which will install required packages and configuration on the host BackupNode in the ansible inventory. --setup-rear Set up ReaR on the undercloud host, which will install and configure ReaR. --cron Sets up a new cron job that by default will execute a weekly backup at midnight on Sundays, but that can be customized by using the tripleo_backup_and_restore_cron extra-var. --db-only Perform a db backup of the undercloud host. The db backup file will be stored in /home/stack with the name openstack-backup-mysql-<timestamp>.sql. --inventory INVENTORY Tripleo inventory file generated with the tripleo-ansible-inventory command. Defaults to: /root/config-download/overcloud/tripleo-ansible-inventory.yaml --add-path ADD_PATH Add additional files to backup. Defaults to: /home/stack/ i.e. --add-path /this/is/a/folder/ --add-path /this/is/a/texfile.txt. --exclude-path EXCLUDE_PATH Exclude path when performing the undercloud backup; this option can be specified multiple times. Defaults to: none i.e. --exclude-path /this/is/a/folder/ --exclude-path /this/is/a/texfile.txt. --save-swift Save backup to Swift. Defaults to: false. Special attention should be taken that Swift itself is backed up; if you call this multiple times, the backup size will grow exponentially. --extra-vars EXTRA_VARS Set additional variables as dict or as an absolute path of a JSON or YAML file type. i.e. --extra-vars {"key": "val", "key2": "val2"} i.e. --extra-vars /path/to/my_vars.yaml i.e. --extra-vars /path/to/my_vars.json. For more information about the variables that can be passed, visit: https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/backup_and_restore/defaults/main.yml. 85.2. undercloud install Install and set up the undercloud Usage: Table 85.2. Command arguments Value Summary --force-stack-update Do a virtual update of the ephemeral heat stack. New or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE. --no-validations Do not perform undercloud configuration validations --inflight-validations Activate in-flight validations during the deploy. In-flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Print the install command instead of running it -y, --yes Skip yes/no prompt (assume yes). --disable-container-prepare Disable the container preparation actions to prevent container tags from being updated and new containers from being fetched. If you skip this but do not have the container parameters configured, the deployment action may fail. --reproduce-command Create a reproducer command with ansible commandline and all environment variables. 85.3. undercloud upgrade Upgrade undercloud Usage: Table 85.3. Command arguments Value Summary --force-stack-update Do a virtual update of the ephemeral heat stack. New or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE.
--no-validations Do not perform undercloud configuration validations --inflight-validations Activate in-flight validations during the deploy. In-flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Print the install command instead of running it -y, --yes Skip yes/no prompt (assume yes). --disable-container-prepare Disable the container preparation actions to prevent container tags from being updated and new containers from being fetched. If you skip this but do not have the container parameters configured, the deployment action may fail. --reproduce-command Create a reproducer command with ansible commandline and all environment variables. --skip-package-updates Flag to skip the package update when performing upgrades and updates --system-upgrade SYSTEM_UPGRADE Run system upgrade while using the provided environment yaml file.
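As an illustration of how these arguments are typically combined (the paths below are placeholders, not values mandated by this reference), a backup run and an upgrade run might look like this:
# One-time setup of ReaR on the undercloud and NFS on the BackupNode host
openstack undercloud backup --setup-rear
openstack undercloud backup --setup-nfs
# Run a backup that adds one extra directory and excludes a scratch folder
openstack undercloud backup --add-path /home/stack/custom-config/ --exclude-path /home/stack/tmp/
# Database-only backup written to /home/stack
openstack undercloud backup --db-only
# Upgrade the undercloud, skipping package updates and the yes/no prompt
openstack undercloud upgrade --skip-package-updates -y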
|
[
"openstack undercloud backup [--init [INIT]] [--setup-nfs] [--setup-rear] [--cron] [--db-only] [--inventory INVENTORY] [--add-path ADD_PATH] [--exclude-path EXCLUDE_PATH] [--save-swift] [--extra-vars EXTRA_VARS]",
"openstack undercloud install [--force-stack-update] [--no-validations] [--inflight-validations] [--dry-run] [-y] [--disable-container-prepare] [--reproduce-command]",
"openstack undercloud upgrade [--force-stack-update] [--no-validations] [--inflight-validations] [--dry-run] [-y] [--disable-container-prepare] [--reproduce-command] [--skip-package-updates] [--system-upgrade SYSTEM_UPGRADE]"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/undercloud
|
Appendix B. Restoring Manual Changes Overwritten by a Puppet Run
|
Appendix B. Restoring Manual Changes Overwritten by a Puppet Run If your manual configuration has been overwritten by a Puppet run, you can restore the files to their previous state. The following example shows you how to restore a DHCP configuration file overwritten by a Puppet run. Procedure Copy the file you intend to restore. This allows you to compare the files to check for any mandatory changes required by the upgrade. This is not common for DNS or DHCP services. Check the log files to note down the md5sum of the overwritten file. For example: Restore the overwritten file: Compare the backup file and the restored file, and edit the restored file to include any mandatory changes required by the upgrade.
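For the final comparison step, a simple diff of the backup copy against the restored file is usually enough to spot settings you still need to carry over by hand. The exact invocation below is illustrative and uses the DHCP file names from this example:
# Compare the pre-restore copy with the restored configuration file
diff -u /etc/dhcp/dhcpd.backup /etc/dhcp/dhcpd.conf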
|
[
"cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup",
"journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_disconnected_network_environment/restoring-manual-changes-overwritten-by-a-puppet-run_satellite
|
Chapter 20. console
|
Chapter 20. console This chapter describes the commands under the console command. 20.1. console log show Show server's console output Usage: Table 20.1. Positional arguments Value Summary <server> Server to show console log (name or id) Table 20.2. Command arguments Value Summary -h, --help Show this help message and exit --lines <num-lines> Number of lines to display from the end of the log (default=all) 20.2. console url show Show server's remote console URL Usage: Table 20.3. Positional arguments Value Summary <server> Server to show url (name or id) Table 20.4. Command arguments Value Summary -h, --help Show this help message and exit --novnc Show novnc console url (default) --xvpvnc Show xvpvnc console url --spice Show spice console url --rdp Show rdp console url --serial Show serial console url --mks Show webmks console url Table 20.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 20.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
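For example (the server name my-server and the line count are placeholders, not values from this reference), the two commands are typically invoked like this:
# Show the last 50 lines of a server's console log
openstack console log show --lines 50 my-server
# Print the noVNC console URL for the same server in JSON format
openstack console url show --novnc -f json my-server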
|
[
"openstack console log show [-h] [--lines <num-lines>] <server>",
"openstack console url show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--novnc | --xvpvnc | --spice | --rdp | --serial | --mks] <server>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/console
|
Object Gateway Guide
|
Object Gateway Guide Red Hat Ceph Storage 8 Deploying, configuring, and administering a Ceph Object Gateway Red Hat Ceph Storage Documentation Team
|
[
"ceph orch apply mon --placement=\"host1 host2 host3\"",
"service_type: mon placement: hosts: - host01 - host02 - host03",
"ceph orch apply -i mon.yml",
"ceph orch apply rgw example --placement=\"6 host1 host2 host3\"",
"service_type: rgw service_id: example placement: count: 6 hosts: - host01 - host02 - host03",
"ceph orch apply -i rgw.yml",
"mon_pg_warn_max_per_osd = n",
"ceph osd pool create .us-west.rgw.buckets.non-ec 64 64 replicated rgw-service",
"## SAS-SSD ROOT DECLARATION ## root sas-ssd { id -1 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-sas-ssd weight 4.000 item data1-sas-ssd weight 4.000 item data0-sas-ssd weight 4.000 }",
"## INDEX ROOT DECLARATION ## root index { id -2 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-index weight 1.000 item data1-index weight 1.000 item data0-index weight 1.000 }",
"host data2-sas-ssd { id -11 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item osd.0 weight 1.000 item osd.1 weight 1.000 item osd.2 weight 1.000 item osd.3 weight 1.000 }",
"host data2-index { id -21 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item osd.4 weight 1.000 }",
"osd_crush_update_on_start = false",
"[osd.0] osd crush location = \"host=data2-sas-ssd\" [osd.1] osd crush location = \"host=data2-sas-ssd\" [osd.2] osd crush location = \"host=data2-sas-ssd\" [osd.3] osd crush location = \"host=data2-sas-ssd\" [osd.4] osd crush location = \"host=data2-index\"",
"## SERVICE RULE DECLARATION ## rule rgw-service { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type rack step emit }",
"## THROUGHPUT RULE DECLARATION ## rule rgw-throughput { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type host step emit }",
"## INDEX RULE DECLARATION ## rule rgw-index { type replicated min_size 1 max_size 10 step take index step chooseleaf firstn 0 type rack step emit }",
"rule ecpool-86 { step take default class hdd step choose indep 4 type host step choose indep 4 type osd step emit }",
"rule ecpool-86 { type msr_indep step take default class hdd step choosemsr 4 type host step choosemsr 4 type osd step emit }",
"rule ecpool-86 { step take default class hdd step choose indep 4 type host step choose indep 4 type osd step emit }",
"rule ecpool-86 { type msr_indep step take default class hdd step choosemsr 4 type host step choosemsr 4 type osd step emit }",
"[osd] osd_max_backfills = 1 osd_recovery_max_active = 1 osd_recovery_op_priority = 1",
"ceph config set global osd_map_message_max 10 ceph config set osd osd_map_cache_size 20 ceph config set osd osd_map_share_max_epochs 10 ceph config set osd osd_pg_epoch_persisted_max_stale 10",
"[osd] osd_scrub_begin_hour = 23 #23:01H, or 10:01PM. osd_scrub_end_hour = 6 #06:01H or 6:01AM.",
"[osd] osd_scrub_load_threshold = 0.25",
"objecter_inflight_ops = 24576",
"rgw_thread_pool_size = 512",
"ceph soft nofile unlimited",
"USER_NAME soft nproc unlimited",
"cephadm shell",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default",
"radosgw-admin period update --rgw-realm= REALM_NAME --commit",
"radosgw-admin period update --rgw-realm=test_realm --commit",
"ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] [--zonegroup= ZONE_GROUP_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"",
"ceph orch apply rgw test --realm=test_realm --zone=test_zone --zonegroup=default --placement=\"2 host01 host02\"",
"ceph orch apply rgw SERVICE_NAME",
"ceph orch apply rgw foo",
"ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000",
"ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"label:rgw count-per-host:2\" --port=8000",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"cephadm shell",
"cat nfs-conf.yml service_type: nfs service_id: nfs-rgw-service placement: hosts: ['host1'] spec: port: 2049",
"ceph orch apply -i nfs-conf.yml",
"ceph orch ls --service_name nfs.nfs-rgw-service --service_type nfs",
"touch radosgw.yml",
"service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_zonegroup: ZONE_GROUP_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network",
"service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_zonegroup: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"radosgw-admin realm create --rgw-realm=test_realm --default radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup --default radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone --default radosgw-admin period update --rgw-realm=test_realm --commit",
"service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_zonegroup: test_zonegroup rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i /var/lib/ceph/radosgw/radosgw.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default",
"radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system",
"radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system",
"radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service",
"radosgw-admin realm pull --rgw-realm= PRIMARY_REALM --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY --default",
"radosgw-admin realm pull --rgw-realm=test_realm --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --default",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set rgw rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"",
"radosgw-admin sync status",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm rgw.test_realm.test_zone_bb",
"ceph orch ps",
"ceph orch ps",
"cephadm shell",
"ceph mgr module enable rgw",
"ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]",
"ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.",
"rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - _HOSTNAME_1_ - _HOSTNAME_2_",
"cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02",
"service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - _hostname1_ - _hostname2_",
"service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - foo - bar",
"cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]",
"ceph orch list --daemon-type=rgw NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID rgw.myrealm.myzonegroup.ceph-saya-6-osd-host01.eburst ceph-saya-6-osd-host01 *:80 running (111m) 9m ago 111m 82.3M - 17.2.6-22.el9cp 2d5b080de0b0 2f3eaca7e88e",
"radosgw-admin zonegroup get --rgw-zonegroup _zone_group_name_",
"radosgw-admin zonegroup get --rgw-zonegroup my_zonegroup { \"id\": \"02a175e2-7f23-4882-8651-6fbb15d25046\", \"name\": \"my_zonegroup_ck\", \"api_name\": \"my_zonegroup_ck\", \"is_master\": true, \"endpoints\": [ \"http://vm-00:80\" ], \"hostnames\": [ \"foo\" \"bar\" ], \"hostnames_s3website\": [], \"master_zone\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"zones\": [ { \"id\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"name\": \"my_zone_ck\", \"endpoints\": [ \"http://vm-00:80\" ], \"log_meta\": false, \"log_data\": false, \"bucket_index_max_shards\": 11, \"read_only\": false, \"tier_type\": \"\", \"sync_from_all\": true, \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"compress-encrypted\", \"resharding\" ] } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"439e9c37-4ddc-43a3-99e9-ea1f3825bb51\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ] }",
"cephadm shell",
"ceph mgr module enable rgw",
"ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]",
"ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.",
"rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - HOSTNAME_1 - HOSTNAME_2 spec: rgw_frontend_port: PORT_NUMBER zone_endpoints: http:// RGW_HOSTNAME_1 : RGW_PORT_NUMBER_1 , http:// RGW_HOSTNAME_2 : RGW_PORT_NUMBER_2",
"cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02 spec: rgw_frontend_port: 5500 zone_endpoints: http://<rgw_host1>:<rgw_port1>, http://<rgw_host2>:<rgw_port2>",
"cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]",
"cat zone-spec.yaml rgw_zone: my-secondary-zone rgw_realm_token: <token> placement: hosts: - ceph-node-1 - ceph-node-2 spec: rgw_frontend_port: 5500",
"cephadm shell --mount zone-spec.yaml:/var/lib/ceph/radosgw/zone-spec.yaml",
"ceph mgr module enable rgw",
"ceph rgw zone create -i /var/lib/ceph/radosgw/zone-spec.yaml",
"radosgw-admin realm list { \"default_info\": \"d07c00ef-9041-4f6e-8804-7d40240556ae\", \"realms\": [ \"myrealm\" ] }",
"bucket-name.domain-name.com",
"address=/. HOSTNAME_OR_FQDN / HOST_IP_ADDRESS",
"address=/.gateway-host01/192.168.122.75",
"USDTTL 604800 @ IN SOA gateway-host01. root.gateway-host01. ( 2 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS gateway-host01. @ IN A 192.168.122.113 * IN CNAME @",
"ping mybucket. HOSTNAME",
"ping mybucket.gateway-host01",
"radosgw-admin zonegroup get --rgw-zonegroup= ZONEGROUP_NAME > zonegroup.json",
"radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json",
"cp zonegroup.json zonegroup.backup.json",
"cat zonegroup.json { \"id\": \"d523b624-2fa5-4412-92d5-a739245f0451\", \"name\": \"asia\", \"api_name\": \"asia\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"zones\": [ { \"id\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"name\": \"india\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"d7e2ad25-1630-4aee-9627-84f24e13017f\", \"sync_policy\": { \"groups\": [] } }",
"\"hostnames\": [\"host01\", \"host02\",\"host03\"],",
"radosgw-admin zonegroup set --rgw-zonegroup= ZONEGROUP_NAME --infile=zonegroup.json",
"radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json",
"radosgw-admin period update --commit",
"[client.rgw.node1] rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>",
"touch rgw.yml",
"service_type: rgw service_id: SERVICE_ID service_name: SERVICE_NAME placement: hosts: - HOST_NAME spec: ssl: true rgw_frontend_ssl_certificate: CERT_HASH",
"service_type: rgw service_id: foo service_name: rgw.foo placement: hosts: - host01 spec: ssl: true rgw_frontend_ssl_certificate: | -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END CERTIFICATE-----",
"ceph orch apply -i rgw.yml",
"mkfs.ext4 nvme-drive-path",
"mkfs.ext4 /dev/nvme0n1 mount /dev/nvme0n1 /mnt/nvme0n1/",
"mkdir <nvme-mount-path>/cache-directory-name",
"mkdir /mnt/nvme0n1/rgw_datacache",
"chmod a+rwx nvme-mount-path ; chmod a+rwx rgw_d3n_l1_datacache_persistent_path",
"chmod a+rwx /mnt/nvme0n1 ; chmod a+rwx /mnt/nvme0n1/rgw_datacache/",
"\"extra_container_args: \"-v\" \"rgw_d3n_l1_datacache_persistent_path:rgw_d3n_l1_datacache_persistent_path\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/:/mnt/nvme0n1/rgw_datacache/\"",
"\"extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 count_per_host: 2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\"",
"ceph orch apply -i rgw-spec.yml",
"ceph config set <client.rgw> <CONF-OPTION> <VALUE>",
"rgw_d3n_l1_datacache_persistent_path=/mnt/nvme/rgw_datacache/",
"rgw_d3n_l1_datacache_size=10737418240",
"fallocate -l 1G ./1G.dat s3cmd mb s3://bkt s3cmd put ./1G.dat s3://bkt",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 13s 73.94 MB/s done",
"ls -lh /mnt/nvme/rgw_datacache rw-rr. 1 ceph ceph 1.0M Jun 2 06:18 cc7f967c-0021-43b2-9fdf-23858e868663.615391.1_shadow.ZCiCtMWeu_19wb100JIEZ-o4tv2IyA_1",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 6s 155.07 MB/s done",
"ceph config set client.rgw debug_rgw VALUE",
"ceph config set client.rgw debug_rgw 20",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw. NAME .asok config set debug_rgw VALUE",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw.rgw.asok config set debug_rgw 20",
"ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_enable_static_website true ceph config set client.rgw rgw_enable_apis s3,s3website ceph config set client.rgw rgw_dns_name objects-zonegroup.example.com ceph config set client.rgw rgw_dns_s3website_name objects-website-zonegroup.example.com ceph config set client.rgw rgw_resolve_cname true",
"objects-zonegroup.domain.com. IN A 192.0.2.10 objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10 *.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com. objects-website-zonegroup.domain.com. IN A 192.0.2.20 objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20",
"*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.",
"http://bucket1.objects-website-zonegroup.domain.com",
"www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN A 192.0.2.20 www.example.com. IN AAAA 2001:DB8::192:0:2:20",
"http://www.example.com",
"[root@host01 ~] touch ingress.yaml",
"service_type: ingress 1 service_id: SERVICE_ID 2 placement: 3 hosts: - HOST1 - HOST2 - HOST3 spec: backend_service: SERVICE_ID virtual_ip: IP_ADDRESS / CIDR 4 frontend_port: INTEGER 5 monitor_port: INTEGER 6 virtual_interface_networks: 7 - IP_ADDRESS / CIDR ssl_cert: | 8",
"service_type: ingress service_id: rgw.foo placement: hosts: - host01.example.com - host02.example.com - host03.example.com spec: backend_service: rgw.foo virtual_ip: 192.168.1.2/24 frontend_port: 8080 monitor_port: 1967 virtual_interface_networks: - 10.10.0.0/16 ssl_cert: | -----BEGIN CERTIFICATE----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END CERTIFICATE----- -----BEGIN PRIVATE KEY----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END PRIVATE KEY-----",
"service_type: ingress service_id: rgw.ssl # adjust to match your existing RGW service placement: hosts: - hostname1 - hostname2 spec: backend_service: rgw.rgw.ssl.ceph13 # adjust to match your existing RGW service virtual_ip: IP_ADDRESS/CIDR # ex: 192.168.20.1/24 frontend_port: INTEGER # ex: 443 monitor_port: INTEGER # ex: 1969 use_tcp_mode_over_rgw: True",
"cephadm shell --mount ingress.yaml:/var/lib/ceph/radosgw/ingress.yaml",
"ceph config set mgr mgr/cephadm/container_image_haproxy HAPROXY_IMAGE_ID ceph config set mgr mgr/cephadm/container_image_keepalived KEEPALIVED_IMAGE_ID",
"ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest",
"ceph orch apply -i /var/lib/ceph/radosgw/ingress.yaml",
"ip addr show",
"wget HOST_NAME",
"wget host01.example.com",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>",
"cephadm shell",
"ceph nfs export create rgw --cluster-id NFS_CLUSTER_NAME --pseudo-path PATH_FROM_ROOT --user-id USER_ID",
"ceph nfs export create rgw --cluster-id cluster1 --pseudo-path root/testnfs1/ --user-id nfsuser",
"mount -t nfs IP_ADDRESS:PATH_FROM_ROOT -osync MOUNT_POINT",
"mount -t nfs 10.0.209.0:/root/testnfs1 -osync /mnt/mount1",
"cat ./haproxy.cfg global log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 7000 user haproxy group haproxy daemon stats socket /var/lib/haproxy/stats defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 30s timeout server 30s timeout http-keep-alive 10s timeout check 10s timeout client-fin 1s timeout server-fin 1s maxconn 6000 listen stats bind 0.0.0.0:1936 mode http log global maxconn 256 clitimeout 10m srvtimeout 10m contimeout 10m timeout queue 10m JTH start stats enable stats hide-version stats refresh 30s stats show-node ## stats auth admin:password stats uri /haproxy?stats stats admin if TRUE frontend main bind *:5000 acl url_static path_beg -i /static /images /javascript /stylesheets acl url_static path_end -i .jpg .gif .png .css .js use_backend static if url_static default_backend app maxconn 6000 backend static balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000 backend app balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000",
"ceph config set osd osd_pool_default_pg_num 50 ceph config set osd osd_pool_default_pgp_num 50",
"radosgw-admin realm create --rgw-realm REALM_NAME --default",
"radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name NEW_ZONE_GROUP_NAME radosgw-admin zone rename --rgw-zone default --zone-new-name NEW_ZONE_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --api-name NEW_ZONE_GROUP_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --endpoints http://ENDPOINT --master --default",
"radosgw-admin zone modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --rgw-zone NEW_ZONE_NAME --endpoints http://ENDPOINT --master --default",
"radosgw-admin user create --uid USER_ID --display-name DISPLAY_NAME --access-key ACCESS_KEY --secret SECRET_KEY --system",
"radosgw-admin period update --commit",
"ceph orch ls | grep rgw",
"ceph config set client.rgw.SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw.SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw.SERVICE_NAME rgw_zone PRIMARY_ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-1",
"systemctl restart ceph-radosgw@rgw.`hostname -s`",
"ceph orch restart _RGW_SERVICE_NAME_",
"ceph orch restart rgw.rgwsvcid.mons-1.jwgwwp",
"cephadm shell",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup=_ZONE_GROUP_NAME_ --rgw-zone=_SECONDARY_ZONE_NAME_ --endpoints=http://_RGW_SECONDARY_HOSTNAME_:_RGW_PRIMARY_PORT_NUMBER_1_ --access-key=_SYSTEM_ACCESS_KEY_ --secret=_SYSTEM_SECRET_KEY_ [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"radosgw-admin zone create --rgw-zonegroup={ ZONE_GROUP_NAME } --rgw-zone={ ZONE_NAME } --endpoints={http:// FQDN : PORT },{http:// FQDN : PORT } --tier-type=archive",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints={http://example.com:8080} --tier-type=archive",
"radosgw-admin zone modify --rgw-zone archive --sync_from primary --sync_from_all false --sync-from-rm secondary radosgw-admin period update --commit",
"ceph config set client.rgw rgw_max_objs_per_shard 50000",
"<?xml version=\"1.0\" ?> <LifecycleConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Rule> <ID>delete-1-days-az</ID> <Filter> <Prefix></Prefix> <ArchiveZone /> 1 </Filter> <Status>Enabled</Status> <Expiration> <Days>1</Days> </Expiration> </Rule> </LifecycleConfiguration>",
"radosgw-admin lc get --bucket BUCKET_NAME",
"radosgw-admin lc get --bucket test-bkt { \"prefix_map\": { \"\": { \"status\": true, \"dm_expiration\": true, \"expiration\": 0, \"noncur_expiration\": 2, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"Rule 1\", \"rule\": { \"id\": \"Rule 1\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"2\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"\", \"obj_tags\": { \"tagset\": {} }, \"archivezone\": \"\" 1 }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": true } } ] }",
"radosgw-admin bucket link --uid NEW_USER_ID --bucket BUCKET_NAME --yes-i-really-mean-it",
"radosgw-admin bucket link --uid arcuser1 --bucket arc1-deleted-da473fbbaded232dc5d1e434675c1068 --yes-i-really-mean-it",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default --read-only=false",
"radosgw-admin period update --commit",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc1zg --endpoints=http://rgw1:80 --rgw-realm=ldc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=ldc1 --zone=ldc1z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc1z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc2 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc2zg --endpoints=http://rgw2:80 --rgw-realm=ldc2 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc2zg --rgw-zone=ldc2z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=ldc2 --zone=ldc2z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc2 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc2zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc2z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REPLICATED_REALM_1 --default",
"radosgw-admin realm create --rgw-realm=rdc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=http://_RGW_NODE_NAME :80 --rgw-realm=_RGW_REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=rdc1zg --endpoints=http://rgw1:80 --rgw-realm=rdc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= RGW_ZONE_GROUP --rgw-zone=_MASTER_RGW_NODE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=rdc1zg --rgw-zone=rdc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin user create --uid=\" SYNCHRONIZATION_USER \" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone= RGW_ZONE --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin user create --uid=\"synchronization-user\" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone=rdc1zg --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=rdc1 --zone=rdc1z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc1z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin zone create --rgw-zone= RGW_ZONE --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=https://tower-osd4.cephtips.com --access-key=_ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin zone create --rgw-zone=rdc2z --rgw-zonegroup=rdc1zg --endpoints=https://tower-osd4.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=rdc1 --zone=rdc2z --placement=\"1 host04\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc2z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin sync status",
"radosgw-admin sync status realm 59762f08-470c-46de-b2b1-d92c50986e67 (ldc2) zonegroup 7cf8daf8-d279-4d5c-b73e-c7fd2af65197 (ldc2zg) zone 034ae8d3-ae0c-4e35-8760-134782cb4196 (ldc2z) metadata sync no sync (zone is master)",
"radosgw-admin sync status --rgw-realm RGW_REALM_NAME",
"radosgw-admin sync status --rgw-realm rdc1 realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source",
"radosgw-admin user create --uid=\" LOCAL_USER\" --display-name=\"Local user\" --rgw-realm=_REALM_NAME --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin user create --uid=\"local-user\" --display-name=\"Local user\" --rgw-realm=ldc1 --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z",
"radosgw-admin sync info --bucket=buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }",
"radosgw-admin sync policy get --bucket= BUCKET_NAME",
"radosgw-admin sync policy get --bucket=mybucket",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden",
"radosgw-admin sync group create --group-id=mygroup1 --status=enabled",
"radosgw-admin bucket sync run",
"radosgw-admin bucket sync run",
"radosgw-admin sync group modify --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden",
"radosgw-admin sync group modify --group-id=mygroup1 --status=forbidden",
"radosgw-admin bucket sync run",
"radosgw-admin bucket sync run",
"radosgw-admin sync group get --bucket= BUCKET_NAME --group-id= GROUP_ID",
"radosgw-admin sync group get --group-id=mygroup",
"radosgw-admin sync group remove --bucket= BUCKET_NAME --group-id= GROUP_ID",
"radosgw-admin sync group remove --group-id=mygroup",
"radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE",
"radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2",
"radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE",
"radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2",
"radosgw-admin sync group flow remove --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2",
"radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET_ID --prefix= SOURCE_PREFIX --prefix-rm --tags-add= KEY1=VALUE1 , KEY2=VALUE2 ,.. --tags-rm= KEY1=VALUE1 , KEY2=VALUE2 , ... --dest-owner= OWNER_ID --storage-class= STORAGE_CLASS --mode= USER --uid= USER_ID",
"radosgw-admin sync group pipe modify --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET1 --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET1 --dest-bucket-id=_DESTINATION_BUCKET-ID",
"radosgw-admin sync group pipe modify --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1",
"radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET , --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET-ID",
"radosgw-admin sync group pipe remove --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1",
"radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID",
"radosgw-admin sync group pipe remove -bucket-name=mybuck --group-id=zonegroup --pipe-id=pipe",
"radosgw-admin sync info --bucket= BUCKET_NAME --effective-zone-name= ZONE_NAME",
"radosgw-admin sync info",
"radosgw-admin sync group create --group-id=group1 --status=allowed",
"radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=us-east,us-west",
"radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'",
"radosgw-admin sync group modify --group-id=group1 --status=enabled",
"radosgw-admin period update --commit",
"radosgw-admin sync info -bucket buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }",
"radosgw-admin sync group create --group-id= GROUP_ID --status=allowed",
"radosgw-admin sync group create --group-id=group1 --status=allowed",
"radosgw-admin sync group flow create --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME",
"radosgw-admin sync group flow create --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2",
"radosgw-admin sync group pipe create --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '",
"radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'",
"radosgw-admin period update --commit",
"radosgw-admin sync info",
"radosgw-admin sync group create --group-id= GROUP_ID --status=allowed --bucket= BUCKET_NAME",
"radosgw-admin sync group create --group-id=group1 --status=allowed --bucket=buck",
"radosgw-admin sync group flow create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME",
"radosgw-admin sync group flow create --bucket-name=buck --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2",
"radosgw-admin sync group pipe create --group-id= GROUP_ID --bucket-name= BUCKET_NAME --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '",
"radosgw-admin sync group pipe create --group-id=group1 --bucket-name=buck --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'",
"radosgw-admin sync info --bucket-name= BUCKET_NAME",
"radosgw-admin sync group modify --group-id=group1 --status=allowed",
"radosgw-admin period update --commit",
"radosgw-admin sync group create --bucket=buck --group-id=buck-default --status=enabled",
"radosgw-admin sync group pipe create --bucket=buck --group-id=buck-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*'",
"radosgw-admin bucket sync info --bucket buck realm 33157555-f387-44fc-b4b4-3f9c0b32cd66 (india) zonegroup 594f1f63-de6f-4e1e-90b6-105114d7ad55 (shared) zone ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5 (primary) bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1] source zone e0e75beb-4e28-45ff-8d48-9710de06dcd0 bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1]",
"radosgw-admin sync info --bucket buck { \"id\": \"pipe1\", \"source\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled",
"radosgw-admin sync group create --bucket=buck4 --group-id=buck4-default --status=enabled",
"radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --source-bucket= SOURCE_BUCKET_NAME --dest-zones= DESTINATION_ZONE_NAME",
"radosgw-admin sync group pipe create --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones='*' --source-bucket=buck5 --dest-zones='*'",
"radosgw-admin sync group pipe modify --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones=us-west --source-bucket=buck5 --dest-zones='*'",
"radosgw-admin sync info --bucket-name= BUCKET_NAME",
"radosgw-admin sync info --bucket=buck4 { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [ \"buck4:115b12b3-....14433.2\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, } ] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] }",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled",
"radosgw-admin sync group create --bucket=buck6 --group-id=buck6-default --status=enabled",
"radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --dest-zones= DESTINATION_ZONE_NAME --dest-bucket= DESTINATION_BUCKET_NAME",
"radosgw-admin sync group pipe create --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*' --dest-bucket=buck5",
"radosgw-admin sync group pipe modify --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='us-west' --dest-bucket=buck5",
"radosgw-admin sync info --bucket-name= BUCKET_NAME",
"radosgw-admin sync info --bucket buck5 { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck6:c7887c5b-f6ff-4d5f-9736-aa5cdb4a15e8.20493.4\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck5\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"s3cmd\" } }, ], \"hints\": { \"sources\": [], \"dests\": [ \"buck5\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled",
"radosgw-admin sync group create --bucket=buck1 --group-id=buck8-default --status=enabled",
"radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --tags-add= KEY1 = VALUE1 , KEY2 = VALUE2 --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '",
"radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-tags --tags-add=color=blue,color=red --source-zones='*' --dest-zones='*'",
"radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --prefix= PREFIX --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '",
"radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-prefix --prefix=foo/ --source-zones='*' --dest-zones='*' \\",
"radosgw-admin sync info --bucket= BUCKET_NAME",
"radosgw-admin sync info --bucket=buck1",
"radosgw-admin sync group modify --group-id buck-default --status forbidden --bucket buck { \"groups\": [ { \"id\": \"buck-default\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"pipe1\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", } } ], \"status\": \"forbidden\" } ] }",
"radosgw-admin sync info --bucket buck { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }",
"radosgw-admin realm create --rgw-realm= REALM_NAME",
"radosgw-admin realm create --rgw-realm=test_realm",
"radosgw-admin realm default --rgw-realm= REALM_NAME",
"radosgw-admin realm default --rgw-realm=test_realm1",
"radosgw-admin realm default --rgw-realm=test_realm",
"radosgw-admin realm delete --rgw-realm= REALM_NAME",
"radosgw-admin realm delete --rgw-realm=test_realm",
"radosgw-admin realm get --rgw-realm= REALM_NAME",
"radosgw-admin realm get --rgw-realm=test_realm >filename.json",
"{ \"id\": \"0a68d52e-a19c-4e8e-b012-a8f831cb3ebc\", \"name\": \"test_realm\", \"current_period\": \"b0c5bbef-4337-4edd-8184-5aeab2ec413b\", \"epoch\": 1 }",
"radosgw-admin realm set --rgw-realm= REALM_NAME --infile= IN_FILENAME",
"radosgw-admin realm set --rgw-realm=test_realm --infile=filename.json",
"radosgw-admin realm list",
"radosgw-admin realm list-periods",
"radosgw-admin realm pull --url= URL_TO_MASTER_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin realm rename --rgw-realm= REALM_NAME --realm-new-name= NEW_REALM_NAME",
"radosgw-admin realm rename --rgw-realm=test_realm --realm-new-name=test_realm2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME [--rgw-realm= REALM_NAME ] [--master]",
"radosgw-admin zonegroup create --rgw-zonegroup=zonegroup1 --rgw-realm=test_realm --default",
"zonegroup modify --rgw-zonegroup= ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --rgw-zonegroup=zonegroup1",
"radosgw-admin zonegroup default --rgw-zonegroup= ZONE_GROUP_NAME",
"radosgw-admin zonegroup default --rgw-zonegroup=zonegroup2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup default --rgw-zonegroup=us",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup add --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup rename --rgw-zonegroup= ZONE_GROUP_NAME --zonegroup-new-name= NEW_ZONE_GROUP_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup delete --rgw-zonegroup= ZONE_GROUP_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup list",
"{ \"default_info\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"zonegroups\": [ \"us\" ] }",
"radosgw-admin zonegroup get [--rgw-zonegroup= ZONE_GROUP_NAME ]",
"{ \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" }",
"radosgw-admin zonegroup set --infile zonegroup.json",
"radosgw-admin period update --commit",
"{ \"zonegroups\": [ { \"key\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"val\": { \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" } } ], \"master_zonegroup\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 } }",
"radosgw-admin zonegroup-map set --infile zonegroupmap.json",
"radosgw-admin period update --commit",
"radosgw-admin zone create --rgw-zone= ZONE_NAME [--zonegroup= ZONE_GROUP_NAME ] [--endpoints= ENDPOINT_PORT [,<endpoint:port>] [--master] [--default] --access-key ACCESS_KEY --secret SECRET_KEY",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zone delete --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"ceph osd pool delete DELETED_ZONE_NAME .rgw.control DELETED_ZONE_NAME .rgw.control --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.data.root DELETED_ZONE_NAME .rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.log DELETED_ZONE_NAME .rgw.log --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.users.uid DELETED_ZONE_NAME .rgw.users.uid --yes-i-really-really-mean-it",
"radosgw-admin zone modify [options] --access-key=<key> --secret/--secret-key=<key> --master --default --endpoints=<list>",
"radosgw-admin period update --commit",
"radosgw-admin zone list",
"radosgw-admin zone get [--rgw-zone= ZONE_NAME ]",
"{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\"}, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\"} } ] }",
"radosgw-admin zone set --rgw-zone=test-zone --infile zone.json",
"radosgw-admin period update --commit",
"radosgw-admin zone rename --rgw-zone= ZONE_NAME --zone-new-name= NEW_ZONE_NAME",
"radosgw-admin period update --commit",
"firewall-cmd --zone=public --add-port=636/tcp firewall-cmd --zone=public --add-port=636/tcp --permanent",
"certutil -d /etc/openldap/certs -A -t \"TC,,\" -n \"msad-frog-MSAD-FROG-CA\" -i /path/to/ldap.pem",
"setsebool -P httpd_can_network_connect on",
"chmod 644 /etc/openldap/certs/*",
"ldapwhoami -H ldaps://rh-directory-server.example.com -d 9",
"radosgw-admin metadata list user",
"ldapsearch -x -D \"uid=ceph,ou=People,dc=example,dc=com\" -W -H ldaps://example.com -b \"ou=People,dc=example,dc=com\" -s sub 'uid=ceph'",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_secret /etc/bindpass",
"service_type: rgw service_id: rgw.1 service_name: rgw.rgw.1 placement: label: rgw extra_container_args: - -v - /etc/bindpass:/etc/bindpass",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_uri ldaps://:636 ceph config set client.rgw rgw_ldap_binddn \"ou=poc,dc=example,dc=local\" ceph config set client.rgw rgw_ldap_searchdn \"ou=poc,dc=example,dc=local\" ceph config set client.rgw rgw_ldap_dnattr \"uid\" ceph config set client.rgw rgw_s3_auth_use_ldap true",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"\"objectclass=inetorgperson\"",
"\"(&(uid=joe)(objectclass=inetorgperson))\"",
"\"(&(uid=@USERNAME@)(memberOf=cn=ceph-users,ou=groups,dc=mycompany,dc=com))\"",
"export RGW_ACCESS_KEY_ID=\" USERNAME \"",
"export RGW_SECRET_ACCESS_KEY=\" PASSWORD \"",
"radosgw-token --encode --ttype=ldap",
"radosgw-token --encode --ttype=ad",
"export RGW_ACCESS_KEY_ID=\"ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAibGRhcCIsCiAgICAgICAgImlkIjogImNlcGgiLAogICAgICAgICJrZXkiOiAiODAwI0dvcmlsbGEiCiAgICB9Cn0K\"",
"cat .aws/credentials [default] aws_access_key_id = ewogICaGbnjlwe9UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJjZXBoIiwKICAgICAgICAia2V5IjogInBhc3M0Q2VwaCIKICAgIH0KfQo= aws_secret_access_key =",
"aws s3 ls --endpoint http://host03 2023-12-11 17:08:50 mybucket 2023-12-24 14:55:44 mybucket2",
"radosgw-admin user info --uid dir1 { \"user_id\": \"dir1\", \"display_name\": \"dir1\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"ldap\", \"mfa_ids\": [] }",
"radosgw-admin metadata list user",
"ldapsearch -x -D \"uid=ceph,ou=People,dc=example,dc=com\" -W -H ldaps://example.com -b \"ou=People,dc=example,dc=com\" -s sub 'uid=ceph'",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_secret /etc/bindpass",
"service_type: rgw service_id: rgw.1 service_name: rgw.rgw.1 placement: label: rgw extra_container_args: - -v - /etc/bindpass:/etc/bindpass",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_uri ldaps://_FQDN_:636 ceph config set client.rgw rgw_ldap_binddn \"_BINDDN_\" ceph config set client.rgw rgw_ldap_searchdn \"_SEARCHDN_\" ceph config set client.rgw rgw_ldap_dnattr \"cn\" ceph config set client.rgw rgw_s3_auth_use_ldap true",
"rgw_ldap_binddn \"uid=ceph,cn=users,cn=accounts,dc=example,dc=com\"",
"rgw_ldap_searchdn \"cn=users,cn=accounts,dc=example,dc=com\"",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"export RGW_ACCESS_KEY_ID=\" USERNAME \"",
"export RGW_SECRET_ACCESS_KEY=\" PASSWORD \"",
"radosgw-token --encode --ttype=ldap",
"radosgw-token --encode --ttype=ad",
"export RGW_ACCESS_KEY_ID=\"ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAibGRhcCIsCiAgICAgICAgImlkIjogImNlcGgiLAogICAgICAgICJrZXkiOiAiODAwI0dvcmlsbGEiCiAgICB9Cn0K\"",
"cat .aws/credentials [default] aws_access_key_id = ewogICaGbnjlwe9UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJjZXBoIiwKICAgICAgICAia2V5IjogInBhc3M0Q2VwaCIKICAgIH0KfQo= aws_secret_access_key =",
"aws s3 ls --endpoint http://host03 2023-12-11 17:08:50 mybucket 2023-12-24 14:55:44 mybucket2",
"radosgw-admin user info --uid dir1 { \"user_id\": \"dir1\", \"display_name\": \"dir1\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"ldap\", \"mfa_ids\": [] }",
"openstack service create --name=swift --description=\"Swift Service\" object-store",
"openstack endpoint create --region REGION_NAME swift admin \" URL \" openstack endpoint create --region REGION_NAME swift public \" URL \" openstack endpoint create --region REGION_NAME swift internal \" URL \"",
"openstack endpoint create --region us-west swift admin \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift public \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift internal \"http://radosgw.example.com:8080/swift/v1\"",
"openstack endpoint list --service=swift",
"openstack endpoint show ENDPOINT_ID",
"mkdir /var/ceph/nss openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | certutil -d /var/ceph/nss -A -n ca -t \"TCu,Cu,Tuw\" openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | certutil -A -d /var/ceph/nss -n signing_cert -t \"P,P,P\"",
"ceph config set client.rgw nss_db_path \"/var/lib/ceph/radosgw/ceph-rgw.rgw01/nss\"",
"ceph config set client.rgw rgw_keystone_verify_ssl TRUE / FALSE ceph config set client.rgw rgw_s3_auth_use_keystone TRUE / FALSE ceph config set client.rgw rgw_keystone_api_version API_VERSION ceph config set client.rgw rgw_keystone_url KEYSTONE_URL : ADMIN_PORT ceph config set client.rgw rgw_keystone_accepted_roles ACCEPTED_ROLES_ ceph config set client.rgw rgw_keystone_accepted_admin_roles ACCEPTED_ADMIN_ROLES ceph config set client.rgw rgw_keystone_admin_domain default ceph config set client.rgw rgw_keystone_admin_project SERVICE_NAME ceph config set client.rgw rgw_keystone_admin_user KEYSTONE_TENANT_USER_NAME ceph config set client.rgw rgw_keystone_admin_password KEYSTONE_TENANT_USER_PASSWORD ceph config set client.rgw rgw_keystone_implicit_tenants KEYSTONE_IMPLICIT_TENANT_NAME ceph config set client.rgw rgw_swift_versioning_enabled TRUE / FALSE ceph config set client.rgw rgw_swift_enforce_content_length TRUE / FALSE ceph config set client.rgw rgw_swift_account_in_url TRUE / FALSE ceph config set client.rgw rgw_trust_forwarded_https TRUE / FALSE ceph config set client.rgw rgw_max_attr_name_len MAXIMUM_LENGTH_OF_METADATA_NAMES ceph config set client.rgw rgw_max_attrs_num_in_req MAXIMUM_NUMBER_OF_METADATA_ITEMS ceph config set client.rgw rgw_max_attr_size MAXIMUM_LENGTH_OF_METADATA_VALUE ceph config set client.rgw rgw_keystone_accepted_reader_roles SwiftSystemReader",
"ceph config set client.rgw rgw_keystone_verify_ssl false ceph config set client.rgw rgw_s3_auth_use_keystone true ceph config set client.rgw rgw_keystone_api_version 3 ceph config set client.rgw rgw_keystone_url http://<public Keystone endpoint>:5000/ ceph config set client.rgw rgw_keystone_accepted_roles 'member, Member, admin' ceph config set client.rgw rgw_keystone_accepted_admin_roles 'ResellerAdmin, swiftoperator' ceph config set client.rgw rgw_keystone_admin_domain default ceph config set client.rgw rgw_keystone_admin_project service ceph config set client.rgw rgw_keystone_admin_user swift ceph config set client.rgw rgw_keystone_admin_password password ceph config set client.rgw rgw_keystone_implicit_tenants true ceph config set client.rgw rgw_swift_versioning_enabled true ceph config set client.rgw rgw_swift_enforce_content_length true ceph config set client.rgw rgw_swift_account_in_url true ceph config set client.rgw rgw_trust_forwarded_https true ceph config set client.rgw rgw_max_attr_name_len 128 ceph config set client.rgw rgw_max_attrs_num_in_req 90 ceph config set client.rgw rgw_max_attr_size 1024 ceph config set client.rgw rgw_keystone_accepted_reader_roles SwiftSystemReader",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"grubby --update-kernel=ALL --args=\"intel_iommu=on\"",
"dnf install -y qatlib-service qatlib qatzip qatengine",
"usermod -aG qat root",
"cat /etc/sysconfig/qat ServicesEnabled=asym POLICY=8",
"cat /etc/sysconfig/qat ServicesEnabled=dc POLICY=8",
"cat /etc/sysconfig/qat ServicesEnabled=asym,dc POLICY=8",
"sudo vim /etc/security/limits.conf root - memlock 500000 ceph - memlock 500000",
"sudo su -l USDUSER",
"systemctl enable qat",
"systemctl reboot",
"service_type: rgw service_id: rgw_qat placement: label: rgw extra_container_args: - \"-v /etc/group:/etc/group:ro\" - \"--group-add=keep-groups\" - \"--cap-add=SYS_ADMIN\" - \"--cap-add=SYS_PTRACE\" - \"--cap-add=IPC_LOCK\" - \"--security-opt seccomp=unconfined\" - \"--ulimit memlock=209715200:209715200\" - \"--device=/dev/qat_adf_ctl:/dev/qat_adf_ctl\" - \"--device=/dev/vfio/vfio:/dev/vfio/vfio\" - \"--device=/dev/vfio/333:/dev/vfio/333\" - \"--device=/dev/vfio/334:/dev/vfio/334\" - \"--device=/dev/vfio/335:/dev/vfio/335\" - \"--device=/dev/vfio/336:/dev/vfio/336\" - \"--device=/dev/vfio/337:/dev/vfio/337\" - \"--device=/dev/vfio/338:/dev/vfio/338\" - \"--device=/dev/vfio/339:/dev/vfio/339\" - \"--device=/dev/vfio/340:/dev/vfio/340\" - \"--device=/dev/vfio/341:/dev/vfio/341\" - \"--device=/dev/vfio/342:/dev/vfio/342\" - \"--device=/dev/vfio/343:/dev/vfio/343\" - \"--device=/dev/vfio/344:/dev/vfio/344\" - \"--device=/dev/vfio/345:/dev/vfio/345\" - \"--device=/dev/vfio/346:/dev/vfio/346\" - \"--device=/dev/vfio/347:/dev/vfio/347\" - \"--device=/dev/vfio/348:/dev/vfio/348\" - \"--device=/dev/vfio/349:/dev/vfio/349\" - \"--device=/dev/vfio/350:/dev/vfio/350\" - \"--device=/dev/vfio/351:/dev/vfio/351\" - \"--device=/dev/vfio/352:/dev/vfio/352\" - \"--device=/dev/vfio/353:/dev/vfio/353\" - \"--device=/dev/vfio/354:/dev/vfio/354\" - \"--device=/dev/vfio/355:/dev/vfio/355\" - \"--device=/dev/vfio/356:/dev/vfio/356\" - \"--device=/dev/vfio/357:/dev/vfio/357\" - \"--device=/dev/vfio/358:/dev/vfio/358\" - \"--device=/dev/vfio/359:/dev/vfio/359\" - \"--device=/dev/vfio/360:/dev/vfio/360\" - \"--device=/dev/vfio/361:/dev/vfio/361\" - \"--device=/dev/vfio/362:/dev/vfio/362\" - \"--device=/dev/vfio/363:/dev/vfio/363\" - \"--device=/dev/vfio/364:/dev/vfio/364\" - \"--device=/dev/vfio/365:/dev/vfio/365\" - \"--device=/dev/vfio/366:/dev/vfio/366\" - \"--device=/dev/vfio/367:/dev/vfio/367\" - \"--device=/dev/vfio/368:/dev/vfio/368\" - \"--device=/dev/vfio/369:/dev/vfio/369\" - \"--device=/dev/vfio/370:/dev/vfio/370\" - \"--device=/dev/vfio/371:/dev/vfio/371\" - \"--device=/dev/vfio/372:/dev/vfio/372\" - \"--device=/dev/vfio/373:/dev/vfio/373\" - \"--device=/dev/vfio/374:/dev/vfio/374\" - \"--device=/dev/vfio/375:/dev/vfio/375\" - \"--device=/dev/vfio/376:/dev/vfio/376\" - \"--device=/dev/vfio/377:/dev/vfio/377\" - \"--device=/dev/vfio/378:/dev/vfio/378\" - \"--device=/dev/vfio/379:/dev/vfio/379\" - \"--device=/dev/vfio/380:/dev/vfio/380\" - \"--device=/dev/vfio/381:/dev/vfio/381\" - \"--device=/dev/vfio/382:/dev/vfio/382\" - \"--device=/dev/vfio/383:/dev/vfio/383\" - \"--device=/dev/vfio/384:/dev/vfio/384\" - \"--device=/dev/vfio/385:/dev/vfio/385\" - \"--device=/dev/vfio/386:/dev/vfio/386\" - \"--device=/dev/vfio/387:/dev/vfio/387\" - \"--device=/dev/vfio/388:/dev/vfio/388\" - \"--device=/dev/vfio/389:/dev/vfio/389\" - \"--device=/dev/vfio/390:/dev/vfio/390\" - \"--device=/dev/vfio/391:/dev/vfio/391\" - \"--device=/dev/vfio/392:/dev/vfio/392\" - \"--device=/dev/vfio/393:/dev/vfio/393\" - \"--device=/dev/vfio/394:/dev/vfio/394\" - \"--device=/dev/vfio/395:/dev/vfio/395\" - \"--device=/dev/vfio/396:/dev/vfio/396\" - \"--device=/dev/vfio/devices/vfio0:/dev/vfio/devices/vfio0\" - \"--device=/dev/vfio/devices/vfio1:/dev/vfio/devices/vfio1\" - \"--device=/dev/vfio/devices/vfio2:/dev/vfio/devices/vfio2\" - \"--device=/dev/vfio/devices/vfio3:/dev/vfio/devices/vfio3\" - \"--device=/dev/vfio/devices/vfio4:/dev/vfio/devices/vfio4\" - \"--device=/dev/vfio/devices/vfio5:/dev/vfio/devices/vfio5\" - 
\"--device=/dev/vfio/devices/vfio6:/dev/vfio/devices/vfio6\" - \"--device=/dev/vfio/devices/vfio7:/dev/vfio/devices/vfio7\" - \"--device=/dev/vfio/devices/vfio8:/dev/vfio/devices/vfio8\" - \"--device=/dev/vfio/devices/vfio9:/dev/vfio/devices/vfio9\" - \"--device=/dev/vfio/devices/vfio10:/dev/vfio/devices/vfio10\" - \"--device=/dev/vfio/devices/vfio11:/dev/vfio/devices/vfio11\" - \"--device=/dev/vfio/devices/vfio12:/dev/vfio/devices/vfio12\" - \"--device=/dev/vfio/devices/vfio13:/dev/vfio/devices/vfio13\" - \"--device=/dev/vfio/devices/vfio14:/dev/vfio/devices/vfio14\" - \"--device=/dev/vfio/devices/vfio15:/dev/vfio/devices/vfio15\" - \"--device=/dev/vfio/devices/vfio16:/dev/vfio/devices/vfio16\" - \"--device=/dev/vfio/devices/vfio17:/dev/vfio/devices/vfio17\" - \"--device=/dev/vfio/devices/vfio18:/dev/vfio/devices/vfio18\" - \"--device=/dev/vfio/devices/vfio19:/dev/vfio/devices/vfio19\" - \"--device=/dev/vfio/devices/vfio20:/dev/vfio/devices/vfio20\" - \"--device=/dev/vfio/devices/vfio21:/dev/vfio/devices/vfio21\" - \"--device=/dev/vfio/devices/vfio22:/dev/vfio/devices/vfio22\" - \"--device=/dev/vfio/devices/vfio23:/dev/vfio/devices/vfio23\" - \"--device=/dev/vfio/devices/vfio24:/dev/vfio/devices/vfio24\" - \"--device=/dev/vfio/devices/vfio25:/dev/vfio/devices/vfio25\" - \"--device=/dev/vfio/devices/vfio26:/dev/vfio/devices/vfio26\" - \"--device=/dev/vfio/devices/vfio27:/dev/vfio/devices/vfio27\" - \"--device=/dev/vfio/devices/vfio28:/dev/vfio/devices/vfio28\" - \"--device=/dev/vfio/devices/vfio29:/dev/vfio/devices/vfio29\" - \"--device=/dev/vfio/devices/vfio30:/dev/vfio/devices/vfio30\" - \"--device=/dev/vfio/devices/vfio31:/dev/vfio/devices/vfio31\" - \"--device=/dev/vfio/devices/vfio32:/dev/vfio/devices/vfio32\" - \"--device=/dev/vfio/devices/vfio33:/dev/vfio/devices/vfio33\" - \"--device=/dev/vfio/devices/vfio34:/dev/vfio/devices/vfio34\" - \"--device=/dev/vfio/devices/vfio35:/dev/vfio/devices/vfio35\" - \"--device=/dev/vfio/devices/vfio36:/dev/vfio/devices/vfio36\" - \"--device=/dev/vfio/devices/vfio37:/dev/vfio/devices/vfio37\" - \"--device=/dev/vfio/devices/vfio38:/dev/vfio/devices/vfio38\" - \"--device=/dev/vfio/devices/vfio39:/dev/vfio/devices/vfio39\" - \"--device=/dev/vfio/devices/vfio40:/dev/vfio/devices/vfio40\" - \"--device=/dev/vfio/devices/vfio41:/dev/vfio/devices/vfio41\" - \"--device=/dev/vfio/devices/vfio42:/dev/vfio/devices/vfio42\" - \"--device=/dev/vfio/devices/vfio43:/dev/vfio/devices/vfio43\" - \"--device=/dev/vfio/devices/vfio44:/dev/vfio/devices/vfio44\" - \"--device=/dev/vfio/devices/vfio45:/dev/vfio/devices/vfio45\" - \"--device=/dev/vfio/devices/vfio46:/dev/vfio/devices/vfio46\" - \"--device=/dev/vfio/devices/vfio47:/dev/vfio/devices/vfio47\" - \"--device=/dev/vfio/devices/vfio48:/dev/vfio/devices/vfio48\" - \"--device=/dev/vfio/devices/vfio49:/dev/vfio/devices/vfio49\" - \"--device=/dev/vfio/devices/vfio50:/dev/vfio/devices/vfio50\" - \"--device=/dev/vfio/devices/vfio51:/dev/vfio/devices/vfio51\" - \"--device=/dev/vfio/devices/vfio52:/dev/vfio/devices/vfio52\" - \"--device=/dev/vfio/devices/vfio53:/dev/vfio/devices/vfio53\" - \"--device=/dev/vfio/devices/vfio54:/dev/vfio/devices/vfio54\" - \"--device=/dev/vfio/devices/vfio55:/dev/vfio/devices/vfio55\" - \"--device=/dev/vfio/devices/vfio56:/dev/vfio/devices/vfio56\" - \"--device=/dev/vfio/devices/vfio57:/dev/vfio/devices/vfio57\" - \"--device=/dev/vfio/devices/vfio58:/dev/vfio/devices/vfio58\" - \"--device=/dev/vfio/devices/vfio59:/dev/vfio/devices/vfio59\" - 
\"--device=/dev/vfio/devices/vfio60:/dev/vfio/devices/vfio60\" - \"--device=/dev/vfio/devices/vfio61:/dev/vfio/devices/vfio61\" - \"--device=/dev/vfio/devices/vfio62:/dev/vfio/devices/vfio62\" - \"--device=/dev/vfio/devices/vfio63:/dev/vfio/devices/vfio63\" networks: - 172.17.8.0/24 spec: rgw_frontend_port: 8000",
"plugin crypto accelerator = crypto_qat",
"qat compressor enabled=true",
"[user@client ~]USD vi bucket-encryption.json",
"{ \"Rules\": [ { \"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] }",
"aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api put-bucket-encryption --bucket pass:q[_BUCKET_NAME_] --server-side-encryption-configuration pass:q[_file://PATH_TO_BUCKET_ENCRYPTION_CONFIGURATION_FILE/BUCKET_ENCRYPTION_CONFIGURATION_FILE.json_]",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-encryption --bucket testbucket --server-side-encryption-configuration file://bucket-encryption.json",
"aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api get-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --profile ceph --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket { \"ServerSideEncryptionConfiguration\": { \"Rules\": [ { \"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] } }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-encryption --bucket testbucket",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket An error occurred (ServerSideEncryptionConfigurationNotFoundError) when calling the GetBucketEncryption operation: The server side encryption configuration was not found",
"frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check",
"frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto https here we set the incoming HTTPS port on the load balancer (eg : 443) http-request set-header X-Forwarded-Port 443 default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check",
"ceph config set client.rgw rgw_trust_forwarded_https true",
"systemctl enable haproxy systemctl start haproxy",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=0",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=1",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=2",
"vault policy write rgw-kv-policy -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF",
"vault policy write rgw-transit-policy -<<EOF path \"transit/keys/*\" { capabilities = [ \"create\", \"update\" ] denied_parameters = {\"exportable\" = [], \"allow_plaintext_backup\" = [] } } path \"transit/keys/*\" { capabilities = [\"read\", \"delete\"] } path \"transit/keys/\" { capabilities = [\"list\"] } path \"transit/keys/+/rotate\" { capabilities = [ \"update\" ] } path \"transit/*\" { capabilities = [ \"update\" ] } EOF",
"vault policy write old-rgw-transit-policy -<<EOF path \"transit/export/encryption-key/*\" { capabilities = [\"read\"] } EOF",
"ceph config set client.rgw rgw_crypt_s3_kms_backend vault",
"ceph config set client.rgw rgw_crypt_vault_auth agent ceph config set client.rgw rgw_crypt_vault_addr http:// VAULT_SERVER :8100",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.role_id > PATH_TO_FILE",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.secret_id > PATH_TO_FILE",
"pid_file = \"/run/kv-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/root/vault_configs/kv-agent-role-id\" secret_id_file_path =\"/root/vault_configs/kv-agent-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"http://10.8.128.9:8200\" }",
"/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl",
"ceph config set client.rgw rgw_crypt_vault_secret_engine kv",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit",
"ceph config set client.rgw rgw_crypt_vault_namespace testnamespace1",
"ceph config set client.rgw rgw_crypt_vault_prefix /v1/secret/data",
"ceph config set client.rgw rgw_crypt_vault_prefix /v1/transit/export/encryption-key",
"http://vault-server:8200/v1/transit/export/encryption-key",
"systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"cephadm shell",
"ceph config set client.rgw rgw_crypt_sse_s3_backend vault",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http:// VAULT_AGENT : VAULT_AGENT_PORT",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http://vaultagent:8100",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-role-id > PATH_TO_FILE",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-secret-id > PATH_TO_FILE",
"pid_file = \"/run/rgw-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/usr/local/etc/vault/.rgw-ap-role-id\" secret_id_file_path =\"/usr/local/etc/vault/.rgw-ap-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"https://vaultserver:8200\" }",
"/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine kv",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine transit",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_namespace company/testnamespace1",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/secret/data",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/transit",
"http://vaultserver:8200/v1/transit",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert PATH_TO_CA_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert PATH_TO_CLIENT_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey PATH_TO_PRIVATE_KEY",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert /etc/ceph/vault.ca ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert /etc/ceph/vault.crt ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey /etc/ceph/vault.key",
"systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"vault secrets enable -path secret kv-v2",
"vault kv put secret/ PROJECT_NAME / BUCKET_NAME key=USD(openssl rand -base64 32)",
"vault kv put secret/myproject/mybucketkey key=USD(openssl rand -base64 32) ====== Metadata ====== Key Value --- ---- created_time 2020-02-21T17:01:09.095824999Z deletion_time n/a destroyed false version 1",
"vault secrets enable transit",
"vault write -f transit/keys/ BUCKET_NAME exportable=true",
"vault write -f transit/keys/mybucketkey exportable=true",
"vault read transit/export/encryption-key/ BUCKET_NAME / VERSION_NUMBER",
"vault read transit/export/encryption-key/mybucketkey/1 Key Value --- ----- keys map[1:-gbTI9lNpqv/V/2lDcmH2Nq1xKn6FPDWarCmFM2aNsQ=] name mybucketkey type aes256-gcm96",
"[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id myproject/mybucketkey",
"[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256",
"[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey",
"[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256",
"[user@host01 ~]USD SEED=USD(head -10 /dev/urandom | sha512sum | cut -b 1-30)",
"[user@host01 ~]USD echo USDSEED 492dedb20cf51d1405ef6a1316017e",
"radosgw-admin mfa create --uid= USERID --totp-serial= SERIAL --totp-seed= SEED --totp-seed-type= SEED_TYPE --totp-seconds= TOTP_SECONDS --totp-window= TOTP_WINDOW",
"radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=492dedb20cf51d1405ef6a1316017e",
"radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN",
"radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok",
"radosgw-admin mfa resync --uid= USERID --totp-serial= SERIAL --totp-pin= PREVIOUS_PIN --totp=pin= CURRENT_PIN",
"radosgw-admin mfa resync --uid=johndoe --totp-serial=MFAtest --totp-pin=802021 --totp-pin=439996",
"radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN",
"radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok",
"radosgw-admin mfa list --uid= USERID",
"radosgw-admin mfa list --uid=johndoe { \"entries\": [ { \"type\": 2, \"id\": \"MFAtest\", \"seed\": \"492dedb20cf51d1405ef6a1316017e\", \"seed_type\": \"hex\", \"time_ofs\": 0, \"step_size\": 30, \"window\": 2 } ] }",
"radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa remove --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa remove --uid=johndoe --totp-serial=MFAtest",
"radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa get --uid=johndoe --totp-serial=MFAtest MFA serial id not found",
"radosgw-admin zonegroup --rgw-zonegroup= ZONE_GROUP_NAME get > FILE_NAME .json",
"radosgw-admin zonegroup --rgw-zonegroup=default get > zonegroup.json",
"{ \"name\": \"default\", \"api_name\": \"\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"master_zone\": \"\", \"zones\": [{ \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 5 }], \"placement_targets\": [{ \"name\": \"default-placement\", \"tags\": [] }, { \"name\": \"special-placement\", \"tags\": [] }], \"default_placement\": \"default-placement\" }",
"radosgw-admin zonegroup set < zonegroup.json",
"radosgw-admin zone get > zone.json",
"{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [{ \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\", \"data_extra_pool\": \".rgw.buckets.extra\" } }, { \"key\": \"special-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets.special\", \"data_extra_pool\": \".rgw.buckets.extra\" } }] }",
"radosgw-admin zone set < zone.json",
"radosgw-admin period update --commit",
"curl -i http://10.0.0.1/swift/v1/TestContainer/file.txt -X PUT -H \"X-Storage-Policy: special-placement\" -H \"X-Auth-Token: AUTH_rgwtxxxxxx\"",
"radosgw-admin zonegroup placement add --rgw-zonegroup=\"default\" --placement-id=\"indexless-placement\"",
"radosgw-admin zone placement add --rgw-zone=\"default\" --placement-id=\"indexless-placement\" --data-pool=\"default.rgw.buckets.data\" --index-pool=\"default.rgw.buckets.index\" --data_extra_pool=\"default.rgw.buckets.non-ec\" --placement-index-type=\"indexless\"",
"radosgw-admin zonegroup placement default --placement-id \"indexless-placement\"",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"ln: failed to access '/tmp/rgwrbi-object-list.4053207': No such file or directory",
"/usr/bin/rgw-restore-bucket-index -b bucket-large-1 -p local-zone.rgw.buckets.data marker is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 bucket_id is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 number of bucket index shards is 5 data pool is local-zone.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. The list of objects that we will attempt to restore can be found in \"/tmp/rgwrbi-object-list.49946\". Please review the object names in that file (either below or in another window/terminal) before proceeding. Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: view Viewing Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: proceed! Proceeding NOTICE: Bucket stats are currently incorrect. They can be restored with the following command after 2 minutes: radosgw-admin bucket list --bucket=bucket-large-1 --allow-unordered --max-entries=1073741824 Would you like to take the time to recalculate bucket stats now? [yes/no] yes Done real 2m16.530s user 0m1.082s sys 0m0.870s",
"time rgw-restore-bucket-index --proceed serp-bu-ver-1 default.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. marker is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 bucket_id is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 Error: this bucket appears to be versioned, and this tool cannot work with versioned buckets.",
"Bucket _BUCKET_NAME_ already has too many log generations (4) from previous reshards that peer zones haven't finished syncing. Resharding is not recommended until the old generations sync, but you can force a reshard with `--yes-i-really-mean-it`.",
"number of objects expected in a bucket / 100,000",
"ceph config set client.rgw rgw_override_bucket_index_max_shards VALUE",
"ceph config set client.rgw rgw_override_bucket_index_max_shards 12",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"number of objects expected in a bucket / 100,000",
"radosgw-admin zonegroup get > zonegroup.json",
"bucket_index_max_shards = VALUE",
"bucket_index_max_shards = 12",
"radosgw-admin zonegroup set < zonegroup.json",
"radosgw-admin period update --commit",
"radosgw-admin reshard status --bucket BUCKET_NAME",
"radosgw-admin reshard status --bucket data",
"radosgw-admin sync status",
"radosgw-admin period get",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_reshard_num_logs 23",
"radosgw-admin reshard add --bucket BUCKET --num-shards NUMBER",
"radosgw-admin reshard add --bucket data --num-shards 10",
"radosgw-admin reshard list",
"radosgw-admin bucket layout --bucket data { \"layout\": { \"resharding\": \"None\", \"current_index\": { \"gen\": 1, \"layout\": { \"type\": \"Normal\", \"normal\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } }, \"logs\": [ { \"gen\": 0, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 0, \"layout\": { \"num_shards\": 11, \"hash_type\": \"Mod\" } } } }, { \"gen\": 1, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 1, \"layout\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } } } ] } }",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"radosgw-admin reshard process",
"radosgw-admin reshard cancel --bucket BUCKET",
"radosgw-admin reshard cancel --bucket data",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"radosgw-admin sync status",
"radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --enable-feature=resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup=us --enable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --enable-feature=resharding",
"radosgw-admin zone modify --rgw-zone=us-east --enable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin period get \"zones\": [ { \"id\": \"505b48db-6de0-45d5-8208-8c98f7b1278d\", \"name\": \"us_east\", \"endpoints\": [ \"http://10.0.208.11:8080\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"resharding\" ] \"default_placement\": \"default-placement\", \"realm_id\": \"26cf6f23-c3a0-4d57-aae4-9b0010ee55cc\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ]",
"radosgw-admin sync status realm 26cf6f23-c3a0-4d57-aae4-9b0010ee55cc (usa) zonegroup 33a17718-6c77-493e-99fe-048d3110a06e (us) zone 505b48db-6de0-45d5-8208-8c98f7b1278d (us_east) zonegroup features enabled: resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --disable-feature=resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup=us --disable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin bi list --bucket= BUCKET > BUCKET .list.backup",
"radosgw-admin bi list --bucket=data > data.list.backup",
"radosgw-admin bucket reshard --bucket= BUCKET --num-shards= NUMBER",
"radosgw-admin bucket reshard --bucket=data --num-shards=100",
"radosgw-admin reshard status --bucket bucket",
"radosgw-admin reshard status --bucket data",
"radosgw-admin reshard stale-instances list",
"radosgw-admin reshard stale-instances rm",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"[root@host01 ~] radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib { \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"data_pool\": \"default.rgw.buckets.data\", \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0, \"compression\": \"zlib\" } } ], }",
"radosgw-admin bucket stats --bucket= BUCKET_NAME { \"usage\": { \"rgw.main\": { \"size\": 1075028, \"size_actual\": 1331200, \"size_utilized\": 592035, \"size_kb\": 1050, \"size_kb_actual\": 1300, \"size_kb_utilized\": 579, \"num_objects\": 104 } }, }",
"radosgw-admin user <create|modify|info|rm|suspend|enable|check|stats> <--uid= USER_ID |--subuser= SUB_USER_NAME > [other-options]",
"radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --access_key TESTER --secret test123 user create",
"radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --subuser tester:swift --key-type swift --access full subuser create radosgw-admin key create --subuser 'testxUSDtester:swift' --key-type swift --secret test123",
"radosgw-admin user create --uid= USER_ID [--key-type= KEY_TYPE ] [--gen-access-key|--access-key= ACCESS_KEY ] [--gen-secret | --secret= SECRET_KEY ] [--email= EMAIL ] --display-name= DISPLAY_NAME",
"radosgw-admin user create --uid=janedoe --access-key=11BS02LGFB6AL6H1ADMW --secret=vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY [email protected] --display-name=Jane Doe",
"{ \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}",
"radosgw-admin subuser create --uid= USER_ID --subuser= SUB_USER_ID --access=[ read | write | readwrite | full ]",
"radosgw-admin subuser create --uid=janedoe --subuser=janedoe:swift --access=full { \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"janedoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}",
"radosgw-admin user info --uid=janedoe",
"radosgw-admin user info --uid=janedoe --tenant=test",
"radosgw-admin user modify --uid=janedoe --display-name=\"Jane E. Doe\"",
"radosgw-admin subuser modify --subuser=janedoe:swift --access=full",
"radosgw-admin user suspend --uid=johndoe",
"radosgw-admin user enable --uid=johndoe",
"radosgw-admin user rm --uid= USER_ID [--purge-keys] [--purge-data]",
"radosgw-admin user rm --uid=johndoe --purge-data",
"radosgw-admin subuser rm --subuser=johndoe:swift --purge-keys",
"radosgw-admin subuser rm --subuser= SUB_USER_ID",
"radosgw-admin subuser rm --subuser=johndoe:swift",
"radosgw-admin user rename --uid= CURRENT_USER_NAME --new-uid= NEW_USER_NAME",
"radosgw-admin user rename --uid=user1 --new-uid=user2 { \"user_id\": \"user2\", \"display_name\": \"user 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"user2\", \"access_key\": \"59EKHI6AI9F8WOW8JQZJ\", \"secret_key\": \"XH0uY3rKCUcuL73X0ftjXbZqUbk0cavD11rD8MsA\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin user rename --uid USER_NAME --new-uid NEW_USER_NAME --tenant TENANT",
"radosgw-admin user rename --uid=testUSDuser1 --new-uid=testUSDuser2 --tenant test 1000 objects processed in tvtester1. Next marker 80_tVtester1_99 2000 objects processed in tvtester1. Next marker 64_tVtester1_44 3000 objects processed in tvtester1. Next marker 48_tVtester1_28 4000 objects processed in tvtester1. Next marker 2_tVtester1_74 5000 objects processed in tvtester1. Next marker 14_tVtester1_53 6000 objects processed in tvtester1. Next marker 87_tVtester1_61 7000 objects processed in tvtester1. Next marker 6_tVtester1_57 8000 objects processed in tvtester1. Next marker 52_tVtester1_91 9000 objects processed in tvtester1. Next marker 34_tVtester1_74 9900 objects processed in tvtester1. Next marker 9_tVtester1_95 1000 objects processed in tvtester2. Next marker 82_tVtester2_93 2000 objects processed in tvtester2. Next marker 64_tVtester2_9 3000 objects processed in tvtester2. Next marker 48_tVtester2_22 4000 objects processed in tvtester2. Next marker 32_tVtester2_42 5000 objects processed in tvtester2. Next marker 16_tVtester2_36 6000 objects processed in tvtester2. Next marker 89_tVtester2_46 7000 objects processed in tvtester2. Next marker 70_tVtester2_78 8000 objects processed in tvtester2. Next marker 51_tVtester2_41 9000 objects processed in tvtester2. Next marker 33_tVtester2_32 9900 objects processed in tvtester2. Next marker 9_tVtester2_83 { \"user_id\": \"testUSDuser2\", \"display_name\": \"User 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testUSDuser2\", \"access_key\": \"user2\", \"secret_key\": \"123456789\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin user info --uid= NEW_USER_NAME",
"radosgw-admin user info --uid=user2",
"radosgw-admin user info --uid= TENANT USD USER_NAME",
"radosgw-admin user info --uid=testUSDuser2",
"radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret { \"user_id\": \"johndoe\", \"rados_uid\": 0, \"display_name\": \"John Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"subusers\": [ { \"id\": \"johndoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"QFAMEDSJP5DEKJO0DDXY\", \"secret_key\": \"iaSFLDVvDdQt6lkNzHyW4fPLZugBAI1g17LO0+87\"}], \"swift_keys\": [ { \"user\": \"johndoe:swift\", \"secret_key\": \"E9T2rUZNu2gxUjcwUBO8n\\/Ev4KX6\\/GprEuH4qhu1\"}]}",
"radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret",
"radosgw-admin user info --uid=johndoe",
"radosgw-admin user info --uid=johndoe { \"user_id\": \"johndoe\", \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"0555b35654ad1656d804\", \"secret_key\": \"h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==\" } ], }",
"radosgw-admin key rm --uid= USER_ID --access-key ACCESS_KEY",
"radosgw-admin key rm --uid=johndoe --access-key 0555b35654ad1656d804",
"radosgw-admin caps add --uid= USER_ID --caps= CAPS",
"--caps=\"[users|buckets|metadata|usage|zone]=[*|read|write|read, write]\"",
"radosgw-admin caps add --uid=johndoe --caps=\"users=*\"",
"radosgw-admin caps remove --uid=johndoe --caps={caps}",
"radosgw-admin role create --role-name= ROLE_NAME [--path==\" PATH_TO_FILE \"] [--assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT ]",
"radosgw-admin role create --role-name=S3Access1 --path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role get --role-name= ROLE_NAME",
"radosgw-admin role get --role-name=S3Access1 { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role list",
"radosgw-admin role list [ { \"RoleId\": \"85fb46dd-a88a-4233-96f5-4fb54f4353f7\", \"RoleName\": \"kvm-sts\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts\", \"CreateDate\": \"2022-09-13T11:55:09.39Z\", \"MaxSessionDuration\": 7200, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }, { \"RoleId\": \"9116218d-4e85-4413-b28d-cdfafba24794\", \"RoleName\": \"kvm-sts-1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts-1\", \"CreateDate\": \"2022-09-16T00:05:57.483Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]",
"radosgw-admin role-trust-policy modify --role-name= ROLE_NAME --assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT",
"radosgw-admin role-trust-policy modify --role-name=S3Access1 --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role-policy get --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role-policy get --role-name=S3Access1 --policy-name=Policy1 { \"Permission policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":[\\\"s3:*\\\"],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"}]}\" }",
"radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role policy delete --role-name=S3Access1 --policy-name=Policy1",
"radosgw-admin role delete --role-name= ROLE_NAME",
"radosgw-admin role delete --role-name=S3Access1",
"radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOCUMENT",
"radosgw-admin role-policy put --role-name=S3Access1 --policy-name=Policy1 --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}",
"radosgw-admin role-policy list --role-name= ROLE_NAME",
"radosgw-admin role-policy list --role-name=S3Access1 [ \"Policy1\" ]",
"radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role policy delete --role-name=S3Access1 --policy-name=Policy1",
"radosgw-admin role update --role-name= ROLE_NAME --max-session-duration=7200",
"radosgw-admin role update --role-name=test-sts-role --max-session-duration=7200",
"radosgw-admin role list [ { \"RoleId\": \"d4caf33f-caba-42f3-8bd4-48c84b4ea4d3\", \"RoleName\": \"test-sts-role\", \"Path\": \"/\", \"Arn\": \"arn:aws:iam:::role/test-role\", \"CreateDate\": \"2022-09-07T20:01:15.563Z\", \"MaxSessionDuration\": 7200, <<<<<< \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]",
"radosgw-admin quota set --quota-scope=user --uid= USER_ID [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]",
"radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024",
"radosgw-admin quota enable --quota-scope=user --uid= USER_ID",
"radosgw-admin quota disable --quota-scope=user --uid= USER_ID",
"radosgw-admin quota set --uid= USER_ID --quota-scope=bucket --bucket= BUCKET_NAME [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]",
"radosgw-admin quota enable --quota-scope=bucket --uid= USER_ID",
"radosgw-admin quota disable --quota-scope=bucket --uid= USER_ID",
"radosgw-admin user info --uid= USER_ID",
"radosgw-admin user info --uid= USER_ID --tenant= TENANT",
"radosgw-admin user stats --uid= USER_ID --sync-stats",
"radosgw-admin user stats --uid= USER_ID",
"radosgw-admin global quota get",
"radosgw-admin global quota set --quota-scope bucket --max-objects 1024 radosgw-admin global quota enable --quota-scope bucket",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket link --bucket= ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= USER_ID",
"radosgw-admin bucket link --bucket=s3bucket1 --bucket-new-name=s3newb --uid=testuser",
"radosgw-admin bucket link --bucket= tenant / ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= TENANT USD USER_ID",
"radosgw-admin bucket link --bucket=test/s3bucket1 --bucket-new-name=s3newb --uid=testUSDtestuser",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"s3newb\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket rm --bucket= BUCKET_NAME",
"radosgw-admin bucket rm --bucket=s3bucket1",
"radosgw-admin bucket rm --bucket= BUCKET --purge-objects --bypass-gc",
"radosgw-admin bucket rm --bucket=s3bucket1 --purge-objects --bypass-gc",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket link --uid= USER --bucket= BUCKET",
"radosgw-admin bucket link --uid=user2 --bucket=data",
"radosgw-admin bucket list --uid=user2 [ \"data\" ]",
"radosgw-admin bucket chown --uid= user --bucket= bucket",
"radosgw-admin bucket chown --uid=user2 --bucket=data",
"radosgw-admin bucket list --bucket=data",
"radosgw-admin bucket link --bucket= CURRENT_TENANT / BUCKET --uid= NEW_TENANT USD USER",
"radosgw-admin bucket link --bucket=test/data --uid=test2USDuser2",
"radosgw-admin bucket list --uid=testUSDuser2 [ \"data\" ]",
"radosgw-admin bucket chown --bucket= NEW_TENANT / BUCKET --uid= NEW_TENANT USD USER",
"radosgw-admin bucket chown --bucket='test2/data' --uid='testUSDtuser2'",
"radosgw-admin bucket list --bucket=test2/data",
"ceph config set client.rgw rgw_keystone_implicit_tenants true",
"swift list",
"s3cmd ls",
"radosgw-admin bucket link --bucket=/ BUCKET --uid=' TENANT USD USER '",
"radosgw-admin bucket link --bucket=/data --uid='testUSDtenanted-user'",
"radosgw-admin bucket list --uid='testUSDtenanted-user' [ \"data\" ]",
"radosgw-admin bucket chown --bucket=' tenant / bucket name ' --uid=' tenant USD user '",
"radosgw-admin bucket chown --bucket='test/data' --uid='testUSDtenanted-user'",
"radosgw-admin bucket list --bucket=test/data",
"radosgw-admin bucket radoslist --bucket BUCKET_NAME",
"radosgw-admin bucket radoslist --bucket mybucket",
"head /usr/bin/rgw-orphan-list",
"mkdir orphans",
"cd orphans",
"rgw-orphan-list",
"Available pools: .rgw.root default.rgw.control default.rgw.meta default.rgw.log default.rgw.buckets.index default.rgw.buckets.data rbd default.rgw.buckets.non-ec ma.rgw.control ma.rgw.meta ma.rgw.log ma.rgw.buckets.index ma.rgw.buckets.data ma.rgw.buckets.non-ec Which pool do you want to search for orphans?",
"rgw-orphan-list -h rgw-orphan-list POOL_NAME / DIRECTORY",
"rgw-orphan-list default.rgw.buckets.data /orphans 2023-09-12 08:41:14 ceph-host01 Computing delta 2023-09-12 08:41:14 ceph-host01 Computing results 10 potential orphans found out of a possible 2412 (0%). <<<<<<< orphans detected The results can be found in './orphan-list-20230912124113.out'. Intermediate files are './rados-20230912124113.intermediate' and './radosgw-admin-20230912124113.intermediate'. *** *** WARNING: This is EXPERIMENTAL code and the results should be used *** only with CAUTION! *** Done at 2023-09-12 08:41:14.",
"ls -l -rw-r--r--. 1 root root 770 Sep 12 03:59 orphan-list-20230912075939.out -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.error -rw-r--r--. 1 root root 248508 Sep 12 03:59 rados-20230912075939.intermediate -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.issues -rw-r--r--. 1 root root 0 Sep 12 03:59 radosgw-admin-20230912075939.error -rw-r--r--. 1 root root 247738 Sep 12 03:59 radosgw-admin-20230912075939.intermediate",
"cat ./orphan-list-20230912124113.out a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.0 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.1 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.2 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.3 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.4 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.5 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.6 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.7 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.8 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.9",
"rados -p POOL_NAME rm OBJECT_NAME",
"rados -p default.rgw.buckets.data rm myobject",
"radosgw-admin bucket check --bucket= BUCKET_NAME",
"radosgw-admin bucket check --bucket=mybucket",
"radosgw-admin bucket check --fix --bucket= BUCKET_NAME",
"radosgw-admin bucket check --fix --bucket=mybucket",
"radosgw-admin topic list",
"radosgw-admin topic get --topic=topic1",
"radosgw-admin topic rm --topic=topic1",
"client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*', 's3:ObjectLifecycle:Expiration:*'] }]})",
"{ \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"String\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Enabled\"|\"Disabled\" }, \"Destination\": { \"Bucket\": \"BUCKET_NAME\" } } ] }",
"cat replication.json { \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Disabled\" }, \"Destination\": { \"Bucket\": \"testbucket\" } } ] }",
"aws --endpoint-url=RADOSGW_ENDPOINT_URL s3api put-bucket-replication --bucket BUCKET_NAME --replication-configuration file://REPLICATION_CONFIIRATION_FILE.json",
"aws --endpoint-url=http://host01:80 s3api put-bucket-replication --bucket testbucket --replication-configuration file://replication.json",
"radosgw-admin sync policy get --bucket BUCKET_NAME",
"radosgw-admin sync policy get --bucket testbucket { \"groups\": [ { \"id\": \"s3-bucket-replication:disabled\", \"data_flow\": {}, \"pipes\": [], \"status\": \"allowed\" }, { \"id\": \"s3-bucket-replication:enabled\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"testbucket\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": {}, \"dest\": {}, \"priority\": 1, \"mode\": \"user\", \"user\": \"s3cmd\" } } ], \"status\": \"enabled\" } ] }",
"aws s3api get-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL",
"aws s3api get-bucket-replication --bucket testbucket --endpoint-url=http://host01:80 { \"ReplicationConfiguration\": { \"Role\": \"\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"Destination\": { Bucket\": \"testbucket\" } } ] } }",
"aws s3api delete-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL",
"aws s3api delete-bucket-replication --bucket testbucket --endpoint-url=http://host01:80",
"radosgw-admin sync policy get --bucket=BUCKET_NAME",
"radosgw-admin sync policy get --bucket=testbucket",
"cat user_policy.json { \"Version\":\"2012-10-17\", \"Statement\": { \"Effect\":\"Deny\", \"Action\": [ \"s3:PutReplicationConfiguration\", \"s3:GetReplicationConfiguration\", \"s3:DeleteReplicationConfiguration\" ], \"Resource\": \"arn:aws:s3:::*\", } }",
"aws --endpoint-url=ENDPOINT_URL iam put-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --policy-document POLICY_DOCUMENT_PATH",
"aws --endpoint-url=http://host01:80 iam put-user-policy --user-name newuser1 --policy-name userpolicy --policy-document file://user_policy.json",
"aws --endpoint-url=ENDPOINT_URL iam get-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --region us",
"aws --endpoint-url=http://host01:80 iam get-user-policy --user-name newuser1 --policy-name userpolicy --region us",
"[user@client ~]USD vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / LIFECYCLE_CONFIGURATION_FILE .json",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-lifecycle --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-lifecycle --bucket testbucket",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket",
"[user@client ~]USD vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" }, { \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\" } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / LIFECYCLE_CONFIGURATION_FILE .json",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json",
"aws --endpointurl= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"[user@client ~]USD aws -endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\", \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\" }, { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"docs/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} }, \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"DocsExpiration\", \"rule\": { \"id\": \"DocsExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"30\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"docs/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } }, { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }",
"cephadm shell",
"radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" }, { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" } ]",
"radosgw-admin lc process --bucket= BUCKET_NAME",
"radosgw-admin lc process --bucket=testbucket1",
"radosgw-admin lc process",
"radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 17 Mar 2022 21:48:50 GMT\", \"status\" : \"COMPLETE\" } { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 17 Mar 2022 20:38:50 GMT\", \"status\" : \"COMPLETE\" } ]",
"cephadm shell",
"ceph config set client.rgw rgw_lifecycle_work_time %D:%D-%D:%D",
"ceph config set client.rgw rgw_lifecycle_work_time 06:00-08:00",
"ceph config get client.rgw rgw_lifecycle_work_time 06:00-08:00",
"ceph osd pool create POOL_NAME",
"ceph osd pool create test.hot.data",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class hot.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"hot.test\" ] } }",
"radosgw-admin zone placement add --rgw-zone default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL",
"radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class hot.test --data-pool test.hot.data { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"test_zone.rgw.buckets.index\", \"storage_classes\": { \"STANDARD\": { \"data_pool\": \"test.hot.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\", } }, \"data_extra_pool\": \"\", \"index_type\": 0 }",
"ceph osd pool application enable POOL_NAME rgw",
"ceph osd pool application enable test.hot.data rgw enabled application 'rgw' on pool 'test.hot.data'",
"aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080",
"aws --endpoint=http://10.0.0.80:8080 s3api put-object --bucket testbucket10 --key compliance-upload --body /root/test2.txt",
"ceph osd pool create POOL_NAME",
"ceph osd pool create test.cold.data",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class cold.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"cold.test\" ] } }",
"radosgw-admin zone placement add --rgw-zone default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL",
"radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class cold.test --data-pool test.cold.data",
"ceph osd pool application enable POOL_NAME rgw",
"ceph osd pool application enable test.cold.data rgw enabled application 'rgw' on pool 'test.cold.data'",
"radosgw-admin zonegroup get { \"id\": \"3019de59-ddde-4c5c-b532-7cdd29de09a1\", \"name\": \"default\", \"api_name\": \"default\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"zones\": [ { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"hot.test\", \"cold.test\", \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"\", \"sync_policy\": { \"groups\": [] } }",
"radosgw-admin zone get { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"domain_root\": \"default.rgw.meta:root\", \"control_pool\": \"default.rgw.control\", \"gc_pool\": \"default.rgw.log:gc\", \"lc_pool\": \"default.rgw.log:lc\", \"log_pool\": \"default.rgw.log\", \"intent_log_pool\": \"default.rgw.log:intent\", \"usage_log_pool\": \"default.rgw.log:usage\", \"roles_pool\": \"default.rgw.meta:roles\", \"reshard_pool\": \"default.rgw.log:reshard\", \"user_keys_pool\": \"default.rgw.meta:users.keys\", \"user_email_pool\": \"default.rgw.meta:users.email\", \"user_swift_pool\": \"default.rgw.meta:users.swift\", \"user_uid_pool\": \"default.rgw.meta:users.uid\", \"otp_pool\": \"default.rgw.otp\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"storage_classes\": { \"cold.test\": { \"data_pool\": \"test.cold.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\" }, \"STANDARD\": { \"data_pool\": \"default.rgw.buckets.data\" } }, \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0 } } ], \"realm_id\": \"\", \"notif_pool\": \"default.rgw.log:notif\" }",
"aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080",
"radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"STANDARD\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }",
"vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 5, \"StorageClass\": \"hot.test\" }, { \"Days\": 20, \"StorageClass\": \"cold.test\" } ], \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\" } ] }",
"aws s3api put-bucket-lifecycle-configuration --bucket testbucket10 --lifecycle-configuration file://lifecycle.json",
"aws s3api get-bucket-lifecycle-configuration --bucket testbucke10 { \"Rules\": [ { \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 20, \"StorageClass\": \"cold.test\" }, { \"Days\": 5, \"StorageClass\": \"hot.test\" } ] } ] }",
"radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"cold.test\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }",
"aws --endpoint=http:// RGW_PORT :8080 s3api create-bucket --bucket BUCKET_NAME --object-lock-enabled-for-bucket",
"aws --endpoint=http://rgw.ceph.com:8080 s3api create-bucket --bucket worm-bucket --object-lock-enabled-for-bucket",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object-lock-configuration --bucket BUCKET_NAME --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \" RETENTION_MODE \", \"Days\": NUMBER_OF_DAYS }}}'",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-lock-configuration --bucket worm-bucket --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \"COMPLIANCE\", \"Days\": 10 }}}'",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body TEST_FILE",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object --bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body test.dd { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body PATH",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object --bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body /etc/fstab { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-legal-hold --bucket worm-bucket --key compliance-upload --legal-hold Status=ON",
"aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket",
"aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket { \"Versions\": [ { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"Size\": 288, \"StorageClass\": \"STANDARD\", \"Key\": \"hosts\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"IsLatest\": true, \"LastModified\": \"2022-06-17T08:51:17.392000+00:00\", \"Owner\": { \"DisplayName\": \"Test User in Tenant test\", \"ID\": \"testUSDtest.user\" } } } ] }",
"aws --endpoint=http://rgw.ceph.com:8080 s3api get-object --bucket worm-bucket --key compliance-upload --version-id 'IGOU.vdIs3SPduZglrB-RBaK.sfXpcd' download.1 { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2022-06-17T08:51:17+00:00\", \"ContentLength\": 288, \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"ObjectLockMode\": \"COMPLIANCE\", \"ObjectLockRetainUntilDate\": \"2023-06-17T08:51:17+00:00\" }",
"radosgw-admin usage show --uid=johndoe --start-date=2022-06-01 --end-date=2022-07-01",
"radosgw-admin usage show --show-log-entries=false",
"radosgw-admin usage trim --start-date=2022-06-01 --end-date=2022-07-31 radosgw-admin usage trim --uid=johndoe radosgw-admin usage trim --uid=johndoe --end-date=2021-04-31",
"radosgw-admin metadata get bucket: BUCKET_NAME radosgw-admin metadata get bucket.instance: BUCKET : BUCKET_ID radosgw-admin metadata get user: USER radosgw-admin metadata set user: USER",
"radosgw-admin metadata list radosgw-admin metadata list bucket radosgw-admin metadata list bucket.instance radosgw-admin metadata list user",
".bucket.meta.prodtx:test%25star:default.84099.6 .bucket.meta.testcont:default.4126.1 .bucket.meta.prodtx:testcont:default.84099.4 prodtx/testcont prodtx/test%25star testcont",
"prodtxUSDprodt test2.buckets prodtxUSDprodt.buckets test2",
"radosgw-admin ratelimit set --ratelimit-scope=user --uid= USER_ID [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin ratelimit set --ratelimit-scope=user --uid=testing --max-read-ops=1024 --max-write-bytes=10240",
"radosgw-admin ratelimit get --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit get --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }",
"radosgw-admin ratelimit enable --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }",
"radosgw-admin ratelimit disable --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit disable --ratelimit-scope=user --uid=testing",
"radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket= BUCKET_NAME [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=mybucket --max-read-ops=1024 --max-write-bytes=10240",
"radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }",
"radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }",
"radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket=mybucket",
"radosgw-admin global ratelimit get",
"radosgw-admin global ratelimit get { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"user_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"anonymous_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false } }",
"radosgw-admin global ratelimit set --ratelimit-scope=bucket [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope bucket --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=bucket",
"radosgw-admin global ratelimit enable --ratelimit-scope bucket",
"radosgw-admin global ratelimit set --ratelimit-scope=user [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope=user --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=user",
"radosgw-admin global ratelimit enable --ratelimit-scope=user",
"radosgw-admin global ratelimit set --ratelimit-scope=anonymous [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope=anonymous --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=anonymous",
"radosgw-admin global ratelimit enable --ratelimit-scope=anonymous",
"radosgw-admin gc list",
"radosgw-admin gc list",
"ceph config set client.rgw rgw_gc_max_concurrent_io 20 ceph config set client.rgw rgw_gc_max_trim_chunk 64",
"ceph config set client.rgw rgw_lc_max_worker 7",
"ceph config set client.rgw rgw_lc_max_wp_worker 7",
"radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]",
"radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }",
"radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3",
"radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=CLOUDTIER --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]",
"radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= AWS_ENDPOINT_URL , access_key= AWS_ACCESS_KEY ,secret= AWS_SECRET_KEY , target_path=\" TARGET_BUCKET_ON_AWS \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME",
"radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class CLOUDTIER --tier-config=endpoint=http://10.0.210.010:8080, access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"http://10.0.210.010:8080\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'",
"s3cmd --configure Enter new values or accept defaults in brackets with Enter. Refer to user manual for detailed description of all options. Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region [US]: Use \"s3.amazonaws.com\" for S3 Endpoint and not modify it to the target Amazon S3. S3 Endpoint [s3.amazonaws.com]: 10.0.210.78:80 Use \"%(bucket)s.s3.amazonaws.com\" to the target Amazon S3. \"%(bucket)s\" and \"%(location)s\" vars can be used if the target S3 system supports dns based buckets. DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 10.0.210.78:80 Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3 Encryption password: Path to GPG program [/usr/bin/gpg]: When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer Use HTTPS protocol [Yes]: No On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly HTTP Proxy server name: New settings: Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region: US S3 Endpoint: 10.0.210.78:80 DNS-style bucket+hostname:port template for accessing a bucket: 10.0.210.78:80 Encryption password: Path to GPG program: /usr/bin/gpg Use HTTPS protocol: False HTTP Proxy server name: HTTP Proxy server port: 0 Test access with supplied credentials? [Y/n] Y Please wait, attempting to list all buckets Success. Your access key and secret key worked fine :-) Now verifying that encryption works Not configured. Never mind. Save settings? [y/N] y Configuration saved to '/root/.s3cfg'",
"s3cmd mb s3:// NAME_OF_THE_BUCKET_FOR_S3",
"s3cmd mb s3://awstestbucket Bucket 's3://awstestbucket/' created",
"s3cmd put FILE_NAME s3:// NAME_OF_THE_BUCKET_ON_S3",
"s3cmd put test.txt s3://awstestbucket upload: 'test.txt' -> 's3://awstestbucket/test.txt' [1 of 1] 21 of 21 100% in 1s 16.75 B/s done",
"<LifecycleConfiguration> <Rule> <ID> RULE_NAME </ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days> DAYS </Days> <StorageClass> STORAGE_CLASS_NAME </StorageClass> </Transition> </Rule> </LifecycleConfiguration>",
"cat lc_cloud.xml <LifecycleConfiguration> <Rule> <ID>Archive all objects</ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days>2</Days> <StorageClass>CLOUDTIER</StorageClass> </Transition> </Rule> </LifecycleConfiguration>",
"s3cmd setlifecycle FILE_NAME s3:// NAME_OF_THE_BUCKET_FOR_S3",
"s3cmd setlifecycle lc_config.xml s3://awstestbucket s3://awstestbucket/: Lifecycle Policy updated",
"cephadm shell",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'",
"radosgw-admin lc list [ { \"bucket\": \":awstestbucket:552a3adb-39e0-40f6-8c84-00590ed70097.54639.1\", \"started\": \"Mon, 26 Sep 2022 18:32:07 GMT\", \"status\": \"COMPLETE\" } ]",
"[root@client ~]USD radosgw-admin bucket list [ \"awstestbucket\" ]",
"[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2022-08-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }",
"s3cmd ls s3://awstestbucket 2022-08-25 09:57 0 s3://awstestbucket/test.txt",
"s3cmd info s3://awstestbucket/test.txt s3://awstestbucket/test.txt (object): File size: 0 Last mod: Mon, 03 Aug 2022 09:57:49 GMT MIME type: text/plain Storage: CLOUDTIER MD5 sum: 991d2528bb41bb839d1a9ed74b710794 SSE: none Policy: none CORS: none ACL: test-user: FULL_CONTROL x-amz-meta-s3cmd-attrs: atime:1664790668/ctime:1664790668/gid:0/gname:root/md5:991d2528bb41bb839d1a9ed74b710794/mode:33188/mtime:1664790668/uid:0/uname:root",
"[client@client01 ~]USD aws configure AWS Access Key ID [****************6VVP]: AWS Secret Access Key [****************pXqy]: Default region name [us-east-1]: Default output format [json]:",
"[client@client01 ~]USD aws s3 ls s3://dfqe-bucket-01/awstest PRE awstestbucket/",
"[client@client01 ~]USD aws s3 cp s3://dfqe-bucket-01/awstestbucket/test.txt . download: s3://dfqe-bucket-01/awstestbucket/test.txt to ./test.txt",
"radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]",
"radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }",
"aws s3 --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default mb s3:// BUCKET_NAME",
"[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default mb s3://transition",
"radosgw-admin bucket stats --bucket transition { \"bucket\": \"transition\", \"num_shards\": 11, \"tenant\": \"\", \"zonegroup\": \"b29b0e50-1301-4330-99fc-5cdcfc349acf\", \"placement_rule\": \"default-placement\", \"explicit_placement\": { \"data_pool\": \"\", \"data_extra_pool\": \"\", \"index_pool\": \"\" },",
"[root@host01 ~]USD oc project openshift-storage [root@host01 ~]USD oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.6 True False 4d1h Cluster version is 4.11.6 [root@host01 ~]USD oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 4d Ready 2023-06-27T15:23:01Z 4.11.0",
"noobaa namespacestore create azure-blob az --account-key=' ACCOUNT_KEY ' --account-name=' ACCOUNT_NAME' --target-blob-container='_AZURE_CONTAINER_NAME '",
"[root@host01 ~]USD noobaa namespacestore create azure-blob az --account-key='iq3+6hRtt9bQ46QfHKQ0nSm2aP+tyMzdn8dBSRW4XWrFhY+1nwfqEj4hk2q66nmD85E/o5OrrUqo+AStkKwm9w==' --account-name='transitionrgw' --target-blob-container='mcgnamespace'",
"[root@host01 ~]USD noobaa bucketclass create namespace-bucketclass single aznamespace-bucket-class --resource az -n openshift-storage",
"noobaa obc create OBC_NAME --bucketclass aznamespace-bucket-class -n openshift-storage",
"[root@host01 ~]USD noobaa obc create rgwobc --bucketclass aznamespace-bucket-class -n openshift-storage",
"radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3",
"radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=AZURE --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\" ], \"tier_targets\": [ { \"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]",
"radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= ENDPOINT_URL , access_key= ACCESS_KEY ,secret= SECRET_KEY , target_path=\" TARGET_BUCKET_ON \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME",
"radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class AZURE --tier-config=endpoint=\"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart client.rgw.objectgwhttps.host02.udyllp Scheduled to restart client.rgw.objectgwhttps.host02.udyllp on host 'host02",
"cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \" STORAGE_CLASS \" } ], \"ID\": \" TRANSITION_ID \" } ] }",
"[root@host01 ~]USD cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \"AZURE\" } ], \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\" } ] }",
"aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default put-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default put-bucket-lifecycle-configuration --lifecycle-configuration file://transition.json --bucket transition",
"aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default get-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default get-bucket-lifecycle-configuration --bucket transition { \"Rules\": [ { \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \"AZURE\" } ] } ] }",
"radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\": \"UNINITIAL\" } ]",
"cephadm shell",
"ceph orch daemon CEPH_OBJECT_GATEWAY_DAEMON_NAME",
"ceph orch daemon restart rgw.objectgwhttps.host02.udyllp ceph orch daemon restart rgw.objectgw.host02.afwvyq ceph orch daemon restart rgw.objectgw.host05.ucpsrr",
"for i in 1 2 3 4 5 do aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default cp /etc/hosts s3://transition/transitionUSDi done",
"aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 10:24:01 3847 transition1 2023-06-30 10:24:04 3847 transition2 2023-06-30 10:24:07 3847 transition3 2023-06-30 10:24:09 3847 transition4 2023-06-30 10:24:13 3847 transition5",
"rados ls -p default.rgw.buckets.data | grep transition d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition1 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition4 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition2 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition3 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition5",
"radosgw-admin lc process",
"radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.170017.5\", \"started\": \"Mon, 30 Jun 2023-06-30 16:52:56 GMT\", \"status\": \"COMPLETE\" } ]",
"[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2023-06-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }",
"[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 17:52:56 0 transition1 2023-06-30 17:51:59 0 transition2 2023-06-30 17:51:59 0 transition3 2023-06-30 17:51:58 0 transition4 2023-06-30 17:51:59 0 transition5",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default head-object --key transition1 --bucket transition { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2023-06-31T16:52:56+00:00\", \"ContentLength\": 0, \"ETag\": \"\\\"46ecb42fd0def0e42f85922d62d06766\\\"\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"StorageClass\": \"CLOUDTIER\" }",
"radosgw-admin account create [--account-name={name}] [--account-id={id}] [--email={email}]",
"radosgw-admin account create --account-name=user1 --account-id=12345 [email protected]",
"radosgw-admin user create --uid={userid} --display-name={name} --account-id={accountid} --account-root --gen-secret --gen-access-key",
"radosgw-admin user create --uid=rootuser1 --display-name=\"Root User One\" --account-id=account123 --account-root --gen-secret --gen-access-key",
"radosgw-admin account rm --account-id={accountid}",
"radosgw-admin account rm --account-id=account123",
"radosgw-admin account stats --account-id={accountid} --sync-stats",
"{ \"account\": \"account123\", \"data_size\": 3145728000, # Total size in bytes (3 GB) \"num_objects\": 12000, # Total number of objects \"num_buckets\": 5, # Total number of buckets \"usage\": { \"total_size\": 3145728000, # Total size in bytes (3 GB) \"num_objects\": 12000 } }",
"radosgw-admin quota set --quota-scope=account --account-id={accountid} --max-size=10G radosgw-admin quota enable --quota-scope=account --account-id={accountid}",
"{ \"status\": \"OK\", \"message\": \"Quota enabled for account account123\" }",
"radosgw-admin quota set --quota-scope=bucket --account-id={accountid} --max-objects=1000000 radosgw-admin quota enable --quota-scope=bucket --account-id={accountid}",
"{ \"status\": \"OK\", \"message\": \"Quota enabled for bucket in account account123\" }",
"radosgw-admin quota set --quota-scope=account --account-id RGW12345678901234568 --max-buckets 10000 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } radosgw-admin quota enable --quota-scope=account --account-id RGW12345678901234568 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } radosgw-admin account get --account-id RGW12345678901234568 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } ceph versions { \"mon\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"mgr\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"osd\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 9 }, \"rgw\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"overall\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 18 } }",
"radosgw-admin user modify --uid={userid} --account-id={accountid}",
"{\"TopicConfigurations\": [{ \"Id\": \"ID1\", \"TopicArn\": \"arn:aws:sns:default::topic1\", \"Events\": [\"s3:ObjectCreated:*\"]}]}",
"{\"TopicConfigurations\": [{ \"Id\": \"ID1\", \"TopicArn\": \"arn:aws:sns:default:RGW00000000000000001:topic1\", \"Events\": [\"s3:ObjectCreated:*\"]}]}",
"radosgw-admin topic rm --topic topic1",
"radosgw-admin user modify --uid <user_ID> --account-id <Account_ID> --account-root",
"radosgw-admin user policy attach --uid <user_ID> --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess",
"radosgw-admin user modify --uid <user_ID> --account-root=0",
"radosgw-admin user create --uid= name --display-name=\" USER_NAME \"",
"radosgw-admin user create --uid=\"testuser\" --display-name=\"Jane Doe\" { \"user_id\": \"testuser\", \"display_name\": \"Jane Doe\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"CEP28KDIQXBKU4M15PDC\", \"secret_key\": \"MARoio8HFc8JxhEilES3dKFVj8tV3NOOYymihTLO\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full",
"radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret",
"radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms",
"dnf install python3-boto3",
"vi s3test.py",
"import boto3 endpoint = \"\" # enter the endpoint URL along with the port \"http:// URL : PORT \" access_key = ' ACCESS ' secret_key = ' SECRET ' s3 = boto3.client( 's3', endpoint_url=endpoint, aws_access_key_id=access_key, aws_secret_access_key=secret_key ) s3.create_bucket(Bucket='my-new-bucket') response = s3.list_buckets() for bucket in response['Buckets']: print(\"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'] ))",
"python3 s3test.py",
"my-new-bucket 2022-05-31T17:09:10.000Z",
"sudo yum install python-setuptools sudo easy_install pip sudo pip install --upgrade setuptools sudo pip install --upgrade python-swiftclient",
"swift -A http:// IP_ADDRESS : PORT /auth/1.0 -U testuser:swift -K ' SWIFT_SECRET_KEY ' list",
"swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list",
"my-new-bucket"
] |
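The Swift CLI test at the end of the listing above can also be run programmatically. The following is a minimal python-swiftclient sketch that is not part of the source document; the authentication URL, subuser name, and secret key are assumed to be the same example values used with the swift command above.

import swiftclient

# Assumed example values; replace with your Ceph Object Gateway address and Swift subuser key.
conn = swiftclient.Connection(
    authurl="http://10.10.143.116:80/auth/1.0",
    user="testuser:swift",
    key="244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA",
    auth_version="1.0",
)

# get_account() returns (account_headers, list_of_container_dicts).
headers, containers = conn.get_account()
for container in containers:
    print(container["name"])

With the example data above, this prints my-new-bucket, matching the output of the swift list command.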
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/object_gateway_guide/deployment
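As a complement to the aws s3api put-bucket-lifecycle-configuration and get-bucket-lifecycle-configuration calls listed above, here is a minimal boto3 sketch of the same 30-day transition rule. It is an illustrative assumption rather than part of the source document; the endpoint, CA bundle path, placeholder credentials, bucket name, and AZURE storage class are taken from the earlier example values.

import boto3

# Assumed example values; point these at your Ceph Object Gateway endpoint and bucket.
s3 = boto3.client(
    "s3",
    endpoint_url="https://host02.example.com:8043",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    verify="/etc/pki/ca-trust/source/anchors/myCA.pem",  # CA bundle for the gateway certificate
)

# Same rule as transition.json: move objects to the AZURE storage class after 30 days.
lifecycle = {
    "Rules": [
        {
            "ID": "Transition Objects in bucket to AZURE Blob after 30 days",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "AZURE"}],
        }
    ]
}

s3.put_bucket_lifecycle_configuration(Bucket="transition", LifecycleConfiguration=lifecycle)
print(s3.get_bucket_lifecycle_configuration(Bucket="transition")["Rules"])

After the rule is applied, radosgw-admin lc list should report the bucket, as in the output shown earlier in the listing.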
|
13.3.2. Related Books
|
13.3.2. Related Books Introduction to System Administration; Red Hat, Inc. - Available at http://www.redhat.com/docs/ and on the Documentation CD, this manual contains background information on storage management (including disk quotas) for new Red Hat Enterprise Linux system administrators.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/diskquotas_additional_resources-related_books
|
5.343. util-linux-ng
|
5.343. util-linux-ng 5.343.1. RHBA-2012:1427 - util-linux-ng bug fix update Updated util-linux-ng packages that fix a bug are now available for Red Hat Enterprise Linux 6. The util-linux-ng packages contain a set of low-level system utilities that are necessary for a Linux operating system to function. Bug Fix BZ# 864367 When the telnetd daemon was used to log in to a server, the login utility failed to update the /var/run/utmp file properly. Consequently, the line used for a session in /var/run/utmp was not reused, thus growing the file unnecessarily. A patch has been provided to address this issue and the login utility now always updates /var/run/utmp as expected. Users of util-linux-ng are advised to upgrade to these updated packages, which fix this bug. 5.343.2. RHBA-2012:0925 - util-linux-ng bug fix update Updated util-linux-ng packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The util-linux-ng packages contain a set of low-level system utilities that are necessary for a Linux operating system to function. Bug Fixes BZ# 588419 The console login time-out value was set to 60 seconds. This could cause the login to time out during the name lookup process on systems with broken DNS (Domain Name Service). With this update, the timeout value has been prolonged to 180 seconds to allow the login process to complete name lookups under these circumstances. BZ# 740163 The "fdisk -l" and "sfdisk -l" commands returned confusing warnings for unpartitioned devices similar to the following: With this update, the commands ignore unpartitioned devices and the problem no longer occurs. BZ# 785142 Previously, after the installation of the uuidd package, the uuidd daemon was not enabled by default. With this update, the underlying code has been modified and the uuidd daemon is enabled after installation as expected and can be started by the init script after reboot. BZ# 797888 Previously, the script command did not work correctly if called from the csh shell in the /etc/csh.login file. The child processes created by the script inherited the SIGTERM ignore property from csh and could not be terminated with the signal. With this update, the script resets the SIGTERM setting so that the shell is started with the default SIGTERM behavior and its children accept signals as expected. All users of util-linux-ng are advised to upgrade to these updated packages, which fix these bugs.
|
[
"Disk /dev/mapper/[volume name] doesn't contain a valid partition table"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/util-linux-ng
|
Preface
|
Preface Date of release: 2023-02-13
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/release_notes_for_eclipse_vert.x_4.3/pr01
|
Preface
|
Preface Depending on the type of your deployment, you can choose one of the following procedures to replace a storage device: For dynamically created storage clusters deployed on AWS, see: Section 1.1, "Replacing operational or failed storage devices on AWS user-provisioned infrastructure" . Section 1.2, "Replacing operational or failed storage devices on AWS installer-provisioned infrastructure" . For dynamically created storage clusters deployed on VMware, see Section 2.1, "Replacing operational or failed storage devices on VMware infrastructure" . For dynamically created storage clusters deployed on Microsoft Azure, see Section 3.1, "Replacing operational or failed storage devices on Azure installer-provisioned infrastructure" . For storage clusters deployed using local storage devices, see: Section 5.1, "Replacing operational or failed storage devices on clusters backed by local storage devices" . Section 5.2, "Replacing operational or failed storage devices on IBM Power" . Section 5.3, "Replacing operational or failed storage devices on IBM Z or IBM LinuxONE infrastructure" . Note OpenShift Data Foundation does not support heterogeneous OSD sizes.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_devices/preface-replacing-devices
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility/proc_providing-feedback-on-red-hat-documentation_converting-from-a-linux-distribution-to-rhel
|
7.2. Block I/O Tuning
|
7.2. Block I/O Tuning The virsh blkiotune command allows administrators to set or display a guest virtual machine's block I/O parameters manually in the <blkio> element in the guest XML configuration. To display current <blkio> parameters for a virtual machine: To set a virtual machine's <blkio> parameters, refer to the following command and replace values according to your environment: Parameters include: weight The I/O weight, within the range 100 to 1000. device-weights A single string listing one or more device/weight pairs, in the format of /path/to/device ,weight, /path/to/device ,weight . Each weight must be within the range 100-1000, or the value 0 to remove that device from per-device listings. Only the devices listed in the string are modified; any existing per-device weights for other devices remain unchanged. config Add the --config option for changes to take effect at boot. live Add the --live option to apply the changes to the running virtual machine. Note The --live option requires the hypervisor to support this action. Not all hypervisors allow live changes of the maximum memory limit. current Add the --current option to apply the changes to the current virtual machine. Note See # virsh help blkiotune for more information on using the virsh blkiotune command.
|
[
"virsh blkiotune virtual_machine",
"virsh blkiotune virtual_machine [--weight number ] [--device-weights string ] [--config] [--live] [--current]"
] |
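The parameters described above map directly onto the two virsh blkiotune invocations listed; the following is a minimal sketch, not part of the source document, that wraps those same calls with Python's subprocess module. The domain name and device path are assumptions.

import subprocess

DOMAIN = "virtual_machine"  # assumed guest name

def virsh(*args):
    """Run a virsh command and return its standard output."""
    result = subprocess.run(["virsh", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Display the current <blkio> parameters for the guest.
print(virsh("blkiotune", DOMAIN))

# Set an overall I/O weight of 500 and a per-device weight of 300 for /dev/sda,
# applying the change to the running guest (--live) and persisting it in the config (--config).
print(virsh("blkiotune", DOMAIN,
            "--weight", "500",
            "--device-weights", "/dev/sda,300",
            "--live", "--config"))

Because check=True is set, a wrong domain name or device path raises CalledProcessError instead of failing silently.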
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-blockio-intro-block_io_tuning
|
Chapter 1. Support policy for Red Hat build of OpenJDK
|
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy. Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Red Hat build of OpenJDK.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.9/rn-openjdk-support-policy
|
Chapter 10. Restricting the session to a single application
|
Chapter 10. Restricting the session to a single application You can start the GNOME session in single-application mode, also known as kiosk mode. In this session, GNOME displays only a full-screen window of the application that you have selected. 10.1. Single-application mode Single-application mode is a modified GNOME session that reconfigures the Mutter window manager into an interactive kiosk. This session locks down certain behavior to make the standard desktop more restrictive. The user can interact only with a single application selected by the administrator. You can set up single-application mode for several use cases, such as: In the communication, entertainment, or education fields As a self-serve machine As an event manager As a registration point The GNOME Kiosk utility provides the single-application mode configuration and sessions. The following single-application sessions are available: Search Appliance Session This session always starts the Mozilla Firefox web browser at the www.google.com website. Kiosk Script Session This session starts an arbitrary application that you specify in a shell script. 10.2. Enabling search appliance mode This procedure installs and enables the Search Appliance Session, which restricts the GNOME session to the Google search engine in a web browser. Procedure Install the GNOME Kiosk packages: At the GNOME login screen, select Search Appliance Session from the gear button menu and log in as the single-application user. The Mozilla Firefox browser opens as a full-screen window in its kiosk mode. It shows the Google search page. Additional resources The /usr/share/doc/gnome-kiosk/README.md file provided by the gnome-kiosk package. 10.3. Enabling single-application mode This procedure installs and enables the Kiosk Script Session, which restricts the GNOME session to a selected single application. Procedure Install the GNOME Kiosk packages: At the GNOME login screen, select Kiosk Script Session from the gear button menu and log in as the single-application user. The gedit text editor opens as a full-screen window. It shows the shell script that configures which application runs in your single-application session. Edit the shell script and enter the application that you want to start in the single-application session. For example, to start the Mozilla Firefox browser, enter the following content: Save the script file. Close the gedit window. The session terminates and restarts with your selected application. The next time you log into the single-application session, your selected application runs. Additional resources The /usr/share/doc/gnome-kiosk/README.md file provided by the gnome-kiosk package.
|
[
"dnf install gnome-kiosk gnome-kiosk-search-appliance",
"dnf install gnome-kiosk gnome-kiosk-script-session",
"#!/usr/bin/sh firefox --kiosk https://example.org"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/assembly_restricting-the-session-to-a-single-application_customizing-the-gnome-desktop-environment
|
Chapter 7. Supported integration products
|
Chapter 7. Supported integration products AMQ Streams 2.1 supports integration with the following Red Hat products. Red Hat Single Sign-On Provides OAuth 2.0 authentication and OAuth 2.0 authorization. Red Hat 3scale API Management Secures the Kafka Bridge and provides additional API management features. Red Hat Debezium Monitors databases and creates event streams. Red Hat Service Registry Provides a centralized store of service schemas for data streaming. For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the product documentation. Additional resources Red Hat Single Sign-On Supported Configurations Red Hat 3scale API Management Supported Configurations Red Hat Debezium Supported Configurations Red Hat Service Registry Supported Configurations
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/release_notes_for_amq_streams_2.1_on_openshift/supported-config-str
|
Chapter 6. AWS DynamoDB
|
Chapter 6. AWS DynamoDB Only producer is supported The AWS2 DynamoDB component supports storing and retrieving data from/to service. Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon DynamoDB. More information is available at Amazon DynamoDB . 6.1. Dependencies When using aws2-ddb Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-ddb-starter</artifactId> </dependency> 6.2. URI Format aws2-ddb://domainName[?options] You can append query options to the URI in the following format, ?options=value&option2=value&... 6.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 6.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 6.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 6.4. Component Options The AWS DynamoDB component supports 22 options, which are listed below. Name Description Default Type amazonDDBClient (producer) Autowired To use the AmazonDynamoDB as the client. DynamoDbClient configuration (producer) The component configuration. Ddb2Configuration consistentRead (producer) Determines whether or not strong consistency should be enforced when data is read. false boolean enabledInitialDescribeTable (producer) Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. true boolean keyAttributeName (producer) Attribute name when creating table. String keyAttributeType (producer) Attribute type when creating table. String keyScalarType (producer) The key scalar type, it can be S (String), N (Number) and B (Bytes). String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean operation (producer) What operation to perform. Enum values: BatchGetItems DeleteItem DeleteTable DescribeTable GetItem PutItem Query Scan UpdateItem UpdateTable PutItem Ddb2Operations overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (producer) To define a proxy host when instantiating the DDB client. String proxyPort (producer) The region in which DynamoDB client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). Integer proxyProtocol (producer) To define a proxy protocol when instantiating the DDB client. Enum values: HTTP HTTPS HTTPS Protocol readCapacity (producer) The provisioned throughput to reserve for reading resources from your table. Long region (producer) The region in which DDB client needs to work. String trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean writeCapacity (producer) The provisioned throughput to reserved for writing resources to your table. Long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 6.5. Endpoint Options The AWS DynamoDB endpoint is configured using URI syntax: with the following path and query parameters: 6.5.1. Path Parameters (1 parameters) Name Description Default Type tableName (producer) Required The name of the table currently worked with. String 6.5.2. Query Parameters (20 parameters) Name Description Default Type amazonDDBClient (producer) Autowired To use the AmazonDynamoDB as the client. DynamoDbClient consistentRead (producer) Determines whether or not strong consistency should be enforced when data is read. false boolean enabledInitialDescribeTable (producer) Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. true boolean keyAttributeName (producer) Attribute name when creating table. String keyAttributeType (producer) Attribute type when creating table. String keyScalarType (producer) The key scalar type, it can be S (String), N (Number) and B (Bytes). 
String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean operation (producer) What operation to perform. Enum values: BatchGetItems DeleteItem DeleteTable DescribeTable GetItem PutItem Query Scan UpdateItem UpdateTable PutItem Ddb2Operations overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (producer) To define a proxy host when instantiating the DDB client. String proxyPort (producer) The region in which DynamoDB client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). Integer proxyProtocol (producer) To define a proxy protocol when instantiating the DDB client. Enum values: HTTP HTTPS HTTPS Protocol readCapacity (producer) The provisioned throughput to reserve for reading resources from your table. Long region (producer) The region in which DDB client needs to work. String trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean writeCapacity (producer) The provisioned throughput to reserved for writing resources to your table. Long accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String Required DDB component options You have to provide the amazonDDBClient in the Registry or your accessKey and secretKey to access the Amazon's DynamoDB . 6.6. Usage 6.6.1. Static credentials vs Default Credential Provider You have the possibility of avoiding the usage of explicit static credentials, by specifying the useDefaultCredentialsProvider option and set it to true. Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. For more information about this you can look at AWS credentials documentation 6.6.2. Message headers evaluated by the DDB producer Header Type Description CamelAwsDdbBatchItems Map<String, KeysAndAttributes> A map of the table name and corresponding items to get by primary key. CamelAwsDdbTableName String Table Name for this operation. CamelAwsDdbKey Key The primary key that uniquely identifies each item in a table. 
CamelAwsDdbReturnValues String Use this parameter if you want to get the attribute name-value pairs before or after they are modified(NONE, ALL_OLD, UPDATED_OLD, ALL_NEW, UPDATED_NEW). CamelAwsDdbUpdateCondition Map<String, ExpectedAttributeValue> Designates an attribute for a conditional modification. CamelAwsDdbAttributeNames Collection<String> If attribute names are not specified then all attributes will be returned. CamelAwsDdbConsistentRead Boolean If set to true, then a consistent read is issued, otherwise eventually consistent is used. CamelAwsDdbIndexName String If set will be used as Secondary Index for Query operation. CamelAwsDdbItem Map<String, AttributeValue> A map of the attributes for the item, and must include the primary key values that define the item. CamelAwsDdbExactCount Boolean If set to true, Amazon DynamoDB returns a total number of items that match the query parameters, instead of a list of the matching items and their attributes. CamelAwsDdbKeyConditions Map<String, Condition> This header specify the selection criteria for the query, and merge together the two old headers CamelAwsDdbHashKeyValue and CamelAwsDdbScanRangeKeyCondition CamelAwsDdbStartKey Key Primary key of the item from which to continue an earlier query. CamelAwsDdbHashKeyValue AttributeValue Value of the hash component of the composite primary key. CamelAwsDdbLimit Integer The maximum number of items to return. CamelAwsDdbScanRangeKeyCondition Condition A container for the attribute values and comparison operators to use for the query. CamelAwsDdbScanIndexForward Boolean Specifies forward or backward traversal of the index. CamelAwsDdbScanFilter Map<String, Condition> Evaluates the scan results and returns only the desired values. CamelAwsDdbUpdateValues Map<String, AttributeValueUpdate> Map of attribute name to the new value and action for the update. 6.6.3. Message headers set during BatchGetItems operation Header Type Description CamelAwsDdbBatchResponse Map<String,BatchResponse> Table names and the respective item attributes from the tables. CamelAwsDdbUnprocessedKeys Map<String,KeysAndAttributes> Contains a map of tables and their respective keys that were not processed with the current response. 6.6.4. Message headers set during DeleteItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.5. Message headers set during DeleteTable operation Header Type Description CamelAwsDdbProvisionedThroughput ProvisionedThroughputDescription The value of the ProvisionedThroughput property for this table CamelAwsDdbCreationDate Date Creation DateTime of this table. CamelAwsDdbTableItemCount Long Item count for this table. CamelAwsDdbKeySchema KeySchema The KeySchema that identifies the primary key for this table. From Camel 2.16.0 the type of this header is List<KeySchemaElement> and not KeySchema CamelAwsDdbTableName String The table name. CamelAwsDdbTableSize Long The table size in bytes. CamelAwsDdbTableStatus String The status of the table: CREATING, UPDATING, DELETING, ACTIVE 6.6.6. Message headers set during DescribeTable operation Header Type Description CamelAwsDdbProvisionedThroughput \{{ProvisionedThroughputDescription}} The value of the ProvisionedThroughput property for this table CamelAwsDdbCreationDate Date Creation DateTime of this table. CamelAwsDdbTableItemCount Long Item count for this table. CamelAwsDdbKeySchema \{{KeySchema}} The KeySchema that identifies the primary key for this table. 
CamelAwsDdbTableName String The table name. CamelAwsDdbTableSize Long The table size in bytes. CamelAwsDdbTableStatus String The status of the table: CREATING, UPDATING, DELETING, ACTIVE CamelAwsDdbReadCapacity Long ReadCapacityUnits property of this table. CamelAwsDdbWriteCapacity Long WriteCapacityUnits property of this table. 6.6.7. Message headers set during GetItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.8. Message headers set during PutItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.9. Message headers set during Query operation Header Type Description CamelAwsDdbItems List<java.util.Map<String,AttributeValue>> The list of attributes returned by the operation. CamelAwsDdbLastEvaluatedKey Key Primary key of the item where the query operation stopped, inclusive of the result set. CamelAwsDdbConsumedCapacity Double The number of Capacity Units of the provisioned throughput of the table consumed during the operation. CamelAwsDdbCount Integer Number of items in the response. 6.6.10. Message headers set during Scan operation Header Type Description CamelAwsDdbItems List<java.util.Map<String,AttributeValue>> The list of attributes returned by the operation. CamelAwsDdbLastEvaluatedKey Key Primary key of the item where the query operation stopped, inclusive of the result set. CamelAwsDdbConsumedCapacity Double The number of Capacity Units of the provisioned throughput of the table consumed during the operation. CamelAwsDdbCount Integer Number of items in the response. CamelAwsDdbScannedCount Integer Number of items in the complete scan before any filters are applied. 6.6.11. Message headers set during UpdateItem operation Header Type Description CamelAwsDdbAttributes Map<String, AttributeValue> The list of attributes returned by the operation. 6.6.12. Advanced AmazonDynamoDB configuration If you need more control over the AmazonDynamoDB instance configuration you can create your own instance and refer to it from the URI: from("direct:start") .to("aws2-ddb://domainName?amazonDDBClient=#client"); The #client refers to a DynamoDbClient in the Registry. 6.7. Supported producer operations BatchGetItems DeleteItem DeleteTable DescribeTable GetItem PutItem Query Scan UpdateItem UpdateTable 6.8. Examples 6.8.1. Producer Examples PutItem: this operation will create an entry into DynamoDB from("direct:start") .setHeader(Ddb2Constants.OPERATION, Ddb2Operations.PutItem) .setHeader(Ddb2Constants.CONSISTENT_READ, "true") .setHeader(Ddb2Constants.RETURN_VALUES, "ALL_OLD") .setHeader(Ddb2Constants.ITEM, attributeMap) .setHeader(Ddb2Constants.ATTRIBUTE_NAMES, attributeMap.keySet()); .to("aws2-ddb://" + tableName + "?keyAttributeName=" + attributeName + "&keyAttributeType=" + KeyType.HASH + "&keyScalarType=" + ScalarAttributeType.S + "&readCapacity=1&writeCapacity=1"); Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-ddb</artifactId> <version>USD{camel-version}</version> </dependency> where {camel-version} must be replaced by the actual version of Camel. 6.9. Spring Boot Auto-Configuration The component supports 40 options, which are listed below. Name Description Default Type camel.component.aws2-ddb.access-key Amazon AWS Access Key. 
String camel.component.aws2-ddb.amazon-d-d-b-client To use the AmazonDynamoDB as the client. The option is a software.amazon.awssdk.services.dynamodb.DynamoDbClient type. DynamoDbClient camel.component.aws2-ddb.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-ddb.configuration The component configuration. The option is a org.apache.camel.component.aws2.ddb.Ddb2Configuration type. Ddb2Configuration camel.component.aws2-ddb.consistent-read Determines whether or not strong consistency should be enforced when data is read. false Boolean camel.component.aws2-ddb.enabled Whether to enable auto configuration of the aws2-ddb component. This is enabled by default. Boolean camel.component.aws2-ddb.enabled-initial-describe-table Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. true Boolean camel.component.aws2-ddb.key-attribute-name Attribute name when creating table. String camel.component.aws2-ddb.key-attribute-type Attribute type when creating table. String camel.component.aws2-ddb.key-scalar-type The key scalar type, it can be S (String), N (Number) and B (Bytes). String camel.component.aws2-ddb.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.aws2-ddb.operation What operation to perform. Ddb2Operations camel.component.aws2-ddb.override-endpoint Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-ddb.proxy-host To define a proxy host when instantiating the DDB client. String camel.component.aws2-ddb.proxy-port The region in which DynamoDB client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). Integer camel.component.aws2-ddb.proxy-protocol To define a proxy protocol when instantiating the DDB client. Protocol camel.component.aws2-ddb.read-capacity The provisioned throughput to reserve for reading resources from your table. Long camel.component.aws2-ddb.region The region in which DDB client needs to work. String camel.component.aws2-ddb.secret-key Amazon AWS Secret Key. String camel.component.aws2-ddb.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-ddb.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. 
String camel.component.aws2-ddb.use-default-credentials-provider Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean camel.component.aws2-ddb.write-capacity The provisioned throughput to reserved for writing resources to your table. Long camel.component.aws2-ddbstream.access-key Amazon AWS Access Key. String camel.component.aws2-ddbstream.amazon-dynamo-db-streams-client Amazon DynamoDB client to use for all requests for this endpoint. The option is a software.amazon.awssdk.services.dynamodb.streams.DynamoDbStreamsClient type. DynamoDbStreamsClient camel.component.aws2-ddbstream.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-ddbstream.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.aws2-ddbstream.configuration The component configuration. The option is a org.apache.camel.component.aws2.ddbstream.Ddb2StreamConfiguration type. Ddb2StreamConfiguration camel.component.aws2-ddbstream.enabled Whether to enable auto configuration of the aws2-ddbstream component. This is enabled by default. Boolean camel.component.aws2-ddbstream.max-results-per-request Maximum number of records that will be fetched in each poll. Integer camel.component.aws2-ddbstream.override-endpoint Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-ddbstream.proxy-host To define a proxy host when instantiating the DDBStreams client. String camel.component.aws2-ddbstream.proxy-port To define a proxy port when instantiating the DDBStreams client. Integer camel.component.aws2-ddbstream.proxy-protocol To define a proxy protocol when instantiating the DDBStreams client. Protocol camel.component.aws2-ddbstream.region The region in which DDBStreams client needs to work. String camel.component.aws2-ddbstream.secret-key Amazon AWS Secret Key. String camel.component.aws2-ddbstream.stream-iterator-type Defines where in the DynamoDB stream to start getting records. Note that using FROM_START can cause a significant delay before the stream has caught up to real-time. Ddb2StreamConfigurationUSDStreamIteratorType camel.component.aws2-ddbstream.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-ddbstream.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String camel.component.aws2-ddbstream.use-default-credentials-provider Set whether the DynamoDB Streams client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean
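As noted in the Advanced AmazonDynamoDB configuration section above, the #client URI reference resolves to a DynamoDbClient held in the registry, and the camel.component.aws2-ddb.amazon-d-d-b-client option can be autowired from such a bean. The following Spring Boot configuration class is only an illustrative sketch and is not taken from the component documentation; the bean name client, the region, and the placeholder credentials are assumptions that you would replace with your own values (or drop in favour of the default credentials provider).
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

@Configuration
public class DdbClientConfig {

    // The bean name "client" matches the #client reference used in
    // aws2-ddb://domainName?amazonDDBClient=#client
    @Bean(name = "client")
    public DynamoDbClient dynamoDbClient() {
        // Placeholder region and credentials (assumptions for this sketch);
        // replace them or rely on the default credentials provider instead.
        return DynamoDbClient.builder()
                .region(Region.EU_WEST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("myAccessKey", "mySecretKey")))
                .build();
    }
}
With a bean like this in place, a route can reference it exactly as shown earlier, for example aws2-ddb://domainName?amazonDDBClient=#client.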
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-ddb-starter</artifactId> </dependency>",
"aws2-ddb://domainName[?options]",
"aws2-ddb:tableName",
"from(\"direct:start\") .to(\"aws2-ddb://domainName?amazonDDBClient=#client\");",
"from(\"direct:start\") .setHeader(Ddb2Constants.OPERATION, Ddb2Operations.PutItem) .setHeader(Ddb2Constants.CONSISTENT_READ, \"true\") .setHeader(Ddb2Constants.RETURN_VALUES, \"ALL_OLD\") .setHeader(Ddb2Constants.ITEM, attributeMap) .setHeader(Ddb2Constants.ATTRIBUTE_NAMES, attributeMap.keySet()); .to(\"aws2-ddb://\" + tableName + \"?keyAttributeName=\" + attributeName + \"&keyAttributeType=\" + KeyType.HASH + \"&keyScalarType=\" + ScalarAttributeType.S + \"&readCapacity=1&writeCapacity=1\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-ddb</artifactId> <version>USD{camel-version}</version> </dependency>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-aws2-ddb-component-starter
|
Chapter 15. GenericKafkaListenerConfigurationBootstrap schema reference
|
Chapter 15. GenericKafkaListenerConfigurationBootstrap schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBootstrap schema properties Broker service equivalents of nodePort , host , loadBalancerIP and annotations properties are configured in the GenericKafkaListenerConfigurationBroker schema . 15.1. alternativeNames You can specify alternative names for the bootstrap service. The names are added to the broker certificates and can be used for TLS hostname verification. The alternativeNames property is applicable to all types of listeners. Example of an external route listener configured with an additional bootstrap address listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2 # ... 15.2. host The host property is used with route and ingress listeners to specify the hostnames used by the bootstrap and per-broker services. A host property value is mandatory for ingress listener configuration, as the Ingress controller does not assign any hostnames automatically. Make sure that the hostnames resolve to the Ingress endpoints. AMQ Streams will not perform any validation that the requested hosts are available and properly routed to the Ingress endpoints. Example of host configuration for an ingress listener listeners: #... - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # ... By default, route listener hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying hosts. AMQ Streams does not perform any validation that the requested hosts are available. You must ensure that they are free and can be used. Example of host configuration for a route listener # ... listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com # ... 15.3. nodePort By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. You can override the assigned node ports for nodeport listeners by specifying the requested port numbers. AMQ Streams does not perform any validation on the requested ports. You must ensure that they are free and available for use. Example of an external listener configured with overrides for node ports # ... listeners: #... - name: external port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002 # ... 15.4. loadBalancerIP Use the loadBalancerIP property to request a specific IP address when creating a loadbalancer. Use this property when you need to use a loadbalancer with a specific IP address. The loadBalancerIP field is ignored if the cloud provider does not support the feature. Example of an external listener of type loadbalancer with specific loadbalancer IP address requests # ... listeners: #... 
- name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3 # ... 15.5. annotations Use the annotations property to add annotations to OpenShift resources related to the listeners. You can use these annotations, for example, to instrument DNS tooling such as External DNS , which automatically assigns DNS names to the loadbalancer services. Example of an external listener of type loadbalancer using annotations # ... listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" # ... 15.6. GenericKafkaListenerConfigurationBootstrap schema properties Property Description alternativeNames Additional alternative names for the bootstrap service. The alternative names will be added to the list of subject alternative names of the TLS certificates. string array host The bootstrap host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the bootstrap service. This field can be used only with nodeport type listener. integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress , Route , or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map labels Labels that will be added to the Ingress , Route , or Service resource. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map
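The schema properties table above also lists a labels map, for which this chapter has no example. The following snippet is an illustrative sketch only, modelled on the annotations example; the team and cost-center label keys and their values are placeholders, not values defined by AMQ Streams.
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        # Placeholder labels added to the bootstrap Service resource
        labels:
          team: platform
          cost-center: "1234"
      brokers:
        - broker: 0
          # Per-broker equivalents are defined by the GenericKafkaListenerConfigurationBroker schema
          labels:
            team: platform
# ...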
|
[
"listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2",
"listeners: # - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com",
"listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com",
"listeners: # - name: external port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002",
"listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3",
"listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\""
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-generickafkalistenerconfigurationbootstrap-reference
|
Chapter 6. Using .NET 6.0 on OpenShift Container Platform
|
Chapter 6. Using .NET 6.0 on OpenShift Container Platform 6.1. Overview NET images are added to OpenShift by importing imagestream definitions from s2i-dotnetcore . The imagestream definitions includes the dotnet imagestream which contains sdk images for different supported versions of .NET. .NET Life Cycle provides an up-to-date overview of supported versions. Version Tag Alias .NET Core 3.1 dotnet:3.1-el7 dotnet:3.1 dotnet:3.1-ubi8 .NET 5 dotnet:5.0-ubi8 dotnet:5.0 .NET 6 dotnet:6.0-ubi8 dotnet:6.0 The sdk images have corresponding runtime images which are defined under the dotnet-runtime imagestream. The container images work across different versions of Red Hat Enterprise Linux and OpenShift. The RHEL7-based (suffix -el7) are hosted on the registry.redhat.io image repository. Authentication is required to pull these images. These credentials are configured by adding a pull secret to the OpenShift namespace. The UBI-8 based images (suffix -ubi8) are hosted on the registry.access.redhat.com and do not require authentication. 6.2. Installing .NET image streams To install .NET image streams, use image stream definitions from s2i-dotnetcore with the OpenShift Client ( oc ) binary. Image streams can be installed from Linux, Mac, and Windows. A script enables you to install, update or remove the image streams. You can define .NET image streams in the global openshift namespace or locally in a project namespace. Sufficient permissions are required to update the openshift namespace definitions. 6.2.1. Installing image streams using OpenShift Client You can use OpenShift Client ( oc ) to install .NET image streams. Prerequisites An existing pull secret must be present in the namespace. If no pull secret is present in the namespace. Add one by following the instructions in the Red Hat Container Registry Authentication guide. Procedure List the available .NET image streams: The output shows installed images. If no images are installed, the Error from server (NotFound) message is displayed. If the Error from server (NotFound) message is displayed: Install the .NET image streams: If the Error from server (NotFound) message is not displayed: Include newer versions of existing .NET image streams: 6.2.2. Installing image streams on Linux and macOS You can use this script to install, upgrade, or remove the image streams on Linux and macOS. Procedure Download the script. On Linux use: On Mac use: Make the script executable: Log in to the OpenShift cluster: Install image streams and add a pull secret for authentication against the registry.redhat.io : Replace subscription_username with the name of the user, and replace subscription_password with the user's password. The credentials may be omitted if you do not plan to use the RHEL7-based images. If the pull secret is already present, the --user and --password arguments are ignored. Additional information ./install-imagestreams.sh --help 6.2.3. Installing image streams on Windows You can use this script to install, upgrade, or remove the image streams on Windows. Procedure Download the script. Log in to the OpenShift cluster: Install image streams and add a pull secret for authentication against the registry.redhat.io : Replace subscription_username with the name of the user, and replace subscription_password with the user's password. The credentials may be omitted if you do not plan to use the RHEL7-based images. If the pull secret is already present, the -User and -Password arguments are ignored. 
Note The PowerShell ExecutionPolicy may prohibit executing this script. To relax the policy, run Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass -Force . Additional information Get-Help .\install-imagestreams.ps1 6.3. Deploying applications from source using oc The following example demonstrates how to deploy the example-app application using oc , which is in the app folder on the {dotnet-branch} branch of the redhat-developer/s2i-dotnetcore-ex GitHub repository: Procedure Create a new OpenShift project: Add the ASP.NET Core application: Track the progress of the build: View the deployed application once the build is finished: The application is now accessible within the project. Optional : Make the project accessible externally: Obtain the shareable URL: 6.4. Deploying applications from binary artifacts using oc You can use .NET Source-to-Image (S2I) builder image to build applications using binary artifacts that you provide. Prerequisites Published application. For more information, see Publishing applications with .NET 6.0 . Procedure Create a new binary build: Start the build and specify the path to the binary artifacts on your local machine: Create a new application: 6.5. Environment variables for .NET 6.0 The .NET images support several environment variables to control the build behavior of your .NET application. You can set these variables as part of the build configuration, or add them to the .s2i/environment file in the application source code repository. Variable Name Description Default DOTNET_STARTUP_PROJECT Selects the project to run. This must be a project file (for example, csproj or fsproj ) or a folder containing a single project file. . DOTNET_ASSEMBLY_NAME Selects the assembly to run. This must not include the .dll extension. Set this to the output assembly name specified in csproj (PropertyGroup/AssemblyName). The name of the csproj file DOTNET_PUBLISH_READYTORUN When set to true , the application will be compiled ahead of time. This reduces startup time by reducing the amount of work the JIT needs to perform when the application is loading. false DOTNET_RESTORE_SOURCES Specifies the space-separated list of NuGet package sources used during the restore operation. This overrides all of the sources specified in the NuGet.config file. This variable cannot be combined with DOTNET_RESTORE_CONFIGFILE . DOTNET_RESTORE_CONFIGFILE Specifies a NuGet.Config file to be used for restore operations. This variable cannot be combined with DOTNET_RESTORE_SOURCES . DOTNET_TOOLS Specifies a list of .NET tools to install before building the app. It is possible to install a specific version by post pending the package name with @<version> . DOTNET_NPM_TOOLS Specifies a list of NPM packages to install before building the application. DOTNET_TEST_PROJECTS Specifies the list of test projects to test. This must be project files or folders containing a single project file. dotnet test is invoked for each item. DOTNET_CONFIGURATION Runs the application in Debug or Release mode. This value should be either Release or Debug . Release DOTNET_VERBOSITY Specifies the verbosity of the dotnet build commands. When set, the environment variables are printed at the start of the build. This variable can be set to one of the msbuild verbosity values ( q[uiet] , m[inimal] , n[ormal] , d[etailed] , and diag[nostic] ). HTTP_PROXY, HTTPS_PROXY Configures the HTTP or HTTPS proxy used when building and running the application, respectively. 
DOTNET_RM_SRC When set to true , the source code will not be included in the image. DOTNET_SSL_DIRS Specifies a list of folders or files with additional SSL certificates to trust. The certificates are trusted by each process that runs during the build and all processes that run in the image after the build (including the application that was built). The items can be absolute paths (starting with / ) or paths in the source repository (for example, certificates). NPM_MIRROR Uses a custom NPM registry mirror to download packages during the build process. ASPNETCORE_URLS This variable is set to http://*:8080 to configure ASP.NET Core to use the port exposed by the image. Changing this is not recommended. http://*:8080 DOTNET_RESTORE_DISABLE_PARALLEL When set to true , disables restoring multiple projects in parallel. This reduces restore timeout errors when the build container is running with low CPU limits. false DOTNET_INCREMENTAL When set to true , the NuGet packages will be kept so they can be re-used for an incremental build. false DOTNET_PACK When set to true , creates a tar.gz file at /opt/app-root/app.tar.gz that contains the published application. 6.6. Creating the MVC sample application s2i-dotnetcore-ex is the default Model, View, Controller (MVC) template application for .NET. This application is used as the example application by the .NET S2I image and can be created directly from the OpenShift UI using the Try Example link. The application can also be created with the OpenShift client binary ( oc ). Procedure To create the sample application using oc : Add the .NET application: Make the application accessible externally: Obtain the sharable URL: Additional resources s2i-dotnetcore-ex application repository on GitHub 6.7. Creating the CRUD sample application s2i-dotnetcore-persistent-ex is a simple Create, Read, Update, Delete (CRUD) .NET web application that stores data in a PostgreSQL database. Procedure To create the sample application using oc : Add the database: Add the .NET application: Add environment variables from the postgresql secret and database service name environment variable: Make the application accessible externally: Obtain the sharable URL: Additional resources s2i-dotnetcore-ex application repository on GitHub
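The environment variables listed in section 6.5 can also be committed to the application repository in a .s2i/environment file. The following file content is an illustrative sketch only; the project path and the chosen values are placeholder assumptions and are not part of the s2i-dotnetcore example repositories.
# .s2i/environment (values shown here are placeholders)
DOTNET_STARTUP_PROJECT=app/app.csproj
DOTNET_CONFIGURATION=Release
DOTNET_PUBLISH_READYTORUN=true
DOTNET_RESTORE_DISABLE_PARALLEL=true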
|
[
"oc describe is dotnet",
"oc create -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams.json",
"oc replace -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams.json",
"wget https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/install-imagestreams.sh",
"curl https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/install-imagestreams.sh -o install-imagestreams.sh",
"chmod +x install-imagestreams.sh",
"oc login",
"./install-imagestreams.sh --os rhel [--user subscription_username --password subscription_password ]",
"Invoke-WebRequest https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/install-imagestreams.ps1 -UseBasicParsing -OutFile install-imagestreams.ps1",
"oc login",
".\\install-imagestreams.ps1 --OS rhel [-User subscription_username -Password subscription_password ]",
"oc new-project sample-project",
"oc new-app --name= example-app 'dotnet:6.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#{dotnet-branch}' --build-env DOTNET_STARTUP_PROJECT=app",
"oc logs -f bc/ example-app",
"oc logs -f dc/ example-app",
"oc expose svc/ example-app",
"oc get routes",
"oc new-build --name= my-web-app dotnet:6.0-ubi8 --binary=true",
"oc start-build my-web-app --from-dir= bin/Release/net6.0/publish",
"oc new-app my-web-app",
"oc new-app dotnet:6.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#{dotnet-branch} --context-dir=app",
"oc expose service s2i-dotnetcore-ex",
"oc get route s2i-dotnetcore-ex",
"oc new-app postgresql-ephemeral",
"oc new-app dotnet:6.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-persistent-ex#{dotnet-branch} --context-dir app",
"oc set env dc/s2i-dotnetcore-persistent-ex --from=secret/postgresql -e database-service=postgresql",
"oc expose service s2i-dotnetcore-persistent-ex",
"oc get route s2i-dotnetcore-persistent-ex"
] |
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_9/using_net_6_0_on_openshift_container_platform
|
Security architecture
|
Security architecture Red Hat build of Quarkus 3.8 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/security_architecture/index
|
14.2. Authentication
|
14.2. Authentication 14.2.1. Using Enterprise Credentials to Log into GNOME If your network has an Active Directory or Identity Management domain available, and you have a domain account, you can use your domain credentials to log into GNOME. If the machine has been successfully configured for domain accounts, users can log into GNOME using their accounts. At the login prompt, type the domain user name followed by an @ sign, and then your domain name. For example, if your domain name is example.com and the user name is User , type: In cases where the machine is already configured for domain accounts, you should see a helpful hint describing the login format. 14.2.1.1. Choosing to Use Enterprise Credentials During Welcome Screens If you have not yet configured the machine for enterprise credentials, you can do so at the Welcome screens that are part of the GNOME Initial Setup program. Procedure 14.1. Configuring Enterprise Credentials At the Login welcome screen, choose Use Enterprise Login . Type the name of your domain in the Domain field if it is not already prefilled. Type your domain account user and password in the relevant fields. Click . Depending on how the domain is configured, a prompt may show up asking for the domain administrator's name and password in order to proceed. 14.2.1.2. Changing to Use Enterprise Credentials to Log into GNOME If you have already completed initial setup, and wish to start a domain account to log into GNOME, then you can accomplish this from the Users panel in the GNOME Settings. Procedure 14.2. Configuring Enterprise Credentials Click your name on the top bar and select Settings from the menu. From the list of items, select Users . Click the Unlock button and type the computer administrator's password. Click the + button in the lower left of the window. Select the Enterprise Login pane. Enter the domain, user, and password for your Enterprise account, and click Add . Depending on how your domain is configured, a prompt may show up asking for the domain administrator's name and password in order to proceed. 14.2.1.3. Troubleshooting and Advanced Setup The realm command and its various subcommands can be used to troubleshoot the enterprise login feature. For example, to see whether the machine has been configured for enterprise logins, run the following command: Network administrators are encouraged to pre-join workstations to a relevant domain. This can be done using the kickstart realm join command, or running realm join in an automated fashion from a script. Getting More Information Red Hat Enterprise Linux 7 Windows Integration Guide - The Windows Integration Guide for Red Hat Enterprise Linux 7 provides more detailed information about using realmd to connect to an Active Directory domain. 14.2.2. Enabling Smart Card Authentication Enabling smart card authentication requires two consecutive steps: Configuration of GDM to allow prompting for smart cards Configuration of the operating system to allow using smart cards to login 1.Configuration of GDM to allow prompting for smart cards You can use two ways to configure the GDM to allow prompting for smart card authentication: dconf editor GUI Procedure 14.3. Enabling smart card authentication using dconf editor GUI Uncheck the box for the org.gnome.login-screen enable-password-authentication dcof key. Check the box for the org.gnome.login-screen enable-smartcard-authentication dcof key. dconf-tool Procedure 14.4. 
Enabling smart card authentication using dconf-tool Create a keyfile in the /etc/dconf/db/gdm.d directory. Add the following content to this keyfile: Update the system dconf databases: 2.Configuration of the operating system to allow using smart cards to login After GDM has been configured for smart card authentication, use the system-config-authentication tool to configure the system to allow users to use smart cards, making their use available to GDM as a valid authentication method for the graphical environment. The tool is provided by the authconfig-gtk package. To learn more about configuring the system to allow smart card authentication, and to learn more about the system-config-authentication tool, see the Red Hat Enterprise Linux 7 System-Level Authentication Guide . 14.2.3. Enabling Fingerprint Authentication To allow users to log in using their enrolled fingerprints, use the system-config-authentication tool to enable fingerprint authentication. The tool is provided by the authconfig-gtk package. To learn more about fingerprint authentication and the system-config-authentication tool, see the Red Hat Enterprise Linux 7 System-Level Authentication Guide .
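Section 14.2.1.3 recommends pre-joining workstations to the domain with the realm join command, either through kickstart or from a script. The following commands are an illustrative sketch; the example.com domain and the Administrator account are placeholders for your own environment.
# Join the workstation to the example.com domain (prompts for the administrator's password)
realm join --user=Administrator example.com
# Confirm that the machine is now configured for enterprise logins
realm list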
|
[
"[email protected]",
"realm list",
"[org/gnome/login-screen] enable-password-authentication='false' enable-smartcard-authentication='true'",
"dconf update"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/authentication
|
5.347. vios-proxy
|
5.347. vios-proxy 5.347.1. RHBA-2012:0755 - vios-proxy bug fix update Updated vios-proxy packages that fix one bug are now available for Red Hat Enterprise Linux 6. The vios-proxy program suite creates a network tunnel between a server in the QEMU host and a client in a QEMU guest. The proxied server and client programs open normal TCP network ports on localhost and the vios-proxy tunnel connects them using QEMU virtioserial channels. Bug Fix BZ# 743723 Previously, the packages did not contain manual pages for the vios-proxy-host and vios-proxy-guest daemons. With this update, these manual pages are now available. All users of vios-proxy are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/vios-proxy
|
Chapter 1. Policy APIs
|
Chapter 1. Policy APIs 1.1. Eviction [policy/v1] Description Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions. Type object 1.2. PodDisruptionBudget [policy/v1] Description PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods Type object
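For illustration only, a minimal PodDisruptionBudget manifest of the kind described above might look like the following; the name, namespace, label selector, and minAvailable value are placeholders rather than values defined by this API reference.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb            # placeholder name
  namespace: my-namespace     # placeholder namespace
spec:
  minAvailable: 2             # keep at least two matching pods available during voluntary disruptions
  selector:
    matchLabels:
      app: my-app             # placeholder label selecting the protected pods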
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/policy_apis/policy-apis
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_custom_block_storage_back_end/making-open-source-more-inclusive
|
19.7. Administering User Tasks From the Administration Portal
|
19.7. Administering User Tasks From the Administration Portal 19.7.1. Adding Users and Assigning VM Portal Permissions Users must be created already before they can be added and assigned roles and permissions. The roles and permissions assigned in this procedure give the user the permission to log in to the VM Portal and to start creating virtual machines. The procedure also applies to group accounts. Adding Users and Assigning VM Portal Permissions On the header bar, click Administration Configure to open the Configure window. Click System Permissions . Click Add to open the Add System Permission to User window. Select a profile under Search . The profile is the domain you want to search. Enter a name or part of a name in the search text field, and click GO . Alternatively, click GO to view a list of all users and groups. Select the check boxes for the appropriate users or groups. Select an appropriate role to assign under Role to Assign . The UserRole role gives the user account the permission to log in to the VM Portal. Click OK . Log in to the VM Portal to verify that the user account has the permissions to log in. 19.7.2. Viewing User Information Viewing User Information Click Administration Users to display the list of authorized users. Click the user's name to open the details view, usually with the General tab displaying general information, such as the domain name, email and status of the user. The other tabs allow you to view groups, permissions, quotas, and events for the user. For example, to view the groups to which the user belongs, click the Directory Groups tab. 19.7.3. Viewing User Permissions on Resources Users can be assigned permissions on specific resources or a hierarchy of resources. You can view the assigned users and their permissions on each resource. Viewing User Permissions on Resources Find and click the resource's name to open the details view. Click the Permissions tab to list the assigned users, the user's role, and the inherited permissions for the selected resource. 19.7.4. Removing Users When a user account is no longer required, remove it from Red Hat Virtualization. Removing Users Click Administration Users to display the list of authorized users. Select the user to be removed. Ensure the user is not running any virtual machines. Click Remove , then click OK . The user is removed from Red Hat Virtualization, but not from the external directory. 19.7.5. Viewing Logged-In Users You can view the users who are currently logged in, along with session times and other details. Click Administration Active User Sessions to view the Session DB ID , User Name , Authorization provider , User id , Source IP , Session Start Time , and Session Last Active Time for each logged-in user. 19.7.6. Terminating a User Session You can terminate the session of a user who is currently logged in. Terminating a User Session Click Administration Active User Sessions . Select the user session to be terminated. Click Terminate Session . Click OK .
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-red_hat_enterprise_virtualization_manager_user_tasks
|
25.18. Adding/Removing a Logical Unit Through rescan-scsi-bus.sh
|
25.18. Adding/Removing a Logical Unit Through rescan-scsi-bus.sh The sg3_utils package provides the rescan-scsi-bus.sh script, which can automatically update the logical unit configuration of the host as needed (after a device has been added to the system). The rescan-scsi-bus.sh script can also perform an issue_lip on supported devices. For more information about how to use this script, refer to rescan-scsi-bus.sh --help . To install the sg3_utils package, run yum install sg3_utils . Known Issues with rescan-scsi-bus.sh When using the rescan-scsi-bus.sh script, take note of the following known issues: In order for rescan-scsi-bus.sh to work properly, LUN0 must be the first mapped logical unit. The rescan-scsi-bus.sh can only detect the first mapped logical unit if it is LUN0 . The rescan-scsi-bus.sh will not be able to scan any other logical unit unless it detects the first mapped logical unit even if you use the --nooptscan option. A race condition requires that rescan-scsi-bus.sh be run twice if logical units are mapped for the first time. During the first scan, rescan-scsi-bus.sh only adds LUN0 ; all other logical units are added in the second scan. A bug in the rescan-scsi-bus.sh script incorrectly executes the functionality for recognizing a change in logical unit size when the --remove option is used. The rescan-scsi-bus.sh script does not recognize ISCSI logical unit removals.
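Taking these known issues into account, a typical sequence when logical units are mapped for the first time is sketched below; the script is run twice to work around the race condition described above.
yum install sg3_utils
rescan-scsi-bus.sh          # first scan: only LUN0 is added when units are newly mapped
rescan-scsi-bus.sh          # second scan: the remaining logical units are added
rescan-scsi-bus.sh --help   # full list of options, including --nooptscan and --remove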
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/logical-unit-add-remove
|
Providing feedback on Red Hat JBoss Web Server documentation
|
Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_operator/providing-direct-documentation-feedback_jws-operator
|
Chapter 2. Acknowledgments
|
Chapter 2. Acknowledgments Red Hat Ceph Storage version 7.0 contains many contributions from the Red Hat Ceph Storage team. In addition, the Ceph project is seeing amazing growth in the quality and quantity of contributions from individuals and organizations in the Ceph community. We would like to thank all members of the Red Hat Ceph Storage team, all of the individual contributors in the Ceph community, and additionally, but not limited to, the contributions from organizations such as: Intel(R) Fujitsu (R) UnitedStack Yahoo TM Ubuntu Kylin Mellanox (R) CERN TM Deutsche Telekom Mirantis (R) SanDisk TM SUSE (R) Croit TM Clyso TM Cloudbase solutions TM
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/release_notes/acknowledgments
|
Chapter 1. Getting started
|
Chapter 1. Getting started 1.1. Before you start Make sure your machine or container platform can provide sufficient memory and CPU for your desired usage of Red Hat build of Keycloak. See Concepts for sizing CPU and memory resources for more on how to get started with production sizing. Make sure you have OpenJDK 21 installed. 1.2. Download Red Hat build of Keycloak Download Red Hat build of Keycloak from the Red Hat website and extract it. After extracting this file, you should have a directory that is named rhbk-26.0.10 . 1.3. Start Red Hat build of Keycloak From a terminal, open the rhbk-26.0.10 directory. Enter the following command: On Linux, run: bin/kc.sh start-dev On Windows, run: bin\kc.bat start-dev Using the start-dev option, you are starting Red Hat build of Keycloak in development mode. In this mode, you can try out Red Hat build of Keycloak for the first time to get it up and running quickly. This mode offers convenient defaults for developers, such as for developing a new Red Hat build of Keycloak theme. 1.4. Create an admin user Red Hat build of Keycloak has no default admin user. You need to create an admin user before you can start Keycloak. Open http://localhost:8080/ . Fill in the form with your preferred username and password. 1.5. Log in to the Admin Console Go to the Red Hat build of Keycloak Admin Console . Log in with the username and password you created earlier. 1.6. Create a realm A realm in Red Hat build of Keycloak is equivalent to a tenant. Each realm allows an administrator to create isolated groups of applications and users. Initially, Red Hat build of Keycloak includes a single realm, called master . Use this realm only for managing Red Hat build of Keycloak and not for managing any applications. Use these steps to create the first realm. Open the Red Hat build of Keycloak Admin Console . Click Red Hat build of Keycloak to master realm , then click Create Realm . Enter myrealm in the Realm name field. Click Create . 1.7. Create a user Initially, the realm has no users. Use these steps to create a user: Verify that you are still in the myrealm realm, which is shown above the word Manage . Click Users in the left-hand menu. Click Create new user . Fill in the form with the following values: Username : myuser First name : any first name Last name : any last name Click Create . This user needs a password to log in. To set the initial password: Click Credentials at the top of the page. Fill in the Set password form with a password. Toggle Temporary to Off so that the user does not need to update this password at the first login. 1.8. Log in to the Account Console You can now log in to the Account Console to verify this user is configured correctly. Open the Red Hat build of Keycloak Account Console . Log in with myuser and the password you created earlier. As a user in the Account Console, you can manage your account including modifying your profile, adding two-factor authentication, and including identity provider accounts. 1.9. Secure the first application To secure the first application, you start by registering the application with your Red Hat build of Keycloak instance: Open the Red Hat build of Keycloak Admin Console . Click the word master in the top-left corner, then click myrealm . Click Clients . Click Create client Fill in the form with the following values: Client type : OpenID Connect Client ID : myclient Click Confirm that Standard flow is enabled. Click . Make these changes under Login settings . 
Set Valid redirect URIs to https://www.keycloak.org/app/* Set Web origins to https://www.keycloak.org Click Save . To confirm the client was created successfully, you can use the SPA testing application on the Keycloak website . Open https://www.keycloak.org/app/ . Click Save to use the default configuration. Click Sign in to authenticate to this application using the Red Hat build of Keycloak server you started earlier. 1.10. Taking the next step Before you run Red Hat build of Keycloak in production, consider the following actions: Switch to a production-ready database such as PostgreSQL. Configure SSL with your own certificates. Switch the admin password to a more secure password. For more information, see the Server Configuration Guide .
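For illustration, a production start command that addresses the database and SSL points above might look like the following. This is only a sketch: the database URL, credentials, certificate paths, and hostname are placeholders, and the Server Configuration Guide remains the authoritative reference for these options.
bin/kc.sh start \
  --db postgres \
  --db-url jdbc:postgresql://db.example.com:5432/keycloak \
  --db-username keycloak \
  --db-password <db-password> \
  --https-certificate-file /path/to/server.crt.pem \
  --https-certificate-key-file /path/to/server.key.pem \
  --hostname keycloak.example.com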
|
[
"bin/kc.sh start-dev",
"bin\\kc.bat start-dev"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/getting_started_guide/getting-started-zip-
|
Chapter 5. Updating OpenShift Virtualization
|
Chapter 5. Updating OpenShift Virtualization Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization. 5.1. About updating OpenShift Virtualization Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster. OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the minor version. You cannot update OpenShift Virtualization to the minor version without first updating OpenShift Container Platform. OpenShift Virtualization subscriptions use a single update channel that is named stable . The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible. If your subscription's approval strategy is set to Automatic , the update process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.10 on OpenShift Container Platform 4.10. Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. Updating OpenShift Virtualization does not interrupt network connections. Data volumes and their associated persistent volume claims are preserved during update. Important If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update. As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always . 5.2. Configuring automatic workload updates 5.2.1. About workload updates When you update OpenShift Virtualization, virtual machine workloads, including libvirt , virt-launcher , and qemu , update automatically if they support live migration. Note Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt , which is used to manage the virtual machine (VM) process. You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict . Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default. When LiveMigrate is the only update strategy enabled: VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled. VMIs that do not support live migration are not disrupted or updated. 
If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated. If you enable both LiveMigrate and Evict : VMIs that support live migration use the LiveMigrate update strategy. VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has a runStrategy value of always , a new VMI is created in a new pod with updated components. Migration attempts and timeouts When updating workloads, live migration fails if a pod is in the Pending state for the following periods: 5 minutes If the pod is pending because it is Unschedulable . 15 minutes If the pod is stuck in the pending state for any reason. When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely. Note Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging. 5.2.2. Configuring workload update methods You can configure workload update methods by editing the HyperConverged custom resource (CR). Prerequisites To use live migration as an update method, you must first enable live migration in the cluster. Note If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update. Procedure To open the HyperConverged CR in your default editor, run the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: "1m0s" 5 ... 1 The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict . If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty. 2 The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated. 3 A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: always configured, a new VMI is created in a new pod with updated components. 4 The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method. 5 The interval to wait before evicting the batch of workloads. This does not apply to the LiveMigrate method. Note You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR. To apply your changes, save and exit the editor. 5.3. 
Approving pending Operator updates 5.3.1. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any update requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 5.4. Monitoring update status 5.4.1. Monitoring OpenShift Virtualization upgrade status To monitor the status of a OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE . You can also monitor the CSV conditions in the web console or by running the command provided here. Note The PHASE and conditions values are approximations that are based on available information. Prerequisites Log in to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Run the following command: USD oc get csv -n openshift-cnv Review the output, checking the PHASE field. For example: Example output VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command: USD oc get hco -n openshift-cnv kubevirt-hyperconverged \ -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}' A successful upgrade results in the following output: Example output ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully 5.4.2. Viewing outdated OpenShift Virtualization workloads You can view a list of outdated workloads by using the CLI. Note If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires. Procedure To view a list of outdated virtual machine instances (VMIs), run the following command: USD oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces Note Configure workload updates to ensure that VMIs update automatically. 5.5. Additional resources What are Operators? Operator Lifecycle Manager concepts and resources Cluster service versions (CSVs) Virtual machine live migration Configuring virtual machine eviction strategy Configuring live migration limits and timeouts
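The Important note in section 5.1 suggests reconfiguring virtual machines that use hostpath provisioner storage so that they can be powered off automatically during a cluster update. The following abridged VirtualMachine manifest is an illustrative sketch of that change; the VM name is a placeholder and unrelated fields are omitted.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-hostpath-vm          # placeholder name
spec:
  runStrategy: Always           # set as described in section 5.1
  template:
    spec:
      # The evictionStrategy: LiveMigrate field has been removed here,
      # as recommended for VMs that use hostpath provisioner storage.
      domain:
        devices: {}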
|
[
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5",
"oc get csv -n openshift-cnv",
"VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing",
"oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'",
"ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully",
"oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/virtualization/updating-openshift-virtualization
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/managing_networking_resources/proc_providing-feedback-on-red-hat-documentation
|
1.5. Replacement Functions for gfs2_tool in Red Hat Enterprise Linux 7
|
1.5. Replacement Functions for gfs2_tool in Red Hat Enterprise Linux 7 The gfs2_tool command is not supported in Red Hat Enterprise Linux 7. Table 1.2, "gfs2_tool Equivalent Functions in Red Hat Enterprise Linux 7" summarizes the equivalent functionality for the gfs2_tool command options in Red Hat Enterprise Linux 7. Table 1.2. gfs2_tool Equivalent Functions in Red Hat Enterprise Linux 7 gfs2_tool option Replacement Functionality clearflag Flag File1 File2 ... Clear an attribute flag on a file Linux standard chattr command freeze mountpoint Freeze (quiesce) a GFS2 file system Linux standard fsfreeze -f mountpoint command gettune mountpoint Print out current values of tuning parameters For many cases, has been replaced by mount ( get mount options ). Other tuning parameters may be fetched from the respective sysfs files: /sys/fs/gfs2/dm-3/tune/* . journals mountpoint Print out information on the journals in a GFS2 file system Information about journals can be fetched with gfs2_edit -p journals device . You can run this command when the file system is mounted. lockdump mountpoint Print out information about the locks this machine holds for a given file system The GFS2 lock information may be obtained by mounting debugfs , then executing a command such as the following: sb device proto [ newvalue ] View (and possibly replace) the locking protocol To fetch the current value of the locking protocol, you can use the following command: To replace the current value of the locking protocol, you can use the following command: sb device table [ newvalue ] View (and possibly replace) the name of the locking table To fetch the current value of the name of the locking table, you can use the following command: To replace the current value of the name of the locking table, you can use the following command: sb device ondisk [ newvalue ] View (and possibly replace) the ondisk format number Do not perform this task. sb device multihost [ newvalue ] View (and possibly replace) the multihost format number Do not perform this task. sb device uuid [ newvalue ] View (and possibly replace) the uuid value To fetch the current value of the uuid , you can use the following command: To replace the current value of the uuid , you can use the following command: sb device all Print out the GFS2 superblock setflag Flag File1 File2 ... Sets an attribute flag on a file Linux standard chattr command settune mountpoint parameter newvalue Set the value of a tuning parameter For many cases, has been replaced by mount ( -o remount with options). Other tuning parameters may be set by the respective sysfs files: /sys/fs/gfs2/ cluster_name:file_system_name /tune/* unfreeze mountpoint Unfreeze a GFS2 file system Linux standard fsfreeze --unfreeze mountpoint command version Displays the version of the gfs2_tool command N/A withdraw mountpoint Cause GFS2 to abnormally shut down a given file system
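For example, the freeze and unfreeze replacements in the table map to the following standard commands; the /mnt/mygfs2 mount point shown here is a placeholder.
fsfreeze -f /mnt/mygfs2     # quiesce the GFS2 file system (gfs2_tool freeze equivalent)
fsfreeze -u /mnt/mygfs2     # thaw the file system again (gfs2_tool unfreeze equivalent)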
|
[
"gfs2_edit -p journals /dev/clus_vg/lv1 Block #Journal Status: of 2620416 (0x27fc00) -------------------- Journal List -------------------- journal0: 0x14 128MB clean. journal1: 0x805b 128MB clean. ------------------------------------------------------",
"cat /sys/kernel/debug/gfs2/ clustername:file_system_name /glocks",
"tunegfs2 -l device | grep protocol",
"tunegfs2 -o lockproto=lock_dlm device",
"tunegfs2 -l device | grep table",
"tunegfs2 -o locktable= file_system_name device",
"tunegfs2 -l device | grep UUID",
"tunegfs2 -U uuid device",
"tunegfs2 -l device",
"echo 1 > /sys/fs/gfs2/ cluster_name:file_system_name /tune/withdraw"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/gfs2toolreplace
|
Appendix A. Preparing to use Maven
|
Appendix A. Preparing to use Maven This section gives a brief overview of how to prepare Maven for building Red Hat Fuse projects and introduces the concept of Maven coordinates, which are used to locate Maven artifacts. A.1. Preparing to set up Maven Maven is a free, open source, build tool from Apache. Typically, you use Maven to build Fuse applications. Procedure Download the latest version of Maven from the Maven download page . Ensure that your system is connected to the Internet. While building a project, the default behavior is that Maven searches external repositories and downloads the required artifacts. Maven looks for repositories that are accessible over the Internet. You can change this behavior so that Maven searches only repositories that are on a local network. That is, Maven can run in an offline mode. In offline mode, Maven looks for artifacts in its local repository. See Section A.3, "Using local Maven repositories" . A.2. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is not a user specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings> A.3. Using local Maven repositories If you are running a container without an Internet connection, and you need to deploy an application that has dependencies that are not available offline, you can use the Maven dependency plug-in to download the application's dependencies into a Maven offline repository. You can then distribute this customized Maven offline repository to machines that do not have an Internet connection. 
Procedure In the project directory that contains the pom.xml file, download a repository for a Maven project by running a command such as the following: In this example, Maven dependencies and plug-ins that are required to build the project are downloaded to the /tmp/my-project directory. Distribute this customized Maven offline repository internally to any machines that do not have an Internet connection. A.4. About Maven artifacts and coordinates In the Maven build system, the basic building block is an artifact . After a build, the output of an artifact is typically an archive, such as a JAR or WAR file. A key aspect of Maven is the ability to locate artifacts and manage the dependencies between them. A Maven coordinate is a set of values that identifies the location of a particular artifact. A basic coordinate has three values in the following form: groupId:artifactId:version Sometimes Maven augments a basic coordinate with a packaging value or with both a packaging value and a classifier value. A Maven coordinate can have any one of the following forms: Here are descriptions of the values: groupId Defines a scope for the name of the artifact. You would typically use all or part of a package name as a group ID. For example, org.fusesource.example . artifactId Defines the artifact name relative to the group ID. version Specifies the artifact's version. A version number can have up to four parts: n.n.n.n , where the last part of the version number can contain non-numeric characters. For example, the last part of 1.0-SNAPSHOT is the alphanumeric substring, 0-SNAPSHOT . packaging Defines the packaged entity that is produced when you build the project. For OSGi projects, the packaging is bundle . The default value is jar . classifier Enables you to distinguish between artifacts that were built from the same POM, but have different content. Elements in an artifact's POM file define the artifact's group ID, artifact ID, packaging, and version, as shown here: <project ... > ... <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> ... </project> To define a dependency on the preceding artifact, you would add the following dependency element to a POM file: <project ... > ... <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> ... </project> Note It is not necessary to specify the bundle package type in the preceding dependency, because a bundle is just a particular kind of JAR file and jar is the default Maven package type. If you do need to specify the packaging type explicitly in a dependency, however, you can use the type element.
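For illustration only, a dependency that declares its packaging type explicitly with the type element might look like the following sketch; it reuses the bundle-demo coordinates from the preceding example and is not required for ordinary JAR dependencies.

<project ... >
  ...
  <dependencies>
    <dependency>
      <groupId>org.fusesource.example</groupId>
      <artifactId>bundle-demo</artifactId>
      <version>1.0-SNAPSHOT</version>
      <type>bundle</type>
    </dependency>
  </dependencies>
  ...
</project>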
|
[
"<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>",
"mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project",
"groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version",
"<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>",
"<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_spring_boot/spring-boot-using-maven
|
4.2. Server Support
|
4.2. Server Support Running the Certificate Authority (CA), Key Recovery Authority (KRA), Online Certificate Status Protocol (OCSP), Token Key Service (TKS), and Token Processing System (TPS) subsystems of Certificate System 10.6 is supported on Red Hat Enterprise Linux 8.8. The supported Directory Server version is 11.7. For more information, see the Section 1.3.1, "Server Support" of the release notes. Note Certificate System 10.6 is supported running on a Red Hat Enterprise Linux 8.8 virtual guest on a certified hypervisor. For details, see the Which hypervisors are certified to run Red Hat Enterprise Linux? solution article.
| null |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/release_notes-deployment_notes-server_support
|
Chapter 4. Configuring the instrumentation
|
Chapter 4. Configuring the instrumentation The Red Hat build of OpenTelemetry Operator uses an Instrumentation custom resource that defines the configuration of the instrumentation. 4.1. Auto-instrumentation in the Red Hat build of OpenTelemetry Operator Auto-instrumentation in the Red Hat build of OpenTelemetry Operator can automatically instrument an application without manual code changes. Developers and administrators can monitor applications with minimal effort and changes to the existing codebase. Auto-instrumentation runs as follows: The Red Hat build of OpenTelemetry Operator injects an init-container, or a sidecar container for Go, to add the instrumentation libraries for the programming language of the instrumented application. The Red Hat build of OpenTelemetry Operator sets the required environment variables in the application's runtime environment. These variables configure the auto-instrumentation libraries to collect traces, metrics, and logs and send them to the appropriate OpenTelemetry Collector or another telemetry backend. The injected libraries automatically instrument your application by connecting to known frameworks and libraries, such as web servers or database clients, to collect telemetry data. The source code of the instrumented application is not modified. Once the application is running with the injected instrumentation, the application automatically generates telemetry data, which is sent to a designated OpenTelemetry Collector or an external OTLP endpoint for further processing. Auto-instrumentation enables you to start collecting telemetry data quickly without having to manually integrate the OpenTelemetry SDK into your application code. However, some applications might require specific configurations or custom manual instrumentation. 4.2. OpenTelemetry instrumentation configuration options The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server ( httpd ). Important The Red Hat build of OpenTelemetry Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images. 4.2.1. Instrumentation options Instrumentation options are specified in an Instrumentation custom resource (CR). Sample Instrumentation CR apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: "20" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: "0.25" java: env: - name: OTEL_JAVAAGENT_DEBUG value: "true" Table 4.1. Parameters used by the Operator to define the Instrumentation Parameter Description Values env Common environment variables to define across all the instrumentations. exporter Exporter configuration. propagators Propagators defines inter-process context propagation configuration. tracecontext , baggage , b3 , b3multi , jaeger , ottrace , none resource Resource attributes configuration. sampler Sampling configuration. apacheHttpd Configuration for the Apache HTTP Server instrumentation. dotnet Configuration for the .NET instrumentation. go Configuration for the Go instrumentation. 
java Configuration for the Java instrumentation. nodejs Configuration for the Node.js instrumentation. python Configuration for the Python instrumentation. Table 4.2. Default protocol for auto-instrumentation Auto-instrumentation Default protocol Java 1.x otlp/grpc Java 2.x otlp/http Python otlp/http .NET otlp/http Go otlp/http Apache HTTP Server otlp/grpc 4.2.2. Configuration of the OpenTelemetry SDK variables You can use the instrumentation.opentelemetry.io/inject-sdk annotation in the OpenTelemetry Collector custom resource to instruct the Red Hat build of OpenTelemetry Operator to inject some of the following OpenTelemetry SDK environment variables, depending on the Instrumentation CR, into your pod: OTEL_SERVICE_NAME OTEL_TRACES_SAMPLER OTEL_TRACES_SAMPLER_ARG OTEL_PROPAGATORS OTEL_RESOURCE_ATTRIBUTES OTEL_EXPORTER_OTLP_ENDPOINT OTEL_EXPORTER_OTLP_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_KEY Table 4.3. Values for the instrumentation.opentelemetry.io/inject-sdk annotation Value Description "true" Injects the Instrumentation resource with the default name from the current namespace. "false" Injects no Instrumentation resource. "<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from the current namespace. "<namespace>/<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from another namespace. 4.2.3. Exporter configuration Although the Instrumentation custom resource supports setting up one or more exporters per signal, auto-instrumentation configures only the OTLP Exporter. So you must configure the endpoint to point to the OTLP Receiver on the Collector. Sample exporter TLS CA configuration using a config map apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the config map. The config map must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the config map or the absolute path to the certificate if the certificate is already present in the workload file system. Sample exporter mTLS configuration using a Secret apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the Secret for the ca_file , cert_file , and key_file values. The Secret must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 4 Points to the client certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 5 Points to the client key in the Secret or the absolute path to a key if the key is already present in the workload file system. Note You can provide the CA certificate in a config map or Secret. If you provide it in both, the config map takes higher precedence than the Secret. 
Example configuration for CA bundle injection by using a config map and Instrumentation CR apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: "true" # ... --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt # ... 4.2.4. Configuration of the Apache HTTP Server auto-instrumentation Important The Apache HTTP Server auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 4.4. Parameters for the .spec.apacheHttpd field Name Description Default attrs Attributes specific to the Apache HTTP Server. configPath Location of the Apache HTTP Server configuration. /usr/local/apache2/conf env Environment variables specific to the Apache HTTP Server. image Container image with the Apache SDK and auto-instrumentation. resourceRequirements The compute resource requirements. version Apache HTTP Server version. 2.4 The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-apache-httpd: "true" 4.2.5. Configuration of the .NET auto-instrumentation Important The .NET auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to .NET. image Container image with the .NET SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . The .NET autoinstrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-dotnet: "true" 4.2.6. Configuration of the Go auto-instrumentation Important The Go auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Go. image Container image with the Go SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-go: "true" Additional permissions required for the Go auto-instrumentation in the OpenShift cluster apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - "SYS_PTRACE" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny Tip The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows: $ oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account> 4.2.7. Configuration of the Java auto-instrumentation Important The Java auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Java. image Container image with the Java SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-java: "true" 4.2.8. Configuration of the Node.js auto-instrumentation Important The Node.js auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Node.js. image Container image with the Node.js SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotations to enable injection instrumentation.opentelemetry.io/inject-nodejs: "true" instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable" The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable. 4.2.9. Configuration of the Python auto-instrumentation Important The Python auto-instrumentation is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Python. image Container image with the Python SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . Python auto-instrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-python: "true" 4.2.10. Multi-container pods The instrumentation is run on the first container that is available by default according to the pod specification. In some cases, you can also specify target containers for injection. Pod annotation instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>" Note The Go auto-instrumentation does not support multi-container auto-instrumentation injection. 4.2.11. Multi-container pods with multiple instrumentations Injecting instrumentation for an application language to one or more containers in a multi-container pod requires the following annotation: instrumentation.opentelemetry.io/<application_language>-container-names: "<container_1>,<container_2>" 1 1 You can inject instrumentation for only one language per container. For the list of supported <application_language> values, see the following table. Table 4.5. Supported values for the <application_language> Language Value for <application_language> ApacheHTTPD apache DotNet dotnet Java java NGINX inject-nginx NodeJS nodejs Python python SDK sdk 4.2.12. Using the instrumentation CR with Service Mesh When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator.
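To show how the pod annotations described above are applied in practice, the following minimal sketch is a hypothetical Deployment whose pod template opts in to Java auto-instrumentation and limits injection to a single named container. The names my-app and app, the image reference, and the port are assumptions for illustration and are not taken from this document.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Inject the Java auto-instrumentation configured by the Instrumentation CR in this namespace
        instrumentation.opentelemetry.io/inject-java: "true"
        # Restrict injection to the container named "app"
        instrumentation.opentelemetry.io/container-names: "app"
    spec:
      containers:
      - name: app
        image: quay.io/example/my-java-app:latest
        ports:
        - containerPort: 8080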
|
[
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5",
"apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt",
"instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"",
"instrumentation.opentelemetry.io/inject-dotnet: \"true\"",
"instrumentation.opentelemetry.io/inject-go: \"true\"",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny",
"oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>",
"instrumentation.opentelemetry.io/inject-java: \"true\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"",
"instrumentation.opentelemetry.io/inject-python: \"true\"",
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"",
"instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/red_hat_build_of_opentelemetry/otel-configuration-of-instrumentation
|
Chapter 3. Adding storage resources for hybrid or Multicloud
|
Chapter 3. Adding storage resources for hybrid or Multicloud 3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Optional: Enter an Endpoint . Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 3.3, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage Object Storage . Click the Backing Store tab to view all the backing stores. 3.2. Overriding the default backing store You can use the manualDefaultBackingStore flag to override the default NooBaa backing store and remove it if you do not want to use the default backing store configuration. This provides flexibility to customize your backing store configuration and tailor it to your specific needs. By leveraging this feature, you can further optimize your system and enhance its performance. Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Check if noobaa-default-backing-store is present: Patch the NooBaa CR to enable manualDefaultBackingStore : Important Use the Multicloud Object Gateway CLI to create a new backing store and update accounts. Create a new default backing store to override the default backing store. For example: Replace NEW-DEFAULT-BACKING-STORE with the name you want for your new default backing store. Update the admin account to use the new default backing store as its default resource: Replace NEW-DEFAULT-BACKING-STORE with the name of the backing store from the step. Updating the default resource for admin accounts ensures that the new configuration is used throughout your system. Configure the default-bucketclass to use the new default backingstore: Optional: Delete the noobaa-default-backing-store. Delete all instances of and buckets associated with noobaa-default-backing-store and update the accounts using it as resource. 
Delete the noobaa-default-backing-store: You must enable the manualDefaultBackingStore flag before proceeding. Additionally, it is crucial to update all accounts that use the default resource and delete all instances of and buckets associated with the default backing store to ensure a smooth transition. 3.3. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters. Add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 3.3.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 3.3.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 3.3.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 3.3.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 3.3.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 3.4, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 3.3.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <aws-region-name> The AWS bucket region. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. <aws-region-name> The AWS bucket region. 3.3.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. 
For example, for IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> an existing IBM COS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to the MCG which endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name for the backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name.
This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name> The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name. It is recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 3.4.
Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class. Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage Object Storage . 
Click the Bucket Class tab and search for the new Bucket Class. 3.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button in the OpenShift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click the Action Menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file. Make the required changes in this file and click Save . 3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click the Action Menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save .
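In addition to the console-based verification steps above, you can optionally cross-check the Multicloud Object Gateway resources from the command line. This is a generic sketch that assumes the default openshift-storage namespace; replace <backingstore_name> with the name of your backing store.

oc get backingstore -n openshift-storage
oc get bucketclass -n openshift-storage
oc describe backingstore <backingstore_name> -n openshift-storage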
|
[
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"oc get backingstore NAME TYPE PHASE AGE noobaa-default-backing-store pv-pool Creating 102s",
"oc patch noobaa/noobaa --type json --patch='[{\"op\":\"add\",\"path\":\"/spec/manualDefaultBackingStore\",\"value\":true}]'",
"noobaa backingstore create pv-pool _NEW-DEFAULT-BACKING-STORE_ --num-volumes 1 --pv-size-gb 16",
"noobaa account update [email protected] --new_default_resource=_NEW-DEFAULT-BACKING-STORE_",
"oc patch Bucketclass noobaa-default-bucket-class -n openshift-storage --type=json --patch='[{\"op\": \"replace\", \"path\": \"/spec/placementPolicy/tiers/0/backingStores/0\", \"value\": \"NEW-DEFAULT-BACKING-STORE\"}]'",
"oc delete backingstore noobaa-default-backing-store -n openshift-storage | oc patch -n openshift-storage backingstore/noobaa-default-backing-store --type json --patch='[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]'",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> --region <aws-region-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> region: <aws-region-name> type: aws-s3",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/adding-storage-resources-for-hybrid-or-multicloud_rhodf
|
Chapter 4. KafkaSpec schema reference
|
Chapter 4. KafkaSpec schema reference Used in: Kafka Property Property type Description kafka KafkaClusterSpec Configuration of the Kafka cluster. zookeeper ZookeeperClusterSpec Configuration of the ZooKeeper cluster. This section is required when running a ZooKeeper-based Apache Kafka cluster. entityOperator EntityOperatorSpec Configuration of the Entity Operator. clusterCa CertificateAuthority Configuration of the cluster certificate authority. clientsCa CertificateAuthority Configuration of the clients certificate authority. cruiseControl CruiseControlSpec Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. jmxTrans JmxTransSpec The jmxTrans property has been deprecated. JMXTrans is deprecated and its related resources were removed in Streams for Apache Kafka 2.5. As of Streams for Apache Kafka 2.5, JMXTrans is no longer supported and this option is ignored. kafkaExporter KafkaExporterSpec Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example, consumer group lag at the topic/partition level. maintenanceTimeWindows string array A list of time windows for maintenance tasks (that is, certificate renewal). Each time window is defined by a cron expression.
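To illustrate how a few of these properties fit together, the following minimal sketch shows a hypothetical Kafka custom resource; the cluster name my-cluster, the single plain listener, and the ephemeral storage are assumptions chosen for brevity rather than values taken from this schema reference.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}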
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaSpec-reference
|
Installation Guide
|
Installation Guide Red Hat Ceph Storage 8 Installing Red Hat Ceph Storage on Red Hat Enterprise Linux Red Hat Ceph Storage Documentation Team
|
[
"ceph soft nofile unlimited",
"USER_NAME soft nproc unlimited",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches ' Red Hat Ceph Storage '",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf update",
"subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms",
"dnf install cephadm-ansible",
"cd /usr/share/cephadm-ansible",
"mkdir -p inventory/staging inventory/production",
"[defaults] inventory = ./inventory/staging",
"touch inventory/staging/hosts touch inventory/production/hosts",
"NODE_NAME_1 NODE_NAME_2 [admin] ADMIN_NODE_NAME_1",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i inventory/staging/hosts PLAYBOOK.yml",
"ansible-playbook -i inventory/production/hosts PLAYBOOK.yml",
"ssh root@myhostname root@myhostname password: Permission denied, please try again.",
"echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf",
"systemctl restart sshd.service",
"ssh root@ HOST_NAME",
"ssh root@host01",
"ssh root@ HOST_NAME",
"ssh root@host01",
"adduser USER_NAME",
"adduser ceph-admin",
"passwd USER_NAME",
"passwd ceph-admin",
"cat << EOF >/etc/sudoers.d/ USER_NAME USDUSER_NAME ALL = (root) NOPASSWD:ALL EOF",
"cat << EOF >/etc/sudoers.d/ceph-admin ceph-admin ALL = (root) NOPASSWD:ALL EOF",
"chmod 0440 /etc/sudoers.d/ USER_NAME",
"chmod 0440 /etc/sudoers.d/ceph-admin",
"[ceph-admin@admin ~]USD ssh-keygen",
"ssh-copy-id USER_NAME @ HOST_NAME",
"[ceph-admin@admin ~]USD ssh-copy-id ceph-admin@host01",
"[ceph-admin@admin ~]USD touch ~/.ssh/config",
"Host host01 Hostname HOST_NAME User USER_NAME Host host02 Hostname HOST_NAME User USER_NAME",
"Host host01 Hostname host01 User ceph-admin Host host02 Hostname host02 User ceph-admin Host host03 Hostname host03 User ceph-admin",
"[ceph-admin@admin ~]USD chmod 600 ~/.ssh/config",
"host02 host03 host04 [admin] host01",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host01",
"cephadm bootstrap --cluster-network NETWORK_CIDR --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --yes-i-know",
"cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --yes-i-know",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --allow-fqdn-hostname --registry-json REGISTRY_JSON",
"cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --allow-fqdn-hostname --registry-json /etc/mylogin.json",
"{ \"url\":\" REGISTRY_URL \", \"username\":\" USER_NAME \", \"password\":\" PASSWORD \" }",
"{ \"url\":\"registry.redhat.io\", \"username\":\"myuser1\", \"password\":\"mypassword1\" }",
"cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json",
"cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json",
"service_type: host addr: host01 hostname: host01 --- service_type: host addr: host02 hostname: host02 --- service_type: host addr: host03 hostname: host03 --- service_type: host addr: host04 hostname: host04 --- service_type: mon placement: host_pattern: \"host[0-2]\" --- service_type: osd service_id: my_osds placement: host_pattern: \"host[1-3]\" data_devices: all: true",
"cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"su - SSH_USER_NAME",
"su - ceph Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0",
"[ceph@host01 ~]USD ssh host01 Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0",
"sudo cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"sudo cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --all --matches=\"*Ceph*\"",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf install -y podman httpd-tools",
"mkdir -p /opt/registry/{auth,certs,data}",
"htpasswd -bBc /opt/registry/auth/htpasswd PRIVATE_REGISTRY_USERNAME PRIVATE_REGISTRY_PASSWORD",
"htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS: LOCAL_NODE_FQDN \"",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS:admin.lab.redhat.com\"",
"ln -s /opt/registry/certs/domain.crt /opt/registry/certs/domain.cert",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \" LOCAL_NODE_FQDN \"",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"scp /opt/registry/certs/domain.crt root@host01:/etc/pki/ca-trust/source/anchors/ ssh root@host01 update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"run --restart=always --name NAME_OF_CONTAINER -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2",
"podman run --restart=always --name myprivateregistry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2",
"unqualified-search-registries = [\"registry.redhat.io\", \"registry.access.redhat.com\", \"registry.fedoraproject.org\", \"registry.centos.org\", \"docker.io\"]",
"login registry.redhat.io",
"run -v / CERTIFICATE_DIRECTORY_PATH :/certs:Z -v / CERTIFICATE_DIRECTORY_PATH /domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds RED_HAT_CUSTOMER_PORTAL_LOGIN : RED_HAT_CUSTOMER_PORTAL_PASSWORD --dest-cert-dir=./certs/ --dest-creds PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD docker://registry.redhat.io/ SRC_IMAGE : SRC_TAG docker:// LOCAL_NODE_FQDN :5000/ DST_IMAGE : DST_TAG",
"podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-8-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/rhceph-8-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-node-exporter:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/grafana-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/grafana-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-alertmanager:v4.12",
"curl -u PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD https:// LOCAL_NODE_FQDN :5000/v2/_catalog",
"curl -u myregistryusername:myregistrypassword1 https://admin.lab.redhat.com:5000/v2/_catalog {\"repositories\":[\"openshift4/ose-prometheus\",\"openshift4/ose-prometheus-alertmanager\",\"openshift4/ose-prometheus-node-exporter\",\"rhceph/rhceph-8-dashboard-rhel9\",\"rhceph/rhceph-8-rhel9\"]}",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\"",
"ansible-playbook -vvv -i INVENTORY_HOST_FILE_ cephadm-set-container-insecure-registries.yml -e insecure_registry= REGISTRY_URL",
"ansible-playbook -vvv -i hosts cephadm-set-container-insecure-registries.yml -e insecure_registry=host01:5050",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit host02",
"cephadm --image PRIVATE_REGISTRY_NODE_FQDN :5000/ CUSTOM_IMAGE_NAME : IMAGE_TAG bootstrap --mon-ip IP_ADDRESS --registry-url PRIVATE_REGISTRY_NODE_FQDN :5000 --registry-username PRIVATE_REGISTRY_USERNAME --registry-password PRIVATE_REGISTRY_PASSWORD",
"cephadm --image admin.lab.redhat.com:5000/rhceph-8-rhel9:latest bootstrap --mon-ip 10.10.128.68 --registry-url admin.lab.redhat.com:5000 --registry-username myregistryusername --registry-password myregistrypassword1",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD",
"ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1",
"ceph config set mgr mgr/cephadm/ OPTION_NAME CUSTOM_REGISTRY_NAME / CONTAINER_NAME",
"container_image_prometheus container_image_grafana container_image_alertmanager container_image_node_exporter",
"ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer",
"ceph orch redeploy node-exporter",
"ceph config rm mgr mgr/cephadm/ OPTION_NAME",
"ceph config rm mgr mgr/cephadm/container_image_prometheus",
"[ansible@admin ~]USD cd /usr/share/cephadm-ansible",
"ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user= USER_NAME -e cephadm_pubkey_path= home/cephadm/ceph.key -e admin_node= ADMIN_NODE_NAME_1",
"[ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01 [ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01",
"cephadm shell ceph -s",
"cephadm shell ceph -s",
"exit",
"podman ps",
"cephadm shell ceph -s cluster: id: f64f341c-655d-11eb-8778-fa163e914bcc health: HEALTH_OK services: mon: 3 daemons, quorum host01,host02,host03 (age 94m) mgr: host01.lbnhug(active, since 59m), standbys: host02.rofgay, host03.ohipra mds: 1/1 daemons up, 1 standby osd: 18 osds: 18 up (since 10m), 18 in (since 10m) rgw: 4 daemons active (2 hosts, 1 zones) data: volumes: 1/1 healthy pools: 8 pools, 225 pgs objects: 230 objects, 9.9 KiB usage: 271 MiB used, 269 GiB / 270 GiB avail pgs: 225 active+clean io: client: 85 B/s rd, 0 op/s rd, 0 op/s wr",
".Syntax [source,subs=\"verbatim,quotes\"] ---- ceph cephadm registry-login --registry-url _CUSTOM_REGISTRY_NAME_ --registry_username _REGISTRY_USERNAME_ --registry_password _REGISTRY_PASSWORD_ ----",
".Example ---- ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1 ----",
"ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"[ceph-admin@admin ~]USD cat hosts host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"ceph orch host add NEWHOST",
"ceph orch host add host02 Added host 'host02' with addr '10.10.128.69' ceph orch host add host03 Added host 'host03' with addr '10.10.128.70'",
"ceph orch host add HOSTNAME IP_ADDRESS",
"ceph orch host add host02 10.10.128.69 Added host 'host02' with addr '10.10.128.69'",
"ceph orch host ls",
"ceph orch host add HOSTNAME IP_ADDR",
"ceph orch host add host01 10.10.128.68",
"ceph orch host set-addr HOSTNAME IP_ADDR",
"ceph orch host set-addr HOSTNAME IPV4_ADDRESS",
"service_type: host addr: hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: hostname: host03 labels: - mon - osd - mgr --- service_type: host addr: hostname: host04 labels: - mon - osd",
"ceph orch apply -i hosts.yaml Added host 'host02' with addr '10.10.128.69' Added host 'host03' with addr '10.10.128.70' Added host 'host04' with addr '10.10.128.71'",
"cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml",
"ceph orch host ls HOST ADDR LABELS STATUS host02 host02 mon osd mgr host03 host03 mon osd mgr host04 host04 mon osd",
"cephadm shell",
"ceph orch host add HOST_NAME HOST_ADDRESS",
"ceph orch host add host03 10.10.128.70",
"cephadm shell",
"ceph orch host ls",
"ceph orch host drain HOSTNAME",
"ceph orch host drain host02",
"ceph orch osd rm status",
"ceph orch ps HOSTNAME",
"ceph orch ps host02",
"ceph orch host rm HOSTNAME",
"ceph orch host rm host02",
"cephadm shell",
"ceph orch host label add HOSTNAME LABEL",
"ceph orch host label add host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label rm HOSTNAME LABEL",
"ceph orch host label rm host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls HOST ADDR LABELS STATUS host01 _admin mon osd mgr host02 mon osd mgr mylabel",
"ceph orch apply DAEMON --placement=\"label: LABEL \"",
"ceph orch apply prometheus --placement=\"label:mylabel\"",
"vi placement.yml",
"service_type: prometheus placement: label: \"mylabel\"",
"ceph orch apply -i FILENAME",
"ceph orch apply -i placement.yml Scheduled prometheus update...",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=prometheus NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID prometheus.host02 host02 *:9095 running (2h) 8m ago 2h 85.3M - 2.22.2 ac25aac5d567 ad8c7593d7c0",
"ceph orch apply mon 5",
"ceph orch apply mon --unmanaged",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host01 mon",
"ceph orch host ls",
"ceph orch host label add host02 mon ceph orch host label add host03 mon ceph orch host ls HOST ADDR LABELS STATUS host01 mon host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm generate-key",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key",
"[ceph-admin@admin cephadm-ansible]USDceph cephadm clear-key",
"[ceph-admin@admin cephadm-ansible]USD ceph mgr fail",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user <user>",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user user",
"ceph cephadm get-pub-key > ~/ceph.pub",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ceph.pub USER @ HOST",
"[ceph-admin@admin cephadm-ansible]USD ssh-copy-id ceph-admin@host01",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr host04 host05 host06",
"ceph orch host label add HOSTNAME _admin",
"ceph orch host label add host03 _admin",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr,_admin host04 host05 host06",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host02 mon ceph orch host label add host03 mon",
"ceph orch host ls",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"ceph orch apply mon NODE:IP_ADDRESS_OR_NETWORK_NAME [ NODE:IP_ADDRESS_OR_NETWORK_NAME ...]",
"ceph orch apply mon host02:10.10.128.69 host03:mynetwork",
"ceph orch apply mgr NUMBER_OF_DAEMONS",
"ceph orch apply mgr 3",
"ceph orch apply mgr --placement \" HOSTNAME1 HOSTNAME2 HOSTNAME3 \"",
"ceph orch apply mgr --placement \"host02 host03 host04\"",
"ceph orch device ls [--hostname= HOSTNAME1 HOSTNAME2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch daemon add osd HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph orch apply osd --all-available-devices",
"ansible-playbook -i hosts cephadm-clients.yml -extra-vars '{\"fsid\":\" FSID \", \"client_group\":\" ANSIBLE_GROUP_NAME \", \"keyring\":\" PATH_TO_KEYRING \", \"conf\":\" CONFIG_FILE \"}'",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"be3ca2b2-27db-11ec-892b-005056833d58\",\"client_group\":\"fs_clients\",\"keyring\":\"/etc/ceph/fs.keyring\", \"conf\": \"/etc/ceph/ceph.conf\"}'",
"ceph mgr module disable cephadm",
"ceph fsid",
"exit",
"cephadm rm-cluster --force --zap-osds --fsid FSID",
"cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"host02 host03 host04 [admin] host01 [clients] client01 client02 client03",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit CLIENT_GROUP_NAME | CLIENT_NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --limit clients",
"ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"client_group\":\" CLIENT_GROUP_NAME \",\"conf\":\" CEPH_CONFIGURATION_PATH \",\"keyring_dest\":\" KEYRING_DESTINATION_PATH \"}'",
"[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"266ee7a8-2a05-11eb-b846-5254002d4916\",\"keyring\":\"/etc/ceph/ceph.client.admin.keyring\",\"client_group\":\"clients\",\"conf\":\"/etc/ceph/ceph.conf\",\"keyring_dest\":\"/etc/ceph/custom.name.ceph.keyring\"}'",
"ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"conf\":\" CONF_PATH \"}'",
"ls -l /etc/ceph/ -rw-------. 1 ceph ceph 151 Jul 11 12:23 custom.name.ceph.keyring -rw-------. 1 ceph ceph 151 Jul 11 12:23 ceph.keyring -rw-------. 1 ceph ceph 269 Jul 11 12:23 ceph.conf",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR",
"[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword allow_fqdn_hostname: true cluster_network: 10.10.128.0/28",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml",
"TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml",
"TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :",
"[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE",
"[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml",
"cephadm adopt [-h] --name DAEMON_NAME --style STYLE [--cluster CLUSTER ] --legacy-dir [ LEGACY_DIR ] --config-json CONFIG_JSON ] [--skip-firewalld] [--skip-pull]",
"cephadm adopt --style=legacy --name prometheus.host02",
"cephadm ceph-volume inventory/simple/raw/lvm [-h] [--fsid FSID ] [--config-json CONFIG_JSON ] [--config CONFIG , -c CONFIG ] [--keyring KEYRING , -k KEYRING ]",
"cephadm ceph-volume inventory --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm check-host [--expect-hostname HOSTNAME ]",
"cephadm check-host --expect-hostname host02",
"cephadm shell deploy DAEMON_TYPE [-h] [--name DAEMON_NAME ] [--fsid FSID ] [--config CONFIG , -c CONFIG ] [--config-json CONFIG_JSON ] [--keyring KEYRING ] [--key KEY ] [--osd-fsid OSD_FSID ] [--skip-firewalld] [--tcp-ports TCP_PORTS ] [--reconfig] [--allow-ptrace] [--memory-request MEMORY_REQUEST ] [--memory-limit MEMORY_LIMIT ] [--meta-json META_JSON ]",
"cephadm shell deploy mon --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm enter [-h] [--fsid FSID ] --name NAME [command [command ...]]",
"cephadm enter --name 52c611f2b1d9",
"cephadm help",
"cephadm help",
"cephadm install PACKAGES",
"cephadm install ceph-common ceph-osd",
"cephadm --image IMAGE_ID inspect-image",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a inspect-image",
"cephadm list-networks",
"cephadm list-networks",
"cephadm ls [--no-detail] [--legacy-dir LEGACY_DIR ]",
"cephadm ls --no-detail",
"cephadm logs [--fsid FSID ] --name DAEMON_NAME cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -n NUMBER # Last N lines cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -f # Follow the logs",
"cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -n 20 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -f",
"cephadm prepare-host [--expect-hostname HOSTNAME ]",
"cephadm prepare-host cephadm prepare-host --expect-hostname host01",
"cephadm [-h] [--image IMAGE_ID ] pull",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a pull",
"cephadm registry-login --registry-url [ REGISTRY_URL ] --registry-username [ USERNAME ] --registry-password [ PASSWORD ] [--fsid FSID ] [--registry-json JSON_FILE ]",
"cephadm registry-login --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"cat REGISTRY_FILE { \"url\":\" REGISTRY_URL \", \"username\":\" REGISTRY_USERNAME \", \"password\":\" REGISTRY_PASSWORD \" }",
"cat registry_file { \"url\":\"registry.redhat.io\", \"username\":\"myuser\", \"password\":\"mypass\" } cephadm registry-login -i registry_file",
"cephadm rm-daemon [--fsid FSID ] [--name DAEMON_NAME ] [--force ] [--force-delete-data]",
"cephadm rm-daemon --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm rm-cluster [--fsid FSID ] [--force]",
"cephadm rm-cluster --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"ceph mgr module disable cephadm",
"cephadm rm-repo [-h]",
"cephadm rm-repo",
"cephadm run [--fsid FSID ] --name DAEMON_NAME",
"cephadm run --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm shell [--fsid FSID ] [--name DAEMON_NAME , -n DAEMON_NAME ] [--config CONFIG , -c CONFIG ] [--mount MOUNT , -m MOUNT ] [--keyring KEYRING , -k KEYRING ] [--env ENV , -e ENV ]",
"cephadm shell -- ceph orch ls cephadm shell",
"cephadm unit [--fsid FSID ] --name DAEMON_NAME start/stop/restart/enable/disable",
"cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 start",
"cephadm version",
"cephadm version"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/installation_guide/basic-red-hat-ceph-storage-considerations_install
|
1.5. Common Exploits and Attacks
|
1.5. Common Exploits and Attacks Table 1.1, "Common Exploits" details some of the most common exploits and entry points used by intruders to access organizational network resources. Key to these common exploits are the explanations of how they are performed and how administrators can properly safeguard their network against such attacks. Table 1.1. Common Exploits Exploit Description Notes Null or Default Passwords Leaving administrative passwords blank or using a default password set by the product vendor. This is most common in hardware such as routers and firewalls, but some services that run on Linux can contain default administrator passwords as well (though Red Hat Enterprise Linux 7 does not ship with them). Commonly associated with networking hardware such as routers, firewalls, VPNs, and network attached storage (NAS) appliances. Common in many legacy operating systems, especially those that bundle services (such as UNIX and Windows). Administrators sometimes create privileged user accounts in a rush and leave the password null, creating a perfect entry point for malicious users who discover the account. Default Shared Keys Secure services sometimes package default security keys for development or evaluation testing purposes. If these keys are left unchanged and are placed in a production environment on the Internet, all users with the same default keys have access to that shared-key resource, and any sensitive information that it contains. Most common in wireless access points and preconfigured secure server appliances. IP Spoofing A remote machine acts as a node on your local network, finds vulnerabilities in your servers, and installs a backdoor program or Trojan horse to gain control over your network resources. Spoofing is quite difficult, as it involves the attacker predicting TCP/IP sequence numbers to coordinate a connection to target systems, but several tools are available to assist crackers in exploiting such a vulnerability. Depends on the target system running services (such as rsh, telnet, FTP, and others) that use source-based authentication techniques, which are not recommended when compared to PKI or other forms of encrypted authentication used in ssh or SSL/TLS. Eavesdropping Collecting data that passes between two active nodes on a network by eavesdropping on the connection between the two nodes. This type of attack works mostly with plain text transmission protocols such as Telnet, FTP, and HTTP transfers. A remote attacker must have access to a compromised system on a LAN in order to perform such an attack; usually the cracker has used an active attack (such as IP spoofing or man-in-the-middle) to compromise a system on the LAN. Preventative measures include services with cryptographic key exchange, one-time passwords, or encrypted authentication to prevent password snooping; strong encryption during transmission is also advised. Service Vulnerabilities An attacker finds a flaw or loophole in a service run over the Internet; through this vulnerability, the attacker compromises the entire system and any data that it may hold, and could possibly compromise other systems on the network. HTTP-based services such as CGI are vulnerable to remote command execution and even interactive shell access. Even if the HTTP service runs as a non-privileged user such as "nobody", information such as configuration files and network maps can be read, or the attacker can start a denial of service attack that drains system resources or renders the system unavailable to other users. 
Services can sometimes have vulnerabilities that go unnoticed during development and testing; these vulnerabilities (such as buffer overflows, where attackers crash a service using arbitrary values that fill the memory buffer of an application, giving the attacker an interactive command prompt from which they may execute arbitrary commands) can give complete administrative control to an attacker. Administrators should make sure that services do not run as the root user, and should stay vigilant of patches and errata updates for applications from vendors or security organizations such as CERT and CVE. Application Vulnerabilities Attackers find faults in desktop and workstation applications (such as email clients) and execute arbitrary code, implant Trojan horses for future compromise, or crash systems. Further exploitation can occur if the compromised workstation has administrative privileges on the rest of the network. Workstations and desktops are more prone to exploitation because workers do not have the expertise or experience to prevent or detect a compromise; it is imperative to inform individuals of the risks they are taking when they install unauthorized software or open unsolicited email attachments. Safeguards can be implemented such that email client software does not automatically open or execute attachments. Additionally, the automatic update of workstation software using Red Hat Network or other system management services can alleviate the burdens of multi-seat security deployments. Denial of Service (DoS) Attacks An attacker or group of attackers coordinates an attack against an organization's network or server resources by sending unauthorized packets to the target host (either server, router, or workstation). This forces the resource to become unavailable to legitimate users. The most reported DoS case in the US occurred in 2000. Several highly trafficked commercial and government sites were rendered unavailable by a coordinated ping flood attack using several compromised systems with high-bandwidth connections acting as zombies, or redirected broadcast nodes. Source packets are usually forged (as well as rebroadcast), making investigation as to the true source of the attack difficult. Advances in ingress filtering (IETF RFC 2267) using iptables and Network Intrusion Detection Systems such as snort assist administrators in tracking down and preventing distributed DoS attacks.
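As a rough illustration of the ingress filtering mentioned above, the following iptables rules drop packets that arrive on an external interface while claiming a private (RFC 1918) source address. The interface name eth0 is an assumption and should be replaced with the actual external interface; a production filter would typically be more complete.

# Basic anti-spoofing ingress filter (sketch): discard packets arriving on the
# assumed external interface eth0 with a private source address.
iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth0 -s 172.16.0.0/12 -j DROP
iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP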
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Common_Exploits_and_Attacks
|
11.4. Enabling and Disabling User Accounts
|
11.4. Enabling and Disabling User Accounts The administrator can disable and enable active user accounts. Disabling a user account deactivates the account. Disabled user accounts cannot be used to authenticate. A user whose account has been disabled cannot log into IdM and cannot use IdM services, such as Kerberos, or perform any tasks. Disabled user accounts still exist within IdM and all of the associated information remains unchanged. Unlike preserved user accounts, disabled user accounts remain in the active state. Therefore, they are displayed in the output of the ipa user-find command. For example: Any disabled user account can be enabled again. Note After disabling a user account, existing connections remain valid until the user's Kerberos TGT and other tickets expire. After the ticket expires, the user will not be able to renew it. Enabling and Disabling User Accounts in the Web UI Select the Identity → Users tab. From the Active users list, select the required user or users, and then click Disable or Enable. Figure 11.12. Disabling or Enabling a User Account Disabling and Enabling User Accounts from the Command Line To disable a user account, use the ipa user-disable command. To enable a user account, use the ipa user-enable command.
|
[
"ipa user-find User login: user First name: User Last name: User Home directory: /home/user Login shell: /bin/sh UID: 1453200009 GID: 1453200009 Account disabled: True Password: False Kerberos keys available: False",
"ipa user-disable user_login ---------------------------- Disabled user account \"user_login\" ----------------------------",
"ipa user-enable user_login ---------------------------- Enabled user account \"user_login\" ----------------------------"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/activating_and_deactivating_user_accounts
|
7.2.2. Window Managers
|
7.2.2. Window Managers Window managers are X client programs which are either part of a desktop environment or, in some cases, standalone. Their primary purpose is to control the way graphical windows are positioned, resized, or moved. Window managers also control title bars, window focus behavior, and user-specified key and mouse button bindings. Four window managers are included with Red Hat Enterprise Linux: kwin - The KWin window manager is the default window manager for KDE. It is an efficient window manager which supports custom themes. metacity - The Metacity window manager is the default window manager for GNOME. It is a simple and efficient window manager which supports custom themes. mwm - The Motif window manager is a basic, standalone window manager. Since it is designed to be a standalone window manager, it should not be used in conjunction with GNOME or KDE. twm - The minimalist Tab Window Manager , which provides the most basic tool set of any of the window managers and can be used either as a standalone or with a desktop environment. It is installed as part of the X11R6.8 release. These window managers can be run without desktop environments to gain a better sense of their differences. To do this, type the xinit -e <path-to-window-manager> command, where <path-to-window-manager> is the location of the window manager binary file. The binary file can be found by typing which <window-manager-name> , where <window-manager-name> is the name of the window manager you are querying.
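For example, to try Metacity on its own in this way, you could run the following commands; the binary path shown is only illustrative and should match whatever the which command reports on your system.

# Locate the window manager binary, then start a bare X session with it.
which metacity
xinit -e /usr/bin/metacity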
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-x-clients-winmanagers
|
Chapter 11. Pacemaker Rules
|
Chapter 11. Pacemaker Rules Rules can be used to make your configuration more dynamic. One use of rules might be to assign machines to different processing groups (using a node attribute) based on time and to then use that attribute when creating location constraints. Each rule can contain a number of expressions, date-expressions and even other rules. The results of the expressions are combined based on the rule's boolean-op field to determine if the rule ultimately evaluates to true or false. What happens depends on the context in which the rule is being used. Table 11.1. Properties of a Rule Field Description role Limits the rule to apply only when the resource is in that role. Allowed values: Started, Slave, and Master. NOTE: A rule with role="Master" cannot determine the initial location of a clone instance. It will only affect which of the active instances will be promoted. score The score to apply if the rule evaluates to true. Limited to use in rules that are part of location constraints. score-attribute The node attribute to look up and use as a score if the rule evaluates to true. Limited to use in rules that are part of location constraints. boolean-op How to combine the result of multiple expression objects. Allowed values: and and or. The default value is and. 11.1. Node Attribute Expressions Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. Table 11.2. Properties of an Expression Field Description attribute The node attribute to test. type Determines how the value(s) should be tested. Allowed values: string, integer, version. The default value is string. operation The comparison to perform. Allowed values: * lt - True if the node attribute's value is less than value * gt - True if the node attribute's value is greater than value * lte - True if the node attribute's value is less than or equal to value * gte - True if the node attribute's value is greater than or equal to value * eq - True if the node attribute's value is equal to value * ne - True if the node attribute's value is not equal to value * defined - True if the node has the named attribute * not_defined - True if the node does not have the named attribute value User-supplied value for comparison (required) In addition to any attributes added by the administrator, the cluster defines special, built-in node attributes for each node that can also be used, as described in Table 11.3, "Built-in Node Attributes". Table 11.3. Built-in Node Attributes Name Description #uname Node name #id Node ID #kind Node type. Possible values are cluster, remote, and container. The value of #kind is remote for Pacemaker Remote nodes created with the ocf:pacemaker:remote resource, and container for Pacemaker Remote guest nodes and bundle nodes. #is_dc true if this node is a Designated Controller (DC), false otherwise #cluster_name The value of the cluster-name cluster property, if set #site_name The value of the site-name node attribute, if set, otherwise identical to #cluster_name #role The role the relevant multistate resource has on this node. Valid only within a rule for a location constraint for a multistate resource.
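As a minimal sketch of how the rule and expression properties above are used in practice, the following pcs command creates a rule-based location constraint. The resource name Webserver, the node attribute processinggroup, and the value fast are hypothetical; the score of 500 is applied only on nodes where the expression evaluates to true.

# Prefer nodes whose node attribute "processinggroup" equals "fast" for the
# hypothetical resource "Webserver" (score 500 when the rule is true).
pcs constraint location Webserver rule score=500 processinggroup eq fast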
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-pacemakerrules-haar
|
Chapter 8. MachineConfigPool [machineconfiguration.openshift.io/v1]
|
Chapter 8. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigPoolSpec is the spec for MachineConfigPool resource. status object MachineConfigPoolStatus is the status for MachineConfigPool resource. 8.1.1. .spec Description MachineConfigPoolSpec is the spec for MachineConfigPool resource. Type object Property Type Description configuration object The targeted MachineConfig object for the machine config pool. machineConfigSelector object machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. maxUnavailable integer-or-string maxUnavailable defines either an integer number or percentage of nodes in the pool that can go Unavailable during an update. This includes nodes Unavailable for any reason, including user initiated cordons, failing nodes, etc. The default value is 1. A value larger than 1 will mean multiple nodes going unavailable during the update, which may affect your workload stress on the remaining nodes. You cannot set this value to 0 to stop updates (it will default back to 1); to stop updates, use the 'paused' property instead. Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards, even if maxUnavailable is greater than one. nodeSelector object nodeSelector specifies a label selector for Machines paused boolean paused specifies whether or not changes to this machine config pool should be stopped. This includes generating new desiredMachineConfig and update of machines. 8.1.2. .spec.configuration Description The targeted MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.3. .spec.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 8.1.4. .spec.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. 
Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.5. .spec.machineConfigSelector Description machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.6. .spec.machineConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.7. .spec.machineConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.8. .spec.nodeSelector Description nodeSelector specifies a label selector for Machines. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.9. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.10. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.11. .status Description MachineConfigPoolStatus is the status for the MachineConfigPool resource. Type object Property Type Description certExpirys array certExpirys keeps track of important certificate expiration data certExpirys[] object certExpiry contains the bundle name and the expiry date conditions array conditions represents the latest available observations of current state. conditions[] object MachineConfigPoolCondition contains condition information for a MachineConfigPool. configuration object configuration represents the current MachineConfig object for the machine config pool. degradedMachineCount integer degradedMachineCount represents the total number of machines marked degraded (or unreconcilable). A node is marked degraded if applying a configuration failed. machineCount integer machineCount represents the total number of machines in the machine config pool. observedGeneration integer observedGeneration represents the generation observed by the controller. readyMachineCount integer readyMachineCount represents the total number of ready machines targeted by the pool. unavailableMachineCount integer unavailableMachineCount represents the total number of unavailable (non-ready) machines targeted by the pool. A node is marked unavailable if it is in updating state or the NodeReady condition is false. updatedMachineCount integer updatedMachineCount represents the total number of machines targeted by the pool that have the CurrentMachineConfig as their config. 8.1.12. .status.certExpirys Description certExpirys keeps track of important certificate expiration data Type array 8.1.13.
.status.certExpirys[] Description certExpiry contains the bundle name and the expiry date Type object Required bundle subject Property Type Description bundle string bundle is the name of the bundle in which the subject certificate resides expiry string expiry is the date after which the certificate will no longer be valid subject string subject is the subject of the certificate 8.1.14. .status.conditions Description conditions represents the latest available observations of current state. Type array 8.1.15. .status.conditions[] Description MachineConfigPoolCondition contains condition information for a MachineConfigPool. Type object Property Type Description lastTransitionTime `` lastTransitionTime is the timestamp corresponding to the last status change of this condition. message string message is a human readable description of the details of the last transition, complementing reason. reason string reason is a brief machine readable explanation for the condition's last transition. status string status of the condition, one of ('True', 'False', 'Unknown'). type string type of the condition, currently ('Done', 'Updating', 'Failed'). 8.1.16. .status.configuration Description configuration represents the current MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation.
Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.17. .status.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 8.1.18. .status.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 . Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. 
TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigpools DELETE : delete collection of MachineConfigPool GET : list objects of kind MachineConfigPool POST : create a MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} DELETE : delete a MachineConfigPool GET : read the specified MachineConfigPool PATCH : partially update the specified MachineConfigPool PUT : replace the specified MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status GET : read status of the specified MachineConfigPool PATCH : partially update status of the specified MachineConfigPool PUT : replace status of the specified MachineConfigPool 8.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigpools HTTP method DELETE Description delete collection of MachineConfigPool Table 8.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfigPool Table 8.2. HTTP responses HTTP code Response body 200 - OK MachineConfigPoolList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfigPool Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body MachineConfigPool schema Table 8.5.
HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 202 - Accepted MachineConfigPool schema 401 - Unauthorized Empty 8.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the MachineConfigPool HTTP method DELETE Description delete a MachineConfigPool Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfigPool Table 8.9. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfigPool Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfigPool Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body MachineConfigPool schema Table 8.14. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty 8.2.3. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status Table 8.15. Global path parameters Parameter Type Description name string name of the MachineConfigPool HTTP method GET Description read status of the specified MachineConfigPool Table 8.16. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfigPool Table 8.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.18. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfigPool Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body MachineConfigPool schema Table 8.21. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty
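As a brief illustration of how these endpoints are typically consumed, the following sketch uses the oc client rather than raw HTTP requests. It assumes a cluster where you are logged in with sufficient RBAC permissions, a pool named worker, and that the jq utility is installed; none of these names come from the reference above.
oc get --raw /apis/machineconfiguration.openshift.io/v1/machineconfigpools | jq '.items[].metadata.name'
oc get machineconfigpool worker -o jsonpath='{.status.updatedMachineCount}/{.status.machineCount} updated, {.status.readyMachineCount} ready{"\n"}'
The first command reads the collection endpoint listed in section 8.2 directly; the second uses the typed client to print the machineCount, readyMachineCount, and updatedMachineCount status fields described in section 8.1.11.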
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_apis/machineconfigpool-machineconfiguration-openshift-io-v1
|
Chapter 12. Migrating virtual machines
|
Chapter 12. Migrating virtual machines If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host. 12.1. How migrating virtual machines works You can migrate a running virtual machine (VM) without interrupting the workload, with only a small downtime, by using a live migration. By default, the migrated VM is transient on the destination host, and remains defined also on the source host. The essential part of a live migration is transferring the state of the VM's memory and of any attached virtualized devices to a destination host. For the VM to remain functional on the destination host, the VM's disk images must remain available to it. To migrate a shut-off VM, you must use an offline migration, which copies the VM's configuration to the destination host. For details, see the following table. Table 12.1. VM migration types Migration type Description Use case Storage requirements Live migration The VM continues to run on the source host machine while KVM is transferring the VM's memory pages to the destination host. When the migration is nearly complete, KVM very briefly suspends the VM, and resumes it on the destination host. Useful for VMs that require constant uptime. However, for VMs that modify memory pages faster than KVM can transfer them, such as VMs under heavy I/O load, the live migration might fail. (1) The VM's disk images must be accessible both to the source host and the destination host during the migration. (2) Offline migration Moves the VM's configuration to the destination host Recommended for shut-off VMs and in situations when shutting down the VM does not disrupt your workloads. The VM's disk images do not have to be accessible to the source or destination host during migration, and can be copied or moved manually to the destination host instead. (1) For possible solutions, see: Additional virsh migrate options for live migrations (2) To achieve this, use one of the following: Storage located on a shared network The --copy-storage-all parameter for the virsh migrate command, which copies disk image contents from the source to the destination over the network. Storage area network (SAN) logical units (LUNs). Ceph storage clusters Note For easier management of large-scale migrations, explore other Red Hat products, such as: OpenShift Virtualization Red Hat OpenStack Platform Additional resources Benefits of migrating virtual machines Sharing virtual machine disk images with other hosts 12.2. Benefits of migrating virtual machines Migrating virtual machines (VMs) can be useful for: Load balancing VMs can be moved to host machines with lower usage if their host becomes overloaded, or if another host is under-utilized. Hardware independence When you need to upgrade, add, or remove hardware devices on the host machine, you can safely relocate VMs to other hosts. This means that VMs do not experience any downtime for hardware improvements. Energy saving VMs can be redistributed to other hosts, and the unloaded host systems can thus be powered off to save energy and cut costs during low usage periods. Geographic migration VMs can be moved to another physical location for lower latency or when required for other reasons. 12.3. Limitations for migrating virtual machines Before migrating virtual machines (VMs) in RHEL 9, ensure you are aware of the migration's limitations. 
VMs that use certain features and configurations will not work correctly if migrated, or the migration will fail. Such features include: Device passthrough SR-IOV device assignment (With the exception of migrating a VM with an attached virtual function of a Mellanox networking device , which works correctly.) Mediated devices, such as vGPUs (With the exception of migrating a VM with an attached NVIDIA vGPU , which works correctly.) A migration between hosts that use Non-Uniform Memory Access (NUMA) pinning works only if the hosts have similar topology. However, the performance of running workloads might be negatively affected by the migration. Both the source and destination hosts must use specific RHEL versions that are supported for VM migration. See Supported hosts for virtual machine migration . The physical CPUs, both on the source host and the destination host, must be identical, otherwise the migration might fail. Any differences between the hosts in the following CPU-related areas can cause problems with the migration: CPU model Migrating between an Intel 64 host and an AMD64 host is unsupported, even though they share the x86-64 instruction set. For steps to ensure that a VM will work correctly after migrating to a host with a different CPU model, see Verifying host CPU compatibility for virtual machine migration . Physical machine firmware versions and settings 12.4. Migrating a virtual machine by using the command line If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host. You can perform a live migration or an offline migration . For differences between the two scenarios, see How migrating virtual machines works . Prerequisites Hypervisor: The source host and the destination host both use the KVM hypervisor. Network connection: The source host and the destination host are able to reach each other over the network. Use the ping utility to verify this. Open ports: Ensure the following ports are open on the destination host. Port 22 is needed for connecting to the destination host by using SSH. Port 16509 is needed for connecting to the destination host by using TLS. Port 16514 is needed for connecting to the destination host by using TCP. Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data (an example of opening these ports is shown at the end of this chapter). Hosts: For the migration to be supportable by Red Hat, the source host and destination host must be using specific operating systems and machine types. To ensure this is the case, see Supported hosts for virtual machine migration . CPU: The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration . Storage: The disk images of VMs that will be migrated are accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM. To ensure storage accessibility for both hosts, one of the following must apply: You are using storage area network (SAN) logical units (LUNs). You are using Ceph storage clusters . You have created a disk image with the same format and size as the source VM disk and you will use the --copy-storage-all parameter when migrating the VM. The disk image is located on a separate networked location. For instructions to set up such shared VM storage, see Sharing virtual machine disk images with other hosts .
Network bandwidth: When migrating a running VM, your network bandwidth must be higher than the rate in which the VM generates dirty memory pages. To obtain the dirty page rate of your VM before you start the live migration, do the following: Monitor the rate of dirty page generation of the VM for a short period of time. After the monitoring finishes, obtain its results: In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress if you do not pause the VM or lower its workload. To ensure that the live migration finishes successfully, Red Hat recommends that your network bandwidth is significantly greater than the VM's dirty page generation rate. Note The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment. Bridge tap network specifics: When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration. Connection protocol: When performing a VM migration, the virsh client on the source host can use one of several protocols to connect to the libvirt daemon on the destination host. Examples in the following procedure use an SSH connection, but you can choose a different one. If you want libvirt to use an SSH connection, ensure that the virtqemud socket is enabled and running on the destination host. If you want libvirt to use a TLS connection, ensure that the virtproxyd-tls socket is enabled and running on the destination host. If you want libvirt to use a TCP connection, ensure that the virtproxyd-tcp socket is enabled and running on the destination host. Procedure To migrate a VM from one host to another, use the virsh migrate command. Offline migration The following command migrates a shut-off example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel. Live migration The following command migrates the example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel. The VM keeps running during the migration. Wait for the migration to complete. The process might take some time depending on network bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh migrate , the CLI does not display any progress indicators except errors. When the migration is in progress, you can use the virsh domjobinfo utility to display the migration statistics. Multi-FD live migration You can use multiple parallel connections to the destination host during the live migration. This is also known as multiple file descriptors (multi-FD) migration. With multi-FD migration, you can speed up the migration by utilizing all of the available network bandwidth for the migration process. This example uses 4 multi-FD channels to migrate the <example_VM> VM. It is recommended to use one channel for each 10 Gbps of available network bandwidth. The default value is 2 channels. Live migration with an increased downtime limit To improve the reliability of a live migration, you can set the maxdowntime parameter, which specifies the maximum amount of time, in milliseconds, the VM can be paused during live migration. 
Setting a larger downtime can help to ensure the migration completes successfully. Post-copy migration If your VM has a large memory footprint, you can perform a post-copy migration, which transfers the source VM's CPU state first and immediately starts the migrated VM on the destination host. The source VM's memory pages are transferred after the migrated VM is already running on the destination host. Because of this, a post-copy migration can result in a smaller downtime of the migrated VM. However, the running VM on the destination host might try to access memory pages that have not yet been transferred, which causes a page fault . If too many page faults occur during the migration, the performance of the migrated VM can be severely degraded. Given the potential complications of a post-copy migration, it is recommended to use the following command that starts a standard live migration and switches to a post-copy migration if the live migration cannot be finished in a specified amount of time. Auto-converged live migration If your VM is under a heavy memory workload, you can use the --auto-converge option. This option automatically slows down the execution speed of the VM's CPU. As a consequence, this CPU throttling can help to slow down memory writes, which means the live migration might succeed even in VMs with a heavy memory workload. However, the CPU throttling does not help to resolve workloads where memory writes are not directly related to CPU execution speed, and it can negatively impact the performance of the VM during a live migration. Verification For offline migration: On the destination host, list the available VMs to verify that the VM was migrated successfully. For live migration: On the destination host, list the available VMs to verify the state of the destination VM: If the state of the VM is listed as running , it means that the migration is finished. However, if the live migration is still in progress, the state of the destination VM will be listed as paused . For post-copy migration: On the source host, list the available VMs to verify the state of the source VM. On the destination host, list the available VMs to verify the state of the destination VM. If the state of the source VM is listed as shut off and the state of the destination VM is listed as running , it means that the migration is finished. Additional resources virsh migrate --help command virsh (1) man page on your system 12.5. Live migrating a virtual machine by using the web console If you want to migrate a virtual machine (VM) that is performing tasks which require it to be constantly running, you can migrate that VM to another KVM host without shutting it down. This is also known as live migration. The following instructions explain how to do so by using the web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plugin is installed on your system . Hypervisor: The source host and the destination host both use the KVM hypervisor. Hosts: The source and destination hosts are running. Open ports: Ensure the following ports are open on the destination host. Port 22 is needed for connecting to the destination host by using SSH. Port 16509 is needed for connecting to the destination host by using TLS. Port 16514 is needed for connecting to the destination host by using TCP. 
Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data. CPU: The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration . Storage: The disk images of VMs that will be migrated are accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM. To ensure storage accessibility for both hosts, one of the following must apply: You are using storage area network (SAN) logical units (LUNs). You are using Ceph storage clusters . You have created a disk image with the same format and size as the source VM disk and you will use the --copy-storage-all parameter when migrating the VM. The disk image is located on a separate networked location. For instructions to set up such shared VM storage, see Sharing virtual machine disk images with other hosts . Network bandwidth: When migrating a running VM, your network bandwidth must be higher than the rate in which the VM generates dirty memory pages. To obtain the dirty page rate of your VM before you start the live migration, do the following on the command line: Monitor the rate of dirty page generation of the VM for a short period of time. After the monitoring finishes, obtain its results: In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress if you do not pause the VM or lower its workload. To ensure that the live migration finishes successfully, Red Hat recommends that your network bandwidth is significantly greater than the VM's dirty page generation rate. Note The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment. Bridge tap network specifics: When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration. Procedure In the Virtual Machines interface of the web console, click the Menu button ... of the VM that you want to migrate. A drop-down menu appears with controls for various VM operations. Click Migrate The Migrate VM to another host dialog appears. Enter the URI of the destination host. Configure the duration of the migration: Permanent - Do not check the box if you want to migrate the VM permanently. Permanent migration completely removes the VM configuration from the source host. Temporary - Temporary migration migrates a copy of the VM to the destination host. This copy is deleted from the destination host when the VM is shut down. The original VM remains on the source host. Click Migrate Your VM is migrated to the destination host. Verification To verify whether the VM has been successfully migrated and is working correctly: Confirm whether the VM appears in the list of VMs available on the destination host. Start the migrated VM and observe if it boots up. 12.6. Live migrating a virtual machine with an attached Mellanox virtual function As a Technology Preview, you can live migrate a virtual machine (VM) with an attached virtual function (VF) of a Mellanox networking device. Currently, this is only possible when using a Mellanox CX-7 networking device.
The VF on the Mellanox CX-7 networking device uses a new mlx5_vfio_pci driver, which adds functionality that is necessary for the live migration, and libvirt binds the new driver to the VF automatically. Limitations Currently, some virtualization features cannot be used when live migrating a VM with an attached Mellanox virtual function: Calculating dirty memory page rate generation of the VM. Currently, when migrating a VM with an attached Mellanox VF, live migration data and statistics provided by virsh domjobinfo and virsh domdirtyrate-calc commands are inaccurate, because the calculations only count guest RAM without including the impact of the attached VF. Using a post-copy live migration. Using a virtual I/O Memory Management Unit (vIOMMU) device in the VM. Important This feature is included in RHEL 9 only as a Technology Preview , which means it is not supported. Prerequisites You have a Mellanox CX-7 networking device with a firmware version that is equal to or greater than 28.36.1010 . Refer to Mellanox documentation for details about firmware versions. The mstflint package is installed on both the source and destination host: The Mellanox CX-7 networking device has VF_MIGRATION_MODE set to MIGRATION_ENABLED : You can set VF_MIGRATION_MODE to MIGRATION_ENABLED by using the following command: The openvswitch package is installed on both the source and destination host: All of the general SR-IOV devices prerequisites. For details, see Attaching SR-IOV networking devices to virtual machines All of the general VM migration prerequisites. For details, see Migrating a virtual machine by using the command line Procedure On the source host, set the Mellanox networking device to the switchdev mode. On the source host, create a virtual function on the Mellanox device. The /0000\:e1\:00.0/ part of the file path is based on the PCI address of the device. In the example it is: 0000:e1:00.0 On the source host, unbind the VF from its driver. You can view the PCI address of the VF by using the following command: On the source host, enable the migration function of the VF. In this example, pci/0000:e1:00.0/1 refers to the first VF on the Mellanox device with the given PCI address. On the source host, configure Open vSwitch (OVS) for the migration of the VF. If the Mellanox device is in switchdev mode, it cannot transfer data over the network. Ensure the openvswitch service is running. Enable hardware offloading to improve networking performance. Increase the maximum idle time to ensure network connections remain open during the migration. Create a new bridge in the OVS instance. Restart the openvswitch service. Add the physical Mellanox device to the OVS bridge. In this example, <bridge_name> is the name of the bridge you created in step d and enp225s0np0 is the network interface name of the Mellanox device. Add the VF of the Mellanox device to the OVS bridge. In this example, <bridge_name> is the name of the bridge you created in step d and enp225s0npf0vf0 is the network interface name of the VF. Repeat steps 1-5 on the destination host . On the source host, open a new file, such as mlx_vf.xml , and add the following XML configuration of the VF: <interface type='hostdev' managed='yes'> <mac address='52:54:00:56:8c:f7'/> <source> <address type='pci' domain='0x0000' bus='0xe1' slot='0x00' function='0x1'/> </source> </interface> This example configures a pass-through of the VF as a network interface for the VM. Ensure the MAC address is unique, and use the PCI address of the VF on the source host. 
On the source host, attach the VF XML file to the VM. In this example, mlx_vf.xml is the name of the XML file with the VF configuration. Use the --live option to attach the device to a running VM. On the source host, start the live migration of the running VM with the attached VF. For more details about performing a live migration, see Migrating a virtual machine by using the command line Verification In the migrated VM, view the network interface name of the Mellanox VF. In the migrated VM, check that the Mellanox VF works, for example: Additional resources Attaching SR-IOV networking devices to virtual machines Migrating a virtual machine by using the command line 12.7. Live migrating a virtual machine with an attached NVIDIA vGPU If you use virtual GPUs (vGPUs) in your virtualization workloads, you can live migrate a running virtual machine (VM) with an attached vGPU to another KVM host. Currently, this is only possible with NVIDIA GPUs. Prerequisites You have an NVIDIA GPU with an NVIDIA Virtual GPU Software Driver version that supports this functionality. Refer to the relevant NVIDIA vGPU documentation for more details. You have a correctly configured NVIDIA vGPU assigned to a VM. For instructions, see: Setting up NVIDIA vGPU devices Note It is also possible to live migrate a VM with multiple vGPU devices attached. The host uses RHEL 9.4 or later as the operating system. All of the vGPU migration prerequisites that are documented by NVIDIA. Refer to the relevant NVIDIA vGPU documentation for more details. All of the general VM migration prerequisites. For details, see Migrating a virtual machine by using the command line Limitations Certain NVIDIA GPU features can disable the migration. For more information, see the specific NVIDIA documentation for your graphics card. Some GPU workloads are not compatible with the downtime that happens during a migration. As a consequence, the GPU workloads might stop or crash. It is recommended to test if your workloads are compatible with the downtime before attempting a vGPU live migration. Currently, vGPU live migration fails if the vGPU driver version differs on the source and destination hosts. Currently, some general virtualization features cannot be used when live migrating a VM with an attached vGPU: Calculating dirty memory page rate generation of the VM. Currently, live migration data and statistics provided by virsh domjobinfo and virsh domdirtyrate-calc commands are inaccurate when migrating a VM with an attached vGPU, because the calculations only count guest RAM without including vRAM from the vGPU. Using a post-copy live migration. Using a virtual I/O Memory Management Unit (vIOMMU) device in the VM. Procedure For instructions on how to proceed with the live migration, see: Migrating a virtual machine by using the command line No additional parameters for the migration command are required for the attached vGPU device. Additional resources General NVIDIA vGPU documentation General NVIDIA AI Enterprise documentation 12.8. Sharing virtual machine disk images with other hosts To perform a live migration of a virtual machine (VM) between supported KVM hosts , you must also migrate the storage of the running VM in a way that makes it possible for the VM to read from and write to the storage during the migration process. One of the methods to do this is using shared VM storage. The following procedure provides instructions for sharing a locally stored VM image with the source host and the destination host by using the NFS protocol. 
Prerequisites The VM intended for migration is shut down. Optional: A host system is available for hosting the storage that is not the source or destination host, but both the source and the destination host can reach it through the network. This is the optimal solution for shared storage and is recommended by Red Hat. Make sure that NFS file locking is not used as it is not supported in KVM. The NFS protocol is installed and enabled on the source and destination hosts. See Deploying an NFS server . The virt_use_nfs SELinux boolean is set to on . Procedure Connect to the host that will provide shared storage. In this example, it is the example-shared-storage host: Create a directory on the example-shared-storage host that will hold the disk image and that will be shared with the migration hosts: Copy the disk image of the VM from the source host to the newly created directory. The following example copies the disk image example-disk-1 of the VM to the /var/lib/libvirt/shared-images/ directory of the example-shared-storage host: On the host that you want to use for sharing the storage, add the sharing directory to the /etc/exports file. The following example shares the /var/lib/libvirt/shared-images directory with the example-source-machine and example-destination-machine hosts: Run the exportfs -a command for the changes in the /etc/exports file to take effect. On both the source and destination host, mount the shared directory in the /var/lib/libvirt/images directory: Verification Start the VM on the source host and observe if it boots successfully. Additional resources Deploying an NFS server 12.9. Verifying host CPU compatibility for virtual machine migration For migrated virtual machines (VMs) to work correctly on the destination host, the CPUs on the source and the destination hosts must be compatible. To ensure that this is the case, calculate a common CPU baseline before you begin the migration. Note The instructions in this section use an example migration scenario with the following host CPUs: Source host: Intel Core i7-8650U Destination hosts: Intel Xeon CPU E5-2620 v2 Prerequisites Virtualization is installed and enabled on your system. You have administrator access to the source host and the destination host for the migration. Procedure On the source host, obtain its CPU features and paste them into a new XML file, such as domCaps-CPUs.xml . In the XML file, replace the <mode> </mode> tags with <cpu> </cpu> . Optional: Verify that the content of the domCaps-CPUs.xml file looks similar to the following: On the destination host, use the following command to obtain its CPU features: Add the obtained CPU features from the destination host to the domCaps-CPUs.xml file on the source host. Again, replace the <mode> </mode> tags with <cpu> </cpu> and save the file. Optional: Verify that the XML file now contains the CPU features from both hosts. Use the XML file to calculate the CPU feature baseline for the VM you intend to migrate. Open the XML configuration of the VM you intend to migrate, and replace the contents of the <cpu> section with the settings obtained in the step. If the VM is running, shut down the VM and start it again. steps Sharing virtual machine disk images with other hosts Migrating a virtual machine by using the command line Live-migrating a virtual machine by using the web console 12.10. 
Supported hosts for virtual machine migration For the virtual machine (VM) migration to work properly and be supported by Red Hat, the source and destination hosts must be specific RHEL versions and machine types. The following table shows supported VM migration paths. Table 12.2. Live migration compatibility Migration method Release type Future version example Support status Forward Minor release 9.0.1 9.1 On supported RHEL 9 systems: machine type q35 . Backward Minor release 9.1 9.0.1 On supported RHEL 9 systems: machine type q35 . Note Support level is different for other virtualization solutions provided by Red Hat, including RHOSP and OpenShift Virtualization.
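As referenced in the prerequisites of sections 12.4 and 12.5, several ports must be open on the destination host before a migration. The following sketch is only an illustration of one way to open them, assuming the destination host uses firewalld with its default zone; adjust it to your own firewall setup, and note that existing firewalld service definitions, such as ssh or libvirt, may already cover some of these ports.
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=16509/tcp
firewall-cmd --permanent --add-port=16514/tcp
firewall-cmd --permanent --add-port=49152-49215/tcp
firewall-cmd --reload
After reloading, you can confirm the result with firewall-cmd --list-ports before starting the migration.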
|
[
"virsh domdirtyrate-calc <example_VM> 30",
"virsh domstats <example_VM> --dirtyrate Domain: 'example-VM' dirtyrate.calc_status=2 dirtyrate.calc_start_time=200942 dirtyrate.calc_period=30 dirtyrate.megabytes_per_second=2",
"systemctl enable --now virtqemud.socket",
"systemctl enable --now virtproxyd-tls.socket",
"systemctl enable --now virtproxyd-tcp.socket",
"virsh migrate --offline --persistent <example_VM> qemu+ssh:// example-destination /system",
"virsh migrate --live --persistent <example_VM> qemu+ssh:// example-destination /system",
"virsh migrate --live --persistent --parallel --parallel-connections 4 <example_VM> qemu+ssh:// <example-destination> /system",
"virsh migrate-setmaxdowntime <example_VM> <time_interval_in_milliseconds>",
"virsh migrate --live --persistent --postcopy --timeout <time_interval_in_seconds> --timeout-postcopy <example_VM> qemu+ssh:// <example-destination> /system",
"virsh migrate --live --persistent --auto-converge <example_VM> qemu+ssh:// <example-destination> /system",
"virsh list --all Id Name State ---------------------------------- 10 example-VM-1 shut off",
"virsh list --all Id Name State ---------------------------------- 10 example-VM-1 running",
"virsh list --all Id Name State ---------------------------------- 10 example-VM-1 shut off",
"virsh list --all Id Name State ---------------------------------- 10 example-VM-1 running",
"virsh domdirtyrate-calc vm-name 30",
"virsh domstats vm-name --dirtyrate Domain: ' vm-name ' dirtyrate.calc_status=2 dirtyrate.calc_start_time=200942 dirtyrate.calc_period=30 dirtyrate.megabytes_per_second=2",
"dnf install mstflint",
"mstconfig -d <device_pci_address> query | grep -i VF_migration VF_MIGRATION_MODE MIGRATION_ENABLED(2)",
"mstconfig -d <device_pci_address> set VF_MIGRATION_MODE=2",
"dnf install openvswitch",
"devlink dev eswitch set pci/ <device_pci_address> mode switchdev",
"echo 1 > /sys/bus/pci/devices/0000\\:e1\\:00.0/sriov_numvfs",
"virsh nodedev-detach <vf_pci_address> --driver pci-stub",
"lshw -c network -businfo Bus info Device Class Description =========================================================================== pci@0000:e1:00.0 enp225s0np0 network MT2910 Family [ConnectX-7] pci@0000:e1:00.1 enp225s0v0 network ConnectX Family mlx5Gen Virtual Function",
"devlink port function set pci/0000:e1:00.0/1 migratable enable",
"systemctl start openvswitch",
"ovs-vsctl set Open_vSwitch . other_config:hw-offload=true",
"ovs-vsctl set Open_vSwitch . other_config:max-idle=300000",
"ovs-vsctl add-br <bridge_name>",
"systemctl restart openvswitch",
"ovs-vsctl add-port <bridge_name> enp225s0np0",
"ovs-vsctl add-port <bridge_name> enp225s0npf0vf0",
"<interface type='hostdev' managed='yes'> <mac address='52:54:00:56:8c:f7'/> <source> <address type='pci' domain='0x0000' bus='0xe1' slot='0x00' function='0x1'/> </source> </interface>",
"virsh attach-device <vm_name> mlx_vf.xml --live --config",
"virsh migrate --live --domain <vm_name> --desturi qemu+ssh:// <destination_host_ip_address> /system",
"ifconfig eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::a00:27ff:fe4e:66a1 prefixlen 64 scopeid 0x20<link> ether 08:00:27:4e:66:a1 txqueuelen 1000 (Ethernet) RX packets 100000 bytes 6543210 (6.5 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 100000 bytes 6543210 (6.5 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp4s0f0v0 : flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.3.10 netmask 255.255.255.0 broadcast 192.168.3.255 inet6 fe80::a00:27ff:fe4e:66c3 prefixlen 64 scopeid 0x20<link> ether 08:00:27:4e:66:c3 txqueuelen 1000 (Ethernet) RX packets 200000 bytes 12345678 (12.3 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 200000 bytes 12345678 (12.3 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0",
"ping -I <VF_interface_name> 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.3.10 <VF_interface_name> : 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=27.4 ms 64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=26.9 ms --- 8.8.8.8 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1002ms rtt min/avg/max/mdev = 26.944/27.046/27.148/0.102 ms",
"setsebool virt_use_nfs 1",
"ssh root@ example-shared-storage root@example-shared-storage's password: Last login: Mon Sep 24 12:05:36 2019 root~#",
"mkdir /var/lib/libvirt/shared-images",
"scp /var/lib/libvirt/images/ example-disk-1 .qcow2 root@ example-shared-storage :/var/lib/libvirt/shared-images/ example-disk-1 .qcow2",
"/var/lib/libvirt/shared-images example-source-machine (rw,no_root_squash) example-destination-machine (rw,no\\_root_squash)",
"exportfs -a",
"mount example-shared-storage :/var/lib/libvirt/shared-images /var/lib/libvirt/images",
"virsh domcapabilities | xmllint --xpath \"//cpu/mode[@name='host-model']\" - > domCaps-CPUs.xml",
"cat domCaps-CPUs.xml <cpu> <model fallback=\"forbid\">Skylake-Client-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"clflushopt\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaves\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"ibrs\"/> <feature policy=\"require\" name=\"amd-stibp\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"rsba\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> <feature policy=\"disable\" name=\"hle\"/> <feature policy=\"disable\" name=\"rtm\"/> </cpu>",
"virsh domcapabilities | xmllint --xpath \"//cpu/mode[@name='host-model']\" - <mode name=\"host-model\" supported=\"yes\"> <model fallback=\"forbid\">IvyBridge-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"pcid\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"arat\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaveopt\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> </mode>",
"cat domCaps-CPUs.xml <cpu> <model fallback=\"forbid\">Skylake-Client-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"clflushopt\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaves\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"ibrs\"/> <feature policy=\"require\" name=\"amd-stibp\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"rsba\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> <feature policy=\"disable\" name=\"hle\"/> <feature policy=\"disable\" name=\"rtm\"/> </cpu> <cpu> <model fallback=\"forbid\">IvyBridge-IBRS</model> <vendor>Intel</vendor> <feature policy=\"require\" name=\"ss\"/> <feature policy=\"require\" name=\"vmx\"/> <feature policy=\"require\" name=\"pdcm\"/> <feature policy=\"require\" name=\"pcid\"/> <feature policy=\"require\" name=\"hypervisor\"/> <feature policy=\"require\" name=\"arat\"/> <feature policy=\"require\" name=\"tsc_adjust\"/> <feature policy=\"require\" name=\"umip\"/> <feature policy=\"require\" name=\"md-clear\"/> <feature policy=\"require\" name=\"stibp\"/> <feature policy=\"require\" name=\"arch-capabilities\"/> <feature policy=\"require\" name=\"ssbd\"/> <feature policy=\"require\" name=\"xsaveopt\"/> <feature policy=\"require\" name=\"pdpe1gb\"/> <feature policy=\"require\" name=\"invtsc\"/> <feature policy=\"require\" name=\"ibpb\"/> <feature policy=\"require\" name=\"amd-ssbd\"/> <feature policy=\"require\" name=\"skip-l1dfl-vmentry\"/> <feature policy=\"require\" name=\"pschange-mc-no\"/> </cpu>",
"virsh hypervisor-cpu-baseline domCaps-CPUs.xml <cpu mode='custom' match='exact'> <model fallback='forbid'>IvyBridge-IBRS</model> <vendor>Intel</vendor> <feature policy='require' name='ss'/> <feature policy='require' name='vmx'/> <feature policy='require' name='pdcm'/> <feature policy='require' name='pcid'/> <feature policy='require' name='hypervisor'/> <feature policy='require' name='arat'/> <feature policy='require' name='tsc_adjust'/> <feature policy='require' name='umip'/> <feature policy='require' name='md-clear'/> <feature policy='require' name='stibp'/> <feature policy='require' name='arch-capabilities'/> <feature policy='require' name='ssbd'/> <feature policy='require' name='xsaveopt'/> <feature policy='require' name='pdpe1gb'/> <feature policy='require' name='invtsc'/> <feature policy='require' name='ibpb'/> <feature policy='require' name='amd-ssbd'/> <feature policy='require' name='skip-l1dfl-vmentry'/> <feature policy='require' name='pschange-mc-no'/> </cpu>",
"virsh edit <vm_name>",
"virsh shutdown <vm_name> virsh start <vm_name>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/migrating-virtual-machines_configuring-and-managing-virtualization
|
10.5.7. KeepAliveTimeout
|
10.5.7. KeepAliveTimeout KeepAliveTimeout sets the number of seconds the server waits after a request has been served before it closes the connection. Once the server receives a request, the Timeout directive applies instead. The KeepAliveTimeout directive is set to 15 seconds by default.
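For reference, a minimal httpd.conf fragment that sets this directive might look like the following; KeepAlive must be On for KeepAliveTimeout to have any effect, and 15 is simply the documented default rather than a tuned value:
    KeepAlive On
    KeepAliveTimeout 15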
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-keepalivetimeout
|
Chapter 6. Known issues
|
Chapter 6. Known issues This section documents known issues found in this release of Red Hat Ceph Storage. 6.1. The Cephadm utility Using the haproxy_qat_support setting in ingress specification causes the haproxy daemon to fail deployment Currently, the haproxy_qat_support is present but not functional in the ingress specification. This was added to allow haproxy to offload encryption operations on machines with QAT hardware, intending to improve performance. The added function does not work as intended, due to an incomplete code update. If the haproxy_qat_support setting is used, then the haproxy daemon fails to deploy. To avoid this issue, do not use this setting until it is fixed in a later release. Bugzilla:2308344 PROMETHEUS_API_HOST may not get set when Cephadm initially deploys Prometheus Currently, PROMETHEUS_API_HOST may not get set when Cephadm initially deploys Prometheus. This issue is seen most commonly when bootstrapping a cluster with --skip-monitoring-stack , then deploying Prometheus at a later time. Due to this, some monitoring information may be unavailable. As a workaround, use the command ceph orch redeploy prometheus to set the PROMETHEUS_API_HOST as it redeploys the Prometheus daemon(s). Additionally, the value can be set manually with the ceph dashboard set-prometheus-api-host <value> command. Bugzilla:2315072 6.2. Ceph Manager plugins Sometimes ceph-mgr modules are temporarily unavailable and their commands fail Occasionally, the balancer module takes a long time to load after a ceph-mgr restart. As a result, other ceph-mgr modules can become temporarily unavailable and their commands fail, for example with the error shown below. As a workaround, if commands from certain ceph-mgr modules fail after a ceph-mgr restart, check the status of the balancer by using the ceph balancer status command. This might occur, for example, during an upgrade. * If the balancer was previously active "active": true but now it is marked as "active": false , check its status until it is active again, then rerun the other ceph-mgr module commands. * In other cases, try to turn off the balancer ceph-mgr module. ceph balancer off After turning off the balancer, rerun the other ceph-mgr module commands. Bugzilla:2314146 6.3. Ceph Dashboard Ceph Object Gateway page does not load after a multi-site configuration The Ceph Object Gateway page does not load because the dashboard cannot find the correct access key and secret key for the new realm during multi-site configuration. As a workaround, use the ceph dashboard set-rgw-credentials command to manually update the keys. Bugzilla:2231072 CephFS path is updated with the correct subvolume path when navigating through the subvolume tab In the Create NFS Export form for CephFS, the CephFS path is updating the subvolume group path instead of the subvolume. Currently, there is no workaround. Bugzilla:2303247 Multi-site automation wizard mentions multi-cluster for both Red Hat and IBM Storage Ceph products Within the multi-site automation wizard, both Red Hat and IBM Storage Ceph products are mentioned in reference to multi-cluster. Only IBM Storage Ceph supports multi-cluster. Bugzilla:2322398 Deprecated iSCSI feature is displayed in the Ceph Dashboard Currently, although iSCSI is a deprecated feature, it is displayed in the Ceph Dashboard. The UI for the iSCSI feature is not usable. Bugzilla:2331648 6.4. Ceph Object Gateway Objects uploaded as Swift SLO cannot be downloaded by anonymous users Objects that are uploaded as Swift SLO cannot be downloaded by anonymous users. 
Currently, there is no workaround for this issue. Bugzilla:2272648 Not all apparently eligible reads can be performed locally Currently, if a RADOS object has been recently created and in some cases, modified, it is not immediately possible to make a local read. Even when correctly configured and operating, not all apparently eligible reads can be performed locally. This is due to limitations of the RADOS protocol. In test environments, many objects are created and it is easy to create an unrepresentative sample of read-local I/Os. Bugzilla:2309383 6.5. Multi-site Ceph Object Gateway Buckets created by tenanted users do not replicate correctly Currently, buckets that are created by tenanted users do not replicate correctly. To avoid this issue, bucket owners should avoid using tenanted users to create buckets on the secondary zone and instead create them only on the master zone. Bugzilla:2325018 When a secondary zone running Red Hat Ceph Storage 8.0 replicates user metadata from a pre-8.0 metadata master zone, access keys of those users are erroneously marked as "inactive". Currently, when a secondary zone running Red Hat Ceph Storage 8.0 replicates user metadata from a pre-8.0 metadata master zone, the access keys of those users are erroneously marked as "inactive". Inactive keys cannot be used to authenticate requests, so those users are denied access to the secondary zone. As a workaround, the current primary zone must be upgraded before other sites. Bugzilla:2327402 6.6. RADOS Placement groups are not scaled down in upmap-read and read balancer modes Currently, pg-upmap-primary entries are not properly removed for placement groups (PGs) that are pending merge. For example, when the bulk flag is removed on a pool, or any case where the number of PGs in a pool decreases. As a result, the PG scale-down process gets stuck and the number of PGs in the affected pool does not decrease as expected. As a workaround, remove the pg_upmap_primary entries in the OSD map of the affected pool. To view the entries, run the ceph osd dump command and then run ceph osd rm-pg-upmap-primary PG_ID for each PG in the affected pool. After using the workaround, the PG scale-down process resumes as expected. Bugzilla:2302230
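The RADOS workaround described above can be run as a short shell sequence; a minimal sketch, where the grep filter and the <pg_id> placeholder are illustrative and the two ceph commands are the ones named in the release note:
    # List the pg_upmap_primary entries currently recorded in the OSD map
    ceph osd dump | grep pg_upmap_primary
    # Remove the entry for one placement group; repeat for each PG in the affected pool
    ceph osd rm-pg-upmap-primary <pg_id>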
|
[
"ceph crash ls Error ENOTSUP: Warning: due to ceph-mgr restart, some PG states may not be up to date Module 'crash' is not enabled/loaded (required by command 'crash ls'): use `ceph mgr module enable crash` to enable it"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/8.0_release_notes/known-issues
|
34.3. Additional Resources
|
34.3. Additional Resources To learn more about configuring automated tasks, refer to the following resources. 34.3.1. Installed Documentation cron man page - overview of cron. crontab man pages in sections 1 and 5 - The man page in section 1 contains an overview of the crontab file. The man page in section 5 contains the format for the file and some example entries. /usr/share/doc/at- <version> /timespec contains more detailed information about the times that can be specified for cron jobs. at man page - description of at and batch and their command line options.
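As a quick illustration of the format documented in the crontab(5) man page, each entry consists of five time fields (minute, hour, day of month, month, day of week) followed by the command to run; the job and path below are hypothetical:
    # Run a nightly report at 02:30 every day
    30 2 * * * /usr/local/bin/nightly-report.sh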
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Automated_Tasks-Additional_Resources
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/getting_started_guide/making-open-source-more-inclusive
|
Chapter 7. Scaling storage capacity of GCP OpenShift Data Foundation cluster
|
Chapter 7. Scaling storage capacity of GCP OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on a GCP cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 7.1. Scaling up storage on a GCP cluster To increase the storage capacity in a dynamically created storage cluster on a Google Cloud Platform installer-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Set the storage class to standard if you are using the default storage class that uses HDD. However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. 
Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 7.2. Scaling out storage capacity on a GCP cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically, there is no limit on the number of nodes that can be added, but from the support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: adding a new node, and scaling up the storage capacity. Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 7.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains. While we recommend adding nodes in multiples of three, you still get the flexibility of adding one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have disks of the same type and size as those used during OpenShift Data Foundation deployment. 7.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster .
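The console steps in section 7.2.1.1 can also be approximated from the command line; the sketch below is illustrative only, the machine set name, node name, and replica count are placeholders, and the label is the one referenced in this procedure:
    # Scale the machine set to add worker nodes
    oc scale machineset <machineset_name> --replicas=<new_count> -n openshift-machine-api
    # After the new node reports Ready, apply the OpenShift Data Foundation label
    oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""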
|
[
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/scaling_storage/scaling_storage_capacity_of_gcp_openshift_data_foundation_cluster
|
Chapter 2. Planning a Distributed Compute Node (DCN) deployment
|
Chapter 2. Planning a Distributed Compute Node (DCN) deployment When you plan your DCN architecture, check that the technologies that you need are available and supported. 2.1. Considerations for storage on DCN architecture The following features are not currently supported for DCN architectures: Copying a volume between edge sites. You can work around this by creating an image from the volume and using the Image service (glance) to copy the image. After the image is copied, you can create a volume from it. Ceph Rados Gateway (RGW) at the edge sites. CephFS at the edge sites. Instance high availability (HA) at the edge sites. RBD mirroring between edge sites. Instance migration, live or cold, either between edge sites, or from the central location to edge sites. You can only migrate instances within a site boundary. To move an image between sites, you must snapshot the image, and use glance image-import . Additionally, you must consider the following: You must upload images to the central location before copying them to edge sites. A copy of each image must exist in the Image service at the central location. You must use the RBD storage driver for the Image, Compute and Block Storage services. For each site, including the central location, assign a unique availability zone. You can migrate an offline volume from an edge site to the central location, or vice versa. You cannot migrate volumes directly between edge sites. 2.2. Considerations for networking on DCN architecture The following features are not currently supported for DCN architectures: DHCP on DPDK nodes TC Flower Hardware Offload The following ML2/OVN networking technologies are fully supported: Routed provider networks OVN GW (networker node) with Neutron AZs supported Additionally, you must consider the following: Network latency: Balance the latency as measured in round-trip time (RTT), with the expected number of concurrent API operations to maintain acceptable performance. Maximum TCP/IP throughput is inversely proportional to RTT. You can mitigate some issues with high-latency connections with high bandwidth by tuning kernel TCP parameters. Contact Red Hat Support if a cross-site communication exceeds 100 ms. Network drop outs: If the edge site temporarily loses connection to the central site, then no control plane API or CLI operations can be executed at the impacted edge site for the duration of the outage. For example, Compute nodes at the edge site are consequently unable to create a snapshot of an instance, issue an auth token, or delete an image. General control plane API and CLI operations remain functional during this outage, and can continue to serve any other edge sites that have a working connection. Image type: You must use raw images when deploying a DCN architecture with Ceph storage. Image sizing: Compute images: Compute images are downloaded from the central location. These images are potentially large files that are transferred across all necessary networks from the central site to the edge site during provisioning. Instance images: If there is no block storage at the edge, then the Image service images traverse the WAN during first use. The images are copied or cached locally to the target edge nodes for all subsequent use. There is no size limit for images. Transfer times vary with available bandwidth and network latency. Provider networks are the most common approach for DCN deployments. Note that the Networking service (neutron) does not validate where you can attach available networks. 
For example, if you use a provider network called "site-a" only in edge site A, the Networking service does not validate this, so it does not prevent you from attaching "site-a" to an instance at site B, even though that does not work. Site-specific networks: A limitation in DCN networking arises if you use networks that are specific to a certain site: When you deploy centralized neutron controllers with Compute nodes, there are no triggers in the Networking service to identify a certain Compute node as a remote node. Consequently, the Compute nodes receive a list of other Compute nodes and automatically form tunnels between each other. The tunnels are formed from edge to edge through the central site. If you use VXLAN or GENEVE, every Compute node at every site forms a tunnel with every other Compute node, whether or not they are local or remote. This is not an issue if you are using the same networks everywhere. When you use VLANs, the Networking service expects that all Compute nodes have the same bridge mappings, and that all VLANs are available at every site. If edge servers are not pre-provisioned, you must configure DHCP relay for introspection and provisioning on routed segments. Routing must be configured either on the cloud or within the networking infrastructure that connects each edge site to the hub. You should implement a networking design that allocates an L3 subnet for each RHOSO cluster network (external, internal API, and so on), unique to each site. 2.3. IP Address pool sizing for the internalapi network The Image service operator creates an endpoint for each Image service pod with its own DNS name, such as glance-az0-internal.openstack.svc:9292 . Each Compute service and Block storage service in each availability zone uses the Image service (glance) API server in the same availability zone. For example, when you update the cinderVolumes field in the OpenStackControlPlane custom resource (CR), add a field called glance_api_servers under customServiceConfig : The Image service endpoint DNS name maps to a load balancer IP address in the internalapi address pool as indicated by the internal metadata annotations: The range of addresses in this address pool should be sized according to the number of DCN sites. For example, the following shows only 10 available addresses in the internalapi network. Use commands like the following after updating the glance section of the OpenStackControlPlane CR in order to confirm that the Glance Operator has created the service endpoint and route.
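The kernel TCP tuning mentioned earlier for high-latency, high-bandwidth WAN links can be sketched with sysctl; the parameters are standard Linux TCP settings, but the values shown are examples only and must be sized for your own round-trip time and bandwidth:
    # Raise the maximum socket buffer sizes so the TCP window can cover a large bandwidth-delay product
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    # Minimum, default, and maximum autotuning limits for TCP receive and send buffers
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"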
|
[
"cinderVolumes: az0: customServiceConfig: | [DEFAULT] enabled_backends = az0 glance_api_servers = https://glance-az0-internal.openstack.svc:9292",
"[glance_store] default_backend = ceph [ceph] rbd_store_ceph_conf = /etc/ceph/ceph.conf store_description = \"ceph RBD backend\" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80",
"oc get ipaddresspool -n metallb-system NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES ctlplane true false [\"192.168.122.80-192.168.122.90\"] internalapi true false [\"172.17.0.80-172.17.0.90\"] storage true false [\"172.18.0.80-172.18.0.90\"] tenant true false [\"172.19.0.80-172.19.0.90\"]",
"oc get svc | grep glance glance-az0-internal LoadBalancer 172.30.217.178 172.17.0.80 9292:32134/TCP 24h glance-az0-public ClusterIP 172.30.78.47 <none> 9292/TCP 24h glance-az1-internal LoadBalancer 172.30.52.123 172.17.0.81 9292:31679/TCP 23h glance-c1ca8-az0-external-api ClusterIP None <none> 9292/TCP 24h glance-c1ca8-az0-internal-api ClusterIP None <none> 9292/TCP 24h glance-c1ca8-az1-edge-api ClusterIP None <none> 9292/TCP 23h oc get route | grep glance glance-az0-public glance-az0-public-openstack.apps.ocp.openstack.lab glance-az0-public glance-az0-public reencrypt/Redirect None glance-default-public glance-default-public-openstack.apps.ocp.openstack.lab glance-default-public glance-default-public reencrypt/Redirect None"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_distributed_compute_node_dcn_architecture/assembly_planning-a-dcn-deployment
|
Chapter 17. Minimizing system latency by isolating interrupts and user processes
|
Chapter 17. Minimizing system latency by isolating interrupts and user processes Real-time environments need to minimize or eliminate latency when responding to various events. To do this, you can isolate interrupts (IRQs) and user processes from one another on different dedicated CPUs. 17.1. Interrupt and process binding Isolating interrupts (IRQs) from user processes on different dedicated CPUs can minimize or eliminate latency in real-time environments. Interrupts are generally shared evenly between CPUs. This can delay interrupt processing when the CPU has to write new data and instruction caches. These interrupt delays can cause conflicts with other processing being performed on the same CPU. It is possible to allocate time-critical interrupts and processes to a specific CPU (or a range of CPUs). In this way, the code and data structures for processing this interrupt will most likely be in the processor and instruction caches. As a result, the dedicated process can run as quickly as possible, while all other non-time-critical processes run on the other CPUs. This can be particularly important where the speeds involved are near or at the limits of memory and available peripheral bus bandwidth. Any wait for memory to be fetched into processor caches will have a noticeable impact in overall processing time and determinism. In practice, optimal performance is entirely application-specific. For example, tuning applications with similar functions for different companies required completely different optimal performance tunings. One firm saw optimal results when they isolated 2 out of 4 CPUs for operating system functions and interrupt handling. The remaining 2 CPUs were dedicated purely for application handling. Another firm found optimal determinism when they bound the network related application processes onto a single CPU which was handling the network device driver interrupt. Important To bind a process to a CPU, you usually need to know the CPU mask for a given CPU or range of CPUs. The CPU mask is typically represented as a 32-bit bitmask, a decimal number, or a hexadecimal number, depending on the command you are using. Table 17.1. Example of the CPU Mask for given CPUs CPUs Bitmask Decimal Hexadecimal 0 00000000000000000000000000000001 1 0x00000001 0, 1 00000000000000000000000000000011 3 0x00000003 17.2. Disabling the irqbalance daemon The irqbalance daemon is enabled by default and periodically forces interrupts to be handled by CPUs in an even manner. However, in real-time deployments, irqbalance is not needed, because applications are typically bound to specific CPUs. Procedure Check the status of irqbalance . If irqbalance is running, disable it, and stop it. Verification Check that the irqbalance status is inactive. 17.3. Excluding CPUs from IRQ balancing You can use the IRQ balancing service to specify which CPUs you want to exclude from consideration for interrupt (IRQ) balancing. The IRQBALANCE_BANNED_CPUS parameter in the /etc/sysconfig/irqbalance configuration file controls these settings. The value of the parameter is a 64-bit hexadecimal bit mask, where each bit of the mask represents a CPU core. Procedure Open /etc/sysconfig/irqbalance in your preferred text editor and find the section of the file titled IRQBALANCE_BANNED_CPUS . Uncomment the IRQBALANCE_BANNED_CPUS variable. Enter the appropriate bitmask to specify the CPUs to be ignored by the IRQ balance mechanism. Save and close the file. 
Restart the irqbalance service for the changes to take effect: Note If you are running a system with up to 64 CPU cores, separate each group of eight hexadecimal digits with a comma. For example: IRQBALANCE_BANNED_CPUS=00000001,0000ff00 Table 17.2. Examples CPUs Bitmask 0 00000001 8 - 15 0000ff00 8 - 15, 33 00000002,0000ff00 Note In RHEL 7.2 and higher, the irqbalance utility automatically avoids IRQs on CPU cores isolated via the isolcpus kernel parameter if IRQBALANCE_BANNED_CPUS is not set in /etc/sysconfig/irqbalance . 17.4. Manually assigning CPU affinity to individual IRQs Assigning CPU affinity enables binding and unbinding processes and threads to a specified CPU or range of CPUs. This can reduce caching problems. Procedure Check the IRQs in use by each device by viewing the /proc/interrupts file. Each line shows the IRQ number, the number of interrupts that happened in each CPU, followed by the IRQ type and a description. Write the CPU mask to the smp_affinity entry of a specific IRQ. The CPU mask must be expressed as a hexadecimal number. For example, the following command instructs IRQ number 142 to run only on CPU 0. The change only takes effect when an interrupt occurs. Verification Perform an activity that will trigger the specified interrupt. Check /proc/interrupts for changes. The number of interrupts on the specified CPU for the configured IRQ increased, and the number of interrupts for the configured IRQ on CPUs outside the specified affinity did not increase. 17.5. Binding processes to CPUs with the taskset utility The taskset utility uses the process ID (PID) of a task to view or set its CPU affinity. You can use the utility to run a command with a chosen CPU affinity. To set the affinity, you need the CPU mask as a decimal or hexadecimal number. The mask argument is a bitmask that specifies which CPU cores are legal for the command or PID being modified. Important The taskset utility works on a NUMA (Non-Uniform Memory Access) system, but it does not allow the user to bind threads to CPUs and the closest NUMA memory node. On such systems, taskset is not the preferred tool, and the numactl utility should be used instead for its advanced capabilities. For more information, see the numactl(8) man page on your system. Procedure Run taskset with the necessary options and arguments. You can specify a CPU list using the -c parameter instead of a CPU mask. In this example, my_embedded_process is being instructed to run only on CPUs 0,4,7-11. This invocation is more convenient in most cases. To set the affinity of a process that is not currently running, use taskset and specify the CPU mask and the process. In this example, my_embedded_process is being instructed to use only CPU 3 (using the decimal version of the CPU mask). You can specify more than one CPU in the bitmask. In this example, my_embedded_process is being instructed to execute on processors 4, 5, 6, and 7 (using the hexadecimal version of the CPU mask). You can set the CPU affinity for processes that are already running by using the -p ( --pid ) option with the CPU mask and the PID of the process you want to change. In this example, the process with a PID of 7013 is being instructed to run only on CPU 0. Note You can combine the listed options. Additional resources taskset(1) and numactl(8) man pages on your system
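Because both /proc/irq/<irq>/smp_affinity and taskset accept hexadecimal masks, shell arithmetic is a convenient way to derive them; a small sketch, where the CPU numbers are arbitrary examples:
    # Mask for CPUs 0 and 3: bit 0 plus bit 3
    printf '0x%08x\n' $(( (1 << 0) | (1 << 3) ))                      # prints 0x00000009
    # Mask for CPUs 4-7, matching the 0xF0 taskset example above
    printf '0x%08x\n' $(( (1 << 4) | (1 << 5) | (1 << 6) | (1 << 7) ))  # prints 0x000000f0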
|
[
"systemctl status irqbalance irqbalance.service - irqbalance daemon Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled) Active: active (running) ...",
"systemctl disable irqbalance systemctl stop irqbalance",
"systemctl status irqbalance",
"IRQBALANCE_BANNED_CPUS 64 bit bitmask which allows you to indicate which cpu's should be skipped when reblancing irqs. Cpu numbers which have their corresponding bits set to one in this mask will not have any irq's assigned to them on rebalance # #IRQBALANCE_BANNED_CPUS=",
"systemctl restart irqbalance",
"cat /proc/interrupts",
"CPU0 CPU1 0: 26575949 11 IO-APIC-edge timer 1: 14 7 IO-APIC-edge i8042",
"echo 1 > /proc/irq/142/smp_affinity",
"taskset -c 0,4,7-11 /usr/local/bin/my_embedded_process",
"taskset 8 /usr/local/bin/my_embedded_process",
"taskset 0xF0 /usr/local/bin/my_embedded_process",
"taskset -p 1 7013"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_binding-interrupts-and-processes_optimizing-RHEL9-for-real-time-for-low-latency-operation
|
Chapter 2. Using the configuration API
|
Chapter 2. Using the configuration API The configuration tool exposes 4 endpoints that can be used to build, validate, bundle and deploy a configuration. The config-tool API is documented at https://github.com/quay/config-tool/blob/master/pkg/lib/editor/API.md . In this section, you will see how to use the API to retrieve the current configuration and how to validate any changes you make. 2.1. Retrieving the default configuration If you are running the configuration tool for the first time, and do not have an existing configuration, you can retrieve the default configuration. Start the container in config mode: Use the config endpoint of the configuration API to get the default: The value returned is the default configuration in JSON format: { "config.yaml": { "AUTHENTICATION_TYPE": "Database", "AVATAR_KIND": "local", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DEFAULT_TAG_EXPIRATION": "2w", "EXTERNAL_TLS_TERMINATION": false, "FEATURE_ACTION_LOG_ROTATION": false, "FEATURE_ANONYMOUS_ACCESS": true, "FEATURE_APP_SPECIFIC_TOKENS": true, .... } } 2.2. Retrieving the current configuration If you have already configured and deployed the Quay registry, stop the container and restart it in configuration mode, loading the existing configuration as a volume: Use the config endpoint of the API to get the current configuration: The value returned is the current configuration in JSON format, including database and Redis configuration data: { "config.yaml": { .... "BROWSER_API_CALLS_XHR_ONLY": false, "BUILDLOGS_REDIS": { "host": "quay-server", "password": "strongpassword", "port": 6379 }, "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay", "DEFAULT_TAG_EXPIRATION": "2w", .... } } 2.3. Validating configuration using the API You can validate a configuration by posting it to the config/validate endpoint: The returned value is an array containing the errors found in the configuration. If the configuration is valid, an empty array [] is returned. 2.4. Determining the required fields You can determine the required fields by posting an empty configuration structure to the config/validate endpoint: The value returned is an array indicating which fields are required: [ { "FieldGroup": "Database", "Tags": [ "DB_URI" ], "Message": "DB_URI is required." }, { "FieldGroup": "DistributedStorage", "Tags": [ "DISTRIBUTED_STORAGE_CONFIG" ], "Message": "DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location." }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME is required" }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME must be of type Hostname" }, { "FieldGroup": "Redis", "Tags": [ "BUILDLOGS_REDIS" ], "Message": "BUILDLOGS_REDIS is required" } ]
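Because the API returns plain JSON, the responses are easy to post-process with jq in scripts; a minimal sketch reusing the quayconfig:secret credentials from the examples above (the DB_URI key is taken from the sample output, and validation-output.json is a hypothetical file holding a saved response from the config/validate endpoint):
    # Extract a single value from the current configuration
    curl -s -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq -r '."config.yaml".DB_URI'
    # Count the reported validation problems; 0 means the configuration passed validation
    jq 'length' validation-output.json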
|
[
"sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret",
"curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq",
"{ \"config.yaml\": { \"AUTHENTICATION_TYPE\": \"Database\", \"AVATAR_KIND\": \"local\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DEFAULT_TAG_EXPIRATION\": \"2w\", \"EXTERNAL_TLS_TERMINATION\": false, \"FEATURE_ACTION_LOG_ROTATION\": false, \"FEATURE_ANONYMOUS_ACCESS\": true, \"FEATURE_APP_SPECIFIC_TOKENS\": true, . } }",
"sudo podman run --rm -it --name quay_config -p 8080:8080 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret",
"curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq",
"{ \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } }",
"curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } } http://quay-server:8080/api/v1/config/validate | jq",
"curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { } } http://quay-server:8080/api/v1/config/validate | jq",
"[ { \"FieldGroup\": \"Database\", \"Tags\": [ \"DB_URI\" ], \"Message\": \"DB_URI is required.\" }, { \"FieldGroup\": \"DistributedStorage\", \"Tags\": [ \"DISTRIBUTED_STORAGE_CONFIG\" ], \"Message\": \"DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location.\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME is required\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME must be of type Hostname\" }, { \"FieldGroup\": \"Redis\", \"Tags\": [ \"BUILDLOGS_REDIS\" ], \"Message\": \"BUILDLOGS_REDIS is required\" } ]"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/config-using-api
|
Chapter 7. keystone
|
Chapter 7. keystone The following chapter contains information about the configuration options in the keystone service. 7.1. keystone.conf This section contains options for the /etc/keystone/keystone.conf file. 7.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/keystone/keystone.conf file. . Configuration option = Default value Type Description admin_token = None string value Using this feature is NOT recommended. Instead, use the keystone-manage bootstrap command. The value of this option is treated as a "shared secret" that can be used to bootstrap Keystone through the API. This "token" does not represent a user (it has no identity), and carries no explicit authorization (it effectively bypasses most authorization checks). If set to None , the value is ignored and the admin_token middleware is effectively disabled. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = keystone string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_publisher_id = None string value Default publisher_id for outgoing notifications. If left undefined, Keystone will default to using the server's host name. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. insecure_debug = False boolean value If set to true, then the server will return information in HTTP responses that may allow an unauthenticated or authenticated user to get more information than normal, such as additional details about why authentication failed. This may be useful for debugging but is insecure. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. list_limit = None integer value The maximum number of entities that will be returned in a collection. This global limit may be then overridden for a specific driver, by specifying a list_limit in the appropriate section (for example, [assignment] ). No limit is set by default. In larger deployments, it is recommended that you set this to a reasonable number to prevent operations like listing all users and projects from placing an unnecessary load on the system. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. 
For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_param_size = 64 integer value Limit the sizes of user & project ID/names. max_project_tree_depth = 5 integer value Maximum depth of the project hierarchy, excluding the project acting as a domain at the top of the hierarchy. WARNING: Setting it to a large value may adversely impact performance. max_token_size = 255 integer value Similar to [DEFAULT] max_param_size , but provides an exception for token values. With Fernet tokens, this can be set as low as 255. With UUID tokens, this should be set to 32. notification_format = cadf string value Define the notification format for identity service events. A basic notification only has information about the resource being operated on. A cadf notification has the same information, as well as information about the initiator of the event. The cadf option is entirely backwards compatible with the basic option, but is fully CADF-compliant, and is recommended for auditing use cases. 
notification_opt_out = ['identity.authenticate.success', 'identity.authenticate.pending', 'identity.authenticate.failed'] multi valued You can reduce the number of notifications keystone emits by explicitly opting out. Keystone will not emit notifications that match the patterns expressed in this list. Values are expected to be in the form of identity.<resource_type>.<operation> . By default, all notifications related to authentication are automatically suppressed. This field can be set multiple times in order to opt-out of multiple notification topics. For example, the following suppresses notifications describing user creation or successful authentication events: notification_opt_out=identity.user.create notification_opt_out=identity.authenticate.success public_endpoint = None uri value The base public endpoint URL for Keystone that is advertised to clients (NOTE: this does NOT affect how Keystone listens for connections). Defaults to the base host URL of the request. For example, if keystone receives a request to http://server:5000/v3/users , then this option will be automatically treated as http://server:5000 . You should only need to set this option if either the value of the base URL contains a path that keystone does not automatically infer ( /prefix/v3 ), or if the endpoint should be found on a different host. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. strict_password_check = False boolean value If set to true, strict password length checking is performed for password manipulation. If a password exceeds the maximum length, the operation will fail with an HTTP 403 Forbidden error. If set to false, passwords are automatically truncated to the maximum length. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 7.1.2. application_credential The following table outlines the options available under the [application_credential] group in the /etc/keystone/keystone.conf file. Table 7.1. application_credential Configuration option = Default value Type Description cache_time = None integer value Time to cache application credential data in seconds. This has no effect unless global caching is enabled. caching = True boolean value Toggle for application credential caching. This has no effect unless global caching is enabled. driver = sql string value Entry point for the application credential backend driver in the keystone.application_credential namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. user_limit = -1 integer value Maximum number of application credentials a user is permitted to create. A value of -1 means unlimited. If a limit is not set, users are permitted to create application credentials at will, which could lead to bloat in the keystone database or open keystone to a DoS attack. 7.1.3. assignment The following table outlines the options available under the [assignment] group in the /etc/keystone/keystone.conf file. Table 7.2. assignment Configuration option = Default value Type Description driver = sql string value Entry point for the assignment backend driver (where role assignments are stored) in the keystone.assignment namespace. Only a SQL driver is supplied by keystone itself. Unless you are writing proprietary drivers for keystone, you do not need to set this option. prohibited_implied_role = ['admin'] list value A list of role names which are prohibited from being an implied role. 7.1.4. auth The following table outlines the options available under the [auth] group in the /etc/keystone/keystone.conf file. Table 7.3. auth Configuration option = Default value Type Description application_credential = None string value Entry point for the application_credential auth plugin module in the keystone.auth.application_credential namespace. You do not need to set this unless you are overriding keystone's own application_credential authentication plugin. external = None string value Entry point for the external ( REMOTE_USER ) auth plugin module in the keystone.auth.external namespace. Supplied drivers are DefaultDomain and Domain . The default driver is DefaultDomain , which assumes that all users identified by the username specified to keystone in the REMOTE_USER variable exist within the context of the default domain. The Domain option expects an additional environment variable be presented to keystone, REMOTE_DOMAIN , containing the domain name of the REMOTE_USER (if REMOTE_DOMAIN is not set, then the default domain will be used instead). You do not need to set this unless you are taking advantage of "external authentication", where the application server (such as Apache) is handling authentication instead of keystone. 
mapped = None string value Entry point for the mapped auth plugin module in the keystone.auth.mapped namespace. You do not need to set this unless you are overriding keystone's own mapped authentication plugin. methods = ['external', 'password', 'token', 'oauth1', 'mapped', 'application_credential'] list value Allowed authentication methods. Note: You should disable the external auth method if you are currently using federation. External auth and federation both use the REMOTE_USER variable. Since both the mapped and external plugin are being invoked to validate attributes in the request environment, it can cause conflicts. oauth1 = None string value Entry point for the OAuth 1.0a auth plugin module in the keystone.auth.oauth1 namespace. You do not need to set this unless you are overriding keystone's own oauth1 authentication plugin. password = None string value Entry point for the password auth plugin module in the keystone.auth.password namespace. You do not need to set this unless you are overriding keystone's own password authentication plugin. token = None string value Entry point for the token auth plugin module in the keystone.auth.token namespace. You do not need to set this unless you are overriding keystone's own token authentication plugin. 7.1.5. cache The following table outlines the options available under the [cache] group in the /etc/keystone/keystone.conf file. Table 7.4. cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enabled = True boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. memcache_dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). 
memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. tls_enabled = False boolean value Global toggle for TLS usage when communicating with the caching servers. tls_keyfile = None string value Path to a single file containing the client's private key. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. 7.1.6. catalog The following table outlines the options available under the [catalog] group in the /etc/keystone/keystone.conf file. Table 7.5. catalog Configuration option = Default value Type Description cache_time = None integer value Time to cache catalog data (in seconds). This has no effect unless global and catalog caching are both enabled. Catalog data (services, endpoints, etc.) typically does not change frequently, and so a longer duration than the global default may be desirable. caching = True boolean value Toggle for catalog caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = sql string value Entry point for the catalog driver in the keystone.catalog namespace. Keystone provides a sql option (which supports basic CRUD operations through SQL), a templated option (which loads the catalog from a templated catalog file on disk), and an endpoint_filter.sql option (which supports arbitrary service catalogs per project). list_limit = None integer value Maximum number of entities that will be returned in a catalog collection. There is typically no reason to set this, as it would be unusual for a deployment to have enough services or endpoints to exceed a reasonable limit. template_file = default_catalog.templates string value Absolute path to the file used for the templated catalog backend. This option is only used if the [catalog] driver is set to templated . 7.1.7. cors The following table outlines the options available under the [cors] group in the /etc/keystone/keystone.conf file. Table 7.6. 
cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials. allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Project-Id', 'X-Project-Name', 'X-Project-Domain-Id', 'X-Project-Domain-Name', 'X-Domain-Id', 'X-Domain-Name', 'Openstack-Auth-Receipt'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'Openstack-Auth-Receipt'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 7.1.8. credential The following table outlines the options available under the [credential] group in the /etc/keystone/keystone.conf file. Table 7.7. credential Configuration option = Default value Type Description auth_ttl = 15 integer value The length of time in minutes for which a signed EC2 or S3 token request is valid from the timestamp contained in the token request. cache_time = None integer value Time to cache credential data in seconds. This has no effect unless global caching is enabled. caching = True boolean value Toggle for caching only on retrieval of user credentials. This has no effect unless global caching is enabled. driver = sql string value Entry point for the credential backend driver in the keystone.credential namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point. key_repository = /etc/keystone/credential-keys/ string value Directory containing Fernet keys used to encrypt and decrypt credentials stored in the credential backend. Fernet keys used to encrypt credentials have no relationship to Fernet keys used to encrypt Fernet tokens. Both sets of keys should be managed separately and require different rotation policies. Do not share this repository with the repository used to manage keys for Fernet tokens. provider = fernet string value Entry point for credential encryption and decryption operations in the keystone.credential.provider namespace. Keystone only provides a fernet driver, so there's no reason to change this unless you are providing a custom entry point to encrypt and decrypt credentials. 7.1.9. database The following table outlines the options available under the [database] group in the /etc/keystone/keystone.conf file. Table 7.8. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... 
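For example, a minimal [database] section that points keystone at a MySQL-compatible server might look like the following sketch; the connection URL, account name, and password are placeholders:
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@db.example.com/keystone
max_pool_size = 5
max_overflow = 50
connection_recycle_time = 3600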
connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 7.1.10. domain_config The following table outlines the options available under the [domain_config] group in the /etc/keystone/keystone.conf file. Table 7.9. domain_config Configuration option = Default value Type Description cache_time = 300 integer value Time-to-live (TTL, in seconds) to cache domain-specific configuration data. This has no effect unless [domain_config] caching is enabled. caching = True boolean value Toggle for caching of the domain-specific configuration backend. This has no effect unless global caching is enabled. There is normally no reason to disable this. driver = sql string value Entry point for the domain-specific configuration driver in the keystone.resource.domain_config namespace. Only a sql option is provided by keystone, so there is no reason to set this unless you are providing a custom entry point. 7.1.11. endpoint_filter The following table outlines the options available under the [endpoint_filter] group in the /etc/keystone/keystone.conf file. Table 7.10. endpoint_filter Configuration option = Default value Type Description driver = sql string value Entry point for the endpoint filter driver in the keystone.endpoint_filter namespace. Only a sql option is provided by keystone, so there is no reason to set this unless you are providing a custom entry point. 
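Relating the [domain_config] options above to the [identity] options described later in this chapter: when domain-specific drivers are enabled and configurations are kept on disk rather than in the database, each domain typically gets its own file under the domain configuration directory (for example, a file such as /etc/keystone/domains/keystone.DOMAIN_NAME.conf). A hedged sketch of such a per-domain file, with placeholder values:
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com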
return_all_endpoints_if_no_filter = True boolean value This controls keystone's behavior if the configured endpoint filters do not result in any endpoints for a user + project pair (and therefore a potentially empty service catalog). If set to true, keystone will return the entire service catalog. If set to false, keystone will return an empty service catalog. 7.1.12. endpoint_policy The following table outlines the options available under the [endpoint_policy] group in the /etc/keystone/keystone.conf file. Table 7.11. endpoint_policy Configuration option = Default value Type Description driver = sql string value Entry point for the endpoint policy driver in the keystone.endpoint_policy namespace. Only a sql driver is provided by keystone, so there is no reason to set this unless you are providing a custom entry point. 7.1.13. eventlet_server The following table outlines the options available under the [eventlet_server] group in the /etc/keystone/keystone.conf file. Table 7.12. eventlet_server Configuration option = Default value Type Description admin_bind_host = 0.0.0.0 host address value The IP address of the network interface for the admin service to listen on. Deprecated since: K *Reason:*Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions. admin_port = 35357 port value The port number for the admin service to listen on. Deprecated since: K *Reason:*Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions. public_bind_host = 0.0.0.0 host address value The IP address of the network interface for the public service to listen on. Deprecated since: K *Reason:*Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions. public_port = 5000 port value The port number for the public service to listen on. Deprecated since: K *Reason:*Support for running keystone under eventlet has been removed in the Newton release. These options remain for backwards compatibility because they are used for URL substitutions. 7.1.14. federation The following table outlines the options available under the [federation] group in the /etc/keystone/keystone.conf file. Table 7.13. federation Configuration option = Default value Type Description `assertion_prefix = ` string value Prefix to use when filtering environment variable names for federated assertions. Matched variables are passed into the federated mapping engine. caching = True boolean value Toggle for federation caching. This has no effect unless global caching is enabled. There is typically no reason to disable this. driver = sql string value Entry point for the federation backend driver in the keystone.federation namespace. Keystone only provides a sql driver, so there is no reason to set this option unless you are providing a custom entry point. federated_domain_name = Federated string value An arbitrary domain name that is reserved to allow federated ephemeral users to have a domain concept. Note that an admin will not be able to create a domain with this name or update an existing domain to this name. You are not advised to change this value unless you really have to. Deprecated since: T *Reason:*This option has been superseded by ephemeral users existing in the domain of their identity provider. 
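Putting several of the federation options described in the remainder of this table together, a hedged example of a web SSO oriented [federation] configuration; the dashboard URL is a placeholder, and the remote_id_attribute value assumes mod_shib:
[federation]
remote_id_attribute = Shib-Identity-Provider
trusted_dashboard = https://horizon.example.com/auth/websso/
sso_callback_template = /etc/keystone/sso_callback_template.html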
remote_id_attribute = None string value Default value for all protocols to be used to obtain the entity ID of the Identity Provider from the environment. For mod_shib , this would be Shib-Identity-Provider . For mod_auth_openidc , this could be HTTP_OIDC_ISS . For mod_auth_mellon , this could be MELLON_IDP . This can be overridden on a per-protocol basis by providing a remote_id_attribute to the federation protocol using the API. sso_callback_template = /etc/keystone/sso_callback_template.html string value Absolute path to an HTML file used as a Single Sign-On callback handler. This page is expected to redirect the user from keystone back to a trusted dashboard host, by form encoding a token in a POST request. Keystone's default value should be sufficient for most deployments. trusted_dashboard = [] multi valued A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of this list. This configuration option may be repeated for multiple values. You must set this in order to use web-based SSO flows. For example: trusted_dashboard=https://acme.example.com/auth/websso trusted_dashboard=https://beta.example.com/auth/websso 7.1.15. fernet_receipts The following table outlines the options available under the [fernet_receipts] group in the /etc/keystone/keystone.conf file. Table 7.14. fernet_receipts Configuration option = Default value Type Description key_repository = /etc/keystone/fernet-keys/ string value Directory containing Fernet receipt keys. This directory must exist before using keystone-manage fernet_setup for the first time, must be writable by the user running keystone-manage fernet_setup or keystone-manage fernet_rotate , and of course must be readable by keystone's server process. The repository may contain keys in one of three states: a single staged key (always index 0) used for receipt validation, a single primary key (always the highest index) used for receipt creation and validation, and any number of secondary keys (all other index values) used for receipt validation. With multiple keystone nodes, each node must share the same key repository contents, with the exception of the staged key (index 0). It is safe to run keystone-manage fernet_rotate once on any one node to promote a staged key (index 0) to be the new primary (incremented from the highest index), and produce a new staged key (a new key with index 0); the resulting repository can then be atomically replicated to other nodes without any risk of race conditions (for example, it is safe to run keystone-manage fernet_rotate on host A, wait any amount of time, create a tarball of the directory on host A, unpack it on host B to a temporary location, and atomically move ( mv ) the directory into place on host B). Running keystone-manage fernet_rotate twice on a key repository without syncing other nodes will result in receipts that can not be validated by all nodes. max_active_keys = 3 integer value This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key (always index 0), one primary key (the highest numerical index), and one secondary key (every other index). Increasing this value means that additional secondary keys will be kept in the rotation. 7.1.16. fernet_tokens The following table outlines the options available under the [fernet_tokens] group in the /etc/keystone/keystone.conf file. Table 7.15. 
fernet_tokens Configuration option = Default value Type Description key_repository = /etc/keystone/fernet-keys/ string value Directory containing Fernet token keys. This directory must exist before using keystone-manage fernet_setup for the first time, must be writable by the user running keystone-manage fernet_setup or keystone-manage fernet_rotate , and of course must be readable by keystone's server process. The repository may contain keys in one of three states: a single staged key (always index 0) used for token validation, a single primary key (always the highest index) used for token creation and validation, and any number of secondary keys (all other index values) used for token validation. With multiple keystone nodes, each node must share the same key repository contents, with the exception of the staged key (index 0). It is safe to run keystone-manage fernet_rotate once on any one node to promote a staged key (index 0) to be the new primary (incremented from the highest index), and produce a new staged key (a new key with index 0); the resulting repository can then be atomically replicated to other nodes without any risk of race conditions (for example, it is safe to run keystone-manage fernet_rotate on host A, wait any amount of time, create a tarball of the directory on host A, unpack it on host B to a temporary location, and atomically move ( mv ) the directory into place on host B). Running keystone-manage fernet_rotate twice on a key repository without syncing other nodes will result in tokens that can not be validated by all nodes. max_active_keys = 3 integer value This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key (always index 0), one primary key (the highest numerical index), and one secondary key (every other index). Increasing this value means that additional secondary keys will be kept in the rotation. 7.1.17. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/keystone/keystone.conf file. Table 7.16. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 7.1.18. identity The following table outlines the options available under the [identity] group in the /etc/keystone/keystone.conf file. Table 7.17. identity Configuration option = Default value Type Description cache_time = 600 integer value Time to cache identity data (in seconds). This has no effect unless global and identity caching are enabled. caching = True boolean value Toggle for identity caching. This has no effect unless global caching is enabled. 
There is typically no reason to disable this. default_domain_id = default string value This references the domain to use for all Identity API v2 requests (which are not aware of domains). A domain with this ID can optionally be created for you by keystone-manage bootstrap . The domain referenced by this ID cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API. There is nothing special about this domain, other than the fact that it must exist in order to maintain support for your v2 clients. There is typically no reason to change this value. domain_config_dir = /etc/keystone/domains string value Absolute path where keystone should locate domain-specific [identity] configuration files. This option has no effect unless [identity] domain_specific_drivers_enabled is set to true. There is typically no reason to change this value. domain_configurations_from_database = False boolean value By default, domain-specific configuration data is read from files in the directory identified by [identity] domain_config_dir . Enabling this configuration option allows you to instead manage domain-specific configurations through the API, which are then persisted in the backend (typically, a SQL database), rather than using configuration files on disk. domain_specific_drivers_enabled = False boolean value A subset (or all) of domains can have their own identity driver, each with their own partial configuration options, stored in either the resource backend or in a file in a domain configuration directory (depending on the setting of [identity] domain_configurations_from_database ). Only values specific to the domain need to be specified in this manner. This feature is disabled by default, but may be enabled by default in a future release; set to true to enable. driver = sql string value Entry point for the identity backend driver in the keystone.identity namespace. Keystone provides a sql and ldap driver. This option is also used as the default driver selection (along with the other configuration variables in this section) in the event that [identity] domain_specific_drivers_enabled is enabled, but no applicable domain-specific configuration is defined for the domain in question. Unless your deployment primarily relies on ldap AND is not using domain-specific configuration, you should typically leave this set to sql . list_limit = None integer value Maximum number of entities that will be returned in an identity collection. max_password_length = 4096 integer value Maximum allowed length for user passwords. Decrease this value to improve performance. Changing this value does not affect existing passwords. password_hash_algorithm = bcrypt string value The password hashing algorithm to use for passwords stored within keystone. password_hash_rounds = None integer value This option represents a trade off between security and performance. Higher values lead to slower performance, but higher security. Changing this option will only affect newly created passwords as existing password hashes already have a fixed number of rounds applied, so it is safe to tune this option in a running cluster. The default for bcrypt is 12, must be between 4 and 31, inclusive. The default for scrypt is 16, must be within range(1,32) . The default for pbkdf2_sha512 is 60000, must be within range(1,1<<32) WARNING: If using scrypt, increasing this value increases BOTH time AND memory requirements to hash a password. salt_bytesize = None integer value Number of bytes to use in scrypt and pbkdf2_sha512 hashing salt. 
Default for scrypt is 16 bytes. Default for pbkdf2_sha512 is 16 bytes. Limited to a maximum of 96 bytes due to the size of the column used to store password hashes. scrypt_block_size = None integer value Optional block size to pass to scrypt hash function (the r parameter). Useful for tuning scrypt to optimal performance for your CPU architecture. This option is only used when the password_hash_algorithm option is set to scrypt . Defaults to 8. scrypt_parallelism = None integer value Optional parallelism to pass to scrypt hash function (the p parameter). This option is only used when the password_hash_algorithm option is set to scrypt . Defaults to 1. 7.1.19. identity_mapping The following table outlines the options available under the [identity_mapping] group in the /etc/keystone/keystone.conf file. Table 7.18. identity_mapping Configuration option = Default value Type Description backward_compatible_ids = True boolean value The format of user and group IDs changed in Juno for backends that do not generate UUIDs (for example, LDAP), with keystone providing a hash mapping to the underlying attribute in LDAP. By default this mapping is disabled, which ensures that existing IDs will not change. Even when the mapping is enabled by using domain-specific drivers ( [identity] domain_specific_drivers_enabled ), any users and groups from the default domain being handled by LDAP will still not be mapped to ensure their IDs remain backward compatible. Setting this value to false will enable the new mapping for all backends, including the default LDAP driver. It is only guaranteed to be safe to enable this option if you do not already have assignments for users and groups from the default LDAP domain, and you consider it to be acceptable for Keystone to provide the different IDs to clients than it did previously (existing IDs in the API will suddenly change). Typically this means that the only time you can set this value to false is when configuring a fresh installation, although that is the recommended value. driver = sql string value Entry point for the identity mapping backend driver in the keystone.identity.id_mapping namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. generator = sha256 string value Entry point for the public ID generator for user and group entities in the keystone.identity.id_generator namespace. The Keystone identity mapper only supports generators that produce 64 bytes or less. Keystone only provides a sha256 entry point, so there is no reason to change this value unless you're providing a custom entry point. 7.1.20. jwt_tokens The following table outlines the options available under the [jwt_tokens] group in the /etc/keystone/keystone.conf file. Table 7.19. jwt_tokens Configuration option = Default value Type Description jws_private_key_repository = /etc/keystone/jws-keys/private string value Directory containing private keys for signing JWS tokens. This directory must exist in order for keystone's server process to start. It must also be readable by keystone's server process. It must contain at least one private key that corresponds to a public key in keystone.conf [jwt_tokens] jws_public_key_repository . In the event there are multiple private keys in this directory, keystone will use a key named private.pem to sign tokens. In the future, keystone may support the ability to sign tokens with multiple private keys. 
For now, only a key named private.pem within this directory is required to issue JWS tokens. This option is only applicable in deployments issuing JWS tokens and setting keystone.conf [token] provider = jws . jws_public_key_repository = /etc/keystone/jws-keys/public string value Directory containing public keys for validating JWS token signatures. This directory must exist in order for keystone's server process to start. It must also be readable by keystone's server process. It must contain at least one public key that corresponds to a private key in keystone.conf [jwt_tokens] jws_private_key_repository . This option is only applicable in deployments issuing JWS tokens and setting keystone.conf [token] provider = jws . 7.1.21. ldap The following table outlines the options available under the [ldap] group in the /etc/keystone/keystone.conf file. Table 7.20. ldap Configuration option = Default value Type Description alias_dereferencing = default string value The LDAP dereferencing option to use for queries involving aliases. A value of default falls back to using default dereferencing behavior configured by your ldap.conf . A value of never prevents aliases from being dereferenced at all. A value of searching dereferences aliases only after name resolution. A value of finding dereferences aliases only during name resolution. A value of always dereferences aliases in all cases. auth_pool_connection_lifetime = 60 integer value The maximum end user authentication connection lifetime to the LDAP server in seconds. When this lifetime is exceeded, the connection will be unbound and removed from the connection pool. This option has no effect unless [ldap] use_auth_pool is also enabled. auth_pool_size = 100 integer value The size of the connection pool to use for end user authentication. This option has no effect unless [ldap] use_auth_pool is also enabled. chase_referrals = None boolean value Sets keystone's referral chasing behavior across directory partitions. If left unset, the system's default behavior will be used. connection_timeout = -1 integer value The connection timeout to use with the LDAP server. A value of -1 means that connections will never timeout. debug_level = None integer value Sets the LDAP debugging level for LDAP calls. A value of 0 means that debugging is not enabled. This value is a bitmask, consult your LDAP documentation for possible values. group_ad_nesting = False boolean value If enabled, group queries will use Active Directory specific filters for nested groups. group_additional_attribute_mapping = [] list value A list of LDAP attribute to keystone group attribute pairs used for mapping additional attributes to groups in keystone. The expected format is <ldap_attr>:<group_attr> , where ldap_attr is the attribute in the LDAP object and group_attr is the attribute which should appear in the identity API. group_attribute_ignore = [] list value List of group attributes to ignore on create and update. or whether a specific group attribute should be filtered for list or show group. group_desc_attribute = description string value The LDAP attribute mapped to group descriptions in keystone. group_filter = None string value The LDAP search filter to use for groups. group_id_attribute = cn string value The LDAP attribute mapped to group IDs in keystone. This must NOT be a multivalued attribute. Group IDs are expected to be globally unique across keystone domains and URL-safe. 
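Combining a number of the options in this table (including several described further below), a hedged [ldap] snippet for a read-only directory backend might look like the following; all DNs, host names, and credentials are placeholders:
[ldap]
url = ldap://ldap.example.com
user = cn=admin,dc=example,dc=com
password = LDAP_BIND_PASSWORD
suffix = dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
group_tree_dn = ou=Groups,dc=example,dc=com
group_objectclass = groupOfNames
use_tls = true
tls_cacertfile = /etc/pki/tls/certs/ldap-ca.crt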
group_member_attribute = member string value The LDAP attribute used to indicate that a user is a member of the group. group_members_are_ids = False boolean value Enable this option if the members of the group object class are keystone user IDs rather than LDAP DNs. This is the case when using posixGroup as the group object class in Open Directory. group_name_attribute = ou string value The LDAP attribute mapped to group names in keystone. Group names are expected to be unique only within a keystone domain and are not expected to be URL-safe. group_objectclass = groupOfNames string value The LDAP object class to use for groups. If setting this option to posixGroup , you may also be interested in enabling the [ldap] group_members_are_ids option. group_tree_dn = None string value The search base to use for groups. Defaults to the [ldap] suffix value. page_size = 0 integer value Defines the maximum number of results per page that keystone should request from the LDAP server when listing objects. A value of zero ( 0 ) disables paging. password = None string value The password of the administrator bind DN to use when querying the LDAP server, if your LDAP server requires it. pool_connection_lifetime = 600 integer value The maximum connection lifetime to the LDAP server in seconds. When this lifetime is exceeded, the connection will be unbound and removed from the connection pool. This option has no effect unless [ldap] use_pool is also enabled. pool_connection_timeout = -1 integer value The connection timeout to use when pooling LDAP connections. A value of -1 means that connections will never timeout. This option has no effect unless [ldap] use_pool is also enabled. pool_retry_delay = 0.1 floating point value The number of seconds to wait before attempting to reconnect to the LDAP server. This option has no effect unless [ldap] use_pool is also enabled. pool_retry_max = 3 integer value The maximum number of times to attempt reconnecting to the LDAP server before aborting. A value of zero prevents retries. This option has no effect unless [ldap] use_pool is also enabled. pool_size = 10 integer value The size of the LDAP connection pool. This option has no effect unless [ldap] use_pool is also enabled. query_scope = one string value The search scope which defines how deep to search within the search base. A value of one (representing oneLevel or singleLevel ) indicates a search of objects immediately below to the base object, but does not include the base object itself. A value of sub (representing subtree or wholeSubtree ) indicates a search of both the base object itself and the entire subtree below it. suffix = cn=example,cn=com string value The default LDAP server suffix to use, if a DN is not defined via either [ldap] user_tree_dn or [ldap] group_tree_dn . tls_cacertdir = None string value An absolute path to a CA certificate directory to use when communicating with LDAP servers. There is no reason to set this option if you've also set [ldap] tls_cacertfile . tls_cacertfile = None string value An absolute path to a CA certificate file to use when communicating with LDAP servers. This option will take precedence over [ldap] tls_cacertdir , so there is no reason to set both. tls_req_cert = demand string value Specifies which checks to perform against client certificates on incoming TLS sessions. If set to demand , then a certificate will always be requested and required from the LDAP server. If set to allow , then a certificate will always be requested but not required from the LDAP server. 
If set to never , then a certificate will never be requested. url = ldap://localhost string value URL(s) for connecting to the LDAP server. Multiple LDAP URLs may be specified as a comma separated string. The first URL to successfully bind is used for the connection. use_auth_pool = True boolean value Enable LDAP connection pooling for end user authentication. There is typically no reason to disable this. use_pool = True boolean value Enable LDAP connection pooling for queries to the LDAP server. There is typically no reason to disable this. use_tls = False boolean value Enable TLS when communicating with LDAP servers. You should also set the [ldap] tls_cacertfile and [ldap] tls_cacertdir options when using this option. Do not set this option if you are using LDAP over SSL (LDAPS) instead of TLS. user = None string value The user name of the administrator bind DN to use when querying the LDAP server, if your LDAP server requires it. user_additional_attribute_mapping = [] list value A list of LDAP attribute to keystone user attribute pairs used for mapping additional attributes to users in keystone. The expected format is <ldap_attr>:<user_attr> , where ldap_attr is the attribute in the LDAP object and user_attr is the attribute which should appear in the identity API. user_attribute_ignore = ['default_project_id'] list value List of user attributes to ignore on create and update, or whether a specific user attribute should be filtered for list or show user. user_default_project_id_attribute = None string value The LDAP attribute mapped to a user's default_project_id in keystone. This is most commonly used when keystone has write access to LDAP. user_description_attribute = description string value The LDAP attribute mapped to user descriptions in keystone. user_enabled_attribute = enabled string value The LDAP attribute mapped to the user enabled attribute in keystone. If setting this option to userAccountControl , then you may be interested in setting [ldap] user_enabled_mask and [ldap] user_enabled_default as well. user_enabled_default = True string value The default value to enable users. This should match an appropriate integer value if the LDAP server uses non-boolean (bitmask) values to indicate if a user is enabled or disabled. If this is not set to True , then the typical value is 512 . This is typically used when [ldap] user_enabled_attribute = userAccountControl . user_enabled_emulation = False boolean value If enabled, keystone uses an alternative method to determine if a user is enabled or not by checking if they are a member of the group defined by the [ldap] user_enabled_emulation_dn option. Enabling this option causes keystone to ignore the value of [ldap] user_enabled_invert . user_enabled_emulation_dn = None string value DN of the group entry to hold enabled users when using enabled emulation. Setting this option has no effect unless [ldap] user_enabled_emulation is also enabled. user_enabled_emulation_use_group_config = False boolean value Use the [ldap] group_member_attribute and [ldap] group_objectclass settings to determine membership in the emulated enabled group. Enabling this option has no effect unless [ldap] user_enabled_emulation is also enabled. user_enabled_invert = False boolean value Logically negate the boolean value of the enabled attribute obtained from the LDAP server. Some LDAP servers use a boolean lock attribute where "true" means an account is disabled. Setting [ldap] user_enabled_invert = true will allow these lock attributes to be used. 
This option will have no effect if either the [ldap] user_enabled_mask or [ldap] user_enabled_emulation options are in use. user_enabled_mask = 0 integer value Bitmask integer to select which bit indicates the enabled value if the LDAP server represents "enabled" as a bit on an integer rather than as a discrete boolean. A value of 0 indicates that the mask is not used. If this is not set to 0 the typical value is 2 . This is typically used when [ldap] user_enabled_attribute = userAccountControl . Setting this option causes keystone to ignore the value of [ldap] user_enabled_invert . user_filter = None string value The LDAP search filter to use for users. user_id_attribute = cn string value The LDAP attribute mapped to user IDs in keystone. This must NOT be a multivalued attribute. User IDs are expected to be globally unique across keystone domains and URL-safe. user_mail_attribute = mail string value The LDAP attribute mapped to user emails in keystone. user_name_attribute = sn string value The LDAP attribute mapped to user names in keystone. User names are expected to be unique only within a keystone domain and are not expected to be URL-safe. user_objectclass = inetOrgPerson string value The LDAP object class to use for users. user_pass_attribute = userPassword string value The LDAP attribute mapped to user passwords in keystone. user_tree_dn = None string value The search base to use for users. Defaults to the [ldap] suffix value. 7.1.22. memcache The following table outlines the options available under the [memcache] group in the /etc/keystone/keystone.conf file. Table 7.21. memcache Configuration option = Default value Type Description dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. This is used by the key value store system. pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. This is used by the key value store system. pool_maxsize = 10 integer value Max total number of open connections to every memcached server. This is used by the key value store system. pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. This is used by the key value store system. socket_timeout = 3 integer value Timeout in seconds for every call to a server. This is used by the key value store system. Deprecated since: T *Reason:*This option is duplicated with oslo.cache. Configure ``keystone.conf [cache] memcache_socket_timeout`` option to set the socket_timeout of memcached instead. 7.1.23. oauth1 The following table outlines the options available under the [oauth1] group in the /etc/keystone/keystone.conf file. Table 7.22. oauth1 Configuration option = Default value Type Description access_token_duration = 86400 integer value Number of seconds for the OAuth Access Token to remain valid after being created. This is the amount of time the consumer has to interact with the service provider (which is typically keystone). Setting this option to zero means that access tokens will last forever. driver = sql string value Entry point for the OAuth backend driver in the keystone.oauth1 namespace. Typically, there is no reason to set this option unless you are providing a custom entry point. request_token_duration = 28800 integer value Number of seconds for the OAuth Request Token to remain valid after being created. This is the amount of time the user has to authorize the token. 
Setting this option to zero means that request tokens will last forever. 7.1.24. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/keystone/keystone.conf file. Table 7.23. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. 
Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 7.1.25. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/keystone/keystone.conf file. Table 7.24. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 7.1.26. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/keystone/keystone.conf file. Table 7.25. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 7.1.27. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/keystone/keystone.conf file. Table 7.26. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native python thread. By default, if this option isn't provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. 
This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 7.1.28. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/keystone/keystone.conf file. Table 7.27. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 7.1.29. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/keystone/keystone.conf file. Table 7.28. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. 
If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 7.1.30. policy The following table outlines the options available under the [policy] group in the /etc/keystone/keystone.conf file. Table 7.29. policy Configuration option = Default value Type Description driver = sql string value Entry point for the policy backend driver in the keystone.policy namespace. Supplied drivers are rules (which does not support any CRUD operations for the v3 policy API) and sql . Typically, there is no reason to set this option unless you are providing a custom entry point. list_limit = None integer value Maximum number of entities that will be returned in a policy collection. 7.1.31. profiler The following table outlines the options available under the [profiler] group in the /etc/keystone/keystone.conf file. Table 7.30. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. 
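As a brief illustration of the [profiler] options in this table (including hmac_keys, described below), a hedged snippet that enables OSprofiler tracing with a placeholder secret and a Redis collector:
[profiler]
enabled = true
hmac_keys = SECRET_KEY
connection_string = redis://127.0.0.1:6379
trace_sqlalchemy = false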
filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separate place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both the "enabled" flag and the "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services, at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 7.1.32. receipt The following table outlines the options available under the [receipt] group in the /etc/keystone/keystone.conf file. Table 7.31. receipt Configuration option = Default value Type Description cache_on_issue = True boolean value Enable storing issued receipt data to receipt validation cache so that first receipt validation doesn't actually cause full validation cycle. This option has no effect unless global caching and receipt caching are enabled. cache_time = 300 integer value The number of seconds to cache receipt creation and validation data. This has no effect unless both global and [receipt] caching are enabled. caching = True boolean value Toggle for caching receipt creation and validation data. This has no effect unless global caching is enabled, or if cache_on_issue is disabled as we only cache receipts on issue. expiration = 300 integer value The amount of time that a receipt should remain valid (in seconds). This value should always be very short, as it represents how long a user has to reattempt auth with the missing auth methods. provider = fernet string value Entry point for the receipt provider in the keystone.receipt.provider namespace. The receipt provider controls the receipt construction and validation operations. Keystone includes just the fernet receipt provider for now. fernet receipts do not need to be persisted at all, but require that you run keystone-manage fernet_setup (also see the keystone-manage fernet_rotate command). 7.1.33. resource The following table outlines the options available under the [resource] group in the /etc/keystone/keystone.conf file. Table 7.32. 
resource Configuration option = Default value Type Description admin_project_domain_name = None string value Name of the domain that owns the admin_project_name . If left unset, then there is no admin project. [resource] admin_project_name must also be set to use this option. admin_project_name = None string value This is a special project which represents cloud-level administrator privileges across services. Tokens scoped to this project will contain a true is_admin_project attribute to indicate to policy systems that the role assignments on that specific project should apply equally across every project. If left unset, then there is no admin project, and thus no explicit means of cross-project role assignments. [resource] admin_project_domain_name must also be set to use this option. cache_time = None integer value Time to cache resource data in seconds. This has no effect unless global caching is enabled. caching = True boolean value Toggle for resource caching. This has no effect unless global caching is enabled. domain_name_url_safe = off string value This controls whether the names of domains are restricted from containing URL-reserved characters. If set to new , attempts to create or update a domain with a URL-unsafe name will fail. If set to strict , attempts to scope a token with a URL-unsafe domain name will fail, thereby forcing all domain names to be updated to be URL-safe. driver = sql string value Entry point for the resource driver in the keystone.resource namespace. Only a sql driver is supplied by keystone. Unless you are writing proprietary drivers for keystone, you do not need to set this option. Deprecated since: P *Reason:*Non-SQL resource cannot be used with SQL Identity and has been unable to be used since Ocata. SQL Resource backend is a requirement as of Pike. Setting this option no longer has an effect on how Keystone operates. list_limit = None integer value Maximum number of entities that will be returned in a resource collection. project_name_url_safe = off string value This controls whether the names of projects are restricted from containing URL-reserved characters. If set to new , attempts to create or update a project with a URL-unsafe name will fail. If set to strict , attempts to scope a token with a URL-unsafe project name will fail, thereby forcing all project names to be updated to be URL-safe. 7.1.34. revoke The following table outlines the options available under the [revoke] group in the /etc/keystone/keystone.conf file. Table 7.33. revoke Configuration option = Default value Type Description cache_time = 3600 integer value Time to cache the revocation list and the revocation events (in seconds). This has no effect unless global and [revoke] caching are both enabled. caching = True boolean value Toggle for revocation event caching. This has no effect unless global caching is enabled. driver = sql string value Entry point for the token revocation backend driver in the keystone.revoke namespace. Keystone only provides a sql driver, so there is no reason to set this option unless you are providing a custom entry point. expiration_buffer = 1800 integer value The number of seconds after a token has expired before a corresponding revocation event may be purged from the backend. 7.1.35. role The following table outlines the options available under the [role] group in the /etc/keystone/keystone.conf file. Table 7.34. role Configuration option = Default value Type Description cache_time = None integer value Time to cache role data, in seconds. 
This has no effect unless both global caching and [role] caching are enabled. caching = True boolean value Toggle for role caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = None string value Entry point for the role backend driver in the keystone.role namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point. list_limit = None integer value Maximum number of entities that will be returned in a role collection. This may be useful to tune if you have a large number of discrete roles in your deployment. 7.1.36. saml The following table outlines the options available under the [saml] group in the /etc/keystone/keystone.conf file. Table 7.35. saml Configuration option = Default value Type Description assertion_expiration_time = 3600 integer value Determines the lifetime for any SAML assertions generated by keystone, using NotOnOrAfter attributes. certfile = /etc/keystone/ssl/certs/signing_cert.pem string value Absolute path to the public certificate file to use for SAML signing. The value cannot contain a comma ( , ). idp_contact_company = Example, Inc. string value This is the company name of the identity provider's contact person. idp_contact_email = [email protected] string value This is the email address of the identity provider's contact person. idp_contact_name = SAML Identity Provider Support string value This is the given name of the identity provider's contact person. idp_contact_surname = Support string value This is the surname of the identity provider's contact person. idp_contact_telephone = +1 800 555 0100 string value This is the telephone number of the identity provider's contact person. idp_contact_type = other string value This is the type of contact that best describes the identity provider's contact person. idp_entity_id = None uri value This is the unique entity identifier of the identity provider (keystone) to use when generating SAML assertions. This value is required to generate identity provider metadata and must be a URI (a URL is recommended). For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/idp . idp_lang = en string value This is the language used by the identity provider's organization. idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml string value Absolute path to the identity provider metadata file. This file should be generated with the keystone-manage saml_idp_metadata command. There is typically no reason to change this value. idp_organization_display_name = OpenStack SAML Identity Provider string value This is the name of the identity provider's organization to be displayed. idp_organization_name = SAML Identity Provider string value This is the name of the identity provider's organization. idp_organization_url = https://example.com/ uri value This is the URL of the identity provider's organization. The URL referenced here should be useful to humans. idp_sso_endpoint = None uri value This is the single sign-on (SSO) service location of the identity provider which accepts HTTP POST requests. A value is required to generate identity provider metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/sso . keyfile = /etc/keystone/ssl/private/signing_key.pem string value Absolute path to the private key file to use for SAML signing. The value cannot contain a comma ( , ). 
relay_state_prefix = ss:mem: string value The prefix of the RelayState SAML attribute to use when generating enhanced client and proxy (ECP) assertions. In a typical deployment, there is no reason to change this value. xmlsec1_binary = xmlsec1 string value Name of, or absolute path to, the binary to be used for XML signing. Although only the XML Security Library ( xmlsec1 ) is supported, it may have a non-standard name or path on your system. If keystone cannot find the binary itself, you may need to install the appropriate package, use this option to specify an absolute path, or adjust keystone's PATH environment variable. 7.1.37. security_compliance The following table outlines the options available under the [security_compliance] group in the /etc/keystone/keystone.conf file. Table 7.36. security_compliance Configuration option = Default value Type Description change_password_upon_first_use = False boolean value Enabling this option requires users to change their password when the user is created, or upon administrative reset. Before accessing any services, affected users will have to change their password. To ignore this requirement for specific users, such as service users, set the options attribute ignore_change_password_upon_first_use to True for the desired user via the update user API. This feature is disabled by default. This feature is only applicable with the sql backend for the [identity] driver . disable_user_account_days_inactive = None integer value The maximum number of days a user can go without authenticating before being considered "inactive" and automatically disabled (locked). This feature is disabled by default; set any value to enable it. This feature depends on the sql backend for the [identity] driver . When a user exceeds this threshold and is considered "inactive", the user's enabled attribute in the HTTP API may not match the value of the user's enabled column in the user table. lockout_duration = 1800 integer value The number of seconds a user account will be locked when the maximum number of failed authentication attempts (as specified by [security_compliance] lockout_failure_attempts ) is exceeded. Setting this option will have no effect unless you also set [security_compliance] lockout_failure_attempts to a non-zero value. This feature depends on the sql backend for the [identity] driver . lockout_failure_attempts = None integer value The maximum number of times that a user can fail to authenticate before the user account is locked for the number of seconds specified by [security_compliance] lockout_duration . This feature is disabled by default. If this feature is enabled and [security_compliance] lockout_duration is not set, then users may be locked out indefinitely until the user is explicitly enabled via the API. This feature depends on the sql backend for the [identity] driver . minimum_password_age = 0 integer value The number of days that a password must be used before the user can change it. This prevents users from changing their passwords immediately in order to wipe out their password history and reuse an old password. This feature does not prevent administrators from manually resetting passwords. It is disabled by default and allows for immediate password changes. This feature depends on the sql backend for the [identity] driver . Note: If [security_compliance] password_expires_days is set, then the value for this option should be less than the password_expires_days . 
password_expires_days = None integer value The number of days for which a password will be considered valid before requiring it to be changed. This feature is disabled by default. If enabled, new password changes will have an expiration date; however, existing passwords would not be impacted. This feature depends on the sql backend for the [identity] driver . password_regex = None string value The regular expression used to validate password strength requirements. By default, the regular expression will match any password. The following is an example of a pattern which requires at least 1 letter, 1 digit, and a minimum length of 7 characters: ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$ This feature depends on the sql backend for the [identity] driver . password_regex_description = None string value Describe your password regular expression here in language for humans. If a password fails to match the regular expression, the contents of this configuration variable will be returned to users to explain why their requested password was insufficient. unique_last_password_count = 0 integer value This controls the number of user password iterations to keep in history, in order to enforce that newly created passwords are unique. The total number which includes the new password should not be greater than or equal to this value. Setting the value to zero (the default) disables this feature. Thus, to enable this feature, values must be greater than 0. This feature depends on the sql backend for the [identity] driver . 7.1.38. shadow_users The following table outlines the options available under the [shadow_users] group in the /etc/keystone/keystone.conf file. Table 7.37. shadow_users Configuration option = Default value Type Description driver = sql string value Entry point for the shadow users backend driver in the keystone.identity.shadow_users namespace. This driver is used for persisting local user references to externally-managed identities (via federation, LDAP, etc). Keystone only provides a sql driver, so there is no reason to change this option unless you are providing a custom entry point. 7.1.39. token The following table outlines the options available under the [token] group in the /etc/keystone/keystone.conf file. Table 7.38. token Configuration option = Default value Type Description allow_expired_window = 172800 integer value This controls the number of seconds that a token can be retrieved for beyond the built-in expiry time. This allows long running operations to succeed. Defaults to two days. allow_rescope_scoped_token = True boolean value This toggles whether scoped tokens may be re-scoped to a new project or domain, thereby preventing users from exchanging a scoped token (including those with a default project scope) for any other token. This forces users to either authenticate for unscoped tokens (and later exchange that unscoped token for tokens with a more specific scope) or to provide their credentials in every request for a scoped token to avoid re-scoping altogether. cache_on_issue = True boolean value Enable storing issued token data to token validation cache so that first token validation doesn't actually cause full validation cycle. This option has no effect unless global caching is enabled and will still cache tokens even if [token] caching = False . Deprecated since: S Reason: Keystone already exposes a configuration option for caching tokens. 
Having a separate configuration option to cache tokens when they are issued is redundant, unnecessarily complicated, and is misleading if token caching is disabled because tokens will still be pre-cached by default when they are issued. The ability to pre-cache tokens when they are issued is going to rely exclusively on the ``keystone.conf [token] caching`` option in the future. cache_time = None integer value The number of seconds to cache token creation and validation data. This has no effect unless both global and [token] caching are enabled. caching = True boolean value Toggle for caching token creation and validation data. This has no effect unless global caching is enabled. expiration = 3600 integer value The amount of time that a token should remain valid (in seconds). Drastically reducing this value may break "long-running" operations that involve multiple services to coordinate together, and will force users to authenticate with keystone more frequently. Drastically increasing this value will increase the number of tokens that will be simultaneously valid. Keystone tokens are also bearer tokens, so a shorter duration will also reduce the potential security impact of a compromised token. provider = fernet string value Entry point for the token provider in the keystone.token.provider namespace. The token provider controls the token construction, validation, and revocation operations. Supported upstream providers are fernet and jws . Neither fernet or jws tokens require persistence and both require additional setup. If using fernet , you're required to run keystone-manage fernet_setup , which creates symmetric keys used to encrypt tokens. If using jws , you're required to generate an ECDSA keypair using a SHA-256 hash algorithm for signing and validating token, which can be done with keystone-manage create_jws_keypair . Note that fernet tokens are encrypted and jws tokens are only signed. Please be sure to consider this if your deployment has security requirements regarding payload contents used to generate token IDs. revoke_by_id = True boolean value This toggles support for revoking individual tokens by the token identifier and thus various token enumeration operations (such as listing all tokens issued to a specific user). These operations are used to determine the list of tokens to consider revoked. Do not disable this option if you're using the kvs [revoke] driver . 7.1.40. tokenless_auth The following table outlines the options available under the [tokenless_auth] group in the /etc/keystone/keystone.conf file. Table 7.39. tokenless_auth Configuration option = Default value Type Description issuer_attribute = SSL_CLIENT_I_DN string value The name of the WSGI environment variable used to pass the issuer of the client certificate to keystone. This attribute is used as an identity provider ID for the X.509 tokenless authorization along with the protocol to look up its corresponding mapping. In a typical deployment, there is no reason to change this value. protocol = x509 string value The federated protocol ID used to represent X.509 tokenless authorization. This is used in combination with the value of [tokenless_auth] issuer_attribute to find a corresponding federated mapping. In a typical deployment, there is no reason to change this value. trusted_issuer = [] multi valued The list of distinguished names which identify trusted issuers of client certificates allowed to use X.509 tokenless authorization. If the option is absent then no certificates will be allowed. 
The format for the values of a distinguished name (DN) must be separated by a comma and contain no spaces. Furthermore, because an individual DN may contain commas, this configuration option may be repeated multiple times to represent multiple values. For example, keystone.conf would include two consecutive lines in order to trust two different DNs, such as trusted_issuer = CN=john,OU=keystone,O=openstack and trusted_issuer = CN=mary,OU=eng,O=abc . 7.1.41. totp The following table outlines the options available under the [totp] group in the /etc/keystone/keystone.conf file. Table 7.40. totp Configuration option = Default value Type Description included_previous_windows = 1 integer value The number of windows to check when processing TOTP passcodes. 7.1.42. trust The following table outlines the options available under the [trust] group in the /etc/keystone/keystone.conf file. Table 7.41. trust Configuration option = Default value Type Description allow_redelegation = False boolean value Allows authorization to be redelegated from one user to another, effectively chaining trusts together. When disabled, the remaining_uses attribute of a trust is constrained to be zero. driver = sql string value Entry point for the trust backend driver in the keystone.trust namespace. Keystone only provides a sql driver, so there is no reason to change this unless you are providing a custom entry point. max_redelegation_count = 3 integer value Maximum number of times that authorization can be redelegated from one user to another in a chain of trusts. This number may be reduced further for a specific trust. 7.1.43. unified_limit The following table outlines the options available under the [unified_limit] group in the /etc/keystone/keystone.conf file. Table 7.42. unified_limit Configuration option = Default value Type Description cache_time = None integer value Time to cache unified limit data, in seconds. This has no effect unless both global caching and [unified_limit] caching are enabled. caching = True boolean value Toggle for unified limit caching. This has no effect unless global caching is enabled. In a typical deployment, there is no reason to disable this. driver = sql string value Entry point for the unified limit backend driver in the keystone.unified_limit namespace. Keystone only provides a sql driver, so there's no reason to change this unless you are providing a custom entry point. enforcement_model = flat string value The enforcement model to use when validating limits associated to projects. Enforcement models will behave differently depending on the existing limits, which may result in backwards incompatible changes if a model is switched in a running deployment. list_limit = None integer value Maximum number of entities that will be returned in a role collection. This may be useful to tune if you have a large number of unified limits in your deployment. 7.1.44. wsgi The following table outlines the options available under the [wsgi] group in the /etc/keystone/keystone.conf file. Table 7.43. wsgi Configuration option = Default value Type Description debug_middleware = False boolean value If set to true, this enables the oslo debug middleware in Keystone. This Middleware prints a lot of information about the request and the response. It is useful for getting information about the data on the wire (decoded) and passed to the WSGI application pipeline. 
This middleware has no effect on the "debug" setting in the [DEFAULT] section of the config file or setting Keystone's log-level to "DEBUG"; it is specific to debugging the WSGI data as it enters and leaves Keystone (specific request-related data). This option is used for introspection on the request and response data between the web server (apache, nginx, etc) and Keystone. This middleware is inserted as the first element in the middleware chain and will show the data closest to the wire. WARNING: NOT INTENDED FOR USE IN PRODUCTION. THIS MIDDLEWARE CAN AND WILL EMIT SENSITIVE/PRIVILEGED DATA.
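As an illustration of how these groups fit together, the following is a minimal example excerpt from /etc/keystone/keystone.conf. The group and option names are taken from the tables in this reference, but the values are example choices for a hypothetical deployment, not recommended defaults:

[token]
provider = fernet
expiration = 7200

[oslo_policy]
enforce_scope = True

[security_compliance]
lockout_failure_attempts = 5
lockout_duration = 1800
password_regex = ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$
password_regex_description = Passwords must be at least 7 characters long and contain at least one letter and one digit.

After editing keystone.conf, restart the keystone service for the changes to take effect.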
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/keystone
|
Chapter 82. ClientTls schema reference
|
Chapter 82. ClientTls schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of ClientTls schema properties Configures TLS trusted certificates for connecting KafkaConnect, KafkaBridge, KafkaMirrorMaker, and KafkaMirrorMaker2 to the cluster. 82.1. ClientTls schema properties Property Property type Description trustedCertificates CertSecretSource array Trusted certificates for TLS connection.
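For reference, ClientTls configuration is nested under the tls property of the resources listed above. The following hypothetical KafkaConnect excerpt shows the typical shape; the Secret name my-cluster-cluster-ca-cert and the key ca.crt are placeholders for whichever Secret holds your cluster CA certificate:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt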
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-clienttls-reference
|
Jenkins
|
Jenkins OpenShift Dedicated 4 Contains information about Jenkins for OpenShift Dedicated Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/jenkins/index
|
Chapter 5. Managing Red Hat Subscriptions
|
Chapter 5. Managing Red Hat Subscriptions Red Hat Satellite can import content from the Red Hat Content Delivery Network (CDN). Satellite requires a Red Hat subscription manifest to find, access, and download content from the corresponding repositories. You must have a Red Hat subscription manifest containing a subscription allocation for each organization on Satellite Server. All subscription information is available in your Red Hat Customer Portal account. Before you can complete the tasks in this chapter, you must create a Red Hat subscription manifest in the Customer Portal. Note that the entitlement-based subscription model is deprecated and will be removed in a future release. Red Hat recommends that you use the access-based subscription model of Simple Content Access instead. To create, manage, and export a Red Hat subscription manifest in the Customer Portal, see Using Manifests in the Using Red Hat Subscription Management guide. Use this chapter to import a Red Hat subscription manifest and manage the manifest within the Satellite web UI. Subscription Allocations and Organizations You can manage more than one organization if you have more than one subscription allocation. Satellite requires a single allocation for each organization configured in Satellite Server. The advantage of this is that each organization maintains separate subscriptions so that you can support multiple organizations, each with their own Red Hat accounts. Future-Dated subscriptions You can use future-dated subscriptions in a subscription allocation. When you add future-dated subscriptions to content hosts before the expiry date of the existing subscriptions, you can have uninterrupted access to repositories. Manually attach the future-dated subscriptions to your content hosts before the current subscriptions expire. Do not rely on the auto-attach method because this method is designed for a different purpose and might not work. For more information, see Section 5.6, "Attaching Red Hat Subscriptions to Content Hosts" . 5.1. Importing a Red Hat Subscription Manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Prerequisites You must have a Red Hat subscription manifest file exported from the Customer Portal. For more information, see Creating and Managing Manifests in Using Red Hat Subscription Management . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Browse . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . If the Manage Manifest window does not close automatically, click Close to return to the Subscriptions window. CLI procedure Copy the Red Hat subscription manifest file from your client to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in the Content Management guide. 5.2. Locating a Red Hat Subscription When you import a Red Hat subscription manifest into Satellite Server, the subscriptions from your manifest are listed in the Subscriptions window. If you have a high volume of subscriptions, you can filter the results to find a specific subscription. Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. 
For more information, see Section 5.1, "Importing a Red Hat Subscription Manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click the Search field to view the list of search criteria for building your search query. Select search criteria to display further options. When you have built your search query, click the search icon. For example, if you place your cursor in the Search field and select expires , then press the space bar, another list appears with the options of placing a > , < , or = character. If you select > and press the space bar, another list of automatic options appears. You can also enter your own criteria. 5.3. Adding Red Hat Subscriptions to Subscription Allocations Use the following procedure to add Red Hat subscriptions to a subscription allocation in the Satellite web UI. Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 5.1, "Importing a Red Hat Subscription Manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click Add Subscriptions . On the row of each subscription you want to add, enter the quantity in the Quantity to Allocate column. Click Submit 5.4. Removing Red Hat Subscriptions from Subscription Allocations Use the following procedure to remove Red Hat subscriptions from a subscription allocation in the Satellite web UI. Note Manifests must not be deleted. If you delete the manifest from the Red Hat Customer Portal or in the Satellite web UI, all of the entitlements for all of your content hosts will be removed. Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 5.1, "Importing a Red Hat Subscription Manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . On the row of each subscription you want to remove, select the corresponding checkbox. Click Delete , and then confirm deletion. 5.5. Updating and Refreshing Red Hat Subscription Manifests Every time that you change a subscription allocation, you must refresh the manifest to reflect these changes. For example, you must refresh the manifest if you take any of the following actions: Renewing a subscription Adjusting subscription quantities Purchasing additional subscriptions You can refresh the manifest directly in the Satellite web UI. Alternatively, you can import an updated manifest that contains the changes. Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click Manage Manifest . In the Manage Manifest window, click Refresh . 5.6. Attaching Red Hat Subscriptions to Content Hosts Using activation keys is the main method to attach subscriptions to content hosts during the provisioning process. However, an activation key cannot update an existing host. If you need to attach new or additional subscriptions, such as future-dated subscriptions, to one host, use the following procedure. 
For more information about updating multiple hosts, see Section 5.7, "Updating Red Hat Subscriptions on Multiple Hosts" . For more information about activation keys, see Chapter 10, Managing Activation Keys . Satellite Subscriptions In Satellite, you must maintain a Red Hat Enterprise Linux Satellite subscription, formerly known as Red Hat Enterprise Linux Smart Management, for every Red Hat Enterprise Linux host that you want to manage. However, you are not required to attach Satellite subscriptions to each content host. Satellite subscriptions cannot attach automatically to content hosts in Satellite because they are not associated with any product certificates. Adding a Satellite subscription to a content host does not provide any content or repository access. If you want, you can add a Satellite subscription to a manifest for your own recording or tracking purposes. Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Hosts > Content Hosts . On the row of each content host whose subscription you want to change, select the corresponding checkbox. From the Select Action list, select Manage Subscriptions . Optionally, enter a key and value in the Search field to filter the subscriptions displayed. Select the checkbox to the left of the subscriptions that you want to add or remove and click Add Selected or Remove Selected as required. Click Done to save the changes. CLI procedure Connect to Satellite Server as the root user, and then list the available subscriptions: Attach a subscription to the host: 5.7. Updating Red Hat Subscriptions on Multiple Hosts Use this procedure for post-installation changes to multiple content hosts at the same time. Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Hosts > Content Hosts . On the row of each content host whose subscription you want to change, select the corresponding checkbox. From the Select Action list, select Manage Subscriptions . Optionally, enter a key and value in the Search field to filter the subscriptions displayed. Select the checkbox to the left of the subscriptions to be added or removed and click Add Selected or Remove Selected as required. Click Done to save the changes.
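As a sketch of the CLI flow, reusing only the hammer commands shown in this chapter, you can list the subscriptions in an organization, note the ID of the subscription you need, and attach it to a host. The organization ID, host name, and subscription ID below are placeholders:

hammer subscription list --organization-id 1
hammer host subscription attach --host host01.example.com --subscription-id 42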
|
[
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"",
"hammer subscription list --organization-id 1",
"hammer host subscription attach --host host_name --subscription-id subscription_id"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/managing_red_hat_subscriptions_content-management
|
4.105. jasper
|
4.105. jasper 4.105.1. RHSA-2011:1807 - Important: jasper security update Updated jasper packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. JasPer is an implementation of Part 1 of the JPEG 2000 image compression standard. Security Fix CVE-2011-4516 , CVE-2011-4517 Two heap-based buffer overflow flaws were found in the way JasPer decoded JPEG 2000 compressed image files. An attacker could create a malicious JPEG 2000 compressed image file that, when opened, would cause applications that use JasPer (such as Nautilus) to crash or, potentially, execute arbitrary code. Red Hat would like to thank Jonathan Foote of the CERT Coordination Center for reporting these issues. Users are advised to upgrade to these updated packages, which contain a backported patch to correct these issues. All applications using the JasPer libraries (such as Nautilus) must be restarted for the update to take effect.
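For example, on a system that is registered to receive updates, you can check which JasPer packages are installed and upgrade them with yum; the package names shown are the usual RHEL 6 JasPer packages and may differ on your system:

rpm -qa | grep jasper
yum update jasper jasper-libs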
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/jasper
|
Chapter 1. Required Infrastructure Component Versions
|
Chapter 1. Required Infrastructure Component Versions When you work with Red Hat support for Spring Boot, you can use the following components. However, Red Hat does not provide support for components listed below except Red Hat OpenShift cluster and Red Hat OpenJDK. Required components The following components are required to build and develop applications using Spring Boot. Table 1.1. Required components Component name Version Maven 3.6.3 JDK [a] OpenJDK 8, OpenJDK 11, OpenJDK 17 [a] A full JDK installation is required because JRE does not provide tools for compiling Java applications from source. Optional components Red Hat recommends using the following components depending on your development and production environments. Table 1.2. Optional components Component name Version git 2.0 or later oc command line tool 4.12 or later [a] Access to a Red Hat OpenShift cluster [b] 4.12, 4.13 [a] The version of the oc CLI tool should correspond to the version of OCP that you are using. [b] OpenShift cluster is supported by Red Hat.
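To confirm that your environment matches these versions, you can query each tool from the command line using its standard version option:

mvn -v
java -version
git --version
oc version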
| null |
https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/release_notes_for_spring_boot_2.7/required-infrastructure-component-versions
|
probe::irq_handler.entry
|
probe::irq_handler.entry Name probe::irq_handler.entry - Execution of interrupt handler starting Synopsis irq_handler.entry Values
next_irqaction - pointer to irqaction for shared interrupts
thread_fn - interrupt handler function for threaded interrupts
thread - thread pointer for threaded interrupts
thread_flags - Flags related to thread
irq - irq number
flags_str - symbolic string representation of IRQ flags
dev_name - name of device
action - struct irqaction* for this interrupt num
dir - pointer to the proc/irq/NN/name entry
flags - Flags for IRQ handler
dev_id - Cookie to identify device
handler - interrupt handler function
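As a minimal illustration (not part of the reference itself), the probe can be attached from the command line to print the interrupt number and its flags for each handler invocation; this assumes SystemTap is installed and set up for the running kernel, and the probe runs until you press Ctrl+C:

stap -e 'probe irq_handler.entry { printf("irq %d flags %s\n", irq, flags_str) }'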
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-irq-handler-entry
|
Chapter 8. Providing public access to an instance
|
Chapter 8. Providing public access to an instance New instances automatically receive a port with a fixed IP address on the network that the instance is assigned to. This IP address is private and is permanently associated with the instance until the instance is deleted. The fixed IP address is used for communication between instances. You can connect a public instance directly to a shared external network where a public IP address is directly assigned to the instance. This is useful if you are working in a private cloud. You can also provide public access to an instance through a project network that has a routed connection to an external provider network. This is the preferred method if you are working in a public cloud, or when public IP addresses are limited. To provide public access through the project network, the project network must be connected to a router with the gateway set to the external network. For external traffic to reach the instance, the cloud user must associate a floating IP address with the instance. To provide access to and from an instance, whether it is connected to a shared external network or a routed provider network, you must use a security group with the required protocols, such as SSH, ICMP, or HTTP. You must also pass a key pair to the instance during creation, so that you can access the instance remotely. Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command, for example: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: 8.1. Prerequisites The external network must have a subnet to provide the floating IP addresses. The project network must be connected to a router that has the external network configured as the gateway. A security group with the required protocols must be available for your project. For more information see Configuring security groups in Managing network resources . 8.2. Securing instance access with security groups and key pairs Security groups are sets of IP filter rules that control network and protocol access to and from instances, such as ICMP to allow you to ping an instance, and SSH to allow you to connect to an instance. All projects have a default security group called default , which is used when you do not specify a security group for your instances. By default, the default security group allows all outgoing traffic and denies all incoming traffic from any source other than instances in the same security group. You can apply one or more security groups to an instance during instance creation. To apply a security group to a running instance, apply the security group to a port attached to the instance. For more information on security groups, see Configuring security groups in Managing network resources . Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command, for example: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. 
To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port in Creating and managing instances . Key pairs are SSH or x509 credentials that are injected into an instance when it is launched to enable remote access to the instance. You can create new key pairs in RHOSP, or import existing key pairs. Each user should have at least one key pair. The key pair can be used for multiple instances. Note You cannot share key pairs between users in a project because each key pair belongs to the individual user that created or imported the key pair, rather than to the project. 8.2.1. Adding a security group to a port The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group to a port on a running instance. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Determine the port on the instance that you want to apply the security group to: Apply the security group to the port: Replace <sec_group> with the name or ID of the security group you want to apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 8.2.2. Removing a security group from a port To remove a security group from a port you need to first remove all the security groups, then re-add the security groups that you want to remain assigned to the port. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure List all the security groups associated with the port and record the IDs of the security groups that you want to remain associated with the port: Remove all the security groups associated with the port: Re-apply the security groups to the port: Replace <sec_group> with the ID of the security group that you want to re-apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 8.2.3. Generating a new SSH key pair You can create a new SSH key pair for use within your project. Note Use a x509 certificate to create a key pair for a Windows instance. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Create the key pair and save the private key in your local .ssh directory: Replace <keypair> with the name of your new key pair. Protect the private key: 8.2.4. Importing an existing SSH key pair You can import an SSH key to your project that you created outside of Red Hat OpenStack Services on OpenShift (RHOSO) by providing the public key file when you create a new key pair. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. 
Procedure Create the key pair from the existing public key file and save the private key in your local .ssh directory: Replace <private_key> with the name of the public key file that you want to use to create the key pair. Replace <keypair> with the name of your new key pair. Protect the private key: 8.2.5. Additional resources Configuring security groups in Managing network resources . Project security management in Performing security operations . 8.3. Assigning a floating IP address to an instance You can assign a public floating IP address to an instance to enable communication with networks outside the cloud, including the Internet. The cloud administrator configures the available pool of floating IP addresses for an external network. You can allocate a floating IP address from this pool to your project, then associate the floating IP address with your instance. Projects have a limited quota of floating IP addresses that can be used by instances in the project, 50 by default. Therefore, release IP addresses for reuse when you no longer need them. Prerequisites The instance must be on an external network, or on a project network that is connected to a router that has the external network configured as the gateway. The external network that the instance will connect to must have a subnet to provide the floating IP addresses. The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Check the floating IP addresses that are allocated to the current project: If there are no floating IP addresses available that you want to use, allocate a floating IP address to the current project from the external network allocation pool: Replace <provider-network> with the name or ID of the external network that you want to use to provide external access. Tip By default, a floating IP address is randomly allocated from the pool of the external network. A cloud administrator can use the --floating-ip-address option to allocate a specific floating IP address from an external network. Assign the floating IP address to an instance: Replace <instance> with the name or ID of the instance that you want to provide public access to. Replace <floating_ip> with the floating IP address that you want to assign to the instance. Optional: Replace <ip_address> with the IP address of the interface that you want to attach the floating IP to. By default, this attaches the floating IP address to the first port. Verify that the floating IP address has been assigned to the instance: Additional resources Creating floating IP pools in the Managing networking resources guide. 8.4. Disassociating a floating IP address from an instance When the instance no longer needs public access, disassociate it from the instance and return it to the allocation pool. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Disassociate the floating IP address from the instance: Replace <instance> with the name or ID of the instance that you want to remove public access from. Replace <floating_ip> with the floating IP address that is assigned to the instance. Release the floating IP address back into the allocation pool: Confirm the floating IP address is deleted and is no longer available for assignment: 8.5. 
Creating an instance with SSH access You can provide SSH access to an instance by specifying a key pair when you create the instance. Key pairs are SSH or x509 credentials that are injected into an instance when it is launched. Each project should have at least one key pair. A key pair belongs to an individual user, not to a project. Note You cannot associate a key pair with an instance after the instance has been created. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port in Creating and managing instances . Prerequisites A key pair is available that you can use to SSH into your instances. For more information, see Generating a new SSH key pair . The network that you plan to create your instance on must be an external network, or a project network connected to a router that has the external network configured as the gateway. For more information, see Adding a router in the Configuring Red Hat OpenStack Platform networking guide. The external network that the instance connects to must have a subnet to provide the floating IP addresses. The security group allows SSH access to instances. For more information, see Securing instance access with security groups and key pairs . The image that the instance is based on contains the cloud-init package to inject the SSH public key into the instance. A floating IP address is available to assign to your instance. For more information, see Assigning a floating IP address to an instance . The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Retrieve the name or ID of the flavor that has the hardware profile that your instance requires: Note Choose a flavor with sufficient size for the image to successfully boot, otherwise the instance will fail to launch. Retrieve the name or ID of the image that has the software profile that your instance requires: If the image you require is not available, you can download or create a new image. For information about creating or downloading cloud images, see Creating RHEL KVM images in Performing storage operations . Retrieve the name or ID of the network that you want to connect your instance to: Retrieve the name of the key pair that you want to use to access your instance remotely: Create your instance with SSH access: Replace <flavor> with the name or ID of the flavor that you retrieved in step 1. Replace <image> with the name or ID of the image that you retrieved in step 2. Replace <network> with the name or ID of the network that you retrieved in step 3. You can use the --network option more than once to connect your instance to several networks, as required. Optional: The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group directly to the instance during instance creation, or to a port on the running instance. Use the --security-group option to specify an alternative security group when creating the instance. 
For information on adding a security group to a port on a running instance, see Adding a security group to a port . Replace <keypair> with the name or ID of the key pair that you retrieved in step 4. Assign a floating IP address to the instance: Replace <floating_ip> with the floating IP address that you want to assign to the instance. Use the automatically created cloud-user account to verify that you can log in to your instance by using SSH: 8.6. Additional resources Creating a network in Managing network resources . Adding a router in Managing network resources . Configuring security groups in Managing network resources .
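The procedures in this chapter assume that a security group allowing the required protocols already exists. If you need to create one, a minimal hypothetical example that permits SSH and ICMP looks like the following; the group name mysecgroup is a placeholder:

openstack security group create mysecgroup
openstack security group rule create --protocol tcp --dst-port 22 mysecgroup
openstack security group rule create --protocol icmp mysecgroup

You can then pass --security-group mysecgroup to openstack server create, or apply the group to a port as described in Adding a security group to a port.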
|
[
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack port list --server myInstancewithSSH",
"openstack port set --security-group <sec_group> <port>",
"openstack port show <port>",
"openstack port set --no-security-group <port>",
"openstack port set --security-group <sec_group> <port>",
"ssh-keygen -f '<RSA key>' -e -m pem > ~/.ssh/<keypair>.pem",
"chmod 600 ~/.ssh/<keypair>.pem",
"openstack keypair create --private-key ~/.ssh/<private_key> <keypair> > ~/.ssh/<keypair>.pem",
"chmod 600 ~/.ssh/<keypair>.pem",
"openstack floating ip list",
"openstack floating ip create <provider-network>",
"openstack server add floating ip [--fixed-ip-address <ip_address>] <instance> <floating_ip>",
"openstack server show <instance>",
"openstack server remove floating ip <instance> <ip_address>",
"openstack floating ip delete <ip_address>",
"openstack floating ip list",
"openstack flavor list",
"openstack image list",
"openstack network list",
"openstack keypair list",
"openstack server create --flavor <flavor> --image <image> --network <network> [--security-group <secgroup>] --key-name <keypair> --wait myInstancewithSSH",
"openstack server add floating ip myInstancewithSSH <floating_ip>",
"ssh -i ~/.ssh/<keypair>.pem cloud-user@<floatingIP> [cloud-user@demo-server1 ~]USD"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/assembly_providing-public-access-to-an-instance_instances
|
Creating and managing manifests for a connected Satellite Server
|
Creating and managing manifests for a connected Satellite Server Subscription Central 1-latest Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_connected_satellite_server/index
|
Kafka configuration properties
|
Kafka configuration properties Red Hat Streams for Apache Kafka 2.7 Use configuration properties to configure Kafka components
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_properties/index
|
11.6.2. Useful Websites
|
11.6.2. Useful Websites http://www.redhat.com/mirrors/LDP/HOWTO/Mail-Administrator-HOWTO.html - Provides an overview of how email works, and examines possible email solutions and configurations on the client and server sides. http://www.redhat.com/mirrors/LDP/HOWTO/Mail-User-HOWTO/ - Looks at email from the user's perspective, investigates various popular email client applications and gives an introduction to topics such as aliases, forwarding, auto-replying, mailing lists, mail filters, and spam. http://www.redhat.com/mirrors/LDP/HOWTO/mini/Secure-POP+SSH.html - Demonstrates a way to retrieve POP email using SSH with port forwarding, so that the email passwords and messages are transferred securely. http://www.sendmail.net/ - Contains news, interviews, and articles concerning Sendmail, including an expanded view of the many options available. http://www.sendmail.org/ - Offers a thorough technical breakdown of Sendmail features and configuration examples. http://www.postfix.org/ - The Postfix project home page contains a wealth of information about Postfix. The mailing list is a particularly good place to look for information. http://catb.org/~esr/fetchmail/ - The home page for Fetchmail, featuring an online manual, and a thorough FAQ. http://www.procmail.org/ - The home page for Procmail with links to assorted mailing lists dedicated to Procmail as well as various FAQ documents. http://www.ling.helsinki.fi/users/reriksso/procmail/mini-faq.html - An excellent Procmail FAQ, offers troubleshooting tips, details about file locking, and the use of wildcard characters. http://www.uwasa.fi/~ts/info/proctips.html - Contains dozens of tips that make using Procmail much easier. Includes instructions on how to test .procmailrc files and use Procmail scoring to decide if a particular action should be taken. http://www.spamassassin.org/ - The official site of the SpamAssassin project.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-email-useful-websites
|
3.4. RHEA-2011:1636 - new package: libvirt-qmf
|
3.4. RHEA-2011:1636 - new package: libvirt-qmf A new libvirt-qmf package is now available for Red Hat Enterprise Linux 6. The libvirt-qmf package contains a daemon to allow remote control of the libvirt API through the Qpid Management Framework (QMF). Enhancement BZ# 688194 With this update, the libvirt-qmf package obsoletes the libvirt-qpid package, which provided similar functionality. The new package uses the matahari library to provide an interface consistent with that of other Matahari agents. Note: After installation, it is advisable to convert existing QMF consoles, that previously connected to libvirt-qpid, to use libvirt-qmf as their interface. Also, when creating a new QMF console, it is recommended to use libvirt-qmf to communicate with libvirt. All users requiring libvirt-qmf are advised to install this new package, which adds this enhancement.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libvirt-qmf_new
|
Chapter 4. Useful SystemTap Scripts
|
Chapter 4. Useful SystemTap Scripts This chapter enumerates several SystemTap scripts you can use to monitor and investigate different subsystems. All of these scripts are available at /usr/share/systemtap/testsuite/systemtap.examples/ once you install the systemtap-testsuite RPM. 4.1. Network The following sections showcase scripts that trace network-related functions and build a profile of network activity. 4.1.1. Network Profiling This section describes how to profile network activity. nettop.stp provides a glimpse into how much network traffic each process is generating on a machine. nettop.stp Note that function print_activity() uses the following expressions: These expressions are if/else conditionals. The first statement is simply a more concise way of writing the following pseudo code: nettop.stp tracks which processes are generating network traffic on the system, and provides the following information about each process: PID - the ID of the listed process. UID - user ID. A user ID of 0 refers to the root user. DEV - which ethernet device the process used to send / receive data (for example eth0, eth1) XMIT_PK - number of packets transmitted by the process RECV_PK - number of packets received by the process XMIT_KB - amount of data sent by the process, in kilobytes RECV_KB - amount of data received by the process, in kilobytes nettop.stp provides network profile sampling every 5 seconds. You can change this setting by editing probe timer.ms(5000) accordingly. Example 4.1, "nettop.stp Sample Output" contains an excerpt of the output from nettop.stp over a 20-second period: Example 4.1. nettop.stp Sample Output 4.1.2. Tracing Functions Called in Network Socket Code This section describes how to trace functions called from the kernel's net/socket.c file. This task helps you identify, in finer detail, how each process interacts with the network at the kernel level. socket-trace.stp socket-trace.stp is identical to Example 3.6, "thread_indent.stp" , which was earlier used in SystemTap Functions to illustrate how thread_indent() works. Example 4.2. socket-trace.stp Sample Output Example 4.2, "socket-trace.stp Sample Output" contains a 3-second excerpt of the output for socket-trace.stp . For more information about the output of this script as provided by thread_indent() , refer to SystemTap Functions Example 3.6, "thread_indent.stp" . 4.1.3. Monitoring Incoming TCP Connections This section illustrates how to monitor incoming TCP connections. This task is useful in identifying any unauthorized, suspicious, or otherwise unwanted network access requests in real time. tcp_connections.stp While tcp_connections.stp is running, it will print out the following information about any incoming TCP connections accepted by the system in real time: Current UID CMD - the command accepting the connection PID of the command Port used by the connection IP address from which the TCP connection originated Example 4.3. tcp_connections.stp Sample Output 4.1.4. Monitoring Network Packet Drops in Kernel The network stack in Linux can discard packets for various reasons. Some Linux kernels include a tracepoint, kernel.trace("kfree_skb") , which easily tracks where packets are discarded. dropwatch.stp uses kernel.trace("kfree_skb") to trace packet discards; the script summarizes which locations discard packets in each five-second interval. dropwatch.stp The kernel.trace("kfree_skb") traces which places in the kernel drop network packets. 
The kernel.trace("kfree_skb") has two arguments: a pointer to the buffer being freed ( USDskb ) and the location in kernel code the buffer is being freed ( USDlocation ). Running the dropwatch.stp script 15 seconds would result in output similar in Example 4.4, "dropwatch.stp Sample Output" . The output lists the number of misses for tracepoint address and the actual address. Example 4.4. dropwatch.stp Sample Output To make the location of packet drops more meaningful, refer to the /boot/System.map-`uname -r` file. This file lists the starting addresses for each function, allowing you to map the addresses in the output of Example 4.4, "dropwatch.stp Sample Output" to a specific function name. Given the following snippet of the /boot/System.map-`uname -r` file, the address 0xffffffff8024cd0f maps to the function unix_stream_recvmsg and the address 0xffffffff8044b472 maps to the function arp_rcv :
|
[
"#! /usr/bin/env stap global ifxmit, ifrecv global ifmerged probe netdev.transmit { ifxmit[pid(), dev_name, execname(), uid()] <<< length } probe netdev.receive { ifrecv[pid(), dev_name, execname(), uid()] <<< length } function print_activity() { printf(\"%5s %5s %-7s %7s %7s %7s %7s %-15s\\n\", \"PID\", \"UID\", \"DEV\", \"XMIT_PK\", \"RECV_PK\", \"XMIT_KB\", \"RECV_KB\", \"COMMAND\") foreach ([pid, dev, exec, uid] in ifrecv) { ifmerged[pid, dev, exec, uid] += @count(ifrecv[pid,dev,exec,uid]); } foreach ([pid, dev, exec, uid] in ifxmit) { ifmerged[pid, dev, exec, uid] += @count(ifxmit[pid,dev,exec,uid]); } foreach ([pid, dev, exec, uid] in ifmerged-) { n_xmit = @count(ifxmit[pid, dev, exec, uid]) n_recv = @count(ifrecv[pid, dev, exec, uid]) printf(\"%5d %5d %-7s %7d %7d %7d %7d %-15s\\n\", pid, uid, dev, n_xmit, n_recv, n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0, n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0, exec) } print(\"\\n\") delete ifxmit delete ifrecv delete ifmerged } probe timer.ms(5000), end, error { print_activity() }",
"n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0 n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0",
"if n_recv != 0 then @sum(ifrecv[pid, dev, exec, uid])/1024 else 0",
"[...] PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 0 5 0 0 swapper 11178 0 eth0 2 0 0 0 synergyc PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 2886 4 eth0 79 0 5 0 cups-polld 11362 0 eth0 0 61 0 5 firefox 0 0 eth0 3 32 0 3 swapper 2886 4 lo 4 4 0 0 cups-polld 11178 0 eth0 3 0 0 0 synergyc PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 0 6 0 0 swapper 2886 4 lo 2 2 0 0 cups-polld 11178 0 eth0 3 0 0 0 synergyc 3611 0 eth0 0 1 0 0 Xorg PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 3 42 0 2 swapper 11178 0 eth0 43 1 3 0 synergyc 11362 0 eth0 0 7 0 0 firefox 3897 0 eth0 0 1 0 0 multiload-apple [...]",
"#!/usr/bin/stap probe kernel.function(\"*@net/socket.c\").call { printf (\"%s -> %s\\n\", thread_indent(1), probefunc()) } probe kernel.function(\"*@net/socket.c\").return { printf (\"%s <- %s\\n\", thread_indent(-1), probefunc()) }",
"[...] 0 Xorg(3611): -> sock_poll 3 Xorg(3611): <- sock_poll 0 Xorg(3611): -> sock_poll 3 Xorg(3611): <- sock_poll 0 gnome-terminal(11106): -> sock_poll 5 gnome-terminal(11106): <- sock_poll 0 scim-bridge(3883): -> sock_poll 3 scim-bridge(3883): <- sock_poll 0 scim-bridge(3883): -> sys_socketcall 4 scim-bridge(3883): -> sys_recv 8 scim-bridge(3883): -> sys_recvfrom 12 scim-bridge(3883):-> sock_from_file 16 scim-bridge(3883):<- sock_from_file 20 scim-bridge(3883):-> sock_recvmsg 24 scim-bridge(3883):<- sock_recvmsg 28 scim-bridge(3883): <- sys_recvfrom 31 scim-bridge(3883): <- sys_recv 35 scim-bridge(3883): <- sys_socketcall [...]",
"#! /usr/bin/env stap probe begin { printf(\"%6s %16s %6s %6s %16s\\n\", \"UID\", \"CMD\", \"PID\", \"PORT\", \"IP_SOURCE\") } probe kernel.function(\"tcp_accept\").return?, kernel.function(\"inet_csk_accept\").return? { sock = USDreturn if (sock != 0) printf(\"%6d %16s %6d %6d %16s\\n\", uid(), execname(), pid(), inet_get_local_port(sock), inet_get_ip_source(sock)) }",
"UID CMD PID PORT IP_SOURCE 0 sshd 3165 22 10.64.0.227 0 sshd 3165 22 10.64.0.227",
"#!/usr/bin/stap ############################################################ Dropwatch.stp Author: Neil Horman <[email protected]> An example script to mimic the behavior of the dropwatch utility http://fedorahosted.org/dropwatch ############################################################ Array to hold the list of drop points we find global locations Note when we turn the monitor on and off probe begin { printf(\"Monitoring for dropped packets\\n\") } probe end { printf(\"Stopping dropped packet monitor\\n\") } increment a drop counter for every location we drop at probe kernel.trace(\"kfree_skb\") { locations[USDlocation] <<< 1 } Every 5 seconds report our drop locations probe timer.sec(5) { printf(\"\\n\") foreach (l in locations-) { printf(\"%d packets dropped at location %p\\n\", @count(locations[l]), l) } delete locations }",
"Monitoring for dropped packets 51 packets dropped at location 0xffffffff8024cd0f 2 packets dropped at location 0xffffffff8044b472 51 packets dropped at location 0xffffffff8024cd0f 1 packets dropped at location 0xffffffff8044b472 97 packets dropped at location 0xffffffff8024cd0f 1 packets dropped at location 0xffffffff8044b472 Stopping dropped packet monitor",
"[...] ffffffff8024c5cd T unlock_new_inode ffffffff8024c5da t unix_stream_sendmsg ffffffff8024c920 t unix_stream_recvmsg ffffffff8024cea1 t udp_v4_lookup_longway [...] ffffffff8044addc t arp_process ffffffff8044b360 t arp_rcv ffffffff8044b487 t parp_redo ffffffff8044b48c t arp_solicit [...]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/useful-systemtap-scripts
|
2.8.4. Common IPTables Filtering
|
2.8.4. Common IPTables Filtering Preventing remote attackers from accessing a LAN is one of the most important aspects of network security. The integrity of a LAN should be protected from malicious remote users through the use of stringent firewall rules. However, with a default policy set to block all incoming, outgoing, and forwarded packets, it is impossible for the firewall/gateway and internal LAN users to communicate with each other or with external resources. To allow users to perform network-related functions and to use networking applications, administrators must open certain ports for communication. For example, to allow access to port 80 on the firewall, append the following rule: This allows users to browse websites that communicate using the standard port 80. To allow access to secure websites (for example, https://www.example.com/), you also need to provide access to port 443, as follows: Important When creating an iptables ruleset, order is important. If a rule specifies that any packets from the 192.168.100.0/24 subnet be dropped, and this is followed by a rule that allows packets from 192.168.100.13 (which is within the dropped subnet), then the second rule is ignored. The rule to allow packets from 192.168.100.13 must precede the rule that drops the remainder of the subnet. To insert a rule in a specific location in an existing chain, use the -I option. For example: This rule is inserted as the first rule in the INPUT chain to allow local loopback device traffic. There may be times when you require remote access to the LAN. Secure services, for example SSH, can be used for encrypted remote connections to LAN services. For administrators with PPP-based resources (such as modem banks or bulk ISP accounts), dial-up access can be used to securely circumvent firewall barriers. Because they are direct connections, modem connections are typically behind a firewall/gateway. For remote users with broadband connections, however, special cases can be made. You can configure iptables to accept connections from remote SSH clients. For example, the following rules allow remote SSH access: These rules allow incoming and outbound access for an individual system, such as a single PC directly connected to the Internet or a firewall/gateway. However, they do not allow nodes behind the firewall/gateway to access these services. To allow LAN access to these services, you can use Network Address Translation (NAT) with iptables filtering rules.
|
[
"~]# iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT",
"~]# iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT",
"~]# iptables -I INPUT 1 -i lo -p all -j ACCEPT",
"~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT ~]# iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-firewalls-common_iptables_filtering
|
DM Multipath
|
DM Multipath Red Hat Enterprise Linux 6 DM Multipath Configuration and Administration Steven Levine Red Hat Customer Content Services [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/index
|
Chapter 6. Override Ceph behavior
|
Chapter 6. Override Ceph behavior As a storage administrator, you need to understand how to use overrides for the Red Hat Ceph Storage cluster to change Ceph options during runtime. 6.1. Setting and unsetting Ceph override options You can set and unset Ceph options to override Ceph's default behavior. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To override Ceph's default behavior, use the ceph osd set command and the behavior you wish to override: Syntax Once you set the behavior, ceph health will reflect the override(s) that you have set for the cluster. Example To cease overriding Ceph's default behavior, use the ceph osd unset command and the override you wish to cease. Syntax Example Flag Description noin Prevents OSDs from being treated as in the cluster. noout Prevents OSDs from being treated as out of the cluster. noup Prevents OSDs from being treated as up and running. nodown Prevents OSDs from being treated as down . full Makes a cluster appear to have reached its full_ratio , and thereby prevents write operations. pause Ceph will stop processing read and write operations, but will not affect OSD in , out , up or down statuses. nobackfill Ceph will prevent new backfill operations. norebalance Ceph will prevent new rebalancing operations. norecover Ceph will prevent new recovery operations. noscrub Ceph will prevent new scrubbing operations. nodeep-scrub Ceph will prevent new deep scrubbing operations. notieragent Ceph will disable the process that is looking for cold/dirty objects to flush and evict. 6.2. Ceph override use cases noin : Commonly used with noout to address flapping OSDs. noout : If the mon osd report timeout is exceeded and an OSD has not reported to the monitor, the OSD will get marked out . If this happens erroneously, you can set noout to prevent the OSD(s) from getting marked out while you troubleshoot the issue. noup : Commonly used with nodown to address flapping OSDs. nodown : Networking issues may interrupt Ceph 'heartbeat' processes, and an OSD may be up but still get marked down. You can set nodown to prevent OSDs from getting marked down while troubleshooting the issue. full : If a cluster is reaching its full_ratio , you can pre-emptively set the cluster to full and expand capacity. Note Setting the cluster to full will prevent write operations. pause : If you need to troubleshoot a running Ceph cluster without clients reading and writing data, you can set the cluster to pause to prevent client operations. nobackfill : If you need to take an OSD or node down temporarily, for example, when upgrading daemons, you can set nobackfill so that Ceph will not backfill while the OSD is down. norecover : If you need to replace an OSD disk and don't want the PGs to recover to another OSD while you are hotswapping disks, you can set norecover to prevent the other OSDs from copying a new set of PGs to other OSDs. noscrub and nodeep-scrub : If you want to prevent scrubbing, for example, to reduce overhead during high loads, recovery, backfilling, and rebalancing, you can set noscrub and/or nodeep-scrub to prevent the cluster from scrubbing OSDs. notieragent : If you want to stop the tier agent process from finding cold objects to flush to the backing storage tier, you may set notieragent .
|
[
"ceph osd set FLAG",
"ceph osd set noout",
"ceph osd unset FLAG",
"ceph osd unset noout"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/override-ceph-behavior
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/making-open-source-more-inclusive
|
2.7.2. Configuration File Directives
|
2.7.2. Configuration File Directives The following are directives commonly used in the GRUB menu configuration file: chainloader </path/to/file> - Loads the specified file as a chain loader. Replace </path/to/file> with the absolute path to the chain loader. If the file is located on the first sector of the specified partition, use the blocklist notation, +1 . color <normal-color> <selected-color> - Allows specific colors to be used in the menu, where two colors are configured as the foreground and background. Use simple color names such as red/black . For example: default= <integer> - Replace <integer> with the default entry title number to be loaded if the menu interface times out. fallback= <integer> - Replace <integer> with the entry title number to try if the first attempt fails. hiddenmenu - Prevents the GRUB menu interface from being displayed, loading the default entry when the timeout period expires. The user can see the standard GRUB menu by pressing the Esc key. initrd </path/to/initrd> - Enables users to specify an initial RAM disk to use when booting. Replace </path/to/initrd> with the absolute path to the initial RAM disk. kernel </path/to/kernel> <option-1> <option-N> - Specifies the kernel file to load when booting the operating system. Replace </path/to/kernel> with an absolute path from the partition specified by the root directive. Multiple options can be passed to the kernel when it is loaded. password= <password> - Prevents a user who does not know the password from editing the entries for this menu option. Optionally, it is possible to specify an alternate menu configuration file after the password= <password> directive. In this case, GRUB restarts the second stage boot loader and uses the specified alternate configuration file to build the menu. If an alternate menu configuration file is left out of the command, a user who knows the password is allowed to edit the current configuration file. For more information about securing GRUB, refer to the chapter titled Workstation Security in the Security Guide . root ( <device-type> <device-number> , <partition> ) - Configures the root partition for GRUB, such as (hd0,0) , and mounts the partition. rootnoverify ( <device-type> <device-number> , <partition> ) - Configures the root partition for GRUB, just like the root command, but does not mount the partition. timeout= <integer> - Specifies the interval, in seconds, that GRUB waits before loading the entry designated in the default command. splashimage= <path-to-image> - Specifies the location of the splash screen image to be used when GRUB boots. title group-title - Specifies a title to be used with a particular group of commands used to load a kernel or operating system. To add human-readable comments to the menu configuration file, begin the line with the hash mark character ( # ).
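The directives above normally appear together in /boot/grub/grub.conf. The following stanza is an illustrative sketch only; the kernel version, device names, and paths are placeholders rather than values taken from this guide:

    default=0
    timeout=10
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title Red Hat Enterprise Linux
            root (hd0,0)
            kernel /vmlinuz-<kernel-version> ro root=/dev/VolGroup00/LogVol00
            initrd /initrd-<kernel-version>.img

Here default=0 selects the first title entry, counted from zero, and timeout=10 gives the user ten seconds to interrupt the boot before that entry is loaded.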
|
[
"color red/black green/blue"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-grub-configfile-commands
|
Part I. Overview
|
Part I. Overview
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/overview
|
5.2. Network Interfaces
|
5.2. Network Interfaces 5.2.1. Adding a New Network Interface You can add multiple network interfaces to virtual machines. Doing so allows you to put your virtual machine on multiple logical networks. Note You can create an overlay network for your virtual machines, isolated from the hosts, by defining a logical network that is not attached to the physical interfaces of the host. For example, you can create a DMZ environment, in which the virtual machines communicate among themselves over the bridge created in the host. The overlay network uses OVN, which must be installed as an external network provider. See the Administration Guide for more information Adding Network Interfaces to Virtual Machines Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Network Interfaces tab. Click New . Enter the Name of the network interface. Select the Profile and the Type of network interface from the drop-down lists. The Profile and Type drop-down lists are populated in accordance with the profiles and network types available to the cluster and the network interface cards available to the virtual machine. Select the Custom MAC address check box and enter a MAC address for the network interface card as required. Click OK . The new network interface is listed in the Network Interfaces tab in the details view of the virtual machine. The Link State is set to Up by default when the network interface card is defined on the virtual machine and connected to the network. For more details on the fields in the New Network Interface window, see Section A.3, "Explanation of Settings in the New Network Interface and Edit Network Interface Windows" . 5.2.2. Editing a Network Interface In order to change any network settings, you must edit the network interface. This procedure can be performed on virtual machines that are running, but some actions can be performed only on virtual machines that are not running. Editing Network Interfaces Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Network Interfaces tab and select the network interface to edit. Click Edit . Change settings as required. You can specify the Name , Profile , Type , and Custom MAC address . See Section 5.2.1, "Adding a New Network Interface" . Click OK . 5.2.3. Hot Plugging a Network Interface You can hot plug network interfaces. Hot plugging means enabling and disabling devices while a virtual machine is running. Note The guest operating system must support hot plugging network interfaces. Hot Plugging Network Interfaces Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Network Interfaces tab and select the network interface to hot plug. Click Edit . Set the Card Status to Plugged to enable the network interface, or set it to Unplugged to disable the network interface. Click OK . 5.2.4. Removing a Network Interface Removing Network Interfaces Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Network Interfaces tab and select the network interface to remove. Click Remove . Click OK . 5.2.5. Blacklisting Network Interfaces You can configure the ovirt-guest-agent on a virtual machine to ignore certain NICs. This prevents IP addresses associated with network interfaces created by certain software from appearing in reports. 
You must specify the name and number of the network interface you want to blacklist (for example, eth0 , docker0 ). Important You must blacklist NICs on the virtual machine before the guest agent is started for the first time. Blacklisting Network Interfaces In the /etc/ovirt-guest-agent.conf configuration file on the virtual machine, insert the following line, with the NICs to be ignored separated by spaces: Start the agent: Note Some virtual machine operating systems automatically start the guest agent during installation. If your virtual machine's operating system automatically starts the guest agent or if you need to configure the blacklist on many virtual machines, use the configured virtual machine as a template for creating additional virtual machines. See Section 7.2, "Creating a Template" for details.
|
[
"ignored_nics = first_NIC_to_ignore second_NIC_to_ignore",
"systemctl start ovirt-guest-agent"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-Network_Interfaces
|
CI/CD
|
CI/CD OpenShift Container Platform 4.11 Contains information on builds for OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/cicd/index
|
Chapter 11. Supported and Unsupported features for IBM Power and IBM Z
|
Chapter 11. Supported and Unsupported features for IBM Power and IBM Z Table 11.1. List of supported and unsupported features on IBM Power and IBM Z Features IBM Power IBM Z Compact deployment Unsupported Unsupported Dynamic storage devices Unsupported Supported Stretched Cluster - Arbiter Supported Unsupported Federal Information Processing Standard Publication (FIPS) Unsupported Unsupported Ability to view pool compression metrics Supported Unsupported Automated scaling of Multicloud Object Gateway (MCG) endpoint pods Supported Unsupported Alerts to control overprovision Supported Unsupported Alerts when Ceph Monitor runs out of space Supported Unsupported Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem Unsupported Unsupported IPV6 support Unsupported Unsupported Multus Unsupported Unsupported Multicloud Object Gateway (MCG) bucket replication Supported Unsupported Quota support for object data Supported Unsupported Minimum deployment Unsupported Unsupported Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) Supported Unsupported Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM Supported Supported Single Node solution for Radio Access Network (RAN) Unsupported Unsupported Support for network file system (NFS) services Supported Unsupported Ability to change Multicloud Object Gateway (MCG) account credentials Supported Unsupported Multicluster monitoring in Red Hat Advanced Cluster Management console Supported Unsupported Deletion of expired objects in Multicloud Object Gateway lifecycle Supported Unsupported Agnostic deployment of OpenShift Data Foundation on any Openshift supported platform Unsupported Unsupported Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure Unsupported Unsupported Openshift dual stack with OpenShift Data Foundation using IPv4 Unsupported Unsupported Ability to disable Multicloud Object Gateway external service during deployment Unsupported Unsupported Ability to allow overriding of default NooBaa backing store Supported Unsupported Allowing ocs-operator to deploy two MGR pods, one active and one standby Supported Unsupported Disaster Recovery for brownfield deployments Unsupported Supported Automatic scaling of RGW Unsupported Unsupported
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/unsupported-features
|
Chapter 8. Handling large messages
|
Chapter 8. Handling large messages Clients might send large messages that can exceed the size of the broker's internal buffer, causing unexpected errors. To prevent this situation, you can configure the broker to store messages as files when the messages are larger than a specified minimum value. Handling large messages in this way means that the broker does not hold the messages in memory. Instead, you specify a directory on disk or in a database table in which the broker stores large message files. When the broker stores a message as a large message, the queue retains a reference to the file in the large messages directory or database table. Large message handling is available for the Core Protocol, AMQP, OpenWire and STOMP protocols. For the Core Protocol and OpenWire protocols, clients specify the minimum large message size in their connection configurations. For the AMQP and STOMP protocols, you specify the minimum large message size in the acceptor defined for each protocol in the broker configuration. Note It is recommended that you do not use different protocols for producing and consuming large messages. To do this, the broker might need to perform several conversions of the message. For example, say that you want to send a message using the AMQP protocol and receive it using OpenWire. In this situation, the broker must first read the entire body of the large message and convert it to use the Core protocol. Then, the broker must perform another conversion, this time to the OpenWire protocol. Message conversions such as these cause significant processing overhead on the broker. The minimum large message size that you specify for any of the preceding protocols is affected by system resources such as the amount of disk space available, as well as the sizes of the messages. It is recommended that you run performance tests using several values to determine an appropriate size. The procedures in this section show how to: Configure the broker to store large messages Configure acceptors for the AMQP and STOMP protocols for large message handling This section also links to additional resources about configuring AMQ Core Protocol and AMQ OpenWire JMS clients to work with large messages. 8.1. Configuring the broker for large message handling The following procedure shows how to specify a directory on disk or a database table in which the broker stores large message files. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. Specify where you want the broker to store large message files. If you are storing large messages on disk, add the large-messages-directory parameter within the core element and specify a file system location. For example: <configuration> <core> ... <large-messages-directory>/path/to/my-large-messages-directory</large-messages-directory> ... </core> </configuration> Note If you do not explicitly specify a value for large-messages-directory , the broker uses a default value of <broker-instance-dir> /data/largemessages If you are storing large messages in a database table, add the large-message-table parameter to the database-store element and specify a value. For example: <store> <database-store> ... <large-message-table>MY_TABLE</large-message-table> ... </database-store> </store> Note If you do not explicitly specify a value for large-message-table , the broker uses a default value of LARGE_MESSAGE_TABLE . Additional resources For more information about configuring a database store, see Section 6.3, "Configuring JDBC Persistence" . 8.2. 
Configuring AMQP acceptors for large message handling The following procedure shows how to configure an AMQP acceptor to handle an AMQP message larger than a specified size as a large message. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. The default AMQP acceptor in the broker configuration file looks as follows: <acceptors> ... <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> ... </acceptors> In the default AMQP acceptor (or another AMQP acceptor that you have configured), add the amqpMinLargeMessageSize property and specify a value. For example: <acceptors> ... <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=204800</acceptor> ... </acceptors> In the preceding example, the broker is configured to accept AMQP messages on port 5672. Based on the value of amqpMinLargeMessageSize , if the acceptor receives an AMQP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. If you do not explicitly specify a value for this property, the broker uses a default value of 102400 (that is, 100 kilobytes). Note If you set amqpMinLargeMessageSize to -1, large message handling for AMQP messages is disabled. If the broker receives a persistent AMQP message that does not exceed the value of amqpMinLargeMessageSize , but which does exceed the size of the messaging journal buffer (specified using the journal-buffer-size configuration parameter), the broker converts the message to a large Core Protocol message, before storing it in the journal. 8.3. Configuring STOMP acceptors for large message handling The following procedure shows how to configure a STOMP acceptor to handle a STOMP message larger than a specified size as a large message. Procedure Open the <broker-instance-dir> /etc/broker.xml configuration file. The default STOMP acceptor in the broker configuration file looks as follows: <acceptors> ... <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> ... </acceptors> In the default STOMP acceptor (or another STOMP acceptor that you have configured), add the stompMinLargeMessageSize property and specify a value. For example: <acceptors> ... <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=204800</acceptor> ... </acceptors> In the preceding example, the broker is configured to accept STOMP messages on port 61613. Based on the value of stompMinLargeMessageSize , if the acceptor receives a STOMP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. If you do not explicitly specify a value for this property, the broker uses a default value of 102400 (that is, 100 kilobytes). Note To deliver a large message to a STOMP consumer, the broker automatically converts the message from a large message to a normal message before sending it to the client. If a large message is compressed, the broker decompresses it before sending it to STOMP clients. 8.4. Large messages and Java clients There are two options available to Java developers who are writing clients that use large messages. 
One option is to use instances of InputStream and OutputStream . For example, a FileInputStream can be used to send a message taken from a large file on a physical disk. A FileOutputStream can then be used by the receiver to stream the message to a location on its local file system. Another option is to stream a JMS BytesMessage or StreamMessage directly. For example: BytesMessage rm = (BytesMessage)cons.receive(10000); byte data[] = new byte[1024]; for (int i = 0; i < rm.getBodyLength(); i += 1024) { int numberOfBytes = rm.readBytes(data); // Do whatever you want with the data } Additional resources To learn about working with large messages in the AMQ Core Protocol JMS client, see: Large message options Writing to a streamed large message Reading from a streamed large message To learn about working with large messages in the AMQ OpenWire JMS client, see: Large message options Writing to a streamed large message Reading from a streamed large message For an example of working with large messages, see the large-message example in the <install-dir> /examples/features/standard/ directory of your AMQ Broker installation. To learn more about running example programs, see Running an AMQ Broker example program .
|
[
"<configuration> <core> <large-messages-directory>/path/to/my-large-messages-directory</large-messages-directory> </core> </configuration>",
"<store> <database-store> <large-message-table>MY_TABLE</large-message-table> </database-store> </store>",
"<acceptors> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=204800</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=204800</acceptor> </acceptors>",
"BytesMessage rm = (BytesMessage)cons.receive(10000); byte data[] = new byte[1024]; for (int i = 0; i < rm.getBodyLength(); i += 1024) { int numberOfBytes = rm.readBytes(data); // Do whatever you want with the data }"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/configuring_amq_broker/large_messages
|
Chapter 3. Troubleshooting networking issues
|
Chapter 3. Troubleshooting networking issues This chapter lists basic troubleshooting procedures connected with networking and chrony for Network Time Protocol (NTP). Prerequisites A running Red Hat Ceph Storage cluster. 3.1. Basic networking troubleshooting Red Hat Ceph Storage depends heavily on a reliable network connection. Red Hat Ceph Storage nodes use the network for communicating with each other. Networking issues can cause many problems with Ceph OSDs, such as them flapping, or being incorrectly reported as down . Networking issues can also cause the Ceph Monitor's clock skew errors. In addition, packet loss, high latency, or limited bandwidth can impact the cluster performance and stability. Prerequisites Root-level access to the node. Procedure Installing the net-tools and telnet packages can help when troubleshooting network issues that can occur in a Ceph storage cluster: Example Log into the cephadm shell and verify that the public_network parameters in the Ceph configuration file include the correct values: Example Exit the shell and verify that the network interfaces are up: Example Verify that the Ceph nodes are able to reach each other using their short host names. Verify this on each node in the storage cluster: Syntax Example If you use a firewall, ensure that Ceph nodes are able to reach each other on their appropriate ports. The firewall-cmd and telnet tools can validate the port status, and if the port is open respectively: Syntax Example Verify that there are no errors on the interface counters. Verify that the network connectivity between nodes has expected latency, and that there is no packet loss. Using the ethtool command: Syntax Example Using the ifconfig command: Example Using the netstat command: Example For performance issues, in addition to the latency checks and to verify the network bandwidth between all nodes of the storage cluster, use the iperf3 tool. The iperf3 tool does a simple point-to-point network bandwidth test between a server and a client. Install the iperf3 package on the Red Hat Ceph Storage nodes you want to check the bandwidth: Example On a Red Hat Ceph Storage node, start the iperf3 server: Example Note The default port is 5201, but can be set using the -P command argument. On a different Red Hat Ceph Storage node, start the iperf3 client: Example This output shows a network bandwidth of 1.1 Gbits/second between the Red Hat Ceph Storage nodes, along with no retransmissions ( Retr ) during the test. Red Hat recommends you validate the network bandwidth between all the nodes in the storage cluster. Ensure that all nodes have the same network interconnect speed. Slower attached nodes might slow down the faster connected ones. Also, ensure that the inter switch links can handle the aggregated bandwidth of the attached nodes: Syntax Example Additional Resources See the Basic Network troubleshooting solution on the Customer Portal for details. See the What is the "ethtool" command and how can I use it to obtain information about my network devices and interfaces for details. See the RHEL network interface dropping packets solutions on the Customer Portal for details. For details, see the What are the performance benchmarking tools available for Red Hat Ceph Storage? solution on the Customer Portal. For more information, see Knowledgebase articles and solutions related to troubleshooting networking issues on the Customer Portal. 3.2. Basic chrony NTP troubleshooting This section includes basic chrony NTP troubleshooting steps. 
Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Verify that the chronyd daemon is running on the Ceph Monitor hosts: Example If chronyd is not running, enable and start it: Example Ensure that chronyd is synchronizing the clocks correctly: Example Additional Resources See the How to troubleshoot chrony issues solution on the Red Hat Customer Portal for advanced chrony NTP troubleshooting steps. See the Clock skew section in the Red Hat Ceph Storage Troubleshooting Guide for further details. See the Checking if chrony is synchronized section for further details.
|
[
"dnf install net-tools dnf install telnet",
"cat /etc/ceph/ceph.conf minimal ceph.conf for 57bddb48-ee04-11eb-9962-001a4a000672 [global] fsid = 57bddb48-ee04-11eb-9962-001a4a000672 mon_host = [v2:10.74.249.26:3300/0,v1:10.74.249.26:6789/0] [v2:10.74.249.163:3300/0,v1:10.74.249.163:6789/0] [v2:10.74.254.129:3300/0,v1:10.74.254.129:6789/0] [mon.host01] public network = 10.74.248.0/21",
"ip link list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:1a:4a:00:06:72 brd ff:ff:ff:ff:ff:ff",
"ping SHORT_HOST_NAME",
"ping host02",
"firewall-cmd --info-zone= ZONE telnet IP_ADDRESS PORT",
"firewall-cmd --info-zone=public public (active) target: default icmp-block-inversion: no interfaces: ens3 sources: services: ceph ceph-mon cockpit dhcpv6-client ssh ports: 9283/tcp 8443/tcp 9093/tcp 9094/tcp 3000/tcp 9100/tcp 9095/tcp protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: telnet 192.168.0.22 9100",
"ethtool -S INTERFACE",
"ethtool -S ens3 | grep errors NIC statistics: rx_fcs_errors: 0 rx_align_errors: 0 rx_frame_too_long_errors: 0 rx_in_length_errors: 0 rx_out_length_errors: 0 tx_mac_errors: 0 tx_carrier_sense_errors: 0 tx_errors: 0 rx_errors: 0",
"ifconfig ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.74.249.26 netmask 255.255.248.0 broadcast 10.74.255.255 inet6 fe80::21a:4aff:fe00:672 prefixlen 64 scopeid 0x20<link> inet6 2620:52:0:4af8:21a:4aff:fe00:672 prefixlen 64 scopeid 0x0<global> ether 00:1a:4a:00:06:72 txqueuelen 1000 (Ethernet) RX packets 150549316 bytes 56759897541 (52.8 GiB) RX errors 0 dropped 176924 overruns 0 frame 0 TX packets 55584046 bytes 62111365424 (57.8 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 9373290 bytes 16044697815 (14.9 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9373290 bytes 16044697815 (14.9 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0",
"netstat -ai Kernel Interface table Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg ens3 1500 311847720 0 364903 0 114341918 0 0 0 BMRU lo 65536 19577001 0 0 0 19577001 0 0 0 LRU",
"dnf install iperf3",
"iperf3 -s ----------------------------------------------------------- Server listening on 5201 -----------------------------------------------------------",
"iperf3 -c mon Connecting to host mon, port 5201 [ 4] local xx.x.xxx.xx port 52270 connected to xx.x.xxx.xx port 5201 [ ID] Interval Transfer Bandwidth Retr Cwnd [ 4] 0.00-1.00 sec 114 MBytes 954 Mbits/sec 0 409 KBytes [ 4] 1.00-2.00 sec 113 MBytes 945 Mbits/sec 0 409 KBytes [ 4] 2.00-3.00 sec 112 MBytes 943 Mbits/sec 0 454 KBytes [ 4] 3.00-4.00 sec 112 MBytes 941 Mbits/sec 0 471 KBytes [ 4] 4.00-5.00 sec 112 MBytes 940 Mbits/sec 0 471 KBytes [ 4] 5.00-6.00 sec 113 MBytes 945 Mbits/sec 0 471 KBytes [ 4] 6.00-7.00 sec 112 MBytes 937 Mbits/sec 0 488 KBytes [ 4] 7.00-8.00 sec 113 MBytes 947 Mbits/sec 0 520 KBytes [ 4] 8.00-9.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes [ 4] 9.00-10.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 0 sender [ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver iperf Done.",
"ethtool INTERFACE",
"ethtool ens3 Settings for ens3: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Advertised pause frame use: Symmetric Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Link partner advertised pause frame use: Symmetric Link partner advertised auto-negotiation: Yes Link partner advertised FEC modes: Not reported Speed: 1000Mb/s 1 Duplex: Full 2 Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: off Supports Wake-on: g Wake-on: d Current message level: 0x000000ff (255) drv probe link timer ifdown ifup rx_err tx_err Link detected: yes 3",
"systemctl status chronyd",
"systemctl enable chronyd systemctl start chronyd",
"chronyc sources chronyc sourcestats chronyc tracking"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/troubleshooting_guide/troubleshooting-networking-issues
|
Chapter 8. Responsive restarts and security certificates
|
Chapter 8. Responsive restarts and security certificates MicroShift responds to system configuration changes and restarts after alterations are detected, including IP address changes, clock adjustments, and security certificate age. 8.1. IP address changes or clock adjustments MicroShift depends on device IP addresses and system-wide clock settings to remain consistent during its runtime. However, these settings may occasionally change on edge devices, such as DHCP or Network Time Protocol (NTP) updates. When such changes occur, some MicroShift components may stop functioning properly. To mitigate this situation, MicroShift monitors the IP address and system time and restarts if either setting change is detected. The threshold for clock changes is a time adjustment of greater than 10 seconds in either direction. Smaller drifts on regular time adjustments performed by the Network Time Protocol (NTP) service do not cause a restart. 8.2. Security certificate lifetime MicroShift certificates are separated into two basic groups: Short-lived certificates having certificate validity of one year. Long-lived certificates having certificate validity of 10 years. Most server or leaf certificates are short-term. An example of a long-lived certificate is the client certificate for system:admin user authentication, or the certificate of the signer of the kube-apiserver external serving certificate. 8.2.1. Certificate rotation Certificates that are expired or close to their expiration dates need to be rotated to ensure continued MicroShift operation. When MicroShift restarts for any reason, certificates that are close to expiring are rotated. A certificate that is set to expire imminently, or has expired, can cause an automatic MicroShift restart to perform a rotation. Note If the rotated certificate is a Certificate Authority, all of the certificates it signed rotate. 8.2.1.1. Short-term certificates The following situations describe MicroShift actions during short-term certificate lifetimes: No rotation: When a short-term certificate is up to 5 months old, no rotation occurs. Rotation at restart: When a short-term certificate is 5 to 8 months old, it is rotated when MicroShift starts or restarts. Automatic restart for rotation: When a short-term certificate is more than 8 months old, MicroShift can automatically restart to rotate and apply a new certificate. 8.2.1.2. Long-term certificates The following situations describe MicroShift actions during long-term certificate lifetimes: No rotation: When a long-term certificate is up to 8.5 years old, no rotation occurs. Rotation at restart: When a long-term certificate is 8.5 to 9 years old, it is rotated when MicroShift starts or restarts. Automatic restart for rotation: When a long-term certificate is more than 9 years old, MicroShift can automatically restart to rotate and apply a new certificate.
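To check how close a particular certificate is to these rotation thresholds, you can read its validity dates with a generic OpenSSL query. The command below is an illustration only, and the certificate path is a placeholder rather than a path documented here:

    openssl x509 -noout -startdate -enddate -in <path-to-certificate>.crt

Comparing the notBefore and notAfter dates that this prints against the ages listed above indicates whether the next MicroShift start, or an automatic restart, will rotate the certificate.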
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/microshift-things-to-know
|
Chapter 11. Installation and Booting
|
Chapter 11. Installation and Booting Assigning mount points to existing block devices is now possible in Kickstart installations A new mount command is now available in Kickstart. This command assigns a mount point to a particular block device with a file system, and it can also reformat it if you specify the --reformat option. The difference between mount and other storage-related commands like autopart , part , or logvol is that with mount you do not need to describe the entire storage configuration in the Kickstart file, you only need to make sure that the specified block devices exist on the system. However, if you want to create the storage configuration instead of using an existing one, and mount the various devices, then you must use the other storage configuration commands. You can not use mount with the other storage configuration commands in the same Kickstart file. (BZ# 1450922 ) The livemedia-creator utility now provides a sample Kickstart file for UEFI systems The example Kickstart files provided with the livemedia-creator packages have been updated to support 32 and 64-bit UEFI systems. The files are located in the /usr/share/lorax-version/ directory. Note that livemedia-creator must be run on a UEFI system or virtual machine to build bootable UEFI disk images. (BZ# 1458937 ) New option for the network Kickstart command binding the device configuration file to the device MAC address You can now use the new --bindto=mac option with the network Kickstart command to use the HWADDR parameter (the MAC address) instead of the default DEVICE in the device's ifcfg file on the installed system. This will bind the device configuration to the MAC instead of the device name. Note that the new --bindto option is independent of the network --device Kickstart option. It will be applied to the ifcfg file even if the device was specified in the Kickstart file using its name, link , or bootif . (BZ# 1328576 ) New options for Kickstart %packages allow configuring Yum timeout and number of retries This update adds two new options for the %packages section in Kickstart files: --timeout=X - sets the Yum timeout to X seconds. Defaults to 30. --retries=Y - sets the number of Yum retries to Y . Defaults to 10. Note that if you use multiple %packages sections during the installation, options set on the section which appears last will be used for every section. If the last section has neither of these options set, every %packages section in the Kickstart file will use the default values. These new options may help when performing a large number parallel installations from a single package source at once, when package download speed is limited by disk read or network speeds. The new options only affect the system during installation and have no effect on Yum configuration on the installed system. (BZ# 1448459 ) The Red Hat Enterprise Linux 7 ISO image can be used to create guests virtual machines on IBM Z With this release, you can create a bootable Red Hat Enterprise Linux ISO file for KVM virtual machines on the IBM Z architecture. As a result, Red Hat Enterprise Linux guest virtual machines on IBM Z can boot from a boot.iso file. (BZ# 1478448 ) ARPUPDATE option for ifcfg-* files has been introduced This update introduces the ARPUPDATE option for the ifcfg-* files with default value yes . Setting the value to no allows administrators to disable updating neighboring computers with address resolution protocol (ARP) information about current network interface controller (NIC). 
This is especially needed when using Linux Virtual Server (LVS) Load Balancing with Direct routing enabled. (BZ#1478419) The --noconfig option added for the rpm -V command With this update, the --noconfig option has been added to the rpm -V command. This option enables the command to list only the altered non-configuration files, which helps diagnose system problems. (BZ# 1406611 ) ifcfg-* files now allow you to specify a third DNS server ifcfg-* configuration files now support the DNS3 option. You can use this option to specify a third Domain Name Server (DNS) address to be used in /etc/resolv.conf , instead of the maximum of two DNS servers. (BZ# 1357658 ) Multi-threaded xz compression in rpm-build This update adds multi-threaded xz compression for source and binary packages when setting the %_source_payload or %_binary_payload macros to the wLTX.xzdio pattern. In it, L represents the compression level, which is 6 by default, and X is the number of threads to be used (may be multiple digits), for example w6T12.xzdio . To enable this feature, edit the /usr/lib/rpm/macros file or declare the macro within the spec file or at the command line. As a result, compressions take less time for highly parallel builds, which is beneficial especially for continuous integration of large projects that are built on hardware with many cores. (BZ#1278924)
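As a rough illustration of how several of these additions fit together (the device names, versions, and values below are placeholders and are not taken from the release notes), a Kickstart file might combine the new mount command, the --bindto=mac network option, and the %packages timeout and retry options as follows:

    # Illustrative Kickstart excerpt
    mount /dev/vda1 /boot
    network --device=em1 --bootproto=dhcp --bindto=mac
    %packages --timeout=60 --retries=5
    @core
    %end

Similarly, the new ifcfg-* options might appear in a device configuration file such as this sketch:

    # /etc/sysconfig/network-scripts/ifcfg-em1 (illustrative)
    DEVICE=em1
    BOOTPROTO=dhcp
    ONBOOT=yes
    ARPUPDATE=no
    DNS3=192.0.2.53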
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_installation_and_booting
|
Red Hat build of Apache Camel for Spring Boot Reference
|
Red Hat build of Apache Camel for Spring Boot Reference Red Hat build of Apache Camel 4.8 Red Hat build of Apache Camel for Spring Boot Reference Red Hat build of Apache Camel Documentation Team [email protected] Red Hat build of Apache Camel Support Team http://access.redhat.com/support
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/index
|