Dataset columns: entry_id (string, 33 chars) | published (string, 14 chars) | title (string, 10-200 chars) | authors (list) | primary_category (string, 5-18 chars) | categories (list) | text (string, 2-817k chars)
entry_id: http://arxiv.org/abs/2306.04117v1
published: 20230607030730
title: A Robust Hybrid Observer for Side-slip Angle Estimation
authors: ["Agapius Bou Ghosn", "Marcus Nolte", "Philip Polack", "Arnaud de La Fortelle"]
primary_category: cs.RO
categories: ["cs.RO"]
text:
A Robust Hybrid Observer for Side-slip Angle Estimation
Agapius Bou Ghosn^1,
Marcus Nolte^2,
Philip Polack^1,
and Arnaud de La Fortelle^1,3
^1 Center for Robotics, Mines Paris, PSL University, 75006 Paris, France [agapius.boughosn, philip.polack, arnaud.delafortelle]@minesparis.psl.eu
^2 Institute for Control Engineering, TU Braunschweig, 38106 Braunschweig, Germany [email protected]
^3 Heex Technologies, Paris, France
For autonomous driving or advanced driving assistance, it is key to monitor the vehicle dynamics behavior. Accurate models of this behavior include not only the accelerations but also the side-slip angle, which results from the complex interaction between the tires and the road. Though it is an essential quantity (e.g., for stability assessment), the side-slip angle, unlike the accelerations, is not measurable with conventional off-the-shelf sensors. Therefore, accurate side-slip angle observers are necessary for the proper planning and control of vehicles. In this paper, we introduce a novel approach that combines model-based side-slip angle estimation with neural networks. We apply our approach to real vehicle data. We show that the proposed method is able to outperform state-of-the-art methods for normal driving maneuvers, as well as for near-limits maneuvers where providing accurate estimations becomes challenging.
§ INTRODUCTION
The side-slip angle of a vehicle is defined as the angle between the velocity vector and the vehicle's longitudinal axis; it is a fundamental quantity for assessing vehicle stability and thereby an important indicator of critical driving situations <cit.>. Moreover, it is part of the state in many non-linear models of vehicle dynamics. However, the side-slip angle is only measurable with expensive sensors, based either on optical flow measured directly over the ground or on highly accurate dual-antenna GNSS solutions. For this reason, the side-slip angle is a typical application for state observers in the literature.
A classic implementation of state observers is model-based, relying on a physical model describing the vehicle dynamics' behavior along with the available measurements (e.g. from inertial measurement units and/or cameras <cit.>).
Other approaches are based on neural networks (or learning-based), using training data to parameterize a highly non-linear black-box model that provides a direct input-output mapping from measurement data to the estimated dynamic states.
In the first case, the accuracy of the observer is related to the quality of the vehicle model and the observer's algorithm.
In the second case the accuracy is related to the quality of the training data and the network's architecture.
On the one hand, model-based approaches explicitly represent physical relations and hence provide more insight and explainability.
However, model accuracy is always limited.
Particularly when highly non-linear tire dynamics start dominating the behavior of the overall vehicle dynamics, the estimation accuracy of model-based approaches degrades either due to non-modeled effects, or due to inevitable parameter uncertainty.
Neural networks, on the other hand, are typically designed for performing accurate non-linear regression, making them especially suitable for non-linear estimation tasks – at the cost of losing physical explainability.
For this reason, hybrid approaches that combine physical models with neural networks are becoming more and more popular (cf. <cit.>) as they remain physically explainable, while being able to adapt to parameter uncertainty or modeling errors.
This paper introduces a hybrid observer that combines kinematics and neural networks to provide reliable side-slip angle estimations in both normal and harsh driving maneuvers. The introduced observer is tested on the Stadtpilot vehicle shown in Fig. <ref>.
The literature detailed in the next section considers state-of-the-art deterministic and learned approaches to estimate the side-slip angle of the vehicle.
In summary, we will prove that the proposed method is able to estimate the vehicle's side-slip angle in both low and high acceleration maneuvers using only in-car sensor measurements, outperforming state-of-the-art approaches.
For comparability, we focus on approaches that rely on inertial and GNSS sensors.
In the following, the state-of-the-art observers are presented in Section <ref>, the system setup is described in Section <ref>, the training and testing data sets are described in Section <ref>, the proposed method is presented in Section <ref>, and the results are presented and discussed in Section <ref>; the article is concluded in Section <ref>.
§ RELATED WORK
Observers presented in the literature will be detailed in this Section. As we distinguish between classical (or deterministic) observers and learning-based observers, the presented literature will be split into these two categories in Subsections <ref> and <ref> respectively. Subsection <ref> also includes hybrid approaches. After presenting the current state-of-the-art approaches, we will conclude by choosing the observers to which our method will be compared.
§.§ Classical observers
Classical observers are extensively used in the literature to estimate the states and parameters of a vehicle. They rely on a model describing the state evolution of the vehicle. The complexity of the model differs between applications and can range from simple kinematic models, as in <cit.>, to four-wheel dynamic models with complex tire models (e.g., the Pacejka tire model), as in <cit.>. The choice of the model used for estimation determines the quantities that must be known to estimate the required state, as well as the governing assumptions. The chosen model also determines the observer's domain of validity.
Different types of model-based observers can be used. The Luenberger observer, a linear observer, is used in <cit.> and <cit.> for the estimation of the vehicle velocity, side-slip angle, and yaw rate; in these works, a dynamic bicycle model and a linear tire model are used, but inadequacies appear when the tire model is no longer valid.
Nonlinear observers dealing with nonlinear models, as in <cit.>, <cit.>, <cit.>, are used to estimate the side-slip angle; in these works, the model is either a dynamic bicycle model or a four-wheel dynamic model with nonlinear tire models; these models represent a wider range of the vehicle's maneuvers, resulting in smaller estimation errors in nonlinear cases.
Luenberger and related observers solve initial-value problems; besides their restriction to linear plant models, they cannot account for model uncertainty (process noise) or measurement noise.
The Kalman filter solves some of the limitations of the Luenberger observer. In its basic implementation, it can only be applied to linear systems. In contrast to the Luenberger observer, it has been specifically designed to account for model uncertainty and (Gaussian) measurement noise.
The Extended Kalman Filter (EKF) extends the Kalman filter to nonlinear systems by linearizing at each step around the current estimate. The EKF has been used with a dynamic bicycle prediction model or a four-wheel dynamic prediction model for the estimation of the vehicle's side-slip angle, as in <cit.>, <cit.>, <cit.>, with different tire models.
§.§ Learned observers
Learned observers have recently been presented for data-based vehicle state estimation. They are implemented either as hybrid observers, combining deterministic equations and neural networks, or as fully learned observers involving only neural networks.
Hybrid approaches can be split into two types. On the one hand, several approaches combine a model-based filter with neural networks, such as the KalmanNet <cit.>.
KalmanNet integrates recurrent neural networks in a Kalman-filter-like predictor-corrector structure; it is employed in <cit.> to estimate the vehicle velocities in the x- and y-directions. <cit.> combine a sliding mode observer and a neural network to estimate the vehicle's velocities.
The second type of approach applies vehicle dynamics equations to support the neural networks' estimates by providing additional inputs to the network. For example, <cit.> calculate the side-slip angle rate based on a single-track model and feed it to the neural network along with the measurements to improve the network's side-slip angle estimation.
Fully learned approaches implement Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) networks, as in <cit.> and <cit.>, where a vehicle's velocities are predicted from sensor measurements, or <cit.>, where the vehicle's side-slip angle is predicted from sensor measurements.
As we are interested in estimating the vehicle's side-slip angle, we will compare our observer to the EKF-based approach introduced in <cit.> and to the hybrid approach introduced in <cit.>, as both approaches showed accurate estimations in their respective evaluations. The work in <cit.> also proposes a classification approach to identify the road conditions, which is out of the scope of this paper.
It should be noted that a fully learned approach, with no physical model input, was considered in <cit.> and was shown to have higher errors than the hybrid approach that we compare to; this is why no comparison with fully learned methods is included in our paper.
For the comparison, we implemented the approaches presented in <cit.> and <cit.>.
We trained the hybrid approach (<cit.>) on our own training data set and fed all implementations with the same data from our testing set.
§ SYSTEM SETUP
As we present a side-slip angle observer for autonomous vehicles, we applied this approach on a real vehicle: the Stadtpilot vehicle (AUDI A6 Avant C7) shown in Fig. <ref>. The characteristics of the vehicle are shown in Table <ref>.
The experimental platform is the same that we used for collecting data for a non-hybrid observer structure in <cit.>. For the sake of readability, we provide a very similar description: the vehicle is equipped with a conventional inertial measurement unit (IMU), the Audi Sensor Array (SARA), which measures the longitudinal and lateral accelerations (a_x, a_y), the yaw rate (ψ̇), and the wheel speeds (W_ij). Steering angle measurements (δ) are also available. A reference dual-antenna INS/GNSS sensor, the iTraceRT F400 (https://www.imar-navigation.de/en/products/by-product-names/item/itracert-f200-itracert-f400-itracert-mvt), provides highly accurate measurements of the position in UTM coordinates (Easting – X, Northing – Y), the longitudinal and lateral velocities (V_x, V_y), the longitudinal and lateral accelerations (a_x, a_y), the yaw (ψ), pitch (θ), and roll (Φ) angles and rates (ψ̇, θ̇, Φ̇), as well as the side-slip angle (β). The top view of the vehicle is presented in Fig. <ref>; it shows the sensors mounted on the vehicle with the measurements they provide, in addition to the side-slip angle to be estimated at the center of gravity. The Audi SARA IMU provides measurements at 50 Hz while the iTraceRT sensor provides measurements at 100 Hz; the iTraceRT measurements are downsampled to 50 Hz for synchronization purposes.
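As a minimal illustration of this synchronization step (not the authors' code; the signal names are illustrative, and a real pipeline might low-pass filter before decimating), the 100 Hz reference stream can be reduced to 50 Hz as follows:

```python
import numpy as np

def downsample_100hz_to_50hz(signal_100hz: np.ndarray) -> np.ndarray:
    """Keep every second sample of a uniformly sampled 100 Hz signal,
    so it lines up with the 50 Hz Audi SARA IMU stream."""
    return signal_100hz[::2]

# Example with a synthetic reference side-slip signal (illustrative only).
beta_ref_100hz = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
beta_ref_50hz = downsample_100hz_to_50hz(beta_ref_100hz)
```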
The developed observer takes its input data from the in-car sensors and compares its outputs to the ground truth measurements provided by the iTraceRT sensor.
Data collection by the described sensors is presented next.
§ DATA SET
Our data set is split into two categories: the first one is composed of normal driving throughout the city of Braunschweig, Germany in normal weather conditions; the second one is composed of harsh maneuvers executed on a special test track near Peine, Germany in varying weather conditions. We collected 1.03 million data samples in 5.7 hours of driving.
Fig. <ref> presents the accumulated acceleration measurements in the friction circle, to give an impression of the vehicle dynamics that we were able to achieve during data collection. The plot shows that most of the executed maneuvers are within the low acceleration range, which corresponds to normal city driving and other low-speed maneuvers, while other samples are in the higher acceleration range (reaching a = 1g); these correspond to the harsh maneuvers on the test track. The distribution is biased towards lower accelerations due to the ease of collecting data at lower accelerations and the difficulty of collecting data at the limits of handling; this is also reflected in the distribution of the side-slip angles over the collected data set shown in Fig. <ref>. It can be seen that the data set contains a near-symmetrical distribution of side-slip angles in a range of approximately ±18°. As the sample count is comparatively low for high side-slip angles, we will consider the accuracy of our estimation specifically in high-dynamic maneuvers.
An 80%/20% split separates the training data set from the testing data set; the split is performed such that both data sets include low-acceleration and high-acceleration data.
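One way to realize such a split is to stratify on a low/high lateral-acceleration label; the sketch below is not necessarily the authors' procedure, the 0.5 g threshold is an assumption borrowed from the maneuver classification later in the paper, and `X`, `beta_ref`, and `a_y` are assumed preloaded arrays.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X: (N, d) sensor samples, beta_ref: (N,) reference side-slip angles,
# a_y: (N,) lateral accelerations in m/s^2 (all assumed to be preloaded).
high_dynamics = (np.abs(a_y) > 0.5 * 9.81).astype(int)   # low vs. high acceleration

X_train, X_test, beta_train, beta_test = train_test_split(
    X, beta_ref, test_size=0.2, stratify=high_dynamics, random_state=0
)
```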
Having defined the data sets, the proposed approach is presented next.
§ PROPOSED APPROACH
The goal of the developed observer is to estimate the side-slip angle of the vehicle given the available measurements. We propose a hybrid architecture that relies on a kinematic bicycle model and neural networks to provide accurate estimations. The kinematic bicycle model is presented next, followed by the observer's proposed architecture.
§.§ The kinematic bicycle model
The kinematic bicycle, or single-track, model shown in Fig. <ref> is a simplified vehicle model valid for low-speed applications. It is governed by several assumptions: first, the four-wheel vehicle is simplified into a bicycle model, with the front wheels lumped into a single steerable wheel and the rear wheels into a single non-steerable wheel; second, the pitch and roll dynamics are neglected; third, the aerodynamic forces and the effect of the longitudinal wheel forces on the lateral dynamics are neglected.
The side-slip angle β of the kinematic bicycle model is then defined by:
β_kinematic = arctan( (l_r tan δ) / (l_f + l_r) )
The kinematic side-slip angle will be fed to the proposed architecture. As the error of the kinematic side-slip angle is prone to increase with harsh maneuvers <cit.>, the neural network is anticipated to adapt and make use of the provided input as much as possible to provide accurate estimations.
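For illustration, the kinematic side-slip angle above reduces to a one-line computation (angles in radians, lengths in meters; the numeric values below are placeholders, not the Stadtpilot parameters):

```python
import numpy as np

def kinematic_side_slip(delta: float, l_f: float, l_r: float) -> float:
    """beta_kinematic = arctan(l_r * tan(delta) / (l_f + l_r))."""
    return np.arctan(l_r * np.tan(delta) / (l_f + l_r))

beta_kin = kinematic_side_slip(delta=np.deg2rad(5.0), l_f=1.4, l_r=1.5)
```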
§.§ Proposed hybrid observer architecture
The proposed hybrid observer is a multi-layer perceptron (MLP). The inputs of the observer are the vehicle's measurements at a first stage, and the kinematic side-slip angle at a second stage, both at the current time step k. The output of the observer is the estimated side-slip angle at the current time step k. The proposed architecture is shown in Fig. <ref>.
The model includes in its first stage four layers with 16, 32, 64 and 128 neurons respectively; the output of the fourth layer is concatenated with the kinematic side-slip angle then fed, in a second stage, to two layers of 32 and 16 neurons respectively; the output layer follows. All the activation functions are hyperbolic tangent (tanh) except for a linear activation function at the output layer. A grid search was performed to choose the sizes of the layers. The model weights are initialized using the Xavier initialization. The loss function used is defined by:
L = L_β + L_reg
with L_β = MSE(β̂, β_ref) and L_reg being an L2 regularization for the network with a regularization rate of 10^-5.
The network is implemented using PyTorch and is trained using an Adam optimizer on a Nvidia Geforce GTX 1650 Ti for 30 epochs with a batch size of 64.
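A minimal PyTorch sketch of the described two-stage network follows. Layer sizes, tanh activations, the linear output, Xavier initialization, and the 10^-5 L2 regularization rate mirror the text; the input dimensionality, the learning rate, and the use of Adam's weight_decay to realize the L2 term are assumptions.

```python
import torch
import torch.nn as nn

class HybridSlipObserver(nn.Module):
    """Two-stage MLP: sensor measurements first, kinematic side-slip angle
    concatenated before the second stage (layer sizes as in the text)."""
    def __init__(self, n_measurements: int):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Linear(n_measurements, 16), nn.Tanh(),
            nn.Linear(16, 32), nn.Tanh(),
            nn.Linear(32, 64), nn.Tanh(),
            nn.Linear(64, 128), nn.Tanh(),
        )
        self.stage2 = nn.Sequential(
            nn.Linear(128 + 1, 32), nn.Tanh(),
            nn.Linear(32, 16), nn.Tanh(),
            nn.Linear(16, 1),                  # linear output layer
        )
        for m in self.modules():               # Xavier initialization
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, measurements, beta_kinematic):
        # measurements: (batch, n_measurements), beta_kinematic: (batch, 1)
        h = self.stage1(measurements)
        return self.stage2(torch.cat([h, beta_kinematic], dim=-1))

model = HybridSlipObserver(n_measurements=8)   # 8 inputs is an assumption
criterion = nn.MSELoss()                       # L_beta term of the loss
# L2 regularization with rate 1e-5 via weight_decay; the paper trains for
# 30 epochs with batch size 64 and does not state the learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
```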
Note that feeding the kinematic side-slip angle at the first stage (input 1) resulted in higher testing errors (up to 18%).
The trained network is tested and compared to state-of-the-art approaches next.
§ RESULTS
In this section, the proposed architecture is tested on the previously defined testing set. The results are compared to the state-of-the-art approaches considered in Section <ref>.
In what follows, the used metric is the mean absolute error (MAE). The proposed approach and the considered state-of-the-art approaches will be first evaluated on the whole testing set, then separate scenarios will be considered for a more detailed assessment of their performance.
§.§ Overall performance
We start our evaluation by assessing the performance of the observers on the whole testing set. The MAE values are calculated and shown in Table <ref>. The table shows that our approach has the lowest errors; the hybrid approach <cit.> performs better than the EKF approach <cit.> but presents almost 1.4 times higher errors than our approach.
To further examine our method, we split the following evaluation into two types of scenarios: a normal driving maneuver, in which the physical models should be valid (based on the analyses done in <cit.>, <cit.>), and a harsh maneuver, in which the physical models are expected to lose accuracy.
§.§ Normal driving maneuver
Next, we consider a "normal" driving maneuver that does not involve high excitation and whose lateral accelerations are below a_y = 0.5g (for model validity reasons explained in <cit.> and <cit.>). The considered maneuver has a maximum lateral acceleration of a_y^max = 0.35g; it was collected during the city driving phase in Braunschweig, Germany. The MAE of the different observers for the considered trajectory are shown in Table <ref>: they show that our approach outperforms the reference approaches by a factor of about 1.3 to 2.2 on average.
To visualize the behavior of the observers, we plot their estimations for the time span in which the peak side-slip angle is reached; the plot is shown in Fig. <ref>. It shows that the proposed approach is able to observe the side-slip angle with the lowest errors. The EKF based on the dynamic bicycle model <cit.> behaves similarly to the proposed approach in the increasing part of the plot but delivers higher errors at the peak and in the decreasing part. The GRU-based model <cit.> undershoots throughout the whole plot.
In the next section, we evaluate our approach in dynamic driving maneuvers.
§.§ Dynamic driving maneuver
We define a dynamic driving maneuver as one that includes high vehicle excitation, with lateral accelerations reaching values higher than 0.5g. This maneuver was executed on a special testing track as described in Section <ref>, with a maximum lateral acceleration of a_y^max = 0.85g. The MAE for the different approaches are shown in Table <ref>. All observers present higher errors than in the previous case, which is caused by the harshness of the maneuver. Our method is, however, still able to deliver the lowest errors, while the EKF approach <cit.> delivers the highest errors.
To evaluate the performance of the proposed observer as the lateral acceleration varies, Fig. <ref> shows the errors of the observers alongside the absolute value of the lateral acceleration. The plotted error is defined as:
e^i_k = |β̂^i_k-β^ref_k|
with k being the considered time step, β̂^i_k being the side-slip angle estimate by observer i at time step k, and β^ref_k being the reference side-slip angle measured by the iTraceRT at time step k.
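A small sketch of this evaluation, assuming time-aligned NumPy arrays of estimates and iTraceRT references:

```python
import numpy as np

def side_slip_errors(beta_hat: np.ndarray, beta_ref: np.ndarray):
    """Per-time-step absolute error e_k = |beta_hat_k - beta_ref_k| and its mean (MAE)."""
    e = np.abs(beta_hat - beta_ref)
    return e, float(e.mean())
```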
The plot shows that the proposed approach presents the lowest errors. The errors of all observers increase with the rise of the lateral acceleration: the EKF based on the dynamic bicycle model <cit.> shows the highest sensitivity to high lateral accelerations, with errors growing up to 0.081 rad; the hybrid GRU-based approach <cit.> presents a better performance, but its errors can reach those of the EKF <cit.> (e.g. at t=16); the proposed approach shows robustness to high accelerations and maintains the lowest errors along the trajectory.
A sample of the observers' performance around the highest lateral acceleration point is shown in Fig. <ref>, showing that the proposed method is the closest to the reference sensor.
In brief, the proposed approach is able to deliver the most accurate estimations among the considered state-of-the-art approaches for both low dynamic and high dynamic maneuvers; it adapts to harsh maneuvers where other state-of-the-art approaches lose accuracy.
Having demonstrated the accuracy of the proposed method, we compare the behavior of the kinematic bicycle side-slip angle to both the proposed approach's estimations and the reference iTraceRT side-slip angle. A sample trajectory plot is shown in Fig. <ref>. We see that our approach is able to correct the offsets present in the kinematic side-slip angle. This shows that the proposed hybrid observer takes advantage of the kinematic model but makes the necessary corrections to provide accurate estimations.
§ CONCLUSION
In this paper, we presented a novel observer architecture for the estimation of the vehicle's side-slip angle. The proposed architecture takes as its inputs measurements from conventional car sensors in addition to a kinematic side-slip angle based on a physical model. We used a real vehicle to collect the needed data sets; the maneuvers were carefully performed to cover low-acceleration and high-acceleration scenarios. The presented approach was compared to state-of-the-art approaches for both low-acceleration and near-limits maneuvers and was able to deliver the lowest errors for all considered scenarios. While the state-of-the-art methods lose accuracy as the harshness of the maneuver increases, the proposed approach adapts and delivers accurate estimations.
Future work will investigate the advantages and limits of using a simple physical model alongside neural networks to observe the states of a vehicle, and the ability of neural networks to correct the inaccuracies of simplified vehicle models.
§ ACKNOWLEDGEMENT
We would like to thank Prof. Markus Maurer at the Institute of Control Engineering, TU Braunschweig for enabling the collaboration that led to this joint paper.
entry_id: http://arxiv.org/abs/2306.09109v1
published: 20230615131130
title: NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and Pose Annotations
authors: ["Varun Jampani", "Kevis-Kokitsi Maninis", "Andreas Engelhardt", "Arjun Karpur", "Karen Truong", "Kyle Sargent", "Stefan Popov", "André Araujo", "Ricardo Martin-Brualla", "Kaushal Patel", "Daniel Vlasic", "Vittorio Ferrari", "Ameesh Makadia", "Ce Liu", "Yuanzhen Li", "Howard Zhou"]
primary_category: cs.CV
categories: ["cs.CV"]
text:
NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and Pose Annotations
Varun Jampani, Kevis-Kokitsi Maninis, Andreas Engelhardt, Arjun Karpur, Karen Truong, Kyle Sargent, Stefan Popov, André Araujo, Ricardo Martin-Brualla, Kaushal Patel, Daniel Vlasic, Vittorio Ferrari, Ameesh Makadia, Ce Liu, Yuanzhen Li, Howard Zhou
* Equal contribution
† C. Liu's current affiliation is Microsoft
Recent advances in neural reconstruction enable high-quality 3D object reconstruction from casually captured image collections. Current techniques mostly analyze their progress on relatively simple image collections where Structure-from-Motion (SfM) techniques can provide ground-truth (GT) camera poses. We note that SfM techniques tend to fail on in-the-wild image collections such as image search results with varying backgrounds and illuminations. To enable systematic research progress on 3D reconstruction from casual image captures, we propose `NAVI': a new dataset of category-agnostic image collections of objects with high-quality 3D scans along with per-image 2D-3D alignments providing near-perfect GT camera parameters. These 2D-3D alignments allow us to extract accurate derivative annotations such as dense pixel correspondences, depth and segmentation maps. We demonstrate the use of NAVI image collections on different problem settings and show that NAVI enables more thorough evaluations that were not possible with existing datasets. We believe NAVI is beneficial for systematic research progress on 3D reconstruction and correspondence estimation. Project page: <https://navidataset.github.io>
§ INTRODUCTION
The field of 3D object reconstruction from images or videos has been dramatically transformed in the recent years with the advent of techniques such as Neural Radiance Fields (NeRF) <cit.>.
With recent techniques, we can reconstruct highly detailed and realistic 3D object models from multiview image captures, which can be used in several downstream applications such as gaming, AR/VR, movies, etc.
Despite such tremendous progress, current object reconstruction techniques make several inherent assumptions to obtain high-quality 3D models.
A key assumption is that the near-perfect camera poses and intrinsics are given or readily available via traditional Structure-from-Motion (SfM) pipelines such as COLMAP <cit.>.
This assumption imposes several restrictions on the input image collections.
The input images have to be of sufficiently high quality (e.g. non-blurry) and the number of input
images should also be high (typically > 30-50) for SfM
to estimate sufficient correspondences across images.
In addition, SfM techniques typically fail on internet image sets that are captured with varying backgrounds, illuminations, and cameras. Such internet image collections do not require active capturing and are widely and readily available, such as product review photos or image search results (e.g., internet images of Statue-of-Liberty, Tesla Model-3 car, etc.). It is highly beneficial to develop 3D object reconstruction techniques that can automatically produce high-quality 3D models from such image collections in the wild.
In this work, we propose a new dataset of image collections which we refer to as `NAVI' (Not AVerage Image dataset).
Specifically, our dataset contains two types of image collections with near-perfect camera poses and 3D shapes: 1. Standard multiview object captures and 2. In-the-wild object captures with varying backgrounds, illuminations and cameras.
Fig. <ref> shows examples of the in-the-wild and multiview images in NAVI along with the 2D aligned 3D scans.
Next, we describe the key distinguishing properties of the NAVI dataset in relation to existing datasets.
Casual captures.
Several existing multiview datasets are either synthetic or captured in lab settings <cit.>. We capture NAVI images in casual real settings using hand-held cameras.
In-the-wild image collections. In addition to typical multiview images, NAVI also provides in-the-wild image collections where objects are captured under varying backgrounds, illuminations, and cameras. SfM techniques usually fail on such image sets and NAVI provides a unique opportunity to systematically research joint shape and camera estimation from in-the-wild image collections.
Near-perfect 3D geometry and camera poses. We use high-quality 3D scanners to get 3D shape ground-truth and also obtain high-quality 3D camera pose annotations with manual 2D-3D alignment along with rigorous verification. This is in contrast to several recent datasets such as <cit.> that rely on SfM to provide GT, thereby limiting the image capture setups.
Accurate dense correspondences. We provide accurate per-pixel correspondences using the 3D shape alignments.
While most real-world datasets for correspondence evaluation rely on known homographies <cit.> or sparse keypoint annotations recovered from estimated geometry <cit.>, NAVI's precise 2D-3D alignments lead to accurate and dense object correspondences.
Derivative annotations such as pixel-accurate object segmentation and monocular depth can be easily derived from high-quality 2D-3D alignments in NAVI.
Category-agnostic. Objects in the NAVI dataset are category-agnostic: the image collections do not share any category-specific shapes, in contrast to widely used 2D-3D datasets <cit.>.
To demonstrate the utility of NAVI,
we benchmark and analyze some representative techniques on three different problem settings: multiview object reconstruction, 3D shape and pose estimation from in-the-wild image collections, and dense pixel correspondence estimation from image pairs. In addition to these problem settings, one could also use NAVI images for other single-image vision problems such as single image 3D reconstruction, depth estimation, object segmentation, etc.
§ DATASET CONSTRUCTION
Challenges.
It is worth emphasizing the challenges in our data construction by taking a look at some existing
2D-3D aligned datasets. Several
works <cit.>
propose synthetic 3D assets which are used to render 3D-aligned images.
Real-world datasets such as Scan2CAD <cit.> and Pascal3D+ <cit.> use nearest intra-category CAD models for alignment w.r.t 2D images, resulting in only coarse annotations.
Similarly, IKEA Objects <cit.> and Pix3D <cit.> annotate retrieved
images by aligning one 3D CAD model to images using point correspondences.
Even for datasets with mostly exactly-matching products <cit.>, slight deformations
and moving parts that appear different on images with respect to their 3D scan can lead to
inaccurate alignments. Different instances of the same object can also have different shapes due to
other factors (e.g., shoes of different sizes are not simply uniformly scaled versions of one another).
Fig. <ref> shows the sample alignments from the existing datasets showcasing the challenges in obtaining near-perfect 3D shapes and in the 2D-3D alignment.
Rationale.
To avoid such issues in NAVI, we selected rigid objects without moving parts, manually scanned the object shape and took image captures of the same objects in diverse real-world settings. We then use our interactive alignment tool to obtain near-perfect 2D-3D alignments with precise pose control during the annotation. Most datasets, including our earlier attempts, use a multi-stage alignment process that involves annotating point correspondences and then optimizing the object pose. Even though this is a more scalable approach for dataset creation, the alignments are not as accurate as we want.
The NAVI dataset construction consists of 4 steps: (1) Scanning the 3D objects, (2) Capturing image collections, (3) 2D-3D alignment, and (4) Alignment verification.
[Figure: 2D-3D alignments from existing datasets have issues, as the 3D models do not exactly match the corresponding 2D image due to model or configuration discrepancies.]
1. Scanning the 3D objects.
We collect 36 rigid objects and use two professional 3D scanners, EinScan-SP <cit.> and EinScan Pro HD <cit.>, to obtain high-quality 3D object scans. We center the scans at origin, but do not normalize the shapes to preserve their metric dimensions (in mm).
Fig. <ref> displays some NAVI images and their aligned 3D scans. Notice the diverse and category-agnostic nature of the objects.
2. Capturing image collections. For each object, we captured two types of image collections:
in-the-wild, and multiview. In-the-wild captures contain images with different backgrounds,
illumination, and cameras. multiview captures offer the standard multiview setup: same camera,
object pose, and environment, but with different camera poses.
For practical utility, we captured the images in casual settings with hand-held cameras ranging from
mobile phones to DSLRs and mirrorless cameras. In total, we use 12 different cameras to capture
around 10.5K images with 2.3K in-the-wild images and 8.2K multiview images. More dataset details
are present in the supplementary material.
3. 2D-3D alignment.
The goal is to obtain near-perfect 2D-3D alignments; i.e., accurate 6DoF rigid object
transformations along with accurate camera intrinsics.
We developed an interactive tool on which the user can progressively align the 3D object by rotating and translating it in 3D, using the mouse.
Since we know the cameras used to capture the images, we initialize the camera focal length, which
can be further refined during the alignment process.
Our interactive tool gives the user full control over the alignments, and we observe that this leads to higher-quality poses than alternative implicit alignment tools that optimize the pose from 3D↔2D point correspondences <cit.>.
We trained 10 dedicated annotators for our alignment task allowing us to obtain higher quality
annotations than several existing datasets
that rely on generic non-expert annotators.
4. Alignment verification.
To ensure high-quality annotations, we further manually verify each 2D-3D alignment with 2 expert annotators. Specifically, we overlay the 3D shape onto the 2D image and ask trained annotators to label them as `incorrect' if the alignments look even slightly wrong. For images labeled `incorrect', we repeat the 2D-3D alignment and verification steps. After two stages of alignment and verification, we discard around 7% of the original captured images.
We further annotate images with a binary occlusion label to indicate if the object is occluded by other objects. We exclude occluded object images from our validation sets for different tasks to avoid introducing artifacts in the metrics.
Derivative annotations.
In addition to the full 3D alignments of scans to images, there are several derivative annotations that
result from the accurate 2D-3D alignments:
Relative camera poses, dense correspondences, metric depth maps, and binary masks.
Relative camera poses are an implicit output of alignment, as all objects were posed with respect to
their canonical pose.
Since we have annotated multiple images of the same object, we obtain dense correspondences on the images by sampling the pixels in mutually visible parts of the 3D shape in image pairs. This enables dense correspondence evaluation both for the standard multiview setup, and for in-the-wild images captured in different environments.
Fig. <ref> visualizes sample GT pixel correspondences on NAVI image
pairs.
Furthermore, metric depth maps are obtained by computing the depth of the 3D alignments from the camera viewpoint. The binary object masks are trivially obtained by binarizing the depth maps.
Fig. <ref> shows sample object depth and mask annotations in NAVI.
For simplicity, we refer to our annotations as GT.
§ 3D FROM MULTIVIEW IMAGE COLLECTIONS
Problem setting.
Given a set of images taken from different viewpoints, the task is to reconstruct the 3D shape and appearance of an object. The 3D representation can then be used for downstream tasks like scene editing, relighting, and rendering of novel views. Traditional multiview reconstruction pipelines such as Structure-from-Motion (SfM) first reconstruct camera poses together with a sparse object representation followed by a dense reconstruction and potential mesh generation step. After adding materials and textures, the resulting 3D asset can then be used to render new views.
More recent techniques such as NeRF <cit.>
optimize neural representations of objects directly on the RGB images, with the camera poses obtained from an SfM reconstruction as a pre-processing step.
Related datasets.
Synthetic multiview scenes <cit.> are widely adopted for evaluations.
In contrast to synthetic scenes that come with precise 3D scene and camera poses but only
translate to real-world photography to a limited degree, real scenes usually require off-the-shelf SfM
methods <cit.> for pose estimation.
BlendedMVS <cit.>, one of the first multi-purpose datasets for stereo reconstruction, comes with re-rendered images based on geometry and poses reconstructed via an SfM pipeline.
CO3D <cit.> and Objectron <cit.> are large-scale
datasets with object-centric videos, and provide either a rough point cloud reconstruction of the
object <cit.> or a 3D bounding box <cit.>.
The dataset of <cit.> offers a handful of 3D laser scans along with the corresponding
real-world image collections.
Recently, the works of <cit.> and OmniObject3D <cit.> provide 3D object scans along with multiview image captures in constrained lab settings; these works rely on SfM for semi-automatic 2D-3D alignment. In summary, existing multiview datasets are synthetic <cit.>, based on reconstructed 3D models <cit.>, come with rough 3D shapes <cit.>, provide only a limited number of scenes <cit.>, or consist of image captures in constrained settings <cit.>.
The distinctiveness of NAVI.
In contrast, NAVI satisfies multiple requirements by offering highly-accurate 3D shapes and alignments for multiple objects from different categories in different real-world environments and illumination. This allows for more precise evaluation of 3D reconstruction techniques on real-world object image collections.
NAVI dataset and metrics.
We split each of the multiview image sets into 80%/20% train/validation sets.
The multiview sets are object-centric with an average of 25 images per set (minimum 3 to maximum
180).
For even evaluation across the objects, we randomly sample 5 multiview scenes for each object
from the subsets that include more than 6 images, resulting in 180 multiview sets for our
experiments.
We use the standard novel view synthesis metrics PSNR, SSIM, and LPIPS <cit.> on validation images and report average metrics across all sets.
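For reference, a hedged sketch of how these three metrics can be computed with scikit-image and the lpips package (not the evaluation code used for the paper; the image and tensor conventions below are assumptions):

```python
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # perceptual metric

def view_synthesis_metrics(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: float32 HxWx3 images with values in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # lpips expects NCHW tensors in [-1, 1]
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None] * 2.0 - 1.0
    lp = float(lpips_fn(to_t(pred), to_t(gt)))
    return psnr, ssim, lp
```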
Camera Poses    | PSNR↑ | SSIM↑ | LPIPS↓
COLMAP          | 24.04 | 0.93  | 0.079
NAVI Poses (GT) | 27.54 | 0.94  | 0.045

Table: View synthesis metrics using COLMAP and our GT poses. This shows considerably better performance with our GT poses. This demonstrates that COLMAP <cit.> can fail on multiview scenes, showcasing the use of our pose annotations on multiview scenes.
Experiment. A key assumption in most existing works is that SfM provided camera poses are good enough for 3D reconstruction. We want to test this hypothesis by evaluating how our annotated camera poses compare against COLMAP <cit.> poses for off-the-shelf 3D reconstruction techniques. For this, we use the generic and widely-used InstantNGP <cit.> to reconstruct Radiance Fields from the multiview image sets. For the optimization we use the GT masks to limit the reconstruction to the object area.
Results: COLMAP vs. GT poses.
Table <ref> shows the novel view synthesis metrics on validation images.
Results on all the metrics demonstrate considerably better reconstruction with our GT poses
compared to using COLMAP poses. COLMAP only registers a partial set of views in several cases.
This shows that our GT poses are accurate and still valuable in the multiview reconstruction setting for analyzing reconstruction techniques independently of inaccuracies from the camera registration. While COLMAP poses are arbitrarily rotated and scaled, all NAVI scenes are centered at
the origin and in a common coordinate frame. This facilitates evaluation across different objects,
especially in the context of grid-based methods like InstantNGP where the scene bounds have some
impact on performance.
§ 3D FROM IN-THE-WILD IMAGE COLLECTIONS
Problem setting.
The aim is to estimate 3D shape and appearance of an object given an unconstrained image
collection, where the object is captured with different backgrounds, cameras and illuminations. Such
image collections are readily available on the internet; e.g., image search results, product
review photos, etc.
The high variability in the appearance across images makes pose estimation and reconstruction
highly challenging compared to the more controlled multiview captures.
Techniques need to jointly reason about camera poses and illumination in addition to 3D geometry and appearance.
Standard SfM techniques <cit.> fail to recover camera poses on such in-the-wild image sets.
Existing datasets.
Curated object centric image collections from in-the-wild data are scarce. While one could search
online image databases for multiple occurrences of the same object or class <cit.>,
additional data like camera parameters or object shape as well as the certainty that all images
actually depict the same object instance is critical for faithful evaluation. DTU MVS
dataset <cit.> is widely used as a proxy for in-the-wild data <cit.> as it comes with different lighting conditions for each of the 124
scenes. However, the controlled acquisition environment does not fully reflect in-the-wild
conditions. Additionally, 3D scans and depths are of limited quality and coverage since the
structured-light scan is only acquired at the given view positions. NeROIC <cit.>
and NeRD <cit.> provide small collections of scenes for 360° object
reconstruction featuring lighting changes and poses reconstructed via SfM. However, no GT object
shapes are included.
SAMURAI <cit.> adds eight image sets to the NeRD dataset with different cameras, backgrounds and illuminations, but it only provides RGB images without any GT camera poses or shapes. The NAVI dataset subsumes these 8 SAMURAI in-the-wild image collections and provides near-GT poses and 3D shapes for them.
The distinctiveness of NAVI.
NAVI provides the first real-world in-the-wild image collections with GT 3D shapes and camera poses.
For evaluation, existing techniques such as <cit.> rely on novel view synthesis
metrics on held-out images which entangle the role of estimated camera poses and shapes.
It is not possible to assess whether the view synthesis is poor due to a wrongly estimated camera pose or a wrongly estimated 3D object.
GT poses and shapes in NAVI wild-sets provide a unique opportunity to systematically analyze different techniques using pose metrics. In addition, NAVI also enables thorough analysis of techniques with controlled noise levels in the camera parameters.
NAVI dataset and metrics.
We divide each of the in-the-wild image sets
of NAVI into 80 % / 20 % splits for training and validation respectively, where the techniques optimize a 3D asset using the train images and are evaluated on validation sets.
On average, there are 65 images in each in-the-wild set, with a minimum of 46 and a maximum of 93 images.
We use 2 different setups for evaluation. The first is the standard novel view synthesis metrics, i.e., PSNR, SSIM and LPIPS <cit.> scores on held-out validation images. The second is camera pose evaluation, where we use Procrustes analysis <cit.> to compute the mean absolute rotation, translation and scale differences of the camera pose estimations for all the images.
The camera metrics are a unique feature of NAVI enabled by our near-GT poses, compared to existing real-world datasets with in-the-wild image collections.
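A sketch of one way to implement this evaluation, aligning estimated camera centers to the GT centers with a similarity (Procrustes/Umeyama-style) transform before measuring residuals; this is an illustration under assumed conventions, not the authors' exact implementation:

```python
import numpy as np

def procrustes_align(centers_est: np.ndarray, centers_gt: np.ndarray):
    """Align Nx3 estimated camera centers to GT centers with a similarity
    transform (scale s, rotation R, translation t), then report the mean
    residual translation error of the aligned cameras."""
    mu_e, mu_g = centers_est.mean(axis=0), centers_gt.mean(axis=0)
    Xe, Xg = centers_est - mu_e, centers_gt - mu_g
    U, S, Vt = np.linalg.svd(Xe.T @ Xg)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt                                 # rotation mapping est -> gt
    s = np.trace(np.diag(S) @ D) / (Xe ** 2).sum() # scale
    t = mu_g - s * (mu_e @ R)                      # translation
    aligned = s * (centers_est @ R) + t
    residuals = np.linalg.norm(aligned - centers_gt, axis=1)
    return R, s, t, residuals.mean()
```

Per-image rotation and scale differences reported in the paper would be derived analogously from the aligned poses; only the translation residual is shown here for brevity.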
Techniques.
We analyze four recent reconstruction techniques that can jointly optimize camera poses and can also deal with varying illuminations to some extent:
NeRS <cit.>, SAMURAI <cit.>, NeROIC <cit.>,
and GNeRF <cit.>.
Different works use different camera initializations and also model the object appearance differently. NeROIC assumes roughly correct COLMAP poses, NeRS and SAMURAI assume rough quadrant pose initialization, and GNeRF takes randomly initialized poses. Please refer to the respective papers for more details. While these techniques use either pre-computed or GT object masks, we use GT object masks in our experiments to ensure a fair comparison.
Method         | Pose Init  | PSNR↑ (S_C / ∼S_C) | SSIM↑ (S_C / ∼S_C) | LPIPS↓ (S_C / ∼S_C) | Translation↓ (S_C / ∼S_C) | Rotation °↓ (S_C / ∼S_C)
NeROIC <cit.>  | COLMAP     | 19.77 / -          | 0.88 / -           | 0.1498 / -          | 0.09±0.12 / -             | 42.11±17.19 / -
NeRS <cit.>    | Directions | 18.67 / 18.66      | 0.92 / 0.93        | 0.1078 / 0.1067     | 0.49±0.21 / 0.52±0.19     | 122.41±10.61 / 123.63±8.8
SAMURAI <cit.> | Directions | 25.34 / 24.61      | 0.92 / 0.91        | 0.0958 / 0.1054     | 0.24±0.17 / 0.35±0.24     | 26.16±22.72 / 36.59±29.98
GNeRF <cit.>   | Random     | 8.30 / 6.25        | 0.64 / 0.63        | 0.52 / 0.57         | 1.02±0.16 / 1.04±0.09     | 93.15±26.54 / 80.22±27.64
NeROIC <cit.>  | GT         | 22.75 / 21.31      | 0.91 / 0.90        | 0.0984 / 0.0845     | 0.07±0.24 / 0.01±0.01     | 33.17±19.63 / 31.90±11.11
NeRS <cit.>    | GT         | 17.92 / 18.02      | 0.92 / 0.93        | 0.114 / 0.1098      | 0.62±0.19 / 0.65±0.2      | 86.96±27.63 / 89.43±22.60
SAMURAI <cit.> | GT         | 25.65 / 25.59      | 0.92 / 0.92        | 0.0949 / 0.0881     | 0.16±0.14 / 0.25±0.26     | 21.55±21.72 / 28.25±26.71

Table: Metrics for 3D shape and pose from image collections in the wild. View synthesis and pose metrics over two subsets from all wild-sets, depending on the success of COLMAP (S_C / ∼S_C). Rendering quality is evaluated on a holdout set of test views that are aligned as part of the optimization without contributing to the shape recovery. We include GNeRF as a separate baseline although this method is not designed for multi-illumination data. We report metrics with the methods' default camera initialization as well as initializing with the GT poses that come with NAVI.
§.§ Analysis
COLMAP vs. GT poses.
Table <ref> shows the view synthesis performance and camera pose errors for different techniques and camera initializations.
We observe that COLMAP reconstruction only works for a subset of scenes S_C (19 out of 36 scenes) for which the camera pose estimation using COLMAP yields more than 10 cameras.
For comparisons with NeROIC that rely on COLMAP initialization, we separately report the metrics on
scenes S_C where COLMAP works and those where COLMAP fails (∼ S_C).
We omit one scene (vitamins bottle) that shows some inconsistencies between views because of a moving cap.
Compared to the results from Section <ref>, the increased complexity of the task is reflected in lower performance.
Comparing the performance of NeROIC with COLMAP to the initialization with NAVI GT poses on the
S_C subset, it is clear that the NAVI GT poses are also superior in this setting. In addition to any
COLMAP inaccuracies, the 3D reconstruction task becomes harder as the number of images shrinks
due to incomplete COLMAP pose recovery, which registers only a subset of views.
Optimizing with GT poses can give insights into the additional challenges of the in-the-wild task independent of any dependency like COLMAP. This enables us to observe the other limitations that have an impact on in-the-wild reconstruction quality like the illumination model in SAMURAI or material model in NeRS.
Comparing different methods.
Table <ref> shows that SAMURAI performs best, although the camera reconstruction quality varies drastically from scene to scene, as can be seen from the large uncertainty. This is partly by design, as views with large reconstruction errors are discarded over the course of optimization in this approach. It should be noted that data similar to NAVI guided SAMURAI's design. The results indicate that this data covers aspects not available in the other (predominantly synthetic) datasets used for evaluations so far.
Fig. <ref> shows sample novel view synthesis results of different techniques
on an example from the "Keywest showpiece" validation set. This is a challenging object with high
frequency details (e.g. text), some symmetry, and glossy surface areas. We can observe different
artifacts characteristic for the evaluated methods like the rotated view and the high specularity in
NeRS, texture smoothness in SAMURAI, and floater artifacts in NeROIC. NAVI includes several
challenging objects that are well suited to evaluate the methods' limits.
Camera metrics.
Thanks to the GT camera pose annotations, both the novel view synthesis and camera evaluations can be done on the same data, whereas in the past multiple datasets, often including synthetic data, had to be used. Together with the GT masks from NAVI, all the confounding, varying assumptions on the input data across different techniques can be made uniform here.
For all techniques, camera errors are relatively high overall; still, there is a correlation between pose error and view synthesis quality.
NeRS shows a surprisingly large camera pose error: it can be visually confirmed that test views are not that well aligned, yet 3D mesh generation based on the training views works relatively well. Since camera pose was not a focus of the original work, techniques like NeRS can benefit from explicit pose evaluations for technical improvements.
Analysis with varying camera noise.
Annotated camera parameters in NAVI allow for a controlled study of how different techniques work with an increasing amount of camera noise in their camera initialization.
Specifically, we add normally distributed noise with zero mean and varying standard deviation to the annotated poses before feeding them as input to the different techniques.
The rotational change is limited to +/- 90°, and the translation noise scales with the mean distance of the cameras to the object. A noise level of 1.0 translates to a standard deviation of 10% of the mean distance for the translation noise and an 18° standard deviation for the rotation noise on a linear scale.
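A sketch of this perturbation scheme using SciPy's Rotation class; details beyond what the text states, such as how the rotation axis is drawn, are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def perturb_poses(rotations, camera_centers, noise_level, seed=0):
    """Perturb GT camera poses with zero-mean Gaussian noise.

    A noise level of 1.0 corresponds to an 18 degree rotation std (clipped to
    +/- 90 degrees) and a translation std of 10% of the mean camera-to-object
    distance, as described in the text. Scenes are assumed to be centered at
    the origin, so the norm of a camera center is its distance to the object.
    """
    rng = np.random.default_rng(seed)
    mean_dist = np.linalg.norm(camera_centers, axis=1).mean()
    rot_std = np.deg2rad(18.0) * noise_level
    trans_std = 0.10 * mean_dist * noise_level

    noisy_R, noisy_c = [], []
    for R_gt, c_gt in zip(rotations, camera_centers):
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = np.clip(rng.normal(0.0, rot_std), -np.pi / 2, np.pi / 2)
        dR = Rotation.from_rotvec(angle * axis).as_matrix()
        noisy_R.append(dR @ R_gt)
        noisy_c.append(c_gt + rng.normal(0.0, trans_std, size=3))
    return np.stack(noisy_R), np.stack(noisy_c)
```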
Fig. <ref> shows the plots with novel view synthesis and camera metrics
for SAMURAI, NeRS and NeROIC.
While the pose error generally increases with the noise level, the camera rotation error, for example, is not strictly monotonically increasing. This points to a loss landscape with local minima.
Both SAMURAI and NeRS seem relatively robust to varying camera noise, while NeROIC's performance degrades with increasing camera noise.
SAMURAI seems to be robust to large noise levels but, except for GT poses, yields a high translation error. This might stem from the camera multiplex initialization and view weighting scheme.
Translation can also be approximated by a focal length change to some extent, which could also happen in SAMURAI, where the global scene bound is part of a regularization that prefers cameras around the mean radius.
NeROIC performs very well under small noise levels, but cameras rotate too far away from the object bounding box at higher camera noise levels.
It seems like small rotation errors can be compensated by the neural network (if conditioned on
direction) to some extent here.
In summary, different methodologies seem to be needed for different strengths of camera noise.
NAVI can help systematically investigate how the camera optimization performs in a technique
thereby informing on several useful design choices for technical improvements
(e.g. larger vs. smaller pose updates, regularization weights, initialization and fine-tuning). In
addition, investigations around the breaking point of a method can lead to valuable insights into the
task of joint shape and camera optimization.
§ CORRESPONDENCE ESTIMATION
Problem setting.
Given a pair of images of the same object, the goal of correspondence estimation is to match a set
of object pixels from one image to the corresponding pixels in the second image.
By definition, an image point can have at most one correspondence in the other image, and some points may be unmatched due to occlusion. Image pair correspondences are fundamental for the downstream tasks of 3D reconstruction and pose estimation, where a robust estimator is often used to recover the underlying relative camera rotation and translation.
Existing datasets. Finding a suitable dataset for training and evaluating correspondence estimation methods can be a challenge. SPair-71k <cit.> and CUB <cit.> provide in-the-wild semantic correspondences, but these correspondences associate parts of different objects and have limited use in instance-level tasks.
Manually labeling fine-grained, instance-level correspondences is a time-consuming and
error-prone task, so datasets must rely on either known real <cit.> or
synthetic <cit.> homographies, or complete scene
information <cit.>.
However, synthetic homography pairs suffer from unrealistic image distortion, and many of the latter datasets focus only on indoor/outdoor scenes and not object-centric imagery.
Alternatively, high-quality 3D models <cit.> can be used to render
object-focused image pairs with known correspondences, but methods may suffer from a wide
domain gap when transferring knowledge from synthetic renderings to real world scenes.
The distinctiveness of NAVI.
In contrast, the NAVI dataset annotations allow us to generate real-world image pairs with dense per-pixel correspondences, due to the precise 2D-3D alignments. This provides a unique opportunity to have novel dense evaluation metrics for correspondence estimation techniques.
Additionally, the NAVI in-the-wild collections allow correspondences to be annotated across images with different backgrounds, lighting conditions, and camera models.
For example, Fig. <ref> shows sample pixel correspondences on NAVI
in-the-wild image pairs.
NAVI dataset and metrics.
We sample two types of correspondence datasets in NAVI.
The first dataset contains randomly sampled image pairs within the same multiview set to
represent the scenario of a fixed scene and camera model. The second dataset contains randomly
sampled pairs from the in-the-wild set to emulate the variety of backgrounds, illuminations,
and cameras.
For each image pair, we can use the complete camera-object knowledge to label ground-truth correspondences between the two images while respecting self-occlusions. We sample up to 707 multiview pairs and 1035 in-the-wild pairs per object, resulting in validation sets with 24,745 and 35,931 pairs, respectively.
We limit GT correspondence labels to object pixels, since the data annotation process limits the available depth information to object points only. Additionally, we resize each image before evaluation such that its largest dimension is 1200 pixels.
For benchmarking, we evaluate both correspondence and pose estimation metrics.
We use precision (reprojection error less than 3 pixels) and recall to directly evaluate
correspondences, but we define a recall metric that leverages the dense ground truth
correspondences made available by the NAVI 2D-3D alignment.
For each object pixel visible in the first image, we find the corresponding location in the second image, after filtering out instances of self-occlusion. Given a correspondence prediction set, we calculate the percentage of ground truth matches which have a corresponding prediction whose keypoints are within N pixels of error.
We denote this metric dense recall, and it provides an understanding of how well-distributed the predicted correspondences are across the co-visible regions.
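A sketch of this dense recall computation, assuming GT correspondences and predictions are given as arrays of pixel coordinates (illustrative only, not the benchmark code):

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_recall(gt1, gt2, pred1, pred2, px_thresh=15.0):
    """Fraction of dense GT matches (gt1[i] <-> gt2[i]) covered by some
    predicted match (pred1[j] <-> pred2[j]) whose endpoints both lie within
    px_thresh pixels. Inputs are (N, 2) / (M, 2) pixel-coordinate arrays."""
    if len(pred1) == 0:
        return 0.0
    tree = cKDTree(pred1)  # fast lookup of predicted keypoints in image 1
    covered = 0
    for p1, p2 in zip(gt1, gt2):
        for j in tree.query_ball_point(p1, r=px_thresh):
            if np.linalg.norm(pred2[j] - p2) <= px_thresh:
                covered += 1
                break
    return covered / len(gt1)
```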
In addition, we also estimate relative camera poses from the estimated correspondences
and calculate the rotation error between the predicted and ground truth rotation matrices using Rodrigues' formula, and report accuracy within 5^∘, 10^∘, and 20^∘ of error following <cit.>.
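And a small sketch of the rotation metric: the geodesic angle between predicted and GT rotations (equivalent to the angle obtained via Rodrigues' formula), with thresholded accuracies; pure NumPy, illustrative only:

```python
import numpy as np

def rotation_error_deg(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic angle (in degrees) of the relative rotation R_pred^T R_gt."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def rotation_accuracy(errors_deg, thresholds=(5.0, 10.0, 20.0)):
    """Fraction of image pairs with rotation error below each threshold."""
    errors_deg = np.asarray(errors_deg)
    return {t: float((errors_deg < t).mean()) for t in thresholds}
```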
Techniques.
We evaluate the following 4 types of correspondence estimation methods:
SIFT + MNN/NN-Ratio <cit.> that use traditional keypoint detection with heuristic traditional matching; SuperPoint + MNN/NN-Ratio <cit.> that use learned keypoint detection with traditional matching;
SuperPoint + SuperGlue <cit.> that use both learned keypoint detection and
learned matching and; LoFTR <cit.> that proposes dense learnable matching. We
directly evaluate these off-the-shelf models trained on their respective datasets. Please refer to the
original papers for more details.
§.§ Analysis
Multiview vs. In-the-wild pairs.
Table <ref> presents the evaluation metrics on the multiview/in-the-wild image pair datasets in NAVI. Across all metrics, we observe a significant decrease in performance from multiview to in-the-wild pairs. Traditional methods (i.e. SIFT+MNN/Ratio) are insufficient to handle major changes in lighting conditions, such as ambient lighting and shadows produced by the environment. Learned methods (SuperPoint and SuperGlue) are more robust to changes across in-the-wild images with different backgrounds, lighting and cameras. We note that SuperGlue experiences a 3% decrease from multiview to in-the-wild in [email protected] and a 3.5% decrease in Dense-Recall@15px, compared to 6% and 4.6% for the traditional matcher (SuperPoint + NN-Ratio).
We also note that LoFTR proves to be less robust to changes in lighting conditions than the sparse
feature-based SuperPoint+SuperGlue method. These results emphasize the importance of exposing
learnable features and matchers to sufficient in-the-wild image pairs during training.
Dense coverage.
Table <ref> also shows the dense recall metric enabled by the dense GT correspondences in NAVI.
This measures the coverage of pixel correspondences given a wide error tolerance (15 pixels).
Local feature techniques are highly dependent on texture-rich regions and suffer from low coverage over smooth/textureless overlapping regions. LoFTR, a dense learnable matcher, performs well on the multiview split but is outperformed by SuperPoint+SuperGlue on the in-the-wild split.
This dense recall metric highlights that existing matching techniques recover correspondence sets with low coverage of overlapping object regions, and that the NAVI dataset may serve as a benchmark for this important evaluation metric. Finetuning these methods on object-centric data is likely to yield better performance.
Figure <ref> shows some sample visual results of correspondences with different techniques.
§ CONCLUSION AND DISCUSSION
Use of NAVI in other tasks.
In addition to 3D from image collections and correspondence tasks, NAVI can be useful for single-image tasks such as single image 3D reconstruction, monocular depth or normal estimation and object segmentation. There exist several large-scale datasets for these tasks and NAVI can be used as an additional fine-tuning or evaluation dataset. We present some preliminary single image 3D reconstruction experiments in the supplementary material.
Limitations. Scale is the main limitation of the NAVI dataset which consists of only 36 objects and ≈10K images. We prioritize annotation quality over quantity; and our current rigorous data capture and annotation pipeline is not easily scalable to collect large datasets. Since the techniques for 3D from image collections usually optimize the 3D models within an image collection, we do not find the small scale of NAVI to be a limiting factor. In the future, we also plan to extend the dataset to videos.
Concluding remarks.
In summary, we propose NAVI dataset with multiview and in-the-wild image collections annotated
with near-perfect 3D shapes and camera poses. We demonstrated the use of NAVI for better
analysis on 3D from multiview image collections, 3D from in-the-wild image collections and pixel
correspondence estimation problems. We believe NAVI is beneficial for a multitude of 3D
reconstruction and correspondence tasks.
Acknowledgements.
We thank Prabhanshu Tiwari, Gourav Jha, and Ratandeep Singh for coordinating the annotation
process, and all annotators who contributed to NAVI. We also thank Mohamed El Banani and Amit Raj
for their valuable feedback on the manuscript.
|
http://arxiv.org/abs/2306.06086v1
|
20230609174858
|
Developing Speech Processing Pipelines for Police Accountability
|
[
"Anjalie Field",
"Prateek Verma",
"Nay San",
"Jennifer L. Eberhardt",
"Dan Jurafsky"
] |
cs.CL
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] |
Police body-worn cameras have the potential to improve accountability and transparency in policing. Yet in practice, they result in millions of hours of footage that is never reviewed. We investigate the potential of large pre-trained speech models for facilitating reviews, focusing on ASR and officer speech detection in footage from traffic stops. Our proposed pipeline includes training data alignment and filtering, fine-tuning with resource constraints, and combining officer speech detection with ASR for a fully automated approach.
We find that (1) fine-tuning strongly improves ASR performance on officer speech (WER=12-13%), (2) ASR on officer speech is much more accurate than on community member speech (WER=43.55-49.07%), (3) domain-specific tasks like officer speech detection and diarization remain challenging. Our work offers practical applications for reviewing body camera footage and general guidance for adapting pre-trained speech models to noisy multi-speaker domains.
Index Terms: speech recognition, accountability, policing, social applications, noisy domains
§ INTRODUCTION
Over the last decade, police departments across the United States have rapidly adopted body-worn cameras (BWCs) <cit.>.
This rapid adoption has been spurred on by widespread protests demanding improved accountability and transparency following high-profile deaths of civilians involving officers' use of force <cit.>. In some ways, BWCs have resulted in improvements: the footage is valuable evidence in instances such as litigation of excessive force cases <cit.>, and analysis of hand-transcribed footage can identify racial disparities in policing and failures to practice procedural justice <cit.>.
However, in the absence of a lawsuit or high-profile incident, most footage is never reviewed.
Further, reliance on manual transcriptions limits the scalability of existing automated analyses <cit.>.
At the same time, large pre-trained speech models have achieved remarkable performance over standardized datasets <cit.>. Models like Whisper and Wav2Vec2 also have demonstrated potential in social good applications, e.g., in monitoring audio(visual) materials related to long-term elderly care <cit.> or child exploitation <cit.>. However, in applications involving multi-speaker conversations in noisy environments, models require application-specific adaptation and evaluation <cit.>. Little work has investigated the speech processing of police BWC footage specifically.
Here, we develop and evaluate automatic speech recognition (ASR) and police officer speech detection (diarization) for police BWC footage. Automatic transcription of officer speech would allow extending existing text analyses of racial bias in hand-transcriptions to new data without requiring expensive transcription efforts <cit.>.
It would also allow departments to determine adherence to a procedure by using text classifiers <cit.> or keyword searches.
Although most reviews are likely to be internal, some departments publicly release BWC footage or are mandated to provide access upon request <cit.>.
Thus, speech-processing technology could support independent audits.
Our primary data is footage from 1,040 vehicle stops conducted by one department in one month, where utterances spoken by officers and community members were previously hand-transcribed.
We use the data to construct training and test data sets for ASR and officer speech detection. We evaluate ASR models, with and without in-domain fine-tuning, over the entire test set, dividing by role (officer or community member), race, and gender, and we examine the performance of officer speech detection in combination with ASR.
Our findings provide insight into the best practices and limitations of developing technology in this domain. For example, our training data processing pipeline is robust enough that fine-tuning improves ASR performance by 3-11 points. We also show evidence that Whisper models learn to mimic transcribers' representations of transcription confidence by marking difficult segments as unintelligible. Differences by gender and race are not significant; however, ASR over officer speech (WER=12-13% for officers unseen in training) is much more accurate than over community member speech (WER=43.55-49.07%), which suggests that models have a high potential for addressing accountability with less risk of compromising community member privacy <cit.>.
Finally, we identify diarization, specifically officer speech detection, as a continued challenge.
§ DATA
Video recordings of the 1,040 vehicle stops and hand-transcriptions were provided to us under a data use agreement for the management of such high-risk data and under IRB supervision. The data is generally noisy. Prior transcripts were intended for language analysis, rather than the development of speech processing tools, so not all speech was transcribed and diarized.[The transcribers were instructed to transcribe only speech by officers and community members, not police dispatch; they inconsistently included officer speech to dispatch (vs. to the community member).]
Stops contain background noise like wind and traffic. They contain multiple speakers, and secondary officers, as well as drivers and passengers, can be situated far from the recording device. Dispatch speech from officers' radios can often be heard, sometimes directly overlapping with utterances from the primary interaction.
There is high variance in the clarity of speech and quality of footage across stops.
Test and Validation Sets. To create reliable test and validation sets, we hand-align existing transcribed utterances to timestamps and correct observed transcription errors. To facilitate analysis by race, we choose the test data to consist of 50%/50% stops of white and black drivers. We also choose each test file to be a stop by a distinct officer and withhold any other stops made by the same officers (whether as primary or secondary officers) from the training and validation sets. Thus, we also select officers who made a small number of stops to minimize unusable data. Hand-aligning data is extremely time-consuming, so we restrict test set stops to contain <60 utterances. We similarly ensure there is no overlap in primary officers between the validation and training set, withholding data as needed, though we less strictly enforce the separation of secondary officers, who speak less frequently.
We conduct evaluations over these aligned utterances, discarding un-transcribed speech.
Training Set Alignment. We build a training set by applying automated alignment tools and filtering poor-quality transcriptions.
We determine the start and end time for each transcribed utterance using the best of 5 alignment methods:
* Unaligned: 1sec granularity timestamps hand-written by transcribers with heuristics to correct for obvious typos and extending the start and end by .25sec
* MFA: Montreal Forced Aligner <cit.> with unaligned timestamps as starting points
* MFA chunked: Many utterances are too short for the aligner to process correctly. Thus, using the unaligned timestamps, we chunk consecutive utterances up to a total of 20sec. We run MFA to obtain word-level timestamps and then divide chunks back into separate utterances, with start and end times determined by the word-level timestamps
* W2V2: Robust Wav2Vec2 <cit.> for forced alignment <cit.>
* W2V2 chunked: Same as MFA chunked, but using Robust Wav2Vec2 for forced alignment instead of MFA.
For each utterance, we use off-the-shelf Whisper Large <cit.> and Robust Wav2Vec2 (W2V2) <cit.> to transcribe the audio segment identified by each alignment method and compare the output with the hand-written transcript. We choose as the final alignment the one for which min(WER_Whisper, WER_W2V2) is lowest. <Ref> reports training WER for each alignment method and the percent of the final training data aligned using each method.
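In code, the selection amounts to transcribing each candidate segment with both off-the-shelf models and keeping the candidate with the lowest of the two WERs. The sketch below assumes a jiwer-style wer(reference, hypothesis) helper and hypothetical transcribe_* callables wrapping the two models; it is illustrative rather than our exact tooling.

```python
from jiwer import wer  # word error rate between a reference and a hypothesis

def pick_best_alignment(audio, sr, reference_text, candidate_spans,
                        transcribe_whisper, transcribe_w2v2):
    """Return the (start, end) span (in seconds) whose audio best matches the transcript.

    candidate_spans: list of (start_s, end_s) produced by the five alignment methods.
    transcribe_*: hypothetical callables returning a transcript for an audio slice.
    """
    best_span, best_wer = None, float("inf")
    for start, end in candidate_spans:
        segment = audio[int(start * sr):int(end * sr)]
        min_wer = min(wer(reference_text, transcribe_whisper(segment)),
                      wer(reference_text, transcribe_w2v2(segment)))
        if min_wer < best_wer:
            best_span, best_wer = (start, end), min_wer
    return best_span, best_wer
```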
Training Set Filtering. Even after alignment, the training data is noisy, containing, for example, transcription errors, overlapping speech, and unfixed alignment errors. We again use min_WER = min(WER_Whisper, WER_W2V2) over the best alignment to filter out training instances that are likely incorrect. We experiment with four filtering criteria, indicating filtered training data size in brackets:
* Remove instances <0.5sec and >10sec [54,600]
* #1, and remove instances where min_WER > 50% [40,361]
* We define WER[no subs.] as WER where we do not count substitutions as errors. This metric is designed to retain instances where there may be errors in the Whisper/Wav2Vec2 outputs (e.g., WER is high) but likely not alignment errors (e.g., WER is driven by substitutions rather than insertions or deletions). We then filter according to #1, and keep only instances where (min_WER[no subs.] < 10% AND min_WER < 50%). [26,121]
* #1, and remove instances where min_WER > 10% [19,759]
We compare each criterion by using the filtered training data to fine-tune Robust Wav2Vec2 and examining performance over the validation set. Criteria #3 (WER=45.23) and #4 (WER=44.92) perform similarly and both outperform #1 (WER=49.34) and #2 (WER=48.75). We use #3 when training subsequent models, favoring the criterion that keeps more training data. <Ref> reports the final sizes for each data split.
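A minimal sketch of filtering criterion #3, assuming per-instance WER values have already been computed for both models, is:

```python
def keep_example(duration_s, wer_whisper, wer_w2v2,
                 wer_whisper_nosub, wer_w2v2_nosub):
    """Filtering criterion #3 as described above (a sketch of our bookkeeping).

    WER[no subs.] counts only insertions and deletions as errors, so a high WER
    driven purely by substitutions (likely ASR errors, not alignment errors) is kept.
    """
    if not (0.5 <= duration_s <= 10.0):                   # criterion #1: duration bounds
        return False
    min_wer = min(wer_whisper, wer_w2v2)
    min_wer_nosub = min(wer_whisper_nosub, wer_w2v2_nosub)
    return min_wer_nosub < 0.10 and min_wer < 0.50        # criterion #3
```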
§ ASR
We compare the performance of ASR models off-the-shelf and fine-tuned on the training data set constructed in <Ref>. We use two of the current best-performing and most popular architectures: Wav2Vec2 <cit.> and Whisper <cit.>.
For Wav2Vec2, we use the Robust model <cit.>, which was pre-trained using a self-supervised objective on Libri-Light,
CommonVoice, Switchboard, Fisher and fine-tuned for ASR on Switchboard. For Whisper, which was trained on 680,000 hours of multilingual and multitask data, we compare small, medium, and large <cit.>. Thus, both models are intended to perform well in a variety of domains and over noisy data.
We describe the model training parameters in detail, including the use of decoder-only training for Whisper large due to compute constraints.
§.§ Experimental Setup
To fine-tune Wav2Vec2, we use model default parameters with learning rate=1e-5, weight decay=0.005, warmup steps=500, batch size=32. We report performance with and without a 4-gram language model trained over the training data transcripts, implemented with KenLM and integrated with beam size=1500, lm weight=1.31 and word score=1.31.[lm weight and word score were tuned following the Bayesian optimization procedure in <cit.>.
We do no other hyperparameter tuning.]
For Whisper models without fine-tuning, we hard-code the task as transcription and the language as English. For fine-tuning, we use model default parameters with learning rate=1e-5, and warmup steps=500. Our experiments are conducted in a resource-constrained environment. Data protocols mandate that the footage be stored on a secure restricted-access server, which does not have sufficient GPU memory to fine-tune Whisper large, even with reduced batch size and precision. Thus, we experiment with freezing the encoder and just training the decoder as well as the inverse.
We use a batch size of 32 for Whisper small and 16 for medium and large.
Finally, as Whisper is prone to outputting repeated words and phrases, we remove any words from the model output if they occur >10 times.
As transcription norms vary between corpora and the body-camera gold transcripts contain bracketed terms like [unintelligible] and [laughter], we remove all terms in brackets and use the Whisper text normalizer on both the reference and model output before computing WER for all models (including Wav2Vec2 models).
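A rough sketch of this output clean-up (bracket removal plus the repeated-word cap), with the Whisper text normalizer applied separately afterwards, might look like the following; the helper name is ours.

```python
import re
from collections import Counter

def clean_hypothesis(text, max_repeats=10):
    """Drop bracketed tags and words repeated more than max_repeats times.

    A sketch of our post-processing; the Whisper text normalizer is applied
    afterwards to both reference and hypothesis (not shown here).
    """
    text = re.sub(r"\[[^\]]*\]", " ", text)   # e.g. [unintelligible], [laughter]
    counts = Counter(text.split())
    kept = [w for w in text.split() if counts[w] <= max_repeats]
    return " ".join(kept)
```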
For all models, we choose the checkpoint with the lower validation WER after 5 epochs and train using 1-2 A40 GPUs. Wav2Vec2 and Whisper small models trained in <5hrs; Whisper medium and large models trained in <16hrs.
§.§ Results
§.§.§ Overall ASR
<Ref> reports validation results (reserving the test set for final configurations) of freezing either the encoder or decoder when fine-tuning Whisper large and small. For Whisper small, decoder-only tuning performs almost comparably to tuning the entire model (28.12 vs. 26.07), whereas tuning only the encoder performs less well (34.30). For Whisper large, freezing the encoder or decoder provides advantages over no fine-tuning, though decoder-only tuning converged faster (2 vs. 5 epochs). Subsequently, we use decoder-only training for the fine-tuned Whisper large model.
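As an illustration, decoder-only fine-tuning can be set up by freezing the encoder parameters of a Hugging Face Whisper checkpoint; this is a sketch under the assumption that the transformers implementation is used, and the checkpoint name is illustrative.

```python
from transformers import WhisperForConditionalGeneration

def load_decoder_only_whisper(name="openai/whisper-large"):
    """Load Whisper and freeze the encoder so only the decoder is fine-tuned.

    A minimal sketch of the resource-constrained setup described above; the
    checkpoint name and freezing granularity are illustrative assumptions.
    """
    model = WhisperForConditionalGeneration.from_pretrained(name)
    for param in model.model.encoder.parameters():   # freeze all encoder weights
        param.requires_grad = False
    return model
```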
<Ref> reports the overall WER and CER for each model. Whisper large with fine-tuning performs the best overall. Fine-tuning improves performance by 3-11 points across models.
As Whisper is a new model with yet-limited work on understanding model performance and fine-tuning effects, we highlight a few examples from the data in <Ref>.
In the original transcripts, transcribers mark segments they are unable to decipher as [unintelligible]. While we removed all bracketed text when computing WER rate for fair comparison of off-the-shelf and fine-tuned models, examining Whisper outputs reveals that the fine-tuned model sometimes outputs [unintelligible]. In some instances, the predicted [unintelligible] exactly aligns with hand-transcription. However, we also find examples where Whisper hallucinates transcriptions for difficult content, whereas Wav2Vec2 more often does not produce output. After fine-tuning, Whisper hallucinations are particularly difficult to identify without referring back to the audio, as they often appear to be plausible statements in an interaction.
§.§.§ Performance by officer/driver, gender, and race
We examine model performance over sub-populations of the test data, specifically distinguishing between officers and community members, black and white people, and men and women. As there is high variance in model performance depending on the quality of footage from each stop, we use a mixed effects linear regression model. Each data point in the regression is a single utterance. The dependent variable is model WER for the utterance. Role (officer or community member), race, gender are fixed effects, and the specific stop is a random effect.
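One way to set up such a regression is sketched below with statsmodels (not necessarily the package we used); the column names and the hypothetical CSV file are placeholders for the per-utterance table described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-utterance table with columns: wer, role, race, gender, stop_id.
df = pd.read_csv("utterance_wer.csv")

# Role/race/gender enter as fixed effects; the specific stop is the random-effect group.
model = smf.mixedlm("wer ~ role + race + gender", data=df, groups=df["stop_id"])
result = model.fit()
print(result.summary())   # coefficients analogous to those reported in the table
```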
<Ref> reports the learned regression coefficients and WER by sub-population for the best performing Wav2Vec2 and Whisper models, off-the-shelf and fine-tuned. ASR performance for officers is significantly better than performance for community members by a wide margin. Even the best-performing models perform poorly at transcribing community member speech.
Community members are situated further from the camera and typically speak very few short utterances.
Even hand-transcribers often mark their speech as unintelligible, and training a high-performing model on this type of data may be infeasible.
This result suggests that ASR could be an extremely useful tool for police accountability with small potential privacy-reducing impact on community members.
In contrast to prior work, we do not find significant differences by race or by gender <cit.>. Subdividing the test data leads to small data set sizes, which could be skewed by a single outlying stop. This potential effect is greater when looking at race and gender than looking at role, since a low-quality video would decrease ASR performance for both the officer and the community member, whereas in examining race and gender, we are comparing across footage of different stops.
<Ref> does show
WER is lower for white than black officers for most models.
§ OFFICER SPEECH DETECTION
In <Ref>, we use hand-aligned evaluation data, but in practice, we do not know segmentation or speaker identities in new footage.
As our goal is police accountability, we develop two models to identify segments of speech by primary officers (e.g., officers wearing the camera) and evaluate them using the best-performing ASR model over the detected speech.
§.§ Methodology
Training Data Processing
We adapt the training set introduced in <Ref>. We remove any instances that do not contain active speech using an off-the-shelf acoustic scene understanding Mobile-Net <cit.> architecture trained on AudioSet <cit.> (AudioSet category 0 <0.3).
We divide remaining samples into 250ms chunks with a 100ms hop and represent each 250ms segment as a mel-spectrogram with 64 mel-filters, computed with a hop of 10ms and a window of 25ms. We create a balanced training corpus by randomly sampling 150K chunks each of officer/non-officer speech.
Since officers are closer to body-camera microphones (near-field) than community members (far-field), we use volume-based data augmentation.
As the raw training data contains non-officer speech that was not transcribed (e.g., dispatch speech), we also augment the training set.
We divide training files into 250ms chunks with a 100ms hop, keep chunks with a speech score (from the Mobile-Net model) ≥ 0.5, and merge consecutive chunks that occur within 1sec of each other. We add all new segments (ones that were not transcribed) to the training data as instances of not-officer-speech and then filter and sample the data as described above.
We use these data to train models to classify 250ms chunks as officer or not-officer speech (with cross-entropy loss).
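A rough sketch of this feature extraction (250ms chunks with a 100ms hop, each represented as a 64-bin log-mel spectrogram) is given below using librosa; the 16 kHz sample rate is an assumption.

```python
import librosa
import numpy as np

def chunk_logmels(wav, sr=16000, chunk_s=0.25, hop_s=0.10):
    """Slice a waveform into 250ms chunks (100ms hop), each a 64-bin log-mel spectrogram."""
    chunk, hop = int(chunk_s * sr), int(hop_s * sr)
    n_fft = win = int(0.025 * sr)                    # 25ms window, 10ms hop for the STFT
    feats = []
    for start in range(0, max(len(wav) - chunk, 0) + 1, hop):
        mel = librosa.feature.melspectrogram(
            y=wav[start:start + chunk], sr=sr, n_fft=n_fft,
            hop_length=int(0.010 * sr), win_length=win, n_mels=64)
        feats.append(librosa.power_to_db(mel))
    return np.stack(feats)                           # (n_chunks, 64, frames)
```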
In-domain classifier
We train a custom model from scratch, which contains 7 convolutional layers with 128 3x3 filters in every layer and ReLU activation followed by max-pooling of 2. The output of the last layer is passed to a linear head of 1024 neurons, followed by softmax activation, and the posterior probability is taken as the officer score for that instance.
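A PyTorch sketch of such a classifier is shown below; where the description is ambiguous (exact pooling placement and head shape), the choices are our assumptions.

```python
import torch
import torch.nn as nn

class OfficerSpeechCNN(nn.Module):
    """Sketch of the in-domain officer-speech classifier described above."""

    def __init__(self, n_classes=2):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(7):                                   # 7 conv layers, 128 3x3 filters each
            layers += [nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU()]
            if i < 4:                                        # max-pool of 2 applied only while the
                layers.append(nn.MaxPool2d(2))               # 64 x ~25 mel input stays large enough
            in_ch = 128
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, 1024), nn.ReLU(),
                                  nn.Linear(1024, n_classes))

    def forward(self, x):                                    # x: (batch, 1, n_mels, frames)
        return self.head(self.features(x))                   # logits for cross-entropy training

    def officer_score(self, x):
        return torch.softmax(self.forward(x), dim=-1)[:, 1]  # posterior "officer" probability
```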
Universal d-vectors We extract d-vectors as features from an off-the-shelf model trained over the VoxCeleb dataset for speaker recognition <cit.> and train an officer speech classifier, with the same linear-head architecture as the in-domain model.
Inference
We predict voice activity detection (using the same
Mobile-Net model) and officer scores for 250ms chunks with
100ms hops. We consider a chunk to be officer speech if its voice activity score is >t_VAD and its officer score is >t_officer, and we merge positive chunks if they occur within t_smooth sec of each other.[{t_VAD,t_smooth, t_officer} are hyperparameters chosen via 20-iteration Bayesian optimization over the validation set with range [0,1] for t_VAD/t_officer and [0.25,2] for t_smooth. They are {0.93,1.76,0.16} for d-vector,
{0.4,0.67,1.2} for in-domain, and
{0.52,0.51,1.1} for in-domain [aug.]]
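The thresholding and smoothing step can be sketched as follows; the default thresholds here are placeholders, with the tuned values given in the footnote above.

```python
def detect_officer_segments(times, vad_scores, officer_scores,
                            t_vad=0.5, t_officer=0.5, t_smooth=1.0, chunk_s=0.25):
    """Threshold per-chunk scores and merge nearby positives into segments.

    times: chunk start times in seconds, spaced by the 100ms hop.
    """
    segments = []
    for t, v, o in zip(times, vad_scores, officer_scores):
        if v > t_vad and o > t_officer:
            if segments and t - segments[-1][1] <= t_smooth:
                segments[-1][1] = t + chunk_s          # extend the previous segment
            else:
                segments.append([t, t + chunk_s])      # start a new segment
    return [(s, e) for s, e in segments]
```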
For evaluation, we concatenate the ASR model output for all identified segments and compute WER against similarly concatenated hand-aligned officer segments.
§.§ Results
<Ref> reports results for the best performing ASR model over the automatically detected officer speech segments. There is a substantial performance decrease between the hand-aligned segments and the detected segments.
The d-vector model performs particularly poorly, likely due to the large domain gap between VoxCeleb and police traffic stops.
Augmenting the training data does substantially improve performance (49.47 to 31.52 WER), though performance still may not be sufficient for applications.
In reviewing model outputs, we identify that models often misidentify other near-field speech as officer speech, and the presence of multiple officers complicates the task, as speech by secondary officers is sometimes scored closer to community member speech.
We also identified several annotation errors, such as segments attributed to the wrong person and inconsistencies in which speech was transcribed, suggesting these metrics may under-estimate performance.
These errors could be removed in hand-aligned test data, but their presence in training data is still likely to degrade model performance, and manually re-cleaning training data (as opposed to automatic augmentation) would involve a substantial undertaking that may not generalize to other settings.
§ DISCUSSION
We find pre-trained ASR models achieve low WER over police officer speech, particularly when fine-tuned on automatically cleaned training data.
Whisper specifically achieves low WER and even learns to mimic transcribers in marking segments as unintelligible, but can still fail more dramatically over difficult segments than Wav2Vec2.
While prior work has identified ASR as a limitation in noisy speech domains <cit.>, we instead find that officer speech detection is a significant challenge in this setting.
There are potential avenues for improvement, such as explicitly modeling dispatch and secondary officer speech or using text-based classifiers over ASR outputs <cit.>.
Further, although WER is worse over detected than hand-aligned officer speech, WER is an imperfect proxy metric for tasks actually of interest, such as determining officers' adherence to procedure.
As many errors are driven by misidentified or short utterances, performance may still be sufficient for tasks like dialog act classification <cit.>.
While we focus on policing, our work has the potential to inform adapting ASR models to other noisy multispeaker domains as well.
Limitations and Ethical Considerations Our data consists of traffic stops from one police department.
We cannot predict if results generalize to data from other departments, time periods or types of police-community interactions.
Also, although all work abides by IRB and data sharing protocols, it has high misuse potential. Models could be used for purposes other than police accountability, such as community surveillance. Because models were trained on private data and pending mitigation of potential misuse, we are not releasing them at this time.
Acknowledgements
This research was supported by the John D. and Catherine T. MacArthur Foundation (G-1512-150464 and G-1805-153038). We thank Stanford Data Science for fellowship funding as well as the city and the police department whose provision of camera footage enabled this research. This work was administered and supported by Stanford SPARQ, a center that builds research-driven partnerships to combat bias, reduce disparities, and drive culture change. We thank Martijn Bartelds, Dora Demszky, Rebecca Hetey, and Tolulope Ogunremi for helpful feedback.
|
http://arxiv.org/abs/2306.03864v1
|
20230606170653
|
Reinterpreting the Polluted White Dwarf SDSS J122859.93+104032.9 in Light of Thermohaline Mixing Models: More Polluting Material from a Larger Orbiting Solid Body
|
[
"Arianna Dwomoh",
"Evan B. Bauer"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"astro-ph.EP"
] |
Arianna M. Dwomoh ([email protected]; ORCID 0000-0002-0800-7894)
Duke University, 2138 Campus Drive, Durham, NC 27708, USA
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA

Evan B. Bauer (ORCID 0000-0002-4791-6724)
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
The polluted white dwarf (WD) system SDSS J122859.93+104032.9 (SDSS J1228) shows variable emission features interpreted as originating from a solid core fragment held together against tidal forces by its own internal strength, orbiting within its surrounding debris disk. Estimating the size of this orbiting solid body requires modeling the accretion rate of the polluting material that is observed mixing into the WD surface. That material is supplied via sublimation from the surface of the orbiting solid body. The sublimation rate can be estimated as a simple function of the surface area of the solid body and the incident flux from the nearby hot WD. On the other hand, estimating the accretion rate requires detailed modeling of the surface structure and mixing in the accreting WD. In this work, we present MESA WD models for SDSS J1228 that account for thermohaline instability and mixing in addition to heavy element sedimentation to accurately constrain the sublimation and accretion rate necessary to supply the observed pollution. We derive a total accretion rate of Ṁ_ acc=1.8× 10^11 g s^-1, more than two orders of magnitude higher than the Ṁ_ acc=5.6× 10^8 g s^-1 estimate obtained in earlier efforts. The larger mass accretion rate implies that the minimum estimated radius of the orbiting solid body is r_min = 72 km, which, although significantly larger than prior estimates, still lies below the upper bound (a few hundred km) above which the internal strength could no longer withstand tidal forces from the gravity of the WD.
§ INTRODUCTION
A white dwarf (WD) is left behind when low- and intermediate-mass stars (M ≲ 8 M_⊙) reach the last stages of their evolution <cit.>. WDs are very compact and have strong gravitational fields, which can tidally disrupt planetary bodies when they approach within ∼ 1 R_⊙ of a WD. They then form an accreting debris disk centered on the WD, and the signatures of accretion can furnish surprising clues about the objects that were made up of that shredded material <cit.>.
Because elements heavier than helium quickly sink below the WD surface <cit.>, heavy elements observed at the surface of a WD are known as pollution because they must be continuously supplied by an external source to be observable. Approximately 30% of WDs show evidence of pollution <cit.>, and a significant fraction of those show infrared excesses corresponding to a debris disk <cit.>. Recent discoveries, such as transits from orbiting debris around polluted WDs <cit.>, have corroborated the emerging theoretical picture of polluted WDs supplied by disrupted planetary bodies.
Comprehensive modeling of the chemical mixing processes at the surface of WDs allows for inferences of compositions and accretion rates supplied by disrupted planetary bodies <cit.>. Such models allow us to gain insight into the chemical composition of planetary bodies, which can be utilized to study the planetary systems that once surrounded the star <cit.>.
In the present work, we examine how accreted elements are mixed into the surface layers of WD models, and compare to observed spectra for the WD SDSS J122859.93+104032.9 (hereafter SDSS J1228). SDSS J1228 is a WD that exhibits signatures of both photospheric pollution and infrared excess from a surrounding debris disk <cit.>. The system also exhibits Ca II emission features indicating a planetesimal orbiting the WD, embedded within the debris disk inside its Roche radius <cit.>. This implies that the planetesimal is held together by its own internal strength, preventing it from being tidally disrupted by the WD's gravity. This planetesimal may be the core of a larger differentiated object that was originally stripped of its crust and mantle (e.g., ).
The accretion of heavy elements onto an atmosphere dominated by hydrogen can induce thermohaline instability that leads to extra mixing of accreted material <cit.>. It mixes on a very short timescale compared to particle diffusion, and as the surface abundances reach a steady state, mixing of accreted material reaches greater depths <cit.>. Recent work suggests that in hydrogen-rich WDs with thin or no surface convection zones (T_ eff≳ 12,000 K), the accretion rates needed in order to reproduce observed element abundances exceed those calculated without accounting for thermohaline mixing by up to three orders of magnitude <cit.>. For SDSS J1228 in particular, thermohaline mixing is relevant because the WD surface temperature is high enough that it should have no surface convection zone. This means that accreted heavy elements concentrate at the surface, which in turn excites the thermohaline instability.
Here we use MESA to create models of the WD surface of SDSS J1228 that account for thermohaline mixing. We find that previous inferences of its accretion rate are orders of magnitude too small. This implies that the solid planetary body supplying the polluting material is significantly larger than previously inferred, because its size influences the total sublimation rate responsible for supplying the accreting material to the WD.
This paper is structured as follows. Sec. <ref> presents the observed properties of SDSS J1228 that match our models. Sec. <ref> describes the MESA calculations and the results obtained from the new models. In Sec. <ref>, we conclude by stating our findings and comparing them to previous work that did not account for thermohaline mixing.
§ OBSERVATIONS OF SDSS J1228
We build models consistent with <cit.> for the WD surface structure and chemical elements observed at the surface, which are based on the observations of <cit.> and <cit.> to constrain its mass, temperature, and composition. This information is summarized in Tables <ref> and <ref>.
Table 1: Measured elemental abundances for SDSS J1228 (from <cit.>).

Element   log(n_i/n_H)   Mass Fraction
C         -7.50 ± 0.2    3.79 × 10^-7
O         -4.55 ± 0.2    4.51 × 10^-4
Mg        -5.10 ± 0.2    1.91 × 10^-4
Al        -5.75 ± 0.2    4.80 × 10^-5
Si        -5.20 ± 0.2    1.77 × 10^-4
Ca        -5.70 ± 0.2    7.98 × 10^-5
Fe        -5.20 ± 0.3    3.53 × 10^-4
Table 2: Adopted stellar properties of SDSS J1228 (from <cit.>).

T_eff [K]     M_WD [M_⊙]      R_WD [R_⊙]   log(g / cm s^-2)
20713 ± 281   0.705 ± 0.050   0.01169      8.150 ± 0.04
§ THEORETICAL MODELS
In this section we describe the computational models used to determine WD accretion rates, accounting for thermohaline mixing. In order to infer the interior composition and structure of the planetesimal, we build MESA models of accreting WDs representative of SDSS J1228.
§.§ MESA Models of SDSS J1228
We employ the open-source stellar evolution code MESA, version r22.05.1 <cit.>.
The MESA equation of state (EOS) is a blend of the OPAL <cit.>, SCVH
<cit.>, FreeEOS <cit.>, HELM <cit.>,
PC <cit.>, and Skye <cit.> EOSes.
Radiative opacities are primarily from OPAL <cit.>. Electron conduction opacities are from
<cit.> and <cit.>.
Thermal neutrino loss rates are from <cit.>.
A repository of work directories containing MESA input files needed to reproduce all of the models presented in this work is available on Zenodo: doi:10.5281/zenodo.7996400.
§.§.§ Template Model
In order to create a MESA model representative of SDSS J1228, we began by building a 0.705 M_⊙ WD model with the test case template in MESA, and then cooled the WD to the observed temperature of 20,713 K <cit.>. Element diffusion was enabled from the beginning of the WD cooling track, to allow the WD atmosphere to stratify and the surface composition of the model to develop to a pure hydrogen composition before accretion begins. We used this cooled template WD as a starting point for all subsequent modeling.
In order to calculate the accretion rates of each element for models without thermohaline mixing, we first calculate the diffusion timescales and observed mass fractions in the photosphere. Element diffusion velocities and composition changes in MESA are calculated using an approach based on <cit.> with diffusion coefficients based on <cit.> and <cit.> (see for more details). Following the approach of <cit.>, the diffusion timescale for element species i is
τ_diff,i = M_phot / (4π r^2 ρ v_diff,i) .
We use the MESA WD model to calculate the density (ρ), surface mass (M_ phot), radius (r), and downward sedimentation velocity (v_ diff, i). <cit.> employed M_ cvz (mass contained in the fully-mixed surface convection zone) as opposed to M_ phot for calculating the diffusion timescale, but our WD model has no surface convection zone due to its high temperature, so we evaluate the mass and diffusion at the photosphere of the model. The mass fraction of an accreted element that should be reached in diffusive equilibrium is
X_ eq, i = A_i × 10^log(n_i/n_H)
where A_i is the atomic weight for each element and n_i is the observed element number density. All elements considered here are listed in Table <ref>.
Now we can find the accretion rate for each element (assuming no mixing other than element diffusion is present), which we compare to the thermohaline models later on. Solving Equation (4) of <cit.> for total accretion rate
Ṁ_i = X_ eq,i M_ phot/τ_ diff,i ,
we use the previously calculated surface mass, mass fraction, and diffusion timescale to find Ṁ_i for each element. Our results are shown in Table <ref>. When comparing our calculated accretion rates to those in table 4 from <cit.>, we find that our results are within a factor of 2.
Our total accretion rate for models without thermohaline mixing is 3.9 × 10^8 g s^-1, which is slightly smaller than the value of 5.6 × 10^8 g s^-1 derived by <cit.> but well within observational uncertainties.
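As a simple cross-check of this bookkeeping, the equilibrium mass fractions X_eq,i and the total diffusion-only rate can be reproduced from the tabulated values (a sketch using the numbers from Tables 1 and 3):

```python
# log(n_i/n_H) from Table 1 and per-element diffusion-only rates from Table 3
A = {"C": 12, "O": 16, "Mg": 24, "Al": 27, "Si": 28, "Ca": 40, "Fe": 56}
log_n = {"C": -7.50, "O": -4.55, "Mg": -5.10, "Al": -5.75,
         "Si": -5.20, "Ca": -5.70, "Fe": -5.20}
mdot = {"C": 3.12e4, "O": 1.58e8, "Mg": 3.21e7, "Al": 8.51e6,
        "Si": 3.42e7, "Ca": 2.26e7, "Fe": 1.37e8}        # g/s, Table 3

# Equilibrium mass fraction of each accreted element, X_eq,i = A_i * 10**log(n_i/n_H)
X_eq = {el: A[el] * 10 ** log_n[el] for el in A}          # e.g. C -> ~3.8e-7

# Total diffusion-only accretion rate, cf. the ~3.9e8 g/s quoted above
print(f"total Mdot = {sum(mdot.values()):.2e} g/s")
```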
Table 3: Diffusion timescales and accretion rates calculated assuming only element diffusion is present at the surface of our MESA models.

Element   log(τ_diff / yr)   Ṁ_diff [g s^-1]
C         -2.04              3.12 × 10^4
O         -2.67              1.58 × 10^8
Mg        -2.35              3.21 × 10^7
Al        -2.38              8.51 × 10^6
Si        -2.42              3.42 × 10^7
Ca        -2.58              2.26 × 10^7
Fe        -2.72              1.37 × 10^8
§.§.§ Steady-State Model
In order to verify that MESA models reach the expected steady state after accreting for many diffusion timescales in the absence of any mixing other than element diffusion, we run a MESA model using the accretion rates calculated using Equation (<ref>). This model allows us to verify that in steady state the surface mass fractions match the observed surface composition for each element. In the left-hand panel of Figure <ref>, the steady state matches observed mass fractions when only diffusion is accounted for. We define a “match” to be when the surface mass fractions produced by MESA are within 10% of the <cit.> observations.
As a further test, we then run a model with the same accretion rate with thermohaline mixing turned on, which is shown in the right panel of Figure <ref>. When thermohaline mixing is included in the model, the mass fractions at the surface of the model decrease by roughly two orders of magnitude for the same accretion rate. This shows that thermohaline mixing dramatically alters the observed surface abundance by mixing accreted material further into the WD. It also provides an initial estimate for how much greater the modeled accretion rate will need to be to match the observed surface abundances.
§.§.§ Thermohaline Mixing
Thermohaline instability can drive mixing in fluids that have stable temperature stratification but unstable composition gradients.
For example, thermohaline mixing occurs in the oceans in regions where warm saltwater is above a layer of cool, less salty water, forming downward sinking “fingers" when the saltwater begins to cool off <cit.>. Stars accreting planetary debris can experience a similar instability due to mixing heavy elements into surfaces composed of primarily hydrogen or helium <cit.>.
Molecular weight gradients can cause fluid instability in stars, and for some polluted WDs, drive the thermohaline instability <cit.>.
The strength of mixing due to thermohaline instability can be understood in terms of a diffusion coefficient D_ th, which scales approximately as
D_th ∝ κ_T ∇_μ / (∇_T - ∇_ad) ,
and instability is present when ∇_T - ∇_ ad < ∇_μ < 0
<cit.>.
In the equation above, ∇_μ, ∇_T, ∇_ ad and κ_T represent the mean molecular weight gradient, temperature gradient in the fluid, adiabatic temperature gradient, and thermal diffusivity, respectively.
We build MESA models that include thermohaline mixing according to the prescription of <cit.>.
[This prescription employs the notion of “parasitic saturation” to enable estimating the total amount of mixing in 1D models by calculating when the mixing will saturate due to secondary shear instabilities in the fingers. It is therefore somewhat more sophisticated than the simplified model presented in Eqn (<ref>), but it has been shown to produce net mixing that scales similarly in the polluted WD context <cit.>. The prescription of <cit.> has been extensively validated against 3D simulations in the hydrodynamical regime, though 3D magnetohydrodynamical simulations have shown that thermohaline mixing could be further enhanced in the presence of magnetic fields <cit.>.]
When running models that include both thermohaline mixing and element diffusion, we verify that diffusion no longer has a noticeable effect on the predicted surface abundances. Because thermohaline mixing dominates when both processes are present, we ignore diffusion for our subsequent models, and focus on models that include only thermohaline mixing to find the accretion rate necessary to match observations.
Increasing the accretion rate leads to greater thermohaline instability and more mixing, so the amount of accretion needed to match the particular amount of pollution is not a simple linear function. We tune our models to match observations by iteratively adjusting the accretion rate and checking how the steady-state surface abundances compare to observed values.
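Schematically, this tuning is a one-dimensional root find in log Ṁ; the sketch below assumes a hypothetical wrapper that runs a MESA model at a given accretion rate and returns the offset of the steady-state surface abundance from the observed value (the actual runs are driven by the inlists in the Zenodo repository).

```python
def tune_accretion_rate(surface_x_offset, log_mdot_lo=9.0, log_mdot_hi=13.0, tol=0.02):
    """Bisect on log10(Mdot) until the steady-state surface abundance matches observations.

    surface_x_offset(log_mdot) -> log10(X_surface / X_observed) for a chosen element,
    obtained from a MESA run with thermohaline mixing (hypothetical wrapper).
    """
    lo, hi = log_mdot_lo, log_mdot_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if surface_x_offset(mid) < 0.0:   # surface still under-polluted: raise Mdot
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```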
Our results after tuning Ṁ to match SDSS J1228 are shown in Figure <ref>. Figure <ref> shows the interior abundance and mixing profile for the same MESA model, along with the composition profile from the model without thermohaline mixing from Section <ref> for comparison. The presence of thermohaline mixing, as compared to diffusion, results in accreted material being mixed much deeper into the WD on a faster timescale. Thus, the accretion rate must be higher in order to continue matching the observed surface abundances.
§ RESULTS AND CONCLUSION
§.§ Accretion Rate and Composition
When including thermohaline mixing, our MESA modeling of SDSS J1228 finds that the best match for the accretion rate is 1.8 × 10^11 g s^-1 (2.8 × 10^-15 M_⊙ yr^-1), more than two orders of magnitude higher than previously inferred for this object. Table <ref> shows the changes in total accretion rate and other inferred properties when comparing our MESA models with and without thermohaline mixing.
We also find that the relative accreted mass fractions of material in the WD photosphere are different when accounting for thermohaline mixing. Notably, the accreted fractions of ^16O and ^56Fe are somewhat lower than previously inferred, and the accreted fractions of ^24Mg and ^28Si are significantly higher (see Table <ref>).
Table 4: Comparison of accreted material and parent-body properties in SDSS J1228 inferred from models with and without thermohaline mixing.

Quantity              With Thermohaline Mixing   Without Thermohaline Mixing
Ṁ_acc [g s^-1]        1.8 × 10^11                3.9 × 10^8
r_min [km]            72                         4
Lifetime [yr]         1500                       85
Total mass [g]        1.3 × 10^22                2.1 × 10^18
^16O mass fraction    0.347                      0.402
^24Mg mass fraction   0.151                      0.082
^27Al mass fraction   0.037                      0.022
^28Si mass fraction   0.136                      0.087
^40Ca mass fraction   0.061                      0.058
^56Fe mass fraction   0.272                      0.349
§.§ Size of the Parent Body
<cit.> have argued that the pollution and gas observed in SDSS J1228 are supplied by a solid iron-rich body with internal strength orbiting within the Roche radius of the WD. Based on the density of iron and internal strength of iron meteorite samples, they estimate an upper limit of a few hundred km for the size r of the solid object before tidal forces from the WD gravity would overcome the solid body's internal strength and coherence. They estimate a lower limit for the size r by arguing that the observed polluting material is supplied by evaporation of the solid body due to irradiation from the nearby WD. The surface area of the solid body sets the amount of irradiation energy from the WD incident on the body, so the evaporation rate scales as Ṁ∝ r^2, i.e. minimum inferred size scales as r_ min∝√(Ṁ). Based on the calculated accretion rate of Ṁ = 5.6 × 10^8 g s^-1 from <cit.>, <cit.> infer an approximate minimum size of r_ min≈ 4 km for the solid body orbiting SDSS J1228. With this size and a density comparable to iron (≈ 8 g cm^-3), the total lifetime of the solid body would be just ≈ 85 yr before its entire mass evaporates away.
We reassess this lower limit with our calculated accretion rate of 1.8 × 10^11 g s^-1 accounting for thermohaline mixing. Due to this higher accretion rate, the minimum size needed to supply this accretion from evaporation is r_ min≈ 72 km, and the corresponding lifetime is ≈ 1500 yr. This lower limit still lies within the upper limit of a few hundred km from considering internal strength vs tidal forces, but significantly narrows the overall range of possible sizes for the solid body. It also points to a much more plausible, longer lifespan for the currently observed phase of the evaporating solid body. This longer timescale (1500 yr instead of just 85 yr) before the orbiting mass is sublimated makes it much more likely that human observations could catch a polluted WD in a state like that observed for SDSS J1228.
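The rescaling itself is simple arithmetic: with the evaporation rate scaling as r^2 and the lifetime as mass/Ṁ ∝ √(Ṁ), the new estimates follow directly from the old ones (a quick numerical check, ours):

```python
import numpy as np

mdot_old, mdot_new = 5.6e8, 1.8e11       # g/s: previous vs. thermohaline-corrected rate
r_old, life_old = 4.0, 85.0              # km, yr: previous estimates

scale = np.sqrt(mdot_new / mdot_old)     # evaporation rate ~ r^2  =>  r_min ~ sqrt(Mdot)
print(f"r_min ≈ {r_old * scale:.0f} km")        # ~72 km
print(f"lifetime ≈ {life_old * scale:.0f} yr")  # mass ~ r^3, so lifetime ~ sqrt(Mdot): ~1500 yr
```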
§.§ Conclusion
By incorporating an updated physical model for the surface of the accreting WD in SDSS J1228, our work has yielded a better understanding of the planetesimal orbiting SDSS J1228. Our models lead to an inferred accretion rate of 1.8 × 10^11 g s^-1, more than two orders of magnitude higher than previously inferred. This accretion rate in turn implies a much larger minimum size and mass of the inferred solid core fragment orbiting in the debris disk of SDSS J1228. Future studies of warm polluted DA WDs, such as the one observed in SDSS J1228, should account for thermohaline mixing when making model-based inferences about accretion properties.
Acknowledgments: We thank the anonymous referee for constructive feedback on an earlier draft of this work. We would like to thank Jonathan McDowell and Matthew Ashby for serving as amazing mentors through the SAO REU program, and for their continued support and assistance during the paper-writing process.
The SAO REU program is funded in part by the National Science Foundation REU and Department of Defense ASSURE programs under NSF Grant no. AST-2050813, and by the Smithsonian Institution.
Astropy <cit.>, Matplotlib <cit.>, Modules for Experiments in Stellar Astrophysics (MESA, ).
|
http://arxiv.org/abs/2306.03077v1
|
20230605175313
|
Modified metrics of acoustic black holes: A review
|
[
"M. A. Anacleto",
"F. A. Brito",
"E. Passos"
] |
hep-th
|
[
"hep-th"
] |
[email protected]
Departamento de Física, Universidade Federal de Campina Grande
Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
[email protected]
Departamento de Física, Universidade Federal de Campina Grande
Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
Departamento de Física, Universidade Federal da Paraíba,
Caixa Postal 5008, 58051-970 João Pessoa, Paraíba, Brazil
[email protected]
Departamento de Física, Universidade Federal de Campina Grande
Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
In this brief review, we will address acoustic black holes arising from quantum field theory in the Lorentz-violating
and non-commutative background.
Thus, we consider canonical acoustic black holes with effective metrics for the purpose of investigating Hawking radiation and entropy.
We show that due to the generalized uncertainty principle and the modified dispersion relation, the Hawking temperature is regularized, that is, free from the singularity when the horizon radius goes to zero.
In addition, we also find logarithmic corrections in the leading order for entropy.
§ INTRODUCTION
Gravitational analogue models are topics of great interest and have been widely studied in the literature due to the possibility of detecting Hawking radiation in tabletop experiments.
In particular, acoustic black holes were proposed by Unruh in 1981 <cit.> for the purpose of exploring Hawking radiation, as well as investigating other issues to understand quantum gravity effects.
It is well known that an acoustic black hole can be generated when fluid motion reaches a speed greater than the local speed of sound.
These objects can exhibit properties similar to the laws of thermodynamics of gravitational black holes, such as a Hawking-like temperature and entropy (entanglement entropy).
Besides, it has been conjectured that phenomena that are observed in black holes may also occur in acoustic black holes.
Furthermore, with the detection of gravitational waves <cit.> and the capture of the image of a supermassive black hole <cit.>, a window of possibilities in the physics of black holes and also in analogous models was opened.
Acoustic black holes have applications in various branches of physics, namely high energy physics,
condensed matter, and quantum physics <cit.>.
On the experimental side, Hawking radiation has been successfully measured in the works reported
in <cit.>.
Measurements have also been carried out in other branches of physics <cit.>.
However, in the physics of acoustic black holes, the first experimental measurement of Hawking radiation was performed in a Bose-Einstein condensate <cit.>.
In a recent paper, acoustic black holes embedded in a curved background were constructed by applying relativistic Gross-Pitaevskii and Yang-Mills theories <cit.>.
In <cit.>, an acoustic black hole of a D3-black brane was proposed.
On the other hand, relativistic acoustic black holes in Minkowski spacetime were generated
from the Abelian Higgs model <cit.>.
Also, relativistic acoustic black holes have emerged from other physical models <cit.>.
In addition, these objects have been used to analyze various physical phenomena, such as superradiance <cit.>, entropy <cit.>, quasinormal modes <cit.>, and as well as, in other models <cit.>.
Moreover, in <cit.>, was reported that there is a thermodynamic-like description for acoustic black holes in two dimensions. In this sense, an analogous form of Bekenstein-Hawking entropy (understood as an entanglement entropy) was addressed in <cit.> by analyzing the Bose-Einstein condensate system.
In addition, the dependence of entropy on the area of the event horizon of the acoustic black hole was explored in <cit.>.
Also, in <cit.>, the entanglement entropy of an acoustic black hole was examined.
In this brief review, we are interested in investigating modified acoustic black holes that have been constructed from field theory by considering the Abelian Higgs model in the Lorentz-violating <cit.>
and noncommutative <cit.> background.
To this end, we will explore canonical acoustic black holes with modified metrics to examine the effect of Lorentz symmetry breaking and noncommutativity on Hawking radiation and entropy.
In addition, by applying the generalized uncertainty principle and a modified dispersion relation, we show that the Hawking temperature singularity disappears when the horizon radius vanishes.
Besides, we also find logarithmic correction terms for entropy.
Recently, the stability of the canonical acoustic black hole in the presence of noncommutative effects and minimum length has been addressed by us in <cit.>.
Thus, it was verified that the non-commutativity and the minimum length act as regulators in the Hawking temperature, that is, the singularity is removed.
Also, it was shown that for a certain minimum radius the canonical acoustic black hole presents stability.
This brief review is organized as follows. In Sec. <ref>, we briefly review the steps to find the relativistic acoustic black hole metrics.
In Sec. <ref>, we briefly review the steps to find the relativistic acoustic black hole modified metrics.
In Sec. <ref>, we will focus on canonical acoustic black holes with effective metrics to compute the Hawking temperature and entropy.
In Sec. <ref>, we will introduce quantum corrections via the generalized uncertainty principle and the modified dispersion relation in the calculation of Hawking temperature and entropy.
Finally in Sec. <ref> we present our final considerations.
§ ACOUSTIC BLACK HOLE
In this section we review the steps to obtain the relativistic acoustic metric from the Lagrangian density of the charged scalar field. Here we will follow the procedure adopted in <cit.>.
§.§ Relativistic Acoustic Metric
In order to determine the relativistic acoustic metric, we start by considering the following Lagrangian density:
L=∂_μϕ^∗∂^μϕ+ m^2|ϕ|^2-b|ϕ|^4.
Now, we decompose the scalar field as ϕ = √(ρ(x, t))exp(iS(x, t)), such that
L = ρ∂_μS∂^μS + m^2ρ - bρ^2
+ρ/√(ρ)(∂_μ∂^μ)√(ρ).
Moreover, from the above Lagrangian, we find the equations of motion for S and ρ given respectively by
∂_μ(ρ∂^μS)=0,
and
1/√(ρ)∂_μ∂^μ√(ρ)
+∂_μS∂^μS+m^2-2bρ=0,
where Eq. (<ref>) is the continuity equation and Eq. (<ref>) describes a hydrodynamical fluid; the term 1/√(ρ)∂_μ∂^μ√(ρ), called the quantum potential, can be neglected in the hydrodynamic region.
Now, by performing the following perturbations on equations of motion (<ref>) and (<ref>):
ρ=ρ_0+ ϵρ_1 + 𝒪(ϵ^2),
S=S_0+ϵψ + 𝒪(ϵ^2).
We obtain
∂ _μ( ρ _1 u_0^μ
+ρ _0∂ ^μψ) =0,
and
u_0^μ∂ _μψ -bρ _1 =0,
where we have defined u_0^μ =∂ ^μ S_0.
Hence, solving (<ref>) for ρ_1 and substituting into (<ref>), we have
∂ _μ[u^μ_0u^ν_0 + bρ _0 g^μν]∂_νψ =0.
We can also write the above equation as follows:
∂ _t{ω^2_0
[ -1- bρ_0/2ω^2_0]∂ _t ψ
-ω^2_0v^i_0/ω_0∂ _i ψ}
+∂ _i{-ω^2_0 v^i_0/ω_0∂ _t ψ
+ω^2_0[ - v^i_0 v^j_0/ω^2_0 +bρ_0/2ω^2_0δ^ij]
∂ _j ψ} =0,
where ω_0=-∂^t S_0 and v_0^i=∂ _i S_0 (the local velocity field).
In addition, we define
c^2_s = bρ_0/(2ω^2_0) to be the square of the sound speed
and v^i=v^i_0/ω_0.
With these definitions, Eq. (<ref>) becomes
∂ _t{bρ _0/2c_s ^2[ (-1 -c_s^2 )∂ _t ψ
- v^i ∂ _iψ] }
+ ∂ _i{bρ _0/2c_s^2[ - v^i∂ _t ψ
+( - v^i v^j +c_s^2 δ^ij)
∂ _j ψ]} =0.
In this way, the above equation can be written as a Klein-Gordon equation in (3+1) dimensional curved space as follows:
1/√(-g)∂_μ( √(-g)g^μν∂_ν)ψ=0,
where
√(-g)g^μν=bρ_0/2c^2_s([ -1-c^2_s ⋮ -v^i; ⋯⋯ · ⋯⋯; -v^j ⋮ c^2_sδ^ij - v^i v^j ]).
Hence, by determining the inverse of g^μν, we find the relativistic acoustic metric given by
g_μν=bρ_0/2c_s√(1+c^2_s-v^2)([ -c^2_s + v^2 ⋮ -v^i; ⋯⋯ · ⋯⋯; -v^j ⋮ (1+c^2_s)δ^ij ]).
The metric depends on the density ρ_0, the local sound speed in the fluid c_s, and the flow velocity v⃗.
This is the acoustic black hole metric, valid even for large (relativistic) values of c_s and v⃗.
Note that, in the non-relativistic limit, the metric found by Unruh is recovered up to an overall factor:
g_μν=bρ_0/2c_s([ -c^2_s + v^2 ⋮ -v^i; ⋯⋯ · ⋯⋯; -v^j ⋮ δ^ij ]).
The relativistic acoustic metric (<ref>) has also been obtained from the Abelian Higgs model <cit.>.
§.§ The Dispersion Relation
Here we aim to examine the dispersion relation.
Hence, we will adopt the notation written below
ψ∼[e^iω t - i k⃗·x⃗],
ω=∂ψ/∂ t,
k⃗=∇ψ.
So we can write the Klein-Gordon equation (<ref>) in terms of momentum and frequency as follows:
(1+c^2_s) ω^2 + 2(v⃗·k⃗) ω
-( c_s^2 - v^2) k^2 = 0.
Now, by taking k^i = k δ^{i1}, we have
ω = [-v_1 k ± c_s k√(1 + c^2_s - v^2_1)] / (1 + c^2_s)
  = [-v_1 k ± c_s k√(1 + (c_s - v_1)(c_s + v_1))] / (1 + c^2_s),
In the limit of small v_1, we find the modified dispersion relation
ω≈ E(1+v_1/2),
where E=c_s k is the linear dispersion relation.
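As a quick numerical illustration (ours, not part of the original derivation), the two exact branches of the quadratic above can be evaluated directly:

```python
import numpy as np

def omega_branches(k, c_s, v1):
    """Exact roots of (1 + c_s^2) w^2 + 2 v1 k w - (c_s^2 - v1^2) k^2 = 0,
    i.e. the dispersion relation above with the wavevector along the flow."""
    a, b, c = 1.0 + c_s**2, 2.0 * v1 * k, -(c_s**2 - v1**2) * k**2
    disc = np.sqrt(b**2 - 4.0 * a * c)
    return (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)

# Example: a slow flow, where the upward branch stays close to the linear relation E = c_s k
w_plus, w_minus = omega_branches(k=1.0, c_s=0.1, v1=0.01)
```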
§ MODIFIED ACOUSTIC BLACK HOLE
In this section we review the derivation of the relativistic acoustic metric from the Abelian Higgs model in the Lorentz-violating and noncommutative backgrounds.
§.§ The Lorentz-Violating Model
At this point, we consider the Abelian Higgs model with Lorentz symmetry breaking that has been introduced as a change in the scalar sector of the Lagrangian <cit.>.
Moreover, the relativistic acoustic metric violating Lorentz has been found in <cit.>.
Then, the corresponding Lagrangian for the abelian Higgs model in the Lorentz-violating background is written as follows:
L = -1/4F_μνF^μν +|D_μϕ|^2+ m^2|ϕ|^2-b|ϕ|^4+ k^μνD_μϕ^∗D_νϕ,
where F_μν=∂_μA_ν-∂_νA_μ is the field strength tensor, D_μϕ=∂_μϕ - ieA_μϕ is the covariant derivative, and k^μν is a constant tensor implementing the Lorentz symmetry breaking,
given by <cit.>
k_μν=[[ β α α α; α β α α; α α β α; α α α β ]], (μ,ν=0,1,2,3),
where α and β are real parameters.
Next, following the steps taken in the previous section to derive the relativistic acoustic metric from quantum field theory, we consider ϕ = √(ρ(x, t))exp(iS(x, t)) in the Lagrangian above. Thus, we have
L = -1/4 F_μνF^μν + ρ∂_μS∂^μS - 2eρ A_μ∂^μS+ e^2ρ A_μ A^μ + m^2ρ - bρ^2
+ k^μνρ(∂_μS∂_νS-2eA_μ∂_νS+e^2 A_μ A_ν)
+ρ/√(ρ)(∂_μ∂̃^μ)√(ρ),
where ∂̃^μ=∂^μ + k^μν∂_ν. The equations of motion for S and ρ are:
∂_μ[ρ u^μ
+ ρ k^μνu_ν]=0,
and
(∂_μ∂̃^μ)√(ρ)/√(ρ)
+ u_μu^μ + k^μνu_μu_ν +m^2 - 2bρ=0,
where we have defined u^μ =∂^μ S - e A^μ.
Now, by linearizing the equations above around the background (ρ_0,S_0), with
ρ=ρ_0+ ϵρ_1 + 𝒪(ϵ^2),
S=S_0+ϵψ + 𝒪(ϵ^2),
and keeping the vector field A_μ unchanged, we have
∂_μ[ρ_1( u_0^μ+ k^μνu_0ν)
+ρ_0(g^μν+k^μν)∂_νψ] = 0,
and
( u_0^μ + k^μνu_0ν) ∂ _μψ -bρ _1 =0,
by solving (<ref>) for ρ_1 and replacing into equation (<ref>), we obtain
∂ _μ[u^μ_0u^ν_0 + k^μλu_0λu_0^ν + u_0^μk^νλu_0λ
+ bρ _0 (g^μν+k^μν)]∂_νψ =0.
Hence, we find the equation of motion for a linear acoustic disturbance ψ given by a Klein-Gordon equation in a curved space
1/√(-g)∂_μ(√(-g)g^μν∂_ν)ψ=0,
where g_μν is the relativistic acoustic metrics.
For β≠0 and α=0, we have <cit.>
g_μν≡bρ_0β̃_-^1/2/2c_s√(𝒬)[[ -(c_s^2/β̃_+-β̃_-/β̃_+v^2) ⋮ -v^j; ⋯⋯⋯⋯⋯ · ⋯⋯⋯⋯⋯⋯; -v^i ⋮ f_βδ^ij+β̃_-/β̃_+v^iv^j ]],
where Q=1+c_s^2/β̃_+-β̃_-/β̃_+v^2
and f_β=β̃_+/β̃_-
+c_s^2/β̃_--β̃_-/β̃_+v^2.
The acoustic line element in the Lorentz-violating background can be written as follows
ds^2 = bρ_0β̃_-^1/2/2c_s√(Q)[-(c_s^2/β̃_+-β̃_-/β̃_+v^2)dt^2-2v⃗· dx⃗dt
+β̃_-/β̃_+(v⃗· dx⃗)^2
+f_β dx⃗^2].
Now changing the time coordinate as dτ=dt + β̃_+v⃗· dx⃗/c^2_s-β̃_-v^2,
we find the acoustic metric in the stationary form
ds^2 = bρ_0β̃_-^1/2/2c_s√( Q)[-(c_s^2/β̃_+-β̃_-/β̃_+v^2)dτ^2+
F(β̃_-v^iv^j/c^2_s-β̃_-v^2
+f_β/ Fδ^ij)dx^idx^j].
where F=(β̃_+/β̃_-+c_s^2/β̃_+
-β̃_-/β̃_+v^2).
For β̃=1 we recover the result found in Ref. <cit.>.
Next, for β=0 and α≠ 0, we have <cit.>
g_μν ≡ (bρ_0/2c_s√(f)) [[ g_tt, g_tj; g_it, g_ij ]],
where
g_tt = -[(1+α)c^2_s-v^2+α^2(1-v)^2],
g_tj = -(1-α⃗·v⃗)v^j,
g_it = -(1-α⃗·v⃗)v^i,
g_ij = [(1-α⃗·v⃗)^2+c^2_s-v^2]δ^ij +v^iv^j,
f = (1+α)[(1-α⃗·v⃗)^2+c^2_s]-v^2+α^2(1-v)^2[1+(1-α⃗·v⃗)^2c^-2_s].
Thus, the acoustic line element in the Lorentz-violating background can be written as
ds^2 = bρ_0/2c_s√(f)[g_ttdt^2-2(1-α⃗·v⃗)(v⃗· dx⃗)dt+(v⃗· dx⃗)^2
+ f_α dx⃗^2],
where f_α=(1-α⃗·v⃗)^2+c^2_s-v^2. Now changing the time coordinate as
dτ=dt + (1-α⃗·v⃗)(v⃗· dx⃗)/[(1+α)c^2_s-v^2+α^2(1-v)^2],
we find the acoustic metric in the stationary form
ds^2=bρ_0/2c_s√(f)[g_ttdτ^2+Λ(-v^iv^j/g_tt
+f_αδ^ij/Λ)dx^idx^j].
where Λ=(1-α⃗·v⃗)^2-g_tt.
For α=0, the result found in <cit.> is recovered.
§.§ Noncommutative Acoustic Black Hole
The metric of a noncommutative canonical acoustic black hole has been found by us in <cit.>.
Here, starting from the noncommutative Abelian Higgs model, we briefly review the steps to generate the relativistic acoustic metric in the noncommutative background.
Thus, the Lagrangian of the Abelian Higgs model in the noncommutative background is given by <cit.>
L̂ = -κ_+/4F_μνF^μν
+κ_-(|D_μϕ|^2+ m^2|ϕ|^2-b|ϕ|^4)
+1/2θ^αβF_αμ[(D_βϕ)^†D^μϕ+(D^μϕ)^†D_βϕ],
where κ_±=1 ±θ^μνF_μν/2, F_μν=∂_μA_ν-∂_νA_μ is the field strength tensor
and D_μϕ=∂_μϕ - ieA_μϕ the covariant derivative.
The parameter θ^αβ is a constant, real-valued antisymmetric D×D matrix in D-dimensional spacetime with dimensions of length squared.
Now, we use ϕ = √(ρ(x, t))exp(iS(x, t)) in the above Lagrangian,
such that <cit.>.
L = -κ_+/4F_μνF^μν
+ρg̅^μν D_μS D_νS+θ̃ m^2ρ-θ̃bρ^2
+ρ/√(ρ)g̅^μν∂_μ∂_ν√(ρ),
where D_μ=∂_μ-eA_μ/S, g̅^μν=θ̃g^μν+Θ^μν, θ̃=(1+θ⃗·B⃗), B⃗=∇×A⃗ and Θ^μν=θ^αμF_α^ν.
In our analysis we consider the case where there is no noncommutativity between space and time, that is θ^0i=0 and use θ^ij=ε^ijkθ^k, F^i0=E^i and F^ij=ε^ijkB^k.
In the sequence we obtain the equations of motion for S and ρ as follows:
∂_μ[θ̃ρ u^μ
+ρΘ̃^μνu_ν]=0,
and
1/√(ρ)g̅^μν∂_μ∂_ν√(ρ)
+g̅^μνu_μu_ν
+θ̃m^2-2θ̃bρ=0,
where Θ̃^μν=(Θ^μν+Θ^νμ)/2.
Hence, by linearizing the equations of motion around the background (ρ_0,S_0), with ρ=ρ_0+ρ_1, S=S_0+ψ
and keeping the vector potential A_μ unchanged, such that
∂_μ[ρ_1g̅^μνu_0ν
+ρ_0(g^μν+Θ̃^μν)∂_νψ] = 0,
and
(θ̃ u_0^μ + Θ̃^μνu_0ν) ∂ _μψ -bθ̃ρ _1 =0.
Then, by manipulating the above equations, we obtain the equation of motion for a linear acoustic disturbance ψ in the form
1/√(-g)∂_μ(√(-g)g^μν∂_ν)ψ=0,
where g_μν=bρ_0/2c_s√(f)g̃_μν is the relativistic acoustic metric with noncommutative corrections in (3+1) dimensions and with g̃_μν given in the form <cit.>
g̃_tt = -[(1-3θ⃗·B⃗)c^2_s-(1+3θ⃗·B⃗)v^2
+2(θ⃗·v⃗)(B⃗·v⃗)-(θ⃗×E⃗)·v⃗],
g̃_tj = -1/2(θ⃗×E⃗)^j(c^2_s+1)-[2(1+2θ⃗·B⃗)
-(θ⃗×E⃗)·v⃗]v^j/2+B^j/2(θ⃗·v⃗)+θ^j/2(B⃗·v⃗),
g̃_it = -1/2(θ⃗×E⃗)^i(c^2_s+1)-[2(1+2θ⃗·B⃗)-(θ⃗×E⃗)·v⃗]v^i/2
+B^i/2(θ⃗·v⃗)+θ^i/2(B⃗·v⃗),
g̃_ij = [(1+θ⃗·B⃗)(1+c^2_s)-(1+θ⃗·B⃗)v^2
-(θ⃗×E⃗)·v⃗]δ^ij+(1+θ⃗·B⃗)v^iv^j.
f = [(1-2θ⃗·B⃗)(1+c^2_s)-(1+4θ⃗·B⃗)v^2]
-3(θ⃗×E⃗)·v⃗+2(B⃗·v⃗)(θ⃗·v⃗).
Setting θ=0, the acoustic metric above reduces to the acoustic metric obtained in Ref. <cit.>.
§ MODIFIED CANONICAL ACOUSTIC BLACK HOLE
In this section, we address in more detail the Hawking temperature in the low-velocity regime for the previous cases. We now consider an incompressible fluid with spherical symmetry. In this case the density ρ is position independent and the continuity equation implies that v∼1/r^2. The sound speed is also constant.
In the following we examine the Hawking radiation and entropy of the usual canonical acoustic black hole, as well as in the Lorentz-violating and noncommutative backgrounds.
§.§ Canonical Acoustic Metric
In this case the line element of the acoustic black hole is given by
ds^2=-f(v_r)dτ^2+c^2_s/f(v_r)dr^2
+r^2(dθ^2+sin^2θ dϕ^2),
where the metric function, f(v_r) takes the form
f(v_r)=c^2_s-v^2_r ⟶ f(r)=c^2_s(1-r^4_h/r^4).
Here we have defined
v_r=c_s r^2_h/r^2,
where r_h is the radius of the event horizon.
In this case we compute the Hawking temperature using the following formula:
T_H=f^'(r_h)/4π=c^2_s/π r_h.
By considering the above result for the Hawking temperature and applying the first law of thermodynamics, we can obtain the entropy (entanglement entropy <cit.>) of the acoustic black hole as follows:
S=∫dE/T=∫dA/(4π r_h T_H)=A/(4c^2_s),
where A=4π r^2_h is the horizon area of the canonical acoustic black hole.
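As a cross-check, both expressions can be verified symbolically; the short sympy sketch below (our own; the integration variable is r_h, with the horizon area A=4π r^2_h fixing the upper limit) reproduces the temperature and the entropy.

import sympy as sp

r, rh, cs, A = sp.symbols('r r_h c_s A', positive=True)

# Metric function of the canonical acoustic black hole
f = cs**2*(1 - rh**4/r**4)

# Hawking temperature T_H = f'(r_h)/(4 pi)
T_H = sp.diff(f, r).subs(r, rh)/(4*sp.pi)
print(sp.simplify(T_H))                                   # -> c_s**2/(pi*r_h)

# Entropy S = int dA/(4 pi r_h T_H), with A = 4 pi r_h**2 so that dA = 8 pi r_h dr_h
S = sp.integrate(8*sp.pi*rh/(4*sp.pi*rh*T_H), (rh, 0, sp.sqrt(A/(4*sp.pi))))
print(sp.simplify(S))                                     # -> A/(4*c_s**2)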
§.§ Canonical Acoustic Metric with Lorentz Violation
In the limit c^2_s≪1 and v^2≪1, the acoustic metric can be written as a Schwarzschild-type metric.
Thus, for β≠0 and α=0 and up to an irrelevant position-independent factor,
we have <cit.>
ds^2 = -f(v_r)dτ^2+c^2_s/√(β̃_-β̃_+)f(v_r)dr^2
+√(β̃_̃+̃/β̃_-)r^2(dθ^2+sin^2θ dϕ^2),
where
f(v_r)=√(β̃_̃-̃/β̃_+)[c^2_s-β̃_-v^2_r/β̃_+]→
f(r)=√(β̃_̃-̃/β̃_+)[c^2_s/β̃_+(1-β̃_-r^4_h/r^4)].
The Hawking temperature is given by
T_H=f^'(r_h)/4π=c^2_s(1-β)^3/2/(1+β)^3/2π r_h
=c^2_s(1-3β)/π r_h.
Therefore, the temperature decreases as the parameter β is increased.
For β=0 the usual result is obtained.
Hence, from the above temperature, we obtain the following result for the entropy of the acoustic black hole in the Lorentz-violating background:
S=(1+3β)A/(4c^2_s).
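The two first-order prefactors above follow from simple series expansions, which can be checked with the short sympy sketch below (our own).

import sympy as sp

beta = sp.symbols('beta', positive=True)
# Temperature prefactor (1-beta)^(3/2)/(1+beta)^(3/2) and entropy prefactor 1/(1-3 beta)
print(sp.series((1 - beta)**sp.Rational(3, 2)/(1 + beta)**sp.Rational(3, 2), beta, 0, 2))  # -> 1 - 3*beta + O(beta**2)
print(sp.series(1/(1 - 3*beta), beta, 0, 2))                                               # -> 1 + 3*beta + O(beta**2)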
Now, for β=0 and α≠ 0 and sufficiently small α, we have up to first order
f(v_r)=[α̃c^2_s-v^2_r]/√(α̃(1-2α v_r)),
where α̃=1+α. For v_r=c_s r^2_h/r^2 with c_s=1, the metric function becomes
f(r)≃α̃^-1/2[α̃-r_h^4/r^4(1+αr_h^2/r^2)
+αr_h^2/r^2].
In the present case there is a richer structure, analogous to that of charged and rotating black holes.
The event horizon of the modified canonical acoustic black hole is obtained from the following equation:
α̃-r_h^4/r^4(1+αr_h^2/r^2)
+αr_h^2/r^2=0,
which can also be rewritten in the form
r^6 + α r^2_h r^4 - α̃^-1 r^4_h r^2 - α r^6_h=0,
we can also write
r^2(r^2 - r^2_+)(r^2- r^2_-) - α r^6_h=0,
where
r^2_±=r^2_h(-α/2±1/√(α̃)).
Now, rearranging the above equation (<ref>), we have
r^2=r^2_+ + α r^6_h/r^2(r^2 - r^2_- ).
Therefore, we can find the event horizon by solving the above equation perturbatively.
So, up to the first order in α, we obtain
r̃^2_+≈ r^2_+ + α r^6_h/r^2_+(r^2_+ - r^2_- )
=( 1-α/2) r^2_h + ⋯.
Then, we have
r̃_+=r_h√(1-α/2) + ⋯.
For the Hawking temperature, we obtain
T_H=1/πr̃_+(1+3α/2).
In terms of r_h, we have
T_H=(1+7α/4)1/(π r_h).
In this situation the temperature increases as the parameter α is increased.
For α=0 one recovers the usual result.
In this case for entropy, we find
S=(1-7α/4)A/4.
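The first-order horizon shift and the resulting temperature can be cross-checked symbolically; in the sketch below (our own) c1 parametrizes the ansatz r = r_h(1 + c1 α).

import sympy as sp

alpha, r, rh = sp.symbols('alpha r r_h', positive=True)
c1 = sp.symbols('c1')
atilde = 1 + alpha

# Horizon condition of the alpha-modified canonical acoustic black hole
horizon = atilde - (rh**4/r**4)*(1 + alpha*rh**2/r**2) + alpha*rh**2/r**2

# Expand to first order in alpha with the ansatz r = r_h (1 + c1 alpha)
expanded = sp.expand(sp.series(horizon.subs(r, rh*(1 + c1*alpha)), alpha, 0, 2).removeO())
print(sp.solve(expanded.coeff(alpha, 1), c1))             # -> [-1/4], i.e. r_+ ≈ r_h (1 - alpha/4)

# Hawking temperature T_H = (1 + 3 alpha/2)/(pi r_+) to first order
T_H = (1 + sp.Rational(3, 2)*alpha)/(sp.pi*rh*(1 - alpha/4))
print(sp.series(T_H, alpha, 0, 2))                        # -> 1/(pi*r_h) + 7*alpha/(4*pi*r_h) + O(alpha**2)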
§.§ Noncommutative Canonical Acoustic Metric
The noncommutative acoustic metric can be written as a Schwarzschild-type metric, up to an irrelevant
position-independent factor, in the nonrelativistic limit as follows <cit.>,
ds^2 = -F̃(v_r)dτ^2+[v_r^2Γ+Σ+F̃(v_r)Λ]/F̃(v_r)dr^2
+r^2(dϑ^2+sin^2ϑ dϕ^2)/√(f),
where
F̃(v_r) = F(v_r)/√(f(v_r))=1/√(f(v_r))[(1-3θ⃗·B⃗)c^2_s-(1+3θ⃗·B⃗)v^2_r-θ E_rv_r
+2(θ_rB_rv^2_r)],
f(v_r) = 1-2θ⃗·B⃗-3θ E_r v_r,
Λ(v_r) = 1+θ⃗·B⃗-θ E_r v_r,
Γ(v_r) = 1+4θ⃗·B⃗-2θ E_r v_r,
Σ(v_r) = [θ E_r -(B_rv_r)θ_r-(θ_r v_r)B_r]v_r,
being θ E_r=θ(n⃗×E⃗)_r.
Now, by applying the relation v_r=c_s r^2_h/r^2, where r_h is the radius of the event horizon, and setting c_s=1, the metric function of the noncommutative canonical acoustic black hole becomes
F̃(r)=[ 1-3θ⃗·B⃗ -(1+3θ⃗·B⃗-2θ_rB_r)
r^4_h/r^4
-θ E_rr^2_h/r^2]
[1-2θ⃗·B⃗
-3θ E_rr^2_h/r^2]^-1/2.
Next, we carry out our analysis considering the pure magnetic sector first, and then we investigate the pure electric sector.
Hence, for θ_r=0, θ⃗·B⃗=θ_3B_3≠ 0 and θ E_r=0 (or E=0), with small θ_3B_3 the Hawking temperature reads
T_H=(1+3θ_3B_3)/√(1-2θ_3B_3)1/(π r_h)
=(1+4θ_3B_3)/(π r_h).
For θ=0 the usual result is obtained.
Here the temperature increases as the parameter θ is increased.
From the temperature in (<ref>) we can find the entropy, given by
S=∫dE/T=∫dA/4π r_h T_H=(1-4θ_3 B_3)/4A,
where A=4π r^2_h is the horizon area of the canonical acoustic black hole.
At this point, we will consider the situation where B=0 and θ E_r≠ 0.
So, from (<ref>), we have
F̃(r)=[ 1-r^4_h/r^4
-θ E_rr^2_h/r^2]
[1-3θ E_rr^2_h/r^2]^-1/2.
For this metric the event horizon is obtained by solving the equation below
1-r^4_h/r^4-θ E_rr^2_h/r^2=0,
or
r^4 - θ E_rr^2_h r^2 - r^4_h=0.
So, solving the above equation, we obtain
r_+=(1+θℰ_r/4)r_h.
For the Hawking temperature, we find
T_H = [1-θ E_r/2]/√(1-3θ E_r)1/π r_+=(1+θ E_r )/π r_+,
= (1+3θ E_r/4 )/π r_h.
We also note that the temperature increases as the θ parameter is increased.
For entropy we have
S=(1-θ E_r)A/4,
where A=4π r^2_+.
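The shifted horizon and the first-order temperature of the electric sector can likewise be checked symbolically (sketch; the symbol thE stands for θ E_r, and x = r^2).

import sympy as sp

thE, rh, x = sp.symbols('thE r_h x', positive=True)

# Horizon condition r^4 - thE r_h^2 r^2 - r_h^4 = 0, written for x = r^2
roots = sp.solve(x**2 - thE*rh**2*x - rh**4, x)
x_plus = [s for s in roots if sp.simplify(s.subs(thE, 0) - rh**2) == 0][0]   # physical root
print(sp.series(sp.sqrt(x_plus), thE, 0, 2))                    # -> r_h + thE*r_h/4 + O(thE**2)

# Hawking temperature of the electric sector, expanded in thE
T_H = (1 - thE/2)/(sp.sqrt(1 - 3*thE)*sp.pi*sp.sqrt(x_plus))
print(sp.series(T_H, thE, 0, 2))                                # -> 1/(pi*r_h) + 3*thE/(4*pi*r_h) + O(thE**2)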
§ QUANTUM-CORRECTED HAWKING TEMPERATURE AND ENTROPY
In this section, we implement quantum corrections in the Hawking temperature and entropy calculation arising from the generalized uncertainty principle and modified dispersion relations.
§.§ Result using GUP
At this point, we introduce quantum corrections via the generalized uncertainty principle (GUP) to determine the Hawking temperature and entropy of the canonical acoustic black hole in the Lorentz-violating and noncommutative background.
So, we will adopt the following GUP <cit.>
Δ xΔ p≥ħ/2( 1-λ l_p/ħΔ p +λ^2 l^2_p/ħ^2 (Δ p)^2 ),
where λ is a dimensionless positive parameter and l_p is the Planck length.
In sequence, without loss of generality, we will adopt the natural units G=c=k_B=ħ=l_p=1 and by
assuming that Δ p∼ E
and following the steps performed in <cit.> we can obtain the following relation for the corrected energy of the black hole
E_gup≥ E[1-λ/2(Δ x)+ λ^2/2(Δ x)^2+⋯].
Thus, applying the tunneling formalism using the Hamilton-Jacobi method, we have the following result for the probability of tunneling with corrected energy E_ gup given by
Γ≃exp[-2 Im ( I)]=exp[-4πE_gup/κ],
where κ is the surface gravity.
Comparing with the Boltzmann factor, e^-E/T, we obtain the following result for the Hawking temperature with quantum corrections
T≤T_H[ 1-λ/2(Δ x)+ λ^2/2(Δ x)^2+⋯]^-1.
So, by applying it to temperature (<ref>), we have the following result
T=c^2_s/π[ r_h-λ/4+ λ^2/8r_h+⋯].
Therefore, when r_h=0 the singularity is removed and the temperature is now zero.
Next, we analyze the effect of GUP in the Lorentz-violating and noncommutative cases.
For this case we can calculate the entropy which is given by
S=A/4c^2_s - 4√(π)λ√(A)/4c^2_s+πλ^2ln A/8c^2_s+⋯.
So due to the GUP we get a logarithmic correction term for the entropy.
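The corrected temperature above follows from (<ref>) with the identification Δx = 2r_h, which is the choice implied by its coefficients; the short algebraic check below (our own sketch) confirms the rewriting and the removal of the r_h → 0 singularity.

import sympy as sp

rh, lam, cs = sp.symbols('r_h lambda c_s', positive=True)

corr = 1 - lam/(2*(2*rh)) + lam**2/(2*(2*rh)**2)          # GUP factor with Delta x = 2 r_h
T = (cs**2/(sp.pi*rh))/corr                               # T = T_H [ ... ]^{-1}
T_ref = cs**2/(sp.pi*(rh - lam/4 + lam**2/(8*rh)))        # corrected temperature quoted above

print(sp.simplify(T - T_ref))                             # -> 0, the two forms agree exactly
print(sp.limit(T, rh, 0, '+'))                            # -> 0, the singularity at r_h = 0 is removed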
§.§.§ Lorentz-Violating Case
In the situation where β≠ 0 and α=0, the corrected temperature due to GUP is
T=T_H[ 1-λ/4 r_h+ λ^2/8r^2_h+⋯]^-1.
where
T_H=(1-3β)/π r_h.
Thus, we have
T=(1-3β)/π[ r_h-λ/4+ λ^2/8r_h+⋯].
Note that when r_h→ 0 the Hawking temperature tends to zero, T→ 0.
In the absence of the GUP the temperature, T_H, diverges when r_h=0. Therefore, we observe that the GUP has the effect of removing the singularity at r_h=0 in the Hawking temperature of the acoustic black hole.
Now computing the entropy, we find
S=(1+3β)[A/4 - 4√(π)λ√(A)/4
+πλ^2ln A/8+⋯].
For β=0 and α≠ 0, we have
T=(1+3α/2)/π[r̃_+-λ/4+ λ^2/8r̃_++⋯].
In terms of r_h, we obtain
T=(1+3α/2)/π[r_h(1-α/4)-λ/4
+ λ^2/8r_h(1+α/4)+⋯].
In this situation, we can also verify the effect of the GUP on the temperature, which goes to zero
when r_h→ 0 (r̃_+→ 0).
In addition, we note that in both cases the Hawking temperature reaches a maximum value before going to zero,
as we can see in Fig. <ref>, thus presenting a behavior analogous to that of the corrected Hawking temperature of the Schwarzschild black hole.
For entropy, we obtain
S=(1-3α/2)[(1-α/4)A/4 - 4√(π)λ√(A)/4
+(1+α/4)πλ^2ln A/8+⋯].
Again we find a logarithmic correction term and also the contribution of the α parameter to the entropy.
§.§.§ Noncommutative Case
For the magnetic sector, the GUP-corrected Hawking temperature is given by
T=(1+4θ_3 B_3)/π[ r_h-λ/4+ λ^2/8r_h+⋯].
Note that, the GUP acts as a temperature regulator by removing the singularity when r_h=0. In addition, the temperature goes through a maximum value point before going to zero for r_h=0.
In this case entropy is given by
S=(1-4θ_3 B_3)[A/4 - 4√(π)λ√(A)/4
+πλ^2ln A/8+⋯].
Next, for the electrical sector, we find the following GUP-corrected Hawking temperature
T=(1+θℰ_r)/π[ r_+-λ/4+ λ^2/8r_++⋯].
In terms of r_h, the temperature becomes
T=(1+θℰ_r)/π[ r_h(1+θℰ_r/4) -λ/4+ λ^2/8r_h(1-θℰ_r/4)+⋯].
Hence, as was verified in the Lorentz-violating case, here in both the magnetic and electric sectors the corrected temperature has its singularity removed as the horizon radius goes to zero.
Also, in this case we can observe that the temperature reaches a maximum value and then goes to zero when the horizon radius is zero.
At this point, when determining the entropy, we have
S=(1-θℰ_r)[(1+θℰ_r/4)A/4 - 4√(π)λ√(A)/4
+(1-θℰ_r/4)πλ^2ln A/8+⋯].
§.§ Result using modified dispersion relation
Near the event horizon the dispersion relation (<ref>) becomes
ω= E(1+a^2_0/2r^2_h),
where a_0 is a parameter with length dimension.
By assuming k∼Δ k∼ 1/Δ x=1/r_h, we can write
ω= E(1+a^2_0 k^2/2).
Thus, in terms of the energy difference, we have
Δ E/E=ω - E/E=a^2_0 k^2/2.
Next, by using Rayleigh's formula, which relates the phase and group velocities,
v_g=v_p + k dv_p/dk,
where the phase velocity (v_p) and the group velocity (v_g) are given by
v_p=ω/k=1+a^2_0 k^2/2,
and
v_g=dω/dk=1+3a^2_0 k^2/2.
we find the following expression for the velocity difference:
v_g - v_p/v_p=a^2_0 k^2,
which corresponds to the supersonic case (v_g > v_p).
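The group velocity and the leading-order velocity difference can be checked directly (sketch):

import sympy as sp

k, a0 = sp.symbols('k a_0', positive=True)

v_p = 1 + a0**2*k**2/2                         # phase velocity omega/k
v_g = sp.diff(k*v_p, k)                        # group velocity d(omega)/dk
print(sp.expand(v_g))                          # -> 3*a_0**2*k**2/2 + 1
print(sp.series((v_g - v_p)/v_p, a0, 0, 3))    # -> a_0**2*k**2 + O(a_0**3), i.e. a_0^2 k^2 at leading order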
Furthermore, the Hawking temperature (<ref>) can be corrected by applying the dispersion relation (<ref>), i.e.
T_H=c^2_s/[π(r_h + a^2_0/2r_h)].
Note that the singularity is removed when r_h=0 and the temperature vanishes.
In addition, the temperature reaches a maximum value before going to zero,
as we can see in Fig. <ref>.
Now, by calculating the entropy, we find
S=A/4c^2_s+2π a_0^2ln A/4c^2_s.
Here a logarithmic correction term arises in entropy on account of the modified dispersion relation.
In order to correct the Hawking temperature and entropy for the Lorentz-violating and non-commutative cases, we will apply the modified dispersion relations obtained in Refs. <cit.>.
§.§.§ Lorentz-Violating Case
In the situation where β=0 and α≠ 0, we have the following dispersion relation:
ω=E(1+α/2+α a_0^2/r̃^2_+).
So for temperature (<ref>), we get
T=(1+3α/2)/π[r̃_++α/2+ α a_0^2/r̃_+].
Furthermore, the result shows that the temperature reaches a maximum point and then goes to zero when the horizon radius is zero.
Moreover, entropy is given by
S=(1-3α/2)[(1-α/4)A/4 + 2√(π)α√(A)/4
+4πα a^2_0ln A/4].
Again due to the contribution of the modified dispersion relation, a logarithmic correction term arises in the entropy.
§.§.§ Noncommutative Case
At this point we consider the dispersion relation for the pure electric sector. So we have
ω=E(1+θℰ_1 a_0^2/4r^2_+).
For the temperature (<ref>), we find
T=(1+θ E_r )/π(r_+ + θℰ_1 a_0^2/4r_+),
which in terms of r_h becomes
T=(1+θ E_r )/π[(1+θℰ_r/4)r_h + θℰ_1 a_0^2/4r_h].
Here, we can see that the temperature goes through a maximum value before going to zero for r_h=0.
Hence, the result for entropy is
S=(1-θℰ_r)[(1+θℰ_r/4)A/4
+πθℰ_1 a_0^2ln A/4].
In the above equation a logarithmic correction term arises in entropy as a consequence of the noncommutativity effect on the dispersion relation.
§ CONCLUSIONS
In summary, in this work we have reviewed the steps to generate relativistic acoustic metrics in the Lorentz-violating
and noncommutative backgrounds.
In particular, we have considered the canonical acoustic metric modified by the contributions of Lorentz-violating and noncommutative terms, in order to examine Hawking radiation and entropy.
Moreover, we have verified, in the calculation of the Hawking temperature, that due to the presence of the GUP and the modified dispersion relation, the singularity is removed. In addition, we have shown that in these cases, the temperature reaches a maximum value and then vanishes when the horizon radius goes to zero.
Furthermore, entropy has been computed, and we show that logarithmic correction terms are generated due to the GUP and also the modified dispersion relation.
Therefore, the presented results show a behavior similar to what happens in the case of the Schwarzschild black hole.
We would like to thank CNPq, CAPES and CNPq/PRONEX/FAPESQ-PB (Grant nos. 165/2018 and 015/2019),
for partial financial support. MAA, FAB and EP acknowledge support from CNPq (Grant nos. 306398/2021-4, 312104/2018-9, 304290/2020-3).
100
Unruh:1980cg
W. G. Unruh,
Phys. Rev. Lett. 46 (1981), 1351-1353
Unruh:1994je
W. G. Unruh,
Phys. Rev. D 51 (1995), 2827-2838
[arXiv:gr-qc/9409008 [gr-qc]].
LIGOScientific:2016aoc
B. P. Abbott et al. [LIGO Scientific and Virgo],
Phys. Rev. Lett. 116 (2016) no.6, 061102
[arXiv:1602.03837 [gr-qc]].
LIGOScientific:2017vwq
B. P. Abbott et al. [LIGO Scientific and Virgo],
Phys. Rev. Lett. 119 (2017) no.16, 161101
[arXiv:1710.05832 [gr-qc]].
event2019firstI
E. H. T. Collaboration et al.,
ApJ, 875, L1, 2019.
[arXiv:1906.11238 [astro-ph.GA]]
event2019firstVI
E. H. T. Collaboration et al.,
ApJ, 875, L6, 2019.
[arXiv:1906.11243 [astro-ph.GA]]
Visser:1997ux
M. Visser,
Acoustic black holes: Horizons, ergospheres, and Hawking radiation,
Class. Quant. Grav. 15 (1998), 1767-1791
[arXiv:gr-qc/9712010 [gr-qc]].
Barcelo:2005fc
C. Barcelo, S. Liberati and M. Visser,
Analogue gravity,
Living Rev. Rel. 8 (2005), 12
[arXiv:gr-qc/0505065 [gr-qc]].
MunozdeNova:2018fxv
J. R. Muñoz de Nova, K. Golubkov, V. I. Kolobov and J. Steinhauer,
Observation of thermal Hawking radiation and its temperature in an analogue black hole,
Nature 569 (2019) no.7758, 688-691
[arXiv:1809.00913 [gr-qc]].
Isoard:2019buh
M. Isoard and N. Pavloff,
Departing from thermality of analogue Hawking radiation in a Bose-Einstein condensate,
Phys. Rev. Lett. 124 (2020) no.6, 060401
[arXiv:1909.02509 [cond-mat.quant-gas]].
Steinhauer:2014dra
J. Steinhauer,
Observation of self-amplifying Hawking radiation in an analog black hole laser,
Nature Phys. 10 (2014), 864
[arXiv:1409.6550 [cond-mat.quant-gas]].
Drori:2018ivu
J. Drori, Y. Rosenberg, D. Bermudez, Y. Silberberg and U. Leonhardt,
Observation of Stimulated Hawking Radiation in an Optical Analogue,
Phys. Rev. Lett. 122 (2019) no.1, 010404
[arXiv:1808.09244 [gr-qc]].
Rosenberg:2020jde
Y. Rosenberg,
Optical analogues of black-hole horizons,
Phil. Trans. Roy. Soc. Lond. A 378 (2020) no.2177, 20190232
[arXiv:2002.04216 [physics.optics]].
Guo:2019tmr
Y. Guo and Y. G. Miao,
Quasinormal mode and stability of optical black holes in moving dielectrics,
Phys. Rev. D 101 (2020) no.2, 024048
[arXiv:1911.04479 [gr-qc]].
Bera:2020doh
A. Bera and S. Ghosh,
Stimulated Hawking Emission From Electromagnetic Analogue Black Hole: Theory and Observation,
Phys. Rev. D 101 (2020) no.10, 105012
[arXiv:2001.08467 [hep-th]].
Blencowe:2020ygo
M. P. Blencowe and H. Wang,
Analogue Gravity on a Superconducting Chip,
Phil. Trans. Roy. Soc. Lond. A 378 (2020) no.2177, 20190224
[arXiv:2003.00382 [quant-ph]].
Lahav:2009wx
O. Lahav, A. Itah, A. Blumkin, C. Gordon and J. Steinhauer,
Phys. Rev. Lett. 105, 240401 (2010)
doi:10.1103/PhysRevLett.105.240401
[arXiv:0906.1337 [cond-mat.quant-gas]].
Ge:2019our
X. H. Ge, M. Nakahara, S. J. Sin, Y. Tian and S. F. Wu,
Acoustic black holes in curved spacetime and the emergence of analogue Minkowski spacetime,
Phys. Rev. D 99 (2019) no.10, 104047
[arXiv:1902.11126 [hep-th]].
Yu:2017bnu
C. Yu and J. R. Sun,
Note on acoustic black holes from black D3-brane,
Int. J. Mod. Phys. D 28 (2019) no.07, 1950095
[arXiv:1712.04137 [hep-th]].
Ge:2010wx
X. H. Ge and S. J. Sin,
Acoustic black holes for relativistic fluids,
JHEP 06 (2010), 087
[arXiv:1001.0371 [hep-th]].
Anacleto:2010cr
M. A. Anacleto, F. A. Brito and E. Passos,
Acoustic Black Holes from Abelian Higgs Model with Lorentz Symmetry Breaking,
Phys. Lett. B 694 (2011), 149-157
[arXiv:1004.5360 [hep-th]].
Anacleto:2011bv
M. A. Anacleto, F. A. Brito and E. Passos,
Supersonic Velocities in Noncommutative Acoustic Black Holes,
Phys. Rev. D 85 (2012), 025013
[arXiv:1109.6298 [hep-th]].
Anacleto:2013esa
M. A. Anacleto, F. A. Brito and E. Passos,
Acoustic Black Holes and Universal Aspects of Area Products,
Phys. Lett. A 380 (2016), 1105-1109
[arXiv:1309.1486 [hep-th]].
Anacleto:2021nhm
M. A. Anacleto, F. A. Brito, G. C. Luna and E. Passos,
The generalized uncertainty principle effect in acoustic black holes,
Annals Phys. 440, 168837 (2022)
doi:10.1016/j.aop.2022.168837
[arXiv:2112.13573 [gr-qc]].
Bilic:1999sq
N. Bilic,
Relativistic acoustic geometry,
Class. Quant. Grav. 16 (1999), 3953-3964
[arXiv:gr-qc/9908002 [gr-qc]].
Fagnocchi:2010sn
S. Fagnocchi, S. Finazzi, S. Liberati, M. Kormos and A. Trombettoni,
Relativistic Bose-Einstein Condensates: a New System for Analogue Models of Gravity,
New J. Phys. 12 (2010), 095012
[arXiv:1001.1044 [gr-qc]].
Giacomelli:2017eze
L. Giacomelli and S. Liberati,
Rotating black hole solutions in relativistic analogue gravity,
Phys. Rev. D 96, no.6, 064014 (2017)
[arXiv:1705.05696 [gr-qc]].
Visser:2010xv
M. Visser and C. Molina-Paris,
Acoustic geometry for general relativistic barotropic irrotational fluid flow,
New J. Phys. 12 (2010), 095014
[arXiv:1001.1310 [gr-qc]].
Basak:2002aw
S. Basak and P. Majumdar,
`Superresonance' from a rotating acoustic black hole,
Class. Quant. Grav. 20 (2003), 3907-3914
[arXiv:gr-qc/0203059 [gr-qc]].
Richartz:2009mi
M. Richartz, S. Weinfurtner, A. J. Penner and W. G. Unruh,
General universal superradiant scattering,
Phys. Rev. D 80 (2009), 124016
[arXiv:0909.2317 [gr-qc]].
Anacleto:2011tr
M. A. Anacleto, F. A. Brito and E. Passos,
Superresonance effect from a rotating acoustic black hole and Lorentz symmetry breaking,
Phys. Lett. B 703 (2011), 609-613
[arXiv:1101.2891 [hep-th]].
Zhang:2011zzh
L. C. Zhang, H. F. Li and R. Zhao,
Hawking radiation from a rotating acoustic black hole,
Phys. Lett. B 698 (2011), 438-442
Ge:2010eu
X. H. Ge, S. F. Wu, Y. Wang, G. H. Yang and Y. G. Shen,
Acoustic black holes from supercurrent tunneling,
Int. J. Mod. Phys. D 21 (2012), 1250038
[arXiv:1010.4961 [gr-qc]].
Zhao:2012zz
H. H. Zhao, G. L. Li and L. C. Zhang,
Generalized uncertainty principle and entropy of three-dimensional rotating acoustic black hole,
Phys. Lett. A 376 (2012), 2348-2351
Anacleto:2014apa
M. A. Anacleto, F. A. Brito, E. Passos and W. P. Santos,
The entropy of the noncommutative acoustic black hole based on generalized uncertainty principle,
Phys. Lett. B 737 (2014), 6-11
[arXiv:1405.2046 [hep-th]].
Anacleto:2015awa
M. A. Anacleto, F. A. Brito, G. C. Luna, E. Passos and J. Spinelly,
Quantum-corrected finite entropy of noncommutative acoustic black holes,
Annals Phys. 362 (2015), 436-448
[arXiv:1502.00179 [hep-th]].
Anacleto:2016qll
M. A. Anacleto, I. G. Salako, F. A. Brito and E. Passos,
The entropy of an acoustic black hole in neo-Newtonian theory,
Int. J. Mod. Phys. A 33 (2018) no.32, 1850185
[arXiv:1603.07311 [hep-th]].
Anacleto:2019rfn
M. A. Anacleto, F. A. Brito, C. V. Garcia, G. C. Luna and E. Passos,
Quantum-corrected rotating acoustic black holes in Lorentz-violating background,
Phys. Rev. D 100 (2019) no.10, 105005
[arXiv:1904.04229 [hep-th]].
Cardoso:2004fi
V. Cardoso, J. P. S. Lemos and S. Yoshida,
Quasinormal modes and stability of the rotating acoustic black hole: Numerical analysis,
Phys. Rev. D 70 (2004), 124032
[arXiv:gr-qc/0410107 [gr-qc]].
Nakano:2004ha
H. Nakano, Y. Kurita, K. Ogawa and C. M. Yoo,
Quasinormal ringing for acoustic black holes at low temperature,
Phys. Rev. D 71 (2005), 084006
[arXiv:gr-qc/0411041 [gr-qc]].
Berti:2004ju
E. Berti, V. Cardoso and J. P. S. Lemos,
Quasinormal modes and classical wave propagation in analogue black holes,
Phys. Rev. D 70 (2004), 124006
[arXiv:gr-qc/0408099 [gr-qc]].
Chen:2006zy
S. B. Chen and J. L. Jing,
Quasinormal modes of a coupled scalar field in the acoustic black hole spacetime
Chin. Phys. Lett. 23 (2006), 21-24
Guo:2020blq
H. Guo, H. Liu, X. M. Kuang and B. Wang,
Acoustic black hole in Schwarzschild spacetime: quasi-normal modes, analogous Hawking radiation and shadows,
Phys. Rev. D 102 (2020), 124019
[arXiv:2007.04197 [gr-qc]].
Ling:2021vgk
R. Ling, H. Guo, H. Liu, X. M. Kuang and B. Wang,
Shadow and near-horizon characteristics of the acoustic charged black hole in curved spacetime,
Phys. Rev. D 104, no.10, 104003 (2021)
[arXiv:2107.05171 [gr-qc]].
Dolan:2011zza
S. R. Dolan, E. S. Oliveira and L. C. B. Crispino,
Aharonov-Bohm effect in a draining bathtub vortex,
Phys. Lett. B 701 (2011), 485-489
Anacleto:2012ba
M. A. Anacleto, F. A. Brito and E. Passos,
Analogue Aharonov-Bohm effect in a Lorentz-violating background
Phys. Rev. D 86 (2012), 125015
[arXiv:1208.2615 [hep-th]].
Anacleto:2012du
M. A. Anacleto, F. A. Brito and E. Passos,
Noncommutative analogue Aharonov-Bohm effect and superresonance,
Phys. Rev. D 87 (2013) no.12, 125015
[arXiv:1210.7739 [hep-th]].
Anacleto:2015mta
M. A. Anacleto, I. G. Salako, F. A. Brito and E. Passos,
Analogue Aharonov-Bohm effect in neo-Newtonian theory,
Phys. Rev. D 92 (2015) no.12, 125010
[arXiv:1506.03440 [hep-th]].
Anacleto:2016ukc
M. A. Anacleto, F. A. Brito, A. Mohammadi and E. Passos,
Aharonov-Bohm effect for a fermion field in the acoustic black hole ”spacetime”,
Eur. Phys. J. C 77 (2017) no.4, 239
[arXiv:1606.09231 [hep-th]].
Anacleto:2018acl
M. A. Anacleto, F. A. Brito, J. A. V. Campos and E. Passos,
Higher-derivative analogue Aharonov–Bohm effect, absorption and superresonance,
Int. J. Mod. Phys. A 35 (2020) no.21, 2050112
[arXiv:1810.13356 [hep-th]].
Anacleto:2020kxj
M. A. Anacleto, C. H. G. Bessa, F. A. Brito, E. J. B. Ferreira and E. Passos,
Stochastic motion in an expanding noncommutative fluid,
Phys. Rev. D 103 (2021) no.12, 125023
[arXiv:2012.12212 [hep-th]].
Anacleto:2021wmv
M. A. Anacleto, C. H. G. Bessa, F. A. Brito, A. E. Mateus, E. Passos and J. R. L. Santos,
LIV effects on the quantum stochastic motion in an acoustic FRW-geometry,
[arXiv:2106.09684 [gr-qc]].
Qiao:2021trw
C. K. Qiao and M. Zhou,
The Gravitational Bending of Acoustic Schwarzschild Black Hole,
[arXiv:2109.05828 [gr-qc]].
Vieira:2014rva
H. S. Vieira and V. B. Bezerra,
Acoustic black holes: massless scalar field analytic solutions and analogue Hawking radiation,
Gen. Rel. Grav. 48 (2016) no.7, 88
[erratum: Gen. Rel. Grav. 51 (2019) no.4, 51]
[arXiv:1406.6884 [gr-qc]].
Ribeiro:2021fpk
C. C. H. Ribeiro, S. S. Baak and U. R. Fischer,
Existence of steady-state black hole analogs in finite quasi-one-dimensional Bose-Einstein condensates,
Phys. Rev. D 105, no.12, 124066 (2022)
doi:10.1103/PhysRevD.105.124066
[arXiv:2103.05015 [cond-mat.quant-gas]].
Zhang:2016pqx
B. Zhang,
Thermodynamics of acoustic black holes in two dimensions,
Adv. High Energy Phys. 2016, 5710625 (2016)
doi:10.1155/2016/5710625
[arXiv:1606.00693 [hep-th]].
Rinaldi:2011nb
M. Rinaldi,
The entropy of an acoustic black hole in Bose-Einstein condensates,
Phys. Rev. D 84, 124009 (2011)
doi:10.1103/PhysRevD.84.124009
[arXiv:1106.4764 [gr-qc]].
Steinhauer:2015ava
J. Steinhauer,
Measuring the entanglement of analogue Hawking radiation by the density-density correlation function,
Phys. Rev. D 92, no.2, 024043 (2015)
doi:10.1103/PhysRevD.92.024043
[arXiv:1504.06583 [gr-qc]].
Giovanazzi:2011az
S. Giovanazzi,
Entanglement Entropy and Mutual Information Production Rates in Acoustic Black Holes,
Phys. Rev. Lett. 106, 011302 (2011)
doi:10.1103/PhysRevLett.106.011302
[arXiv:1101.3272 [cond-mat.other]].
Anacleto:2022lnt
M. A. Anacleto, F. A. Brito and E. Passos,
Hawking radiation and stability of the canonical acoustic black holes,
[arXiv:2212.13850 [hep-th]].
Bazeia:2005tb
D. Bazeia and R. Menezes,
Phys. Rev. D 73, 065015 (2006)
[arXiv:hep-th/0506262].
Ghosh:2004wi
S. Ghosh,
Noncommutativity in Maxwell-Chern-Simons-matter theory simulates Pauli magnetic coupling,
Mod. Phys. Lett. A 20, 1227-1238 (2005)
doi:10.1142/S0217732305017494
[arXiv:hep-th/0407086 [hep-th]].
Das:2008kaa
S. Das and E. C. Vagenas,
Universality of Quantum Gravity Corrections,
Phys. Rev. Lett. 101, 221301 (2008)
doi:10.1103/PhysRevLett.101.221301
[arXiv:0810.5333 [hep-th]].
Das:2009hs
S. Das and E. C. Vagenas,
Phenomenological Implications of the Generalized Uncertainty Principle,
Can. J. Phys. 87, 233-240 (2009)
doi:10.1139/P08-105
[arXiv:0901.1768 [hep-th]].
Ali:2011fa
A. F. Ali, S. Das and E. C. Vagenas,
A proposal for testing Quantum Gravity in the lab,
Phys. Rev. D 84, 044013 (2011)
doi:10.1103/PhysRevD.84.044013
[arXiv:1107.3164 [hep-th]].
Ali:2009zq
A. F. Ali, S. Das and E. C. Vagenas,
Discreteness of Space from the Generalized Uncertainty Principle,
Phys. Lett. B 678, 497-499 (2009)
doi:10.1016/j.physletb.2009.06.061
[arXiv:0906.5396 [hep-th]].
Casadio:2014pia
R. Casadio, O. Micu and P. Nicolini,
Minimum length effects in black hole physics,
Fundam. Theor. Phys. 178, 293-322 (2015)
doi:10.1007/978-3-319-10852-0_10
[arXiv:1405.1692 [hep-th]].
Kempf:1994su
A. Kempf, G. Mangano and R. B. Mann,
Hilbert space representation of the minimal length uncertainty relation,
Phys. Rev. D 52, 1108-1118 (1995)
doi:10.1103/PhysRevD.52.1108
[arXiv:hep-th/9412167 [hep-th]].
Garay:1994en
L. J. Garay,
Quantum gravity and minimum length,
Int. J. Mod. Phys. A 10, 145-166 (1995)
[arXiv:gr-qc/9403008 [gr-qc]].
Amelino-Camelia:2000cpa
G. Amelino-Camelia,
Testable scenario for relativity with minimum length,
Phys. Lett. B 510, 255-263 (2001)
doi:10.1016/S0370-2693(01)00506-8
[arXiv:hep-th/0012238 [hep-th]].
Scardigli:1999jh
F. Scardigli,
Generalized uncertainty principle in quantum gravity from micro - black hole Gedanken experiment,
Phys. Lett. B 452, 39-44 (1999)
doi:10.1016/S0370-2693(99)00167-7
[arXiv:hep-th/9904025 [hep-th]].
Scardigli:2003kr
F. Scardigli and R. Casadio,
Generalized uncertainty principle, extra dimensions and holography,
Class. Quant. Grav. 20, 3915-3926 (2003)
doi:10.1088/0264-9381/20/18/305
[arXiv:hep-th/0307174 [hep-th]].
Scardigli:2014qka
F. Scardigli and R. Casadio,
Gravitational tests of the Generalized Uncertainty Principle,
Eur. Phys. J. C 75, no.9, 425 (2015)
doi:10.1140/epjc/s10052-015-3635-y
[arXiv:1407.0113 [hep-th]].
Scardigli:2016pjs
F. Scardigli, G. Lambiase and E. Vagenas,
GUP parameter from quantum corrections to the Newtonian potential,
Phys. Lett. B 767, 242-246 (2017)
doi:10.1016/j.physletb.2017.01.054
[arXiv:1611.01469 [hep-th]].
|
http://arxiv.org/abs/2306.07282v1
|
20230612175948
|
Waffling around for Performance: Visual Classification with Random Words and Broad Concepts
|
[
"Karsten Roth",
"Jae Myung Kim",
"A. Sophia Koepke",
"Oriol Vinyals",
"Cordelia Schmid",
"Zeynep Akata"
] |
cs.CV
|
[
"cs.CV",
"cs.LG"
] |
Waffling around for Performance: Visual Classification with
Random Words and Broad Concepts
Karsten Roth^1, Jae Myung Kim^1, A. Sophia Koepke^1, Oriol Vinyals^2, Cordelia Schmid^3, Zeynep Akata^1,4
^1University of Tübingen, ^2Google DeepMind,
^3Inria, Ecole normale supérieure, CNRS, PSL Research University, ^4MPI for Intelligent Systems
July 31, 2023
==============================================================================================================================================================================================================================================================
The visual classification performance of vision-language models such as CLIP can benefit from additional semantic knowledge, e.g. via large language models (LLMs) such as GPT-3.
Further extending classnames with LLM-generated class descriptors, e.g. “waffle, which has a round shape”, or averaging retrieval scores over multiple such descriptors, has been shown to improve generalization performance.
In this work, we study this behaviour in detail and propose WaffleCLIP, a framework for zero-shot visual classification which achieves similar performance gains on a large number of visual classification tasks by simply replacing LLM-generated descriptors with random character and word descriptors, without querying external models.
We extend these results with an extensive experimental study on the impact and shortcomings of additional semantics introduced via LLM-generated descriptors, and showcase how semantic context is better leveraged by automatically querying LLMs for high-level concepts, while jointly resolving potential class name ambiguities.
Link to the codebase: https://github.com/ExplainableML/WaffleCLIPhttps://github.com/ExplainableML/WaffleCLIP.
§ INTRODUCTION
Task-specific natural language prompts <cit.> improve the performance of large vision-language models (VLMs) <cit.>. However, if the model does not have access to additional training data, i.e. in the zero-shot setting, prompt tuning is not an option.
Instead, a promising alternative <cit.> is querying large language models (LLMs) to provide additional semantic context to enrich class representations. Extending classnames with fine-grained class descriptors generated by GPT-3 <cit.> via minimal human intervention boosts results <cit.>.
In particular, <cit.> use class-based descriptors on top of classnames, e.g. a round shape for waffle, and provide experimental evidence that additional semantic cues obtained this way are beneficial.
However, a closer inspection of GPT-3 generated descriptors indicates a high degree of diversity, limited visual relevance and ambiguity <cit.>. This means that multiple descriptors can get assigned to a class despite them likely not co-occurring, e.g. “steamed” and “fried”,
can contain non-visual attributes, e.g. “a sour and spicy smell”, or can be associated with an ambiguous class interpretation, e.g. “webbed feet” for “Peking duck” as a food item.
Hence, the underlying drivers of performance improvements when using generated fine-grained class descriptors are unclear.
To understand what is required to achieve these performance gains, we first evaluate a variant of <cit.> and show that each set of class-specific GPT-3 generated descriptors can be replaced with a fixed set of randomly selected, class-independent descriptors while still retaining similar benefits in performance.
Motivated by this observation, we take this one step further and propose WaffleCLIP, named after waffling around the class name, which replaces the LLM-generated fine-grained descriptors, e.g. a round shape, a grid pattern, with random words (e.g. "foot loud") or character lists (e.g. "jmhj, !J#m") based on average class name length and word counts (cf. Figure <ref>).
As WaffleCLIP doesn't require access to LLMs for additional context (unlike e.g. <cit.>), it remains inherently zero-shot.
Naturally, the convincing performance of WaffleCLIP across benchmarks raises questions regarding the true benefits of additional semantics introduced by LLM-generated descriptors.
We provide answers with extensive experiments, showcasing that semantic descriptors produced by LLMs have a structurally different and complementary impact on the classification behavior. However, we find this not to be fully driven by the additionally introduced semantics, but rather by a different form of structured noise ensembling.
Instead, we show that actual semantic context is better introduced through coarse-grained, high-level concepts.
Given access to external LLMs, we suggest a query mechanism for GPT-3 to automatically generate these (e.g. food for waffle, peking duck), while jointly resolving issues of context-dependent class label ambiguity for further gains.
In summary, our contributions are:
1) We motivate and propose to use random character and word descriptors to enhance the semantic retrieval process in VLMs (particularly CLIP);
2) we demonstrate that WaffleCLIP yields similar or better zero-shot image classification performances compared to methods reliant on external LLM-generated descriptors;
3) we extensively study the semantic context introduced through LLM-generated descriptors and propose (automatically extracted) high-level LLM-generated concepts as an alternative for better use of semantics while tackling classname ambiguities.
§ RELATED WORK
Image classification with VLMs such as CLIP <cit.> has gained popularity particularly in low-data regimes.
As input prompts have a significant impact on the performance, recent research has focused on the exploration of learnable prompts for the text encoder <cit.>, the visual encoder <cit.> or for both encoders jointly <cit.>.
Alternatively synthetic images generated from the classnames can support image classification <cit.>.
In contrast, we do not tune prompts or query image generation methods, but propose to use prompts containing random characters or words to enhance the zero-shot capabilities of VLMs.
Adding external knowledge to language prompts.
Recently, multiple works have leveraged LLMs to obtain more effective prompts.
<cit.> utilized GPT-3 <cit.> to produce and study lengthy, descriptive sentences that articulate the visual concepts of each category, while
<cit.> generated semantic hierarchies to identify subclasses of categories for zero-shot class prediction. <cit.> used multiple fine-grained LLM-generated class descriptors, which enhance accuracy and appear to provide interpretability by assigning weights to each descriptor.
Similarly, different kinds of descriptions have been used for image classification, by manually crafting descriptions <cit.>, or by utilizing external databases based on Wikipedia <cit.>,
the WordNet hierarchy <cit.>, or the ImageNet-Wiki <cit.>.
Whilst external knowledge from LLMs can be valuable, we can match the image classification performance of using fine-grained LLM-generated descriptors with randomly sampled characters and words as class descriptors. In addition, we find that if semantic context is available through LLMs, it is better integrated through high-level context (c.f. e.g. also <cit.>), for which we provide an automatic extraction mechanism.
Noise augmentation.
Data augmentation through noise is known to enhance the performance and robustness of model training for a variety of tasks and domains <cit.>. In the language domain, noise can be incorporated in the embedding or input space.
For instance, <cit.> used linguistic embedding space augmentations inspired by mixup <cit.>, and <cit.> added Gaussian embedding space noise.
Augmentation through input space noise has been performed at the word- <cit.>, token- <cit.> or character-level <cit.>.
For character-level noise augmentation, characters are randomly substituted, added or removed <cit.>.
In all cases, these augmentations are used to prevent overfitting during training. Instead, our approach utilizes character- and word-level language augmentation to perturb the class prompts for improved zero-shot image classification.
§ METHOD
We first describe image classification using class descriptors following <cit.> (<ref>), before motivating and explaining our LLM-free, random semantic descriptor alternative WaffleCLIP (<ref>). Finally, if LLMs are available, we highlight a simple extension to incorporate semantics while jointly resolving ambiguities via automatic high-level semantic concept extraction for additional benefits (<ref>).
§.§ Image classification with class descriptors
Given target categories C and a query image x, the zero-shot image classification protocol used in CLIP <cit.> defines the classification problem as nearest neighbour retrieval:
c̃ = argmax_c∈ C s(ϕ_I(x), ϕ_L(f(c))),
with prompt f(c) denoting the standard CLIP prompt template filled with the name of class c, and image and language encoders ϕ_I and ϕ_L.
To improve the retrieval process, <cit.> converts the simple class-embedding retrieval to a dictionary-based one, where a class c is associated with a set of descriptors D_c, obtained by appending LLM-generated descriptor phrases to the class prompt. Given D_c for classes c, classification is reformulated as
argmax_c∈ C (1/|D_c|)∑_d∈ D_c s(ϕ_I(x), ϕ_L(d)),
which defines the similarity between image x and class c as the average similarity to all its descriptor variants. We abbreviate this descriptor-based extension of CLIP as DCLIP.
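For concreteness, a minimal sketch of this descriptor-averaged retrieval with the publicly available CLIP package is given below; the descriptor dictionary, descriptor phrasing and image path are illustrative placeholders rather than the exact prompts of <cit.>.

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical descriptor sets D_c (placeholder phrasing)
descriptors = {
    "waffle": ["waffle, which has a round shape", "waffle, which has a grid pattern"],
    "peking duck": ["peking duck, which is roasted", "peking duck, which has crispy skin"],
}

image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img = model.encode_image(image)
    img = img / img.norm(dim=-1, keepdim=True)
    scores = {}
    for cls, prompts in descriptors.items():
        txt = model.encode_text(clip.tokenize(prompts).to(device))
        txt = txt / txt.norm(dim=-1, keepdim=True)
        scores[cls] = (img @ txt.T).mean().item()   # average similarity over D_c

print(max(scores, key=scores.get))                  # predicted class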
§.§ WaffleCLIP
DCLIP <cit.> [DCLIP <cit.> reports improvements over CLIP by using a different base prompt phrase than the one suggested in the original CLIP paper. For fair comparison with CLIP, we utilize the latter.] requires external LLMs for descriptors that convert the single-class matching problem to one over an ensemble of fine-grained class representations.
Motivation. However, we observe that various such LLM-generated descriptors reveal high diversity, limited visual relevance, and ambiguity.
From a conceptual perspective, this makes it hard to pin down the exact benefits of generated class descriptors used e.g. in <cit.> or <cit.>.
To understand a possible driver of performance improvements, we conduct a simple experimental study, shown in Tab. <ref>. We take all available LLM-generated descriptors for a dataset from <cit.>, sample a small set of descriptors where the cardinality of the set is
the average number of descriptors per class used in DCLIP, and assign this same set of random descriptors to every class, i.e. DCLIP (same, 1x).
This shows a close match to DCLIP (e.g. 58.56% and 58.16% for ViT-B/32, 69.14% and 68.80% for ViT-L/14, and 54.77% and 54.71% for ResNet50 in total average) and in parts even better performance (e.g. 0.83%, 0.34%, 1.49% improvement in Food101 for ViT-B/32, ViT-L/14, ResNet50, respectively). This reveals averaging over descriptor variations as one of the key drivers for performance.
The results further improve when increasing the number of random LLM-generated descriptors for each class (DCLIP (same, 2x), e.g. 58.16%→58.29% on ViT-B/32 or 68.80%→69.12% on ViT-L/14). This indicates that the role of additional descriptor semantics is likely overestimated, especially when uncurated descriptors are used.
Building on the benefits of averaging over various prompt variants to extract a better semantic representation estimate of an associated class, we investigate whether fully randomized prompt descriptors can provide similar benefits, without querying external LLMs.
WaffleCLIP. This motivates WaffleCLIP, an LLM-free descriptor alternative that uses simple randomized descriptors.
In particular, we populate D_c with class-independent, random word sequences or random character lists, with a fixed number of characters per word, l_w, and a fixed number of words, n_w (for example, l_w = 4 and n_w = 2 in Figure <ref>).
To avoid introducing hyperparameters, we leverage a simple heuristic where the average number of words and average number of characters per word in the provided class labels determines l_w and n_w.
As a result, this converts the standard CLIP input prompt into its randomized counterpart, where we follow the extension structure used in <cit.>.
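A sketch of this randomized-descriptor construction is given below; the word list passed as vocab and the character set are our own assumptions, and the prompt template into which the descriptors are inserted is not reproduced here.

import random
import string

def waffle_descriptors(classnames, vocab, num_pairs=15, seed=0):
    # Class-independent randomized descriptors (sketch).
    # n_w and l_w follow the heuristic from the text: the average number of words
    # per class name and the average number of characters per word.
    # vocab: any list of real words (an assumption, not the paper's word list).
    rng = random.Random(seed)
    words = [w for name in classnames for w in name.split()]
    n_w = max(1, round(len(words) / len(classnames)))
    l_w = max(1, round(sum(len(w) for w in words) / len(words)))

    charset = string.ascii_letters + string.digits + "!#&?"     # assumed character pool
    descs = []
    for _ in range(num_pairs):
        descs.append(" ".join(rng.choice(vocab) for _ in range(n_w)))                     # random words
        descs.append(" ".join("".join(rng.choices(charset, k=l_w)) for _ in range(n_w)))  # random characters
    return descs   # the same list is appended to every class prompt and averaged over as in Eq. (<ref>)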
§.§ Better semantics and reduced ambiguity via high-level concepts
Due to the limited impact of additional semantics introduced by fine-grained descriptors (c.f. <ref>), we propose an alternative way of querying LLMs, that does not require averaging across multiple descriptors and jointly addresses the issue of class ambiguities.
Therefore, we suggest taking a step back and searching not for additional class details, but instead for higher-level commonalities between the classes, akin to the use of class hierarchies in image classification <cit.>. Understanding commonalities between multiple target classes can help resolve ambiguities. If an ambiguous class name is seen in the context of animal classification, it likely refers to the animal instead of, e.g., a human athlete.
We propose to automatically produce such high-level concepts by using the available class names (or subsets, if the class count exceeds the maximum LLM input sequence length) C_𝒟 for a dataset 𝒟 and querying GPT-3 <cit.> with a fixed question asking which high-level concept the listed class names have in common.
After extracting the shared high-level concept, a simple concept filter checks whether the generated concept falls into a small set of non-specific categories. If so, high-level concept guidance is omitted (only the case for three out of eleven visual classification benchmarks, see also <ref>). We then augment the default CLIP prompt with the extracted concept, and for WaffleCLIP the randomized descriptors are appended on top of this concept-augmented prompt.
While the prompt style can likely be improved, this naive extension already offers remarkable benefits.
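As an illustration, a sketch of the concept extraction and prompt assembly is given below; the query wording, the filter list and the prompt templates are paraphrased placeholders (the exact formulations used in the paper are not reproduced), and query_llm is assumed to wrap any LLM completion API.

# Non-specific concepts that trigger the filter (paraphrased placeholder list)
GENERIC = {"object", "objects", "thing", "things", "item", "items", "category", "categories"}

def extract_concept(classnames, query_llm):
    # query_llm: assumed wrapper around an LLM completion API, returning a short string
    question = ("Q: What do the following have in common: "
                + ", ".join(classnames) + "? A: They are all types of")
    concept = query_llm(question).strip().strip(".").lower()
    return None if concept in GENERIC else concept

def concept_prompt(classname, concept, descriptor=None):
    # Generic prompt assembly (placeholder templates)
    base = f"A photo of a {concept}: a {classname}" if concept else f"A photo of a {classname}"
    return base + (f", which {descriptor}." if descriptor else ".")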
§ EXPERIMENTS
We start with implementation details before comparing WaffleCLIP to DCLIP in <ref>.
Extending our observational experiments in Tab. <ref>, we study the source of performance gains via LLM-generated descriptors (<ref>) and show a better way to introduce semantics into the retrieval process while tackling semantic ambiguities via automatic high-level concept extraction (<ref>).
Finally, <ref> provides additional insights on additional (OOD) benchmarks and a comparison to prompt ensembles and latent space noise. For additional details, see Supplementary.
Implementation details. We utilize CLIP <cit.> as the underlying VLM for WaffleCLIP. As there is no direct cost associated with generating random character or word sequences, their number is only bounded by inference speed requirements (which is minimal as all respective language embeddings can be computed a priori <cit.>). However, we find diminishing returns for very high numbers (see also <ref>), and use 30 random descriptors per class (or 15 random character and word descriptor pairs) if not mentioned otherwise, with similar performance for both half or double the descriptor count (c.f. <ref>).
All experiments use PyTorch <cit.> and are conducted on a single NVIDIA 3090Ti. Wherever necessary, fine-grained LLM-generated descriptors are either taken from or generated following the codebase provided by <cit.>, which we build on. If not mentioned explicitly, every result involving is computed over at least seven random seeds.
Benchmarks. The datasets considered are (mostly from <cit.>) ImageNet <cit.> and ImageNetV2 <cit.>, CUB200-2011 <cit.> (fine-grained bird classification data), EuroSAT <cit.> (satellite image recognition data), Places365 <cit.> with scene imagery, Food101 <cit.> with different food classes, Oxford IIIT Pets <cit.>, DTD (Describable Textures Dataset, <cit.>), Flowers102 <cit.>, FGVCAircraft <cit.> and Stanford Cars <cit.>.
High-level concepts. Following <ref>, GPT-3 generates a suitable high-level concept for each of CUB200-2011, EuroSAT, Places365, Food101 and Oxford Pets. For additional benchmarks, the extracted concepts are noted in the respective section <ref>. For ImageNet (V2) and DTD, the generated concepts are filtered out as too generic, and high-level guidance is omitted.
§.§ vs LLM-generated descriptors
We start by analyzing the impact of randomization beyond fixed, randomized sets of fine-grained LLM-generated descriptors as done in Tab. <ref>, by instead using randomized character or word descriptors through our proposed WaffleCLIP.
For that, we investigate visual classification accuracies across the eight diverse benchmarks studied in <cit.> in Tab. <ref>, where we compare WaffleCLIP, which does not use any external LLMs, with DCLIP. We find that averaging over randomized descriptors yields performances comparable to or better than those obtained with LLM-generated fine-grained descriptors on a majority of the studied datasets, with average performance similarly matching: 58.56% using DCLIP versus 58.57% for WaffleCLIP with a ViT-B/32 backbone and (see Supp.) 69.14%→68.95% for ViT-L/14, and 54.77%→54.20% for ResNet50.
Beyond the inherently zero-shot nature of WaffleCLIP and its ease of use, these results highlight that improved visual classification with pretrained VLMs does not require external LLMs, and further cement prompt averaging as a potential key driver behind DCLIP.
§.§ Are descriptors from LLMs obsolete?
Our results above question the benefits of LLM-generated fine-grained semantics, as averaging over fully randomized character and word sequences achieves comparable performance. But does that mean that there is no benefit in leveraging descriptors produced by LLMs?
Impact of Averaging. To better understand this, we extend our motivational experiments from Tab. <ref>. First, we look at what happens when not performing averaging over all image-descriptor distances as in DCLIP, but instead choosing the maximum. If additional fine-grained semantics were indeed beneficial, selecting the most suitable one should similarly raise the performance. However, as Tab. <ref> reveals, performance actually drops, highlighting that the VLM can not leverage the additional semantics to improve visual classification performance[This is potentially influenced by bag-of-words behaviour of CLIP-like VLMs <cit.>, which we leave to future research to study in more detail.]. Instead, it again points to descriptor ensembling as the main driver in performance.
We further support this by studying additional descriptor randomization variants beyond those in <ref>. In particular, instead of swapping specific descriptors, we interchange full class-specific descriptor lists (interchanged). As descriptions often contain class-specific keywords, this models a systematic semantic shift away from the actual class. Additionally, we evaluate shuffling words within a descriptor list (shuffled), and descriptor lists subsampled from all available ones (random). This gives a progression from systematic to more independent descriptor randomization.
And indeed, our results in Tab. <ref> reveal that directly interchanging full class-dependent descriptor lists (interchanged) drops performance significantly (e.g. from 58.56% to 55.03% on ViT-B/32). In cases where no such shift is happening, we find performances to match that of DCLIP (e.g. 86.54%→86.28% on Oxford Pets). Similarly, when moving from a systematic shift closer to fully randomized descriptors (shuffled with 58.56%→57.55% to random with 58.56%→58.02%, see Supp. for more results), we move closer to DCLIP performance.
While this offers further evidence for WaffleCLIP and for the fact that class-dependent ensembling drives the gains, it does not yet allow us to directly compare the impact on the prediction behavior between LLM-generated descriptors and randomized ones.
Structural differences. We consider the percentages of samples that get positively or negatively flipped - i.e. ones that are classified correctly while previously being classified incorrectly (and vice versa) - when moving from CLIP to either DCLIP or WaffleCLIP in Fig. <ref>. We find that using LLM-generated fine-grained descriptors flips significantly more predictions than randomized words and characters, even in cases where WaffleCLIP outperforms DCLIP. For example, DCLIP achieves 43.29% compared to WaffleCLIP with 44.31% on EuroSAT, or 82.79% to 83.35% on Food101 in Tab. <ref>, but DCLIP still flips a significantly larger portion of samples than WaffleCLIP on those datasets in Fig. <ref>.
This reveals that full sentence, LLM-generated descriptors have a structurally different impact on the zero-shot classification process, which we find to operate complementary to randomized ones (see Tab. <ref>, + GPT descr.), where the use of both descriptor types leads to additional performance improvements over CLIP (e.g. 58.57%→58.93% for ViT-B/32 or 54.20%→55.22% for ResNet50).
This means that even if additional semantics are not the guiding factor, LLMs for structured descriptor generation can still facilitate the extraction of a more robust class embedding. Furthermore, even with access to an external model, WaffleCLIP can provide additional benefits.
§.§ Semantic guidance via high-level concepts
While we verified the relevance of additional semantic context through fine-grained descriptors, methods using additional fine-grained class information <cit.> suffer from the inherent ambiguities in some class names.
As proposed in <ref>, our aim is to understand if high-level semantic context can be used to resolve such ambiguities and provide high-level semantic guidance for the class-retrieval process.
Our results with extracted high-level concepts
in Tab. <ref> (rows marked + Concepts) demonstrate consistent and significant improvements across most benchmarks and backbones, when used with CLIP, with WaffleCLIP, and even when used alongside CLIP and DCLIP.
These improvements are especially evident on benchmarks with ambiguous (e.g. Food101) or generic labeling (e.g. EuroSAT, with labels such as Industrial or Residential): For ViT-B/32, classification accuracy increases from 40.78% to 48.86% when applied to CLIP, with similarly high improvements for ViT-L/14 (56.03%→61.23%) or ResNet50 (28.09→34.06).
Overall, the average classification accuracy also increases consistently (e.g. from 57.34% to 58.96% with ViT-B/32). This beats even DCLIP, while only being applicable on five out of eight benchmarks (58.96% versus 58.56%).
When applied to WaffleCLIP, improvements across most benchmark and backbone settings are also significant, although we find diminishing returns on the largest backbone, ViT-L/14, with average performance increasing only from 68.95% to 69.12%. This might be due to its capability of retaining the most common concepts associated with specific classes, resulting in a robust class retrieval setup when averaging over multiple randomized descriptor variants.
We verify the benefits of high-level semantics further by looking at performance changes when concepts are interchanged (Fig. <ref>). For most benchmarks, the highest improvements are obtained with respective GPT-generated concepts.
Some off-diagonal terms with higher scores do appear, e.g. on CUB200, where the matching concept performs similar to or worse than some non-matching ones, and warrant future research to improve our understanding of how semantic concepts are truly encoded in large VLMs.
However, seeing maximum performances primarily on the diagonal heuristically supports that additional semantics, introduced as high-level concepts and commonalities, can offer reliable guidance. Indeed, considering a selection of ambiguous class names in the Oxford Pets and Food101 datasets, as well as highly generic labels such as Industrial or Residential in the EuroSAT satellite image dataset, we find a consistent increase in average similarity to all associated test images by up to 13%. This confirms that concept guidance can re-align and refine class embeddings based on the respective context.
§.§ Ablation studies
Evaluation on additional (OOD) benchmarks.
For further evidence on the generality of WaffleCLIP and concept guidance, we study three additional benchmarks beyond those in Tab. <ref> and <cit.>: Flowers102 <cit.>, FGVCAircraft <cit.>, and StanfordCars <cit.>, each with its automatically extracted high-level concept.
Our results in Tab. <ref> (and in the suppl. material for other backbones) again show consistent gains when going from CLIP to WaffleCLIP or WaffleCLIP + Concepts.
Interestingly, DCLIP is detrimental on very fine-grained benchmarks like Stanford Cars, losing 1.46% against CLIP.
We speculate that this is due to semantically similar descriptors for multiple classes that are coarser than the actual class label (e.g. two different BMW models being assigned similar generic BMW descriptors). Consequently, embeddings of related classes are systematically moved too close together, deteriorating the performance.
Meanwhile, WaffleCLIP (+ Concepts) can still offer performance boosts (58.54%→58.91%→59.70%).
Furthermore, we study WaffleCLIP on OOD benchmarks: adversarial natural images (ImageNet-A, <cit.>), sketches (ImageNet-S, <cit.>) and renditions (ImageNet-R, <cit.>). Results in Tab. <ref> show that while DCLIP does not improve consistently, WaffleCLIP operates well even for out-of-distribution data (e.g. 29.63%→ 31.52% on ImageNet-A).
Comparison to prompt ensembles. We also compare WaffleCLIP to prompt ensembling (c.f. e.g. <cit.>) with the same budget of 30 randomly selected prompt options from a list of eighty handcrafted ones (taken from <cit.>). Unlike WaffleCLIP, prompt ensembling still requires human input and design. Results on all eleven benchmarks are listed in Tab. <ref> and favor WaffleCLIP, which outperforms prompt ensembling on eight out of eleven benchmarks with comparable performance on the remaining ones.
In particular, the improvements of WaffleCLIP over prompt ensembling are larger than the improvement of prompt ensembling over vanilla CLIP (56.32%→55.58%→55.01%).
This further supports the benefit of extracting more robust semantic representations, for which randomized descriptors provide a cheap and suitable tool.
In addition to that, we highlight the complementarity of high-level concept guidance also with prompt ensembling in Tab. <ref> (wherever the classname is included in a prompt, we simply use the concept-augmented classname instead), raising the average classification accuracy from 55.58% to 56.94% (compared to the base CLIP accuracy of 55.01%).
Comparison to latent space noise. To highlight that a more robust extraction of semantics through class-conditioned randomization on the input level is crucial, we also compare to randomization directly in the (hyperspherical) latent space. For that, we choose a von Mises-Fisher distribution (as commonly utilized to model unimodal distributions on the hypersphere <cit.>):
p(ϕ̂^c|ϕ^c_l, κ) = 𝒞_d(κ)exp(κϕ^c_l^Tϕ̂^c),
centered around default class embeddings ϕ^c_l with constant normalization function 𝒞_d(κ) only dependent on the input dimensionality and concentration κ.
To sample from a vMF distribution around each class embedding, we leverage the sampler utilized in <cit.> with the same budget of 30 noise embeddings. Average classification performance as a function of the (inverse) concentration κ is visualized in Fig. <ref>.
As can be seen, for high concentrations (i.e. random embedding samples are placed close to the mean direction), one can replicate the default performance of a single CLIP embedding. For higher variances, performance continuously drops, with a hard inflection at around κ≈500. This serves to show that class-conditioned randomized descriptors as used in are crucial to providing a more robust estimate of semantic concepts, and cannot be simulated through simple embedding space noise.
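For readers who want to reproduce this comparison, the following minimal sketch perturbs a single unit-normalized class embedding with vMF samples and measures how far the averaged perturbed embedding drifts from the original direction. It assumes a precomputed CLIP class embedding (here replaced by a random vector) and relies on scipy.stats.vonmises_fisher, which is available in recent SciPy releases; it illustrates the setup and is not the exact evaluation code.

import numpy as np
from scipy.stats import vonmises_fisher  # available in recent SciPy versions

def vmf_perturbed_embedding(class_embedding, kappa, n_samples=30, seed=0):
    """Average n_samples vMF draws centered on a unit-norm class embedding."""
    mu = class_embedding / np.linalg.norm(class_embedding)
    samples = vonmises_fisher(mu, kappa).rvs(n_samples, random_state=seed)
    mean = samples.mean(axis=0)
    return mean / np.linalg.norm(mean)  # project back onto the hypersphere

# Illustrative usage with a random 512-d stand-in for a CLIP class embedding
emb = np.random.default_rng(1).normal(size=512)
emb /= np.linalg.norm(emb)
for kappa in (1e5, 1e3, 5e2):  # high -> low concentration
    perturbed = vmf_perturbed_embedding(emb, kappa)
    print(f"kappa={kappa:.0e}, cosine to original: {float(perturbed @ emb):.4f}")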
Dependence on descriptor counts. We study the impact of the randomized word and character sequence pair count for in Fig. <ref>. A value of one indicates a single pair comprising one random-word and one random-character descriptor. We achieve competitive performance already with 4 to 15 descriptor pairs (c.f. DCLIP in Tab. <ref>), while consistently outperforming CLIP (blue line) even with a single randomized descriptor pair. As class embeddings can be computed a priori, the impact on overall inference time is low, making CLIP and its extensions very attractive for enhancing image classification performance of VLMs.
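To make the randomized-descriptor construction concrete, the sketch below builds such word/character descriptor pairs and averages the resulting text embeddings into a single class embedding. The prompt template and the encode_text wrapper (assumed to return unit-norm CLIP text embeddings) are illustrative placeholders, not the exact implementation.

import random
import string
import numpy as np

def random_char_descriptor(n_tokens=2, token_len=5, rng=random):
    return " ".join("".join(rng.choices(string.ascii_lowercase, k=token_len))
                    for _ in range(n_tokens))

def random_word_descriptor(vocabulary, n_tokens=2, rng=random):
    return " ".join(rng.choices(vocabulary, k=n_tokens))

def build_descriptor_pairs(n_pairs, vocabulary, rng=random):
    """Each pair holds one random-word and one random-character descriptor."""
    return [(random_word_descriptor(vocabulary, rng=rng),
             random_char_descriptor(rng=rng)) for _ in range(n_pairs)]

def class_embedding(classname, pairs, encode_text):
    """Average text embeddings over all randomized descriptors of one class."""
    prompts = [f"A photo of a {classname}, which has {d}."  # illustrative template
               for pair in pairs for d in pair]
    embs = np.stack([encode_text(p) for p in prompts])
    mean = embs.mean(axis=0)
    return mean / np.linalg.norm(mean)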
Impact of randomization types. Finally, we analyze how performance changes when either only using random character sequences or only random word sequences, instead of a combination of both as in .
Across benchmarks and architectures (see Tab. <ref>), we observe a dichotomy in performance between random word and random character sequences, which often perform either best or worst on a specific benchmark and backbone, while the joint usage of both random words and character sequences yields the most consistent and transferable average improvement across benchmarks and backbone architectures. Therefore, we chose the joint usage of both random words and characters as our default setup.
§ CONCLUSION
In this work, we systematically examined the benefits of using LLM-generated additional class descriptors for improved training-free image classification with vision-language models (VLMs).
In-depth studies reveal how similar performance gains can be achieved by replacing these LLM-generated descriptors with randomized ones, giving rise to . Even though is entirely zero-shot as it does not require access to external LLMs, across eleven visual classification benchmarks, we find comparable or better results than those obtained when using fine-grained GPT-3 generated descriptors, making very attractive for practical use in true zero-shot scenarios.
We also show that VLMs struggle to leverage the actual semantics introduced through LLM-generated descriptors, and instead show that, if access to external LLMs is given, semantics are better exploited through coarse-grained, high-level concepts. Using specific queries, we show how these can be automatically extracted, while jointly helping to address issues of class ambiguity.
§ ACKNOWLEDGEMENTS
This work was supported by DFG project number 276693517, by BMBF FKZ: 01IS18039A, by the ERC (853489 - DEXIM), and by EXC number 2064/1 – project number 390727645.
Karsten Roth and Jae Myung Kim thank the European Laboratory for Learning and Intelligent Systems (ELLIS) PhD program and the International Max
Planck Research School for Intelligent Systems (IMPRS-IS) for support.
|
http://arxiv.org/abs/2306.10768v1
|
20230619082241
|
Performance Analysis of Spoke Resonators, Statistics from Cavity Fabrication to Cryomodule Testing
|
[
"A. Miyazaki",
"P. Duchesne",
"D. Ledrean",
"D. Longuevergne",
"G. Olry"
] |
physics.acc-ph
|
[
"physics.acc-ph"
] |
PERFORMANCE ANALYSIS OF SPOKE RESONATORS, STATISTICS FROM CAVITY FABRICATION TO CRYOMODULE TESTING
A. [email protected], P. Duchesne, D. Ledrean, D. Longuevergne, and G. Olry,
CNRS/IN2P3/IJCLab Université Paris-Saclay, 91405 Orsay, France
July 31, 2023
=======================================================================================================================================================================
Irène Joliot-Curie Laboratory (IJCLab) has been leading the development of spoke resonators in multiple international SRF projects, from fundamental R&D, prototyping, to series production.
The European Spallation Source (ESS) superconducting linac is the first of its kind to put into operation the spoke resonators.
After three prototype cavities, 29 ESS production cavities have been processed, tested, assembled into cryomodules at IJCLab, and then shipped to Uppsala for the site acceptance test.
Seven prototypes for two other major projects, Multi-purpose hYbrid Research Reactor for High-tech Application (MYRRHA) and Proton Improvement Plan II (PIP-II), designed in collaboration with external institutions, have as well been processed and tested at IJCLab.
A new challenge is to fully process series cavities in industry, following the successful implementation of 1.3 GHz elliptical cavities in the other projects.
This paper summarises main results obtained from fabrication to final testing, including frequency tuning strategy, performance, limitation in vertical cryostat, and identifies future direction of projects and R&D in the field of spoke cavities.
§ INTRODUCTION
Superconducting spoke cavities are the technology of choice for the medium-β section of proton drivers.
Since the late 1980s <cit.>, spoke cavities have been developed and their technology is matured today <cit.>.
However, practical challenges for the deployment of these cavities in real machines need to be identified and overcome.
Unlike the 1.3 GHz TESLA-type cavities, spoke cavities are not sufficiently standardised and there are many open questions towards the successful operation of accelerators.
Irène Joliot-Curie Laboratory (IJCLab) plays a leading role in this crucial matter in international projects.
IJCLab has been pioneering the development of spoke cavities technology from fundamental studies <cit.> to design and prototyping work <cit.> and even deployment in the machines.
In this paper, we overview state-of-the-art technology in developing spoke resonators at IJCLab with three international projects as examples.
The series production of European Spallation Source (ESS) <cit.> double-spoke cavities revealed delicate issues in frequency tuning, including fabrication, chemical etching, and heat treatment.
We discuss how we overcame these issues with statistics obtained during the production.
These ESS cavities have been qualified in cold tests, integrated in cryomodules, and all passed the site-acceptance tests at Uppsala University.
The next challenge of ESS is installation, commissioning, and of course, operation in the machine.
We completed prototyping four single-spoke cavities for the Multi-purpose hYbrid Research Reactor for High-tech Application (MYRRHA) <cit.>.
The next challenge is to industrialise the surface treatment of spoke cavities,
whose complicated shape may require special attention compared to conventional elliptical cavities.
We also show preliminary results on heat treatment in prototype MYRRHA cavities, which may be a breakthrough towards 4 K operation of spoke resonators.
Finally, we started prototyping Single Spoke Resonator 2 (SSR2) for Proton Improvement Plan II (PIP-II).
We discuss preliminary results and trade-off of cavity design between RF performance and cleaning process.
§ SUPRATECH
IJCLab hosts the SUPRATECH facilities, where one can perform chemical treatments, high-pressure water rinsing (HPR), heat treatment, mechanical frequency tuning, cold tests at 4 K and 2 K, assembly of cavity string and cryomodules, and cryomodule testing.
As shown in Fig. <ref>, two spoke cavities can be tested at a time in a vertical cryostat (ϕ800) with a vacuum insert which requires a helium tank welded around the cavity <cit.>.
In SUPRATECH, we do not measure bare cavities today because of the following strategic reasons.
First, we can save a huge amount of liquid helium for cold tests, because helium supply is a global issue for the SRF community today.
Secondly, the small heat capacity of the cryostat enables quick cool-down and warm-up.
As a drawback, however, careful frequency tuning is required in fabrication, surface processing, and even cooling down because we skip the cold test of a bare cavity to check the frequency before welding the helium tank.
This unique tuning and testing strategy has been successful in the ESS project and the same was adopted for other similar projects (MYRRHA and PIP-II SSR2).
Therefore, a global standard of future projects can follow our strategy: accurate frequency tuning and only one cold test in a vertical test-stand.
§ ESS SERIES CAVITIES
ESS is a proton driver in Sweden for neutron science via nuclear spallation.
In the spoke section from 90 MeV to 216 MeV,
it deploys 13 cryomodules, each of which accommodates two double-spoke cavities with 352 MHz.
The target gradient is E_ acc=9 MV/m with an unloaded quality factor Q_0=1.5×10^9 at 2 K.
Since the geometry factor is G=133 Ω and the peak field ratio is 6.9 mT/(MV/m),
the target surface resistance and peak magnetic field are 89 nΩ and 69 mT, respectively.
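As a quick cross-check, the quoted surface resistance follows directly from the geometry factor and the target quality factor: R_s = G/Q_0 = 133 Ω / (1.5×10^9) ≈ 8.9×10^-8 Ω ≈ 89 nΩ.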
Compared to state-of-the-art elliptical cavities,
the target performance is more conservative partially because the field level is somewhat limited by beam dynamics in the spoke section.
However, we achieved performance beyond the specification by one order of magnitude.
Typical Q_0 achieved was above 2×10^10 at 9 MV/m and maximum gradient was above 15 MV/m.
A particular challenge of these cavities was frequency tuning without a bare cavity testing at cold.
The behavior of spoke cavities, with intrinsically complicated shape compared to the conventional elliptical cavities, must be fully understood and controlled in such a very delicate process.
We describe the strategy in fabrication, heat treatment, and chemical etching in the following subsections.
§.§ TUNING AT FABRICATION
The goal of fabrication is to keep the frequency tolerance within ±150 kHz.
The body of the cavity has 5 mm margin for both sides before welding the end-caps there, as indicated by red arrows in Fig. <ref>.
Depending on the frequency, preliminarily measured by clamping these parts, and on the frequency shifts caused by electron-beam welding (EBW), the trimming lengths were decided by IJCLab.
The first leak check was performed after this final EBW and frequency was permanently shifted due to vacuum pumping inside of the cavities.
After the first leak check, a helium jacket was manually welded to the cavity body by gas tungsten arc (TIG) welding.
At the beginning of the series production, frequency shifts by different welders were evaluated on four pre-series cavities in order to estimate the influence of this manual welding of the helium jacket.
The jacket welding of the series cavities was performed by the selected welder and further statistics were recorded.
After the welding, careful machining was performed in order to form the parts, which are dedicated to mounting cold tuners on the cavity during the module assembly process.
The cavity frequency was further shifted by the strong supporting force that held the cavity during machining, which was needed to meet the tight mechanical tolerance required for the tuner.
As summarized in Fig. <ref>, although a few exceptions were observed,
the frequency tuning during the manufacturing process was under control.
§.§ CHEMICAL TREATMENT
Chemical treatment is the main means of fine-tuning the cavity frequency after the manufacturing process.
Ports originally prepared for HPR enabled buffered chemical polishing (BCP) in two orientations.
Horizontal BCP decreases the frequency by -0.62 kHz/μm while vertical one increases it by +0.34 kHz/μm as shown in Fig. <ref>.
Depending on the frequency at the reception,
we optimised the periods of horizontal and vertical BCPs,
in order to meet the frequency tolerance within ±40 kHz.
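As an illustration of how the two orientations can be combined, the short sketch below solves for the horizontal and vertical removal depths that reach a target frequency correction within a given total removal budget; only the two sensitivities are taken from the measurements above, while the budget and frequency offset are illustrative.

import numpy as np

SENS_HORIZONTAL = -0.62  # kHz per micrometre removed (horizontal BCP)
SENS_VERTICAL = +0.34    # kHz per micrometre removed (vertical BCP)

def split_bcp_removal(delta_f_khz, total_removal_um):
    """Solve d_h + d_v = total_removal and
    SENS_H*d_h + SENS_V*d_v = delta_f for the two removal depths."""
    A = np.array([[1.0, 1.0],
                  [SENS_HORIZONTAL, SENS_VERTICAL]])
    b = np.array([total_removal_um, delta_f_khz])
    d_h, d_v = np.linalg.solve(A, b)
    if d_h < 0 or d_v < 0:
        raise ValueError("target frequency shift not reachable with this budget")
    return d_h, d_v

# Example: a cavity arriving 30 kHz high with a 150 um bulk-removal budget (illustrative)
d_h, d_v = split_bcp_removal(delta_f_khz=-30.0, total_removal_um=150.0)
print(f"horizontal BCP: {d_h:.0f} um, vertical BCP: {d_v:.0f} um")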
After heat treatment,
light BCP was performed to remove surface contamination generated during annealing.
This light BCP is mainly in the vertical orientation;
however, fine-tuning by additional horizontal BCP was sometimes necessary due to unexpected frequency detuning caused by the heat treatment as described in the next subsection.
§.§ HEAT TREATMENT
After bulk BCP, heat treatment was performed in order to degas hydrogen and avoid Q-disease.
The annealing parameters were optimised to 650^∘C for 10 hours because higher temperature was not necessary due to little gain by flux expulsion as described in Ref. <cit.>.
Since the annealing temperature is marginal, some flanges were even copper-brazed in advance.
The frequency shift by heat treatment showed an unexpected behavior as seen in Fig. <ref>.
The shift is statistically distributed around +10 kHz with a substantial standard deviation of 32 kHz.
The heat treatment either increases or decreases the resonant frequency of cavities unpredictably.
This may be due to the helium jacket (made of titanium) being annealed together with the niobium cavity, releasing mechanical stress accumulated over its material history.
We did not know the stress level of this titanium jacket at the stage of heat treatment.
In the series production of the ESS cavities,
we made use of horizontal and vertical BCPs to compensate for the unexpected frequency shift.
In a few cases, we performed mechanical tuning by pressurising the helium circuit.
For more details, see Ref. <cit.>.
§.§ COLD TESTS
In order to qualify the RF performance of the cavities, cold tests were performed after HPR.
All 29 cavities passed the tests although a few of them required several iterations with HPR and sometimes even light BCP to remove contaminants causing field emission.
We obtained excellent performance in all the cavities as shown in Fig. <ref>, with low-field surface resistance ranging from 2 to 7 nΩ.
In order to evaluate the trapped flux sensitivity for the series cavities,
two cavities (DSPK07 and 17) were measured without active compensation of the ambient magnetic field and still met the project specification.
This is consistent to the dedicated studies with prototype cavities <cit.>.
Another concern was the fact that the cavity mounted at the lower side might capture more contamination than the upper one, because the lower one got cold even when the upper was still at room temperature.
Nevertheless, as shown in Fig. <ref>, we did not find significant systematic differences in the cavities tested at upper or lower positions of the vacuum insert.
Note that no HPR was performed between the cold test and module assembly[HPR was performed for two cavities DSPK06 and 13 exceptionally after the cold test and field emission in DSPK23 observed in the test was successfully removed in the cryomodule. Although the pick-up antennas were carefully re-mounted after the HPR, the field calibration could be potentially uncertain. However, no major impact was observed in the site acceptance test.].
Therefore, the pick-up antenna was not touched until the site acceptance tests so that the calibrated field to power values were preserved.
This may provoke a concern about degradation of field emission onset in the cryomodule,
but we did not observe any substantial increase of X-rays in the site acceptance test at Uppsala University <cit.>.
This evidences that ICJLab's strategy was successful.
§ MYRRHA PROTOTYPE CAVITIES
A proton driver in MYRRHA will provide a high-power proton beam to an accelerator-driven subcritical nuclear reactor.
The first stage of the accelerator is composed of 60 single spoke cavities to provide protons at 100 MeV.
IJCLab developed four prototype single spoke cavities in collaboration with SCK-CEN.
The cavities will be operated at 352 MHz and its target gradient is at 9 MV/m including fault tolerance during the reactor operation.
The peak field ratio of MYRRHA's single-spoke cavities, 7.3 mT/(MV/m), is slightly higher than that of ESS, while the geometrical factor is slightly lower, G=109 Ω.
The RF performance is shown in Fig. <ref> plotted on top of all the ESS series cavity results in a gray scale.
The MYRRHA prototype cavities showed performance as excellent as the ESS cavities, and therefore the future series production looks promising.
Note that the peak magnetic field and surface resistance are slightly different in ESS and MYRRHA due to geometrical factors, but this does not have any impact in this conclusion.
The next challenge in the MYRRHA project is full industrialization of all the production process including chemical, heat treatments, and HPR.
This is one of the major objectives of the pre-series cavity development led by SCK-CEN.
IJCLab contributes to vertical tests of the pre-series cavities as well as giving practical advice based on our long experience in spoke cavity development for ESS.
Note that such industrialization has been successful in 1.3 GHz TESLA-type elliptical cavities, but its application to spoke cavities is highly non-trivial due to their fundamentally complicated shape, intrinsically required for RF performance at low β.
§ BAKING OF SPOKE CAVITIES
During the series production of ESS double-spoke cavities,
IJCLab did not perform conventional baking after HPR except for very mild heating (120^∘C, 3 h) for just drying water from the surface and reducing multipacting (MP).
Baking at 120^∘C for 48 h is known as low temperature (low-T) baking <cit.> and can improve the accelerating gradient and Q_0 in elliptical cavities at 1.3 GHz.
The former is thanks to removing the high-field Q-slope so that the peak magnetic field exceeds 100 mT (around 25 MV/m for typical elliptical cavities).
The latter is mainly thanks to lowering loss contributions from thermally excited quasi-particles on dirtier surface, so-called BCS resistance R_ BCS.
Consequently, low-T baking has been included in the standard procedure of other projects such as the International Linear Collider.
However, as a byproduct, a temperature-insensitive component, so-called residual resistance R_ res usually increases.
Because of this issue, the spoke cavities do not necessarily benefit from this standard process.
The field levels of low-β cavities are limited by beam dynamics even if the ultimate field is improved.
Typically, for the spoke cavities, around 9 MV/m or maximum 12 MV/m is the field level required by the projects.
Clearly, one does not need to remove the high-field Q-slope with low-T baking.
Moreover, at 2 K, the low frequency (below 400 MHz) leads to R_ BCS < 1 nΩ because R_ BCS has an approximately parabolic dependence on the RF frequency.
In this case, R_ res dominates the loss, so that low-T baking may even deteriorate the unloaded quality factor at low field.
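As a rough check of this statement, the parabolic scaling gives R_ BCS(352 MHz)/R_ BCS(1.3 GHz) ≈ (0.352/1.3)^2 ≈ 0.073, so a 2 K BCS resistance of the order of 10 nΩ, typical for 1.3 GHz elliptical cavities, indeed translates to well below 1 nΩ at 352 MHz (assuming comparable material parameters).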
When one takes into account the 4 K operation,
the benefit of baking should be re-evaluated.
Since R_ BCS is higher than 50 nΩ at 4 K with 352 MHz,
the unloaded quality factor can be significantly improved by baking.
Figure <ref> shows the substantial improvement of the cavity performance at 4 K after low-T baking.
Although the MYRRHA project is primarily designed for the 2 K operation,
even the 4 K cavity performance after low-T baking met the specification of the machine.
This is a potential breakthrough in the future spoke cavity technology <cit.>.
Considering the recent progress in various baking methods beyond the conventional low-T baking,
we suppose that the medium temperature baking (mid-T baking; 200-400^∘C) may be the most promising as the new research direction.
IJCLab developed an excellent vacuum furnace <cit.> and we plan to perform fundamental studies in baking spoke cavities in coming years by using a spare ESS series cavity and prototype MYRRHA cavities.
§ PIP-II PROTOTYPE CAVITIES
The Proton Improvement Plan II (PIP-II) is an international project, hosted by Fermilab, to build a proton driver for answering questions of fundamental physics, such as the Dirac CP phase in neutrinos, muon physics, and the dark sector <cit.>.
The project includes two types of spoke cavities, SSR1 (β=0.22) and SSR2 (β=0.47).
IJCLab has been strongly involved in the SSR2 section since the design phase <cit.> and has agreed to an in-kind contribution for the SSR2 production phase <cit.>.
One objective of the PIP-II SSR2 cavities is the same as for MYRRHA, i.e., industrialization including fabrication and surface processing.
IJCLab performs the vertical tests of cavities fully prepared by the manufacturer and evaluates the quality of their surface preparation.
When field emission is observed,
we perform our own surface treatment, fully qualified by ESS series cavities, and provide feedback to the manufacturer.
The design of SSR2 is based on lessons learned from SSR1 and ESS cavities.
As is well known, substantial MP is one of the major challenges of spoke cavities.
Figure <ref> shows the MP bands of the ESS cavities before conditioning.
Clearly, the MP bands even cover the nominal accelerating gradient, which is a typical field level for the spoke cavities.
This differentiates spoke cavities from the elliptical cavities whose MP bands are usually sufficiently lower than the operational gradient.
The potential concern is any unexpected influence in stability during accelerator operation even if the MP bands are conditioned in advance during machine commissioning.
In the prototype SSR2, the cavity structure was designed to avoid MP bands at the nominal field level at the expense of a slight degradation of peak field ratios <cit.>,
being inspired by the balloon spoke cavities developed in TRIUMF <cit.>.
IJCLab has tested three prototypes of this design with preliminary results shown in Fig. <ref> <cit.>.
Surprisingly, we observed a deterministic field emission whose onset has been systematically around 4-5 MV/m in all the prototype cavities.
Moreover, MP bands at the low fields are substantially more difficult to condition compared to the ones in ESS and MYRRHA cavities.
The cause of the field emission is traced back to the fact that the designed SSR2 shape, optimized to avoid MP in the nominal fields, is so different from previous cavities that existing HPR tooling cannot cover the whole surface of the cavity.
During the design phase, a better RF performance was prioritized.
Fermilab and IJCLab are currently optimizing the HPR tooling to completely clean the complex surface of this spoke cavity.
We are forced to spend about three times longer to pass the MP bands at low field.
Although the low-field MP bands were well predicted during the design phase,
the strength and conditioning dynamics of MP are not predictable with today's simulation codes.
Moreover, we are starting R&D of preventive plasma processing during surface preparation that will definitely improve and speed up MP conditioning.
These preliminary results and discussions may imply important trade-offs among RF performance, surface cleaning, cold tests and operation for successful implementation of new cavities of complicated shape.
For example, as mentioned in this paper, the ESS series cavities are equipped with additional ports for HPR.
These ports enable horizontal and vertical BCP and therefore they offer thorough surface cleaning as well as fine tuning of the cavity frequency.
However, the ultimate reach of the RF performance is slightly degraded by such ports, which influence the geometrical factors.
The MP bands around the operational gradient are certainly of great concern but they have been easily conditioned (within 30 minutes) in the cold tests at IJCLab.
On the contrary, lower field MP needs several hours to overcome, even by experts of cavity measurement.
We could optimise RF performance but we might lose something else as a side effect.
These are important research subjects in the next years about global optimization in the spoke cavity technology.
§ CONCLUSION
IJCLab successfully controlled the frequency shifts of 29 series double-spoke cavities for the ESS project.
All the challenges in fabrication and processing were identified and solved so that the final frequency tolerance met the specification.
The ESS cavity performance was sufficiently beyond the project's specification.
All the ESS series cavities were assembled into cryomodules and passed the site acceptance test at Uppsala University and are being installed in the ESS tunnel.
The prototype single-spoke cavities for the MYRRHA project also showed very promising performance and the pre-series cavities are being fabricated by industry.
The new challenge is to industrialize chemical processing, heat treatment and HPR, following the recent success in the LCLS-II project for 1.3 GHz elliptical cavities.
The PIP-II SSR2 cavities are still in the prototyping phase.
Similar to the MYRRHA cavities, IJCLab plays a leading role in industrializing all the processes of cavity preparation.
One major difference from ESS and MYRRHA is its optimised RF design to avoid the MP bands at the nominal fields.
MP bands are known to be problematic in spoke cavities.
Although this design challenge revealed another issue concerning surface cleaning,
we are optimising the cleaning process for this new geometry and will give feedback to the industry.
Another research subject is on baking spoke cavities and we pave the way to their operation at 4 K.
§ ACKNOWLEDGMENT
We greatly appreciate the invaluable contributions from the FREIA laboratory during the series production of ESS spoke-cavity cryomodules.
We would like to acknowledge with appreciation the crucial role of colleagues from ESS.
We are deeply grateful to SCK-CEN and Fermilab for their leadership and cooperation in the MYRRHA and PIP-II projects, respectively.
Last but not least, we thank all the technical staff, administrative colleagues, and in particular students, without whom the project and R&D would have not and would not be feasible at all.
|
http://arxiv.org/abs/2306.03604v3
|
20230606114909
|
Enabling Intelligent Interactions between an Agent and an LLM: A Reinforcement Learning Approach
|
[
"Bin Hu",
"Chenyang Zhao",
"Pu Zhang",
"Zihao Zhou",
"Yuanhang Yang",
"Zenglin Xu",
"Bin Liu"
] |
cs.AI
|
[
"cs.AI"
] |
Enabling Intelligent Interactions between an Agent and an LLM: A Reinforcement Learning Approach
Bin Hu, Chenyang Zhao, Pu Zhang, Zihao Zhou, Yuanhang Yang, Zenglin Xu, and Bin Liu
July 31, 2023
==============================================================================
Large language models (LLMs) encode a vast amount of world knowledge acquired from massive text datasets. Recent studies have demonstrated that LLMs can assist an agent in solving complex sequential decision making tasks in embodied environments by providing high-level instructions. However, interacting with LLMs can be time-consuming, as in many practical scenarios, they require a significant amount of storage space and can only be deployed on remote cloud server nodes. Additionally, using commercial LLMs can be costly since they may charge based on usage frequency. In this paper, we explore how to enable intelligent cost-effective interactions between the agent and an LLM. We propose a reinforcement learning based mediator model that determines when it is necessary to consult LLMs for high-level instructions to accomplish a target task. Experiments on 4 MiniGrid environments that entail planning sub-goals demonstrate that our method can learn to solve target tasks with only a few necessary interactions with an LLM, significantly reducing interaction costs in testing environments, compared with baseline methods. Experimental results also suggest that by learning a mediator model to interact with the LLM, the agent's performance becomes more robust against partial observability of the environment. Our code is available at https://github.com/ZJLAB-AMMI/LLM4RL.
§ INTRODUCTION
Solving complex sequential decision making tasks in embodied environments requires logical reasoning ability. One common solution to such problems is using reinforcement learning (RL), where the agent interacts with the environment and learns from feedback. Despite recent progress in deep RL, efficiently and safely solving complex sequential decision making problems remains a challenge <cit.>. As an alternative, the recent emergence of large language models (LLMs) shows promise for solving such problems. Previous work has demonstrated that LLMs possess reasoning abilities <cit.>. Some researchers have attempted to use LLMs' reasoning abilities to solve various embodied tasks, such as robot manipulation tasks <cit.> and playing video games <cit.>. As shown in Figure <ref>, these studies utilize LLMs as explicit planners to assist agents in making high-level decisions, such as choosing whether to pick up a can of coke or an apple for the next step.
While integrating pre-trained LLMs into embodied agents has been explored, enabling the agent to efficiently interact with an LLM to solve real-world problems remains a challenge. Uncertainties present in such problems can lead to unexpected errors when executing high-level decisions given by LLMs. Consider a scenario where a robot is collecting a can of coke from the kitchen and finds that the door is unexpectedly locked. Ideally, the robot agent should return to the LLM planner with the newly acquired information regarding the locked door and request a new plan. In such cases, deciding when to return to the LLM planner is crucial. If the agent fails to pause and ask for a new plan in time, it may impede the progress of completing the target task or even lead to safety issues, such as damaging the door or the robot itself. Conversely, if the agent frequently requests plans from the LLM, it can be time-consuming and expensive, especially when using commercial LLMs that may be deployed on remote cloud server nodes and charge based on usage frequency.
In this paper, we investigate methods for enabling intelligent cost-effective interactions between the agent and an LLM that is deployed on a remote cloud server node. Our objective is to facilitate effective completion of a target task with minimal communication costs due to interactions with the LLM. Specifically, we adopt a Planner-Actor-Mediator framework, similar to <cit.>, where the planner is a pre-trained LLM used for making plans, the actor contains policies for executing the plans, and the mediator serves as an interface in between by deciding when to request a new plan and generating observation representations for the planner (which are text descriptions in the case of using an LLM planner). With a focus on optimizing interacting timings, we use reinforcement learning to learn an asking policy that instructs the agent to either adhere to the current plan or request a new plan from the LLM.
We evaluate our approach using 4 partially observable MiniGrid environments <cit.>, which require the agent to explore the environment and react to newly acquired information. Experimental results demonstrate that our approach can effectively balance the desired task performance with the communication costs associated with using an LLM. Specifically, it achieves competitive task performance with minimal necessary interactions with the LLM. Additionally, we find that our approach performs more robustly against partial observability of the environments in two scenarios, where the agent needs to handle newly acquired information and unexpected errors, respectively, when providing subsequent plans.
To summarize, our main contributions include:
* We propose an RL approach to coordinate the interaction between the agent and the LLM based on the Planner-Actor-Mediator framework <cit.>.
* We have thoroughly validated through experiments the significant superiority of our approach over baseline methods.
* We have made our code open source to facilitate future research on the applications of LLMs.
§ BACKGROUND
§.§ The Options Framework
In RL, sequential decision-making problems are typically formalized as a Markov decision process (MDP), denoted by ℳ = ⟨𝒮, 𝒜, p, r, γ⟩. Here, 𝒮 and 𝒜 represent the state and action spaces, respectively. p(s'|s,a) is the transition probability function, r(s,a) is the reward function, and γ is the discount factor. The objective is to learn how to act in a way that maximizes the cumulative return over the time horizon: max∑_t γ^t r(s_t, a_t).
Solving complex sequential decision-making tasks often requires planning, acting, and learning temporally extended actions over different time scales. To address this challenge, the options framework was introduced to represent courses of actions <cit.>. Formally, an option ω is a 3-tuple ⟨ℐ_ω, π_ω, β_ω⟩, where ℐ_ω is the initial state set for the option, π_ω is the acting policy, and β_ω is the termination condition. Given a state s, a policy-over-options would select an option from the set of options, ω∈Ω. Then, the agent would plan for low-level actions following its current option policy a∼π(·|s, ω) until the option's termination condition β_ω is satisfied. In this work, we use a pre-trained LLM as the policy-over-options to plan for high-level options.
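A minimal sketch of how an option and its execution loop can be represented in code is given below; the names and the gym-style env.step interface are assumptions for illustration, not taken from the paper's implementation.

from dataclasses import dataclass
from typing import Any, Callable

State, Action = Any, Any  # environment-specific types

@dataclass
class Option:
    init_set: Callable[[State], bool]     # I_omega: states where the option may start
    policy: Callable[[State], Action]     # pi_omega: low-level acting policy
    termination: Callable[[State], bool]  # beta_omega: when the option ends

def run_option(env, state, option, max_steps=100):
    """Execute one option until its termination condition (or a step limit) is met."""
    for _ in range(max_steps):
        if option.termination(state):
            break
        state, reward, done, info = env.step(option.policy(state))
        if done:
            break
    return state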
§.§ LLM as a Planner
Recent work has demonstrated that LLMs have achieved significant success in various tasks within embodied environments. Generally, LLMs serve as planners that generate a sequence of options given descriptions about observations and tasks. The plan is then executed by the option policies in turn. More formally, with text descriptions as input prompts, the LLM outputs the plan as a list of options [ω_k]_k=1,...,K to take. The actors then output the low-level actions to perform at each time step, following the option policy π(a|s; ω_k). The actor policies π_ω can be either hard-coded or pre-trained with RL.
With LLMs being powerful tools for generating plans, several previous works have focused on designing the interface between the planner and actors to make better use of this tool. <cit.> deployed LLMs to help plan the entire option sequence at the beginning of each task and complete the task without further interaction with its planner. <cit.> introduced a closed-loop feedback system and terminated execution to ask the LLM for a re-plan when new relevant information or unexpected failure was observed during the executive phase. Consequently, the acting agent can be more robust to uncertainties in the environment. However, such methods rely on hard-coded failure detectors, such as detecting if the number of time steps of an option exceeds a maximum threshold. On the other hand, concurrent with our work, <cit.> designed a Planner-Actor-Reporter framework where a reporter module is implemented to help exchange information between the actor and the LLM-based planner. In this framework, the agent interacts with the LLM at every step, regardless of whether new information is acquired or not. This helps ease the requirement of hard-coded termination conditions and deal with uncertainties during executing an option. However, it consumes more resources, especially when the planner LLM is large-scale and expensive to call. To this end, we propose letting the agent learn how to interact with the LLM more intelligently in a cost-effective way. By asking the LLM for help only when necessary, the agent will be capable of completing the given task with as few interactions with the LLM as possible.
§ OUR METHOD
In this section, we present our agent design aimed at solving complex tasks in embodied environments that require exploration and action planning. The agent can interact with an LLM for help, while also taking into account the interaction costs. Our method is built upon the Planner-Actor-Mediator framework <cit.>. In particular, we design an RL-based mediator model within this framework to enable more intelligent cost-effective interactions between the agent and the LLM.
§.§ The Planner-Actor-Mediator Framework
This framework consists of three components, as illustrated in Figure <ref>: the planner, the actor and the mediator. The planner provides high-level instructions, the actor generates actions to follow these instructions, and the mediator serves as an interface between them.
We introduce these components conceptually here, while referring readers to subsection <ref> for implementation details for each component in our experiments.
Planner The planner reads the descriptions of the current state in the form of text and plans for the next high-level option or a list of options to perform. We use a pre-trained LLM as the planner that provides high-level skill instructions for the actors. Every time the planner is activated, the LLM generates an option plan given the descriptions of the current observation and properly designed prompts.
Actor
The actor plans for the low-level actions to follow the instructed option, e.g., “go to the red door” or “pick up the yellow key”. For this work, we consider that these option policies are hard-coded with human expert knowledge. However, these policies may also be pre-trained with option-conditioned reward functions for more complex skills.
Mediator
In this work, we mainly focus on designing an intelligent mediator component, compared with previous works <cit.>.
In our approach, we use RL to train an explicit asking policy that decides when to interact with the planner. Specifically, our mediator component consists of two sub-components: an asking policy that decides whether to ask the planner for a new plan or not, given observations and the option, and a translator that converts observations into text descriptions that are readable by the LLM. Following <cit.>, we assume an expert translator is ready for use. The translating module may also be replaced by a learned model <cit.>.
§.§ Learning the Mediator with RL
Communicating with the LLM requires a significant amount of computational resources, including power and time. Ideally, the agent should ask the LLM for a new plan only when it discovers new informative observations, i.e., when the LLM would potentially return a different plan based on newly acquired information. To achieve this, we apply an additional penalty term in the reward when the mediator selects to ask the LLM for a new plan, and the LLM's plan remains unchanged, i.e., a non-informative interaction.
Denoting the asking policy as π^ask and its parameters as θ, we train it with a standard on-policy RL method, PPO <cit.>, using the following objective function:
max_θ∑_t=1 [γ^t r_t - λ·1(y_t = Ask ∧ ω_t = ω_t-1)].
Here, y_t∈{Ask, Not Ask} denotes the decision made by the mediator at time t, r_t the reward, ω_t the planned option provided by the LLM, and λ the penalty factor. Note that ω_t is set to the plan of the previous time step if the decision is Not Ask, i.e., ω_t = ω_t-1 if y_t = Not Ask. At every iteration, the data are collected on-policy with the model π^ask_θ.
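In practice, this objective can be implemented by shaping the per-step reward of the asking policy and letting PPO handle the discounting; the sketch below is one such minimal implementation, with the callable names (asking_policy, translator, llm_planner) being hypothetical placeholders.

LAMBDA = 0.1  # penalty weight for non-informative LLM queries (illustrative value)

def shaped_reward(env_reward, decision, new_option, prev_option, lam=LAMBDA):
    """Penalize an 'Ask' that returns the same plan as before (a non-informative query)."""
    non_informative = (decision == "Ask") and (new_option == prev_option)
    return env_reward - lam * float(non_informative)

def mediator_step(obs, prev_option, asking_policy, translator, llm_planner):
    """One mediator decision: either query the LLM planner or keep the current option."""
    decision = asking_policy(obs, prev_option)  # "Ask" or "Not Ask"
    if decision == "Ask":
        option = llm_planner(translator(obs))   # request a (possibly unchanged) plan
    else:
        option = prev_option                    # keep following the current plan
    return decision, option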
§ IMPLEMENTATION DETAILS & EXPERIMENTAL SETUP
In this section, we present implementation details of our method in the context of our experiments. We designed 4 partially observable environments based on MiniGrid <cit.>. These environments require the agent to have the capability to perceive, reason and control in order to solve target tasks. To begin with, we introduce the testing environments. Then we present implementation details of our method. Finally, we present the baseline methods involved in our experiments.
§.§ Environments
In MiniGrid environments, an agent must navigate in a 2D grid room and interact with specific objects, in order to complete different tasks, such as “open the red door” or “put the green ball next to the yellow box”. The agent is equipped with only limited view range, such that it also needs to first explore about the environment and collect useful information for further planning. More specifically, the environment returns as observation the full grid information with unexplored area occluded (e.g., “fog of war” in StarCraft). Technically, the environment returns an observation of shape o ∈ℝ^W× H× 4, where W and H are the width and height of the full grid respectively. For an unexplored grid at location [w,h], the observation returns [-1,-1,-1,-1]. For an explored grid , the corresponding 4-dimensional vector contains all information about that grid: its object ID, color ID, state ID (e.g., closed or locked if it is a door), and an agent direction ID (direction ID if the agent is at this location, or 4 otherwise). With such designs, we aim to focus on the reasoning ability of the agent and exclude the potential influence of other factors, such as memorization. Figure <ref> provides an example of this environment setup in the environment of SimpleDoorKey.
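The observation encoding described above can be sketched as follows; the object/color/state IDs used in the example are illustrative placeholders rather than the exact MiniGrid tables.

import numpy as np

UNEXPLORED = np.array([-1, -1, -1, -1])
NO_AGENT = 4  # direction-channel value when the agent is not on this cell

def empty_observation(width, height):
    """Fully unexplored W x H x 4 grid, as described in the text."""
    return np.tile(UNEXPLORED, (width, height, 1))

def reveal_cell(obs, w, h, object_id, color_id, state_id, agent_dir=NO_AGENT):
    """Mark one explored cell with [object ID, color ID, state ID, direction ID]."""
    obs[w, h] = [object_id, color_id, state_id, agent_dir]
    return obs

# Example: reveal a locked door at (3, 5) and the agent next to it facing right
obs = empty_observation(8, 8)
obs = reveal_cell(obs, 3, 5, object_id=4, color_id=3, state_id=2)               # illustrative IDs
obs = reveal_cell(obs, 2, 5, object_id=1, color_id=0, state_id=0, agent_dir=0)  # agent cell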
For our experiments, we consider the task of opening a locked door in four distinct environments: SimpleDoorKey, KeyInBox, RandomBoxKey, ColoredDoorKey. In the basic setup SimpleDoorKey and KeyInBox, the room contains only one key and one exit door. The key is located on the floor (SimpleDoorKey) or in a box (KeyInBox). The agent needs to explore to locate the target door and the key/box, then pickup the key, and finally unlock the target door with the carried key. In RandomBoxKey environment, the key is randomly put on the floor or in a box when the room is generated. The agent needs to interactively plan based on the feedback from the environment, i.e., change its plan according to whether the agent observe a key or a box. Finally, in ColoredDoorKey, the room contains multiple keys and only one exit door. Each key and the corresponding door are color-coded, requiring a key of matching color to unlock the door. All environments are generated procedurally, with the grid layout (such as room size, key and door locations) randomly determined each time the environment is reset. Additionally, a held-out test set consisting of 100 arbitrarily selected random seeds is pre-defined for each environment.
§.§ Implementation Details
Planner As shown in <cit.>, LLMs require carefully designed prompts and few-shot demonstrations for generalizing to different tasks. In our experiments,
we provide task instructions and few-shot examples as in-context prompts for each environment. In addition, to help the LLM solve difficult reasoning tasks, we compose Chain-of-Thought prompts for ColoredDoorKey <cit.>. Please note that these few-shot examples are only used to ground task knowledge (e.g., a door can only be unlocked with the key of the same color) and constrain the output formats. The LLM needs to reason about the target task with its embedded knowledge, in order to generalize to different scenarios with various objects and colors. Figure <ref> provides the prefix prompts and one interaction example in ColoredDoorKey, where the LLM planner successfully generates a correct plan given novel observations. Full details about prompts for all environments are included in Appendix <ref>.
We use two different versions of the Vicuna model <cit.> - a set of open-source LLMs trained by fine-tuning LLaMA <cit.> - as LLM planners in our work: a Vicuna-7b model for SimpleDoorKey, KeyInBox, and RandomBoxKey, and a Vicuna-13b model for the most complex environment, ColoredDoorKey. In addition, we design a communication application interface implemented with the FastAPI framework in a RESTful style. More details can be found in our open-source code repository: https://github.com/ZJLAB-AMMI/LLM4RL.
Actor The actor contains a set of pre-defined option policies. The option set is {explore, go to an object, pick up an object, toggle an object}. More details are included in the Appendix.
Mediator As described in Section <ref>, the mediator contains two separate components: an asking policy and a translator. In this work, we use an expert translator and train a neural network for the asking policy. More specifically, the asking policy is provided with the observations at the current and previous frame. We take the difference between the two frames before passing it to the network, so that it is encouraged to take the ask action only when something changes in the environment. The network consists of 3 CNN layers, followed by 2 MLP layers, and outputs the logits of {ask, not ask} for each option. The output dimension of the network is therefore 2× K, where the (2k-1)-th and 2k-th entries together decide the action distribution for option k. K is the size of the option set here.
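A sketch of such an asking-policy network is shown below (PyTorch, with hypothetical layer widths; the adaptive pooling is our assumption to make the head independent of the grid size, not a detail from the paper).

import torch
import torch.nn as nn

class AskingPolicy(nn.Module):
    """3 conv layers + 2 MLP layers, outputting 2*K logits (Ask / Not Ask per option)."""
    def __init__(self, n_options, in_channels=4, hidden=128):
        super().__init__()
        self.n_options = n_options
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # assumption: pool so any W x H grid works
        )
        self.mlp = nn.Sequential(
            nn.Flatten(), nn.Linear(32, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_options),
        )

    def forward(self, obs, prev_obs, option_id):
        # obs, prev_obs: B x W x H x 4 tensors; option_id: LongTensor of shape (B,)
        x = (obs - prev_obs).permute(0, 3, 1, 2).float()     # frame difference, B x 4 x W x H
        logits = self.mlp(self.conv(x)).view(-1, self.n_options, 2)
        return logits[torch.arange(x.shape[0]), option_id]   # Ask/Not-Ask logits for the current option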
§.§ Baselines
We consider following baselines, for comparison with our approach in our experiments:
Hard-coded Timing and conditions for requesting new instructions from LLMs are hard-coded by human experts for each option in <cit.>. The agent will only request a new plan from the LLM planner when the option termination conditions are met. These conditions include both a goal-finishing detector and a constraint on the maximum number of timesteps allowed. For example, for the option “go to the red door”, the termination condition is that the agent reaches the target door location or exceeds 100 timesteps spent on this option. Appendix <ref> provides termination conditions for all options. However, we argue that hard-coded termination conditions are unable to take advantage of newly acquired information during option execution. Additionally, these conditions may be weak to uncertainties in other components of the framework, potentially leading to suboptimal performance if the actors or LLMs are imperfect.
Always In <cit.>, the LLM planner is consulted at every step, ensuring that newly acquired information is immediately relayed to the planner. This strategy theoretically guarantees better task performance due to zero-delay between gathering new information and requesting a re-plan. However, it consumes significantly more interaction resources than other methods.
Random At each timestep, the agent has a fixed 50% probability of deciding to ask the LLM for instructions.
Planner Learned with RL In the original options framework, the policy-over-options (i.e., the planner) is learned using RL on data collected during interactions with the environment <cit.>. In this work, we compare our framework with a RL-learned policy-over-options without any LLM involvement to evaluate the contribution of using an LLM as the planner.
§ EXPERIMENTAL RESULTS
As discussed in previous sections, we hypothesize that explicitly learning an asking policy within the planner-actor-mediator framework can benefit the agent in two ways: (1) avoiding wasting resources on non-informative interactions with LLM, and (2) improving task performance by interactively changing acting plans. With our designed experiments, we aim to investigate the following research questions:
* Can our agent solve target tasks with less interaction costs?
* Can our agent actively ask an LLM in exploratory environments?
* Can our learned mediator perform robustly against uncertainties in other components of the Planner-Actor-Mediator framework?
* What is the performance gain provided by using the LLM, compared with a pure RL based options framework?
§.§ Can our agent solve target tasks with less interaction costs?
We compared our approach with baseline methods. Figure <ref> summarizes both the communication costs (top row) and task performances (bottom row) across all four environments. As shown, using our approach, the number of interactions with the LLM is reduced while maintaining the task performance across all four environments. Moreover, our approach can maintain high success rates throughout the learning process. This suggests that the asking policy only learns to cut off non-informative interactions with the LLM, while maintaining the essential ones.
§.§ Can our agent actively ask an LLM in exploratory environments?
After analyzing how the agent performs when it is expected to propose asking the LLM planner for help, it was observed that the baseline with hard-coded asking policy achieved significantly lower success rates than other methods. This happened because the agent continued to execute every option until its termination condition was satisfied, even though it had already collected enough information for the task. This resulted in a waste of time on each option and a failure to complete the task within the given time limit.
On the other hand, our approach and other baselines were able to early-stop any options when necessary and achieve 100 percent success rates in SimpleDoorKey and KeyInBox.
In a specific scenario within the domain of ColoredDoorKey, as shown in Figure <ref>, the agent was taking the explore option and had just acquired information about the location of the yellow key (frame 2). In the hard-coded baseline approach, the agent would continue with the explore option until fully exploring the entire room. However, in contrast, the proposed approach would propose asking the LLM planner about what to do next with the current information and stop exploring to immediately pick up the yellow key.
This example highlights the effectiveness of the proposed approach in recognizing when to ask for help from the LLM planner and making more efficient decisions based on the available information, resulting in better performance.
§.§ Can our mediator perform robustly against uncertainties in other components?
In the complex environment ColoredDoorKey, the Always baselines would fail in certain special corner cases due to imperfections of other components. Figure <ref> provides an example scenario in ColoredDoorKey. Consider the Always baseline approach. In the first frame, the agent is instructed to go to then pick up the key. After taking a left turn to drop the carried purple key (frame 2), the LLM instructs the agent again with go to then pick up the key, where the agent should continue with picking up the yellow key. This failure case occurs because the hard-coded translator fails to encode the information about the relative position between the agent and the target object. Specifically, the translator returns [observed yellow key, observed yellow door, carrying purple key] for both frames 1 and 2.
In contrast, the proposed approach learns not to ask for help in this case and allows the agent to finish picking up the yellow key before asking for further instruction. This highlights the advantage of the proposed approach over the baselines, as it is capable of adapting to situations where the other components may not be perfect.
As illustrated in Figure <ref>, the proposed approach gradually outperforms the baseline methods in terms of task success rates. This outcome suggests that the learned mediator is capable of learning about the behaviors of the other components within the framework, leading to more robust performances in complex environments.
§.§ Comparison with a baseline RL
In an ablation study, the proposed approach was compared with an RL baseline to investigate the importance of the reasoning ability of the pre-trained LLM. The RL baseline learns the planner, i.e., policy-over-options, without involving any communication overhead with the LLM. As summarized in Table <ref>, even in the simplest environment, SimpleDoorKey, the RL baseline struggles to complete the task using a fixed number of training iterations. This suggests that it is challenging for an RL agent to learn to solve these tasks from scratch.
In these embodied environments, the agent needs to learn how to explore the environment, reason about the relationships between different objects, and plan optimal actions to complete tasks. With the help of the LLM, an agent can take advantage of the world knowledge embedded in the LLM to significantly reduce the difficulties in solving these tasks. Therefore, the results of the ablation study support the notion that the reasoning ability of the pre-trained LLM is crucial for achieving higher performance in complex environments.
§ CONCLUSIONS
In this paper, we aim to enhance the efficiency and cost-effectiveness of the interaction between an agent and an LLM in embodied environments. We assume that an LLM model is available while interacting with it is costly.
We propose an RL-based mediator model within the Planner-Actor-Mediator framework <cit.>. Our model enables the agent to interact with the LLM in a more intelligent way than the baseline strategies. We evaluate our approach using four partially observable MiniGrid environments <cit.>. Results demonstrate that, with our approach, the agent can explore the environment and respond to perceived new information in a more reasonable way. Specifically, it learns when to initiate or maintain interaction with the LLM and when to rely on its own learned skills without requiring LLM interaction. Furthermore, we found that the agent exhibits greater robustness by maintaining only a few necessary interactions with the LLM, compared to frequent and intensive interactions. We make our code open source for future research and intend to test our approach with larger LLMs and more complex embodied tasks in future work. Additionally, we plan to adapt our model for other scenarios, such as that considered by <cit.>, where LLM can be used for commonsense knowledge reasoning.
§ APPENDIX
§ OPTIONS SET
The Planner-Actor-Mediator framework allows for the separate learning of each component. In this work, we focus on learning the mediator component while using a hard-coded set of options. In this section, we provide a detailed description of the option set, with an emphasis on the termination conditions used in the hard-coded baselines presented in Section <ref>.
For our MiniGrid environments, we employ the following set of options: [explore, go to {object}, pick up {object}, toggle {object}]. All options can be initiated from any state, i.e., ℐ_ω = 𝒮 for any option. Additionally, all options have a maximum length of 100 steps, after which the agent will terminate the option.
During exploration (option explore), the agent follows a fixed strategy of scanning the unexplored grid row-by-row and finishes when it has observed walls forming a closed area. When using the go to option, the agent plans the path to the target object using the A^* algorithm and terminates the option upon reaching the target. During the pick up option, the agent will attempt to pick up the target object in front of it if it is not already holding another object; otherwise, it will first drop the current object at the nearest position before picking up the new one. The toggle option is a one-step action that attempts to interact with the object in front of the agent.
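A compact way to express these hard-coded termination checks is sketched below; the helper predicates and the option attributes are hypothetical names used only for illustration.

MAX_OPTION_STEPS = 100

def terminated(option, state, steps):
    """Hard-coded termination conditions for the baseline mediator (sketch)."""
    if steps >= MAX_OPTION_STEPS:
        return True
    if option.name == "explore":
        return room_enclosed_by_walls(state)      # hypothetical helper
    if option.name == "go_to":
        return at_position(state, option.target)  # hypothetical helper
    if option.name == "pick_up":
        return carrying(state, option.target)     # hypothetical helper
    if option.name == "toggle":
        return True  # one-step action
    return False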
§ PPO HYPERPARAMETERS
§ TEST RESULT DETAILS
Here we present corresponding numerical results shown in Figure <ref>, see Tables <ref>-<ref>.
§ FULL DETAILS ABOUT PROMPTS FOR ALL ENVIRONMENTS
Here we provide a complete description of the LLM planner's prompting process for our four distinct environments: SimpleDoorKey, KeyInBox, RandomBoxKey, ColoredDoorKey.
|
http://arxiv.org/abs/2306.02546v1
|
20230605023948
|
LmPa: Improving Decompilation by Synergy of Large Language Model and Program Analysis
|
[
"Xiangzhe Xu",
"Zhuo Zhang",
"Shiwei Feng",
"Yapeng Ye",
"Zian Su",
"Nan Jiang",
"Siyuan Cheng",
"Lin Tan",
"Xiangyu Zhang"
] |
cs.SE
|
[
"cs.SE"
] |
Purdue University, West Lafayette, USA
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
Decompilation aims to recover the source code form of a binary executable. It has many applications in security and software engineering such as malware analysis, vulnerability detection and code reuse. A prominent challenge in decompilation is to recover variable names. We propose a novel method that leverages the synergy of a large language model (LLM) and program analysis. Language models encode rich multi-modal knowledge, but their limited input size prevents them from providing sufficient global context for name recovery. We propose to divide the task into many LLM queries and use program analysis to correlate and propagate the query results, which in turn improves the performance of the LLM by providing additional contextual information.
Our results show that 75% of the recovered names are considered good by users and our technique outperforms the state-of-the-art technique by 16.5% and 20.23% in precision and recall, respectively.
LmPa: Improving Decompilation by Synergy of Large Language Model and Program Analysis
Xiangyu Zhang
July 31, 2023
==================================================================================
§ INTRODUCTION
Decompilation aims to reverse engineer a binary executable, which often has no debugging or symbol information, to a source code form that is close to its original source and human-understandable. During compilation, variables at the source level are transformed to registers and memory locations at the binary level; type information is discarded; statements are broken down to instructions, relocated, and even removed; code structure may be reformed; functions may be inlined; and function boundaries, data and code boundaries are no longer explicit <cit.>.
Decompilation attempts to reverse these transformations, which has various challenges including disassembly <cit.>, variable and type recovery <cit.>, code structure recovery <cit.>, function boundary recovery <cit.>, and name recovery <cit.>. =-1
Decompilation is critical in many security and software engineering tasks. For example, it is often the first step for malware analysis <cit.>, in which human analysts inspect malware code to understand their behaviors. It is important for binary vulnerability analysis where analysts want to identify critical bugs in executables <cit.>,
for software supply chain analysis
<cit.>,
and for code reuse in which legacy executables may need to be ported or hardened <cit.>.
Its importance is evidenced by the popularity of decompilation tools such as IDA <cit.> and Ghidra <cit.>, e.g., in
security threat analysis <cit.>.
There is a large body of existing work on binary reverse engineering and decompilation <cit.>. The state-of-the-art disassembling methods achieve over 95% precision and recall <cit.>; type recovery techniques can achieve over 90% precision and recall for primitive types <cit.> and over 70% for user defined types <cit.>; and function boundary recognition can achieve 97.1% precision and recall <cit.>.
However, the state-of-the-art name recovery method DIRTY <cit.> only achieves 17.3% precision and 10.9% recall according to our experiment (Section <ref>).
Name recovery is arguably one of the most valuable steps in decompilation, because natural language artifacts such as identifier names are crucial for human developers to effectively apprehend a piece of code. Yet, name recovery tends to be more challenging compared to a few other tasks.
In addition to the challenges induced by the aforementioned compilation transformations, a large number of intermediate variables are introduced at the binary level and may not have correspondence to any source variable; function names and variable names are application and context dependent such that machine instructions with few syntactical differences may have substantially different names.
In DIRTY <cit.>, researchers proposed to use language models to infer variable types and names.
They trained a transformer model using a large repository of executables and their ground truth symbol information, and then used the model to generate type and name information for a program decompiled by IDA. Note that IDA decompilation focuses on recovering basic control structure and hence largely lacks type or name information. Before DIRTY <cit.>, there were a number of proposals using various deep learning <cit.> and probabilistic graph model based name generation <cit.>. More discussions are in the related work section.
DIRTY's performance degrades when the complexity of subject binary increases.
This is due to a number of limitations in existing language models. For instance, they only support inputs with a limited size.
Hence, DIRTY can only infer names for one function at a time and hardly consider calling contexts. In addition, although the training repository used in DIRTY has 75,656 binaries,
it may not be large enough to leverage the true benefits of language models. In comparison, ChatGPT was trained on massive natural language and programming language corpora including Wikipedia, digital books, GitHub, StackOverflow, and so on <cit.>.
In this paper, we develop a novel name recovery
technique leveraging the synergy between pre-trained large language model (LLM) and program analysis. LLMs are usually trained on enormous datasets with multiple modalities, including source code, binary code, and natural languages. The scale of their training is the key to their impressive performance <cit.>. We hence propose to build on the success of SOTA pre-trained LLMs to achieve generalizability. In the meantime, existing LLMs still have the aforementioned input size limitation. For example, ChatGPT allows at most 4,096
tokens at a time. We propose to break the procedure of name recovery for a program down into multiple queries to an LLM and use program analysis to chain them together. The procedure is iterative, allowing the LLM to gradually improve over time.
Specifically, we develop a name propagation algorithm that has a similar nature to type inference. Assume in one round of query, the LLM is able to derive a meaningful name for some decompiled variable within the queried code snippet (called the query window), the name can be propagated to other places in the program outside the query window, following strict program semantics. This allows LLM queries in future rounds to have more contextual information. For example, a newly generated callee function name is propagated to the invocation sites in its callers. To tolerate the non-determinism of LLM-generated names, our analysis abstracts the name of a variable to a set, instead of a singular identifier. After convergence, the distribution in the set naturally informs us the most likely name for the variable.
Our contributions are summarized as follows.
* We propose a novel approach to name recovery for binary executables. It features an iterative algorithm involving using both an LLM and a program analysis.
* We develop a systematic method to construct LLM queries. The method has the capability of including up-to-date information collected from previous rounds of queries.
* We develop a name propagation analysis that can propagate predicted names by the LLM to other places and even construct new meaningful names.
* We devise a post-processing step that filters out meaningless names and selects appropriate names from the analysis results after convergence.
* We have implemented a prototype, LmPa. We evaluate it on 1258 functions from 6 popular binary analysis benchmarks.
Our user study shows that 75% of the names recovered by LmPa are considered good, while the number for DIRTY is 6%.
Using an automatic metric based on name similarity, LmPa achieves 33.85% precision and 31.12% recall, substantially outperforming DIRTY, which has 17.31% precision and 10.89% recall. It takes on average 8 LLM queries to name variables in a function. The total fee for our experiments is only 30 USD. Our ablation study shows that if we directly query the LLM without the program analysis, the precision and recall degrade to 31.04% and 18.21%, respectively.
§ MOTIVATION AND OVERVIEW
We use a motivating example to illustrate challenges in decompilation, as well as the limitations of state of the arts.
We then present our method.
§.§ Motivating Example
The example is adapted from two functions
in Coreutils <cit.>.
The source code of the functions is shown in Fig. <ref>.
Function c_tolower() (defined at line <ref>) converts an input character to its lower case. Specifically, it is implemented with a switch-statement (line <ref>). If the input is an upper case letter, the function converts it to lower case (line <ref>); otherwise the input character is returned unchanged (line <ref>).
Function c_strcasecmp() (defined at line <ref>) takes as input two strings and compares them in a case-insensitive fashion.
It declares four variables (lines <ref>-<ref>): p1 and p2 are pointers to the next characters to be compared in the two input strings, respectively; c1 and c2 are two temporary variables holding the compared characters in lower cases.
The function iteratively compares each pair of characters (at the same position) in both strings (line <ref>), and stops at the first difference. It finally returns the difference
(line <ref>).
Note that before character comparison,
the function calls c_tolower() (lines <ref>-<ref>) to convert both characters to lower cases.
Challenges in Decompilation
We compile the example with GCC and the option -O0 (i.e., no optimization), resulting in a binary program. Then we remove the debugging information and symbol information from the binary, following a typical real-world reverse-engineering scenario <cit.>.
We further use IDA <cit.> to decompile the binary program, and part of the results are shown in Fig. <ref>.
During compilation and deployment, the symbol information and high-level code structures in the source code are lost.
For example, Fig. <ref>a shows
the decompiled form of c_tolower() by IDA <cit.>, and the corresponding assembly instructions are shown in the grey box.
We can observe that (1) the function name c_tolower and the variable name c are not preserved in the binary; (2) the switch-statement in lines 2-9 of Fig. <ref> is translated to comparison instructions like
line 37 in the grey box of Fig. <ref>a;
(3) the expression - `A' + `a' (at line <ref> in Fig. <ref>) is simplified to + 0x20 (at line 39 in the grey box
of Fig. <ref>a). Without any symbol or structural information from the original source code, the decompiled code is not similar to the original source, but rather a direct translation of the assembly code, which is difficult to understand.
Similarly, Fig. <ref>b shows the decompiled form of c_strcasecmp() and its corresponding assembly code. Note that the variables and callee functions do not have meaningful names. For example, the variable p1 at line <ref> of Fig. <ref> is stored in r12 at line 23 in the grey box of Fig. <ref>b. IDA thus fills in a dummy name v2. Also, the callee function c_tolower() is now invoked by its address 0x4BECE3. The decompiler gives it a dummy name sub_4BECE3.
Without a mechanism for recovering meaningful names, it is hard to understand the decompiled function.
Limitations of State-of-the-Art Methods
DIRTY <cit.> leverages a transformer model to predict types and names of variables in decompiled programs. Although it demonstrates impressive results in type recovery, its name recovery results are limited.
In our motivating example, it does not produce any names different from those that are already in the decompiled code.
There are two possible reasons. First, limited by the input size of transformer models, DIRTY handles one function at a time and it does not support information sharing across functions. Second,
DIRTY assumes binaries still have function names and uses such names in training.
Function names provide strong contextual information as the model can learn the typical variable names used in a function with a particular name.
For example, in its training data, line 7 in Fig. <ref>b would be something like v6 = c_tolower(*v2).
However in practice, stripped binaries do not have function names.
Therefore, the transformer model cannot pick up enough context, and thus can hardly generate meaningful variable names.
Another line of work leverages probabilistic graph models <cit.> (PGM) to rename decompiled variables with names seen in the training data. We test our motivating example on DEBIN <cit.>, a representative technique in this line.
It could not generate desirable names either.
For example, in c_tolower(), it gives variable c a name index.
PGMs can be considered a more powerful form of Bayesian networks. They model type and name predicates of program artifacts as nodes, e.g., a predicate isInt(x) asserting x is of int type, and edges denote statistical dependences between nodes, which are acquired by program semantics and training.
Typing and naming patterns are hence learned and encoded as weight values in the PGMs.
However, their results heavily depend on the quality of training data, and PGM inferences are largely local, lacking an ability similar to the attention mechanism in transformers.
In our case, the decompiled code body of c_tolower() is too simple and does not provide much hint for DEBIN. However, in the caller c_strcasecmp(), individual characters in a string are passed to variable c in order. Such behavior pattern has been seen and encoded by the PGM, but it was connected with an id of index.
We show the full function with predicted names from DEBIN in Fig. <ref> of our supplementary material.
§.§ Our Technique
Existing techniques suffer from the relatively limited scale of their training.
We thus propose a technique that builds on the recent advances in large language models. LLMs <cit.> are typically trained on multi-modal data of an enormous scale.
They demonstrate superior capabilities in many natural language tasks and coding tasks <cit.>. However, they only allow input of a limited size. Our idea is hence to
query an LLM many times, requesting names for separate code snippets of a program, and use program analysis to propagate and filter query results.
The process is iterative, meaning that information acquired from past queries is used to provide additional contextual information for future queries, improving the LLM's performance.
We use ChatGPT as our underlying LLM, but our technique can be easily generalized to other LLMs (e.g., GPT-4 <cit.>).
Query to LLM.
ChatGPT is an online chat-bot that mimics the dialogue behavior of a human. Its input and output are natural language sentences.
To leverage ChatGPT in generating variable names, we have to: (1) formulate the problem into natural language questions; and (2) automatically parse ChatGPT's response and associate the suggested name with the corresponding variable in code.
We show in Fig. <ref> an example of how LmPa queries ChatGPT to rename function c_tolower(). The blue and green boxes are LmPa's query and ChatGPT's response, respectively.
At the beginning, LmPa briefly describes the task of predicting names in the decompiled code.
Then the decompiled function is attached. After that, LmPa enumerates each variable and specifies the response format requirements.
As shown in Fig. <ref>, ChatGPT follows the format requirements in its response, and thus LmPa can post-process ChatGPT's answer by recognizing the format.
Fig. <ref>a and Fig. <ref>b show the two functions in our motivating example, with the variables and functions renamed according to ChatGPT's initial response (using the ChatGPT website between March 6–10).
For function c_tolower(), we can see that ChatGPT mistakenly considers it as a function converting digits to ASCII code, which shares some common behavior patterns with the target function.
The suggested name input_parameter for variable c is not that informative either.
On the other hand, for function c_strcasecmp(), ChatGPT produces a close name, compare_strings, while missing the case-insensitive part. The predicted variable names in this function are of good quality too (e.g., string1 for s1, string1_pointer for p1, and string1_char for c1).
We speculate that the good results are due to the sufficient context, namely, the pairwise comparison of array elements (lines 5–11), the comparison with literal number 0 (line 8) to break the loop, and the return value that reports the first difference.
Iterative Name Propagation.
To leverage ChatGPT's success in one place to improve its performance in other places such as c_tolower(),
we further propose a name propagation technique that iteratively propagates names between functions.
The key insight is that some functions might be easier for ChatGPT to understand. Information (e.g., variable/function names) derived from these functions can provide better context for other functions.
The insight aligns with how a human reverse engineer understands a binary program <cit.>.
She typically starts from functions with special literals or well-known program idioms. The information from these functions will help her understand the other connected parts.
Take Fig. <ref>c as an example of name propagation. LmPa adds a code comment at the beginning of the queried function. The comment describes how the function is used in its caller.
As depicted by the red dashed arrows and the red boxes, LmPa leverages the name of the caller function (i.e., compare_strings) and the name of the argument variable (i.e., string1_pointer) to compose a comment, propagating the newly acquired contextual information.
Readers may be curious why we use comments to propagate information instead of directly setting function and variable names.
The reason is that ChatGPT often refuses to generate new names if variables already have non-trivial names in the code. Using comments does not have such restraint.
Note that using comments in natural language to convey program analysis results to the chat bot is a unique capability enabled by the underlying LLM.
In Fig. <ref>c, the changes of ChatGPT's response are highlighted in light yellow.
With the additional context,
ChatGPT realizes that function c_tolower() takes as input a character, and further correctly recognizes the functionality of this function is converting a character to its lower case. Based on the correct functionality, ChatGPT generates a better name (i.e., input_char) for variable c.
Similarly, in the third round shown in Fig. <ref>d, LmPa conversely propagates the name convert_to_lowercase() back to its caller.
ChatGPT then generates a more precise name for c_strcasecmp() (see part of the function name in yellow). This time, the case insensitive part of the function name is recovered. The example illustrates the power of LLMs, the importance of name propagation, and the gradual improvement through multiple iterations.
§ METHOD
The overall workflow of LmPa is shown in Fig. <ref>. It takes as input a binary program, and outputs the decompiled program with recovered names.
LmPa first leverages IDA to decompile the input binary program to C code, and then iteratively queries ChatGPT to generate names for functions and variables in the C code.
Specifically, after the decompilation, LmPa first generates prompts for each function in the input C program (step 1 in Fig. <ref>), and then queries ChatGPT with the generated prompts via the ChatGPT API <cit.>, one function at a time (step 2).
After LmPa obtains responses from ChatGPT, it parses the natural language outputs and maps the names proposed by ChatGPT back to the C code (step 3). Then a program analysis (Name Propagator in Fig. <ref>) is applied to propagate good names among functions. How a name is judged good based on its confidence will be discussed in Section <ref>.
The results of propagation are further leveraged to construct the next round queries to ChatGPT (step 4), enabling improvement over time.
After convergence, the final results are further processed by selecting the most appropriate names from those that were ever predicted over the multiple rounds
(step 5). In the following, we discuss more details.
§.§ Formalization of Problem
This section illustrates how we formulate the problem of name generation for decompiled programs. We first introduce a simple language and the abstract domains
(for the program analysis)
to facilitate the discussion.
Then we show the iterative algorithm LmPa uses to refine variable names.
Language.
To simplify the discussion, we use a simple language to model the decompiled C code.
Our implementation is based on the Clang-AST parser, and supports most commonly-used C syntax in decompiled functions.
The definition of our language is shown in the top part of Fig. <ref>.
A program in our language consists of a list of function declarations. Each declaration consists of an identifier for the function (Id), a list of arguments (Args), and the function body (S).
We use identifiers such as Id to refer to the dummy names (e.g., v2) in the decompiled program.
Our language has three types of statements: S_1;S_2 is used to concatenate two statements; E_1 E_2 is the assignment statement; return E is used to return values to caller functions. The definitions for expressions are standard: Id and Lit are expressions referring to an identifier and a literal, respectively; E_1 ♢ E_2 denotes a binary operation over two operand expressions; and Id(E_1, E_2,...) is a function call expression.
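To make the grammar concrete, the following Python sketch shows one possible encoding of the language and of the abstract domain introduced next; the class names are our own illustrative choices, not LmPa's actual data structures.

from dataclasses import dataclass
from typing import Dict, List, Tuple, Union

# Expressions: identifiers, literals, binary operations, and calls.
@dataclass
class Id:
    name: str            # dummy identifiers produced by the decompiler

@dataclass
class Lit:
    value: object

@dataclass
class BinOp:
    op: str
    lhs: "Expr"
    rhs: "Expr"

@dataclass
class Call:
    callee: Id
    args: List["Expr"]

Expr = Union[Id, Lit, BinOp, Call]

# Statements: concatenation S1;S2 is modeled as a Python list of statements.
@dataclass
class Assign:
    target: Id
    value: Expr

@dataclass
class Return:
    value: Expr

Stmt = Union[Assign, Return]

@dataclass
class FuncDecl:
    name: Id
    args: List[Id]
    body: List[Stmt]

# Abstract domain NS: (function id, identifier) -> candidate predictions,
# each a (confidence, name) pair.
Pred = Tuple[str, str]
NameScheme = Dict[Tuple[str, str], List[Pred]]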
Abstract Domains.
We show LmPa's abstract domains in the bottom part of Fig. <ref>. Our program analysis aims to derive information in these domains.
LmPa maintains a key-value mapping from an identifier to a list of its candidate names (NS in Fig. <ref>).
Note that the key consists of a pair of identifiers. The first one denotes the function in which the name of the second identifier is predicted.
LmPa has a mechanism to force ChatGPT to report its confidence when predicting a new name. Thus each predicted name (Pred) has both a confidence (Conf) and a name (Name).
Algorithm.
Algorithm <ref> shows how LmPa iteratively queries ChatGPT.
The entry function of LmPa is defined at line <ref>.
It begins from an empty name scheme (line <ref>), and adds new names to the name scheme in each iteration (line <ref>).
In each iteration, LmPa first goes over each function and asks ChatGPT to generate name predictions (line <ref>). Then the propagation rules are applied to each function (line <ref>) to obtain better contexts from high-quality names in other functions.
Then LmPa updates the query program according to the results of propagation (line <ref>).
Finally, before returning the name scheme to the user, LmPa picks one name for each variable (line <ref>).
The name propagation sub-procedure (line <ref>) is guaranteed to terminate, because the set-based abstract domain, ordered by set inclusion over a finite universal set, forms a finite lattice.
In theory, Algorithm <ref> terminates as well if ChatGPT can only generate a finite set of names. In practice, we employ an early termination policy when a round of new queries yields fewer than 10% changes.
Another policy is to limit the number of rounds by a query budget.
As LLMs' responses are nondeterministic by nature, ideally we would repeat each query a few times. However, our robustness study in Section <ref> shows that name predictions are stable when ChatGPT is given the same queries, and hence the repetitions are elided in our implementation to save query budget.
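Putting the pieces together, the loop below is a minimal sketch of the algorithm. The callables passed in stand for the components detailed in the following subsections, and the names and default values are assumptions rather than the actual implementation.

def recover_names(functions, render, query_llm, propagate, update_queries,
                  select_names, max_rounds=10, min_change=0.10):
    """functions: decompiled functions; the remaining arguments are the prompt
    renderer, the LLM query/parse step, the name propagation analysis, the
    query update step, and the final name selection, respectively."""
    name_scheme = {}                               # (func id, var id) -> [(conf, name)]
    queries = {f: render(f) for f in functions}

    for _ in range(max_rounds):
        new_preds = 0
        for fid, prompt in queries.items():
            for var, (conf, name) in query_llm(prompt).items():
                cands = name_scheme.setdefault((fid, var), [])
                if (conf, name) not in cands:
                    cands.append((conf, name))
                    new_preds += 1
        good = propagate(name_scheme, functions)    # fixed point over the rules
        queries = update_queries(functions, good)   # rename callees, add comments
        # early termination: fewer than 10% of entries changed in this round
        if new_preds < min_change * max(len(name_scheme), 1):
            break
    return select_names(name_scheme)                # weighted majority voting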
§.§ Interaction with ChatGPT
Both the input and output of ChatGPT are natural language sentences. Thus the key challenge is to formulate the problem of name generation into natural language questions, and to automatically parse ChatGPT's responses.
Our solution is to use a prompt template to enumerate each variable we want ChatGPT to predict, and ask ChatGPT to follow a specific output format.
Prompt Generation. As shown in Fig. <ref>, to query for a function, LmPa first describes the task with a few natural language sentences, followed by the decompiled C code.
Then LmPa enumerates individual variables in the function and sends the query. We observed that ChatGPT may miss some variables when the question is too general, e.g., “What are the good names for all variables in the above function?”
If a function has many variables, LmPa groups them into two separate queries to prevent the prompt from exceeding ChatGPT's token limit.
Note that LmPa also asks for function names.
In addition to names, LmPa guides ChatGPT to report the confidence for each prediction.
This is because ChatGPT may generate dummy names (e.g., “function_input_argument”) or randomly pick irrelevant names when it cannot predict a good name from the context. LmPa prunes out these low-quality names by confidence.
Specifically, in its prompts, LmPa instructs ChatGPT as follows:
You MUST mark your confidence as
`Confident' or `Not Sure' for each name. If you are confident about a name, you should mark it as `Confident'. Otherwise, if you are not sure about a name, you should mark it as `Not Sure'.
Then LmPa simply filters out all the predictions that are marked as Not Sure in post-processing. We observe that ChatGPT may overestimate its confidence for sub-optimal names but rarely underestimates it.
For example, in our motivating example, ChatGPT marks the (wrongly) predicted name convert_to_ascii as Confident.
LmPa alleviates this problem by considering the name candidate distributions returned by ChatGPT over multiple iterations. Details are in Section <ref> of the supplementary material.
Finally, LmPa requires ChatGPT to output names in a machine-readable format.
Without the output format requirements, ChatGPT tends to generate its answers in natural language, or even give a rewritten version of the program.
Post-processing. Although LmPa specifies the output format, ChatGPT's answers still exhibit some variance. We manually craft a set of regular expressions for LmPa to parse the output, and retry the query one more time if the output format cannot be correctly read. Typically, we observe fewer than 3% format errors.
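The sketch below illustrates the shape of such a prompt and of the regular-expression-based post-processing. The exact wording and the answer format shown here ("old -> new (Confidence)") are assumptions for illustration, not LmPa's literal template.

import re
from typing import Dict, List, Tuple

def build_prompt(decompiled_code: str, variables: List[str]) -> str:
    """Assemble a query for one function: task description, code, variable list."""
    return "\n".join([
        "The following C function was decompiled from a stripped binary.",
        "Suggest a meaningful name for the function and for each listed variable.",
        "You MUST mark your confidence as 'Confident' or 'Not Sure' for each name.",
        "Answer one item per line in the form: <old name> -> <new name> (<confidence>)",
        "",
        decompiled_code,
        "",
        "Variables to rename: " + ", ".join(variables),
    ])

ANSWER_RE = re.compile(r"^\s*(\w+)\s*->\s*(\w+)\s*\((Confident|Not Sure)\)", re.M)

def parse_response(text: str) -> Dict[str, Tuple[str, str]]:
    """Map each old identifier to a (confidence, predicted name) pair.
    'Not Sure' entries are kept here and filtered out in post-processing."""
    return {old: (conf, new) for old, new, conf in ANSWER_RE.findall(text)}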
§.§ Name Propagation
LmPa's name propagation is similar in nature to type inference, in which the known types of some variables are used to derive types for other variables, following program semantics. For example, given a statement x=y where x has a known type of int, a type inference algorithm can determine that y also has an int type.
In LmPa, good names for a variable are leveraged to derive good names for other variables. Initially, all high-confidence names from ChatGPT are considered good names, and literals are assigned good names of their own textual forms. A set of rules is used to propagate good names. For instance, a good callee function name is propagated to its invocation sites in callers. New good names may be constructed for an expression only involving operands with good names.
Different from type inference, name propagation is inclusive, meaning that a variable may have multiple good names. Therefore, the propagation monotonically derives more and more relations.
Relations and Auxiliary Functions.
To facilitate the discussion, we define a few relations and functions in Fig. <ref>.
A good name is represented by a relation.
Specifically, GoodNameOf(name_0, id_0, id_1) indicates a string (name_0) is considered a good name for an identifier (id_1) in a function (id_0).
Similarly, GoodNameOf(name_0, id_0, e_1) indicates name_0 is a good name for an expression e_1 in function id_0.
CallerOf(id_0, id_1) indicates id_0 is a caller of id_1.
LmPa iteratively derives such relations during analysis until it reaches a fixed point.
The auxiliary function maps a literal number or an operator to its string representation.
The analysis is formally defined by a set of inference rules shown in Fig. <ref>.
Each rule is interpreted as follows: the predicates above the line are the premises of a rule; and the formula below the line depicts how new relations are inferred.
Rule Caller recognizes caller-callee relations.
It means that if a call to function id_2 is found in a statement of id_1, then there is a relation CallerOf(id_1, id_2). That is, id_1 is a caller of id_2. Rules GN-Id and GN-Lit denote starting points of our inference. GN-Id denotes that if in the function id_1, ChatGPT predicts a name n for id with high confidence, then n is considered a good name for id in function id_1.
GN-Lit specifies the string representations for all literal values are good names. The rationale is that literals (e.g., magic numbers) are important for human reverse engineers <cit.>.
Rule
PropExpr
constructs a good name for an expression e_1 ♢ e_2 if both sub-expressions have good names. Note that LmPa similarly constructs good names for other expressions, such as call expressions and unary operations. Details are elided.
Rules PropCalleeName and PropCalleeArg are inter-procedural and propagate name information from a callee function to its caller.
Specifically, Rule PropCalleeName denotes that a good name for the callee is considered a good name for the function invocation in the caller.
Rule PropCalleeArg represents how LmPa propagates the name for a formal argument in the callee to the corresponding actual argument expression in the caller.
For example, if a formal argument is named as file_descriptor in the callee function, then the expression corresponding to that argument at the invocation site
may also be a file descriptor.
The rules for propagation from a caller function to all its callees are symmetric and hence elided.
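The two inter-procedural rules can be read as the following fixed-point computation; the lightweight dictionary encoding of functions and facts is an assumption made to keep the sketch self-contained.

def propagate(functions, good_names):
    """functions: {func_id: {"formals": [formal names],
                             "calls": [(callee_id, [actual expression keys])]}}
    good_names: {(func_id, entity): set of names}, seeded by rules GN-Id and GN-Lit."""
    def add(key, name):
        names = good_names.setdefault(key, set())
        if name not in names:
            names.add(name)
            return True
        return False

    changed = True
    while changed:            # sets only grow over a finite universe, so this terminates
        changed = False
        for caller, info in functions.items():
            for callee, actuals in info["calls"]:
                if callee not in functions:
                    continue
                # PropCalleeName: a good callee name is a good name for the call site
                for n in set(good_names.get((callee, callee), set())):
                    changed |= add((caller, "call:" + callee), n)
                # PropCalleeArg: a formal argument's good name flows to the actual argument
                for formal, actual in zip(functions[callee]["formals"], actuals):
                    for n in set(good_names.get((callee, formal), set())):
                        changed |= add((caller, actual), n)
    return good_names

Seeding good_names with the high-confidence LLM predictions (GN-Id) and the textual forms of literals (GN-Lit) and then running propagate yields the relations that the query update step consumes; because facts are only added, the loop mirrors the monotone, inclusive nature of the analysis described above.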
§.§ Query Update
After name propagation, LmPa further leverages the propagated names to construct the next round of queries.
The query update algorithm takes as input a query text of a function and the GoodNameOf relations derived by the propagation rules, and outputs a new query for the function.
A few query construction rules are presented in
Fig. <ref>.
The green boxes show the derived GoodNameOf relations,
and the tan boxes show the function.
In Fig. <ref>a, LmPa derives a good name for the callee function id_1 in the context of function id_0.
It renames all the invocations of id_1 to the good name. Note that there may be multiple good names for a function/variable; LmPa selects the one with the latest timestamp.
Fig. <ref>b shows how LmPa leverages good name information regarding an expression, including a singleton variable expression.
Recall that our name propagation allows generating names for composite expressions. We cannot simply rename an identifier to utilize such information.
LmPa thus propagates the information via code comments. As shown in Fig. <ref>b, it puts the propagated name in a code comment before the expression e_i.
Note that even if the related expression is a singleton variable, simply replacing its identifier with a good name may yield undesirable results. The reason is that ChatGPT tends not to rename a variable that already has a meaningful name in the code. Thus directly setting variable names in the code prevents ChatGPT from generating any new names.
Fig. <ref>c shows that when a caller function and its actual argument expression have good names, they can be utilized in the query of a callee of the function. Specifically, a new comment is added before the callee function describing which caller function may call it and the good name for the argument expression.
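A possible realization of these update rules is sketched below; the helper names and the comment wording are illustrative assumptions.

import re

def rename_callee(code: str, dummy: str, good_name: str) -> str:
    """First update rule: rewrite call sites, e.g. 'sub_4BECE3(' becomes a good name."""
    return re.sub(r"\b" + re.escape(dummy) + r"\s*\(", good_name + "(", code)

def comment_expression(code: str, expr: str, good_name: str) -> str:
    """Second update rule: put the propagated name in a comment before the line using expr."""
    out = []
    for line in code.splitlines():
        if expr in line:
            indent = line[: len(line) - len(line.lstrip())]
            out.append(indent + "// " + expr + " may be: " + good_name)
        out.append(line)
    return "\n".join(out)

def caller_context_comment(code: str, caller_name: str, arg_hint: str) -> str:
    """Third update rule: prepend a comment describing the caller and its argument name."""
    return "// called by " + caller_name + "; the argument passed here is " + arg_hint + "\n" + code

Comments are used instead of direct renaming for expressions and context because, as noted above, ChatGPT tends not to rename an identifier that already carries a meaningful name.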
§ EVALUATION
We develop LmPa on top of IDA Pro 7.5 and Clang 12.
LmPa consists of a total of 2,770 lines of Python code and 3,214 lines of C++ code.
We examine the effectiveness of LmPa by addressing the following research questions (RQs):
RQ1: Can LmPa effectively help developers comprehend decompiled code? How does it compare with the SOTA?
RQ2: How well do names generated by LmPa and the SOTA match their original versions in the source code?
RQ3: What are the impacts of the name propagation analysis on the overall performance of LmPa?
RQ4: Does LmPa scale well on real-world data?
RQ5: Is LmPa resilient to the nondeterminism of LLM answers?
In addition to these RQs, we conduct four case studies to illustrate how LmPa helps in real-world use scenarios.
§.§ Setup
Benchmark.
We assess LmPa using six well-established real-world projects that have been extensively employed in previous studies <cit.>.
It is worth noting that OpenAI enforces various resource restrictions when accessing ChatGPT <cit.>, such as query fees and intentional delays (e.g., around 20 seconds per query).
Our dataset consists of 16,212 functions in total. However, evaluating all the functions from our dataset would lead to high resource consumption.
Therefore, we adhere to existing practices <cit.> and randomly sample a subset of 1,258 functions consisting of 4,277 variables.
Detailed statistics of our dataset can be found in Section <ref> of our supplementary material.
Evaluation Metrics.
Assessing the degree of alignment between predicted names and ground-truth names (i.e., the variable names in the source code) presents a significant challenge because there may be many semantically equivalent names for a variable.
For instance, and are often deemed semantically equivalent in the context of programming, yet they do not match each other.
To address the issue, we propose the following two metrics for evaluation.
Developer Preferences.
Taking into account the complexity in evaluating the semantic equivalence of symbol names,
incorporating professional developers in the evaluation process is a judicious approach.
To this end, we conduct a user study with a group of developers, including a number of participants with substantial reverse engineering experience.
Each participant was presented with several functions, accompanied by their source code and ground-truth names.
The participants were then asked to score each predicted name on a scale of 1 to 5, with higher scores reflecting better predictions.
A more detailed description can be found in Section <ref>.
Name Similarity.
While user studies can provide reliable results, they are inherently difficult to scale up.
In order to automate the evaluation process, we introduce a similarity score function that quantifies the similarity between a predicted name and its corresponding ground-truth name.
Similarity(S_TP, S_P) = |LCS(S_TP, S_P)|/ |S_TP |
In the formula above, S_TP and S_P represent the ground-truth and predicted names, respectively.
LCS represents the longest common subsequence between the two input strings.
Essentially, this function assesses the proportion of characters in the ground-truth name that are accurately predicted in order.
For example, it yields similarity scores of 0.64 and 0.6 for the aforementioned buffer_size and ret_buffer examples, respectively.
It is important to note that the similarity function generates a score rather than a binary outcome, providing a more refined evaluation of the predictions.
More importantly, outcomes derived from our user study align well with
results by this automated method, as detailed in Section <ref>.
This provides additional support for its validity in practice.
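The metric is straightforward to implement; the sketch below is a direct transcription of the formula (with S_TP the ground truth and S_P the prediction) rather than our exact evaluation script.

def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence via standard dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def similarity(s_tp: str, s_p: str) -> float:
    """|LCS(S_TP, S_P)| / |S_TP|, where S_TP is the ground-truth name."""
    return lcs_length(s_tp, s_p) / len(s_tp) if s_tp else 0.0

For instance, similarity("fd", "file_descriptor") evaluates to 1.0, while an unrelated prediction scores close to 0; note the asymmetry, since only the ground-truth length appears in the denominator.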
§.§ RQ1: User Study
To evaluate the effectiveness of LmPa, we conduct a sizeable user study.
Specifically, we randomly select 30 functions from our dataset, and
all variables present in the sampled functions are examined as subjects within the study.
To help participants understand the context, each function is accompanied by its respective source code and decompiled code.
We task the participants with evaluating the quality of predicted names by comparing them to their ground-truth counterparts.
The study encompasses four variable name prediction methods: DEBIN, DIRTY, ChatGPT without the propagation mechanism (one-shot), and LmPa.
Participants are instructed to rate each predicted name on a scale of 1 to 5, with the scores indicating
(1) misleading, (2) meaningless, (3) general but missing context, (4) acceptable, and (5) comparable to or better than ground truth.
We include concrete samples of the study in Section <ref> in the supplementary material.
In addition to the randomly-sampled 30 functions, we mix in the study another 8 functions with 33 variables as validation samples. In each validation sample, one of the four methods demonstrates a clear advantage over the others.
These samples are used to ascertain participants' attentiveness during the study.
It should be noted that results from validation questions are excluded from our final analysis.
In total, we construct 528 questions, consisting of 396 testing questions
and 132 validation questions.
We recruit 31 participants, with 16 from our institution, and the rest from three world-class CTF (Capture The Flag) teams[CTFs are renowned competitions designed to challenge participants in solving computer security problems, including reverse engineering tasks. In order to determine the world-class standing of a CTF team, we assess whether they have achieved a top-10 ranking on CTFTime <cit.> at least once during the period spanning from 2013 to 2023.].
All participants have extensive programming experience, with 26 of them having utilized C/C++ in project development and 10 possessing over three years of hands-on expertise in reverse engineering.
We ensure that at least four participants respond to each question.
Overall Results.
Fig. <ref> delineates the results of the user study, with the x-axis representing user scores and the y-axis indicating the count of predicted names corresponding to each score.
It is clear that LmPa surpasses the other three methods, as the majority of its predicted names achieve scores of 4 and 5, i.e., “good names”, indicating that LmPa is good at providing semantically meaningful names.
ChatGPT without propagation also exhibits a relatively commendable performance compared to the baselines.
However, due to the lack of a propagation mechanism and the inability to aggregate derived information, it yields fewer good names.
Specifically, LmPa generates good names for 75% of the variables, and ChatGPT without propagation generates good names for 45%. The two baseline methods DIRTY and DEBIN generate good names for 6% and 5% of the variables, respectively.
Note that the majority of DIRTY's predictions are scored 2 (i.e., meaningless names), and none of them obtain a score of 1 (i.e., misleading names).
This can be attributed to DIRTY's conservative nature, which tends to generate dummy names.
Effectiveness of the Name Similarity Metric.
Piggybacking on this experiment, we validate the effectiveness of the proposed automated metric.
Specifically, for each predicted name, we calculate its similarity to the corresponding ground-truth name and compare the similarity score with the score by users.
Fig. <ref> presents the results.
The x-axis represents a name similarity score threshold.
The y-axis indicates the average user study scores
of the predicted names whose similarity scores exceed the threshold.
Observe that the average user study score has a close-to-linear positive relation with
the threshold.
It validates that the similarity score serves as a reasonable approximation of semantic equivalence of variable names from a user standpoint.
Observe that when the threshold is 0.0, the user score is still slightly above 3.
It essentially indicates that the average user study score of all predicted variables (generated by the four subject methods) is marginally above 3.
§.§ RQ2: Quality of Predicted Names
To assess the degree to which the names generated by LmPa correspond with their ground-truth counterparts, we employ the similarity function to gauge the prediction quality, thereby enabling the evaluation to be scaled across the entire benchmark.
Fig. <ref> shows that when the threshold is set at 0.6, the average human score exceeds 4, indicating that the predicted names are acceptable alternatives as rated by users.
Consequently, we select a threshold of 0.6 for the similarity metric, meaning that a predicted name is deemed a “good name” if its similarity score surpasses 0.6.
A good prediction is treated as a true positive, based on which we can further calculate the precision and recall of a name prediction technique <cit.>.
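Under the 0.6 threshold, precision and recall can then be computed as sketched below (reusing the similarity function sketched earlier). Counting every confidently predicted name toward the denominator of precision and every ground-truth variable toward that of recall is one common convention and an assumption here, not necessarily the paper's exact counting.

def precision_recall(predictions, ground_truth, similarity, threshold=0.6):
    """predictions: {var: predicted name} for variables that received a name;
    ground_truth: {var: source-level name} for all variables under evaluation."""
    tp = sum(1 for var, name in predictions.items()
             if var in ground_truth and similarity(ground_truth[var], name) >= threshold)
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall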
Overall Results.
Table <ref> shows the performance of LmPa in comparison to DIRTY, the current state-of-the-art technique for predicting variable names in decompiled code.
Note that DIRTY assumes the decompiled program has the ground-truth function names. We thus provide the names of functions (only) in DIRTY's test samples. The results of LmPa are obtained on programs without ground-truth function names.
Although the setup for LmPa is more challenging,
LmPa outperforms DIRTY on most datasets in terms of both precision and recall.
We attribute this to the advances of LLMs, and to the name propagation technique that provides more context for the queries to the LLM.
LmPa achieves the highest improvement on the ImageMagick dataset, with a precision that is over four times DIRTY's and a recall over six times.
Further analysis attributes the relatively higher performance to ImageMagick's heavy reliance on external library function calls, which supply an abundance of hints to the LLM.
On the Binutils dataset, DIRTY slightly outperforms LmPa in terms of precision. That is because more than 60% of the variables in that dataset overlap with DIRTY's training set (see the last column of Table <ref>),
while such overlap is lower than 17% in other benchmarks.
Note that DIRTY was trained on functions randomly sampled from Github, and thus their training data may overlap with some functions in our test sets.
On the other hand, LmPa still outperforms DIRTY in terms of recall. That is because LmPa propagates program context across functions, while DIRTY makes predictions based on the local context of a function.
The authors of DIRTY reported better precision and recall in their paper. The reason is that they used many small projects from Github, in which variable names tend to have stronger connections with the provided ground-truth function names. In comparison, the benchmarks used in our evaluation are more complex than 80% of those used in DIRTY.
Discussion.
Observe that the precision and recall of LmPa are not as remarkable as one would hope.
However, it does not necessarily mean that LmPa cannot provide informative names.
Based on our observations, the similarity metric used is relatively strict.
Even if the predicted names are semantically equivalent to the ground-truth names, they may not receive a high similarity score.
In particular, two semantically equivalent names may nonetheless have a very low similarity score.
Our user study indeed indicates that over 75% of the predicted names are considered good by users.
An ideal solution would be to precisely measure semantic distance of two names. However, the substantial variations in naming conventions make the development of such a method very challenging. We will leave it to our future work.
We further conduct a case study to show a typical failure of LmPa in Section <ref>.
Assessment with Various Thresholds. We further compare the performance of LmPa and DIRTY with different thresholds for a “good name”. The result shows that LmPa outperforms DIRTY across the entire spectrum of threshold levels. Details can be found in Section <ref> of our supplementary material.
§.§ RQ3: Ablation Study
To better understand the effects of the name propagation analysis, we conduct three ablation studies.
The first study compares the performance of LmPa with that of one-shot ChatGPT queries.
The second study compares LmPa with a naive approach that simply appends callee functions of the query function to the query text.
The last ablation study shows how LmPa gradually achieves better performance as the number of propagation iterations grows.
Comparison with One-shot ChatGPT Queries.
Fig. <ref> presents a comparison between LmPa and one-shot ChatGPT queries, with the left figure illustrating precision and the right figure depicting recall.
Notably, LmPa achieves a slightly superior, yet generally comparable, precision in relation to one-shot ChatGPT queries.
Upon closer examination, we find that, for a given variable, when ChatGPT lacks sufficient information to predict an appropriate name, it tends to generate a “dummy name”.
These names are subsequently eliminated through the name selection process.
Consequently, only variables with adequate contextual information receive predicted names.
As such, the precision primarily assesses ChatGPT's capability of predicting names for variables already rich in contextual information and is not directly related to the presence of the propagation mechanism.
Nevertheless, LmPa significantly outperforms one-shot ChatGPT queries in terms of recall, achieving approximately twice the performance in most cases.
This can be attributed to the effective propagation mechanism.
Comparison with a Naïve Algorithm.
We conduct a study to show that LmPa substantially outperforms a method that includes callee functions in ChatGPT queries. Details are in Section <ref> of the supplementary material.
Impact of the Number of Propagation Iterations.
We observe that the performance improvement is substantial in the first few rounds of analysis and that 10 rounds deliver optimal results. Details are in Section <ref> of the supplementary material.
§.§ RQ4: Scalability
On average, querying ChatGPT once takes 22.8 seconds, leading to a relatively high time consumption for LmPa.
However, the queries can be easily parallelized and the cost is justifiable in practice, given the one-time nature of reverse engineering efforts.
Furthermore, LmPa scales well to large programs, which in fact provide more context.
Details can be found in Section <ref> in the supplementary material.
§.§ RQ5: Robustness
Due to the nondeterministic nature of LLMs, we repeat an experiment on the Coreutils dataset 8 times to illustrate the robustness of LmPa. In each run, we let LmPa propagate names for 4 iterations. The results show that LmPa has stable performance across runs, with less than 0.04% variation, and the improvement from round to round is significantly larger than the variance.
Details can be found in Fig. <ref> in the supplementary material.
§.§ Case Studies
Performance on Unseen Programs.
ChatGPT is trained on enormous data.
It is unclear whether our benchmarks have been used in ChatGPT's training.
To study LmPa's performance on unseen programs, we conduct a case study on AudioFlux <cit.>, an audio processing library project started in 2023.
The results show that LmPa is equally effective, whereas the baselines have lower than 5% precision and recall.
Details are in Section <ref> in the supplementary material.
Failure Case of LmPa.
We examine a failure case of LmPa, which received a score of 1 in our user study.
Figure <ref> presents the source code for this case, which is simplified for illustrative purposes.
The code represents a wrapper function for , in which and are input memory buffers, while and denote the respective buffer sizes.
The variable is a copy of .
The code utilizes to store the value of that will be modified later.
Although LmPa accurately predicts the name of , it erroneously assigns the name to .
One might wonder why (line 3) does not help resolve the issue, given the name propagation analysis.
Recall that, unlike inter-procedural hints, LmPa does not employ code comments to explicitly propagate intra-procedural hints.
Instead, we rely on the LLM itself to detect the potential relations among variables within the same function, avoiding the submission of lengthy queries that might end up confusing the model.
In this case, ChatGPT does not correctly determine the relation between and and our propagation does not help either.
This issue could be tackled either by devising more sophisticated propagation rules for intra-procedural hints or by adopting a more advanced LLM.
In fact, we assessed the failure case utilizing a variant of LmPa built upon GPT-4 <cit.>.
The GPT-4-based LmPa successfully determines the desired relation and predicts the expected name, supporting our hypothesis that LmPa's performance exhibits a positive correlation with LLM quality.
Query with Program Functionality Description.
To simulate a realistic application in which analysts roughly know a program's functionalities, we provide a textual description of the program at the beginning of LmPa's query prompts and find that LmPa's performance improves. Details are in Section <ref> of the supplementary material.
Query to GPT-4.
We substitute ChatGPT with GPT-4 to investigate the impact of a more advanced LLM on LmPa's performance.
Due to GPT-4's slower processing speed compared with ChatGPT <cit.>, we randomly sample a smaller dataset from Coreutils, comprising 140 variables, and evaluate both the GPT-4-driven and the ChatGPT-driven LmPa on this dataset.
The results are presented in Table <ref>.
Observe that LmPa demonstrates better performance when powered by GPT-4.
Specifically, with a propagation iteration count of 4, the GPT-4-driven LmPa achieves over 13% higher precision compared with its ChatGPT-driven counterpart.
We attribute this to the superior capability of GPT-4.
Note that precision essentially measures the LLM's performance when making confident predictions (see Section <ref>). Thus a stronger LLM leads to better performance of LmPa.
Additionally, the GPT-4-driven version achieves better recall than the ChatGPT-driven one.
We attribute the improvement to GPT-4's better capability of inferring good names based on local information, rendering
the overall contextual information propagation more effective.
It is also noteworthy that, for both the GPT-4-driven and the ChatGPT-driven LmPa, the propagation algorithm leads to improved results.
This highlights the necessity of our name propagation analysis, regardless of the underlying LLM employed.
Such results indicate that the performance of LmPa can be further enhanced as more powerful LLMs become available, while the propagation analysis continues to play an essential role in achieving optimal results.
§ THREATS TO VALIDITY
We choose to use ChatGPT, a closed-source LLM. The reported results may hence be tied to a specific version of ChatGPT.
We have logged the interactions with ChatGPT for reproducibility. In addition, our technique is independent of the LLM.
Our case study shows that the performance of LmPa has a positive correlation with LLM quality, which is expected to improve over time.
LLMs including ChatGPT are trained on enormous multi-modal data. It is unclear if the benchmarks used in the paper had been used in ChatGPT's training. This is a general threat-to-validity for any research using LLMs. On one hand, we compile the benchmarks and generate fresh binaries, which likely differ from the binaries used in LLM training. On the other hand, we argue that LLMs are so general that they are unlikely to overfit on or memorize specific training examples. In addition, our ablation study on a very recent project (unlikely seen by ChatGPT) shows that LmPa is equally effective, whereas the baseline has substantially degraded performance.
LLMs' responses are nondeterministic in general.
Our ablation study shows that name prediction by ChatGPT yields largely stable results.
Our user study is susceptible to human errors. To mitigate the threat, we have carefully planned the study, using validation tests as part of the study, choosing programmers with extensive experience (e.g., in reverse engineering), and having multiple users cover each test.
§ RELATED WORK
Binary Analysis.
Binary analysis is of fundamental importance in the field of software security and software engineering, encompassing a range of critical downstream applications such as malware analysis <cit.>, vulnerability detection <cit.>, software fingerprinting <cit.>, APT attack forensics <cit.>, and software reuse <cit.>.
LmPa is intrinsically connected to decompilation <cit.>, a foundational task in binary analysis.
In addition to the related works discussed in Section <ref>, substantial research has been conducted in the area of decompilation, addressing topics such as type inference <cit.>, binary-level data-flow analysis <cit.>, function signature inference <cit.>, and binary similarity <cit.>.
Our work is orthogonal to these existing contributions.
Large Language Models.
Large Language Models (LLMs) have made significant breakthroughs in language understanding and generative tasks, including language translation <cit.>, text summarization <cit.>, question answering <cit.>, and so on. LLMs developed for programming languages <cit.> have also shown their capabilities in software engineering tasks, such as code translation <cit.>, code completion <cit.>, and program repair <cit.>.
In this paper, we are the first to explore the potential of LLMs, especially ChatGPT, for name recovery, and demonstrate through extensive evaluation that they can significantly improve performance on this important task.
§ CONCLUSION
We develop a novel technique for symbol name recovery in decompilation. It leverages the synergy between large language models and program analysis. It features an iterative algorithm that propagates query results from ChatGPT following program semantics. The propagation in turn provides better context for ChatGPT.
Our results show that 75% of the recovered names are considered good by users and our technique outperforms the state-of-the-art technique by 16.5% and 20.23% in precision and recall, respectively.
§ DETAILS OF USER STUDY
§.§ Data Availability
In this section, we first detail the setup of our user study. Then we
present an exemplary sample of our user study verbatim (with minor format changes for readability).
Finally, we include five concrete examples that receive scores ranging from 1 to 5 from our users.
§.§ Setup
Our user study is conducted online, with experimental data collected anonymously.
We record email addresses separately from each user for distributing compensation.
To better display the code, our questions are presented to participants in a Github repository in markdown formats.
Participants read the samples on the repository and enter their responses in an anonymous Google Form.
For each question sample, we first describe the task, the scoring criteria, and then present both the source code and the decompiled code to our user. Variables in the decompiled code are already renamed with their ground-truth names for better understanding.
On the Google Form, for each studied variable, we present four candidate names generated by different techniques and ask users to assign a score from 1 to 5 for each name.
Each participant is assigned approximately 20 variables, typically related to 6-7 functions.
We ensure that each variable is scored by at least 4 users.
The following section shows our question sample verbatim.
Note that the source code and decompiled code are shown to our participant in a separate Github repository.
§.§ A Sample of the Study
Task Description. Thank you very much for taking this user study!
Since google form is unfriendly of showing code snippets, we provide a link for markdown files at https://...
We aim to evaluate techniques that infer variable names from binary code.
Specifically, we want you to help us evaluate the quality of recovered variable names in the decompiled code.
For each variable under study, please help us to evaluate each candidate name with the following 5-score standard:
Score-5 Candidate name is similar or better than the name in the code. The following examples are expected to be considered as score-5:
* file_descriptor for ground truth name fd
* dst for ground truth name destination
* size for ground truth name length
Also, please consider synonyms (in the code context), e.g., in function fwrite, for a ground-truth variable stream, file is considered as a similar name.
Score-4 Candidate name is an acceptable name. But it is not as precise as the name in the code. The following examples are expected to be considered as score-4:
* regex_info for ground truth name re_pattern
* buffer for ground truth name filename_buffer
Score-3 Candidate name has general information, but is not precisely related to the code context.
The following examples is expected to be considered as score-3:
* ptr_to_structure for ground truth name hash_table,
if hash_table in the code is indeed a pointer-typed variable AND it indeed points to a compound structure.
Score-2 Candidate name is meaningless.
The following examples are expected to be considered as score-2:
* v1, v2, v3
* argument
* some_variable
Score-1 Candidate name is misleading.
The following examples is expected to be considered as score-1:
* signal_handler is misleading for variable re_pattern
More explanations.
Since decompiled code has no symbol information (i.e., variable names), we have manually assigned names to variables if we can find the corresponding variables in the source code.
So you do NOT have to read into source code. We provide source code just to make sure you roughly understand the context of the function. (e.g., what the function does, what the input and output are, etc.)
Note that in the decompiled code, the data type and code structure may be incorrect.
Format. For each problem, the title `Q1-Var1-num' means: Please open the markdown file for Q1 and see the variable `num' in the decompiled code. We provide multiple candidate names for it (recovered by different techniques).
Now we can start the user study!
Q1-Var1-num.
Please see the corresponding markdown file and rate the following candidate names.
* a1: 1 2 3 4 5
* file_index: 1 2 3 4 5
* a1: 1 2 3 4 5
* sock: 1 2 3 4 5
Note: The remaining questions are not shown here for simplicity.
§.§ Concrete Examples of Different Scores
In this section, we show five concrete examples receiving scores from 5 to 1 from our user. For each example, we also specify its question ID in the title.
Note that for better readability, we only show source code in this section.
Score 5.
A score of 5 implies that the predicted variable name is comparable to or better than the ground truth.
Fig. <ref> shows a binary search function that determines whether an element is in a set.
The evaluated variable has a ground-truth name of right, representing the upper bound of the search range.
Our user assigns a score of 5 to the predicted name upper_bound_index.
Although these two names differ, they convey the same meaning within the context of binary search.
Score 4.
A score of 4 denotes that the predicted name is deemed acceptable, but not as precise as the ground truth.
Fig. <ref> illustrates an instance of a variable with a score of 4. The ground-truth name of the studied variable is and the candidate name with a score 4 is .
This variable is specifically within a function designed to execute a bitwise Not operation on a composite structure, .
The variable serves as a loop index traversing the structure.
While the predicted name provides adequate information to facilitate reverse engineering, it does not explicitly convey the fact that it represents an index iterating over a structure.
Consequently, this variable receives a score of 4 in the evaluation.
Score 3.
A score of 3 means that the predicted name encompasses general information, but not accurately correlated with the code's specific context.
Fig. <ref> showcases a function responsible for compiling a regular expression.
The studied variable, with the ground-truth name , is a pointer to a composite structure, .
The predicted name receiving a score of 3 is .
Although it is not precisely related to the code context, a predicted name of still implies that it is a pointer referencing a structure-typed memory region.
Note that the information can be helpful for a reverse engineer because structural information is typically not available in decompiled code.
Identifying that a variable is a pointer to a structure-typed region can facilitate further analysis.
Score 2. A score of 2 means the predicted variable name is meaningless.
For example, in Fig. <ref>, v4 is simply a dummy name for the variable translation.
Score 1. A score of 1 means the predicted name is misleading.
Fig. <ref> delineates a function which tries to get the corresponding quote mark in the given context.
The function argument denotes the quoting style, but the predicted name is , which is misleading.
Such prediction is even worse than a meaningless one because it may draw the attention of analysts to the wrong direction.
§ NAME SELECTION
After multiple iterations, each variable is associated with a list of candidate names.
To select the best name from the list, employs a majority voting scheme in which each name receives a weight based on the confidence of its corresponding predictions.
Consequently, names with higher confidence scores are assigned greater weight, allowing the voting scheme to consider both the frequency of each name in the predictions and the level of confidence in those predictions.
In cases where support for the majority name is less than half, merely selects the most recent name with high confidence.
The rationale is that queries from later iterations are more likely to possess better contextual information, thus rendering the predicted names of higher quality.
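For concreteness, the selection step can be approximated by the following Python sketch (ours, not the paper's implementation; the representation of candidates as (name, confidence, iteration) tuples and the 0.5 confidence cutoff are assumptions made for illustration):

def select_name(candidates):
    # candidates: list of (name, confidence, iteration) tuples collected
    # over the propagation iterations (illustrative format, an assumption).
    weights = {}
    for name, conf, _ in candidates:
        weights[name] = weights.get(name, 0.0) + conf
    total = sum(weights.values())
    best_name, best_weight = max(weights.items(), key=lambda kv: kv[1])
    if best_weight > total / 2:
        # confidence-weighted majority
        return best_name
    # Otherwise fall back to the most recent high-confidence prediction,
    # since later iterations tend to see better contextual information.
    high_conf = [c for c in candidates if c[1] >= 0.5]  # cutoff is an assumption
    pool = high_conf if high_conf else candidates
    return max(pool, key=lambda c: c[2])[0]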
§ DATASET STATISTICS
Table <ref> presents the detailed dataset information, including the total number of functions, number of sampled functions, and the number of variables in the sampled functions (in columns 1 to 4, respectively).
In total, we randomly select a subset of 1,258 functions from the entire pool of 16,212 functions, encompassing 4,227 variables within the sampled functions.
§ ASSESSMENT WITH VARIOUS THRESHOLDS
To provide a more comprehensive depiction of 's superior performance in comparison to DIRTY, we conduct an auxiliary experiment that evaluates and DIRTY under a range of threshold settings.
The findings of this experiment are depicted in Fig. <ref>, where the left figure presents the precision under different thresholds, and the right figure demonstrates the recall.
It is important to note that as the threshold increases, both precision and recall decline.
This is due to the fact that a higher threshold implies a more stringent standard for “good names”, which reduces the number of true positives.
The results clearly demonstrate that persistently and substantially outperforms DIRTY across the entire spectrum of threshold levels.
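As a minimal sketch of such a threshold sweep (assuming each variable is assigned a similarity score between the predicted and ground-truth names; the scores below are made up for illustration):

def precision_recall(scores, threshold):
    # scores: one entry per variable; None means no name was produced (assumption).
    tp = sum(1 for s in scores if s is not None and s >= threshold)
    predicted = sum(1 for s in scores if s is not None)
    precision = tp / predicted if predicted else 0.0
    recall = tp / len(scores) if scores else 0.0
    return precision, recall

scores = [0.91, 0.47, None, 0.83, 0.62]   # illustrative values only
for t in (0.2, 0.4, 0.6, 0.8):
    print(t, precision_recall(scores, t))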
§ COMPARISON WITH A NAÏVE ALGORITHM
Given a query function, we develop a naïve algorithm that simply appends its (direct and transitive) callee functions to the query, and submits the whole query to ChatGPT.
Note that we also slightly revise our prompts so that ChatGPT knows which function is the one being queried.
To avoid exceeding the input limit, we construct from Coreutils a dataset with relatively short functions consisting of 341 variables and limit the inlining bound to four.
The results are shown in Table <ref>. Inline-N means the related algorithm recursively appends callee functions with a max depth of N.
Inline-0 is the same as querying ChatGPT for one-shot.
Observe that both the precision and recall for the naïve inlining algorithm are better than those for the one-shot algorithm. This indicates that the context information of caller and callee functions is indeed important to the LLM.
However, the recall of these inlining algorithms is much lower than that of , because can propagate information globally and the propagation leverages precise program semantics. In addition, inlining cannot handle large functions.
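A sketch of this baseline, under our own assumptions about how the inputs are represented (code maps a function to its decompiled text, call_graph maps a function to its direct callees; neither is an API from the paper):

def build_inlined_query(func, call_graph, code, depth):
    # Recursively append direct and transitive callees up to `depth`,
    # then submit the concatenated text to ChatGPT as a single query.
    seen = {func}
    parts = [code[func]]
    def visit(f, d):
        if d == 0:
            return
        for callee in call_graph.get(f, []):
            if callee not in seen:
                seen.add(callee)
                parts.append(code[callee])
                visit(callee, d - 1)
    visit(func, depth)
    return "\n\n".join(parts)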
§ IMPACT OF THE NUMBER OF PROPAGATION ITERATIONS
Fig. <ref> illustrates how the precision (blue line) and recall (orange line) of change with respect to the number of iterations.
Specifically, the x-axis represents the iteration count, and the y-axis indicates the precision and recall.
It is important to note that the precision remains relatively stable across varying iteration counts, as previously discussed.
Conversely, the recall exhibits a significant increase after the first four rounds.
This is because some variable names can hardly be derived from the local context and necessitate information from direct or transitive callees and callers.
The propagation mechanism enables ChatGPT to obtain such information.
Furthermore, after the initial six iterations, both precision and recall stabilize, suggesting that all relevant information has approximately converged.
This observation implies that selecting an iteration count of 10 is a reasonable choice, as it empirically ensures that reaches a fixpoint.
§ SCALABILITY
Time and Query Cost.
In our experiments, propagates names for 10 iterations.
Note that in each iteration, we skip the queries for functions that are not changed during propagation.
Also, due to the frequency limits on our OpenAI accounts <cit.>, we insert 5 seconds waiting time after each query.
Table <ref> shows the number of ChatGPT queries for each dataset and the overall time consumption.
We can see that for each dataset, queries ChatGPT 1791 times on average, which typically corresponds to a cost of around 5 USD. The time consumption of is relatively high, with 22.8 seconds per query and 11.4 hours per dataset on average.
We inspect the logs and find that the time consumption varies depending on the server status of ChatGPT. For example, in dataset Findutils, for the same number of queries, the longest recorded time is 151 minutes while the shortest is 45 minutes.
Note that in practice, as shown in Fig. <ref>, the user can get good performance with only 4 rounds of queries.
Also, in each iteration, the queries can be conducted in parallel.
We argue that, given the one-time nature of reverse engineering efforts, such resource costs are justifiable in practice.
Note that the time for propagation and all other data processing in one iteration is typically less than 50 seconds. Thus we omit the discussion for simplicity.
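For illustration, one iteration of the querying loop could look roughly like the sketch below (query_chatgpt and build_prompt are hypothetical helpers; the bookkeeping of changed functions is our assumption about one reasonable implementation):

import time

def run_iteration(functions, changed, build_prompt, query_chatgpt):
    results = {}
    for f in functions:
        if f not in changed:
            continue  # skip functions whose context did not change during propagation
        results[f] = query_chatgpt(build_prompt(f))
        time.sleep(5)  # stay under the account's frequency limit
    return results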
Performance on Larger Workload. To test whether scales well, we randomly sample two larger sets of functions from Findutils and Coreutils. Each set consists of around 800 functions and more than 2000 variables. For each set, runs for four rounds.
For comparison, we also collect the performance of at the fourth round on the corresponding but smaller datasets used in earlier experiments.
The results are shown in Table <ref>.
actually performs better on the larger sets in terms of both precision and recall.
That is because on a larger set, has more context information to propagate, and thus can potentially provide more accurate information for the queries.
§ PERFORMANCE ON UNSEEN PROGRAMS
ChatGPT is trained on enormous data.
It is thus unclear whether our benchmarks have been used in ChatGPT's training.
To study 's performance on unseen programs, we conduct a case study on AudioFlux <cit.>, an audio processing library project started in 2023.
It has around 40k lines of code and has received about 1k stars on GitHub.
The chance that ChatGPT has seen this project in its training data is much lower than for the other benchmarks.
We compile the project to a binary program, strip all the symbol and debugging information, and randomly sample 193 functions with 735 variables.
We run on the dataset for 10 iterations.
As shown in Table <ref>, the precision of is 32.31% and recall 25.85%.
The precision is comparable to the others in Table <ref>.
On the other hand, the recall is slightly lower than those in Table <ref>.
That is because AudioFlux has many stand-alone functions (i.e., functions with no caller or callee functions).
For those functions, cannot effectively propagate contextual information.
Specifically, there are 7.6% stand-alone functions in AudioFlux, while the number for Coreutils is only 3%.
We leave as future work to derive more sophisticated propagation rules for those functions.
In comparison, we further run DIRTY on this dataset with two setups (i.e., with and without ground-truth function names, respectively). It achieves precision and recall lower than 5% in both setups, suggesting that generalizes better than DIRTY.
§ CASE STUDY: QUERY WITH PROGRAM FUNCTIONALITY DESCRIPTION
The results are shown in Fig. <ref>.
We can see that with the whole-program information achieves better precision and recall at the first few rounds. Intuitively, that is because ChatGPT now knows the use scenario of the query function. It can generate more specific names. For example, ChatGPT predicts a variable with name stack_frame_pointer without the whole-program information. The ground-truth name for this variable is db. When told the program is from a database management system, ChatGPT realizes that the pointer may point to a database, and thus generates the name database_handle in the first round, which is closer to the ground-truth name.
On the other hand, the advantage diminishes when propagates names for more rounds. That is because some functions in the dataset may disclose similar information,
e.g., an error processing function with a literal string “db connection error”.
Geometric interpretation of valuated term (pre)orders
Netanel Friedenberg and Kalina Mincheva
arXiv:2306.05538 [math.AG]
Valuated term orders are studied for the purposes of Gröbner theory over fields with valuation.
The points of a usual tropical variety correspond to certain valuated term preorders. Generalizing both of these, the set of all “well-behaved” valuated term preorders is canonically in bijection with the points of a space introduced in our previous work on tropical adic geometry. In this paper we interpret these points geometrically by explicitly characterizing them in terms of classical polyhedral geometry. This characterization gives a bijection with equivalence classes of flags of polyhedra as well as a bijection with a class of prime filters on a lattice of polyhedral sets. The first of these also classifies valuated term orders. The second bijection is of the same flavor as the bijections from <cit.>
in non-archimedean analytic geometry and indicates that the results of that paper may have analogues in tropical adic geometry.
§ INTRODUCTION
Classical Gröbner theory uses a term order which compares two monomials m = ax^u and m' = a'x^u' by considering only u and u'. In <cit.> Chan and Maclagan build Gröbner theory in k[x_1,…,x_n], when k is a field with nontrivial valuation v:k→∪{-∞}=:. Their valuated theory takes into account the coefficients by picking a weight vector w∈^n and comparing v(a)+w·u with v(a')+w·u', and then using a term order to break any ties.
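For a quick illustration (ours, not from <cit.>): take k=ℚ with the 2-adic valuation v and weight vector w=(1,0). For the monomials 4x_1 and x_1x_2 one compares v(4)+w·(1,0)=2+1=3 with v(1)+w·(1,1)=0+1=1; the two weights differ, so no tie-breaking by the term order is needed. Only when the weights agree does the chosen term order on the exponents decide the comparison.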
Valuated Gröbner theory has a natural connection to tropical geometry. More specifically, the tropicalization of a subvariety of the torus is the set of w∈^n for which each polynomial in its defining ideal has at least two monomials of maximal weight. In particular, tropical geometry studies valuated monomial preorders, rather than orders. A valuated monomial preorder is a multiplicative total preorder ≤ on the set of monomials ax^u such that if v(a) ≤ v(b) then a ≤ b. Valuated monomial orders are those for which ax^u≡ bx^μ if and only if v(a)=v(b) and u=μ. In light of this, we treat ax^u and bx^u as being the same if v(a)=v(b).
When working with valuated monomial orders, one is usually only interested in those in which the valuation of the coefficients play a primary role. These are exactly the orders that arise by using a weight vector and term order as above. We call such orders Chan-Maclagan orders; they also occur in ongoing work of Vaccon and Verron <cit.> on universal bases in Tate algebras.
Recent work of Amini and Iriarte <cit.> aimed at the study of Newton-Okounkov bodies investigates spaces of (quasi)monomial preorders. An understanding of valuated monomial preorders is needed to extend their theory from varieties to semistable models over a valuation ring.
We see from the above that a weight vector w∈^n provides a geometric interpretation for certain valuated monomial preorders.
In fact, if v(k^×)⊆ℚ, then most points in ^n define a valuated monomial order. For if w∈^n has irrational entries which are linearly independent over ℚ, then distinct monomials will always have different weights.
This shows that certain valuated monomial orders have a geometric interpretation; however, not all of them arise this way.
The goal of the paper is to derive a geometric meaning for each of the following: (1) all valuated monomial preorders, (2) Chan-Maclagan orders, and (3) those valuated monomial preorders determined by a weight vector and a term preorder as in the construction of the Chan-Maclagan orders. We do this as an application of the theory proposed in <cit.>, thus verifying that it carries geometric information.
The theory from <cit.> applies more generally, and so we work with k[] where is a toric monoid. We are then able to quickly reduce to the torus case, where k[]=k[x_1^±1,…,x_n^±1].
Let Γ=v(k^×) be the value group of v. We can now state
our main results.
[Corollary <ref>]
There is an explicit bijection from the set of valuated monomial preorders on k[x_1^±1,…,x_n^±1] to the set of Γ-local equivalence classes of flags of polyhedral cones in _≥0×^n.
See Definition <ref> for the definition of Γ-local equivalence.
[Corollary <ref>]
There is an explicit bijection from the set of Chan-Maclagan preorders on k[x_1^±1,…,x_n^±1] to the set of Γ-local equivalence classes of flags of polyhedra in ^n.
[Proposition <ref>]
The bijection of Theorem <ref> gives a bijection from the set of Chan-Maclagan orders on k[x_1^±1,…,x_n^±1] to the set of Γ-local equivalence classes of complete flags of polyhedra in ^n.
The classification provided by Theorem <ref> can be seen as complementary to an earlier result in the non-valuated case. Usual monomial orders have been classified in <cit.> in terms of their defining matrices. Since two matrices can give the same monomial order, the classification proceeds by providing a canonical representative for the equivalence class. In contrast, Corollary <ref> gives an alternative, geometric criterion for when two matrices give the same valuated or non-valuated monomial order.
We now interpret the Chan-Maclagan preorders in the context of (additively idempotent) semirings, motivated by adic geometry.
We first recall that for a topological (usually f-adic) ring R the set of equivalence classes of continuous valuations v:R→ G∪{0}, where G can be any multiplicatively-written totally ordered abelian group, is called the continuous spectrum of R, denoted ContR. To consider continuity of valuations, a certain canonical topology is placed on G∪{0}.
The semiring analogue of a valuation is a homomorphism to a totally ordered semifield.
Just as equivalence classes of homomorphisms from a ring R to a field are given by prime ideals, equivalence classes of valuations on a semiring A are given by certain equivalence relations on A called prime congruences[first introduced in <cit.>].
For any topological semiring A we consider the set of prime congruences P on A for which A/P is not finite (a technical hypothesis that is redundant in the cases we consider) and the map from A to the residue semifield κ(P) is continuous. Here κ(P), the semifield generated by A/P, is given a canonical topology. We call the set of such P the continuous spectrum of A, and denote it by A. These spaces are directly linked to the spaces that occur in adic geometry: a continuous generalized valuation in the sense of <cit.> from a ring R to a semiring A induces a map from A to R.
In this language, the set of Chan-Maclagan preorders on k[x_1^±1,…,x_n^±1] is canonically identified with the continuous spectrum of [x_1^±1,…,x_n^±1]. Here is the set Γ∪{-∞} endowed with operations that make it a totally ordered semifield, and [x_1^±1,…,x_n^±1] is given a topology coming from the inclusion map [x_1^±1,…,x_n^±1]. We denote this continuous spectrum by [x_1^±1,…,x_n^±1]. Thus Theorem <ref> can be restated as follows.
<ref>'
There is an explicit bijection from [x_1^±1,…,x_n^±1] to the set of Γ-local equivalence classes of flags of polyhedra in ^n.
The motivation to interpret certain valuated monomial preorders as points on a tropical adic space, and thus prime congruences, comes from the connection between tropical geometry and Berkovich spaces. It is also inspired by the broader program to endow tropical varieties with more structure; in <cit.> we take an analytic approach to doing so, which led us to the results in this work. There are alternative algebraic approaches using many different frameworks. Among these are blueprints <cit.>, tropical ideals <cit.>, tropical schemes <cit.>, super-tropical algebra <cit.>, and systems <cit.>.
Our last major result gives an alternative classification of the points of [x_1^±1,…,x_n^±1]. This classification is reminiscent of a theorem for adic spaces.
[Theorem <ref>]
There is an explicit bijection between [x_1^±1,…,x_n^±1] and the set of prime filters on the lattice of Γ-rational polyhedral sets in ^n such that the filter contains a polytope.
This bijection between points of _[x_1^±1,…,x_n^±1] and prime filters on the lattice of Γ-rational polyhedral sets is of the same flavor as certain results in <cit.>. If X is a rigid affinoid space, then it is shown in <cit.> that the points of the Huber adic space corresponding to X are in bijection with the prime filters on the lattice of special subsets of X.
We also show that, for any P∈_[x_1^±1,…,x_n^±1], the minimum dimension of any element of the corresponding filter is equal to the transcendence degree of the semifield extension κ(P)/; see Proposition <ref>.
§ ACKNOWLEDGEMENTS
The authors thank Hernán Iriarte for exciting discussions and Bernd Sturmfels for pointing us to the work of Vaccon and Verron in valuated Gröbner theory.
The second author is partially supported by Louisiana Board of Regents Targeted Enhancement Grant number 090ENH-21.
§ PRELIMINARIES
By a semiring R we mean a commutative semiring with multiplicative unit. That is, R is a set which is a commutative monoid with respect to each of two binary operations: an addition operation +_R, whose identity is denoted 0_R, and a multiplication operation ·_R, whose identity is denoted 1_R. Furthermore, we require that 0_R≠1_R, 0_R is multiplicatively absorbing, and multiplication distributes over addition. We omit the subscripts from the operations whenever this will not cause ambiguity. A semifield is a semiring in which all nonzero elements have a multiplicative inverse.
We call a semiring R additively idempotent if a+a = a for all a ∈ R. We refer to additively idempotent semirings as just idempotent.
If R is an idempotent semiring, then the addition defines a canonical partial order on R in the following way: a ≥ b ⇔ a + b = a.
With respect to this order, a+b is the least upper bound of a and b. When we consider a totally ordered idempotent semiring, we mean an idempotent semiring for which this canonical order is a total order.
Some of the idempotent semifields that we use throughout the paper are:
* The Boolean semifield 𝔹 is the semifield with two elements {1,0}, where 1 is the multiplicative identity, 0 is the additive identity and 1+1 = 1. is the only finite additively idempotent semifield.
* The tropical semifield is defined on the set ∪{-∞}, by setting the + operation to be the usual maximum and the multiplication operation to be the usual addition, with -∞ = 0_.
* The semifield _max is a sub-semifield of . As a set it is ∪{-∞} and the operations are the restrictions of the operations on . We also use the same notation when is replaced by any other additive subgroup of . In fact, every sub-semifield of arises this way.
* The semifield n is defined on the set ^n ∪{-∞}, by setting the addition operation to be lexicographical maximum and the multiplication operation to be the usual pointwise addition.
Throughout this paper all monoids, rings and semirings are assumed to be commutative. All rings and semirings have a multiplicative unit. Whenever we use the word semiring without further qualification, we refer to an additively idempotent semiring.
Our focus will be on totally ordered semifields which are different from . Henceforth, when we refer to totally ordered semifields, we implicitly assume that they are not isomorphic to . In particular, whenever we refer to a sub-semifield of , we implicitly assume that it is not . Totally ordered semifields can be seen as the image of a non-archimedean valuation. The semifield then is the image of the trivial valuation.
In order to distinguish when we are considering a real number as being in or being in we introduce some notation. For a real number a, we let t^a denote the corresponding element of . In the same vein, given 𝔞∈, we write log(𝔞) for the corresponding element of ∪{-∞}. This notation is motivated as follows.
Given a non-archimedean valuation ν: K →∪{-∞} on a field and λ∈ with λ>1, we get a non-archimedean absolute value |·|_ν:K→[0, ∞) by setting |x|_ν=λ^ν(x). Since is isomorphic to the semifield ([0,∞),max,·_), we use a notation for the correspondence between elements of ∪{-∞} and elements of that is analogous to the notation for the correspondence between ν(x) and |x|_ν. This notation is also convenient as we get many familiar identities such as log(1_)=0_ and t^a t^b=t^a+_b.
Let R be a semiring and let a ∈ R{0}. We say that a is a cancellative element if for all b, c ∈ R, whenever ab=ac then b=c. If all elements of R{0} are cancellative, then we say that R is a cancellative semiring.
* A congruence E on a semiring R is an equivalence relation on R that respects the operations of R.
* The trivial congruence on R is the diagonal Δ⊆ R× R, for which R/Δ≅ R.
* We call a proper congruence P of a semiring R prime if R/P is totally ordered and cancellative (cf. Definition 2.3 and Proposition 2.10 in <cit.>).
We will need some more notation around prime congruences.
Let P be a prime congruence on a semiring A. The residue semifield of A at P, denoted κ(P), is the total semiring of fractions[The total semiring of fractions is a classical concept and is defined in <cit.>. An alternative reference following the notation of this paper is in our previous work <cit.>.] of A/P. We denote the canonical homomorphism A→ A/P→κ(P) by a↦|a|_P.
Let R be semiring and let P be a prime on R. For two elements r_1, r_2 ∈ R we say that r_1 ≤_P r_2 (resp. r_1 <_P r_2, resp. r_1 ≡_P r_2) whenever |r_1|_P ≤ |r_2|_P (resp. |r_1|_P < |r_2|_P, resp. |r_1|_P = |r_2|_P) in R/P.
We now focus our attention on the case when R is a monoid algebra. Given a totally ordered semifield and a monoid , the corresponding monoid -algebra is denoted []. Elements of [] are finite sums of expressions of the form aχ^u with a∈ and u∈, which are called monomials or terms. We call elements of [] generalized semiring polynomials or polynomials when no confusion will arise. We will be most interested in [^n], the semiring of tropical Laurent polynomials in n variables.
When ⊆ we describe the prime congruences on [^n] in terms of their defining matrices and the monomial preorders they give rise to. Every prime congruence on [^n] has a defining matrix; see <cit.>, where it is shown that the matrix can be taken to be particularly nice.
Let R = [^n] where is a subsemifield of . A k× (n+1) real valued matrix C such that the first column of C is lexicographically greater than or equal to the zero vector gives a prime congruence on R as follows:
For any monomial m=t^aχ^u∈[] we let Φ(m)=C[ a; u ]∈^k, where we view u∈^n as a column vector. We call [ a; u ] the exponent vector of the monomial m. For any nonzero f∈[], write f as a sum of monomials m_1,…,m_r and set Φ(f)=max_1≤ i≤ rΦ(m_i), where the maximum is taken with respect to the lexicographic order. Finally, set Φ(0)=-∞. We specify the prime congruence P by saying that f and g are equal modulo P if Φ(f)=Φ(g). In this case we say that C is a defining matrix for P. The first column of C is called the column corresponding to the coefficient or the column corresponding to . For 1≤ i≤ n, we say that the (i+1)^st column of C is the column corresponding to x_i.
We can also explicitly describe the order that P gives rise to. Given any defining matrix for P and any f,g∈ R, f≤_Pg exactly if Φ(f)≤_lexΦ(g).
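As a small illustrative computation (ours): take n=2, Γ=ℚ, and the 1×3 matrix C=[ 1 2 0 ], whose first column is the positive entry 1. For f=t^3χ^(1,0)+t^0χ^(0,1) we get Φ(t^3χ^(1,0))=1·3+2·1+0·0=5 and Φ(t^0χ^(0,1))=1·0+2·0+0·1=0, so Φ(f)=5. Hence f is equal to t^3χ^(1,0) modulo the prime P defined by C, and t^0χ^(0,1)<_P t^3χ^(1,0).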
It will be convenient to know that we can choose a defining matrix with a particular form. To accomplish this, we will use the following lemma.
Let ⊆ and =^n and let P be a prime congruence of [] with defining matrix C. Then the following elementary row operations do not change the prime.
* multiplying a row by a positive constant.
* adding a multiple of a row to any row below it.
Note that, if a row of the matrix is all zero, then removing the row does not change the prime congruence that the matrix defines. Thus, using downward gaussian elimination, we can always choose a defining matrix in which the rows are linearly independent (over ).
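For example (continuing the illustration above), the 2×3 matrix with rows (1,2,0) and (2,4,0) has lexicographically positive first column; adding -2 times the first row to the second row gives the rows (1,2,0) and (0,0,0), and removing the zero row leaves the matrix [ 1 2 0 ]. By the lemma and the remark, all three matrices define the same prime congruence.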
In <cit.> we define a set A of certain prime congruences on A. When A=[] with M=^n, the following proposition completely characterizes A.
Let be a subsemifield of and let ≅^n. Let P be a prime on [] and let C be any defining matrix for P. Then P∈[] if and only if the (1,1) entry of C is positive. In particular, if P∈[], then we can choose a defining matrix for P whose first column is the first standard basis vector e_1.
Let ≅^n. A relation ≼ on the terms of [] is the same as ≤_P for a prime P on [] if and only if the following conditions hold:
(1) ≼ is a total preorder,
(2) ≼ is multiplicative, and
(3) ≼ respects the order on .
Moreover, P∈[] if and only if
(4) for any term aχ^u of [], ∃ b ∈^×, such that b ≺ aχ^u.
The height of a prime congruence P on a semiring A is the maximum length (P) of a chain P_0⊊ P_1⊊⋯⊊ P_k=P of primes under P.
In <cit.> the dimension of a semiring A, denoted A, is defined to be the number of strict inclusions in a chain of prime congruences on A of maximal length. We refine this notion in the next definition.
When ⊆ is a sub-semifield and A is a -algebra, the relative dimension of A over is the number _ A of strict inclusions in a longest chain of prime congruences in A.
We illustrate the distinction between dimension and relative dimension in the following example.
Let be a sub-semifield of . Then S is equal to the number of proper convex subgroups of ^× which is 1. On the other hand, _=0.
We now briefly introduce some notation from toric varieties. In this paper we follow conventions from <cit.>. We will use the notation to make a statement about the dimension of the semiring [] and its geometric interpretation.
Let Λ be a finitely generated free abelian group and let Λ_ be the vector space Λ⊗_. We write N:=Λ^*=(Λ,) and N_=(Λ,)≅ N⊗_. We will use the pairing N_×Λ_→ given by (v,u)↦v,u:=v(u).
A cone σ is a strongly convex rational polyhedral cone in N_ if σ = ∑_i=1^r_≥ 0v_i for v_i ∈ N and σ contains no line.
We denote by σ^∨ the dual cone of σ,
σ^∨ = {u∈Λ_ : < v,u > ≤ 0, ∀ v∈σ} = ⋂_i = 1^r {u∈Λ_ : < v_i,u > ≤ 0}.
The monoids that we consider in this paper are of the form M = σ^∨∩Λ. Because σ is strongly convex, we can identify Λ with the groupification of M.
Let be sub-semifield of and let ⊆Λ be a toric monoid corresponding to a cone σ in N_. For any P∈[], (P)=(Λ)-(κ(P)^×/^×).
Let be a sub-semifield of , let Λ be a finitely generated free abelian group, let σ be a cone in N_, and let = σ^∨∩Λ. Then _[]=Λ.
§ RESULTS
We provide a geometric interpretation of the points of [] with a sub-semifield of and a toric monoid. We will denote by Γ:=log(^×) the subgroup of corresponding to .
By <cit.>, fixing the ideal-kernel of P∈[] decomposes [] into toric strata, which are tori. Thus, we only need to deal with the torus case, i.e., the case where is a finitely generated free abelian group.
The main results of this paper show that, in this case, there are explicit bijections between the points of [], the set of prime filters on the lattice of Γ-rational polyhedral sets in N_ where the filter contains a polytope, and certain equivalence classes of flags of polyhedra.
§.§ Flags and prime congruences.
By a cone we will mean a strongly convex polyhedral cone which is not necessarily rational.
We note that there is a bijective correspondence between the set of non-empty strongly convex polyhedra 𝒫⊆ N_ and the set of strongly convex cones σ⊆_≥ 0× N_, not contained in {0}× N_. Under this correspondence we map a polyhedron 𝒫 to the closed cone over it, denoted c(𝒫), and a cone to the restriction of to {1}× N_, by which we mean the inverse image of under the map N_→_≥0× N_ given by x↦(1,x).
Additionally, we can think of a point w=(r,ξ)∈_≥0× N_ as a homomorphism w:Γ×→, mapping (γ,u)↦ rγ+ξ,u. We also use ∙,∙ to denote the resulting pairing (_≥0× N_)×(Γ×)→.
Thus, for any ⊆Γ× we can consider ^∨={ w∈_≥0× N_ : w,≤0 for all ∈}.
We denote by 𝒫_∙ = (𝒫_0 ≤𝒫_1 ≤⋯≤𝒫_k) a
flag of polyhedra, where 𝒫_i = i. We assume that all polyhedra are strongly convex, non-empty and live in N_.
Similarly, we let _∙=(_0≤_1≤⋯≤_k) denote a flag of cones in _≥_0× N_ with _i=i+1. We say that _∙ is simplicial if each _i is a simplicial cone.
Given a flag _∙ of polyhedra, the corresponding flag of cones is c(_∙)=(c(_0)≤ c(_1)≤⋯≤ c(_k)). The flags of cones _∙ which arise this way are exactly those for which _0 is not contained in {0}× N_. For such flags _∙, we write _∙|_{1}× N_ for the corresponding flag of polyhedra.
Consider the map taking a matrix whose first column is lexicographically at least 0 to the prime it defines on []. Here, because we have not fixed an identification ≅^n, is of the form =[ r_0 ξ_0; r_1 ξ_1; ⋮ ⋮; r_k ξ_k ] where r_0,…,r_k∈ and ξ_0,…,ξ_k∈ N_. We now show how this map factors through the set of simplicial flags of cones. Moreover, we show that if we restrict to matrices giving primes in [], the map factors through the corresponding set of flags of polyhedra.
Given a matrix as above, we can use downward gaussian elimination as in Lemma <ref> and removal of zero rows to get all of the entries in the first column non-negative and to get the rows linearly independent without changing the prime that defines. Then we let _∙() be the simplicial flag of cones defined by letting _i() be the cone generated by (r_0,ξ_0),…,(r_i,ξ_i) for each 0≤ i≤ k.
For any simplicial flag of cones _∙ in _≥0× N_, we get a prime congruence on [] as follows.
Let _-1={0} and pick points w_i ∈_i_i-1 for 0 ≤ i ≤ k. Since w_i∈_≥0× N_, we can think of each w_i as a homomorphism w_i:Γ× M→. Thus, we can consider the group homomorphism
w = ( w_0, …, w_k):Γ×→^k+1,
which preserves the order on Γ when we give ^k+1 the lexicographic order. Therefore, w gives rise to a semiring homomorphism
φ_ w:[] →k+1, aχ^u ↦ (w_0, …, w_k)(log a, u).
We let P_ w=_ w. That is, P_ w is the prime on [] defined by the matrix
[ w_0; ⋮; w_k ].
By Proposition <ref>, P_ w is in [] if and only if the (1,1) entry of this matrix is positive. Since w_0∈_0{0}⊆_≥0× N_, this happens exactly when _0 is not contained in {0}× N_, i.e., when _∙=c(_∙) for some flag _∙ of polyhedra.
In order for this construction to give a prime defined by _∙, we must show that P_ w is independent of the choice of w. Set v_0 = w_0. For 1≤ i≤ k, _i_i-1 contains a unique ray of _i. Fix a generator of this ray and call it v_i. Consider the vector v = ( v_0, …, v_k). Analogously to φ_ w and P_ w, we can define φ_ v and P_ v. The following proposition gives us that P_ v and P_ w are the same.
There is an automorphism ψ of k+1 given by a linear transformation on ^k+1 such that φ_ w = ψ∘φ_ v.
We proceed by induction on k. The case k =0 is trivial. Assume the statement is true for some k = r; we want to show that it is true for k=r+1.
Let φ̃_ w, φ̃_ v and ψ̃ be the maps we obtain for the flag (𝒫_0 ≤𝒫_1 ≤⋯≤𝒫_r). Write
w_r+1 = ∑_i=0^r+1α_i v_i, where α_i ∈_≥ 0 and α_r+1 >0.
Thus, we have
[ w_0; w_1; ⋮; w_r; w_r+1 ]
=
([ ψ̃ 0
0
⋮
0; α_0 α_1 ⋯ α_r α_r+1 ])
[ v_0; v_1; ⋮; v_r; v_r+1 ],
where we call the matrix in this product ψ. Note that it is invertible since ψ̃ is and α_r+1 >0. Moreover, ψ is order preserving, i.e., h > h' implies that ψh > ψh'. We can show this by induction; the base case is trivial. If ψ̃ is order preserving, then ψ is as well, since v_r+1 > v_r+1' if and only if α_r+1 v_r+1>α_r+1 v_r+1', as α_r+1>0.
Now consider w'=( w'_0,…, w'_k) with w'_i∈_i_i-1. In the same way as we got v from w, we now get v'. Since v_i and v'_i are generators of the same ray, they are positive multiples of each other. Thus, P_ w'=P_ v'=P_ v=P_ w. In light of this we can make the following definition.
Let _∙ be a simplicial flag of cones. The prime congruence P__∙ defined by _∙ is P_ w for any choice of w=( w_0,…, w_k) with w_i∈_i_i-1. If _∙=c(_∙), then P__∙=P_c(_∙) is the prime congruence defined by _∙.
If _∙=_∙() for some matrix , then we can chose w_i to be the (i+1)^st row of , so is a defining matrix for P__∙. In particular, for every prime congruence P on [], there is a simplicial flag _∙ of cones such that P=P__∙. If P∈[], then we know that _∙=c(_∙) for some flag _∙ of polyhedra, so P=P__∙.
§.§ Equivalence of flags.
Recall that a Γ-rational polyhedron is a polyhedron in N_ which can be written as {x∈ N_ : x,u_i≤γ_i for i=1,…,q} for some u_1,…,u_q∈ and γ_1,…,γ_q∈Γ. A Γ-rational polyhedral set is a finite union of Γ-rational polyhedra.
A cone in _≥0× N_ is Γ-admissible if it can be written as
{(r,x)∈_≥0× N_ : rγ_i+x,u_i≤0 for i=1,…,q}
for some u_1,…,u_q∈ and γ_1,…,γ_q∈Γ.
A polyhedron is Γ-rational if and only if c() is Γ-admissible. A cone contained in {0}× N_ is Γ-admissible if and only if it is rational. We call a finite union of Γ-admissible cones a Γ-admissible . Note that the set of Γ-rational polyhedral sets in N_ and the set of Γ-admissible each form a lattice when ordered by inclusion. In both of these lattices, meet and join are given by intersection and union, respectively.
Let 𝒫_∙ be a flag of polyhedra and let _∙ be a flag of cones. A Γ-rational neighborhood of 𝒫_∙ is a Γ-rational polyhedron 𝒬 which meets the relative interior of each 𝒫_i. A Γ-admissible neighborhood of _∙ is a Γ-admissible cone which meets the relative interior of each _i.
We say that two flags 𝒫_∙ and 𝒫'_∙ of polyhedra are (Γ-)locally equivalent if the Γ-rational neighborhoods of 𝒫_∙ are exactly the Γ-rational neighborhoods of 𝒫'_∙. Similarly, we say that a two flags _∙ and '_∙ are (Γ-)locally equivalent if they have the same Γ-admissible neighborhoods.
Note that is a Γ-rational neighborhood of _∙ if and only if c() is a Γ-admissible neighborhood of c(_∙). Thus _∙ and '_∙ are locally equivalent if and only if c(_∙) and c('_∙) are.
The following lemma provides an alternate characterization of neighborhoods of flags. To prove it we will use the fact that a point x is in the relative interior of a cone of dimension k if and only if x can be written as a positive linear combination of k linearly independent points in . See <cit.> for the proof of the corresponding fact about polytopes. The proof for cones is directly analogous.
A Γ-admissible cone is a neighborhood of _∙ if and only if meets _i_i-1 for 0≤ i≤ k. In particular,
a Γ-rational polyhedron 𝒬 is a neighborhood of 𝒫_∙ = (𝒫_0 ≤𝒫_1 ≤⋯≤𝒫_k) if and only if 𝒫_0 ∈𝒬 and 𝒬 meets 𝒫_i∖𝒫_i-1, for 1 ≤ i ≤ k.
One direction follows from the fact that _i⊆_i_i-1.
For the other direction, we will proceed by induction on k. The case k=0 is trivial because _0=_0_-1. Assume the statement holds for k-1. By the inductive hypothesis, meets the relative interior of _i for 1≤ i<k, so we only need to show that meets the relative interior of _k. Pick a point x ∈∩_k-1. So x is a positive linear combination of k linearly independent points in _k-1. Because meets _k_k-1, we can pick a point y ∈∩_k_k-1. Then x+y is a positive linear combination of k+1 linearly independent points in _k, so x+y∈_k. Since x,y∈ and is a cone, we get x+y∈∩_k.
The claim about polyhedra follows by considering _∙=c(_∙) and =c() and noting that, because _0 is a ray, _0_-1=_0{0} meets the cone if and only if _0{0} is contained in .
We now work towards showing that every flag of cones is locally equivalent to one that is simplicial. To do this, we will need a technical lemma.
Let be a cone and let be a facet of . Let be a subcone of with such that = and let be a subcone of with = and a face of . Suppose that v∈ and w∈. Then there is a real number ϵ>0 such that v+ϵ w∈.
Since is a face of , there is some u∈_ such that ⊆ u^∨ and =∩ u^⊥. In particular, v∈ u^⊥ and w∈ u^∨. Since =∩ u^⊥ is a facet of and v∈, the star of at v is u^∨. That is, u^∨=_≥0(-v). Since w∈ u^∨ is nonzero, this means that we can write w=r(z-v) for some r>0 and z∈. So, letting ϵ=1/r, we have v+ϵ w=v+(z-v)=z∈.
This lemma will help us with an inductive argument. The following definition will also facilitate this argument.
Let _∙ = (_0 ≤_1 ≤⋯≤_k) be a flag of cones.
For any j≤ k, the truncation of _∙ at j is _∙^(j) = (_0 ≤_1 ≤⋯≤_j).
Any flag _∙ of cones is locally equivalent to a simplicial flag _∙' of cones. Any flag _∙ of polyhedra is locally equivalent to a flag _∙' of cones such that c(_∙') is simplicial.
Consider any flag of cones _∙=(_0≤⋯≤_k) and, for each 0≤ i≤ k pick a ray ρ_i of _i not contained in _i-1. Define a simplicial flag _∙' of cones by setting _i'=_j=0^iρ_j. Note that _i' is a subcone of _i with _i'=i+1=_i and _i'_i-1'⊆_i_i-1. In particular, any Γ-admissible neighborhood of _∙' is a Γ-admissible neighborhood of _∙.
We now prove that any Γ-admissible neighborhood of _∙ is a Γ-admissible neighborhood of _∙' by induction on k. For the base of k=0, note that _0'=_0, so we are done. Now assume that k≥1 and the result is true for k-1. Suppose that is a Γ-admissible neighborhood of _∙. Then is a neighborhood of _∙^(k-1) so, by the inductive hypothesis, is a neighborhood of _∙'^(k-1). Thus meets the relative interior of _i' for i<k, and it suffices to show that meets _k'_k-1'. We can pick v∈(_k-1')∩ and w∈(_k)∩. By Lemma <ref>, there is an ϵ>0 such that v+ϵ w∈_k'. Since ϵ w∈_k and _k-1 is a proper face of _k, v∈_k-1 gives us that v+ϵ w∉_k-1⊇_k-1'. Thus, v+ϵ w∈_k'_k-1'. Since v,w∈ and is a cone, v+ϵ w∈.
Given any flag _∙ of polyhedra, consider the corresponding flag _∙=c(_∙) of cones and let _∙' be as above. Since _0'=_0 is not contained in {0}× N_, there is a flag _∙' of polyhedra such that _∙'=c(_∙'). Since c(_∙) and c(_∙') are locally equivalent, so are _∙ and _∙'.
§.§ The filter of a prime congruence.
Now we introduce the next key player in this paper. For
finitely many Laurent polynomials
f_0, …, f_n ∈[] we define the rational set R(f_0, …, f_n ) to be
R(f_0, …, f_n ) = { x ∈ N_ : f_0(x) ≥ f_i(x), for 1 ≤ i ≤ n }.
Here, when f=∑_u∈f_uχ^u∈[] and x∈ N_, we let f(x)=max_u∈(log(f_u)+_x,u).
For any f∈[], we can form the homogenized function f given by
f(r,x)=max_u∈(rlog(f_u)+_x,u)
for any (r,x)∈_≥0× N_. We let
R(f_0,f_1,…,f_n)={ w∈_≥0× N_ : f_0( w)≥f_i( w) for 1≤ i≤ n}.
Note that R(f_0,f_1,…,f_n) is the restriction of R(f_0,f_1,…,f_n) to {1}× N_.
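As a small concrete case (an illustration of ours): work in N_ℝ=ℝ^2 with Γ=ℚ and take f_0=t^0χ^(1,0) and f_1=t^1χ^(0,0). Then f_0(x)=x_1 and f_1(x)=1, so R(f_0,f_1)={x∈ℝ^2 : x_1≥1}, a Γ-rational half-plane. The homogenized version is {(r,x)∈ℝ_≥0×ℝ^2 : x_1≥ r}, and restricting it to {1}×N_ℝ recovers R(f_0,f_1).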
Say f_0,…,f_n∈[] and write f_i=∑_u∈f_i,uχ^u. Then
* R(f_0,f_1, …, f_n ) = ⋂_i=1^n R(f_0, f_i),
* R(f_0, f_i) = ⋂_u∈ MR(f_0, f_i, uχ^u) = ⋃_u∈ MR(f_0, uχ^u, f_i), and
* for any μ∈, R(f_0,f_1, …, f_n ) ⊇R(f_0, μχ^μ, f_1, …, f_n ).
The same applies if we replace R with R.
(<ref>) follows from the definition. (<ref>) and (<ref>) follow from the fact that, in the evaluation of
f( w),
we take the maximum over the terms of f. The final claim follows by restriction to {1}× N_.
For any prime congruence P on [], the prime filter _P that P defines on the is the collection of Γ-admissible U for which there are f_0,f_1, …, f_n ∈[] such that R(f_0,f_1, …, f_n ) ⊆U and
|f_0|_P ≥ |f_i|_P for 1≤ i≤ n.
If P ∈[], then the prime filter _P that P defines on the is the collection of Γ-rational polyhedral sets U in N_ for which there are f_0,f_1, …, f_n ∈[] such that R(f_0,f_1,…,f_n)⊆ U and |f_0|_P ≥ |f_i|_P for 1≤ i≤ n. In this case, a Γ-rational polyhedral set U is in _P if and only if c(U) is in _P.
We will justify these names later in this section.
We begin our study of _P and _P by observing that, in the above definitions, it is enough to consider monomials.
Let P be a prime congruence on [] and U a Γ-admissible . Then U∈_P if and only if there are terms a_1χ^u_1,…,a_nχ^u_n∈[] such that R(1_,a_1χ^u_1,…,a_nχ^u_n)⊆U and 1_κ(P)≥|a_iχ^u_i|_P for 1≤ i≤ n.
If P∈[] and U is a Γ-rational polyhedral set, then U∈_P if and only if there are terms a_1χ^u_1,…,a_nχ^u_n∈[] such that R(1_,a_1χ^u_1,…,a_nχ^u_n)⊆ U and 1_κ(P)≥|a_iχ^u_i|_P for 1≤ i≤ n.
The “if” direction is clear.
For the other direction, suppose U∈_P, i.e., there are f_0,f_1,…,f_n∈[] such that R(f_0,f_1,…,f_n)⊆U and |f_0|_P≥|f_i|_P for 1≤ i≤ n.
Let a_0χ^u_0 be a term of f_0 such that the maximum in |f_0|_P=_u∈ M |f_0,uχ^u|_P is attained at |a_0χ^u_0|_P. Then we have that |a_0χ^u_0|_P≥ |f_i|_P for 1≤ i≤ n and, by Lemma <ref> (<ref>), R(a_0χ^u_0,f_1,…,f_n)⊆R(f_0,f_1,…,f_n)⊆U. Thus, we may assume without loss of generality that f_0=a_0χ^u_0. Also, a_0χ^u_0 is a unit in [] and |a_0χ^u_0|_P≥ |f_i|_P gives us that 1_κ(P)≥|(a_0χ^u_0)^-1f_i|_P. Since also R(a_0χ^u_0,f_1,…,f_n)=R(1_,(a_0χ^u_0)^-1f_1,…,(a_0χ^u_0)^-1f_n), we may assume without loss of generality that f_0=1_.
By Lemma <ref> (<ref>) and (<ref>), R(1_,f_1,…,f_n)=_i=1^n R(1_,f_i)=_i=1^n_u∈R(1_,f_i,uχ^u). So, if we relabel those f_i,uχ^u with f_i,u≠0_ as a_1χ^u_1,…,a_nχ^u_n, then we have R(1_,f_1,…,f_n)=_i=1^n R(1_,a_iχ^u_i)=R(1_,a_1χ^u_1,…,a_nχ^u_n).
The proof of the statement for _P is analogous.
Having reduced to considering the sets R(1_,a_1χ^u_1,…,a_nχ^u_n) and R(1_,a_1χ^u_1,…,a_nχ^u_n), we now consider the geometric nature of these sets.
For any terms a_1χ^u_1,…,a_nχ^u_n∈[], R(1_,a_1χ^u_1,…,a_nχ^u_n) is a Γ-admissible cone. If P∈[] and 1_κ(P)≥|a_iχ^u_i|_P for 1≤ i≤ n then R(1_,a_1χ^u_1,…,a_nχ^u_n) is a Γ-rational polyhedron.
Note that, for 1≤ i≤ n, R̃(1_,a_iχ^u_i)={(r,x)∈_≥0× N_ : 0_≥ rlog(a_i)+_x,u} is a Γ-admissible half-space. So R(1_,a_1χ^u_1,…,a_nχ^u_n)=_i=1^n R(1_,a_iχ^u_i) is a Γ-admissible cone.
Now
say
P∈[]. Since R(1_,a_1χ^u_1,…,a_nχ^u_n) can be written as the restriction of R(1_,a_1χ^u_1,…,a_nχ^u_n) to {1}× N_, it remains only to show that R(1_,a_1χ^u_1,…,a_nχ^u_n) is nonempty.
By Proposition <ref>, we can pick a defining matrix for P of the form [ 1 ξ_0; 0 ξ_1; ⋮ ⋮; 0 ξ_k ] with ξ_0,…,ξ_k in N_. Since 1_κ(P)≥|a_iχ^u_i|_P, we have that 0≥log(a_i)+_ξ_0,u_i, i.e., ξ_0∈ R(1_,a_iχ^u_i). Therefore, ξ_0∈_i=1^n R(1_,a_iχ^u_i)= R(1_,a_1χ^u_1,…,a_nχ^u_n).
At the end of the proof of the previous lemma, we took advantage of the fact that we could evaluate whether 1_κ(P)≥|aχ^u|_P by picking a defining matrix for P and using the lexicographic order. We now spell out the definition of the lexicographic order as it applies here and work towards translating it into geometric conditions.
Let
=[ v_0; v_1; ⋮; v_k ]
be a defining matrix for a prime congruence P on []. Let 𝔲=[ log(a); u ] be the exponent vector of aχ^u. Then
1_≥ |aχ^u|_P is equivalent to
[ v_0,𝔲; v_1,𝔲; ⋮; v_k,𝔲 ]≤_lex[ 0; 0; ⋮; 0 ],
which happens whenever
v_0,𝔲 < 0 , or
v_0,𝔲 = 0 and v_1,𝔲 < 0, or
⋮
v_0,𝔲 = 0 , …, v_k-2,𝔲= 0 , and v_k-1,𝔲<0 , or
v_0,𝔲 = 0 , …, v_k-1,𝔲= 0 , and v_k,𝔲≤ 0.
Equivalently,
v_0,𝔲 < 0 , or
v_0,𝔲 ≤ 0 and v_1,𝔲 < 0, or
⋮
v_0,𝔲 ≤ 0 , …, v_k-2,𝔲≤ 0 , and v_k-1,𝔲<0 , or
v_0,𝔲 ≤ 0 , …, v_k-1,𝔲≤ 0 , and v_k,𝔲≤ 0.
For convenience, we label the conditions in (<ref>) as (<ref>.0), (<ref>.1), …, (<ref>.k-1), and (<ref>.k). Similarly, we label the conditions in (<ref>) as (<ref>.0), (<ref>.1), …, (<ref>.k-1), and (<ref>.k).
Suppose ' is obtained from by downwards gaussian elimination. The proof of Lemma <ref> not only shows that 𝔲≤_lex0 if and only if '𝔲≤_lex0, but that, for 0≤ i≤ k, (<ref>.i) is satisfied for 𝔲≤_lex0 if and only if it is satisfied for '𝔲≤_lex0. Since (<ref>.i) is satisfied exactly if (<ref>.j) is satisfied for some j≤ i, we also get that (<ref>.i) is satisfied for 𝔲≤_lex0 if and only if it is satisfied for '𝔲≤_lex0.
The following proposition gives a geometric interpretation to the above conditions. In order to state it, we define R^∘(f,g)={ w∈_≥0× N_:f( w)>g( w)}. When we are considering a half-space R(1_,aχ^u), R^∘(1_,aχ^u) is the corresponding strict half-space.
Let _∙ be a simplicial flag of cones, and let P=P__∙. For any term aχ^u∈[], 1_κ(P)≥|aχ^u|_P if and only if
_0⊆R^∘(1_, a χ^u) , or
_1⊆R^∘(1_, a χ^u), or
⋮
_k-1⊆R^∘(1_, a χ^u), or
_k⊆R(1_, a χ^u).
As before, we label the conditions in (<ref>) as (<ref>.0), (<ref>.1), …, (<ref>.k-1), and (<ref>.k). Note that, if (<ref>.i) is satisfied, then, by taking closures in _≥0× N_, we get that _i⊆R(1_,aχ^u) and so also _j⊆R(1_,aχ^u) for all j≤ i. In particular, if 1_κ(P)≥|aχ^u|_P then _0⊆R(1_,aχ^u).
Fix a matrix =[ v_0; v_1; ⋮; v_k ] such that _∙=_∙().
Let aχ^u be any term of [] and let 𝔲=[ log(a); u ] be the exponent vector of aχ^u.
Suppose that 1_κ(P)≥|aχ^u|_P, and, specifically, (<ref>.i) is satisfied. If i=k then v_0,…, v_k are contained in the half-space R(1_,aχ^u), so the cone _k that they generate is also contained in R(1_,aχ^u), i.e., (<ref>.k) is satisfied. So now suppose i<k, and consider any w∈_i. Since _i is the simplicial cone generated by v_0, v_1,…, v_i, we can write w=_j=0^i _j v_j with _j>0. Since we have v_j,𝔲≤ 0 for j<i and v_i,𝔲<0, we have
w,𝔲=_j=0^i_j v_j,𝔲≤_i v_i,𝔲<0,
so w∈R^∘(1_,aχ^u). Thus (<ref>.i) is satisfied.
Now suppose that (<ref>.i) is satisfied for some 0≤ i≤ k. If i=k then v_0,…, v_k∈_k⊆R(1_,aχ^u), so v_j,𝔲≤0 for all 0≤ j≤ k, i.e., (<ref>.k) is satisfied. So now suppose i<k, giving us _i⊆R^∘(1_,aχ^u) and thus _i⊆R(1_,aχ^u). In particular, for all j≤ i, v_j∈_i⊆R(1_,aχ^u) gives us that v_j,𝔲≤ 0. Pick a point w∈_i, so w,𝔲<0. Since w is in the relative interior of the simplicial cone generated by v_0,…, v_i, we can write w=_j=0^i_j v_j with _j>0. Since
0> w,𝔲=_j=0^i_j v_j,𝔲,
we must have that v_j,𝔲<0 for some j<i. Then (<ref>.j) is satisfied.
With this background, we are now prepared to show that _P and _P are prime filters.
Let P be a prime congruence on []. Then _P is a prime filter on the lattice of . If P∈[], then ℱ = ℱ_P is a prime filter on the .
We need to check that =_P satisfies the conditions in the definition of prime filter on a lattice, namely, (1) is not the whole lattice, (2) is not empty, (3) if U_1,U_2∈ then U_1∪U_2∈, (4) if U∈ and V⊇U is a Γ-admissible , then V∈_P, and
(5) if U=_i=1^mU_i is in and each U_i is a Γ-admissible , then some U_i_0 is in .
(1) Write P=P__∙ for some simplicial flag _∙ of cones. If U∈ then _0⊆U. So, for any Γ-admissible ray ρ≠_0, ρ∉.
(2) We have _≥0× N_=R(1_,0_), so _≥0× N_∈.
(3) Say U_1,U_2∈, so U_1⊇_j∈ J_1R(1_,a_jχ^u_j) and U_2⊇_j∈ J_2R(1_,a_jχ^u_j) for some finite indexing sets J_1 and J_2. Then U_1∩U_2⊇_j∈ J_1∪ J_2R(1_,a_jχ^u_j), so U_1∩U_2∈.
(4) This is immediate from the definition of .
(5) Suppose U = ∪_i=1^m U_i is in with each U_i a Γ-admissible ; we want to show that some U_i is in . By writing each U_i as a finite union of Γ-admissible cones, we may assume without loss of generality that each U_i is a Γ-admissible cone. Let n_0 = 0. Note that we can write
U_i = ⋂_l=n_i-1+1^n_iR(1_, a_l χ^u_l),
for some increasing sequence of integers n_0<n_1<⋯<n_m and terms a_lχ^u_l∈[].
Since U ∈, there are terms c_1χ^u_1,…,c_qχ^u_q∈[] such that U ⊇∩_j=1^q R(1_, c_j χ^μ_j), and 1_≥ |c_j χ^μ_j|_P. For 1 ≤ l ≤ n_m, set
b_lχ^θ_l = a_l χ^u_l if 1_≥ |a_l χ^u_l|_P
a_l^-1χ^-u_l otherwise.
We have 1_≥ |b_lχ^θ_l|_P for 1 ≤ l ≤ n_m, so the cone
= ⋂_l=1^n_mR(1_, b_lχ^θ_l) ∩⋂_j=1^q R(1_, c_j χ^μ_j)
is in . Also, ⊆⋂_j=1^q R(1_, c_j χ^μ_j) ⊆U.
To finish the proof it is enough to see the following claim:
for any i, either ⊆ U_i or U_i ∩() = ∅. For, if this claim holds, then the fact that ⊆U=_i=1^mU_i implies that is contained in some U_i_0.
To prove the claim, consider the linear
hyperplane arrangement in _≥0× N_ℝ with hyperplanes
{(r,ξ)∈_≥0× N_ℝ : rlog(a_l) + < ξ, u_l> = 0} for all l and
{(r,ξ)∈_≥0× N_ℝ : rlog(c_j) + < ξ, μ_j> = 0} for all j.
This hyperplane arrangement defines a fan Σ such that ∈Σ and the support of Σ is _≥0× N_ℝ. Moreover, for every i, U_i is the support of a subfan Σ_i of Σ. Since U_i is the disjoint union of (𝔇) for 𝔇∈Σ_i and ∈Σ, if () meets U_i, then ∈Σ_i, implying that ⊆U_i.
The proof for =_P is analogous.
Theorem <ref> immediately gives us the following corollary.
For prime congruences P on [], _P is determined by which Γ-admissible cones it contains. For P∈[], _P is determined by which Γ-rational polyhedra it contains.
§.§ Flags and filters.
We now directly relate the filters and back to flags of polyhedra and cones. We start with a lemma that will facilitate our inductive argument.
Let _∙=(_0≤⋯≤_k) be a simplicial flag of cones, let P=P__∙, and let =_P. For any 0≤ j<k, consider the truncation _∙^(j), let P'=P__∙^(j), and let '=_P'. Then ⊆'.
It suffices to show the result for j=k-1; the general result follows by induction.
Recall that Proposition <ref> gives explicit geometric conditions for when 1_κ(P)≥|aχ^u|_P.
For convenience, we now state the corresponding conditions in the case where P is replaced by P'.
That is, 1_κ(P')≥|aχ^u|_P' if and only if
_0⊆R^∘(1_, a χ^u) , or
_1⊆R^∘(1_, a χ^u), or
⋮
_k-2⊆R^∘(1_, a χ^u), or
_k-1⊆R(1_, a χ^u).
As before, we label the conditions in (<ref>) as (<ref>.0), (<ref>.1), …, (<ref>.k-2), and (<ref>.k-1).
Suppose U∈, so there are terms a_1χ^u_1,…,a_nχ^u_n∈[] such that U⊇_l=1^nR(1_,a_lχ^u_l) and 1_κ(P)≥|a_lχ^u_l|_P for 1≤ l≤ n.
Note that, to get that U∈', it suffices to show that 1_κ(P')≥|a_lχ^u_l|_P' for each 1≤ l≤ n. Fix such an l.
Since 1_κ(P)≥|a_lχ^u_l|_P, there is some 0≤ i_l≤ k such that a_lχ^u_l satisfies (<ref>.i_l).
If i_l=k then _k-1⊆_k⊆R(1_, a_l χ^u_l), so a_lχ^u_l satisfies (<ref>.k-1), giving us 1_κ(P')≥|a_lχ^u_l|_P'. If i_l=k-1 then, by Remark <ref>, _k-1⊆R(1_, a_l χ^u_l), i.e, a_lχ^u_l satisfies (<ref>.k-1), and so 1_κ(P')≥|a_lχ^u_l|_P' again.
So now suppose i_l<k-1. Then (<ref>.i_l) and (<ref>.i_l) are the same condition, so we get that 1_κ(P')≥|a_lχ^u_l|_P'.
For any prime P on [], let =_P and pick a simplicial flag _∙ of cones such that P=P__∙. Then, for any Γ-admissible cone U in _≥0× N_, U∈ if and only if U is a neighborhood of _∙.
If P ∈[] then we can write P=P__∙ for some flag _∙ of polyhedra and consider =_P. For any Γ-rational polyhedron U in N_, U∈ℱ if and only if U is a neighborhood of the flag 𝒫_∙.
We first consider and _∙.
Suppose that U is a neighborhood of _∙, so for 0≤ i≤ k we can pick a point w_i∈U∩(_i). The matrix =[ w_0; ⋮; w_k ] defines a homomorphism :[]→k+1 and, by Definition <ref>, P=. Thus, for any f,g∈[], |f|_P≤|g|_P if and only if (f)≤(g). To get that U∈, we need to show that there are terms a_1χ^u_1,…,a_nχ^u_n∈[] such that U⊇R(1_, a_1χ^u_1, …, a_nχ^u_n) = ⋂_l=1^n R(1_, a_lχ^u_l) and 1_κ(P)≥|a_lχ^u_l|_P.
Since U is Γ-admissible, there are _1=(α_1,u_1),…,_n=(α_n,u_n)∈Γ× such that U={_1,…,_n}^∨=_l=1^nR(1_,a_lχ^u_l), where a_l=t^α_l∈^×. Since _l is the exponent vector of a_lχ^u_l, we have that 1_κ(P)≥|a_lχ^u_l|_P if and only if
[ 0; ⋮; 0 ]≥_lex_l=[ w_0,_l; ⋮; w_k,_l ].
Since each w_i∈U={_1,…,_n}^∨, we have that each w_i,_l is non-positive, so we get that 1_κ(P)≥|a_lχ^u_l|_P. Thus, U∈.
For the other direction, suppose U∈. We proceed to prove that U is a neighborhood of _∙=(_0≤⋯≤_k) by induction on k. For the base case of k=0, applying Proposition <ref> shows that U∈ gives us that there are a_1χ^u_1,…,a_nχ^u_n∈[] such that U⊇_l=1^nR(1_,a_lχ^u_l)⊇_0, so U is a neighborhood of _∙=(_0). For the inductive step, suppose that k≥1 and the statement is true for k-1.
Fix a_1χ^u_1,…,a_nχ^u_n∈[] such that U⊇_l=1^nR(1_,a_lχ^u_l) and 1_κ(P)≥|a_lχ^u_l|_P. In particular, :=_l=1^nR(1_,a_lχ^u_l)∈ and it suffices to show that is a neighborhood of _∙.
Consider the truncation _∙^(k-1) of _∙, let P'=P__∙^(k-1), and let '=_P'. Lemma <ref> tells us that
∈'
and so, by the inductive hypothesis,
is a neighborhood of _∙^(k-1). Since
is a neighborhood of _∙ if and only if it is a neighborhood of _∙^k-1 and meets _k_k-1, it now suffices to show that
meets _k_k-1.
For each 1≤ l≤ n, 1_κ(P)≥|a_lχ^u_l|_P, so a_lχ^u_l satisfies (<ref>.i_l) for some 0≤ i_l≤ k. If i_l=k then _k⊆R(1_,a_lχ^u_l), so
∩(_k_k-1) = (⋂_1≤≤ n, ≠ l R(1_,a_χ^u_)) ∩(_k_k-1).
Hence, it suffices to show that ⋂_1≤≤ n, ≠ l R(1_,a_χ^u_) meets _k_k-1.
Since is a neighborhood of _∙^(k-1) and 0≤ i_l≤ k-1, we can pick a point w_l∈(_i_l)∩, so, in particular, w_l∈(_i_l)⊆R^∘(1_,a_lχ^u_l).
Also, w_l∈_k∩.
Fix v∈_k_k-1. Since w_l is in the interior of R(1_,a_lχ^u_l), there is some ϵ_l>0 such that w_l+[0,_l] v⊆R(1_,a_lχ^u_l). Letting ϵ=min_l ϵ_l, we get that w_l+ϵ v∈R(1_,a_lχ^u_l) for all l. Also, recall that each w_l∈=_=1^nR(1_,a_χ^u_).
Consider the point ω=ϵ v+_l=1^n w_l. If we fix l then we can write ω=( w_l+ϵ v)+_≠ l w_ with each of w_l+ v and w_ in the linear half-space R(1_,a_lχ^u_l), so ω∈R(1_,a_lχ^u_l). Since this is true for all l, we conclude that ω∈_l=1^nR(1_,a_lχ^u_l)=. Since _l=1^n w_l∈_k and v∈_k_k-1, we also have ω∈_k_k-1. So meets _k_k-1, concluding the proof that U∈ if and only if U is a neighborhood of _∙.
Now suppose P∈[] and consider _∙ and as in the statement of the theorem. Letting _∙=c(_∙), we have P=P__∙. So, using the earlier part of the theorem, we have
U∈ c(U)∈
c(U) is a neighborhood of _∙=c(_∙)
U is a neighborhood of _∙.
Consider simplicial flags _∙ and _∙' of cones and let P=P__∙ and P'=P__∙'. Then _P=_P' if and only if _∙ and _∙' are locally equivalent.
If P=P__∙ and P'=P__∙' for some flags _∙ and _∙' of polyhedra, then _P=_P' if and only if _∙ and _∙' are locally equivalent.
This is immediate from Corollary <ref> and Theorem <ref>.
Theorem <ref> allows us to consider examples of primes in [] and understand their filters via flags _∙ of polyhedra.
Let =, Γ =, and = ^2.
Let C=[ 1 0 0 ],
C'=[ 1 √(2) 0 ], and
C”=[ 1 √(2) √(3) ], and let
_∙,
_∙', and
_∙”
be the corresponding flags of polyhedra in N_=^2. Let P,P', and P” be the corresponding prime congruences in [x^±1,y^±1], and let ,', and ” be the corresponding prime filters on the . Each of
_∙,
_∙', and
_∙” consists of a single point whose coordinates are given by the second and third entries of the corresponding matrix. However, the corresponding filters look different.
The filter consists of the zero-dimensional polyhedron {(0,0)} and every rational polyhedral set containing it. On the other hand, ' contains no zero-dimensional polyhedron: any rational polyhedron in ^2 containing (√(2),0) must contain an open interval on the x-axis. Similarly, ” consists only of two-dimensional polyhedral sets.
Still in the case where = and = ^2, we
now consider the matrices C= [ 1 √(2) 0; 0 1 √(3) ] and C' = [ 1 √(2) 0; 0 0 1 ] and let _∙ and _∙' be the corresponding flags of polyhedra, which are depicted in Figure <ref>. Then the corresponding filters and ' are the same: they both consist of those rational polyhedral sets which contain (r_1,r_2)×[0,r_3) for some r_1,r_2,r_3∈ with r_1<√(2)<r_2 and r_3>0. One can see this visually depicted in Figures <ref> and <ref>. We will show in Corollary <ref> that this implies that C and C' define the same point P∈[x^±1,y^±1]. One can also check this by hand, by verifying that they give the same preorder on [x^±1,y^±1], even though neither of C,C' can be obtained from the other by downward gaussian elimination as in Lemma <ref>.
Consider the prime congruences on [x^±1,y^±1] given by the matrices C_1 = [ 1 √(2) √(3) ], C_2 = [ 1 0 0; 0 1 0 ], C_3 = [ 1 0 0; 0 1 √(2) ], and C_4 = [ 1 0 0; 0 1 0; 0 0 1 ]. The corresponding flags of polyhedra and filters are pictured in Figure <ref>.
We can formalize the notion of the dimension of the “infinitesimal neighborhoods” in the figures above by considering the smallest possible dimension of a polyhedron in _P. See Proposition <ref> for a way to compute this dimension as an algebraic invariant of P.
§.§ From filters back to congruences.
We show that the points of [] are in bijection with prime filters on the lattice of Γ-rational polyhedral sets, where the filter contains some polytope.
One may hope that there is a geometrically meaningful bijection between the points of [] and prime filters on the lattice of Γ-rational polyhedral sets. To see why this is not true, consider the following example:
N_ℝ = ℝ^2, and σ = { (0, x)∈ℝ^2 : x≥ 0},
ℱ = { U : U a Γ-rational polyhedral set with U ⊇σ+(0,y) for some y∈ℝ}.
One can verify that ℱ is a prime filter, however, it can only correspond to some point at infinity on the positive y-axis, and there is no such point in [].
The following lemma ensures that the prime filter of Remark <ref> does not occur as _P for any P∈[].
For any P∈[], _P contains a polytope.
Write P=P__∙ for some flag _∙ of polyhedra. Let U be any full-dimensional Γ-rational polytope with the point _0 in its interior. Then U meets the relative interior of any polyhedron containing _0,
so U is a neighborhood of _∙. By Theorem <ref>, U∈_P.
A Γ-rational polytopal set is a finite union of Γ-rational polytopes.
The set of Γ-rational polytopal sets in N_ forms a lattice when ordered by inclusion. In this lattice, meet and join are given by intersection and union, respectively.
There is a bijection between the set of prime filters on the lattice of Γ-rational polyhedral sets, where the filter contains some polytope and the set of prime filters on the lattice of Γ-rational polytopal sets.
Given a prime filter on the , it generates a filter ' on the such that ' contains a polytope, and a polytopal set is in if and only if it is in '. Moreover, ' is also prime. Indeed, say U = U_1 ∪ U_2 ∈' with U_1 and U_2 both Γ-rational. Letting V∈' be a polytope we have U∩ V = (U_1 ∩ V) ∪ (U_2 ∩ V) with U∩ V, U_1 ∩ V, and U_2 ∩ V polytopal and U∩ V∈. So either, U_1 ∩ V ∈ or U_2 ∩ V ∈, giving us that U_1 ∈' or U_2 ∈'.
It remains only to show that if ' is a filter on the such that ' contains a polytope, then the restriction of ' to the is a prime filter . We check that each of the five conditions for a prime filter is satisfied.
(1) Since every Γ-rational polyhedron contains a Γ-rational polytope and ' is not all of the , is not all of the .
(2) By hypothesis, is nonempty.
(3) If U_1, U_2 ∈, then U_1, U_2 ∈' and U_1, U_2 are polytopal, so U_1 ∩ U_2 ∈' and U_1 ∩ U_2 is polytopal, implying that U_1 ∩ U_2 ∈.
(4) If U ∈ and V is a Γ-rational polytopal set which contains U, then V ⊇ U ∈', implying that V ∈'. Since V is polytopal this gives V ∈.
(5) If U=∪_i=1^m U_i∈ and each U_i is a Γ-rational polytopal set, then U ∈' with each U_i a Γ-rational polyhedral set, so some U_i_0∈'. Since U_i_0 is polytopal, we have U_i_0∈.
Given a prime filter on the , we define a relation ≤_ on [] by saying that aχ^u≤_bχ^λ if there is some U∈ such that w,[ log(a); u ]≤w,[ log(b); λ ] for all w∈U, i.e., U⊆R(bχ^λ,aχ^u). Similarly, if is a filter on the , we have the relation ≤_ on [] defined by aχ^u≤_ bχ^λ if there is a U ∈ such that log(a) + < ξ, u> ≤log(b) + < ξ, λ> for all ξ∈ U, that is, U ⊆ R(bχ^λ, aχ^u). Note that, if ={c(U) : U∈}, then ≤_ and ≤_ are the same.
Since is a filter, we have that aχ^u≤_bχ^λ if and only R(bχ^λ,aχ^u)∈. Similarly, aχ^u≤_ bχ^λ if and only if R(bχ^λ,aχ^u)∈.
For any prime filter on the , ≤_ is ≤_P for some prime congruence P on [].
For any prime filter on the such that contains a polytope, the relation ≤_ is ≤_P for some P ∈[].
We begin by verifying that ≤_ satisfies conditions (1), (2), and (3) in Proposition <ref>.
(1) (≤_ is a total preorder) The relation is clearly reflexive. We need to show transitivity. Suppose aχ^u ≤_ bχ^λ and bχ^λ≤_ cχ^μ, so there are U_1, U_2 ∈ such that
w,[ log(a); u ]≤w,[ log(b); λ ] for all w∈U_1 and
w,[ log(b); λ ]≤w,[ log(c); μ ] for all w ∈U_2.
Then U_1 ∩ U_2 ∈ and w,[ log(a); u ]≤w,[ log(c); μ ] for all w∈U_1∩U_2
so aχ^u ≤_ cχ^μ. Thus, ≤_ is a preorder. To see that it is a total preorder, suppose that aχ^u ≰_ bχ^λ. Then for every U∈, U⊈R(bχ^λ,aχ^u). We now consider two cases: either there is some U∈ such that U⊆R( aχ^u, bχ^λ) or, for all U∈ both U∩R( aχ^u, bχ^λ) and U∩ R(bχ^λ, aχ^u) are non-empty.
In the first case aχ^u ≥_ bχ^λ and we are done. In the second case, fix any U∈. Since
U = (U∩R( aχ^u, bχ^λ)) ∪ (U∩R(bχ^λ, aχ^u)),
we get that either U∩R( aχ^u, bχ^λ) ∈ or U∩R(bχ^λ, aχ^u) ∈, because is prime. However, since all V∈ are not contained in R(bχ^λ,aχ^u), we must have U∩R(aχ^u, bχ^λ) ∈. Thus, we have aχ^u ≥_ bχ^λ.
(2) (≤_ is multiplicative) Let aχ^u, bχ^λ and cχ^μ be terms in [], with aχ^u ≤_ bχ^λ. Then there exists U∈ such that U⊆R(bχ^λ,aχ^u)=R(bχ^λ cχ^μ,aχ^u cχ^μ), so aχ^u cχ^μ≤_bχ^λ cχ^μ.
(3) (≤_ respects the order on ) Let a, b ∈^× such that a ≤ b. Then every U∈ is in R(b,a)=_≥0× N_.
Thus, ≤_ is ≤_P for some prime congruence P on [].
Now consider ≤_. Applying the above to ={c(U) : U∈}, we get that ≤_, which is the same as ≤_, satisfies conditions (1), (2), and (3) in Proposition <ref>. So it suffices to show that ≤_ satisfies condition (4) from that proposition.
Consider a term aχ^u ∈[]. Fixing a polytope U ∈, the set {log(a) + < ξ, u> : ξ∈ U } has a minimum value γ∈. Pick β < γ in Γ and set b = t^β∈^×. Then b <_ aχ^u.
In light of Proposition <ref>, and because a prime congruence P on [] is determined by the relation ≤_P on [], we can make the following definitions.
Given a prime filter on the , the prime congruence P_ defined by is the unique prime congruence P on [] such that ≤_ equals ≤_P. For any prime filter on the such that contains a polytope, the prime congruence P_ defined by is the unique P∈[] such that ≤_ equals ≤_P.
Consider elements a, a_1,a_2,…,a_n∈^× and u,u_1, u_2,…,u_n∈, and suppose that ⋂_l=1^nR(1_,a_lχ^u_l) is nonempty. Then ⋂_l=1^n R(1_,a_lχ^u_l)⊆
R(1_,aχ^u) if and only if there are
m_1,m_2,…,m_n∈_≥0, m∈_>0, and b∈
with b≥ 1_ such that b(aχ^u)^m=∏_l=1^n(a_lχ^u_l)^m_l.
Note that b(aχ^u)^m=∏_l=1^n(a_lχ^u_l)^m_l is equivalent to having
log b + mlog a = ∑_l=1^n m_l log a_l and mu = ∑_l=1^n m_l u_l,
which is in turn equivalent to the existence of nonnegative rational numbers r_1, …, r_n, such that
log a ≤∑_l=1^n r_llog a_l and u = ∑_l=1^n r_l u_l.
By <cit.> applied with the extension of ordered fields /, this happens exactly if
⟨ x,-u⟩≥log a is valid for all x in
{x∈ N_ : ⟨ x,-u_l ⟩≥log a_l, for 1≤ l≤ n}=⋂_l=1^n R(1_, a_lχ^u_l),
i.e., if
⋂_l=1^nR(1_, a_lχ^u_l)⊆ R(1_,aχ^u).
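In down-to-earth terms, the lemma is a Farkas-type duality statement over ℝ, and it can be probed numerically with linear programming. The sketch below is our own illustration (not part of the paper): using the description R(1,aχ^u) = {x : ⟨x,−u⟩ ≥ log a} appearing in the proof, it checks containment of the intersection in a further half-space by minimizing ⟨x,−u⟩ over the intersection, and it searches for a certificate (r_1,…,r_n) ≥ 0 with Σ_l r_l u_l = u and Σ_l r_l log a_l ≥ log a; the two reports should agree in each case.

```python
import numpy as np
from scipy.optimize import linprog

def contained(U, loga, u, loga0):
    """Is {x : <x,-u_l> >= log a_l for all l} contained in {x : <x,-u> >= log a}?
    Decided by minimizing <x,-u> over the (assumed nonempty) intersection."""
    # <x,-u_l> >= log a_l  is the same as  <x,u_l> <= -log a_l
    res = linprog(c=-np.asarray(u, float), A_ub=np.asarray(U, float),
                  b_ub=-np.asarray(loga, float), bounds=[(None, None)] * len(u))
    if res.status == 3:          # objective unbounded below: containment fails
        return False
    return res.status == 0 and res.fun >= loga0 - 1e-9

def certificate(U, loga, u, loga0):
    """Look for r >= 0 with sum_l r_l u_l = u and sum_l r_l log a_l >= log a."""
    U = np.asarray(U, float)
    res = linprog(c=-np.asarray(loga, float), A_eq=U.T, b_eq=np.asarray(u, float))
    if res.status == 3:          # sum_l r_l log a_l can be made arbitrarily large
        return True
    return res.status == 0 and -res.fun >= loga0 - 1e-9

# two half-spaces in R^2 and two candidate half-spaces to test against
U, loga = [(1, 0), (0, 1)], [0.0, 0.0]
for u, loga0 in [((1, 1), -1.0), ((1, -1), 0.0)]:
    print(contained(U, loga, u, loga0), certificate(U, loga, u, loga0))
```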
The version of Lemma <ref> with R in place of R is false. For example, let = and =. Then R(1_,t^1x^-1)∩R(1_,t^1)⊆R(1_,t^1x^-2), but there are no m_1,m_2∈_≥0, m∈_>0, and b∈ with b≥ 1_ such that b(t^1x^-2)^m=(t^1x^-1)^m_1(t^1)^m_2.
In light of Remark <ref>, the proof of the following proposition requires case-work.
For any prime congruence P on [], R(1_,aχ^u)∈_P if and only if 1_κ(P)≥|aχ^u|_P. If P∈[] then R(1_,aχ^u)∈_P if and only if 1_κ(P)≥|aχ^u|_P.
The “if” directions in the above statements follow immediately from the definitions of _p and _P. For the other directions, note that the statements are trivially true if a=0_, so assume a≠0_.
Suppose P∈[] and R(1_,aχ^u)∈_P. So there are terms a_1χ^u_1,…,a_nχ^u_n such that R(1_, aχ^u) ⊇∩_l=1^n R(1_, a_lχ^u_l) and |a_lχ^u_l|_P≤ 1_κ(P) for each l. Lemma <ref> tells us that ∩_l=1^n R(1_, a_lχ^u_l) is nonempty. So, by Lemma <ref>, there are m_1,m_2,…,m_n∈_≥0, m∈_>0, and b∈ with b≥1_ such that b(aχ^u)^m=∏_l=1^n(a_lχ^u_l)^m_l. Then b^-1≤ 1_ and
|aχ^u|_P^m=|b^-1|_P_l=1^n|a_lχ^u_l|_P^m_l≤ 1_κ(P),
so |aχ^u|_P≤ 1_κ(P).
Now consider a general prime congruence P on [], and suppose R(1_,aχ^u)∈_P. There are terms a_1χ^u_1,…,a_nχ^u_n such that R(1_, aχ^u) ⊇∩_l=1^n R(1_, a_lχ^u_l) and |a_lχ^u_l|_P≤ 1_κ(P) for each l. Let _∙=(_0≤⋯≤_k) be a simplicial flag of cones such that P=P__∙. Theorem <ref> tells us that there is some U∈_P with U⊆{0}× N_ if and only if _k⊆{0}× N_. We consider two cases, as to whether this happens or not.
Suppose that _k⊈{0}× N_. Since R(1_,a_1χ^u_1,…,a_nχ^u_n)=∩_l=1^n R(1_, a_lχ^u_l) and R(1_,aχ^u) are in _P, neither of them is contained in {0}× N_. Therefore, R(1_,aχ^u)=c(R(1_,aχ^u)) and R(1_,a_1χ^u_1,…,a_nχ^u_n)=c(R(1_,a_1χ^u_1,…,a_nχ^u_n)). So
R(1_,aχ^u)⊇ R(1_,a_1χ^u_1,…,a_nχ^u_n)=∩_l=1^n R(1_, a_lχ^u_l)
with ∩_l=1^n R(1_, a_lχ^u_l) nonempty. We can now apply Lemma <ref> and, as in the previous case, conclude that |aχ^u|_P≤ 1_κ(P).
Finally, suppose that _k⊆{0}× N_. Then, by the definition of P__∙, |b|_P=1_κ(P) for any b∈^×. Note that R(1_,aχ^u)∩({0}× N_)={0}× u^∨, where u^∨={ζ∈ N_ : ζ,u≤0_}. Similarly, ∩_l=1^nR(1_,aχ^u)∩({0}× N_)={0}×{u_1,…,u_n}^∨, so we have that {u_1,…,u_n}^∨⊆ u^∨. So, by the -version of <cit.>,
there are m_1,m_2,…,m_n∈_≥0 and m∈_>0 such that mu=∑_l=1^n m_lu_l. Thus,
|aχ^u|_P^m=|χ^u|_P^m=_l=1^n|χ^u_l|_P^m_l=_l=1^n|a_lχ^u_l|_P^m_l≤ 1_κ(P),
so |aχ^u|_P≤ 1_κ(P).
The map P↦_P gives a bijection from the set of prime congruences on [] to the set of prime filters on the . The inverse map is given by ↦_P.
The map P↦_P gives a bijection from [] to the set of prime filters on the where the filter contains some polytope. The inverse map is given by ↦ P_.
Note that, by Theorem <ref>, Lemma <ref>, and Definition <ref>, all of these maps are well-defined.
We show that P↦_P and ↦ P_ are inverses. The proof for P↦_P and ↦ P_ is analogous.
To show that P=P__P for any P∈[], it suffices to show that aχ^u≤_P bχ^λ if and only if aχ^u≤_P__P bχ^λ for any terms aχ^u, bχ^λ∈[]. Since bχ^λ is a unit in [], we can divide both inequalities by bχ^λ, and so we may assume without loss of generality that bχ^λ=1_. By Proposition <ref>, we know that aχ^u≤_P 1_ if and only if R(1_,aχ^u)∈_P.
By the definitions of P__P and ≤__P,
R(1_,aχ^u)∈_P if and only if aχ^u≤__P1_ which, in turn, is the same as having aχ^u≤_P__P1_.
Now fix a prime filter on the such that contains a polytope; we wish to show that =_P_. Since and _P_ are prime filters on the , they are determined by which Γ-rational polyhedra they contain. Using the fact that and _P_ are both filters, we then see that they are determined by which Γ-rational half-spaces they contain. But every Γ-rational half-space can be written as R(1_,aχ^u) for some term aχ^u∈[], so it suffices to show that R(1_,aχ^u)∈ if and only if R(1_,aχ^u)∈_P_. By Proposition <ref>, R(1_,aχ^u)∈_P_ if and only if aχ^u ≤_P_ 1_. By the definitions of P_ and ≤_, this happens if and only if R(1_,aχ^u)∈.
Let _∙ and _∙' be simplicial flags of cones. Then P__∙=P__∙' if and only if _∙ and _∙' are locally equivalent.
If P=P__∙ and P'=P__∙' for some flags _∙ and _∙' of polyhedra, then P=P' if and only if _∙ and _∙' are locally equivalent.
By Theorem <ref>, P=P' if and only if _P=_P'. Corollary <ref> tells us that this happens exactly if _∙ and _∙' are locally equivalent.
The proof of the second statement is analogous.
Let be a subsemifield of and let ≅^n.
There are explicit bijections between
* [],
* the set of flags _∙ of polyhedra in N_ modulo local equivalence,
* the set of prime filters on the such that the filter contains a polytope, and
* the set of prime filters on the .
These bijections are given as follows. The map from (2) to (1) sends a flag _∙ to the prime congruence
P__∙' where _∙' is any flag of polyhedra which is locally equivalent to _∙ and for which c(_∙') is simplicial.
The map from (1) to (3) is P↦_P. The map from (3) to (4) sends to
the filter
{U∈ : U is polytopal}.
This follows immediately from
Proposition <ref>,
Proposition <ref>, Theorem <ref>, and Corollary <ref>.
Let be a subsemifield of and let ≅^n.
There are explicit bijections between
* the set of all prime congruences on [],
* the set of flags _∙ of cones in _≥0× N_ modulo local equivalence,
* the set of prime filters on the .
These bijections are given as follows. The map from (2) to (1) sends a flag _∙ to the prime congruence
P__∙', where _∙' is any simplicial flag of cones which is locally equivalent to _∙.
The map from (1) to (3) is P↦_P.
This follows immediately from
Proposition <ref>,
Theorem <ref>, and Corollary <ref>.
There is a bijection from the set of prime congruences P on [] such that the map →[]→κ(P) is injective and the set of prime filters on the .
This bijection extends the bijection P↦_P from [] to the set of prime filters on the such that the filter contains a polytope.
Note that the map ↦{c(U) : U∈} is a bijection from the set of prime filters on the to the set of prime filters on the such that every U∈ is not contained in {0}× N_. So it suffices to show that, if P is a prime congruence on [], the map →[]→κ(P) is injective if and only if every U∈_P is not contained in {0}× N_.
Pick a simplicial flag _∙=(_0≤⋯≤_k) of cones such that P=P__∙ and fix any w_i∈_i⊆_i_i-1 for 0≤ i≤ k. Thus, [ w_0; ⋮; w_k ] is a defining matrix for P, and so the map →[]→κ(P) is injective if and only if the first column of this matrix is not all zero, i.e., if w_i∉{0}× N_ for some i. Since w_i∈_i, w_i∉{0}× N_ if and only if _i⊈{0}× N_. So →[]→κ(P) is injective if and only if _k⊈{0}× N_.
But, by Theorem <ref>, _k⊈{0}× N_ if and only if every U∈_P is not contained in {0}× N_.
Under the bijection of Corollary <ref>, the prime filter given in Remark <ref> corresponds to the prime congruence on [x^±1,y^±1] with defining matrix [ 0 0 1; 1 0 0 ].
Let ≅^n.
There are explicit bijections between
* the set of all prime congruences on [],
* the set of flags _∙ of cones in N_ modulo rational local equivalence,
* the set of prime filters on the lattice of rational fan support sets in N_,
* the set of all monomial preorders on a Laurent polynomial ring in n variables.
The bijections between (1), (2), and (3) are given by applying the bijections of Corollary <ref> as follows. Given a prime congruence P on [], view it as a prime congruence on [] by pulling back along the map []→[]. Given a flag of cones in N_, consider it as a flag of cones in _≥0× N_ by adding a first coordinate 0. Given a prime filter on the lattice of rational fan support sets in N_, push it forward to a filter base on the lattice of -admissible fan support sets in _≥0× N_ via the map N_→_≥0× N_ adjoining a first coordinate 0, and let this filter base generate a filter. The map from (1) to (4) is given by sending a prime congruence P to the preorder <_P on []=.
The primes on [] which are pullbacks of primes on [] are those that are given by matrices with first column zero. This shows that the primes on [] are in bijection with local equivalence classes of flags of cones contained in N_×{0} modulo local equivalence, i.e., flags of cones in N_ modulo rational local equivalence. By Corollaries <ref> and <ref>, these are in bijection with the set of those prime filters on the lattice of -admissible fan support sets whose restriction to N_×{1} is not a prime filter because this restriction contains the empty set. The only way this can occur is if the filter contains a set that is contained in N_×{0}. This, in turn, happens if and only if the filter is generated by the pushforward of a prime filter on the lattice of rational fan support sets in N_ along the inclusion map N_→_≥0× N_ of the zero-slice.
The following corollary is an application of the above results. It provides an explicit criterion for when two matrices define the same prime.
Let C and C' be real matrices of sizes k× n and k'× n, respectively. Suppose that C and C' define prime congruences on [x_1^±1,…,x_n^±1], i.e., the first columns of C and C' are lexicographically at least the zero vector. Use downward gaussian elimination and removal of rows of zeros on C and C' to obtain matrices C and C', respectively, whose first columns have all entries non-negative and whose rows are linearly independent. Then C and C' define the same prime congruence on [x_1^±1,…,x_n^±1] if and only if the flags _∙(C) and _∙(C') of cones are locally equivalent.
By Lemma <ref>, the prime congruences that C and C' define are P__∙(C) and P__∙(C'), respectively. By Corollary <ref>, these are equal if and only if _∙(C) and _∙(C') are locally equivalent.
We now use Proposition <ref> to show that the smallest dimension of a polyhedron in _P is an algebraic invariant of P ∈[].
For any P ∈[],
min{ U : U ∈_P}
= (κ(P)^×/^×)
= (M)-(P)=(κ(P)/).
Lemma <ref> gives us the second equality in the proposition. By <cit.>, we get the equality with (κ(P)/).
Note that min{ U : U ∈_P} is attained for some Γ-rational polyhedron U∈_P, for which U is the dimension of the smallest Γ-rational affine space that contains U.
In particular, if we let be the subgroup of Γ× consisting of those (α,u) such that
the hyperplane {ξ∈ N_ : α+ξ,u=0}
is in _P, then min{ U : U ∈_P}= N_-.
Note that, for any term aχ^u∈[], the hyperplane {ξ∈ N_ : log(a)+ξ,u=0} is in exactly if R(1_,aχ^u) and R(1_,a^-1χ^-u) are both in _P. By Proposition <ref>, this happens exactly if 1_κ(P)=|aχ^u|_P.
Choose a defining matrix for P ∈[] of the form
[ 1 ξ_0; 0 ξ_1; ⋮ ⋮; 0 ξ_k ]
for some ξ_0,…,ξ_k∈ N_.
Then, looking at the conditions (<ref>.0),…,(<ref>.k) from Section <ref>, we see that 1_κ(P)=|aχ^u|_P if and only if
log(a)+ < ξ_0, u> = < ξ_1, u>= ⋯ =< ξ_k-1, u>=< ξ_k, u>=0.
We now see that the projection Γ× M → M maps isomorphically onto its image. Indeed, the values of ξ_0 and u uniquely determine log(a). So
() ={ u∈ M : < ξ_0, u> ∈Γ and < ξ_i, u> = 0, for 1 ≤ i ≤ k }
= ( (M →κ(P)^×/^×)) = M-(κ(P)^×/^×),
where the final equality holds because the map M→κ(P)^×/^× is surjective. Thus,
min{ U : U ∈_P} = N_ - = M - ( (M →κ(P)^×/^×))
= κ(P)^×/^×.
The previous result finally allows us to prove the geometric criterion of when ≤_P is a valuated monomial order.
A flag _∙=(_0≤_1≤⋯≤_k) of polyhedra in N_ is called complete if k= N_.
For any P ∈[], ≤_P is an order on [] if and only if we can write P=P__∙ with _∙ a complete flag in N_.
If _∙ is complete, then any Γ-rational neighborhood of _∙ must be full-dimensional because it contains N_+1 affinely independent points.
So _N_=min{ U : U ∈_P}=(M)-(P). Thus (P)=0 and so, by <cit.> ≤_P is an order on [].
For the other direction, suppose that ≤_P is an order on []. Let _∙=(_0≤_1≤⋯≤_k) be a flag of polyhedra in N_ such that P=P__∙ and k is as large as possible. Now assume, for contradiction, that k< N_.
For 0≤ i≤ k pick ξ_i∈_i_i-1, so
C=[ 1 ξ_0; 1 ξ_1; ⋮ ⋮; 1 ξ_k ]
is a defining matrix for P. Since k< N_, there is a point ξ_k+1 which is affinely independent of ξ_0,…,ξ_k. So the matrix
C'=([ 2cC; 1 ξ_k+1 ])
defines a flag _∙ '=(_0'≤_1'≤⋯≤_k+1') of simplices in N_.
For any two exponent vectors _1 and _2 we have C'_1≤_C'_2 if and only if C_1≤_C_2 because ≤_P is an order. Thus P__∙'=P, contradicting the maximality of k.
The same proof
shows that the statement of Proposition <ref> is also true when is any toric monoid.
alpha
11
AI22
O. Amini, H. Iriarte
Geometry of higher rank valuations, arXiv preprint arXiv:2208.06237 (2022)
CM19
A. Chan, D. Maclagan,
Groebner bases over fields with valuations, Mathematics of Computation, Vol 88, Number 315, 467–-483, http://dx.doi.org/10.1090/mcom/3321, (2019).
CC14
A. Connes and C. Consani,
The Arithmetic Site, C. R. Math. Acad. Sci. Paris 352 (2014), no. 12, 971–975. https://doi.org/10.1016/j.crma.2014.07.009.
CC16
A. Connes, C. Consani,
Geometry of the Scaling Site, Selecta Math. 23, no.3, 1803–-1850, (2017) DOI: s00029-017-0313-y.
Fri19
N. Friedenberg
Normal completions of toric varieties over rank one valuation rings and completions of Γ-admissible fans arXiv pre-print arXiv:1908.00064 (2019).
FM22
N. Friedenberg, K. Mincheva,
Tropical Adic Spaces I: The continuous spectrum of a topological semiring, arXiv pre-print 2209.15116.
GG16
J. Giansiracusa, N. Giansiracusa,
Equations of tropical varieties, Duke Math. J. 165, no. 18 (2016), 3379-3433; DOI: 00127094-3645544.
Gol92
J. Golan,
The theory of semirings with applications in mathematics and theoretical computer science, Longman Sci & Tech., 54, (1992).
Izh19
Z. Izhakian,
Commutative ν-algebra and supertropical algebraic geometry,
arXiv preprint arXiv:1901.08032 (2019).
IR10
Z. Izhakian, L. Rowen,
Supertropical algebra, Advances in Mathematics, Volume 225, Issue 4, 2222–2286, (2010), https://doi.org/10.1016/j.aim.2010.04.007.
JM17
D. Joó, K. Mincheva,
Prime congruences of idempotent semirings and a Nullstellensatz for tropical polynomials, Sel. Math. New Ser., doi:10.1007/s00029-017-0322-x, (2017).
Lor15
O. Lorscheid
Scheme theoretic tropicalization
arXiv preprint arXiv: 1508.07949, (2015).
MR18
D. Maclagan, F. Rincón,
Tropical ideals, Compositio Mathematica, Volume 154, Issue 3, 640 – 670, (2018)
DOI: S0010437X17008004.
PS95
M. van der Put, P. Schneider,
Points and topologies in rigid geometry, Math. Ann. 302, 81–103 (1995)
Rab10
J. Rabinoff,
Tropical analytic geometry, Newton polygons, and tropical intersections, Advances in Mathematics vol. 229, issue 6, 3192–3255, (2012).
Rob85
L. Robbiano,
Term orderings on the polynomial ring, EUROCAL 85, Vol. 2 (Linz, 1985), Lecture Notes in Comput. Sci. 204, 513-517, Springer, Berlin (1985)
R18
L. Rowen
An informal overview of triples and systems,
arXiv preprint ArXiv:1709.03174 (2018).
VV
T. Vaccon, T. Verron
Universal Analytic Gröbner Bases and Tropical Geometry,
forthcoming.
Zieg95
G. Ziegler,
Lectures on Polytopes, Graduate Texts in Mathematics (GTM, volume 152), Springer New York, (1995), DOI: 978-1-4613-8431-1.
|
http://arxiv.org/abs/2306.12539v1
|
20230621200051
|
On the Hill discriminant of Lamé's differential equation
|
[
"Hans Volkmer"
] |
math.CA
|
[
"math.CA",
"33E10, 34D20"
] |
Hans Volkmer
Department of Mathematical Sciences
University of Wisconsin - Milwaukee
[email protected]
Lamé's differential equation is a linear differential equation of the second order with a periodic coefficient involving
the Jacobian elliptic function depending on the modulus k, and two additional parameters h and ν.
This differential equation appears in several applications, for example,
the motion of coupled particles in a periodic potential.
Stability and existence of periodic solutions of Lamé's equations is determined by the value of its Hill discriminant
D(h,ν,k).
The Hill discriminant is compared to an explicitly known quantity, with explicit error bounds.
This result is derived from the observation that Lamé's equation with k=1 can be solved by hypergeometric functions because then
the elliptic function reduces to the hyperbolic tangent function.
A connection relation between hypergeometric functions then allows the approximation of the Hill discriminant by a simple expression.
In particular, one obtains an asymptotic approximation of D(h,ν,k) when the modulus k tends to 1.
[2010]33E10, 34D20
On the Hill discriminant of Lamé's differential equation
Hans Volkmer
July 31, 2023
========================================================
§ INTRODUCTION
Kim, Levi and Zhou <cit.> consider two elastically coupled particles positioned at x(t), y(t) in a periodic potential V(x).
The system is described by
ẍ+V'(x)=κ (y-x), ÿ+V'(y)=κ(x-y).
Let x(t)=y(t)=p(t) be a synchronous solution. If we linearize the system around
this synchronous solution, x=p+ξ, y=p+η, and set u=ξ+η, w=ξ-η, then we obtain
ü+V”(p)u=0,
ẅ+(2κ+V”(p))w=0 .
These are Hill equations <cit.>, that is, they are of the form
ẅ+ q(t) w =0
with a periodic coefficient function q(t), say of period σ>0.
In this and many other applications the Hill discriminant D associated with (<ref>) plays an important role.
The discriminant D is defined as the trace of the endomorphism w(t)↦ w(t+σ) of the two-dimensional solution space of
(<ref>) onto itself.
It is well-known <cit.> that equation (<ref>) is stable if |D|<2 and unstable if |D|>2. The condition D=2 is equivalent to
the existence of a nontrivial solution with period σ while D=-2 is equivalent to
the existence of a nontrivial solution with semi-period σ.
In <cit.> a remarkable asymptotic formula for the Hill discriminant of (<ref>) as the energy tends to 0 is given.
In this work we are interested in the special case V(x)=-cos x. Then p(t) is a solution of the differential equation p̈+sin p=0 of the mathematical pendulum.
We get <cit.>
p(t,E)=2 am(t/k,k), where k^2=2/(E+2),
E denotes energy, and am is Jacobi's amplitude function <cit.>.
Then equation (<ref>) becomes
d^2w/dt^2+(2κ+1-2 sn^2(t/k,k)) w=0,
where sn(x,k)=sin am(x,k) is one of the Jacobian elliptic functions <cit.>.
If we substitute t=ks we obtain Lamé's equation <cit.>
d^2w/ds^2 +(h-ν(ν+1)k^2 sn^2(s,k)) w=0
with parameters h=k^2(2κ+1) and ν=1.
There is no explicit formula for the corresponding Hill discriminant
D=D(h,ν,k).
However, in <cit.> it is shown that
D(h,1,k)=a cos(ωln E-ϕ)+o(E) as E→ 0,
where ω^2=2κ-1.
The main result of this paper is Theorem <ref>
which improves on (<ref>) in three directions.
* We allow any real ν in place of ν=1.
* We provide explicit values for the amplitude a and the phase shift ϕ in (<ref>).
* We give explicit error bounds. This makes it possible to prove
stability of the Lamé equation in some cases.
The idea behind the proof of Theorem <ref> is the observation that Lamé's equation (<ref>) with k=1
can be solved in terms of the hypergeometric function F(a,b;c,x).
Then well-known connection relations between hypergeometric functions play a crucial role.
As a preparation we present some elementary results on linear differential equations of the second order in
Section <ref>.
In Section <ref> we give a quick review of the Lamé equation. In Section <ref> we consider the special case
of the Lamé equation when k=1. In Section <ref> we combine our results to obtain Theorem <ref>.
§ LEMMAS ON SECOND ORDER LINEAR EQUATIONS
Let u be the solution of the initial value problem
u”+q(t)u=r(t), u(a)=u'(a)=0,
where q,r:[a,b]→ are continuous functions.
By the variation of constants formula <cit.>
u(t)=∫_a^t L(t,s) r(s) ds, u'(t)=∫_a^t ∂_1 L(t,s)r(s) ds,
where y(t)=L(t,s) is the solution of
y”+q(t)y=0
determined by the initial conditions y(s)=0, y'(s)=1.
Let L_1,L_2 be constants such that
|L(t,s)|≤ L_1, |∂_1 L(t,s)|≤ L_2 for a≤ s≤ t≤ b.
Then it follows that
‖u‖_∞≤ L_1 ∫_a^b |r(s)| ds, ‖u'‖_∞≤ L_2 ∫_a^b |r(s)| ds,
where ‖f‖_∞:=max_t∈[a,b] |f(t)|.
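As a quick numerical illustration of these bounds (our own, with arbitrary choices of q and r): for constant q(t)=ω² the kernel is known in closed form, L(t,s)=sin(ω(t−s))/ω, so one may take L_1=1/ω and L_2=1 and compare the computed sup-norms against the right-hand sides.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

w, a, b = 2.0, 0.0, 5.0
q = lambda t: w**2                     # constant coefficient, L(t,s) = sin(w(t-s))/w
r = lambda t: np.cos(3.0 * t)          # any continuous forcing term

sol = solve_ivp(lambda t, y: [y[1], -q(t) * y[0] + r(t)],
                [a, b], [0.0, 0.0], dense_output=True, rtol=1e-10, atol=1e-12)
ts = np.linspace(a, b, 2000)
u, up = sol.sol(ts)

L1, L2 = 1.0 / w, 1.0
int_abs_r = quad(lambda s: abs(r(s)), a, b)[0]
print(np.max(np.abs(u)),  "<=", L1 * int_abs_r)   # sup-norm bound on u
print(np.max(np.abs(up)), "<=", L2 * int_abs_r)   # sup-norm bound on u'
```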
Let p,q:[a,b]→ be continuous. Let L_1, L_2 be as in (<ref>).
Let y be a solution of (<ref>) and w a solution of w”+p(t) w=0 with y(a)=w(a) and y'(a)=w'(a).
Then
‖y-w‖_∞ ≤ L_1 ‖w‖_∞∫_a^b|p(s)-q(s)| ds,
‖y'-w'‖_∞ ≤ L_2 ‖w‖_∞∫_a^b|p(s)-q(s)| ds.
For u=y-w we have
u”(t)+q(t) u(t)=(p(t)-q(t))w(t) .
The desired result follows from (<ref>).
Let q:[a,b]→(0,∞) be continuously differentiable and monotone. Set
m:=min_t∈[a,b] q(t)>0, M:=max_t∈[a,b] q(t).
Let y_1, y_2 be the solutions of (<ref>) determined by y_1(a)=y_2'(a)=1, y_1'(a)=y_2(a)=0.
If q is nondecreasing then
‖y_1‖_∞^2≤ 1, ‖y_1'‖_∞^2≤ M, ‖y_2‖_∞^2≤1/m, ‖y_2'‖_∞^2≤M/m,
and, if q is nonincreasing,
‖y_1‖_∞^2≤M/m, ‖y_1'‖_∞^2≤ M, ‖y_2‖_∞^2≤1/m, ‖y_2'‖_∞^2≤ 1.
Suppose first that q is nondecreasing. Set
u_j(t):=y_j(t)^2+1/q(t)y_j'(t)^2 .
Then
u_j'(t)=-q'(t)/q(t)^2y_j'(t)^2≤ 0
so u_j(t)≤ u_j(a) for all t∈[a,b]. Now u_1(a)=1 and u_2(a)=1/m imply y_1(t)^2≤ 1,
y_1'(t)^2≤ M, y_2(t)^2≤1/m, y_2'(t)^2≤M/m.
If q is nonincreasing we argue similarly using v_j(t)=y_j'(t)^2+q(t)y_j(t)^2 in place of u_j.
§ LAMÉ'S EQUATION
For h∈, ν≥ -1/2, k∈(0,1), we consider the Lamé equation <cit.>, <cit.>
y”+(h-ν(ν+1)k^2 sn^2(t,k)) y=0 .
This is a Hill equation with period 2K(k), where K=K(k)
is the complete elliptic integral of the first kind:
K=∫_0^1dt/√(1-t^2)√(1-k^2t^2).
Equation (<ref>) also makes sense for k=1 <cit.> when it becomes
y”+(h-ν(ν+1)tanh^2 t) y=0 .
Of course, this is not a Hill equation anymore.
Let y_1(t)=y_1(t,s,h,ν,k) and y_2(t)=y_2(t,s,h,ν,k) be the solutions of (<ref>)
determined by the initial conditions y_1(s)=y_2'(s)=1, y_1'(s)=y_2(s)=0.
Set q(t):=h-ν(ν+1)k^2 sn^2(t,k). This function is increasing on [0,K] if -1/2≤ν<0 and decreasing
on [0,K] if ν>0.
We assume that h>0 and h>ν(ν+1)k^2. Then q(t)>0 for t∈[0,K].
We define H:=(h-ν(ν+1)k^2)^1/2 and
C_1(h,ν,k) := 1 if ν<0, and h^1/2H^-1 if ν≥0;
C_1'(h,ν,k) := H if ν<0, and h^1/2 if ν≥0;
C_2(h,ν,k) := h^-1/2 if ν<0, and H^-1 if ν≥0;
C_2'(h,ν,k) := h^-1/2 H if ν<0, and 1 if ν≥0.
Suppose that h>0 and h-ν(ν+1)k^2>0. Then, for 0≤ s≤ t≤ K,
|y_1(t,s)|≤ C_1,
|y_1'(t,s)|≤ C_1', |y_2(t,s)|≤ C_2, |y_2'(t,s)|≤ C_2'.
If k=1 this is true for all 0≤ s≤ t.
This follows from Lemma <ref>.
In the next lemma we use the complete elliptic integral E=E(k) of the second kind:
E=∫_0^1 √(1-k^2t^2)/√(1-t^2) dt .
Suppose that h>0 and h-ν(ν+1)>0. Then
|y_1(K,0,h,ν,k)-y_1(K,0,h,ν,1)| ≤ C_1C_2|ν|(ν+1)(E(k)-tanh K(k)),
|y_2'(K,0,h,ν,k)-y_2'(K,0,h,ν,1)| ≤ C_2C_2'|ν|(ν+1)(E(k)-tanh K(k)),
where the constants C are formed with k=1.
We apply Lemma <ref> with
q(t)=h-ν(ν+1)k^2 sn^2(t,k), p(t)=h-ν(ν+1)tanh^2 t,
and
y(t)=y_1(t,0,h,ν,k), w(t)=y_1(t,0,h,ν,1)
on the interval t∈[0,K]. We note that <cit.>
k sn(t,k)≤tanh t≤ sn(t,k) for t∈[0,K] .
Therefore,
∫_0^K|p(s)-q(s)| ds = |ν|(ν+1)∫_0^K (tanh^2 s-k^2 sn^2(s,k)) ds
= |ν|(ν+1)∫_0^K (dn^2(s,k)-1+tanh^2 s) ds.
Using ∫_0^K dn^2(s,k) ds= E <cit.>, we get
∫_0^K|p(s)-q(s)| ds=|ν|(ν+1)(E-tanh K) .
By Lemma <ref>, |w(t)|≤ C_1 and we can choose L_1=C_2.
This gives the desired estimate for y_1. The estimate for y_2' is proved similarly.
Note that
∫_0^K (tanh^2s-k^2 sn^2(s,k)) ds≤ (1-k^2)∫_0^K sn^2(s,k) ds≤ k'^2 K,
where k'=√(1-k^2),
so
E-tanh K≤ k'^2 K.
Also note that <cit.>
K(k)≤π/2-ln k' ,
so E(k)-tanh K(k)=O((1-k)ln(1-k)) as k→ 1.
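Both estimates are easy to confirm numerically; note that SciPy's complete elliptic integrals take the parameter m=k² rather than the modulus k. The moduli below are illustrative.

```python
import numpy as np
from scipy.special import ellipk, ellipe

for k in [0.9, 0.99, 0.999, 0.9999]:
    m, kp = k**2, np.sqrt(1.0 - k**2)      # parameter m = k^2, complementary modulus k'
    K, E = ellipk(m), ellipe(m)
    print(k,
          E - np.tanh(K) <= kp**2 * K,      # E(k) - tanh K(k) <= k'^2 K(k)
          K <= np.pi / 2 - np.log(kp))      # K(k) <= pi/2 - ln k'
```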
§ THE LAMÉ EQUATION FOR K=1
Let w_1, w_2 be the solutions of (<ref>) determined by initial conditions
w_1(0)=w_2'(0)=1, w_1'(0)=w_2(0)=0. Then w_j(t)=y_j(t,0,h,ν,1) using the notation of the previous section.
We assume that ν≥ -1/2 and h>ν(ν+1), and set
μ:=√(ν(ν+1)-h)=iω, where ω>0.
The substitution x=tanh t transforms (<ref>) to the associated Legendre equation <cit.>
of degree ν and order μ. According to <cit.> we
express w_j in terms of the hypergeometric function F(a,b;c;z) as follows
w_1(t) = cosh^μ t F(-1/2(μ+ν),1/2(1-μ+ν);1/2;tanh^2 t),
w_2(t) = tanh t cosh^μ t F(1/2(1-μ-ν),1/2(2-μ+ν);3/2;tanh^2 t).
This can be confirmed by direct computation.
In order to determine the behaviour of the functions w_j(t) as ℝ∋ t→∞ we use the connection formula <cit.> and find
w_j(t)=Re(v_j(t)), where
v_1(t) = A_1/(2cosh t)^-μ F(-1/2(μ+ν),1/2(1-μ+ν);1-μ;cosh^-2 t),
v_2(t) = A_2 tanh t/(2cosh t)^-μ F(1/2(1-μ-ν),1/2(2-μ+ν);1-μ;cosh^-2 t),
and
A_1 = 2^1-μπ^1/2Γ(μ)/(Γ(1/2(1+μ+ν))Γ(1/2(μ-ν))),
A_2 = 2^-μπ^1/2Γ(μ)/(Γ(1/2(2+μ+ν))Γ(1/2(1+μ-ν))).
We set
z_j(t)=Re(A_je^iω t), j=1,2 .
Suppose h>0 and h>ν(ν+1).
Then, for all t≥ 0,
|w_1(t)-z_1(t)| ≤ ω^-1C_1|ν|(ν+1)(1-tanh t),
|w_2'(t)-z_2'(t)| ≤ C_2|ν|(ν+1)(1-tanh t),
where C_1, C_2 are formed with k=1.
Since F(a,b;c;0)=1 the representation w_j(t)=Re(v_j(t)) yields
lim_t→∞ w_j(t)-z_j(t)=lim_t→∞Re(A_j (2cosh t)^iω-A_je^iω t) = 0.
Similarly, we have
lim_t→∞ w_j'(t)-z_j'(t)=0 .
The function u_j=w_j-z_j satisfies
u_j”+ω^2 u_j=g_j(t), g_j(t):=ν(ν+1) (tanh^2 t-1) w_j(t) .
Let t_0, t≥ 0. Then
u_j(t)=u_j(t_0)cos(ω (t-t_0))+u_j'(t_0)sin(ω(t-t_0))/ω+
∫_t_0^t sin(ω(t-s))/ωg_j(s) ds .
Letting t_0→∞, using (<ref>), (<ref>) and Lemma <ref>, we obtain
|u_1(t)|≤ω^-1∫_t^∞ |g_1(s)| ds ≤ω^-1C_1|ν|(ν+1)(1-tanh t)
as desired.
The estimate for u_2' is derived similarly.
The constant Wronskian of z_1, z_2 is
z_1(t)z_2'(t)-z_1'(t)z_2(t) =ω Im(A_1A̅_2) .
The reflection formula for the gamma function
Γ(x)Γ(1-x)=π/sin(π x)
gives
ω A_1A̅_2 = -sin(νπ)/sinh(ωπ) +i.
Therefore,
z_1(t)z_2'(t)-z_1'(t)z_2(t) =1.
Moreover,
z_1(t)z_2'(t)+z_1'(t)z_2(t)=2z_1(t)z_2'(t)-1=Re(B e^2iω t),
where B=iω A_1A_2.
Using the duplication formula for the gamma function
2^x-1Γ(12x)Γ(12(x+1))=π^1/2Γ(x)
we see that
B=Γ(1+μ) Γ(μ)/Γ(1+μ+ν)Γ(μ-ν) .
If ν∈ℕ_0 then
B=(iω-1)(iω-2)…(iω-ν)/((iω+1)(iω+2)…(iω+ν)),
so |B|=1. If ν=1 then
B=(iω-1)/(iω+1)=(ω^2-1+2iω)/(ω^2+1)
and
Re(Be^2iω t)=1/(ω^2+1)((ω^2-1)cos (2ω t)-2ωsin(2ω t)) .
By (<ref>),
|B|^2=|ω A_1 A̅_2|^2=1+sin^2(νπ)/sinh^2(ωπ).
So |B|>1 if ν is not an integer.
§ HILL'S DISCRIMINANT OF LAMÉ'S EQUATION
The Hill discriminant D(h,ν,k) of Lamé's equation is given by <cit.>
D(h,ν,k)=2(y_1(K)y_2'(K)+y_1'(K)y_2(K)) =2(2y_1(K)y_2'(K)-1),
where y_j(t)=y_j(t,0,h,ν,k) in the notation of Section <ref>.
By combining Theorems <ref> and <ref> we obtain the following main theorem of this work.
(a) Suppose ν≥ 0 and h>ν(ν+1). Then, for all k∈(0,1),
|D(h,ν,k)-2Re(B e^2iω K(k))|≤ 8h^1/2ω^-2ν(ν+1)(E(k)+1-2tanh K(k)).
(b) Suppose ν∈[-1/2,0) and h>0. Then, for all k∈(0,1),
|D(h,ν,k)-2Re(B e^2iω K(k))| ≤ 8ω h^-1|ν|(ν+1)(E(k)+1-2tanh K(k)) .
The constants ω and B are given in (<ref>), (<ref>), respectively.
Using (<ref>) and (<ref>), we have
D(h,ν,k)-2Re(B e^2iω K)= 4(y_1(K)y_2'(K)-z_1(K)z_2'(K)) .
Using Lemma <ref>, we estimate
|D(h,ν,k)-2Re(B e^2iω K(k))|
≤ 4|y_1(K)||y_2'(K)-z_2'(K)|+4|z_2'(K)||y_1(K)-z_1(K)|
≤ 4C_1|y_2'(K)-z_2'(K)|+4C_2'|y_1(K)-z_1(K)|.
In fact, |w_2'(t)|≤ C_2'
implies |z_2'(t)|≤ C_2' because of (<ref>).
Now we use Theorems <ref>, <ref> to estimate
|y_1(K)-z_1(K)|≤ |y_1(K)-w_1(K)|+|w_1(K)-z_1(K)|
≤ C_1C_2|ν|(ν+1)(E-tanh K)+ω^-1C_1|ν|(ν+1)(1-tanh K),
and
|y_2'(K)-z_2'(K)|≤ |y_2'(K)-w_2'(K)|+|w_2'(K)-z_2'(K)|
≤ C_2C_2'|ν|(ν+1)(E-tanh K)+C_2|ν|(ν+1)(1-tanh K).
This gives the desired statements (a) and (b) substituting the values for C_j and C_j'.
We may use K(k)=ln(4/k')+O(k'^2ln k') <cit.> and |e^is-e^it|≤ |s-t| for s,t∈ℝ to
obtain
D(h,ν,k)=2Re(B e^2iωln(4/k'))+O((1-k)ln(1-k)) as k→ 1.
As an illustration, take h=6, ν=1/2 and k=1-e^-τ. Figure <ref> depicts the graphs of τ↦ D(6,1/2,k) (red) and τ↦ 2Re(B e^2iω K) (black).
These graphs are hard to distinguish for τ>2.
If τ=5 then k=0.993262… and 2Re(B e^2iω K)=-1.274528… Theorem <ref> gives |D(6,1/2,k)-2Re(B e^2iω K)|≤ 0.066641.
Therefore, |D(6,1/2,k)|<2 and so Lamé's equation is stable for h=6, ν=1/2, k=1-e^-5.
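The numbers in this example can be reproduced with a short script: integrate Lamé's equation over [0,K] to evaluate D(h,ν,k) from the definition above, and compare with 2Re(B e^{2iωK}) computed from the explicit constant B. This is our own sketch (not the Maple worksheet mentioned below); SciPy's ellipj and ellipk take the parameter m=k², and the printed values should be close to those quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipk, ellipj, gamma

def hill_discriminant(h, nu, k):
    m = k**2
    K = ellipk(m)
    def rhs(t, y):
        sn = ellipj(t, m)[0]                      # Jacobi sn(t, k)
        q = h - nu * (nu + 1) * m * sn**2
        return [y[1], -q * y[0], y[3], -q * y[2]]
    y0 = [1.0, 0.0, 0.0, 1.0]                     # y1(0)=1, y1'(0)=0, y2(0)=0, y2'(0)=1
    sol = solve_ivp(rhs, [0.0, K], y0, rtol=1e-11, atol=1e-12)
    y1K, _, _, y2pK = sol.y[:, -1]
    return 2.0 * (2.0 * y1K * y2pK - 1.0)         # D = 2(2 y1(K) y2'(K) - 1)

h, nu, k = 6.0, 0.5, 1.0 - np.exp(-5.0)
omega = np.sqrt(h - nu * (nu + 1))
mu = 1j * omega
B = gamma(1 + mu) * gamma(mu) / (gamma(1 + mu + nu) * gamma(mu - nu))
K = ellipk(k**2)
print(hill_discriminant(h, nu, k), 2.0 * np.real(B * np.exp(2j * omega * K)))
```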
§ DECLARATIONS
§.§ Ethical Approval
Not applicable.
§.§ Competing interests
The author has no relevant financial or non-financial interests to disclose.
§.§ Authors' contributions
Not applicable.
§.§ Funding
The author did not receive support from any organization for the submitted work.
§.§ Availability of data and materials
A Maple worksheet generating Figure 1 and the numerical example are available from the author.
9
A
F.M. Arscott, Periodic Differential Equations, Pergamon Press, The MacMillan Company, New York, 1964,
C
E. A. Coddington, Ordinary Differential Equations, Dover Publications, New York, 1989.
EMO
A. Erdélyi, W. Magnus, F. Oberhettinger and F. Tricomi, Higher
Transcendental Functions, Vol. 3. McGraw-Hill, New York, 1955.
Kim
Ki Yeun Kim, Mark Levi and Jing Zhou, Spectral asymptotics and Lamé spectrum for coupled particles in perioidic potentials,
J. Dyn. Diff. Equat. (2021), https://doi.org/10.1007/s10884-021-10108-z
MW
W. Magnus and S. Winkler,
Hill's equation,
Interscience Publishers, New York, 1966.
dlmf
F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors, NIST handbook of mathematical functions, Cambridge University Press, Cambridge, 2010.
O
F. W. J. Olver, Asymptotics and Special Functions, A K Peters Ltd., Wellesley, MA, 1997.
V2004
H. Volkmer, Four remarks on eigenvalues of Lamé's equation, Analysis and Applications 2 (2004), 161-175.
WW
E. T. Whittaker and G. N. Watson, A course of modern analysis, Cambridge University Press, Cambridge, 1927.
|
http://arxiv.org/abs/2306.02141v1
|
20230603155127
|
Quantum delay in the time-of-arrival of free falling atoms
|
[
"Mathieu Beau",
"Lionel Martellini"
] |
quant-ph
|
[
"quant-ph",
"gr-qc",
"physics.atom-ph"
] |
APS/123-QED
Physics department, University of Massachusetts, Boston, Massachusetts 02125, USA
Finance department, EDHEC Business School, Nice 06200, France
We use a stochastic representation of measured trajectories to derive an exact analytical expression for the probability distribution of the time-of-arrival (TOA) for a free falling particle,
as well as approximate expressions for its mean value and standard-deviation in the semiclassical regime.
We predict the existence of a positive shift δ between the semiclassical TOA (t_mean) and the classical TOA (t_cl) for a particle of mass m falling in a constant and uniform gravitational field g with zero initial velocity, with a value given by
δ≡ (t_mean-t_cl)/t_cl = ħ^2/(16 g x m^2σ^2),
where σ is the width of the initial Gaussian wavepacket, and x is the distance between the initial position of the source and that of the detector.
We discuss the implications of this result on the weak equivalence principle in the quantum regime.
Quantum delay in the time-of-arrival of free falling atoms
Lionel Martellini
July 31, 2023
==========================================================
While the Born rule gives the probability density of a position measurement at a fixed time, there is no available rule in the standard formalism of quantum mechanics for deriving the probability density for a time measurement at a fixed position. At the intuitive level, the origin of what is known as the first passage time problem can be traced down to the fact that a particle is believed not to have a well defined position at a given instant of time, at least according to the standard interpretation. More formally there is no operator in quantum mechanics associated with time measurements; only space measurements admit a Hermitian operator representation (see chapters 1 and 10 in <cit.>). Various attempts have been made to address the first passage problem, both within the standard formulation and with alternative formulations of quantum mechanics, but no satisfactory answer exists at this stage with respect to this fundamental question (see <cit.> for reviews of previous work).
In this note we introduce a straightforward stochastic representation of measurement outcomes in quantum mechanics, which we claim to be consistent with the standard formalism, and which can yield new insights into time of arrival (TOA) measurements. More specifically, we define X_t as the random variable associated with the measured position at a fixed time t, and symmetrically define T_x as the random variable associated with the measured time of passage at a fixed position x. Using standard results from statistics, we are able to obtain the probability distribution of T_x as a function of the probability distribution of X_t. In an application to the free falling quantum particle, we find (i) that the mean time of arrival is greater than the classical time of arrival as the result of a Jensen inequality, and (ii) is a function of the mass of the particle, among other ingredients.
A possible deviation from the universality of free fall in the quantum domain is the subject of ongoing debate in the literature. On the one hand, some authors invoke the correspondence principle to postulate that the time of flight should have a mean value agreeing with the classical value, and find that it is only at the level of uncertainties that such a mass-dependence should occur (see for example <cit.>). In contrast other authors argue in favor of a mass dependence in both the mean arrival time and its standard-deviation (<cit.>), but they do not provide analytical expressions for these quantities. In this context, our main contribution is to develop a novel stochastic approach to find the exact expression of the time-distribution for Gaussian states (see equation (<ref>)), and to present a precise quantitative prediction for the violation of the weak equivalence principle (WEP) at the mean level (see equation (<ref>)). Our prediction can be empirically tested in experimental conditions that we discuss below. If confirmed, it would help bring new constraints to alternative theories of quantum gravity.
Stochastic measured position at a fixed time (X_t). In quantum mechanics, the one-dimensional time-dependent Schrödinger equation sets the dynamics of the wavefunction Ψ_t(x):
-ħ^2/(2m) ∂^2/∂x^2 Ψ_t(x) +V(x,t)Ψ_t(x) = iħ ∂/∂t Ψ_t(x),
where m is the mass of the particle and V(x,t) is a position- and time-dependent external potential. By the Born rule, the density of probability for the particle to be measured in a small region around the position x at a given time t is given by:
ρ_t(x) ≡ |Ψ_t(x)|^2.
Our stochastic representation simply consists in associating a random variable denoted by X_t to the Gaussian probability density function ρ_t(x). In other words X_t is defined to represent the uncertain outcome of a first measurement performed at date t after the system has been prepared in the state represented by Ψ_0 and has evolved, according to the Schrödinger equation and with no prior measurement, to the state Ψ_t.
In principle, this position measurement corresponds to the following experiment: (i) we place n detectors at different positions x_1, x_2, ⋯, x_n with a spatial resolution δ x in an interval [a,b] (hence, x_0=a, x_1=a+δ x, ⋯, x_k = a+kδ x, ⋯, x_n=b), (ii) we synchronize these detectors to make sure they turn on at the same exact time t, (iii) and then we record the position of the particle at the time t. From equation (<ref>), the probability that the k-th detector detects the particle at the time t is P_k(t) = ∫_x_k^x_k+1|Ψ_t(x)|^2 dx. (iv) We repeat this procedure a large number N of times, which allows us to reconstruct the density of probability ρ_t(x) at a fixed time t. At each trial p=1, 2, 3,⋯, N, we measure the value of the position x of the particle at a given time t, and we can thus represent this outcome as the realization of a stochastic variable X_t that gives the measured position x∈[a,b] of the particle at the time t, for which the density distribution is given by ρ_t(x).
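For the Gaussian packets considered in the remainder of the paper, the detector probabilities P_k(t) of this thought experiment are simply differences of the normal cumulative distribution function. The following minimal sketch uses placeholder numbers of our own choosing:

```python
import numpy as np
from scipy.stats import norm

x_c_t, sigma_t = 0.3, 0.05           # illustrative mean position and width at time t
edges = np.linspace(0.0, 1.0, 21)    # detector positions x_0 = a, ..., x_n = b
cdf = norm.cdf(edges, loc=x_c_t, scale=sigma_t)
P_k = np.diff(cdf)                   # P_k(t) = integral of |Psi_t|^2 over [x_k, x_k+1]
print(P_k.sum())                     # close to 1 when [a, b] covers the wave packet
```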
While our approach is more general, we specialize in what follows the analysis to Gaussian states with a density distribution of the form
ρ_t(x) = 1/√(2πσ(t)^2)e^-(x-x_c(t))^2/2σ(t)^2,
where σ(t) is the standard-deviation of the Gaussian distribution that is centered at the classical path x_c(t), which by the correspondence principle is also the mean value of the position operator (⟨x̂_t⟩ = ∫_-∞^+∞ x ρ_t(x) dx = x_c(t)). We further denote the initial mean value of the position by x_0 and its initial standard-deviation by σ. In the Gaussian setting, X_t can be written with no loss of generality as:
X_t = x_c(t) + ξσ(t),
where ξ=𝒩(0,1) is a normally distributed random variable with a variance of 1 and a mean value of 0.
Gaussian states are standard forms in quantum physics, not only for the free fall problem which is the focus of this note, but also for the free motion, the simple and time-dependent harmonic oscillator, constant or time-dependent electric fields, and more generally for any quadratic potential of the form V(x,t) = a(t)x^2+b(t)x, where a(t) and b(t) are two functions of t (see for example <cit.>).
Stochastic measured time at a fixed position (T_x). In the previous thought experiment, we considered that the n position-detectors were turned on at a fixed time t. This means that the detectors are synchronized to an ideal clock that tells the detector to switch on through a signal at a precise time-value t.
We now turn to a symmetric perspective, where the focus is on time measurements at a fixed position.
As recalled in the introduction, there exists an abundant literature on the subject that has generated a number of insightful results that sometimes contradict each other because of implicit divergences in the underlying definition of a time measurement. In this context, we seek to avoid ambiguities by carefully explaining the experimental setup that would be involved in an idealized yet physical measurement of the TOA. More specifically, we consider the following procedure: (i) we place a single detector at the fixed position x; (ii) we drop a particle at time 0 and we turn on the detector (say by triggering a laser pulse) at some time t; (iii) we record 1 if the particle has been measured at position x for this particular time t and 0 otherwise. Then, (iv) we repeat the steps (i)-(iii) N times while keeping the exact same time t and then we count the total number of particles detected at this position (alternatively, we could in principle use in step (ii) an atomic cloud with N non-interacting particles <cit.>). Finally, (v) we repeat the steps (i)-(iv) by letting t vary, with a small enough temporal resolution δ t (hence, t_0=0, t_1=δ t, ⋯, t_k = kδ t, ⋯, t_n=nδ t). This procedure allows us to reconstruct the whole time distribution Π_x(t) of a random variable, denoted by T_x (note the symmetry in notation with respect to X_t), which can be regarded as a stochastic time of arrival (STOA) at the fixed position x. Please note that this approach differs from a procedure that would consist in performing continuous measurements (with a detector placed at position x) starting at t=0 and until the first detection is recorded. Indeed, such a procedure would involve multiple measurements before the detection occurs, in contradiction with our definition of T_x, which is defined as the random date of a first measurement at position x.
The question that naturally arises at this stage is how to find the expression of the STOA as a function of the position x of the detector.
By fixing the position of the detector at x and allowing the time of observation to vary stochastically, we can determine the possible values of the stochastic-time-of-arrival T_x. These values correspond to the solution of the equation x = X_T_x, where X_t is given in (<ref>). Hence, we find the mapping between the random variable ξ and the STOA T_x to be:
x = x_c(T_x) + ξσ(T_x),
which implies
T_x = h_x(ξ),
or
ξ = h_x^-1(T_x) = x-x_c(T_x)/σ(T_x),
where h_x(·) is an invertible function (see (<ref>) for an approximate expression of the function h_x for the free falling particle).
Assuming that the function h_x is strictly monotonic, a standard result from statistics gives the following relation between the probability distribution Π_x(t) for the STOA T_x at the detection point x and the probability distribution f(·) of the standardized Gaussian variable ξ (see for example theorem 4.1 in chapter 4.1.3 in <cit.>):
Π_x(t) = f(h_x^-1(t))×|d h_x^-1(t)/dt|.
This result can in fact be extended to a more general case: if h_x is not monotonic, we can usually partition its domain of definition into a finite number of intervals such that it is strictly monotone and differentiable on each partition.
Finally, using
d/dt((x-x_c(t))/σ(t)) = -(v_c(t)σ(t)+(x-x_c(t))σ̇(t))/σ(t)^2,
where v_c(t) is the classical velocity and σ̇(t) = dσ(t)/dt, we find the following expression for the time-of-arrival distribution:
Π_x(t) = |v_c(t)σ(t)+(x-x_c(t))σ̇(t)/σ(t)^2|×1/√(2π)e^-(x-x_c(t))^2/2σ(t)^2.
To our best knowledge, this general formula has never been found in the literature.
In what follows we discuss the application to a free-falling particle, and also to the free particle as a nested case with a zero gravitational potential.
Time-distribution of a free-falling particle. For the free falling particle, we recall the standard expressions for the classical path x_c(t) = v_0 t + g/2t^2 (fixing the mean initial position x_0=0 and assuming g>0) and for the standard-deviation σ(t) = σ√(1+t^2/τ^2), where τ = 2mσ^2/ħ is a characteristic time. Here v_0>0 represents the mean value <cit.> for the initial velocity of the particle, which itself is a random variable with a standard-deviation given from the uncertainty principle as ħ/(2mσ).
Π_x(t)
= (v_0 +xt/τ^2+gt+g/2t^3/τ^2/1+t^2/τ^2)×exp(-(x-v_0 t-gt^2/2)^2/2σ^2(1+t^2/τ^2))/√(2πσ^2)√(1+t^2/τ^2),
where the distribution is normalized: ∫_-∞^+∞Π_x(t)dt =1. Notice that this formula agrees with the one found in <cit.> from the quantum current density approach (see also <cit.>). However, the authors derived it only for v_0=0 and did not give the explicit expression of the normalization factor. From equation (<ref>), we remark that all the n-th moments ∫_-∞^+∞ t^nΠ_x(t)dt are finite. There is no analytical expression for such integrals, but they can easily be computed numerically. In what follows, we use an alternative approach which consists in first finding an approximate explicit expression for the stochastic time-of-arrival variable T_x as a solution to equation (<ref>), and then obtaining analytical estimates for its mean and standard deviation. The solution to equation (<ref>) with X_t = v_0 t + g/2t^2 + σξ√(1+t^2/τ^2) is indeed difficult to express in closed form, but we can find an approximation of this expression in the semiclassical regime for a long time-of-flight t≫τ.
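Before turning to that approximation, here is how the numerical route looks in practice. The sketch below uses illustrative parameters of our own choosing (roughly a hydrogen-mass particle dropped over a millimetre): it evaluates Π_x(t), checks its normalization over t>0, and compares the numerically computed mean arrival time with the classical value t_c; the relative excess should be close to the semiclassical delay derived below when the quantumness parameter q is small.

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34
m, sigma, g, x, v0 = 1.67e-27, 1e-6, 9.81, 1e-3, 0.0   # illustrative values only
tau = 2.0 * m * sigma**2 / hbar

def Pi(t):
    s2 = sigma**2 * (1.0 + t**2 / tau**2)
    num = v0 + x * t / tau**2 + g * t + 0.5 * g * t**3 / tau**2
    return (num / (1.0 + t**2 / tau**2)
            * np.exp(-(x - v0 * t - 0.5 * g * t**2)**2 / (2.0 * s2))
            / np.sqrt(2.0 * np.pi * s2))

t_c = (np.sqrt(v0**2 + 2.0 * g * x) - v0) / g
norm = quad(Pi, 0.0, 10.0 * t_c, points=[t_c], limit=200)[0]
t_mean = quad(lambda t: t * Pi(t), 0.0, 10.0 * t_c, points=[t_c], limit=200)[0]
delta_semiclassical = hbar**2 / (16.0 * g * x * m**2 * sigma**2)   # valid for v0 = 0
print(norm, t_mean / t_c - 1.0, delta_semiclassical)
```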
Time-of-arrival of a free-falling particle in the long time-of-flight regime
In the long time-of-flight regime where t≫τ, equation (<ref>) becomes:
x ≈gT_x^2/2 + v_0 T_x + σξT_x/τ,
which is exactly equal to the classical trajectory the particle would have at time T_x with an initial position equal to 0 and an initial velocity equal to v_0 + σξ/τ.
The solution to this equation is given by a standard quadratic formula, from which we find the stochastic time-of-arrival variable to be:
T_x = h_x(ξ) = 1/g(√((v_0+σξ/τ)^2+2gx)-(v_0+σξ/τ)).
Notice that for σ=0, we find the classical time:
t_c = 1/g(√(v_0^2+2gx)-v_0).
Considering the semiclassical condition q≪ 1, where q is the coefficient of quantumness
q = σ/(√(v_0^2+2gx) τ) = ħ/(2mσ√(v_0^2+2gx)) = λ/(2πσ),
and where λ = h/(m√(v_0^2+2gx)), we find that
T_x≈ t_c - (t_c/√(v_0^2+2gx))(σξ/τ)+xσ^2ξ^2/((v_0^2+2gx)^3/2τ^2).
We can see from equation (<ref>) that T_x is a convex function of ξ, which implies by the Jensen inequality that its mean value is greater than the classical time t_c. More specifically, we obtain the following expressions for the mean value and standard deviation of the STOA (see <cit.> for further details):
t_mean≈ t_c + xσ^2/((v_0^2+2gx)^3/2τ^2),
Δ T_x ≈ t_cσ/(√(v_0^2+2gx) τ).
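A direct Monte Carlo check of these two formulas is straightforward: sample ξ∼N(0,1), evaluate the exact expression for T_x given above, and compare the sample mean and standard deviation with the approximations. The parameters below are illustrative only (they correspond to q≈0.2):

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, m, sigma, g, x, v0 = 1.054571817e-34, 1.67e-27, 1e-6, 9.81, 1e-3, 0.0
tau = 2.0 * m * sigma**2 / hbar

xi = rng.standard_normal(1_000_000)
veff = v0 + sigma * xi / tau                    # effective initial velocity
T = (np.sqrt(veff**2 + 2.0 * g * x) - veff) / g # exact quadratic-formula expression

R = np.sqrt(v0**2 + 2.0 * g * x)
t_c = (R - v0) / g
t_mean_pred = t_c + x * sigma**2 / (R**3 * tau**2)
dT_pred = t_c * sigma / (R * tau)
print(T.mean(), t_mean_pred, T.std(), dT_pred)
```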
It is worth mentioning that the expressions of the mean value and the standard deviation of the STOA for the free motion can be obtained by taking the limit g→ 0 (which implies that t_c→ x/v_0) in the expressions (<ref>)-(<ref>). However, these expressions only make sense in the semi-classical regime where v_0τ≫σ (see <cit.> for further details).
Notice that the expression for the time-distribution (<ref>) in the limit g→ 0 is the same as the equations (A9) and (A18) in <cit.>. The difference here is that we obtain the formula as an exact expression, while the authors of <cit.> present this result as an approximation to the order of σ/v_0τ in the semiclassical regime.
Other authors have also found a similar formula in the semi-classical regime using a quantum flux approach (see for example equation (9) in <cit.>).
Interestingly, if the particle is dropped in the gravity field (v_0=0), we find that the mean value of the fall is greater than the classical value t_c = √(2x/g) by a relative factor:
δ = (t_fall-t_c)/t_c≈ q^2/2 = ħ^2/(16 g x m^2σ^2).
This result is surprising for at least two reasons: (i) the time of free fall is not equal to the classical time, and (ii) it depends on the mass of the particle. In Figure <ref>, we show that the formula (<ref>) gives an excellent approximation for the numerical values of the integral ∫_0^+∞tΠ_x(t)dt (which we calculate using the scipy.integrate library of python 3.10) when the coefficient of quantumness q is less than 1. When the particle becomes quantum q>1, the two curves split, which shows that in this regime the quality of the analytical approximation deteriorates with respect to the more accurate numerical integration. However, our explicit formula (<ref>) provides a convenient tool to help search for optimal experimental conditions to observe this quantum delay in the semiclassical regime. In particular we propose in Table 1 three different scenarios for a realization of the quantum-delay experiment. It is clear that the typical experimental conditions (first line of Table 1) are not suitable since the required sensitivity of the measure would have to be exceedingly high, even for a light atom such as the hydrogen-1 atom. Fortunately, we can identify two different strategies to make the deviation significant enough to be detected: (i) either we decrease the distance traveled (first line of the table) or (ii) we realize a microgravity experiment (third line of the table). Some technical challenges must be tackled before one can reach these experimental conditions, but we believe that the potential benefits of measuring a deviation from the WEP make the investigation worth pursuing.
Discussion. Using a stochastic representation of measured positions, we introduce a novel approach to analyze the time-of-arrival for a free-falling particle. We derive the probability density for the stochastic time-of-arrival (STOA) T_x and obtain analytical expressions for its mean and its standard deviation in the semiclassical regime and for a long time-of-flight.
Our approach can be extended in a straightforward manner to any other Gaussian system, which is a setting general enough to encompass a number of important standard systems including the free motion, the free fall, and the harmonic oscillator (simple and time-dependent).
While this present paper only considers applications to Gaussian states, our equations (<ref>) and (<ref>) actually provide a general framework that can be applied to the study of entangled particles <cit.>, quantum superposition <cit.>, quantum gases <cit.>,
potential barriers <cit.>, two-slit experiment <cit.>, diffraction in time <cit.>, and quantum backflow <cit.>.
Our main finding is that the mean TOA for a free falling object is not equal to the classical time. Although the time shift is generally very small, we propose various experimental scenarios where this subtle effect could be measured. Specifically, we consider situations involving atoms with small mass, short distances traveled (these two conditions could be met in the GBAR experiment<cit.>), and microgravity. A future space mission (see e.g., STE-QUEST <cit.>) with low-mass atoms emerges as a highly promising candidate for investigating and unraveling this phenomenon. If empirically verified, this result suggests a violation of the universality of free fall in the quantum regime, which has implications for theories of quantum gravity.
Acknowledgments. We are grateful to Jacob Barandes and Seth Lloyd for helpful comments and fruitful discussions. This research was conducted while Lionel Martellini was a visiting professor at MIT.
Supplementary Material: Derivation of equations (14)-(15)
In the long time-of-flight regime where t≫τ, we have:
x ≈gt^2/2 + v_0 t + σξt/τ,
which is exactly equal to the classical trajectory the particle would have with an initial position equal to 0 and an initial velocity equal to v_0 + σξ/τ.
The solution to this equation is given by a quadratic formula, thus we find the stochastic time-of-arrival variable to be:
T = 1/g(√((v_0+σξ/τ)^2+2gx)-(v_0+σξ/τ)).
Notice that for σ=0, we find the classical time:
t_c = 1/g(√(v_0^2+2gx)-v_0).
Consider now the semi-classical regime σ≪ v_0 τ.
Since
(v_0+σξ/τ)^2+2gx = v_0^2+2gx+2v_0σξ/τ+σ^2ξ^2/τ^2,
and given that the first two terms v_0^2+2gx are very large compared to the next two ones, we find the Taylor series:
√((v_0+σξ/τ)^2+2gx) ≈√(v_0^2+2gx)(1+v_0σξ/(v_0^2+2gx)τ +σ^2ξ^2/2(v_0^2+2gx)τ^2-v_0^2σ^2ξ^2/2(v_0^2+2gx)^2τ^2)
= √(v_0^2+2gx)(1+v_0σξ/(v_0^2+2gx)τ +gxσ^2ξ^2/2(v_0^2+2gx)^2τ^2) ,
where we used √(1+a)≈ 1+a/2-a^2/8, when a≪ 1.
We can now approximate (<ref>) as:
T≈1/g(√(v_0^2+2gx)-v_0) + (v_0/√(v_0^2+2gx)-1)σξ/gτ+xσ^2ξ^2/(v_0^2+2gx)^3/2τ^2.
Notice that the classical time is given by:
t_c = 1/g(√(v_0^2+2gx)-v_0),
as expected. Thus, we can rewrite the previous equation (<ref>) as:
T≈ t_c - t_c/√(v_0^2+2gx)σξ/τ+xσ^2ξ^2/(v_0^2+2gx)^3/2τ^2.
The second order approximation of the squared value of T is given by:
T^2 ≈ t_c^2 + t_c^2/v_0^2+2gxσ^2ξ^2/τ^2 + 2t_c xσ^2ξ^2/(v_0^2+2gx)^3/2τ^2 - 2t_c^2/√(v_0^2+2gx)σξ/τ.
using the last two expressions, we find the formulae for the mean and the variance of T:
t_mean = E(T) ≈ t_c + xσ^2/(v_0^2+2gx)^3/2τ^2
V(T) = E(T^2) - E(T)^2 ≈ t_c^2σ^2/(v_0^2+2gx)τ^2,
whence the expression of the standard deviation of T:
Δ T ≈ t_cσ/τ√(v_0^2+2gx).
Notice that when g→ 0, we obtain the standard-deviation for the TOA of the free particle. Interestingly, if the particle is dropped in the gravity field (v_0=0), we have:
t_mean≈√(2x/g) + 1/2√(2x/g)σ^2/2gxτ^2= √(2x/g)(1+σ^2/4gxτ^2) =√(2x/g)(1+ħ^2/16gxm^2σ^2)
Δ T ≈σ/gτ = ħ/2mσ g.
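This second-order expansion is also easy to verify symbolically. The following SymPy snippet is an independent check (with ε standing for σ/τ): it expands T in ε and confirms that the result matches t_c − (t_c/√(v_0²+2gx)) εξ + x ε²ξ²/(v_0²+2gx)^{3/2}.

```python
import sympy as sp

v0, g, x, eps = sp.symbols('v0 g x epsilon', positive=True)
xi = sp.symbols('xi', real=True)

s = v0 + eps * xi                        # shifted initial velocity, eps = sigma/tau
T = (sp.sqrt(s**2 + 2 * g * x) - s) / g  # exact time-of-arrival expression

R = sp.sqrt(v0**2 + 2 * g * x)
t_c = (R - v0) / g
claimed = t_c - t_c / R * eps * xi + x / R**3 * eps**2 * xi**2

expansion = sp.series(T, eps, 0, 3).removeO()
print(sp.simplify(expansion - claimed))  # expected: 0
```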
|
http://arxiv.org/abs/2306.04739v1
|
20230607192832
|
Automatic retrieval of corresponding US views in longitudinal examinations
|
[
"Hamideh Kerdegari",
"Tran Huy Nhat Phung1",
"Van Hao Nguyen",
"Thi Phuong Thao Truong",
"Ngoc Minh Thu Le",
"Thanh Phuong Le",
"Thi Mai Thao Le",
"Luigi Pisani",
"Linda Denehy",
"Vital Consortium",
"Reza Razavi",
"Louise Thwaites",
"Sophie Yacoub",
"Andrew P. King",
"Alberto Gomez"
] |
cs.LG
|
[
"cs.LG"
] |
H. Kerdegari et al.
School of Biomedical Engineering & Imaging Sciences, King’s College London, UK; Hospital for Tropical Diseases, Ho Chi Minh City, Vietnam; Oxford University Clinical Research Unit, Ho Chi Minh City, Vietnam; Mahidol Oxford Tropical Medicine Research Unit, Bangkok, Thailand; Melbourne School of Health Sciences, The University of Melbourne, Australia; Membership of the VITAL Consortium is provided in the Acknowledgments
[email protected]
Automatic retrieval of corresponding US views in longitudinal examinations
Hamideh Kerdegari1, Tran Huy Nhat Phung1,3, Van Hao Nguyen2, Thi Phuong Thao Truong2, Ngoc Minh Thu Le2, Thanh Phuong Le2, Thi Mai Thao Le2, Luigi Pisani4, Linda Denehy5, Vital Consortium6, Reza Razavi1, Louise Thwaites3, Sophie Yacoub3, Andrew P. King1, Alberto Gomez1
This work was supported by the Wellcome Trust UK (110179/Z/15/Z, 203905/Z/16/Z, WT203148/Z/16/Z). H. Kerdegari, N. Phung, R. Razavi, A. P. King and A. Gomez acknowledge financial support from the Department of Health via the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London and King's College Hospital NHS Foundation Trust.
July 31, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Skeletal muscle atrophy is a common occurrence in critically ill patients in the intensive care unit (ICU) who spend long periods in bed. Muscle mass must be recovered through physiotherapy before patient discharge and ultrasound imaging is frequently used to assess the recovery process by measuring the muscle size over time. However, these manual measurements are subject to large variability, particularly
since the scans are typically acquired on different days and potentially by different operators.
In this paper, we propose a self-supervised contrastive learning approach to automatically retrieve similar ultrasound muscle views at different scan times.
Three different models were compared using data from 67 patients acquired in the ICU. Results indicate that our contrastive model outperformed a
supervised baseline model in the task of view retrieval
with an AUC of 73.52% and when combined with an automatic segmentation model achieved 5.7%±0.24% error in cross-sectional area. Furthermore, a user study survey confirmed the efficacy of our model for muscle view retrieval.
§ INTRODUCTION
Muscle wasting, also known as muscle atrophy (see Fig. <ref>), is a common complication in critically ill patients, especially in those who have been hospitalized in the intensive care unit (ICU) for a long period <cit.>. Factors contributing to muscle wasting in ICU patients include immobilization, malnutrition, inflammation, and the use of certain medications <cit.>. Muscle wasting can result in weakness, impaired mobility, and increased morbidity and mortality. Assessing the degree of muscle wasting in ICU patients is essential for monitoring their progress and tailoring their rehabilitation program to recover muscular mass through physiotherapy before patient discharge.
Traditional methods of assessing muscle wasting, such as physical examination, bioelectrical impedance analysis, and dual-energy X-ray absorptiometry, may be limited in ICUs due to the critical illness of patients <cit.>. Instead, ultrasound (US) imaging has emerged as a reliable, non-invasive, portable tool for assessing muscle wasting in the ICU <cit.>.
The accuracy and reliability of US imaging in assessing muscle wasting in ICU patients have been demonstrated by Parry et al. <cit.>. US imaging can provide accurate measurements of muscle size, thickness, and architecture, allowing clinicians to track changes over time. However, these measurements are typically performed manually, which is time-consuming, subject to large variability and depends on the expertise of the operator. Furthermore, operators might be different from day to day and/or start scanning from different positions in each scan which will cause further variability.
In recent years, self-supervised learning (SSL) has gained popularity for automated diagnosis in the field of medical imaging due to its ability to learn from unlabeled data <cit.>. Previous studies on SSL for medical imaging have focused on designing pretext tasks <cit.>. A class of SSL, contrastive learning (CL), aims to learn feature representations via a contrastive loss function to distinguish between negative and positive image samples. A relatively small number of works have applied CL to US imaging, for example to synchronize different cross-sectional views <cit.> and to perform view classification <cit.> in echocardiography (cardiac US).
In this paper, we focus on the underinvestigated application of view matching for longitudinal rectus femoris (RF) muscle US examinations to assess muscle wasting. Our method uses a CL approach (see Fig. 2) to learn a discriminative representation from muscle US data which facilitates the retrieval of similar muscle views from different scans.
The novel contributions of this paper are: 1) the first investigation of the problem of muscle US view matching for longitudinal image analysis, and 2) our approach is able to automatically retrieve similar muscle views between different scans, as shown by quantitative validation and qualitatively through a clinical survey.
§ METHOD
§.§ Problem Formulation
Muscle wasting assessment requires matching of corresponding cross-sectional US views of the RF over subsequent (days to weeks apart) examinations. The first acquisition is carried out following a protocol to place the transducer half way through the thigh and perpendicular to the skin, but small variations in translation and angulation away from this standard view are common. This scan produces the reference view at time T_1 (RT_1). The problem is as follows: given RT_1, the task is to retrieve the corresponding view (VT_2) at a later time (T_2) from a sequence of US images captured by the operator using the transducer at approximately the same location and angle as for T_1. The main challenges of this problem include: (1) the transducer pose and angle might be different, (2) machine settings might be slightly different, and (3) parts of the anatomy (specifically the RF) might change in shape and size over time. As a result, our aim is to develop a model that can select the most similar view acquired during T_2 to the reference view RT_1 acquired at T_1.
§.§ Contrastive Learning Framework for Muscle View Matching
Inspired by the SimCLR algorithm <cit.>, our model learns representations by maximizing the similarity between two different augmented views of the same muscle US image via a contrastive loss in the latent space. We randomly sample a minibatch of N images from the video sequences over three times T_1, T_2 and T_3, and define the contrastive learning on positive pairs (Xi, Xj) of augmented images derived from the minibatch, resulting in 2N samples. Rather than explicitly sampling negative examples, given a positive pair, we consider the other 2(N-1) augmented image pairs within a minibatch as negative.
The contrastive loss function for a positive pair (Xi, Xj) is defined as:
L^i_C = -log( exp(sim(z_i,z_j)/τ) / ∑_k=1^2N 1_[k≠ i] exp(sim(z_i,z_k)/τ) ),
where 1_[k≠ i]∈{0,1} is an indicator function equal to 1 iff k≠ i, τ is a temperature parameter and sim(·) denotes the pairwise cosine similarity. z is a representation vector, calculated by z = g(f(X)), where f(·) indicates a shared encoder and g(·) is a projection head. L^i_C is computed across all positive pairs in a mini-batch, and f(·) and g(·) are trained to maximize agreement between positive pairs by minimizing this contrastive loss.
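For concreteness, a minimal TensorFlow sketch of this loss is given below. Only the functional form follows the equation above; the function name, the temperature value and the batch layout (the k-th sample of the second half is the positive of the k-th sample of the first half) are illustrative assumptions.

import tensorflow as tf

def nt_xent_loss(z_i, z_j, temperature=0.5):
    # z_i, z_j: (N, d) projections of the two augmented views of the same N images.
    n = tf.shape(z_i)[0]
    z = tf.math.l2_normalize(tf.concat([z_i, z_j], axis=0), axis=1)   # (2N, d)
    sim = tf.matmul(z, z, transpose_b=True) / temperature             # pairwise cosine similarities
    sim = sim - 1e9 * tf.eye(2 * n)       # exclude self-similarity (the indicator 1_[k != i])
    # the positive of sample k is sample k + N, and vice versa
    positives = tf.concat([tf.range(n, 2 * n), tf.range(0, n)], axis=0)
    loss = tf.keras.losses.sparse_categorical_crossentropy(positives, sim, from_logits=True)
    return tf.reduce_mean(loss)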
§.§ The Model Architecture
The model architecture is shown in Fig. <ref>a. First, we train the contrastive model to identify the similarity between two images, which are a pair of image augmentations created by horizontal flipping and random cropping (size 10×10) applied to a US image (i.e., they represent different versions of the same image). Each image of this pair (Xi, Xj) is fed into an encoder to extract representation vectors (hi, hj). The encoder architecture (Fig. <ref>b) has four conv layers (kernel 3×3) with ReLU and two max-poolings. A projection head (a multilayer perceptron with two dense layers of 512 nodes) follows, mapping these representations to the space where the contrastive loss is applied.
Second, we use the trained encoder f(·) for the training of our main task (i.e. the downstream task), which is the classification of positive and negative matches (corresponding and non-corresponding views) of our test set. For that, we feed a reference image X_ref, and a candidate frame X_j to the encoder to obtain the representations hi, hj and feed these in turn to a classification network (shown in Fig. <ref>c) that contains four dense layers with ReLU activation and a softmax layer.
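A sketch of how this architecture could be assembled in Keras is given below. The layer counts, kernel size, poolings and the 512-node projection head follow the description above; the convolution filter counts, the flattening step, the hidden widths of the classifier and the concatenation of the two representations before classification are assumptions made for illustration.

import tensorflow as tf
from tensorflow.keras import layers

def build_encoder(input_shape=(64, 64, 1)):
    # f(.): four 3x3 conv layers with ReLU and two max-poolings.
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", padding="same", input_shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),                      # representation h
    ], name="encoder")

def build_projection_head(in_dim, width=512):
    # g(.): a multilayer perceptron with two dense layers of 512 nodes.
    return tf.keras.Sequential([
        layers.Dense(width, activation="relu", input_shape=(in_dim,)),
        layers.Dense(width),
    ], name="projection_head")

def build_pair_classifier(in_dim):
    # Downstream classifier: four dense layers with ReLU and a softmax over
    # {negative match, positive match}, applied to the pair (h_ref, h_j).
    return tf.keras.Sequential([
        layers.Dense(256, activation="relu", input_shape=(2 * in_dim,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ], name="pair_classifier")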
§ MATERIALS
The muscle US exams were performed using GE Venue Go and GE Vivid IQ machines, both with linear probes (4.2-13.0 MHz), by five different operators. During examination, patients were in supine position with the legs in a neutral rotation with relaxed muscle and passive extension. Measurements were taken at the point three fifths of the way between the anterior superior iliac spine and the patella upper pole.
The transducer was placed perpendicular to the skin and to the longitudinal axis of the thigh to get the cross-sectional area of the RF. An excess of US gel was used and pressure on the skin was kept minimal to maximise image quality. US measurements were taken at ICU admission (T_1), 2-7 days after admission (T_2) and at ICU discharge (T_3).
For this study, 67 Central Nervous System (CNS) and Tetanus patients were recruited and their data were acquired between June 2020 and Feb 2022. Each patient had an average of six muscle ultrasound examinations, three scans for each leg, totalling 402 examinations. The video resolution was 1080 × 1920 with a frame rate of 30fps. This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Ethics Committee of the Hospital for Tropical Diseases, Ho Chi Minh City and Oxford Tropical Research Ethics Committee.
The contrastive learning network was trained without any annotations. However, for the view matching classification task, our test data were annotated automatically
as positive and negative pairs based upon manual frame selection by a team of five doctors comprising three radiologists and two ultrasound specialists with expertise in muscle ultrasound. Specifically, each frame in an examination was manually labelled as containing a similar view to the reference RT_1 or not. Based upon these labellings, as shown in Fig. <ref>, the positive pairs are combinations of similar views within each examination (T_1/T_2/T_3) and between examinations. The rest are considered negative pairs.
§ EXPERIMENTS AND RESULTS
§.§ Implementation Details
Our model was implemented using Tensorflow 2.7. During training, we experimented with input clip sizes of 256 × 256, 128 × 128, and 64 × 64; resizing to 64 × 64 clips yielded the best performance and was used for all experiments. All the hyperparameters were chosen using the validation set. For the CL training, the standard Adam optimizer was used with learning rate = 0.00001, kernel size = 3 × 3, batch size = 128, batch normalization, dropout with p = 0.2 and L2 regularization of the model parameters with a weight = 0.00001. The CL model was trained on 80% of the muscle US data for 500 epochs. For the view retrieval model, the standard Adam optimizer with learning rate = 0.0001, batch size = 42 and dropout of p = 0.2 was used. The classifier was trained on the remaining 20% of the data (of which 80% were used for training, 10% for validation and 10% for testing) and the
network converged after 60 epochs. For the supervised baseline model, the standard Adam optimizer was used with learning rate =0.00001, kernel size = 3 × 3, batch size = 40, and batch normalization. Here, we used the same data splitting as our view retrieval classifier. The code we used to train and evaluate our models is available at <https://github.com/hamidehkerdegari/Muscle-view-retrieval>.
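Purely as an illustration of how the contrastive pretraining stage can be wired together with the reported settings (Adam, learning rate 0.00001, batch size 128, 500 epochs), a custom training step might look like the sketch below. It reuses the hypothetical builders and loss from the previous sketches and assumes a tf.data pipeline of augmented pairs; it is not the released implementation linked above.

import tensorflow as tf

# Assumes build_encoder, build_projection_head and nt_xent_loss from the sketches
# above, plus a tf.data pipeline `pair_ds` yielding batches (x_i, x_j) of size 128.
encoder = build_encoder()
projector = build_projection_head(in_dim=encoder.output_shape[-1])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)

@tf.function
def pretrain_step(x_i, x_j):
    with tf.GradientTape() as tape:
        z_i = projector(encoder(x_i, training=True), training=True)
        z_j = projector(encoder(x_j, training=True), training=True)
        loss = nt_xent_loss(z_i, z_j)
    variables = encoder.trainable_variables + projector.trainable_variables
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss

# for epoch in range(500):
#     for x_i, x_j in pair_ds:
#         pretrain_step(x_i, x_j)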
§.§ Results
§.§.§ Quantitative Results
We carried out two quantitative experiments. First, we evaluated the performance of the view classifier. Second, we evaluated the quality of the resulting cross-sectional areas segmented using a U-Net <cit.>.
The classifier performance was carried out by measuring, for the view retrieval task, the following metrics: Area Under the Curve (AUC), precision, recall, and F1-score. Because there is no existing state of the art for this task, we created two baseline models to compare our proposed model to: first, a naive image-space comparison using normalized cross-correlation (NCC) <cit.>, and second, a supervised classifier. The supervised classifier has the same architecture as our CL model, but with the outputs of the two networks being concatenated after the representation h followed by a dense layer with two nodes and a softmax activation function to produce the probabilities of being a positive or negative pair.
Table <ref> shows the classification results on our dataset.
As shown in Table <ref>, our proposed method achieved superior performance in terms of AUC, precision, recall, and F1-score compared to all other models. The NCC method demonstrated the lowest performance, as it lacked the capability to accurately capture dynamic changes and deformations in US images which can result in significant structural differences. A representative example of a model-retrieved view for one case is presented in Fig. <ref>. It shows positive,
negative, and middle (i.e., images with a probability value between the highest and lowest values predicted by our model) pairs of images generated by our model from a patient's left leg. As reference, on the left we show the user pick (RT_2).
To assess the quality of the resulting cross-sections, we calculated the mean relative absolute area difference (d) between the ground truth (a_GT) frame and that of the model predicted frame (a_pred) for each examination as follows:
d =|a_GT-a_pred|/a_GT
We applied a trained U-Net model (already trained with 1000 different US muscle images and manual segmentations). Results showed an overall cross-sectional mean relative absolute area error of 5.7%±0.24% on the test set (Full details provided in Fig. <ref>, right). To put this number into context, Fig. <ref>, left visualizes two cases where the relative error is 2.1% and 5.2%.
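The error metric above is straightforward to reproduce; the snippet below computes the mean relative absolute area difference d, with purely illustrative area values in place of the U-Net outputs.

import numpy as np

def relative_area_error(a_gt, a_pred):
    # d = |a_GT - a_pred| / a_GT, averaged over examinations.
    a_gt = np.asarray(a_gt, dtype=float)
    a_pred = np.asarray(a_pred, dtype=float)
    return np.mean(np.abs(a_gt - a_pred) / a_gt)

# cross-sectional areas (e.g. in cm^2) of the ground-truth frame vs. the
# model-retrieved frame; values are illustrative placeholders
print(relative_area_error([5.1, 4.8, 6.0], [5.0, 5.1, 5.8]))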
§.§.§ Qualitative Results
We conducted a user study survey to qualitatively assess our model's performance. The survey was conducted blindly and independently by four clinicians and consisted of thirty questions.
In each, clinicians were shown two different series of three views of the RF: (1) RT_1, GT match from T_2 and model prediction from T_2, and (2) RT_1, a random frame from T_2 and model prediction from T_2. They were asked to indicate which (second or third) was the best match with the first image.
The first series aimed to determine whether the model's performance was on par with that of the clinicians, while the second aimed to determine whether the model's selection of images was superior to a randomly picked frame. As shown in Fig. <ref>, left, clinicians chose the model prediction more often than the GT; however, this difference was not significant (paired Student's t-test, p=0.44, significance=0.05). Therefore, our model can retrieve the view as well as clinicians, and significantly better (Fig. <ref>, right) than randomly chosen frames (paired Student's t-test, p=0.02, significance=0.05).
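The statistical comparison used here is a standard paired Student's t-test; a minimal sketch is shown below, where the per-clinician preference fractions are illustrative placeholders rather than the actual survey responses.

from scipy import stats

# fraction of questions on which each clinician preferred the model prediction
# versus the alternative (GT match or random frame); values are placeholders
model_pref = [0.55, 0.60, 0.52, 0.58]
other_pref = [0.45, 0.40, 0.48, 0.42]

t_stat, p_value = stats.ttest_rel(model_pref, other_pref)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")   # compare p to 0.05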
§ DISCUSSION AND CONCLUSION
This paper has presented a self-supervised CL approach for automatic muscle US view retrieval in ICU patients. We trained a classifier to find positive and negative matches. We also computed the cross-sectional area error between the ground truth frame and the model prediction at each acquisition time to evaluate model performance. The performance of our model was evaluated on our muscle US video dataset, showing an AUC of 73.52% and a 5.7%±0.24% error in cross-sectional area. Results showed that our model outperformed the supervised baseline approach. This is the first work proposed to identify corresponding ultrasound views over time, addressing an unmet clinical need.
§ ACKNOWLEDGMENTS
The VITAL Consortium: OUCRU: Dang Phuong Thao, Dang Trung Kien, Doan Bui Xuan Thy, Dong Huu Khanh Trinh, Du Hong Duc, Ronald Geskus, Ho Bich Hai, Ho Quang Chanh, Ho Van Hien, Huynh Trung Trieu, Evelyne Kestelyn, Lam Minh Yen, Le Dinh Van Khoa, Le Thanh Phuong, Le Thuy Thuy Khanh, Luu Hoai Bao Tran, Luu Phuoc An, Nguyen Lam Vuong, Ngan Nguyen Lyle, Nguyen Quang Huy, Nguyen Than Ha Quyen, Nguyen Thanh Ngoc, Nguyen Thi Giang, Nguyen Thi Diem Trinh, Nguyen Thi Kim Anh, Nguyen Thi Le Thanh, Nguyen Thi Phuong Dung, Nguyen Thi Phuong Thao, Ninh Thi Thanh Van, Pham Tieu Kieu, Phan Nguyen Quoc Khanh, Phung Khanh Lam, Phung Tran Huy Nhat, Guy Thwaites, Louise Thwaites, Tran Minh Duc, Trinh Manh Hung, Hugo Turner, Jennifer Ilo Van Nuil, Vo Tan Hoang, Vu Ngo Thanh Huyen, Sophie Yacoub. Hospital for Tropical Diseases, Ho Chi Minh City: Cao Thi Tam, Ha Thi Hai Duong, Ho Dang Trung Nghia, Le Buu Chau, Le Mau Toan, Nguyen Hoan Phu, Nguyen Quoc Viet, Nguyen Thanh Dung, Nguyen Thanh Nguyen, Nguyen Thanh Phong, Nguyen Thi Cam Huong, Nguyen Van Hao, Nguyen Van Thanh Duoc, Pham Kieu Nguyet Oanh, Phan Thi Hong Van, Phan Vinh Tho, Truong Thi Phuong Thao. University of Oxford: Natasha Ali, James Anibal, David Clifton, Mike English, Ping Lu, Jacob McKnight, Chris Paton, Tingting Zhu Imperial College London: Pantelis Georgiou, Bernard Hernandez Perez, Kerri Hill-Cawthorne, Alison Holmes, Stefan Karolcik, Damien Ming, Nicolas Moser, Jesus Rodriguez Manzano. King’s College London: Liane Canas, Alberto Gomez, Hamideh Kerdegari, Andrew King, Marc Modat, Reza Razavi. University of Ulm: Walter Karlen. Melbourne University: Linda Denehy, Thomas Rollinson. Mahidol Oxford Tropical Medicine Research Unit (MORU): Luigi Pisani, Marcus Schultz
|
http://arxiv.org/abs/2306.07964v1
|
20230613175857
|
High resolution spectroscopy of SN~2023ixf's first week: Engulfing the Asymmetric Circumstellar Material
|
[
"Nathan Smith",
"Jeniveve Pearson",
"David J. Sand",
"Ilya Ilyin",
"K. Azalee Bostroem",
"Griffin Hosseinzadeh",
"Manisha Shrestha"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR"
] |
Nathan Smith (ORCID 0000-0001-5510-2424), Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
Jeniveve Pearson (ORCID 0000-0002-0744-0047), Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
David J. Sand (ORCID 0000-0003-4102-380X), Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
Ilya Ilyin (ORCID 0000-0002-0551-046X), Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam, Germany
K. Azalee Bostroem (ORCID 0000-0002-4924-444X), Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA; LSSTC Catalyst Fellow
Griffin Hosseinzadeh (ORCID 0000-0002-0832-2974), Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
Manisha Shrestha (ORCID 0000-0002-4022-1874), Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
We present a series of high-resolution echelle
spectra of SN 2023ixf in M101, obtained nightly during the
first week or so after discovery using PEPSI on the LBT.
Na i D absorption in these spectra indicates a reddening of E(B-V)=0.031 mag
and a systemic velocity of +7 km s^-1 relative to the
average redshift of M101. Dramatic changes are seen in the strength and shape of strong emission lines
emitted by circumstellar material (CSM), including He ii λ4686,
C iv λλ5801,5811, Hα, and
N iv λλ7109,7123. In general, these narrow lines
broaden to become intermediate-width lines before disappearing from
the spectrum within a few days, indicating a limited extent to
the dense CSM of around 20-30 AU (or 10^14.7 cm). Hα persists in the spectrum
for about a week as an intermediate-width emission line with P Cyg absorption at 700-1300 km
s^-1 arising in the post-shock shell of swept-up CSM. Early
narrow emission lines are blueshifted and indicate an expansion speed
in the pre-shock CSM of about 115 km s^-1, but with even broader
emission in higher ionization lines. This is faster than the
normal winds of red supergiants, suggesting some mode
of eruptive mass loss from the progenitor or radiative
acceleration of the CSM. A lack of narrow blueshifted absorption
suggests that most of the CSM is not along our line
of sight. This and several other clues indicate that the CSM of
SN 2023ixf is significantly aspherical. We find that CSM lines disappear after a few days because the
asymmetric CSM is engulfed by the SN photosphere.
§ INTRODUCTION
Understanding the late evolution and end fates of massive stars
remains an enduring challenge. It was recognized long ago that mass
loss plays a key role in determining the outcome of stellar evolution
<cit.>. In recent years, however, the traditional view
where well-behaved, steady stellar winds of single stars lead to
predictable outcomes with reliable metallicity-dependence
<cit.>, has gradually been eroding, giving way
instead to a more complicated picture where binary interaction and
eruptive events dominate the mass loss <cit.>. These modes of mass loss do not have
well-established monotonic trends with initial mass or metallicity,
and are challenging for models of
single-star evolution.
A major reason for this shifting paradigm is that normal, steady
stellar winds of hot massive stars are evidently not as strong as we
used to think, reducing their ability to remove the H envelope and to
strongly impact evolution <cit.>. The
same applies to RSG winds, indicated by recent downward revisions of
normal RSG wind mass-loss rates and the general scarcity of
dust-enshrouded RSGs <cit.>. This shift is also
influenced by results from several different lines of inquiry: (1)
strong evidence for binary-stripped progenitors of H-poor supernovae
(SNe) <cit.>, (2) observational evidence for
extreme, eruptive modes of mass loss <cit.>, and (3) firmer
observational estimates of a high interacting binary fraction among O-type
stars <cit.>.
Another driving factor toward a more complicated view of mass loss has
been the discovery of a number of different explosive transients that
simply do not fit predictions of the traditional view of a massive star
evolution dominated by single-star wind mass loss. Chief among these
are SNe with signatures of strong shock interaction with circumstellar
material (CSM). Evolved massive stars that retain their H envelopes
usually have large radii and relatively slow escape speeds, which can
lead to slow CSM that produces narrow H lines in the spectrum of the
SN. In this case, they are classified as Type IIn
<cit.>. The illumination or shock heating of close-in CSM can provide
unique clues about the mass loss properties of the progenitor star in
the late evolutionary phases of its life, which are otherwise
difficult to infer <cit.>. There is
wide diversity among SNe with observed signatures of
H-rich CSM interaction, ranging from super-luminous SNe IIn, scaling
down through normal SNe IIn, and further down to those with barely
any observable signatures of CSM interaction.
On the less extreme end, we see interacting SNe where the spectral
signatures of CSM interaction are fleeting. One of these events is the topic
of the current paper. The narrow lines may last for
only a few days or a week before fading, and they can quickly
transition to look like normal[“Normal” here means a
visual-wavelength spectrum dominated by an ejecta photosphere, not
by CSM interaction.] SNe. This class of objects has been known for
about four decades. Well-studied examples of the phenomenon include
SN 1983K <cit.>, SN 1993J <cit.>, SN 1998S
<cit.>, SN 2006bp <cit.>, PTF11iqb
<cit.>, and SN 2013cu <cit.>.
Additional events studied in detail include SN 2013fs, SN 2017ahn, SN 2020pni, and SN 2020tlf <cit.>. Because study of this class requires early discovery on timescales of
hours or days after explosion, early examples were limited to
fortuitous early detections of nearby events. With more systematic
transient searches and early discovery becoming more routine, growing
samples of this class have been identified <cit.>. Some
estimates suggest that a large fraction (≳1/3) of
otherwise normal core-collapse SNe (ccSNe) have these early CSM features <cit.>,
which is larger than the 8-9% of ccSNe that are more
traditional strongly-interacting SNe IIn <cit.>.
The defining characteristic of this class is very short-lived (a few
days) narrow emission lines in the spectrum, which are thought to
result from dense and confined CSM within 10s of AU around the
progenitor star. As with the broader class of SNe IIn, this is a
phenomenon that is not unique to any one type of explosion or any
unique progenitor type because it depends on the characteristics of the
surrounding material — in principle, any SN type might be surrounded
by dense and confined CSM. In practice, the observed events tend to
be H rich (perhaps because narrow lines require that a progenitor had
slow escape speeds due to a large H envelope), and they usually evolve
into SNe IIb, II-P, or II-L when the CSM interaction signatures fade.
Early spectra show high-ionization emission lines like He ii and
doubly or triply ionized C and N lines with narrow cores and broad
wings, and these emission lines sit atop a smooth blue continuum. The
high ionization level is thought to arise from photoionization of the
CSM by a hard radiation field, produced either by a UV/X-ray flash
from shock breakout, or produced by the shock when the fastest SN
ejecta first crash into the CSM. These high-ionization lines cause
the early spectra to resemble Wolf-Rayet (WR) stars, leading to some
claims that this points to WR progenitors <cit.>. However,
the WR spectral features arise because a slow, dense, H-rich wind is
ionized by the SN; the progenitor star is likely to have been cool and
potentially even self-obscured by its CSM, and would not have been
seen as a WR star, more likely resembling a cool hypergiant
<cit.>. Of course, this would depend on when
exactly the progenitor star was observed, since the confined CSM may
have just been produced shortly before the SN (i.e. the star may have
appeared as a normal RSG or YSG a few years earlier).
There are several remaining open questions about this class of
objects, concerning the mechanism that ionized the CSM (flash from
shock breakout or shock interaction), the range of physical properties of
the CSM (total CSM mass or mass-loss rate of the progenitor, range of
shell/envelope radii, asymmetry, composition, etc.), timescale of the
mass-loss before explosion, details of the evolution of the shock
through the CSM, range of initial masses for the progenitors, and so
on. All of these help to inform the most important question, which
concerns the physical mechanism operating within the star that caused
it to suddenly eject so much mass right before core collapse. The
observed velocities of the CSM and the quick disappearance of the
narrow lines (and hence, the small inferred outer boundary of the CSM)
imply that the strong mass loss occurred very soon before core
collapse, perhaps in the last few months or the final year or two of
the star's life.
This timescale is a strong hint that something is going haywire in the
star during the last rapid phases of nuclear burning (Ne, O, or Si
burning), and several ideas have been proposed for extreme mass loss
triggered during these phases
<cit.>.
Since the CSM interaction is so short-lived (and the total CSM mass
estimates are on the order of 0.1 M_⊙), and as these objects
evolve into relatively normal SN types when the narrow lines fade
(perhaps implicating moderately massive 10-20 M_⊙ red or
yellow supergiant progenitors), it is unlikely that some other
mechanisms proposed for pre-SN mass loss in SNe IIn will be applicable
to this particular class. For instance, pulsational-pair instability
eruptions <cit.> are limited to only very high initial
masses, and are probably ruled out for these objects. Also, it is
difficult to understand why strong pulsationally driven superwinds
from very luminous RSGs <cit.> would only operate
for ∼1 yr before core collapse.
In any case, the range of physical parameters for the CSM deduced from
studies of individual events can help inform what mechanism ejected
the CSM. Perhaps it can also help to understand how/if these objects
are connected to the broader class of interacting SNe IIn, or if they
are a distinct phenomenon.
Table 1: Log of LBT/PEPSI Observations

Date (UTC)   MJD         Epoch (days)   Airmass
2023-05-21   60085.373   2.62           1.35
2023-05-22   60086.244   3.49           1.08
2023-05-23   60087.150   4.40           1.14
2023-05-24   60088.155   5.40           1.12
2023-05-26   60090.357   7.60           1.34
2023-05-27   60091.183   8.43           1.08
2023-06-05   60100.329   17.56          1.34
Here we discuss a new member of this class, SN 2023ixf, which exploded
in the very nearby spiral galaxy M101. It was discovered by
K. Itagaki on 2023 May 19, and was soon classified as a Type II SN by
<cit.>. In the following, we adopt a
host redshift for M101 of z=0.000804 <cit.>.
From examining pre-explosion archival images, a candidate progenitor
consistent with a moderate-luminosity RSG progenitor has been
identified <cit.>, suggesting a star that had an
initial mass of around 12-17 M_⊙.
SN 2023ixf was quickly rising at the time of discovery and was expected to become very
bright, and because it was a Type II event that could potentially show
early narrow lines in the spectra, we chose to initiate an intensive
observing campaign to obtain high-resolution echelle spectra every
night (or almost every night) for the first week or so after
discovery, in order to document rapid changes in the narrow emission
from CSM. These observations and initial results are described here,
while companion papers describe the early light curve <cit.>
and low-resolution spectra (Bostroem et al., in prep.). Section <ref>
describes the observations, Section <ref> describes the resulting
data and analysis, and Section <ref> presents our
interpretation of these early data.
§ OBSERVATIONS
Shortly after discovery, we initiated a campaign to obtain
observations of SN 2023ixf with a nearly nightly cadence using the
Potsdam Echelle Polarimetric and Spectroscopic Instrument (PEPSI;
) mounted on the Large Binocular Telescope (LBT)
located on Mt. Graham, AZ. PEPSI is a cross-dispersed echelle
spectrograph with separate blue and red channels, each with three
wavelength ranges corresponding to three cross dispersers (CDs), with
CD I, II, and III in the blue arm, and CD IV, V, and VI in the red
arm. When combined, these are designed to cover the full optical
wavelength range with no gaps. At the time of these observations, CD
I and CD III were not available, so we used CD II (covering 4219-4787
Å) in the blue channel, and CD IV (5361-6316 Å), CD V (6232-7428
Å) and CD VI (7351-9064 Å) in the red channel. All observations were composed of a 60 minute blue channel exposure with CD II and three 20 minute red channel exposures with CD IV, CD V, and CD VI. We used a 300
μm fiber (2.2 arcsec diameter) corresponding to a spectral
resolving power of R=λ/Δλ=50,000, or a velocity
resolution of about 6 km s^-1.
The data were reduced using the
Spectroscopic Data Systems (SDS) pipeline <cit.>.
The pipeline performs bias subtraction and flat field correction, order
tracing and optimal extraction with cosmic ray elimination, and wavelength
calibration. The spectral orders are normalized in 2D with a non-linear
constrained least-squares fit to account for broad emission lines
spanned over adjacent spectral orders. Finally, the spectral orders
are rectified into a single spectrum for each CD. The wavelength scale
was reduced to the Solar System Barycentric rest frame using
JPL ephemerides. The pipeline also estimates the variance in each pixel.
Based on early photometry and upper limits, <cit.> estimate a likely explosion time of MJD=60082.75.
Using this as a reference, our LBT/PEPSI spectra were obtained between 2 and 9
days after explosion. We use this date to calculate the time since
explosion for each PEPSI spectrum, listed as the 3rd column in Table 1.
§ RESULTS
An example of the resulting normalized PEPSI spectrum of SN 2023ixf is
shown in Figure <ref>. This shows the spectra on days 2.6 (blue),
3.5 (black), and 8.4 (magenta). Although the spectrum appears somewhat complicated, most
of the structure results from complex telluric absorption bands, which
are labeled in blue in Figure <ref>. Overall, the spectrum
of SN 2023ixf at these early times is dominated by a very smooth blue
continuum (although the continuum slope is normalized here) plus a
small number of prominent lines labeled in red-orange in
Figure <ref>. At this early epoch within only about a week after
explosion, it does not yet show any emission or absorption from very
broad features associated with the fast SN ejecta. Besides the
interstellar absorption from Na i D, the most interesting
features are the strong narrow emission lines in the spectrum that
dramatically change strength and shape with time over only a few
days: He ii λ4686, C iv λλ5801,5811,
and Hα. The spectra also show weak narrow emission from N iii λλ4634,4641 and C iii λλ4648,4650, seen only in our first epoch, as well as weak emission from N iv
λλ7109,7123 and He i λ5876, the latter of
which grows in strength with time. We discuss each of these lines in turn, after
briefly examining the narrow interstellar absorption.
§.§ Na I D and Interstellar Reddening
Because the line of sight Milky Way reddening toward M101 of
E(B-V)=0.0074 mag <cit.> is quite low and because we are
primarily examining line profiles in normalized spectra (where the
continuum flux is divided out anyway), we make no reddening correction
to our echelle spectra. However, high-resolution echelle spectra afford an
opportunity to provide a precise constraint on the equivalent width
(EW) of the narrow Na i D interstellar absorption lines arising
along the line of sight through the host galaxy.
Figure <ref> shows a detail of the region of the spectrum
around the Na i D_1 and D_2 resonance doublet, which has
been corrected for a redshift of z=0.000804. Each epoch of PEPSI
spectra is shown, and the nature of narrow absorption lines in the
spectrum is remarkably consistent over the various epochs. The blue
boxes in Figure <ref> indicate the expected positions of the
two Na i lines in M101. Indeed there are two strong narrow
absorption features detected here. These are seen to be mixed amid a large
number of other weaker narrow absorption features, which may arise
from various clouds along the line of sight through the Milky Way at
various rotation velocities. This suggests that high spectral
resolution is important to accurately estimate the reddening within
M101, because its low redshift means that its interstellar
absorption features may overlap with those from the Milky Way.
We measure Na i EW values of 0.156 Å for D_1 and 0.187 Å
for D_2. The measured EWs were consistent to better than 1% from
one epoch to the next; the main uncertainty in the absolute
measurement of the EW values is the choice of continuum level, and how
much contamination there might be from overlapping lines. Although
there are many other weak narrow lines, the continuum level in these
spectra is well determined to about 1% of the flux level as well.
Following the relations from <cit.>, these EWs translate to a
reddening value of E(B-V) = 0.036 mag. Following <cit.>, we
multiply this by 0.86 to account for the conversion to <cit.>
values, yielding a host reddening along the line of sight to
SN 2023ixf of E(B-V) = 0.031 mag. The uncertainty in the relation from
<cit.> is 30-40 %, which is much larger than any error in
E(B-V) introduced by measurement error in these PEPSI spectra. This
resulting value of E(B-V) = 0.031 mag is larger than the Milky Way
reddening, indicating that the local host dust in M101 is the dominant
source of extinction along the line of sight. This may, of course,
vastly underestimate the extinction toward the progenitor star arising from dust that
may have been present in the pre-SN CSM.
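For reference, the conversion from the measured equivalent widths to the quoted reddening can be reproduced with the combined-doublet form of the cited Na i D calibration, as sketched below; the numerical coefficients are our reading of that relation and should be treated as an assumption if reused.

# Na I D equivalent widths measured above (Angstrom)
ew_d1, ew_d2 = 0.156, 0.187

# combined-doublet form of the EW -> E(B-V) relation (coefficients assumed
# from the cited calibration): log10 E(B-V) = 1.17 * EW(D1+D2) - 1.85
ebv = 10 ** (1.17 * (ew_d1 + ew_d2) - 1.85)   # ~0.036 mag
ebv_host = 0.86 * ebv                         # rescaled as described above, ~0.031 mag
print(f"E(B-V) = {ebv:.3f} mag -> rescaled E(B-V) = {ebv_host:.3f} mag")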
The velocities of these deep Na i D absorption features are also
useful later in our analysis, especially when interpreting velocities
of narrow emission components. We measure the centroid velocity of
the strongest components of the D_1 and D_2 lines. After
correcting the spectra for the adopted host redshift of z=0.000804,
we take the average of the two lines to derive a Doppler velocity for
the Na i D absorption of +7 (±1) km s^-1, relative to
the adopted redshift of M101 (which is about +241 km s^-1).
When we interpret the observed velocities below in Section <ref>,
we take this value of +7 km s^-1 as representative of the
velocity of interstellar material in the vicinity of SN 2023ixf that
results from galactic rotation, and therefore as a likely indication of
the progenitor star's systemic velocity. In
figures in this manuscript showing the observed line profiles, the velocity scale is only
corrected for z=0.000804, but we show this +7 km
s^-1 systemic velocity with a vertical green dashed line.
§.§ High ionization Features
Notable high ionization lines in these early-time echelle spectra are
He ii λ4686, C iv λλ5801,5811, and
N iv λλ7109,7123. The line profile evolution of
each of these can be seen in Figures <ref>a, <ref>c,
and <ref>d, respectively (Panel <ref>b shows He i λ5876, discussed in the next section). If we take into
account the fact that C iv and N iv are closely spaced
doublets or blends, the evolution of strength and line profile shape
in each of these is similar. All three have relatively strong, narrow
(100 km s^-1), blueshifted (-50 to -150 km s^-1) emission peaks with broader wings
at the first epoch. These lines then appear to broaden and fade over
the next 2-3 days, and completely disappear 3-4 days later. The
evolution of the equivalent widths (EWs) of these emission lines
measured in our PEPSI spectra are shown in Figure <ref>. All
the high-ionization lines fade by factors of 10-20 or more over a time
period of 4 days.
The change in width of He ii over only 1 day
from May 21 to 22 is particularly stunning, where the broad ±1000
km s^-1 emission wings come and go very quickly. The He ii profile on day 2.6 is narrow, lacking the broad electron scattering wings seen in Hα on that same date (as we show below, when we subtract off a 1000 km s^-1 Lorentzian profile from Hα, the two line profiles are very similar). He ii then fades and disappears by day 5.4. Of these
high-ionization lines, C iv λλ5801,5811 has the
broadest wings (extending to -2000 km s^-1 on the blue side) and
it lingers the longest before fading, disappearing from the spectrum
about a day later than He ii. In any case, all narrow and
intermediate-width emission from these high-ionization features is
completely absent from the spectrum by day 7.6. The narrow emission component fades more quickly, leaving
only a fainter intermediate-width component to persist for a few days.
High-ionization emission might fade either because the gas cools and
recombines, or because the CSM is overtaken by the SN; we return to
this topic later in Section <ref>.
There is a persistent weak emission feature on the blue wing of
He ii at about -450 km s^-1, possibly with a P Cygni
profile. This is most likely an artifact that arises where edges of
echelle orders are merged (see the top panel of Figure <ref>,
as noted earlier). This is probably not emission from N iii
λλ4679.4,4679.8, which is, however, seen in low-resolution spectra a day earlier (Bostroem et al., in prep.).
Notable for their general absence in our PEPSI spectra are N iii λλ4634,4641 and C iii λλ4648,4650. Together with He ii λ4686, these lines constitute the so-called blue WR bump. In many examples of SNe II with fleeting CSM interaction signatures, these N iii/C iii lines are very strong (often equal in strength to He ii λ4686), and with strong electron scattering wings <cit.>. In fact, these N iii/C iii lines are seen in our spectra of SN 2023ixf, but only in our first spectrum on day 2.6, where they are extremely weak (Fig <ref>). The lines disappear the next day. Also, when seen on day 2.6, they only show the narrow emission components with no broad wings; these narrow components have the same blueshift and approximately the same width as He ii (Fig <ref>). These lines are stronger the previous day in lower-resolution spectra (Bostroem et al., in prep.), as noted above for N iii λλ4679.4,4679.8. Over the same period from day 2.6 to 3.5 when these lines vanish from our spectra, the strengths of C iv and He ii are still increasing (Fig. <ref>). This indicates that even as late as 2-3 days after explosion, the compact CSM is still increasing in ionization level, even though the light travel time to 20-30 AU is only about 3 hrs. This suggests that a sudden flash of ionization from shock breakout is probably not the primary ionization source for the CSM, which may instead be photoionized by emission from the ongoing shock/CSM interaction <cit.>.
§.§ Recombination of He II to He I
The detailed evolution of He i λ5876 is shown in
Figure <ref>c. This line is often seen as a strong narrow
emission line in early SNe II with CSM signatures, and in SNe IIn, but
it is totally absent in the first two epochs of our echelle spectra of
SN 2023ixf. Interestingly, however, He i λ5876 starts
to grow in strength and becomes an admittedly still very weak
intermediate-width emission feature by days 7.6 and 8.4 (blue and violet
in Figure <ref>c; note that this line is found amid a forest
of telluric H_2O absorption features). Figure <ref> shows
the equivalent widths of He i emission as compared to other
lines, demonstrating how the strength of He i emission increases
as He ii and the other high-ionization lines fade away over 4-5
days. (Note that while the actual value of the EW for He i is
quite uncertain because of all the overlapping telluric absorption, the
relative increase in strength of He i shown in
Figure <ref> is real because the forest of telluric lines does not alter the flux passing between them.)
The line profiles of He i during this evolution can be seen in
Figure <ref> as noted earlier, but are shown more clearly in
Figure <ref>c. The emission component of He i has a width of about
500-1000 km s^-1; there is no narrow emission from He i
λ5876 that would correspond to the blueshifted, narrow (50-100
km s^-1) peak of He ii in the first epoch. He i
λ5876 shows weaker intermediate-width P Cyg absorption, discussed more below.
This lack of narrow emission indicates that the weak He i
λ5876 emission detected in these spectra is arising from
accelerated gas that has already been swept up by the shock and is now
cooling and recombining. Accordingly, this indicates that the narrow
component of He ii (and by extension, the narrow components of
N iv and C iv) disappear because the slow gas is
accelerated by the shock, not because it survives as pre-shock CSM
and recombines.
Figure <ref> compares profiles of the fading He ii
λ4686 emission to the last epoch of He i λ5876
emission. Note that as He ii λ4686 emission wings
within ±1000 km s^-1 fade away, the He i λ5876
emission and P Cygni absorption over the same range of wavelengths
become stronger. This suggests that the gas expanding at around 1000
km s^-1 is cooling and recombining. These velocities are
significantly faster than the narrow (50-100 km s^-1) component
from the unshocked CSM seen in the first-epoch spectra. Again, this
confirms that the He i-emitting gas has been accelerated,
probably because it is now in the post-shock shell of swept-up CSM.
The He i λ5876 profile on day 8.4 also shows a
weak and broad P Cygni absorption feature at -500 to -1200 km
s^-1, which is similar to the later epochs of Hα discussed
below.
Interestingly, we do not see a similar ionization transition in alpha
elements. While C iv and N iv fade quickly over a few
days, we do not see a corresponding growth in the strength of N iii or C iii lines that are seen in several other SNe II with
early narrow CSM features, as noted above. At the end of our spectral series, the C iii and N iii
emission features do not turn on as the C iv and N iv
fade. This implies that the C iv and N iv is not fading
primarily because the N and C ions are recombining to a lower
ionization state. Instead, it may suggest that the CSM and shocked
shell are largely getting enveloped by the expanding SN photosphere
after a week; this is discussed more below.
§.§ Intermediate-width and Narrow Hα
Hα exhibits the most interesting and informative evolution of
the lines seen in the spectrum during the first week after explosion
(Fig. <ref>). Overall, Hα displays a clear and steady
evolution from a narrow line core with broad Lorentzian-shaped wings
at the first epoch, transitioning to a clear intermediate-width P
Cygni profile a week later. It is similar to the evolution seen in
normal SNe IIn, but on a vastly compressed timescale. The narrow
component fades more rapidly than the intermediate-width component.
Overall, the EW of Hα fades by about a factor of 5 in the first
week (Fig. <ref>), during a time when the r magnitude only
brightens modestly <cit.>, corresponding to a factor
of ∼1.6 increase in continuum flux. This indicates that the
narrow/intermediate Hα line luminosity fades by a factor of
∼3 during the first week of observations. By 2 weeks after explosion (day 17.7 in Fig. <ref>), the intermediate-width emission component of Hα is gone. Some of the details of
the line profile evolution are interesting. Note that Hα is blended with weak emission from He ii λ6560, which produces a small bump of excess emission on the blue wing of Hα at -130 km s^-1; this is discussed at the end of the current section.
First, we consider the evolution of the intermediate-width component.
Aside from an overall fading with time, the shape of the red wing of
the line changes little, with a gradual reduction in maximum velocity
from about +2,000 km s^-1 on the first epoch down to about
+1,000 km s^-1 a week later. Some of this apparent slowing may
simply arise because the extremes of the line wings fade below the
noise, but it may also result from cooling of the region where the
electron scattering occurs.
There are more dramatic and important changes occurring on the blue
wing of the line. At first the blue wing appears as a nearly
symmetric reflection of the red wing, extending to -2,000 km
s^-1. However, the blue side fades more quickly than the red
wing, steadily transforming into an intermediate-width P Cygni
absorption feature with a trough at -700 km s^-1 and a blue edge
at about -1,300 km s^-1. This change is physically significant.
The intermediate-width wings of interacting SNe are often presumed to
be caused by thermal electron scattering of the narrow line emission,
broadening those narrow line photons into wings that can be
approximated by a Lorentzian shape
<cit.>. This is the case for our
first spectrum on day 2.6 (Fig. <ref>). A Lorentzian with a
FWHM=1000 km s^-1 and with a center shifted 105 km s^-1 to the
blue matches the line wing shapes of Hα on day 2.6 reasonably
well, except for some low-level broad excess emission on the blue
wing. Electron scattering is thermal, and the wings are expected to
be symmetric about the wavelength of the original narrow-line photons. Therefore, the -105 km s^-1 blueshifted centroid of this 1000 km s^-1 Lorentzian makes sense in this case, since the narrow emission that is being scattered and broadened is also blueshifted by a similar amount.
However, electron scattering cannot turn narrow emission into a broad
absorption feature. The transition of SN 2023ixf's Hα line to
an intermediate-width P Cygni absorption feature requires that after day 3, we are
seeing Doppler-shifted absorption and emission by accelerated H atoms, in the post-shock gas,
and not emission from slow moving pre-shock gas that has been broadened
mostly by electron scattering (although electron scattering may
obviously still influence the red wing of the line, for example).
The fact that this broader absorption increases in strength over a
time period when the narrow emission component mostly fades away
strongly suggests that after a few days, most of the pre-shock CSM has
been swept up by the shock. After that, Hα emission and
absorption traces CSM that has been hit by the forward shock and
accelerated, being swept up into a dense post-shock shell (often
called the “cold dense shell”, or CDS, in SNe IIn). By day 8.4, the
red wing of the line can no longer be well matched by a Lorentzian
shape; instead, a Gaussian function with FWHM=900 km s^-1 and with
its center shifted to the blue by 150 km s^-1 gives a better match
(Fig. <ref>). The fact that the CDS is seen in absorption
against the SN continuum photosphere requires that it has indeed
cooled. Similar intermediate-width absorption features arising in the
CDS are seen in some SNe IIn, including SN 2006gy <cit.>
and SN 1994W <cit.>. We also note that the width and shape
of the Hα P Cygni profile on day 8.4 is very similar to He i λ5876 on the same date (Fig. <ref>).
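The intermediate-width components described in this section are simple analytic shapes in velocity space. The sketch below writes down the Lorentzian quoted for the day 2.6 wings and the Gaussian quoted for the day 8.4 red wing; the amplitude scaling and the observed profile it would be compared against are placeholders.

import numpy as np

def lorentzian(v, center=-105.0, fwhm=1000.0, amplitude=1.0):
    # Lorentzian in velocity space (km/s); center and FWHM as quoted for day 2.6
    gamma = fwhm / 2.0
    return amplitude * gamma**2 / ((v - center)**2 + gamma**2)

def gaussian(v, center=-150.0, fwhm=900.0, amplitude=1.0):
    # Gaussian used for the day 8.4 red wing; FWHM converted to sigma
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amplitude * np.exp(-0.5 * ((v - center) / sigma)**2)

v = np.linspace(-3000.0, 3000.0, 2001)      # km/s grid around H-alpha
wing_model = lorentzian(v)                  # amplitude would be scaled to the observed wings
# narrow_component = observed_profile - wing_model   # isolates the narrow core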
Although the intermediate-width Hα persists longer than the high-ionization lines, it doesn't last long. Figure <ref> also shows the observed Hα profile in a PEPSI spectrum on day 17.6 (gray). While there is a gap in our spectral coverage, this shows that the intermediate-width emission component of Hα is gone by a little over 2 weeks after explosion. There is still a kink at around zero velocity, hinting at some persistent intermediate-width P Cygni absorption of Hα at this time. However, there is also a deficit of flux at high velocity (resembling a lower continuum level on the blueshifted side of the line on day 17.6; Fig <ref>), suggesting that the broad absorption from SN ejecta is beginning to influence the spectra by this epoch.
The narrow component of Hα, presumably arising from the
pre-shock CSM, shows surprisingly complex profile evolution.
Excluding the effects of broadening from electron scattering discussed
above, one might expect to see a very narrow (10-20 km s^-1) and
symmetric profile shape from the core of an emission line arising from
a more-or-less spherical and slow RSG wind, but this is evidently not
the case in SN 2023ixf.
The inset (upper right) in Figure <ref> documents changes in
line profile shape of the narrow component of Hα (see also Fig. <ref>). At our first
epoch (day 2.6; black), the narrow component is asymmetric (a broader
blue wing and sharper drop on the red side), it has a blueshifted
centroid (at about -25 km s^-1), and it has a FWHM of 48 km
s^-1. At the second epoch on day 3.5 (red) it becomes weaker,
broader (FWHM = 79 km s^-1), and even more blueshifted (centroid
velocity of -42 km s^-1). In both of these first two epochs,
the narrow component seems rather abruptly cut off at zero velocity
on the red wing. After that, the narrow component becomes much
weaker, and settles down to a more symmetric and narrower (FWHM
≈ 45 km s^-1) emission component that has a centroid
closer to zero velocity or even slightly redshifted (about +8 km
s^-1). At all epochs, the FWHM of the narrow component is
resolved, being significantly larger than the instrumental resolution
of about 6 km s^-1.
How shall we interpret the changing offsets in the centroid velocities
of the narrow emission component? In Section 3.1, we noted that the
centroid velocities of the interstellar Na i D absorption (after
correction for M101's redshift of z=0.000804) was +7 km s^-1,
which we take to be the likely systemic velocity of the progenitor.
This agrees (to within 1 km s^-1) with the centroid velocity that
we measure for the narrow Hα component at ∼1 week post-explosion. One possible interpretation of this is
that the lingering narrow emission centered on the systemic velocity
with a resolved width of 45 km s^-1 corresponds to photoionized
gas in distant regions of the progenitor's RSG wind, which may be
roughly spherical, or at least symmetric about the systemic velocity.
This, in turn, means that the pronounced blueshift of the narrow
Hα component on days 2-3 (as well as the similar blueshift
of narrow components of He ii and other high ionization lines on
these same dates) is real. Thus, in early epochs, we are seeing
emission from inner regions of the CSM that are predominantly on the
near side of the SN, which are expanding toward us at 30-50 km
s^-1 or more (the observed blueshifts are even larger for higher
ionization lines). We return to the implications of this blueshifted
narrow emission later in Section <ref>.
At no time during the first week do the spectra show any hint of a
narrow P Cygni absorption component in Hα that might arise from
absorption along the line of sight through dense, slow, pre-shock CSM.
Such narrow absorption features from the pre-shock CSM are often seen
in SNe IIn, providing that spectra have sufficient resolution
<cit.>. We
note that the narrow emission bump seen at -130 km s^-1 is
actually weak narrow emission from He ii λ6560
superposed on the blue side of the Hα line. It is not a
separate velocity component of Hα, and the gap between these
two is not narrow P Cygni absorption of Hα. The expected systemic
velocity of this He ii line is marked with a vertical
dashed magenta line in Figure <ref>. The emission feature in question is blueshifted
from this reference position by about -30 km s^-1, similar to
the blueshift of the narrow components in other lines at the same
early epoch. The lack of narrow P Cyg absorption may suggest that the CSM is asymmetric, as discussed below.
§ DISCUSSION
§.§ Narrow features and the CSM Speed
As noted above, all of the emission lines seen in our first
PEPSI epoch on day 2.6 have narrow emission peaks that are blueshifted
(see Figures <ref> and <ref>). Recall that the
presumed systemic velocity of SN 2023ixf (indicated by
Na i D absorption) is at +7 km s^-1 relative to the average
redshift of M101. Relative to the systemic velocity, the narrow
Hα peak is at -22 km s^-1 and the narrow emission
component extends from about +15 km s^-1 at the drop on the red
side, reaching out to about -100 km s^-1 or more on the blue
side. Similarly, the high ionization lines (He ii, C iv,
N iv) have their strongest narrow emission on day 2.6 ranging
from about 0 km s^-1 to -150 or -200 km s^-1.
Thus, it appears that the narrow emission on day 2.6 is being emitted by
expanding CSM that is primarily on the near (approaching) side of the
SN. This has been seen before in early spectra of interacting SNe,
and seems to be well understood as being due to the combined effects
of light travel time in the presence of a variable illumination source
and occultation by the SN photosphere <cit.>.
This does not mean that the CSM is actually one sided, but that we are
not able to detect as much emission from the redshifted side of the
CSM because it is blocked from our view or not yet fully illuminated,
depending on the time after explosion. If much of the redshifted CSM
emission is missing, this is important for interpreting the
velocities.
Figure <ref> shows a detail of the narrow Hα
component in our first spectrum, after subtracting away a broad
Lorentzian as in Figure <ref>. This is compared to a
narrow Lorentzian function centered at the systemic velocity of +7
km s^-1, matching the blue wing of Hα (note that this curve
does not fit the excess emission bump at -150 km s^-1, which is
due to emission from He ii λ6560 superposed on the
Hα line). This symmetric Lorentzian curve demonstrates what
the true symmetric narrow Hα emission from the CSM would look
like at this epoch, if we could detect all of it. This, in turn, tells us
that the CSM expansion speed indicated by the narrow Hα
profile is actually about 115 km s^-1. This is much faster than a typical RSG
wind of ∼20 km s^-1.
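As an aside for readers who wish to reproduce this kind of measurement, the short Python sketch below fits a symmetric Lorentzian to a narrow-line velocity profile using only its peak and blue wing; the synthetic arrays, initial guesses, and the wing mask are illustrative assumptions and do not reproduce our actual reduction.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, amp, v0, hwhm):
    # symmetric Lorentzian in velocity space (km/s)
    return amp * hwhm**2 / ((v - v0)**2 + hwhm**2)

# placeholder velocity grid and continuum-subtracted narrow-component flux
velocity = np.linspace(-400.0, 400.0, 801)
flux = lorentzian(velocity, 1.0, 7.0, 60.0) + np.random.normal(0.0, 0.02, velocity.size)

# fit only the peak and blue wing, mimicking the blue-wing matching described above
mask = velocity < 20.0
popt, _ = curve_fit(lorentzian, velocity[mask], flux[mask], p0=[1.0, 0.0, 50.0])
print("centroid = %.1f km/s, HWHM = %.1f km/s" % (popt[1], popt[2]))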
However, interpreting this blueshifted emission is complicated by the
fact that the amount of blueshift is different for different lines (tracing different ionization levels),
and the amount of blueshift in the narrow components is time
dependent. This is discussed next.
§.§ Acceleration of the pre-shock CSM
Figure <ref> illustrates the complex time-dependent and
ionization-dependent evolution of the narrow emission components from
the pre-shock CSM in our first two epochs of PEPSI spectra (these are
day 2.6 and 3.5, shown in the middle and bottom tracings in
Fig. <ref>, respectively). Note that we have removed the
broader wings of these profiles by subtracting Lorentzian functions
with FWHM values of around 1000 km s^-1 (as shown for Hα in
Fig.<ref>). Figure <ref> includes narrow profiles
of Hα (black), He ii (magenta), and C iv (blue),
which trace recombination emission from increasing ionization levels
(13.6 eV, 54.4 eV, and 64.5 eV, respectively).
Figure <ref> does not include N iv because of its
relatively low signal to noise.
On day 2.6 (middle tracings), all the narrow lines are blueshifted,
but there is a remarkable and systematic increase in both width and in
the amount of blueshift of the line center as we move from Hα
to He ii and then to C iv. Basically, higher ionization
lines exhibit faster outflow speed. There are two potential
explanations for this.
1. The source function for each of these lines may have a slightly
different radial dependence in an ionized pre-shock CSM, because of
radial gradients in CSM density and ionization level
<cit.>. If there is a velocity gradient in the CSM
(with faster velocities at smaller radii), this might account for why
higher ionization lines show faster outflow speeds. However, this
would require a very steep velocity gradient (increasing by a factor
of more than 2 from Hα to C iv) in a narrow radial zone.
In the models presented by <cit.>, the CSM velocity was
assumed to be constant with radius, but new models may be able to
quantify the velocity gradient that would be needed to explain the
observed profiles. The required velocity gradient is not only steep,
but decreases with radius (faster outflow at smaller radii). This is
the opposite of what is expected for a Hubble-like flow from a sudden burst of
pre-SN mass loss, and it is the opposite of an acceleration zone of a stellar wind that eventually reaches its terminal speed at a large radius. Instead, it would point to significant radiative
acceleration of the inner pre-shock CSM by the radiation from the shock
itself. If the CSM is very dense, a small mean free path could lead
to a sharp velocity gradient immediately ahead of the shock, and this
would be an interesting problem for models to quantify.
2. Another possibility is that the CSM is asymmetric. Imagine that
the SN blast wave hits CSM with a disk or torus geometry that has a
dropping density away from the equatorial plane. In the equatorial
plane, the forward shock will hit the densest CSM and will be
decelerated the most. Moving out of the plane, the shock will
encounter relatively less dense CSM, and hence, the forward shock will
decelerate less and will continue outward at a higher speed. At these
intermediate latitudes, the faster shock will yield hotter post-shock
gas, and so the CSM immediately ahead of the shock will be illuminated
by a harder radiation field from the shock. Thus, we might expect to
see higher ionization tracers (i.e. C iv) coming preferentially
from higher latitudes in the immediate pre-shock environment, and
lower ionization (Hα) emitted by CSM in the dense equatorial
plane. The different velocities at different ionization might then
arise because the less dense CSM out of the plane also has less
inertia, and will therefore experience more radiative acceleration.
This is a difficult scenario to investigate theoretically, because it
requires 3-D radiation hydrodynamics with line transfer. However, if
the pre-shock mean free path is small at these high densities, one
might address the question with a series of spherical calculations
with shocks running into a range of different CSM density.
So far we have only discussed the narrow profiles in our first epoch
on day 2.6; the bottom tracings in Figure <ref>
show that the situation changes markedly one day later on day
3.5. Namely, the stark differences in velocity in the three
different lines are mostly gone. While the red edge of the narrow
component of Hα still extends closer to zero, the width and
blueshifted centroid velocity are now much more similar for the three
lines. The essential change is that Hα has become broader and
more blueshifted, and He ii as well to a lesser degree, making
all three lines similar. The factor of 2 difference in velocity
between Hα and C iv is now gone. As above, there are two
potential explanations for this change:
1. We may be seeing direct evidence for the pre-shock radiative
acceleration of CSM. After more time has passed, the radiation field
has accelerated the rest of the slower CSM traced by Hα
emission so that it now shares the faster expansion speeds seen in
higher ionization gas.
2. Another possibility arises if the CSM is asymmetric. As noted
earlier, the slowest and densest CSM in the equatorial plane that
emitted the narrow peak of Hα would be at a smaller radius
because the blast wave in this direction gets decelerated the most.
As such, the dense and slow equatorial CSM at the narrow pinched waist
would be the first to be enveloped by the expanding SN photosphere.
In this scenario, the Hα line would appear broader simply
because the narrowest emission is hidden by the SN ejecta and
therefore removed from the observed line profile, not because that slow
gas is accelerated. This would be consistent with the fact that the
flux of the narrow component of Hα drops precipitously in this
first day.
One difference that may help discriminate between these two
possibilities concerns the relative density of the emitting CSM. In
the spherical case (option 1 for both epochs discussed above), the
narrower Hα emission comes from lower ionization zones at
larger radii, and therefore should have lower densities compared to
the He ii and C iv emitting zones. In the aspherical
second case, the narrowest Hα is emitted by the densest CSM at
the equator, while the higher speeds at higher ionization result from
lower densities at higher latitudes. Perhaps future modeling of the
spectrum can help quantify the physical conditions in the emitting
zones. Although it is tempting to ascribe the increasing Hα
speed from day 2.6 to day 3.5 to radiative acceleration of the CSM, it is
difficult to confirm this with available information because asymmetry
(which might be expected anyway) also provides a suitable explanation.
§.§ Disappearing narrow lines and CSM radius
Regardless of the details discussed above, the fact that the narrow
emission lines disappear after a few days — a defining
characteristic of this class of events — provides a straightforward
way to estimate the radial extent of the dense CSM. Combined with
empirical expansion speeds, this can inform the timescale for the pre-SN mass
loss by the progenitor. Whether the CSM is obliterated by the forward
shock or enveloped by the SN photosphere, the narrow lines should
disappear from the spectrum.
Narrow lines disappear after 1-2 days in our PEPSI
spectra, and the intermediate-width lines disappear after 3-4 days.
We assume that our first spectrum on May 21 was taken 2.6 days after
explosion, so this means that the narrow lines disappear in 3.6-4.6
days, and the intermediate-width components (except Hα)
disappear after 5.6-6.6 days. These timescales can be used to
constrain the properties of the CSM.
In these early phases, we adopt an expansion speed for the SN ejecta
photosphere of 10,000 km s^-1. The speed may be a bit slower as
time goes on, and the fastest ejecta are faster than this, but this
assumption is sufficient for a rough estimate. At this speed the
relevant radius corresponding to the disappearance of the narrow
components on day 3.6 is R = vt = (3-5)×10^14 cm or
about 20-30 AU. This is only 5-10 R_* for a typical RSG. The
corresponding timescale of the pre-SN mass ejection depends, of
course, on the expansion speed of the CSM; i.e. t_ preSN =
t_ obs× (v_ CSM / v_ SN). For CSM produced by
a normal RSG wind speed of 10-20 km s^-1, the CSM would have been
ejected 5-10 years before explosion. For a faster CSM expansion speed
corresponding to the observationally inferred expansion speed of 115
km s^-1 (see Fig <ref>), the pre-SN timescale for mass
ejection is more like 0.9-1.5 yr.
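Written out explicitly, the numbers quoted above follow from

R ≃ v_SN t_obs ≈ (10^9 cm s^-1)(3.6 d × 8.64×10^4 s d^-1) ≈ 3×10^14 cm ≈ 20 AU ,
t_preSN = R / v_CSM ≈ (3×10^14 cm) / (1.15×10^7 cm s^-1) ≈ 2.7×10^7 s ≈ 0.9 yr ,

with the upper ends of the quoted ranges obtained by substituting the later disappearance time (4.6 days) and the upper end of the radius estimate.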
Which one of these is correct returns us
to the ambiguous question of whether the CSM is aspherical and if it has
already been radiatively accelerated by the time of our first PEPSI
spectrum on day 2.6. A timescale of 5-10 yr, expected for a normal
slow RSG wind speed, does not match any expected timescale for late
nuclear burning, but it is similar to the observationally inferred
timescales for some SNe IIn <cit.>. We should not,
however, necessarily expect a phase of extreme pre-SN mass loss to
behave like a normal RSG wind. On the other hand, a pre-SN ejection
timescale of ∼1 yr, derived from the observed CSM speed, agrees
well with the expected time for Ne or O burning. This agreement may
favor instabilities in late Ne or O burning as the culprit for
triggering the extreme pre-SN mass loss for SN 2023ixf. However, recent studies suggest that wave driving on its own may be unlikely to drive severe mass loss, instead being more likely to inflate the star's envelope <cit.>. As proposed by <cit.>, however, this type of sudden pre-SN swelling may trigger severe and asymmetric mass loss in binary systems.
§.§ Intermediate width features: Fading away, not broadening
As noted above, the intermediate-width components of emission last
longer than the narrow lines, which fade after 1-2 days. This is
especially true of Hα, which persists for at least a week after explosion as a
strong, intermediate-width P Cygni line. The intermediate-width emission component of Hα disappears after another week or so, being gone by day 17.6 (gray spectrum in Fig. <ref>). Unlike Hα, the
intermediate-width components of the higher ionization lines
(He ii, C iv, N iv) do not develop any P Cygni
absorption features before they vanish after a few days. Why does this happen?
One reason for high-ionization lines to fade is recombination of the
gas. We noted above, however, that as the N iv and C iv lines fade and disappear, we do not see a corresponding increase
in strength of N iii and C iii lines. Also, although He i λ5876 (only seen as an intermediate-width component) does increase in strength as He ii fades, it never gets very strong, and it has disappeared again by day 17.7 (Fig. <ref>b). Thus, even though
we do see some evidence of recombination and cooling in the post-shock
CDS (from Hα P Cygni absorption and the increasing He i
emission), recombination is not the primary explanation for the
disappearance of the N iv and C iv lines.
Another reason why the intermediate-width emission from the post-shock
shell might fade would be if the SN ejecta that feed the reverse shock
are able to accelerate the shock front, essentially obliterating the
slower moving post shock gas as it is swept up to become part of the
fast SN ejecta moving at 5,000-10,000 km s^-1. In this case, we
would expect the intermediate-width lines to broaden from 1,000 to
5,000 km s^-1 or more as they fade. This is not what the
observational data show. Instead, the intermediate-width components
stay at about the same width or even become slightly narrower as they
fade away. Importantly, on days 7.6 and 8.4 when the high-ionization
lines have vanished, the P Cygni absorption seen in Hα and
He i still maintains the same slower speeds of 500-1,300 km
s^-1. This directly contradicts the idea that the post-shock shell
is getting faster. We therefore find it unlikely that the CSM interaction
signatures fade because the post shock shell is accelerated and
incorporated into the SN ejecta.
One last possibility arises if the CSM is significantly asymmetric, as
in a case where the CSM is primarily equatorial. As noted above, the
shock front that crashes into the densest material in the equator will
be decelerated by the CSM. However, in other directions with much
less dense CSM, the SN ejecta will expand unimpeded. Since the CSM
here is found at radii of 20-30 AU, whereas the SN photosphere will
eventually reach a radius of around 100 AU, the slower CSM
interaction regions in the equatorial zones can be engulfed by the SN
ejecta and hidden inside the SN photosphere. The opaque SN ejecta will wrap around the disk, if the disk is slow and thin enough. Even if the SN ejecta do not completely engulf the disk, it may be hidden from observers at a wide range of viewing angles. This scenario was
discussed in detail previously by <cit.>, and invoked as
the explanation for the bizarre behavior of iPTF14hls <cit.>.
In this scenario, the CSM interaction zone with its relatively slow
CDS may still be there, but its emission is blocked from our view or
completely thermalized by the surrounding optically thick SN ejecta.
There are observed cases where the CSM interaction persists while it
is hidden beneath the photosphere, and signatures of strong CSM
interaction reappear when the recombination photosphere recedes (i.e. after the plateau drops), as in
PTF11iqb, iPTF14hls, SN 1993J, and SN 1998S
<cit.>. On the other hand, if the
asymmetric CSM has a low-enough total mass, it may indeed get
obliterated and incorporated into the fast SN ejecta during the time
when it is hidden beneath the photosphere, yielding little or no
lingering CSM interaction signatures after the photosphere recedes.
It will be interesting to see what happens after SN 2023ixf fades from
its plateau in a few months.
Note that this scenario where the CSM interaction region is engulfed by the SN photosphere only works if the CSM is highly asymmetric. Of the various pre-SN ejection
mechanisms related to late-phase nuclear burning mentioned in the
introduction (wave driving, super-Eddington winds, etc.), only
pre-SN binary interaction triggered by envelope inflation <cit.>
is necessarily expected to produce strong asymmetry in the CSM. This may turn out to be an important clue to the pre-SN mass loss.
Overall, the observational data seem to favor a scenario where the
narrow lines fade and disappear mostly because the slow-moving CSM is
swept up by the shock and accelerated, or even occulted. The intermediate-width
components that are emitted by this shocked and accelerated CSM
disappear a few days later because they are engulfed by and hidden
inside the SN photosphere, and not because they are accelerated to
the same speed as the SN ejecta or because the ionized CSM
recombines. Engulfing the CSM interaction region rather than
accelerating to the same speed as the ejecta requires that the CSM is
asymmetric, as noted above, and probably indicates that we are observing the SN from some mid-latitude
direction that is offset from the equatorial plane. This, in turn, is also
consistent with the lack of any narrow P Cygni features in any of the
high resolution PEPSI spectra. This lack of narrow P Cygni absorption
suggests asymmetry in the CSM because the slow CSM is not seen in
absorption along our line of sight to the SN continuum photosphere,
even though it is clearly seen in emission. This is only possible if
the CSM is not spherical.
Although we argue that the CSM is asymmetric in the case of SN 2023ixf
based on the observational behavior of the narrow and
intermediate-width lines, this may not necessarily be the case for
all SNe II with fleeting CSM interaction. It would be interesting to
determine what fraction of these events show evidence of aspherical
mass ejection shortly before explosion, and how many seem consistent
with spherical CSM. This may help elucidate what role binary
interaction may play in ejecting the mass, or shaping the mass ejected
by some other mechanism.
§.§ Noteworthy things we do not see
Here we briefly comment on a few things that we do not see in the PEPSI
spectra of SN 2023ixf, but which have been seen in early spectra of
some other SNe II with fleeting signs of CSM interaction. These may
help us to understand how SN 2023ixf fits into the observed diversity
of this phenomenon.
1. Except for very weak and narrow components on day 2.6, we do not detect lines like the C iii and N iii blend
in the “WR bump” just to the blue side of He ii λ4686,
or strong emission from narrow He i that has been seen in other
early spectra of SN 1998S, PTF11iqb, 2013cu, and others
<cit.>.
2. We do not detect a broad emission feature near λ4600 at any epoch. This broadened feature has been seen in several SNe II-P with fleeting CSM interaction signatures <cit.>, and
is often attributed to broad blueshifted He ii
λ4686 emission from fast (5,000-10,000 km s^-1) SN ejecta crossing the reverse shock, or a blend of He ii
λ4686 with several other ionized features in the region.
3. Importantly, as noted above we see no evidence for narrow P Cygni
(or any) narrow absorption from unshocked CSM. While narrow absorption features can easily be lost in low-resolution spectra when they are seen next to a strong narrow emission feature, they are easily detected in echelle spectra <cit.>. The absence of this absorption in SN 2023ixf may indicate that its slow and dense CSM is not seen along our
line of sight to the continuum photosphere, requiring that the CSM
has a nonspherical geometry. This would allow it to be seen in
emission out of our line of sight, but not in absorption.
§ SUMMARY AND CONCLUSIONS
We present a series of high-resolution echelle spectra of the recent SN 2023ixf in M101. These provide an unprecedented record of the high-resolution emission-line evolution in a SN II with early signs of CSM interaction with an almost nightly cadence. These spectra reveal rapid evolution in the strength and profile shape of narrow and intermediate-width emission lines associated with CSM interaction. Here is a summary of the main observational results:
1. As in other SNe of this class, we detect strong narrow and intermediate-width emission from Hα and high-ionization lines such as He ii, C iv, and N iv. Unlike several other SNe of this class, however, SN 2023ixf does not show strong emission from lower-ionization species like He i or C iii/N iii, which can be very strong in these objects. These lines are seen, but they are very weak and limited in time during our observational window.
2. All narrow line components fade quickly in 1-2 days, and intermediate-width components of high-ionization lines linger for another 1-2 days before fading from the spectrum.
3. All narrow emission components show a pronounced blueshift in the earliest epochs. The blueshift is understood as resulting from a combination of light travel time effects and occultation of the far side of the CSM by the photosphere. However, the amount of blueshift and the width of the narrow component depends on both time and on the ionization level of the line. Higher ionization lines are broader and more blueshifted than lower ionization in our first epoch, and this difference with ionization level diminishes after a day with all lines showing roughly the same width and blueshift as the higher ionization species. This requires either acceleration of the innermost dense CSM, or asymmetric CSM.
4. The Hα wings in our first epoch are consistent with electron scattering wings (i.e. they are well fit by a symmetric Lorentzian shape, with a centroid that has a similar blueshift as the narrow component). However, this changes after 1-2 days. Hα and He i lines develop intermediate-width P Cygni absorption, requiring that the broadening of these lines after the first day or two is tracing kinematic expansion and is not due only to electron scattering. The P Cygni absorption indicates expansion speeds of 700-1,300 km s^-1, tracing CSM that has been swept up into the post-shock shell.
5. As the intermediate-width components fade, the observed velocities do not increase. The P Cygni absorption, in particular, remains steady at <1300 km s^-1. This requires that the CSM interaction signatures are not fading because the post-shock gas is getting accelerated and incorporated into the fast SN ejecta. Instead, the CSM interaction region is likely asymmetric and gets engulfed and hidden by the SN photosphere.
6. Although the narrow components are easily resolved in our echelle spectra, none of our spectra show narrow P Cyg absorption from dense pre-shock CSM along our line-of-sight to the continuum photosphere. This may require that the CSM is asymmetric.
7. The width of the narrow Hα component indicates a CSM expansion speed of about 115 km s^-1, and this is seen in our first epoch before the Hα appears to get accelerated to the same blueshift and width as the higher ionization lines. This expansion speed is 5-10 times faster than a normal RSG wind.
8. The disappearance of the CSM interaction signatures after a few days suggests that the CSM is confined to a relatively compact radius of 20-30 AU (or 10^14.7 cm). This radius, combined with its observed expansion speed, implies that the CSM was ejected roughly 1 yr before core collapse.
Altogether, we find several clues that the confined CSM of SN 2023ixf is asymmetric. We interpret the evolution of the line profiles as indicating that the asymmetric CSM interaction region is engulfed by the SN photosphere. While the narrow lines may weaken because the pre-shock gas is accelerated and incorporated into the post-shock shell, the resulting intermediate-width lines then fade because they are hidden from view behind the SN photosphere, not because the shocked shell is accelerated or because the gas recombines. While the timescale for creating SN 2023ixf's pre-SN CSM is about a year (which suggests an instability during Ne or O burning), the implied asymmetry in the CSM points to a scenario where pre-SN inflation during Ne or O burning instigates binary interaction that ejects mass into a disk or torus. Thus, CSM interaction may continue to occur as SN ejecta hit the engulfed CSM, but it may be hidden from our view. Depending on the mass of the CSM, CSM interaction signatures may reappear after SN 2023ixf drops from its plateau when the recombination photosphere recedes again.
For help with obtaining and reducing the spectra, we thank LBT staff members including Alex Becker, Jennifer Power, and director Joe
Shields. Some of these LBT/PEPSI spectra were obtained as part of a
pre-approved program called AZTEC (Arizona Transient Exploration and
Characterization), but some resulted from Director's Discretionary
Time and Engineering time, which allowed a nearly nightly cadence
during a critical time period. The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia.
Time domain research by D.J.S. and team is supported by NSF grants AST-1821987, 1813466, 1908972, & 2108032, and by the Heising-Simons Foundation under grant #20201864.
This publication was made possible through the support of an LSSTC Catalyst Fellowship to K.A.B., funded through Grant 62192 from the John Templeton Foundation to LSST Corporation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of LSSTC or the John Templeton Foundation.
LBT:PEPSI
|
http://arxiv.org/abs/2306.05034v1
|
20230608083718
|
Topological Superconducting States and Quasiparticle Transport on Kagome Lattice
|
[
"Zi-Qian Zhou",
"Weimin Wang",
"Zhi Wang",
"Dao-Xin Yao"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con",
"cond-mat.str-el"
] |
[These authors contributed equally to this work.]
School of Physics, Sun Yat-Sen University, Guangzhou 510275, China
[These authors contributed equally to this work.]
School of Physics, Sun Yat-Sen University, Guangzhou 510275, China
School of Physics, Sun Yat-Sen University, Guangzhou 510275, China
[Corresponding author:][email protected]
School of Physics, Sun Yat-Sen University, Guangzhou 510275, China
State Key Laboratory of Optoelectronic Materials and Technologies, Center for Neutron Science and Technology, Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Guangzhou 510275, China
The pairing symmetry of superconducting states is a critical topic in the realm of topological superconductivity. However, the pairing symmetry of the AV_3Sb_5 family, wherein A=K,Rb,Cs, remains indeterminate. To address this issue, we formulate an effective model on the kagome lattice to describe topological superconducting states featuring a chiral charge density wave. Through this model, we explore the topological phase diagrams and thermal Hall conductivity under various parameters, with and without spin-orbit coupling. Our analysis reveals that the disparities in thermal Hall conductivity curves between different pairing symmetries are protected by the topology resulting from the interplay of spin-orbit coupling and superconducting states. Remarkably, this theoretical prediction can potentially enable the differentiation of various superconducting pairing symmetries in materials via experimental measurements of thermal Hall conductivity curves.
Topological Superconducting States and Quasiparticle Transport on Kagome Lattice
Dao-Xin Yao
July 31, 2023
================================================================================
§ INTRODUCTION
Revealing specific physics through minimal lattice models is an essential task in modern condensed matter physics. The unique lattice characteristics of kagome materials make them important for studying exotic electronic properties and topologies, which include both topologically protected bands and flat bands <cit.>. When rotational and spin symmetries are broken, a non-trivial Z_2 invariant and gapless edge states emerge <cit.>. Furthermore, a higher-order topological insulator is realized on a kagome lattice <cit.>.
Recently, the family of AV_3Sb_5( A=K,Rb,Cs) was discovered to be the first example of quasi-two-dimensional kagome superconductors. These materials have been proven to exhibit charge density wave (CDW) order and superconducting (SC) properties <cit.>, and the respective characteristics and their coexistence have garnered significant attention <cit.>. Additionally, anomalous Hall effect (AHE), Magneto-Seebeck effect, and Nernst effect have been observed in these materials <cit.>.
AV_3Sb_5 is a quasi-two-dimensional material, whose crystal structure is shown in Fig.<ref> <cit.>. The material exhibits a CDW that can be classified into two types: an in-plane CDW and an out-of-plane CDW, the latter also known as c-axis modulation. The 2× 2 modulation of the CDW has been verified by several experiments <cit.>. Muon spin spectroscopy has detected a magnetic response of the chiral charge order, indicating time-reversal symmetry breaking (TRSB) <cit.>. This finding has been confirmed by other experiments <cit.>. Several theories have attempted to explain the origin of TRSB, with the chiral flux phase (CFP) being the most successful in carrying nontrivial topology and naturally explaining TRSB <cit.>. Additionally, a 1× 4 modulation emerges on the material's surface, as detected by scanning tunneling microscopy (STM) and hard X-ray diffraction (XRD) <cit.>. However, there is significant controversy over the c-axis modulation. While some experiments detected a 2×2×2 modulation <cit.>, others observed a 2×2×4 modulation <cit.>. An article reported a first-order-like phase transition at 60K from the 2×2×2 modulation to the 2×2×4 modulation <cit.>. Other experiments reported that the two modulations coexist, i.e. the 2×2×2 and 2×2×4 modulations appear simultaneously <cit.>.
The competition between SC and CDW is a fascinating topic. Multiple experiments have confirmed a two-dome structure in the superconducting phase diagram under pressure, indicating the coexisting but competing nature of CDW and SC <cit.>. CsV_3Sb_5 exhibits a different two-dome behavior from KV_3Sb_5 and RbV_3Sb_5. In CsV_3Sb_5, the second SC dome appears after the disappearance of the first one, while the second domes of KV_3Sb_5 and RbV_3Sb_5 appear before or at the disappearance of the first dome <cit.>. Such behavior suggests unconventional pairing states <cit.>. A strain experiment concluded that pressure is equivalent to strain along the c-axis <cit.>, with the same two-dome structure observed in both the strain and pressure experiments <cit.>. First-principles calculations indicate conventional pairing in the second dome and possibly unconventional pairing in the first one <cit.>.
The pairing symmetry of superconductivity in the AV_3Sb_5 family remains a highly controversial problem that has not been resolved yet. While most results indicate conventional s-wave pairing, some experiments have observed unconventional phenomena. The U-shape differential conductivity and the absence of in-gap states suggest s-wave pairing <cit.>. In s-wave pairing materials, only magnetic impurities can induce in-gap states, whereas sign-changing pairing states are also sensitive to non-magnetic impurities. The observation that only a magnetic Cr cluster induces an in-gap bound state therefore indicates s-wave pairing in AV_3Sb_5 <cit.>. The clear Hebel-Slichter coherence peak observed in nuclear magnetic resonance provides further solid evidence for s-wave pairing <cit.>. Furthermore, two-gap s-wave pairing fittings best explain the measurements of resistance, penetration depth, and superfluid density <cit.>. However, two independent differential conductivity experiments have detected a non-split zero-bias conductivity peak, which may be induced by p-wave pairing <cit.>. The thermal conductivity resembles that of the d-wave superconductor Tl-2201 <cit.>, with residual conductivity at 0 K. Additionally, some theories predict nodal s-wave, p-wave, and d-wave pairings in the AV_3Sb_5 family <cit.>.
Based on the properties of materials, we have investigated the topological superconducting states on a kagome lattice with chiral charge density wave and spin-orbit coupling. While there is a possibility of p-wave superconducting pairing in the materials AV_3Sb_5, we have not considered spin triplet pairing due to its low likelihood. Our study includes the calculation of Chern numbers and Berry curvatures in different parameter regions, taking into account s-wave, d+id-wave, and d-id-wave pairings, to reveal the topologies of the SC states. Furthermore, we have calculated the quasiparticle transport to provide a measurable value for distinguishing between different SC pairing symmetries.
The paper is organized as follows. Following this introduction, Section <ref> describes an effective model on a kagome lattice with chiral charge density wave and superconductivity. In Section <ref>, we present the method for calculating Berry curvature, Chern number, and thermal Hall conductivity. In Section <ref>, we present the phase diagrams and thermal Hall conductivity curves for the model without spin-orbit coupling (SOC) as mentioned above. In Section <ref>, we present the results of the phase diagrams and thermal Hall conductivity for the model with SOC, considering both the chemical potential μ=0 and μ=0.1. Section <ref> presents a discussion on how we can distinguish different pairing symmetries by analyzing the thermal Hall conductivity curves, and concludes the paper. Supplementary materials are provided in the appendices.
§ MODEL
We develop an effective model on a two-dimensional kagome lattice that incorporates chiral charge density wave (CDW), spin-orbit coupling (SOC), and superconductivity (SC). The primary goal of this model is to examine the topological properties of the system's superconducting states and the transport properties of quasiparticles in the presence of time-reversal symmetry breaking. Furthermore, we intend to propose a method for discriminating between different pairing symmetries based on our results.
Taking inspiration from the properties exhibited by AV_3Sb_5, we have constructed our model on a two-dimensional kagome lattice, consisting of three atoms in each unit cell of the basic lattice. When we consider the 2× 2 chiral CDW, the unit cell expands by a factor of four, while the Brillouin zone shrinks to one quarter of its original size. This results in the number of sublattice atoms becoming 12. Upon the introduction of a non-zero SOC, the spin symmetry is broken, thereby doubling the number of bands to 24.
We decompose the Hamiltonian into four parts: the nearest neighbor tight-binding part, the CDW part, the SOC part, and the SC part, as expressed by Eq.(<ref>). The first three terms combined are referred to as Ĥ_0.
Ĥ = Ĥ_TB+Ĥ_CDW+Ĥ_SOC+Ĥ_SC ,
The nearest neighbor tight-binding model for the kagome lattice is given by
Ĥ_TB = ∑_k,σ∑_α,α' (ℋ_TB)_α,α'c^†_kσ,αc_kσ,α'
= ∑_k ∑_α,α'[-μδ_α,α'-2tcos(k_l/2|ϵ_αα' l|)]c^†_kσ,αc_kσ,α' ,
where the sublattice indexes α,α'=A,B,C are extended to α,α'=1,2,⋯ ,11,12 with the inclusion of CDW. We only consider the case where α and α' represent the nearest neighbor sublattices. Spin indices are denoted by σ=↑, ↓, and the hopping t is chosen to be isotropic, implying that we are studying the low-energy state (Appendix.<ref>). For the sake of simplicity, we choose t=1 as energy unit throughout the paper, and μ represents the chemical potential.
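For illustration, a minimal numerical sketch (Python) of this nearest-neighbor term on the undistorted three-site kagome cell is given below; the lattice-vector convention and all variable names are our own assumptions, and the 12-site cell used with the 2× 2 CDW is obtained by repeating the same construction on the enlarged unit cell.

import numpy as np

t, mu = 1.0, 0.0                      # hopping (energy unit) and chemical potential
a1 = np.array([1.0, 0.0])             # primitive lattice vectors (lattice constant = 1)
a2 = np.array([0.5, np.sqrt(3) / 2])

def h_tb(k):
    # 3x3 nearest-neighbor kagome Bloch Hamiltonian in the symmetric (cosine) gauge
    cAB = -2 * t * np.cos(0.5 * k @ a1)
    cAC = -2 * t * np.cos(0.5 * k @ a2)
    cBC = -2 * t * np.cos(0.5 * k @ (a2 - a1))
    return np.array([[-mu, cAB, cAC],
                     [cAB, -mu, cBC],
                     [cAC, cBC, -mu]])

# band energies along a simple cut through the Gamma point
for kx in np.linspace(0.0, np.pi, 5):
    print(np.round(np.linalg.eigvalsh(h_tb(np.array([kx, 0.0]))), 3))

At the Γ point this reproduces the familiar kagome spectrum of one band at -4t and a doubly degenerate level at 2t.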
The 2×2 charge-order modulation results in an enlarged unit cell that is four times larger than the previous one, while the Brillouin zone shrinks to 1/4 of its original size (as shown in Fig.<ref>). Several theories have been proposed to explain the properties of the 2×2 CDW, including those presented in Refs.<cit.>. Among them, the CFP model (shown in Fig.<ref>) is the most convincing model for capturing the chiral CDW characteristic, which has been confirmed by muon spin spectroscopy measurements <cit.>. The CFP Hamiltonian can be expressed in real space as <cit.>
Ĥ_CDW = -i ξ∑_𝐑Δ_CFP(𝐑)· O(𝐑)+h.c. ,
where Δ_CFP(𝐑)_i=cos(𝐐_i·𝐑) and O(𝐑)_k=ϵ_ijkc_i^†c_j are three-dimensional vectors. The wave vectors 𝐐_i(i=a,b,c) are related to van Hove singularities at the three equivalent M points on the boundary of the Brillouin zone, as shown in Fig.<ref>.
For the 2×2 CDW modulation kagome lattice, there would be about 120 free parameters in the SOC term if no approximations were made. Even after considering time-reversal symmetry and point group symmetry, there might still be five free parameters left, which deviates from our original goal of constructing an effective model that is simple. To simplify the model, we adopt the Rashba model <cit.> and rewrite it into a lattice model with six-fold symmetry, given by
Ĥ_SOC(𝐫)=λ∑_<i,j>∑_<α,α'>c_iα^†(R_π/2𝐞_iα,jα'·σ) c_jα' ,
where c_iα^† = (c_iα,↑^†, c_iα,↓^†), λ represents the Rashba spin-orbit coupling strength, and 𝐞_iα,jα'=𝐞_iα-𝐞_jα' is the unit vector from site iα to site jα', which is a constant value when the unit cell indexes i,j are specially chosen. We consider the nearest neighbor tight-binding model with a periodic boundary condition, thus it is not necessary to involve the unit cell index. The vector 𝐞_iα,jα'≡𝐞_α,α'=𝐞_α-𝐞_α' is only dependent on the sublattice indexes. Hence, we can explicitly see that the Fourier transformation form of the SOC Hamiltonian in 𝐤-space is independent of the real space index (see Appendix.<ref>).
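For clarity (an explicit rewriting, not an additional assumption of the model), if R_π/2 denotes the counterclockwise in-plane rotation and 𝐞_iα,jα'=(e^x_αα', e^y_αα'), the spin structure on each bond reads

R_π/2𝐞_iα,jα'·σ = -e^y_αα' σ_x + e^x_αα' σ_y ,

so each nearest-neighbor bond contributes a spin-flip hopping fixed entirely by its in-plane direction.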
In order to simplify analysis and numerical calculations, a Fourier transform is often employed to convert the real-space Hamiltonian into momentum space, using the basis c_𝐤^† = (c_𝐤1,↑^†, c_𝐤2,↑^†⋯, c_𝐤12,↑^†, c_𝐤1,↓^†, c_𝐤2,↓^†,⋯ , c_𝐤12,↓^†). In this space, the Hamiltonian can be expressed as Ĥ_0=c_𝐤^†ℋ_0 c_𝐤, where
ℋ_0=
[ ℋ_TB+ℋ_CDW ℋ_SOC^↑↓; ℋ_SOC^↓↑ ℋ_TB+ℋ_CDW ] .
It should be noted that ℋ_TB+ℋ_CDW is identical for both spin-up and spin-down, as magnetism is not considered in this model. ℋ_SOC^↑↓ represents the SOC Hamiltonian with the basis c_𝐤α↑^† c_𝐤α'↓, and the naming convention for ℋ_SOC^↓↑ follows the same rule.
It is straightforward to verify that our model exhibits a six-fold rotation symmetry, denoted by C_6. Specifically, the six-fold rotation symmetry of ℋ_CDW has been demonstrated in a previous study <cit.>. To establish the existence of the six-fold rotation symmetry in ℋ_TB and ℋ_SOC, we need to demonstrate that their forms preserve the symmetry. The six-fold rotation symmetry of ℋ_TB is revealed by its form in Eq.(<ref>): rotating the real space is equivalent to exchanging the numbering rules while maintaining Eq.(<ref>). Furthermore, the six-fold symmetry of ℋ_SOC is guaranteed by the fact that σ can be treated as a series of constant matrices under spatial rotation transformations. Therefore, the rotations are again equivalent to exchanging the numbering rules in accordance with Eq.(<ref>).
After discussing the geometric and electric properties, we will now delve into the model of superconductivity. While the pairing symmetry of AV_3Sb_5 (A=K, Rb, Cs) has not yet been confirmed <cit.>, we can construct some possible SC pairing symmetries to gain insight into the transition properties of the SC states. Additionally, we would like to highlight an observable value that can potentially distinguish between different pairing symmetries in experiments.
The most probable SC pairing symmetry is s-wave pairing, also known as conventional SC. The corresponding Hamiltonian can be expressed as
Ĥ_s-wave= Δ/2∑_𝐤, αc_𝐤α,↑^† c_-𝐤α,↓^† + h.c.
- Δ/2∑_𝐤, αc_𝐤α,↓^† c_-𝐤α,↑^† + h.c. ,
where Δ represents the SC gap function, which is a constant for s-wave pairing. The negative sign in the second term arises from the anti-commutation relation of the Fermion creation and annihilation operators {c_𝐤,α^†,c_𝐤',α'^†} = {c_𝐤,α,c_𝐤',α'} = 0 (see Appendix.<ref>).
Another possible spin-singlet pairing is d-wave pairing, with angular momentum l=2 and even spatial wave function. Based on the irreducible representations of the finite subgroups of SO(3), there are two possible d-wave pairings, namely d_x^2-y^2-wave and d_xy-wave. When the momentum (k_x,k_y) rotates by π/2 to become (-k_y,k_x), the gap function Δ_d-wave changes sign to become -Δ_d-wave, and when (k_x,k_y)→(-k_x,-k_y), Δ_d-wave→Δ_d-wave. Thus, as 𝐤 rotates once in 𝐤-space, Δ_d-wave undergoes two periods.
On a kagome lattice, it is more convenient to consider a complex d-wave pairing or d+id-wave pairing. To construct a SC gap function using the tight-binding model, a d-wave pairing SC is transformed from real-space to 𝐤-space by assuming an extra phase when pairing in different directions <cit.>. As a result, the real-space d+id-wave SC can be determined and written in a Fourier transformation form as follows.
Ĥ_d+id-wave= Δ/2∑_𝐤, αe^i2θ_iα,jαe^i𝐤·𝐞_iα,jαc_𝐤α,↑^† c_-𝐤α,↓^† + h.c.
- Δ/2∑_𝐤, αe^i2θ_iα,jαe^i𝐤·𝐞_iα,jαc_𝐤α,↓^† c_-𝐤α,↑^† + h.c. ,
where θ_iα,jα is the angle between 𝐞_iα and 𝐞_jα, and it depends on the sublattice index α. This is independent of the unit cell indexes i,j due to the periodic boundary condition. Note that the gap function of d+id-wave pairing is given by Δ(𝐤) = Δ e^i2θ_iα,jαe^i𝐤·𝐞_iα,jα, which is an even function of 𝐤. It can be confirmed that under the transformation of 𝐤→ -𝐤, Δ(𝐤) remains unchanged. The sign-change between the first and second terms occurs for the same reason as the s-wave pairing. The phase of the SC gap function for the d+id-wave pairing can be represented by Fig.<ref>, where the blue lines represent ϕ=2θ=0, yellow lines represent ϕ=2π/3, and green lines represent ϕ=4π/3. It is important to note that the d+id-wave pairing here refers to a complex d-wave SC state in the sense of real space and atomic level, as opposed to a simple d-wave pairing.
The d-id-wave pairing state can be seen as the opposite SC pairing state of the d+id-wave pairing state when the normal state Hamiltonian Ĥ_0 is topologically trivial. However, when Ĥ_0 is topologically non-trivial, there is a significant difference between the d+id-wave and d-id-wave pairing states. Therefore, it is necessary to consider the d-id-wave SC pairing state, which can be obtained by transforming θ_iα,jα→-θ_iα,jα, resulting in the Hamiltonian expression
Ĥ_d-id-wave= Δ/2∑_𝐤, αe^-i2θ_iα,jαe^i𝐤·𝐞_iα,jαc_𝐤α,↑^† c_-𝐤α,↓^† + h.c.
- Δ/2∑_𝐤, αe^-i2θ_iα,jαe^i𝐤·𝐞_iα,jαc_𝐤α,↓^† c_-𝐤α,↑^† + h.c. ,
where the parameters in the d-id-wave pairing state Hamiltonian are defined in exactly the same way as in the d+id-wave pairing state.
In the customary approach, the Hamiltonian is expressed in Bogoliubov-de Gennes (BdG) form in the Nambu representation, given by
(c_𝐤^†, c_-𝐤)_r=
(c_𝐤1,↑^†,⋯,c_𝐤12,↑^†,c_𝐤1,↓^†,⋯,c_𝐤12,↓^†,
c_-𝐤1,↑,⋯,c_-𝐤12,↑,c_-𝐤1,↓,⋯,c_-𝐤12,↓) ,
where the subscript r denotes the rearrangement of the basis. The SC Hamiltonian in the Nambu representation can be written as
Ĥ_SC = (c_𝐤^†, c_-𝐤)_r
[ 0 ℋ_SC; ℋ_SC^† 0 ][ c_𝐤; c_-𝐤^† ]_r .
Thus, the entire Hamiltonian can be expressed as
ℋ =
[ ℋ_0(𝐤) ℋ_SC; ℋ_SC^† - ℋ_0^*(-𝐤) ] .
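A minimal Python sketch of how this BdG matrix can be assembled and diagonalized numerically is given below; the function names and the random 24-band stand-in for ℋ_0 are our own illustrative assumptions, with the actual blocks to be built from the model terms defined above.

import numpy as np

def h_bdg(h0_k, h0_mk, delta_k):
    # BdG Hamiltonian [[H0(k), Delta(k)], [Delta(k)^dagger, -H0(-k)^*]]
    top = np.hstack([h0_k, delta_k])
    bottom = np.hstack([delta_k.conj().T, -h0_mk.conj()])
    return np.vstack([top, bottom])

# toy example: random Hermitian 24x24 normal-state block and an s-wave-like
# pairing block with the +/-Delta sign structure of the singlet term
rng = np.random.default_rng(0)
m = rng.normal(size=(24, 24)) + 1j * rng.normal(size=(24, 24))
h0 = (m + m.conj().T) / 2
delta = 0.03 * np.block([[np.zeros((12, 12)), np.eye(12)],
                         [-np.eye(12), np.zeros((12, 12))]])
E = np.linalg.eigvalsh(h_bdg(h0, h0, delta))
print(np.allclose(np.sort(E), np.sort(-E)))   # spectrum comes in +/-E pairs here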
§ METHOD
In this article, we investigate the topological properties and quasiparticle transport of superconducting pairing states with chiral CDW and SOC on a kagome lattice. We aim to distinguish different superconducting pairing symmetries by comparing the thermal Hall conductivity curves of different SC pairings, and we argue that the differences can be attributed to topology.
The Z invariant, or the Chern number, serves as a good topological number for systems with particle-hole symmetry in the absence of time-reversal symmetry. The presence of the CFP term breaks time-reversal symmetry, and as the Hamiltonian is in a BdG form, the system possesses a natural particle-hole symmetry. We explore three different types of superconducting pairing symmetries: s-wave, d+id-wave, and d-id-wave pairings. When we apply the transformation 𝐤→ -𝐤^*, the gap functions of spin-singlet pairings switch to their negative counterparts. Therefore, the spin-singlet pairings belong to D class, which can be characterized by a Z invariant in a two-dimensional system <cit.>. In our model, the normal state Hamiltonian is topologically non-trivial, which makes the situation particularly intriguing.
In addition to the topological analysis, we also investigate quasiparticle transport, specifically the thermal Hall effect. In the semiclassical theory, the low-temperature Hall conductivity is mainly influenced by the Berry curvature near the Fermi surface <cit.>. Therefore, we can utilize the calculation of the Berry curvature to understand the intrinsic thermal Hall conductivity and attempt to connect it with the topological number of the system, the Chern number, through the Berry curvature.
Berry curvature, which is one of the most important topological representations, is derived from the Berry phase γ_n = (1/2)∫_𝒮dR^μdR^νΩ_μν^n(𝐑). The Berry phase is an observable quantity that is also known as a geometric phase, so it must be gauge-invariant module 2π. Therefore, the Berry curvature must be able to be written in a gauge-invariant form <cit.>
Ω_μν^n = i ∑_n'≠ n⟨n|∂ H/∂ R^μ|n'⟩⟨n'|∂ H/∂ R^ν|n⟩-(μ↔ν)/(ϵ_n-ϵ_n')^2 ,
where |n⟩,|n'⟩ are both the eigenstates of the Hamiltonian, and ϵ_n,ϵ_n' are the eigenvalues of the Hamiltonain. The Chern number is calculated by integrating the Berry curvature divided by 2π,
C^n=1/2π∫_BZΩ_k_x,k_y^n(𝐤)d^2𝐤 ,
where C^n represents the Chern number of the n-th band, BZ indicates that the integral is taken over the Brillouin zone, and Ω_k_x,k_y^n(𝐤) represents the Berry curvature with respect to the two-dimensional momentum space.
When energy degeneracy occurs, the gauge-invariant form of Berry curvature is no longer well-defined, rendering C^n ill-defined as well. However, we can still describe the topological properties of superconducting states through the Chern number C=∑_n∈ occC^n. To achieve this, we introduce the pseudo Berry curvature as follows
Ω_μν^*n = i ∑_n'∉ occ⟨n|∂ H/∂ R^μ|n'⟩⟨n'|∂ H/∂ R^ν|n⟩-(μ↔ν)/(ϵ_n-ϵ_n')^2 ,
where n∈ occ and n'∉ occ. Note that Ω_μν^*n is not the Berry curvature, and its integral divided by 2π is not the Chern number for the n-th band. Nevertheless, it can be proven that the Chern number is
C = 1/2π∑_n∈ occ∫_BZΩ_k_x,k_y^*n(𝐤)d^2𝐤 .
where BZ represents the integration over the Brillouin zone, and Ω_k_x,k_y^*n(𝐤) represents the pseudo Berry curvature with respect to the 2-dimensional momentum space. (See Appendix.<ref> for a detailed proof.)
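A minimal numerical sketch (Python) of Eqs.(<ref>)-(<ref>) is given below; the finite-difference step, the grid size, and the parametrization of a primitive reciprocal cell are simplifying choices of the sketch, not the settings used for the figures.

import numpy as np

def chern_number(h_of_k, n_occ, b1, b2, nk=100, dk=1e-5):
    # Chern number of the lowest n_occ bands: the pseudo Berry curvature is
    # summed over occupied bands and integrated over one primitive
    # reciprocal cell spanned by b1 and b2.
    cell_area = abs(b1[0] * b2[1] - b1[1] * b2[0])
    total = 0.0
    for u in np.arange(nk) / nk:
        for v in np.arange(nk) / nk:
            k = u * b1 + v * b2
            e, w = np.linalg.eigh(h_of_k(k))
            dhx = (h_of_k(k + np.array([dk, 0.0])) - h_of_k(k - np.array([dk, 0.0]))) / (2 * dk)
            dhy = (h_of_k(k + np.array([0.0, dk])) - h_of_k(k - np.array([0.0, dk]))) / (2 * dk)
            vx = w.conj().T @ dhx @ w          # matrix elements <n| dH/dk_x |n'>
            vy = w.conj().T @ dhy @ w
            for n in range(n_occ):
                for m in range(n_occ, e.size):
                    # gauge-invariant form: -2 Im(vx_nm vy_mn) / (e_n - e_m)^2
                    total += -2.0 * np.imag(vx[n, m] * vy[m, n]) / (e[n] - e[m]) ** 2
    return total * cell_area / nk**2 / (2.0 * np.pi)

For a fully gapped BdG Hamiltonian, n_occ is taken as half the matrix dimension, i.e. all negative-energy quasiparticle bands.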
After calculating the Berry curvature, we can determine the quasiparticle transport, particularly the thermal Hall effect which we investigate in this article.
The thermal Hall effect is a crucial observable effect that is possible to distinguish different superconducting pairings. The intrinsic anomalous Hall effect (AHE) is governed by the Berry curvature <cit.>, which can also be obtained through semiclassical theory that considers wave-packet dynamics <cit.>. However, neither quantum nor semiclassical theory involves the superconductivity that relies on the effective attraction between electrons. Therefore, we employ the semiclassical theory for superconductors <cit.> to derive the thermal Hall conductivity given by
κ_xy^q = 2/T∫d^2 k/(2π)^2(Ω_𝐤)_xy∫^∞_E_𝐤f'(η , T) η^2dη ,
where we set k_B=1 for convenience. The factor 2 comes from the spin contribution. Ω_𝐤 represents the Berry curvature, with the subscript xy indicating the plane in which the Hall conductivity is measured. f(E,T) is the Fermi-Dirac distribution and f' is its derivative with respect to E. The zero-temperature thermal Hall conductivity can be written as κ_0 = π C_1k_B^2T/6ħ, where C_1 is the first Chern number of the system. However, the low-temperature Hall conductivity depends on both the Berry curvature and the energy band structure. Thus, to understand the low-temperature results, we must combine these two aspects.
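The following Python sketch evaluates Eq.(<ref>) from a precomputed grid of quasiparticle energies and Berry curvatures; the energy cutoff, the trapezoidal quadrature, and the overflow clipping are our own numerical choices, not those used to produce the curves shown below.

import numpy as np

def fermi_prime(E, T):
    # derivative of the Fermi-Dirac distribution with respect to energy (k_B = 1)
    x = np.clip(E / T, -60.0, 60.0)
    return -np.exp(x) / (T * (np.exp(x) + 1.0) ** 2)

def kappa_xy(energies, curvatures, T, cell_area, nk, n_eta=400):
    # kappa_xy = (2/T) Int d^2k/(2 pi)^2 Omega_k Int_{E_k}^inf f'(eta,T) eta^2 d eta
    eta_max = 30.0 * T + np.max(np.abs(energies))
    total = 0.0
    for E, Om in zip(np.ravel(energies), np.ravel(curvatures)):
        eta = np.linspace(E, eta_max, n_eta)
        total += Om * np.trapz(fermi_prime(eta, T) * eta ** 2, eta)
    dA = cell_area / nk ** 2                  # k-space area element of the sampling grid
    return 2.0 / T * total * dA / (2.0 * np.pi) ** 2

The zero-temperature limit quoted above, κ_0 = π C_1k_B^2T/6ħ, provides a convenient consistency check on the numerical output.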
§ NON-SOC SITUATION
We now analyze the topological phase diagrams and quasiparticle transport of the superconducting states on the kagome lattice with chiral charge density wave in the absence of SOC. Without SOC, the system retains spin symmetry, and the Hamiltonian can be written as
ℋ=
[ ℋ_TB(𝐤)+ℋ_CDW(𝐤) 0 0 Δ; 0 ℋ_TB(𝐤)+ℋ_CDW(𝐤) -Δ 0; 0 -Δ^† -[ℋ_TB(-𝐤)+ℋ_CDW(-𝐤)]^* 0; Δ^† 0 0 -[ℋ_TB(-𝐤)+ℋ_CDW(-𝐤)]^* ],
where Δ represents the superconducting block of the Hamiltonian. Without SOC, the spin degrees of freedom decouple, and the Hamiltonian can be reduced to half of its original dimension. One of the reduced matrices comes from the first and fourth block rows, and the other from the second and third block rows. It is easily shown that the two matrices are identical by performing a unitary transformation with σ_z. The reduced Hamiltonian can be written as
ℋ=
[ ℋ_TB(𝐤)+ℋ_CDW(𝐤) Δ; Δ^† -[ℋ_TB(-𝐤)+ℋ_CDW(-𝐤)]^* ],
which is expressed using the basis of (C_𝐤1^†,⋯,C_𝐤12^†,C_-𝐤1,⋯,C_-𝐤12).
The topological phase diagram was calculated using Eq.(<ref>), as shown in Fig.<ref>. Although the phase diagram was calculated for the parameters (t,ξ,Δ)=(1,0.3,0.03), we verified that the phase diagram is valid for Δ∈[0.01,0.03], indicating that the superconducting gap does not affect the system's topological properties within this range.
For μ∈ [-0.1,0], the Chern number for all pairing symmetries is 2. This is because the system becomes an insulator for μ∈ [-0.1,0], and the sum of the Chern numbers of all occupied bands equals 2. Since the Fermi level already lies in a gap, the superconducting pairing symmetry does not contribute to the system's topology, and hence all superconducting pairing symmetries are topologically trivial. However, for μ∈(0,0.1], the system becomes a metal, and the Chern number for s-wave pairing remains 2, while the Chern number for d+id-wave pairing increases by 2 to become 4, and the Chern number for d-id-wave pairing decreases by 2 to become 0. S-wave superconducting pairing is topologically trivial and does not affect the system's Chern number. On the other hand, complex d-wave pairing contributes a Chern number of magnitude 2: d+id-wave and d-id-wave possess opposite angular momenta, with the former contributing +2 and the latter -2.
Strictly speaking, there are no topological superconducting terms when μ∈[-0.1,0]. That is, when the normal states of the system are insulating, there are no differences in topology between s-wave, d+id-wave, and d-id-wave. Our objective is to calculate the thermal Hall conductivity curves, which depend on the system's topological properties, to distinguish different SC pairings. If there are no differences between the three pairing symmetries, we cannot differentiate them. Hence, we study the parameter regions where μ∈(0,0.1], and we take μ=0.1 as an example.
The Fermi surface of the model at parameters (ξ,Δ,μ)=(0.3,0.03,0.1) is shown in Fig.<ref>, mainly distributed around the Γ and M points. The Berry curvature for the three different pairing symmetry superconducting states is shown in Fig.<ref>-<ref>; its integral over the Brillouin zone, divided by 2π, gives the Chern number that characterizes the topology of the whole system. The distribution of Berry curvature of the three different pairing symmetry states matches that of the Fermi surface because the superconducting terms gap the Fermi surface, making the system fully gapped and topologically non-trivial, with a genuinely topological superconducting contribution. Focusing on the differences between the three pairing states, we can see that the main difference is the Berry curvature on the ring-like region around the Γ point. The Berry curvature on the ring-like region is positive for both s-wave and d+id-wave pairings, but that of s-wave is significantly smaller than that of d+id-wave, while the Berry curvature on the ring-like region of d-id-wave pairing is negative. These results make sense because complex d-wave superconducting states contribute a ring-like region of Berry curvature around the Γ point on the hexagonal lattice <cit.>.
As Eq.(<ref>) states, the thermal Hall conductivity depends on the Berry curvature. Due to the Fermi-Dirac distribution, the low-temperature portion of the thermal Hall conductivity curve is primarily determined by the highest occupied band, which is the band closest to zero energy. Therefore, we calculated the Berry curvatures of the highest occupied band (shown in Fig.<ref>-<ref>) and the thermal Hall conductivity curves (shown in Fig.<ref>). The qualitative differences in the curves make it easy to distinguish between different pairing symmetry states, which can be inferred from the Berry curvatures of the highest band. The thermal Hall conductivity of the s-wave pairing state, represented by the red curve, is nearly zero near 0 K and gradually increases with temperature, while that of the d+id-wave pairing state, represented by the blue curve, increases rapidly. On the other hand, the thermal Hall conductivity of the d-id-wave pairing state, represented by the green curve, decreases to a negative value as the temperature increases. Although all three types of SC pairing symmetry states have non-zero Berry curvature at the K and M points, they are almost identical at these points. The primary difference between these three states is the ring-like region around the Γ point, which we have explained arises from different topological superconducting states.
Therefore, different superconducting states can be qualitatively distinguished by examining the thermal Hall conductivity curves since different pairing symmetry SC states contribute different topological properties, or more precisely, different Berry curvature in the ring-like region around the Γ point.
§ WITH-SOC SITUATION
In this section, we investigate the impact of SOC on a model of superconducting states on a kagome lattice with chiral CDW, where the spin symmetry is absent. We analyze two scenarios: (i) the normal state is a metal, such that a small SOC can split the Fermi surface, resulting in a non-zero Chern number and providing different topological superconducting states; (ii) the normal state is an insulator, requiring a large SOC to break the spin symmetry strongly. In this case, a Fermi surface will emerge, and the superconducting terms can gap the new Fermi surface and produce topological superconducting states.
When λ≠ 0, the Hamiltonian can no longer be reduced to a block diagonal form, so we must consider the complete Hamiltonian. It is important to note that Fig.<ref> depicts the phase diagram for the reduced Hamiltonian in Eq.(<ref>). However, we must consider Eq.(<ref>) as a part of Eq.(<ref>), so the topological phase diagrams of the complete Hamiltonian are obtained simply by doubling the Chern number in Fig.<ref>.
In the first scenario with SOC, we gradually increase the strength coefficient λ∈ [0,0.1] in Eq.(<ref>); the Hamiltonian is given by
ℋ=
[ ℋ_TB(𝐤)+ℋ_CDW(𝐤) ℋ_SOC^↑↓(𝐤) 0 Δ; ℋ_SOC^↓↑(𝐤) ℋ_TB(𝐤)+ℋ_CDW(𝐤) -Δ 0; 0 -Δ^† -[ℋ_TB(-𝐤)+ℋ_CDW(-𝐤)]^* -ℋ_SOC^↑↓(-𝐤)^*; Δ^† 0 -ℋ_SOC^↓↑(-𝐤)^* -[ℋ_TB(-𝐤)+ℋ_CDW(-𝐤)]^* ],
where ℋ_SOC(𝐤) depends on the strength coefficient λ. We choose a small λ to observe how SOC breaks the spin symmetry and splits the Fermi surface shown in Fig.<ref> more clearly.
We have computed phase diagrams at the fixed parameter values (Δ,μ)=(0.03,0.1), which are presented in Fig.<ref>-<ref>. To identify the topological phase transition boundaries, we used the fact that a gap closing always accompanies a topological phase transition. The phase transition boundary appears as a diagonal line running from the bottom left to the top right, indicating that it is more challenging to gap the Fermi surface for larger values of ξ. Furthermore, we note that the phase transition is primarily driven by the SOC term. By increasing the value of λ from 0 to 0.1, the SOC term re-gaps the Fermi surface and reduces the Chern number by 3. For our analysis of the impact of the SOC term on the topological properties and thermal Hall conductivity of the system, we chose the parameter values (ξ,λ,Δ,μ)=(0.3,0.1,0.03,0.1).
In Fig.<ref> and Fig.<ref>-<ref>, we present the Fermi surface and the Berry curvatures, summed over the occupied bands and related to the Chern numbers, for the three pairing symmetry SC states. In the ring-like region around the Γ point, the SOC term splits both the Fermi surface and the Berry curvature into two pieces. Furthermore, we observe changes in the shapes of the Berry curvature around the M points.
The thermal Hall conductivity curves for three different SC states with distinct pairing symmetries are displayed in Fig.<ref>. It is observed that, despite the variation of λ from 0 to 0.1, the curves exhibit little change. This is due to two factors that are consistent with the scenario without SOC. First, the changes in the Chern numbers of all three SC states with distinct pairing symmetries are identical, which implies that the SOC term's impact on the topological properties is uniform. Second, the Berry curvature, which is crucial for determining the thermal Hall conductivity, is illustrated in Fig.<ref>-<ref>. Although the pattern of Berry curvature in the presence of SOC is more intricate than that without it, the principal differences between the three SC states are found in the ring-like region around the Γ point. To be specific, there are only two rings that differ in the ring-like region of the Berry curvature shown in Fig.<ref>-<ref> because the Fermi surface (Fig.<ref>) is only divided into two pieces in this region. The rings of s-wave pairing are fainter, which means that the Berry curvature here is small, whereas the rings of d+id-wave pairing are significantly thicker. In contrast, the Berry curvature in the rings of d-id-wave pairing is negative.
Consequently, the differences in the thermal Hall conductivity curves arise from the distinct pairing symmetry SC states rather than from the SOC term, which is the same as the situation in the absence of SOC. It is noteworthy that the strength of the SOC term is relatively small compared to the strength of the CDW and SC terms and can therefore be treated as a perturbation. Thus, in this case, the differences in the thermal Hall conductivity curves are protected by topology provided by the topological SC state.
Up to this point, we have focused on the scenario where μ=0.1, in which case the system is a superconductor even if λ=0, due to the fact that the normal state without SOC term Ĥ_TB+Ĥ_CDW is metallic. Therefore, the breaking of spin symmetry in a superconducting state by the SOC term can be considered as a perturbation.
In contrast, when μ≤0 in the absence of SOC, the normal state is an insulator, and thus it cannot be considered a superconducting state. However, if we examine the situation where λ is large enough to shift at least one band across the Fermi energy level, the system can undergo a phase transition and become a metallic state. In such a scenario, topological superconducting states may exist as well. We wish to investigate whether it is possible to differentiate between various pairing symmetry SC states based on their thermal Hall conductivity curves in this context. For the sake of convenience, we will assume μ=0 as an illustrative example.
The phase diagrams in Fig.<ref> illustrate the s-wave, d+id-wave, and d-id-wave pairing symmetry SC states. Our focus is on understanding how the Chern number changes with the parameters ξ and λ, and how different Chern numbers control the shapes of the thermal Hall conductivity curves. Specifically, we are interested in the regions with a well-defined Chern number, where there is a complete Fermi surface and well-defined quasiparticle transport. Thus, we do not calculate the Chern number for the small transition region (highlighted in gray in the phase diagrams), and we regard the topological number as ill-defined there.
The first boundary is a diagonal line running from around (ξ,λ)=(0.32,0.35) to (0.35,0.37) for d+id-wave and d-id-wave pairings (Fig.<ref>,<ref>). Under this boundary, the Chern number equals 4 for complex d-wave pairing symmetry states, indicating that there is no superconducting state in this parameter zone. Note that we do not observe any points whose Chern number is not well-defined, and therefore we believe that there is no phase transition at this boundary for s-wave. The second boundary is a diagonal line running from about (ξ,λ)=(0.25,0.38) to (0.34,0.45) exclusively for s-wave pairing (Fig.<ref>). Below this boundary, the Chern number equals 4, and above it, the Chern number equals 10. The third boundary is a diagonal line running from (ξ,λ)=(0.25,0.41) to (0.32,0.45) exclusively for d-id-wave pairing (Fig.<ref>), and it does not exist in the d+id-wave pairing state. The upper left corner in Fig.<ref> falls in the region C=2, and the small region beside the first boundary in Fig.<ref> falls in the region C=6. We did not analyze them because the phase space areas they cover are too small. Our focus is on regions with well-defined Chern numbers and larger phase space areas. Such regions are more representative and ensure the universality of our conclusions.
We have observed that the topological phase diagrams in the case of μ=0.1 exhibit some notable differences when compared to those for μ=0. Firstly, the positions of boundaries are dissimilar for μ=0, whereas they are quite similar for μ=0.1. Secondly, the variation of Chern number is not uniform for μ=0, whereas it is uniform for μ=0.1. To understand these differences, we can examine the Berry curvature and explore the nature of interactions that might account for the dissimilarities in both the topology and quasiparticle transport. In essence, we aim to investigate the topological origin of the differences in thermal Hall conductivity curves for the three pairing symmetry states and the reason behind the contrast in the phase diagrams for s-wave and complex d-wave pairing symmetry states.
We will focus on two regions: (i) the region in which the Chern number for s-wave equals 4, d+id-wave equals 0, and d-id-wave equals 2, and (ii) the region where the Chern number for s-wave equals 10, d+id-wave equals 0, and d-id-wave equals -4. We do not consider the situation where the Chern number for all three pairing symmetry states is 4 because it implies they are all topologically trivial. Furthermore, we do not analyze the situation where the Chern number for s-wave equals 10, d+id-wave equals 0, and d-id-wave equals 2 separately, as the differences can be inferred by examining the aforementioned regions.
In the first case, we consider the parameters (ξ,λ,Δ,μ)=(0.3,0.4,0.03,0), resulting in Chern numbers of 4, 0, and 2 for the s-wave, d+id-wave, and d-id-wave pairing SC states, respectively. As shown in Fig.<ref>, the Fermi surface is notably different from that of the situation when μ=0.1. Although the Fermi surface in the ring-like region also splits into two pieces, the Fermi surface around M points moves to K points, which is a distinguishing feature.
We cannot directly compare our current results with previous findings; therefore, we further verified the symmetry of the system. It has been confirmed that the Fermi surface displays a six-fold rotational symmetry, consistent with the D_6h^* symmetry of the CDW term <cit.> and at least C_6h symmetry of the SOC term in our model. Furthermore, we explored the topological superconductivity of spin singlet pairing at the atomic level, where the energy gap function of the s-wave pairing possesses the same D_6h symmetry as the lattice, and the modulus of the energy gap function of the d-wave pairing has at least C_6h symmetry. Hence, we can infer that the Berry curvature of all energy bands also exhibits at least a six-fold rotational symmetry. This confirms the precision of our calculations.
Let us examine the Berry curvature, whose integral equals the Chern number, shown in Fig.<ref>-<ref>. Two main characteristics emerge: (i) The inner ring in the ring-like region is affected by the superconducting pairing symmetry. The inner ring of the s-wave pairing state is the largest, followed by the d+id-wave pairing state, while the d-id-wave pairing state has a negative inner ring. (ii) The outer ring of the ring-like region is influenced by the interaction between SOC and SC term. Fragments on the ring facing M points of s-wave pairing states are negative, while those of d+id-wave and d-id-wave pairing states are positive. Furthermore, the number of fragments on outer ring for s-wave and d+id-wave pairing states is 12, while that for d-id-wave is double.
Moving on to the thermal Hall conductivity curves in Fig.<ref>, we observe significant differences from the situation when μ=0.1. The curve for the s-wave pairing state goes negative in the low-temperature region and quickly becomes positive again. The curve for the d+id-wave pairing state remains flat, while that for the d-id-wave pairing state goes positive in the low-temperature region. Combined with the above, we conclude that the low-temperature behavior of the thermal Hall conductivity curves depends on the outer ring of the ring-like region around Γ point, which is determined by the topology contributed by the interaction of SOC and SC. Although there are other differences in the Berry curvature of the 25th bands (Fig.<ref>-<ref>), they are either too small to contribute to the curve shape or far away from the Fermi surface.
Let's consider the second case, where the parameter values of (ξ, λ, Δ, μ) = (0.27, 0.43, 0.03, 0) are used as an example. In this case, the Chern numbers for the s-wave, d+id-wave, and d-id-wave pairing SC states are 10, 0, and -4, respectively. The differences in the Fermi surfaces between (ξ, λ, Δ, μ) = (0.3, 0.4, 0.03, 0) (Fig.<ref>) and (ξ, λ, Δ, μ) = (0.27, 0.43, 0.03, 0) (Fig.<ref>) are minimal, with only a slight expansion of the outer ring and a slight shrinkage of the inner ring.
The summation of the Berry curvature of the occupied bands is shown in Fig.<ref>-<ref>. The change in the Chern number of the s-wave pairing symmetry SC state from 4 to 10, a large leap, is due to the change from negative to positive on the outer ring. The change in the Chern number of the d-id-wave pairing from 2 to -4 is due to some of the positive segments on the outer ring becoming negative. Consequently, the topological phase transitions arise from the outer ring of the ring-like region.
The thermal Hall conductivity is shown in Fig.<ref>. Examining the Berry curvature shown in Fig.<ref>-<ref>, we observe that they are almost identical except for the region around the Fermi surface. The curve for s-wave pairing becomes positive, the curve for d-id-wave pairing becomes negative, and the curve for d+id-wave pairing remains almost flat. All of these differences arise from the outer ring of the ring-like region, which is the origin of the topological properties at this parameter.
Hence, it can be concluded that, for chemical potential μ=0, the shapes of the thermal Hall conductivity curves are determined by the outer ring of the ring-like region of the Berry curvature. This outer ring is closely linked to the topology that arises from the interplay between the SOC and SC terms.
§ DISCUSSION AND CONCLUSION
Based on the AV_3Sb_5 system, where A is either K, Rb, or Cs, we have developed a model on the kagome lattice to describe topological superconducting states characterized by chiral charge density waves, and we have predicted their quasiparticle transport. We have considered different superconducting pairing symmetry states, namely s-wave, d+id-wave, and d-id-wave pairings, and we aim to differentiate these states based on a measurable quantity, the thermal Hall conductivity.
In the absence of spin-orbit coupling, the phase diagrams of the superconducting states are divided into two regions. For μ∈ [-0.1,0], the normal states are insulators, and thus topological superconducting states do not exist. For μ∈(0,0.1], topological superconducting states exist, and the thermal Hall conductivity curves of different superconducting pairing symmetry states are qualitatively distinct. The Chern number contributed by the s-wave pairing state's SC term is 0, whereas that contributed by the d±id-wave pairing states is ±2. Notably, the primary difference in the Chern number arises from the ring-like region around the Γ point of the Berry curvature, which also determines the qualitative difference among the three superconducting pairing symmetry states. Hence, the thermal Hall conductivity curves of the three superconducting pairing symmetry states are qualitatively distinct and topologically protected in the absence of SOC.
In the presence of spin-orbit coupling, our analysis can be divided into two classical scenarios. First, we add an SOC term as a perturbation to a topological superconducting state (μ=0.1). Second, we apply a large SOC term to drive the system into an insulator-metal phase transition, followed by adding an SC term (μ=0).
At μ=0.1, as the strength of the SOC term increases, the system undergoes a topological phase transition, and the SOC contribution to the Chern number equals -3, which primarily arises from the ring-like region of the Berry curvature. Thus, the qualitative differences in the thermal Hall conductivity curves are topologically protected by the contribution of the SC term, since the SOC term contributes the same Chern number for all three superconducting pairing symmetry states.
At μ=0, varying the strength ratio of the SOC and CDW leads to topological phase transitions in our parameter range. Notably, the topological phase transition boundaries for the s-wave, d+id-wave, and d-id-wave pairing symmetry SC states are different, owing to the complex interaction among the CDW, SOC, and SC terms. We have considered two parameters as examples and concluded that the differences in the thermal Hall conductivity curves arise from the outer ring of the ring-like region of the Berry curvature. Therefore, the qualitative differences in the thermal Hall conductivity are also topologically protected by the interaction between SOC and SC terms of the system.
In conclusion, we have analyzed the topology of various superconducting states on the kagome lattice with a chiral charge density wave and calculated their quasiparticle transport. Owing to the topological protection of the system, the thermal Hall conductivity curves can be used to distinguish different superconducting pairing symmetry states in such systems.
We would like to thank Fan Yang, Zhongbo Yan, Zheng-Yang Zhuang, and Shanbo Chow for the helpful discussions. This project is supported by NKRDPC-2018YFA0306001, NKRDPC-2022YFA1402802, NSFC-92165204, NSFC-12174453, NSFC-11974432, GBABRF-2019A1515011337, Leading Talent Program of Guangdong Special Projects (201626003), and Shenzhen International Quantum Academy (Grant No. SIQA202102).
§ CONSTRUCTION OF TIGHT-BINDING MODEL
§.§ Nearest-Neighbor Tight-Binding Model
The Fourier transforms of the creation and annihilation operators can be written as
c_𝐣^† =1/√(N)∑_𝐤 e^+i𝐤·𝐫_𝐣c_𝐤^† ,
c_𝐣 =1/√(N)∑_𝐤 e^-i𝐤·𝐫_𝐣c_𝐤 .
So the tight-binding model can be written in 𝐤 space as
H_TB=
[ -μ -2tcos(k_1/2) -2tcos(k_2/2); -2tcos(k_1/2) -μ -2tcos(k_3/2); -2tcos(k_2/2) -2tcos(k_3/2) -μ ].
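As a small illustration of this Bloch Hamiltonian, the sketch below builds the 3×3 matrix and diagonalizes it; the lattice-vector convention k_i = 𝐤·𝐚_i with 𝐚_1=(1,0), 𝐚_2=(1/2,√3/2), 𝐚_3=𝐚_2-𝐚_1 is an assumption, since the text does not fix it.

```python
import numpy as np

# Assumed convention: k_i = k . a_i with a1 = (1,0), a2 = (1/2, sqrt(3)/2), a3 = a2 - a1.
A1 = np.array([1.0, 0.0])
A2 = np.array([0.5, np.sqrt(3) / 2])
A3 = A2 - A1

def h_tb(k, t=1.0, mu=0.0):
    """3x3 nearest-neighbor kagome Bloch Hamiltonian of the equation above."""
    k1, k2, k3 = k @ A1, k @ A2, k @ A3
    c1, c2, c3 = (-2 * t * np.cos(x / 2) for x in (k1, k2, k3))
    return np.array([[-mu, c1,  c2],
                     [c1, -mu,  c3],
                     [c2,  c3, -mu]])

# Example: the three bands along a list of k-points `kpath`
# bands = np.array([np.linalg.eigvalsh(h_tb(k)) for k in kpath])
```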
§.§ Spin-Orbit Coupling
In this section, we derive the Rashba SOC Hamiltonian, which can be written as <cit.>
H_SOC(𝐫)=-λ( σ×𝐩) ·ẑ = λ(𝐩×σ) ·ẑ ,
where we can see that the momentum of the electron is perpendicular to the Pauli matrix vector it couples to. Rewriting the Hamiltonian in second-quantized form in the basis c_iα^† = (c_iα,↑^†, c_iα,↓^†), we obtain
H_SOC(𝐫) =λ∑_<i,j>∑_<α,α'>c_iα^†σ_iα,jα' c_jα'
=λ∑_<i,j>∑_<α,α'>c_iα^†(R_π/2𝐞_iα,jα'·σ) c_jα' ,
where σ_iα,jα' represents the Pauli matrix vector perpendicular to the bond direction, R_π/2 is the 3-D in-plane rotation matrix with rotation angle π/2, 𝐞_iα,jα'=𝐫_𝐢α-𝐫_𝐣α' is the vector connecting the nearest sites, and λ is the SOC strength. In other words, when an electron hops to a neighboring site, its momentum has a definite direction, namely the bond direction encoded by the creation and annihilation operators on the corresponding sites.
The Fourier transform of the SOC Hamiltonian can be written as
Ĥ_SOC(𝐤)=λ∑_𝐤∑_<α,α'> e^i𝐤·𝐞_iα,jα'c_𝐤α^†(R_π/2𝐞_iα,jα'·σ)c_𝐤α',
where c_𝐤α^†=(c_𝐤α,↑^†, c_𝐤α,↓^†) is the Fourier transform of c_iα^†. Ĥ_SOC(𝐤) is the Fourier transform of H_SOC(𝐫), denoted H_SOC(𝐫) ℱ⟷ Ĥ_SOC(𝐤). For the kagome lattice with CDW modulation, there are 12 atoms in the unit cell, so the Hamiltonian expands to a 24× 24 matrix in the basis c_𝐤^†=(c_𝐤1^†,c_𝐤2^†,⋯, c_𝐤12^†).
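A minimal sketch of one term of this sum is given below: for a single bond vector 𝐞 it returns the 2×2 spin block λ e^{i𝐤·𝐞}(R_π/2 𝐞)·σ; placing such blocks at the corresponding sublattice entries of the 24×24 matrix is bookkeeping that is omitted here.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rashba_bond_block(k, e, lam):
    """2x2 spin block lambda * exp(i k.e) * (R_{pi/2} e) . sigma for one
    nearest-neighbor bond vector e.  Only sigma_x and sigma_y appear
    because the bond and its pi/2 rotation are in-plane."""
    ex_rot, ey_rot = -e[1], e[0]            # in-plane rotation of e by pi/2
    return lam * np.exp(1j * np.dot(k, e)) * (ex_rot * SX + ey_rot * SY)
```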
§ SUPERCONDUCTIVITY
AV_3Sb_5 is the first quasi-2D superconductor on the kagome lattice, and as a result its superconductivity has attracted wide interest. However, the SC pairing symmetry is not yet settled, and conflicting experimental results point to different pairing symmetries. We consider both spin-singlet and spin-triplet pairings.
§.§ s-wave pairing
s-wave SC is the so-called conventional superconductivity explained by BCS theory. The BCS Hamiltonian, to which we will apply the self-consistent field approximation, can be written as
Ĥ_s-wave=V/2∑_𝐤𝐤'(c_𝐤'↑^† c_-𝐤'↓^† c_-𝐤↓ c_𝐤↑+c_𝐤'↓^† c_-𝐤'↑^† c_-𝐤↑ c_𝐤↓),
where we ignore the interband coupling. This is the simplest pairing, and the self-consistent field approximation (SCFA) provides the usual simplification of the Hamiltonian, which can be written as
<c_-𝐤↓ c_𝐤↑>=-<c_-𝐤↑ c_𝐤↓>≠ 0 ,
<c_𝐤'↑^† c_-𝐤'↓^†> = -<c_𝐤'↓^† c_-𝐤'↑^†>≠ 0 ,
where we drop the distinction between the indices 𝐤 and 𝐤' because the expectation value is 𝐤-independent. Defining Δ = ∑_𝐤<c_-𝐤↓c_𝐤↑>, the SCFA Hamiltonian can be written as
Ĥ_s-wave=Δ/2 ∑_𝐤,α(c_𝐤α,↑^† c_-𝐤α,↓^† - c_𝐤α,↓^† c_-𝐤α,↑^†)
+Δ^*/2 ∑_𝐤,α(c_-𝐤α,↓ c_𝐤α,↑-c_-𝐤α,↑ c_𝐤α,↓) ,
where we have used the equation
∑_𝐤,α c_𝐤α,↓^† c_-𝐤α,↑^†=∑_-𝐤,α c_-𝐤α,↓^† c_𝐤α,↑^†
= ∑_𝐤,α c_-𝐤α,↓^† c_𝐤α,↑^†=∑_𝐤≠0,α -c_𝐤α,↑^† c_-𝐤α,↓^† ,
and an analogous identity holds for ∑_𝐤,αc_-𝐤α,↑ c_𝐤α,↓.
To allow the BCS Hamiltonian to be easily extended into a topological SC Hamiltonian, we rewrite the s-wave SC in a tight-binding form, in which the paired electrons are restricted to nearby sites. The Hamiltonian can be written as
Ĥ_s-wave= Δ/2∑_𝐤, αe^i𝐤·𝐞_iα,jαc_𝐤α,↑^† c_-𝐤α,↓^† + h.c.
- Δ/2∑_𝐤, αe^i𝐤·𝐞_iα,jαc_𝐤α,↓^† c_-𝐤α,↑^† + h.c. .
Bogoliubov Hamiltonian is written in the basis of Nambu representation
(c_𝐤^†, c_-𝐤)_r=
(c_𝐤1,↑^†,⋯,c_𝐤12,↑^†,c_𝐤1,↓^†,⋯,c_𝐤12,↓^†,
c_-𝐤1,↑,⋯,c_-𝐤12,↑,c_-𝐤1,↓,⋯,c_-𝐤12,↓) ,
where the subscript r on the left-hand side of the equation denotes the rearrangement of the creation and annihilation operators. Note that we consider only the intraband coupling and ignore the interband coupling, so the site indices i, j refer to the same sublattice in different unit cells.
§.§ d+id-wave pairing
As another spin-singlet pairing, we consider here a chiral d+id-wave SC. This kind of SC has a gap function whose phase winds twice as fast as the 𝐤 vector rotates. A simple way to construct such a gap function Δ_d+id(𝐤) on the lattice model is to transform a real-space effective Hamiltonian to 𝐤 space. The real-space Hamiltonian can be written as
H_d-wave = ∑_i,j∑_αΔ(iα,jα)c_iα,↑^† c_jα,↓^†+h.c. ,
where Δ(iα,jα)=Δ e^i2θ_iα,jα is the SC gap function, which depends on the sublattice of the unit cell. θ_iα,jα is the angle between the vector 𝐞_x and 𝐞_iα,jα=𝐫_iα-𝐫_jα, and the double-angle phase represents the l=2 angular momentum of the d-wave.
The Fourier transformation of the d+id-wave pairing Hamiltonian can be written as
Ĥ_d-wave=Δ∑_𝐤, αe^i2θ_iα,jαe^i𝐤·𝐞_iα,jαc_𝐤α,↑^† c_-𝐤α,↓^† + h.c. .
It may seem odd to involve real-space indices in a 𝐤-space Hamiltonian. However, 𝐞_ij does not depend on the absolute positions of cells i and j, only on their relative position; it is a fixed set of bond vectors that we can enumerate without knowing the site indices. Applying the same trick as before, we rewrite the Hamiltonian as
Ĥ_d-wave= Δ/2∑_𝐤, αe^i2θ_iα,jαe^i𝐤·𝐞_iα,jαc_𝐤α,↑^† c_-𝐤α,↓^† + h.c.
- Δ/2∑_𝐤, αe^i2θ_iα,jαe^-i𝐤·𝐞_iα,jαc_𝐤α,↓^† c_-𝐤α,↑^† + h.c. .
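The corresponding momentum-space gap function can be sketched as below; the list of intra-sublattice bond vectors `bonds` is a hypothetical input, and setting the chirality phase to zero recovers the bond (extended) s-wave form of the previous subsection.

```python
import numpy as np

def dwave_gap(k, bonds, delta=0.03, chirality=+1):
    """Gap function Delta_alpha(k) = Delta * sum_e e^{i 2*chirality*theta_e} e^{i k.e},
    summed over the bond vectors e in `bonds`.  chirality=+1 gives d+id,
    chirality=-1 gives d-id; theta_e is the angle between e and the x-axis."""
    gap = 0.0 + 0.0j
    for e in bonds:
        theta = np.arctan2(e[1], e[0])
        gap += np.exp(1j * 2 * chirality * theta) * np.exp(1j * np.dot(k, e))
    return delta * gap
```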
§ PROOF OF EQ.<REF>
Consider the situation where n,n'∈ occ. One of the terms of Ω^n_μν can be written as
i⟨n|∂ H/∂ R^μ|n'⟩⟨n'|∂ H/∂ R^ν|n⟩-(μ↔ν)/(ϵ_n-ϵ_n')^2 .
And one of the terms of Ω^n'_μν can be written as
i⟨n'|∂ H/∂ R^μ|n⟩⟨n|∂ H/∂ R^ν|n'⟩-(μ↔ν)/(ϵ_n'-ϵ_n)^2
= -i⟨n|∂ H/∂ R^μ|n'⟩⟨n'|∂ H/∂ R^ν|n⟩-(μ↔ν)/(ϵ_n-ϵ_n')^2 .
Therefore, the two terms belonging to Ω_μν^n and Ω_μν^n' cancel, which means that when we add them together, the sum does not include the matrix elements labeled by (n,n'), even in the presence of energy degeneracy.
As a result, when we add up all the Ω_μν^n with n∈ occ, the sum does not include any matrix elements labeled by (n,n') with n,n'∈ occ. The result can be written as
∑_n∈ occΩ_μν^n = ∑_n∈ occ∑_n'∉ occi⟨n|∂ H/∂ R^μ|n'⟩⟨n'|∂ H/∂ R^ν|n⟩-(μ↔ν)/(ϵ_n-ϵ_n')^2,
which guarantees that we obtain a well-defined Chern number.
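As a numerical counterpart to this formula, the sketch below evaluates the summed Berry curvature of the occupied bands with finite-difference velocity operators; the Hamiltonian constructor hk and the Brillouin-zone discretization used to accumulate the Chern number are assumptions of the sketch.

```python
import numpy as np

def occupied_berry_curvature(hk, k, n_occ, dk=1e-4):
    """Summed Berry curvature of the n_occ lowest bands at k, using the
    formula above with <n|dH/dk_mu|n'> obtained from finite differences.
    For a BdG Hamiltonian, n_occ is half the number of bands
    (the negative-energy Bogoliubov bands)."""
    E, U = np.linalg.eigh(hk(k))
    dHx = (hk(k + np.array([dk, 0.0])) - hk(k - np.array([dk, 0.0]))) / (2 * dk)
    dHy = (hk(k + np.array([0.0, dk])) - hk(k - np.array([0.0, dk]))) / (2 * dk)
    Vx, Vy = U.conj().T @ dHx @ U, U.conj().T @ dHy @ U
    omega = 0.0
    for n in range(n_occ):
        for m in range(n_occ, len(E)):
            num = Vx[n, m] * Vy[m, n] - Vy[n, m] * Vx[m, n]
            omega += (1j * num / (E[n] - E[m]) ** 2).real
    return omega

# Chern number (sketch): C = (1/2pi) * sum over a BZ grid of
# occupied_berry_curvature(hk, k, n_occ) * dkx * dky.
```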
Projection-Free Online Convex Optimization via Efficient Newton Iterations
Khashayar Gatmiry, Zakaria Mhammedi
This paper presents new projection-free algorithms for Online Convex Optimization (OCO) over a convex domain 𝒦⊂ℝ^d. Classical OCO algorithms (such as Online Gradient Descent) typically need to perform Euclidean projections onto the convex set to ensure feasibility of their iterates. Alternative algorithms, such as those based on the Frank-Wolfe method, swap potentially-expensive Euclidean projections onto 𝒦 for linear optimization over 𝒦. However, such algorithms have a sub-optimal regret in OCO compared to projection-based algorithms. In this paper, we look at a third type of algorithms that output approximate Newton iterates using a self-concordant barrier for the set of interest. The use of a self-concordant barrier automatically ensures feasibility without the need for projections. However, the computation of the Newton iterates requires a matrix inverse, which can still be expensive. As our main contribution, we show how the stability of the Newton iterates can be leveraged to compute the inverse Hessian only a vanishing fraction of the rounds, leading to a new efficient projection-free OCO algorithm with a state-of-the-art regret bound.
§ INTRODUCTION
We consider the Online Convex Optimization (OCO) problem over a convex set 𝒦⊂ℝ^d, in which a learner (algorithm) plays a game against an adaptive adversary for T rounds. At each round t∈[T], the learner picks w_t ∈𝒦 given knowledge of the history {(ℓ_s, w_s)}_s<t. Then, the adversary picks a convex loss function ℓ_t: 𝒦→ℝ with knowledge of the history and the iterate w_t, and the learner suffers loss ℓ_t(w_t) and proceeds to the next round. The goal of the learner is to minimize the regret after T rounds:
_T(w) = ∑_t=1^T ℓ_t(w_t) -∑_t=1^T ℓ_t(w),
against any comparator w∈. The aim of this paper is to design computationally-efficient (projection-free) algorithms for OCO that enjoy the optimal (up to log-factor in T) O(√(T)) regret.
The OCO framework captures many optimization settings relevant to machine learning applications. For example, OCO algorithms can be used in offline convex optimization as more computationally- and memory-efficient alternatives to interior-point and cutting plane methods whenever the dimension d is large <cit.>. OCO algorithms are also often used in stochastic convex optimization, where the standard O(√(T)) regret (achieved by, e.g. Online Gradient Descent) translates into the optimal O(1/√(T)) rate[This is the optimal rate when no further assumptions are made.] via the classical online-to-batch conversion technique <cit.>. It has been shown that OCO algorithms can also achieve state-of-the-art accelerated rates in both the offline and stochastic optimization settings despite being designed for the more general OCO framework <cit.>. What is more, it has recently been shown that even non-convex (stochastic) optimization can be reduced to online linear optimization (a special case of OCO), where it is then possible to recover the best-known convergence rates for the setting <cit.>.
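For reference, the following sketch shows the projection-based baseline referred to above: projected Online Gradient Descent with the standard O(√T)-regret step size, together with the online-to-batch average; the callables grad and project are assumptions standing in for a concrete problem instance.

```python
import numpy as np

def projected_ogd(grad, project, w0, T, G=1.0, R=1.0):
    """Projected Online Gradient Descent plus online-to-batch averaging.

    grad(t, w) returns a sub-gradient of the round-t loss at w and
    project(w) is the Euclidean projection onto the feasible set; the
    projection is exactly the potentially expensive step that
    projection-free methods seek to avoid."""
    w = np.array(w0, dtype=float)
    iterates = []
    for t in range(1, T + 1):
        iterates.append(w.copy())
        eta_t = R / (G * np.sqrt(t))       # standard step size for O(sqrt(T)) regret
        w = project(w - eta_t * grad(t, w))
    return np.mean(iterates, axis=0)       # online-to-batch: average of the iterates
```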
Given the prevalent use of OCO algorithms in machine learning applications, it is important to have computationally-efficient algorithms that scale well with the dimension d of the ambient space. However, most OCO algorithms fall short of being efficient because they need to perform (Euclidean) projections onto the feasible set (potentially at each iteration) to ensure that the iterates are feasible. These projections are often inefficient, especially in high-dimensional settings with complex feasible sets. Existing projection-free OCO algorithms address this computational challenge by swapping potentially-expensive Euclidean projections for often much cheaper linear optimization or separation over the feasible set. However, existing projection-free algorithms have sub-optimal regret guarantees in terms of their dependence on T, or have potentially unbounded "condition numbers" for the feasible set multiplying their regret guarantee.
Contributions.
In this paper, we address these computational and performance challenges by revisiting an existing (but somewhat overlooked) type of projection-free OCO algorithm. Unlike existing algorithms, our proposed method does not require linear optimization or separation over the feasible set. Instead, the algorithm, Barrier-Regularized Online Newton Step,[We credit the name to <cit.> who used barrier-regularized Newton steps for the portfolio selection problem.] uses a self-concordant barrier Φ for the set to always output iterates that are guaranteed to lie within it, much like interior point methods for offline optimization. In particular, our algorithm outputs Newton iterates with respect to time-varying, translated versions of Φ. The main novelty of our work is in devising a new efficient way of computing the Newton iterates without having to evaluate the inverse of the Hessian of the barrier at every iteration, which can be computationally expensive in high-dimensional settings. Our algorithm only needs to compute a full inverse of the Hessian a vanishing O(1/√(T)) fraction of the rounds. For the rest of the rounds, the computational cost is dominated by that of evaluating the gradient of the barrier Φ, which can be much cheaper than evaluating the inverse of its Hessian in many cases.
For the special case of a polytope with m constraints, we show that there is a choice of barrier (e.g. the Lee-Sidford barrier) that, when used within our algorithm, reduces the per-round computational cost to essentially O(1) linear-system-solves of size m× d. We show that this is often cheaper than performing linear optimization over the feasible set, which other projection-free algorithms require. More importantly, our algorithm achieves a dimension-free O(√(T)) regret bound. This improves over the existing regret bounds of projection-free algorithms over polytopes. For example, among projection-free algorithms that achieve a O(√(T)) regret, the algorithms by <cit.>, which require a separation/membership Oracle for the feasible set, have a multiplicative κ =R/r factor multiplying their regret bounds, where r,R>0 are such that the feasible set contains a Euclidean ball of radius r and is contained in one of radius R. The constant κ, known as the asphericity <cit.>, can in principle be arbitrarily large. Even after applying a potentially expensive pre-processing step, which would typically involve putting the set into (near-)isotropic position <cit.>, κ can still be as large as d in the worst-case, and so the regret bounds achieved by the algorithms of <cit.> can be of order O(d√(T)); this is worse than ours by a factor of d. Other projection-free algorithms based on the Frank-Wolfe method, e.g. those in <cit.>, also have multiplicative condition numbers that are even less benign than the asphericity κ. In fact, the condition numbers in the regret bounds for polytopes appearing in, e.g. <cit.>, can in principle be arbitrarily large regardless of any pre-processing.
Finally, another advantage of our algorithm is that it can guarantee sublinear regret even for non-Lipschitz losses (i.e. where the norm of the sub-gradients may be unbounded). In particular, we show that the general guarantee of our algorithm implies a O(√(dT)) regret bound for the portfolio selection problem <cit.> and a problem of linear prediction with log-loss <cit.>, all while keeping the per-round computational cost under O(d^2) when T≥ d. The losses in both of these problems are neither bounded nor Lipschitz.
Related works.
In the past decade, many projection-free OCO algorithms have been developed to address the computational shortcoming of their projection-based counterparts <cit.>. Most projection-free algorithms are based on the Frank-Wolfe method and perform linear optimization (typically once per round) over the feasible set instead of Euclidean projection. Under no additional assumptions other than convexity and Lipschitzness of the losses, the best-known regret bound for such algorithms scales as O(T^3/4) <cit.>. While this bound is still sublinear in T and has no dependence on the dimension d, it is sub-optimal compared to the O(√(T)) regret bound achievable with projection-based algorithms. In recent years, there have been improvements to this bound under additional assumptions such as when the functions are smooth and/or strongly convex <cit.>, or when the convex set is smooth and/or strongly convex <cit.>. For the case where the feasible set is a polytope, <cit.> presented a linear-optimization-based algorithm that enjoys a O(μ√(d T)) regret bound, where μ is a conditioning number for the set. Unfortunately, μ can be large for many sets of interest as it essentially scales inversely with the minimum distance between the vertices of the set. In this work, we achieve a dimension-free O(√(T)) regret bound without the μ factor.
More recently, a new type of projection-free algorithm has emerged which uses membership/separation oracle calls instead of linear optimization <cit.>. From a computational perspective, separation-based and linear optimization-based algorithms are not really comparable, since there are sets over which separation is cheaper than linear optimization, and vice-versa. On the regret side, separation-based algorithms have been shown to achieve a O(κ√(T)) regret bound, where κ is the asphericity of the set. Separation-based algorithms are simple, often easy to analyze, and achieve the optimal-in-T regret bound, unlike linear optimization-based algorithms. However, the multiplicative factor κ in their regret bounds means that a pre-conditioning step may be required to ensure it is appropriately bounded. This pre-conditioning step would involve putting the set into (near-)isotropic position <cit.>; an operation that can cost O(d^4) arithmetic operations <cit.>; and even after such a pre-processing step, κ can still be as large as d in the worst-case. Our algorithm has the benefit of not requiring any pre-processing step.
A third type of algorithm avoids projections by outputting Newton iterates that are guaranteed to be feasible thanks to the use of a self-concordant barrier. The first such algorithm in the context of online learning was introduced by <cit.>. They presented a general recipe for using self-concordant barriers with Newton steps in online linear optimization. However, their approach falls short of being computationally-efficient as their algorithm needs to compute the inverse of the Hessian of the barrier at every iteration. Inspired by the work of <cit.>, <cit.> used damped Newton steps with quadratic terms added to the barrier to design an efficient algorithm for the classical portfolio selection problem. Closer to our work is that of <cit.> who used a similar barrier for designing an algorithm for exp-concave optimization that can be viewed as a computationally-efficient version of the Online Newton Step <cit.>. Similar to our work, <cit.> also leverage the stability of the Newton iterates to avoid computing the inverse of the Hessian of the barrier at every step. However, their approach and analysis, which are tailored to the exp-concave setting, do not necessarily lead to improved regret bounds in the general OCO setting we consider. In particular, their algorithm does not lead to a O(√(T)) regret bound over polytopes.
Finally, for our application to polytopes, we make use of recent tools and techniques developed for solving linear programs efficiently. In particular, we make use of the Lee-Sidford barrier <cit.>, which can be computed efficiently and, when used to compute Newton iterates, leads to the state-of-the-art O(√(d)) iteration upper-bound for solving a linear program. For the OCO setting, we show that using the Lee-Sidford barrier within our algorithm leads to a O(√(T)) regret bound.
We also note that ideas similar to the ones we use to avoid computing the inverse of the Hessian of the barrier at every round were used to amortize computations in the context of solving linear programs (see e.g. <cit.>).
Outline.
In Section <ref>, we present our notation and relevant definitions. In Section <ref>, we present our algorithm and guarantees. In Section <ref>, we apply our results to the case of a polytope. All proofs are deferred to the appendix.
§ PRELIMINARIES
Throughout the paper, we let 𝒦 be a closed convex subset of ℝ^d. We denote by ‖·‖ the Euclidean norm and by 𝔹(R)⊂ℝ^d the Euclidean ball of radius R>0. We let int 𝒦 denote the interior of 𝒦.
Our main algorithm, which can be viewed as an "online" counterpart to the Newton iterations <cit.>, uses a self-concordant barrier over the set of interest to avoid the need to perform Euclidean projections onto 𝒦. Next, we present the definition of a self-concordant barrier.
Self-concordant barriers.
For the rest of this section, we let 𝒦 be a convex compact set with non-empty interior. For a twice [resp. thrice] differentiable function f, we let ∇^2 f(x) [resp. ∇^3 f(x)] be the Hessian [resp. third-derivative tensor] of f at x.
A convex function f→ is called self-concordant with constant M_f≥ 0, if f is C^3 and satisfies
* f(x_k)→ +∞ for x_k → x∈∂ ; and
* For all x∈ and u ∈^d, |∇^3f(x)[u,u,u]| ≤ 2 M_f u^3_∇^2 f(x).
For M_f,ν≥ 0, we say that f→ is a (M_f,ν)-self-concordant barrier for if f is a self-concordant function over with constant M_f and
∀ w ∈, ∇ f(w)^⊤∇^-2f(w) ∇ f(w) ≤ν.
Computational Oracles.
We will assume that our algorithm has access to a self-concordant function over the set of interest through the following gradient and Hessian Oracles.
Given a point w∈ and a tolerance >0, the gradient Oracle ^_(Φ) returns an -approximate vector ∇_w of the gradient ∇Φ(w) in the dual local norm of the Hessian:
∇_w - ∇Φ(w)_∇^-2Φ(w)≤.
We denote by ^_(Φ) the computational cost of one call to ^_(Φ).
When clear from the context, we will omit the argument Φ from the gradient Oracle and its cost notation.
Given a point w∈ and a tolerance >0, the Hessian Oracle ^_(Φ) returns a matrix H and its inverse H^-1 which are 1 ± spectral approximations of the Hessian and inverse Hessian of Φ at w:
(1-)∇^2 Φ(w) ≼ H ≼ (1+)∇^2 Φ(w) and (1-)∇^-2Φ(w) ≼ H^-1≼ (1+)∇^-2Φ(w).
We denote by ^_(Φ) the computational cost of one call to ^_(Φ).
When clear from the context, we will omit the argument Φ from the Hessian Oracle and its cost notation.
Additional notation.
We use the notation f ≲ g to mean f ≤ C g for some universal constant C>0. We also write f ≤ Õ(g) to mean f ≤ polylog(T,d)· g. We let ∇^-2 ≜ (∇^2)^-1 and ∇^-1/2 refer to the inverse of the Hessian and the inverse of the square root of the Hessian, respectively.
§ ALGORITHM AND REGRET GUARANTEES
In this section, we construct a projection-free algorithm for Online Convex Optimization. The algorithm in question (Alg. <ref>) outputs approximate Newton iterates with respect to "potential functions" (Φ_t) that take the following form:
Φ_t(w) Φ(w) + w^⊤∑_s=1^t-1 g_s,
where (g_s∈∂ℓ_s(w_s)) are the sub-gradients of the losses (ℓ_s) at the iterates (w_s) of Algorithm <ref>, and Φ is a self-concordant function over 𝒦. Algorithm <ref> uses the approximate gradient and Hessian Oracles of Φ (see <ref>) to output iterates (w_t) that are approximate Newton iterates in the following sense:
∀ t∈[T],
w_t+1≈ w_t - ∇^-2Φ_t+1(w_t)∇Φ_t+1(w_t).
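In code, one exact Newton iterate of this form can be sketched as follows; grad_barrier and hess_barrier are assumed oracles for Φ, and grad_sum denotes ∑_{s≤t} g_s. Since the extra term of Φ_{t+1} is linear in w, its Hessian coincides with that of Φ; this exact step is the expensive baseline that the amortized scheme described below approximates.

```python
import numpy as np

def exact_newton_iterate(w, grad_barrier, hess_barrier, grad_sum):
    """One Newton step w - (hess Phi_{t+1}(w))^{-1} grad Phi_{t+1}(w)."""
    g = grad_barrier(w) + grad_sum   # grad Phi_{t+1}(w) = grad Phi(w) + sum of sub-gradients
    H = hess_barrier(w)              # hess Phi_{t+1}(w) = hess Phi(w) (the linear term drops out)
    return w - np.linalg.solve(H, g)
```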
As is by now somewhat standard in the analyses of online Newton iterates of the form in (<ref>), we will bound the regret of Algorithm <ref> by showing that:
* The iterates (w_t) are close (in the norm induced by the Hessian ∇^2 Φ(w_t)) to the FTRL iterates, which are given by
w_t^⋆∈_w∈Φ_t(w).
* The regret of FTRL is bounded by O(√(T)).
Our main contribution is an algorithm that outputs iterates (w_t) that satisfy the first bullet point (i.e. iterates that satisfy (<ref>)) while only calling a Hessian Oracle (which is potentially computationally expensive) a O(1/√(T)) fraction of the rounds after T rounds. As we show in Section <ref>, for the case where 𝒦 is a polytope with m∈ℕ constraints, the algorithm achieves a O(√(T)) regret bound, where the per-iteration computational cost essentially reduces to a linear-system-solve involving a d× m matrix. Among existing OCO algorithms that achieve a O(√(T)) regret bound, none can achieve this computational complexity for general polytopes with m constraints (see Section <ref> for more details).
§.§ Efficient Computation of the Newton Iterates
The key feature of our method (Algorithm <ref>) is that it uses an amortized computation of the Hessians. Namely, it computes the inverse of the Hessian of the barrier Φ only for a small fraction of the iterates (w_t). Henceforth, we refer to the iterates where the algorithm computes the full inverse of the Hessian as landmark iterates; these are the iterates (u_t) in Lines <ref> and <ref> of Algorithm <ref>. The idea behind this is that for a sufficiently curved[Informally, the "curvature" of a convex function is high when the rate of change of its gradients is high.] barrier Φ, the Newton iterates with respect to Φ are stable enough that it suffices to compute the inverse of the Hessian of Φ at the closest landmark iterate. For example, this is what was done in <cit.> to design an efficient algorithm for exp-concave optimization.
Unlike the setting of <cit.>, where it is possible to add quadratic terms to the barrier for additional stability, in our setting we cannot do that without sacrificing performance in terms of regret. Without the quadratic terms, the Newton iterates are not stable enough for our desired guarantee. Instead of adding regularization terms, the algorithm takes O(1) Newton steps per round to get "closer" to the Newton iterate with the true Hessian matrix. This simple approach is key to the success of our method.
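The sketch below illustrates the amortized scheme just described: the Hessian of the barrier is recomputed only at landmark points, and each round performs a few Newton steps with the cached inverse. It is only a schematic rendering; the oracles, the placement of the step size η, and the landmark-refresh threshold are assumptions, since the precise choices are those of Algorithm 1, which is not reproduced here.

```python
import numpy as np

def amortized_newton_sketch(grad_barrier, hess_barrier, losses_grad, w0, T,
                            eta=0.01, m_iter=5, radius=0.02):
    """Schematic amortized-Hessian online Newton loop (not Algorithm 1 itself)."""
    w = np.array(w0, dtype=float)
    u = w.copy()                              # current landmark
    H = hess_barrier(u)
    H_inv = np.linalg.inv(H)                  # expensive step, done only at landmarks
    grad_sum = np.zeros_like(w)
    n_hessian_computations = 1
    for t in range(1, T + 1):
        g_t = losses_grad(t, w)               # play w, observe sub-gradient
        grad_sum += eta * g_t
        for _ in range(m_iter):               # cheap approximate Newton steps
            w = w - H_inv @ (grad_barrier(w) + grad_sum)
        # refresh the landmark only when w has drifted too far in the local norm
        if np.sqrt((w - u) @ H @ (w - u)) > radius:
            u = w.copy()
            H = hess_barrier(u)
            H_inv = np.linalg.inv(H)
            n_hessian_computations += 1
    return w, n_hessian_computations
```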
In the next subsection, we give a generic guarantee for the algorithm.
§.§ Generic Regret Guarantee
In this subsection, we present a general regret and computational guarantee for the algorithm under minimal assumptions on the sequence of losses and without tuning the "step size" η. In the next subsection, we will instantiate the regret guarantee when additional assumptions on the sequence of losses are available. We now state the main guarantee (the proof is in Appendix <ref>).
Let Φ be a self-concordant function over with constant M_Φ>0, and let b,η,, α>0 and m_∈ℕ be such that η≤1/1000 b M_Φ, ≤1/20000 M_Φ, α=0.001, and m_Θ(log1/ M_Φ). Further, let (w_t) be the iterates of Algorithm <ref> with input (η, , α, m_) and suppose that the corresponding sub-gradients (g_t) satisfy g_t_∇^-2Φ(w_t)≤ b, for all t≥ 1. Then, the regret of Algorithm <ref> is bounded as:
∑_t=1^T (ℓ_t(w_t)- ℓ_t(w)) ≲1/ηΦ(w) + η∑_t=1^T g_t_∇^-2Φ(w_t)^2 + ∑_t=1^T g_t_∇^-2Φ(w_t), ∀ w ∈.
Furthermore, the computational cost of the algorithm is bounded by
O( (^_ + d^2)· T·log1/ M_Φ + ^_α·(M_Φ T + M_Φ∑_t=1^T ηg_t_∇^-2Φ(w_t)) ).
Theorem <ref> essentially shows that it is possible to achieve the same regret as FTRL while only computing the inverse of the Hessian of Φ at most O(M_ΦTη) times.
§.§ Regret Guarantee Under Local and Euclidean Norm Bounds on the Sub-Gradients
We now instantiate the guarantee in Theorem <ref> with a (M_Φ, ν)-self-concordant barrier Φ for the set 𝒦, with respect to which the local norms of the sub-gradients are bounded; that is, when g_t_∇^-2Φ(w_t)≤ b. We note that the regret bound in (<ref>) has an additive Φ(w) term which may be unbounded near the boundary of 𝒦. However, it is still possible to compete against comparators in 𝒦 by making additional assumptions on the range of the losses <cit.>. We discuss some of these assumptions in the sequel. For the next theorem, we will state the regret bound relative to comparators in the restricted set:
_c (1-c) ⊕{ c w^⋆} ,
where ⊕ denotes the Minkowski sum, w^⋆∈_w∈ Φ(w), and c∈(0,1) is a parameter.
With this, we now state a regret bound for when the sub-gradients of the losses have bounded local norms. The proof of the next theorem is in Appendix <ref>.
Let Φ be an (M_Φ, ν)-self-concordant barrier for and let c∈(0,1),b>0. Further, suppose that for all t∈[T], g_t_∇^-2Φ(w_t)≤ b, where (w_t) are the iterates of with input parameters (η, , α,m_) such that
η√(νlog(1/c)/b^2T), √(ν/T), α 0.001, and m_Θ(log1/ M_Φ).
For T≥ 1 large enough such that η≤1/1000 b M_Φ, ≤1/20000 M_Φ, the regret of is bounded as
^_T(w) ≲ b √(ν Tlog(1/c)), ∀ w∈_c,
where _c⊂ is as in (<ref>). Further, the computational complexity of in this case is bounded by
O((^_ + d^2)· T·logT/ν M_Φ + ^_α· M_Φ√(Tνlog(1/c))).
Remark.
The regret bound in Theorem <ref> is stated with respect to comparators in the restricted set _c defined in (<ref>). It is possible to extend this guarantee to all comparators in 𝒦 under an additional assumption on the range of the losses. For example, if for w^⋆∈_w∈Φ(w), we have
sup_w∈, t∈[T]ℓ_t((1-1/T)· w + 1/T· w^⋆)- ℓ(w)≤ O(1/√(T)),
then the regret guarantee in (<ref>) can be extended to all comparators in 𝒦 up to an additive O(√(T)) term (see Lemma <ref> in the appendix). In this case, the log(1/c) term in the computational complexity needs to be replaced by log T. We note that the condition in (<ref>) does not require a uniform bound on the losses. Instead, it only restricts the rate of growth of the losses (ℓ_t(w)) as w approaches the boundary of 𝒦. As we show in the sequel (<ref>), (<ref>) is satisfied for some popular losses which are not Lipschitz.
We now instantiate the guarantee in Theorem <ref> when the sub-gradients are bounded in Euclidean norm (instead of local norm); that is, we assume that for all t∈[T], g_t≤ G for some G>0. We note that this assumption implies (<ref>), and we will be able to bound the regret against all comparators in 𝒦, as alluded to in Remark <ref>. The proof of the next theorem is in Appendix <ref>.
Let Ψ be an (M_Ψ, ν) self-concordant barrier for and let Φ(·) Ψ(·)+ν/2R^2·^2. Further, let G, R>0 and suppose that ⊆(R) and for all t∈[T], g_t≤ G, where g_t∈∂ℓ_t(w_t) and (w_t) are the iterates of with input parameters (η, , α,m_) such that
ην/R G√(log T + 1/T), √(ν/T), α 0.001, and m_Θ(log1/ M_Ψ).
For T≥ 1 large enough such that η≤1/1000 G M_Ψ, ≤1/20000 M_Ψ, the regret of is bounded as
^_T(w) ≲ R G√(Tlog T), ∀ w∈.
Further, the computational complexity of in this case is bounded by
O((^_(Ψ) + d^2)· T·logT/ν M_Ψ + ^_α(Ψ)· M_Ψ√(Tνlog T)).
§ APPLICATION TO POLYTOPES USING THE LEE-SIDFORD BARRIER
In this section, we assume that the set 𝒦 is a polytope in ℝ^d specified by m linear constraints:
𝒦 = {w ∈ℝ^d | ∀ i ∈ [m], a_i^⊤ w ≥ b'_i},
and we construct efficient gradient and Hessian Oracles for a self-concordant barrier for 𝒦. This will then allow us to instantiate the guarantees of Section <ref> and provide explicit and state-of-the-art bounds on the regret of our algorithm.
We will assume without loss of generality that ‖a_i‖=1 for all i∈[m], and let A ≜ (a_1,…,a_m)^⊤∈ℝ^m× d denote the constraint matrix of the set 𝒦.
For the rest of this section, it will be convenient to define the "slack" variables s_w,i = a_i^⊤ w - b'_i, for i∈[m]. Here, s_w,i essentially represents the distance of w to the ith facet of the polytope. Further, we let S_w ≜ diag(s_w) be the diagonal matrix whose ith diagonal entry is s_w,i.
The barrier. To perform Online Convex Optimization over 𝒦, we pick the regularizer Φ of our algorithm to be the Lee-Sidford (LS) barrier Φ^<cit.> with parameter p>0, which is defined as
Φ^(w) = min_v ∈ℝ^m_>0 -logdet(A^⊤ S_w V S_w A) + 1/(1+p^-1) tr(V^1+1/p),
where V ≜ diag(v). One way to think of the barrier is as a weighted log-barrier. As we will discuss in the sequel, this choice will confer computational and performance (in terms of regret) advantages over the standard log-barrier.
Self-concordance of the LS barrier. According to <cit.>, the LS barrier with the choicep = O(log(m))is a self-concordant function with parameterM_Φ^satisfying
M_Φ^ = O(log(m)^2/5) = O(1),
The other favorable property of this barrier is that its Newton decrement at any pointw∈is of orderO(√(d)); that is,
∇Φ^(w)_∇^-2Φ^(w) = O(√(d)).
Therefore,Φ^is a(O(1), O(d))-self-concordant barrier. For the log-barrier, the right-hand side of (<ref>) would be√(m).
Cost of gradient and Hessian Oracles.
We consider the computational complexities of gradient and Hessian Oracles forΦ^. By <cit.>, we have that for>0,
^_(Φ^) ≤O(^·log (1/)), and ^_(Φ^) ≤O(^√(d)·log (1/)),
where^is the computational cost of solving a linear system of the formA^⊤(v) A x = y, for vectorsv∈^d_≥0andy∈^d; we recall thatA=(a_1,…,a_m)^⊤is the constraint matrix for. In the worst-case, such a linear system can be solved with cost bounded as
^≤ O(m d^ω-1),
whereωis the exponent of matrix multiplication, andmis the number of constraints of. However, as we show in the sequel,^can be much smaller in many practical applications.
With this, we immediately obtain the following corollary for the regret and run-time of the algorithm under local norm and Euclidean norm bounds on the sub-gradients.
Let c∈(0,1), G,R,b>0, and suppose is given by (<ref>) and that Φ^ is the corresponding barrier. Further, let (w_t) be the iterates of , and let _c be the restricted version of defined in (<ref>). Then, the following holds:
* Local norm bound: If g_t_∇^-2Φ(w_t)≤ b, for all t≥1, and the parameters (η,,α,m_) of are set as in Theorem <ref> with Φ=Φ^ and (M_Φ,ν) =(O(1), O(d)), then for T large enough (as specified in Theorem <ref>), the regret of is bounded by
^_T(w) ≲ b√(dTlog(1/c)), ∀ w∈_c.
* Euclidean norm bound: If ⊆(R) and g_t≤ G, for all t≥1, and the parameters (η,,α,m_) of are set as in Theorem <ref> with Φ(·) = Φ^(·) + ν/2R^2·^2 and (M_Ψ,ν) =(O(1), O(d)), then for T large enough (as in Theorem <ref>) has regret bounded as
^_T(w) ≲ R G√(Tlog T), ∀ w∈.
In either case, the computational complexity is bounded by
O((^+d^2) · T + ^· d√(T)),
where ^ is the computational cost of solving a linear system of the form A^⊤(v) A x = y, for vectors v∈^d_≥0 and y∈^d (recall that A is the constraint matrix for the polytope ).
Using the log-barrier. We note that since 𝒦 is a polytope, we could have used the standard log-barrier
Φ^log(w) ≜ -∑_i=1^m log (a_i^⊤ w - b_i').
This barrier is (1,m)-self-concordant, and so instantiating Theorem <ref> with it would imply a O(b√(m d T)) regret bound in the case of local sub-gradient norms bounded by b>0. Using the LS barrier replaces the √(m) term in this bound by √(d) regardless of the number of constraints—see (<ref>). However, this comes at a ^ computational cost, which can be as high as m d^ω-1 in the worst-case (see (<ref>)). In the case of the log-barrier, this cost would be replaced by m d (essentially because ^_(Φ^log)≤ O(m d)). Thus, when m is of the order of d, using the log-barrier may be more computationally-efficient than using the LS barrier. In the next corollary, we bound the regret of our algorithm when Φ= Φ^log; this result is an immediate consequence of Theorem <ref>.
Let G,b>0, and suppose is given by (<ref>) and that Φ^log is the corresponding log-barrier. Further, let (w_t) be the iterates of . If ⊆(R) and g_t≤ G, for all t≥1, and the parameters (η,,α,m_) of are set as in Theorem <ref> with Φ(·) = Φ^log(·) + ν/2R^2·^2 and (M_Ψ,ν) =(1, m), then for T large enough (as in Theorem <ref>) has regret bounded as
^_T(w) ≲ R G√(Tlog T), ∀ w∈.
The computational complexity is bounded by
O((m d+d^2) · T + m d^ω -1√(m T)).
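For the log-barrier defined above, the gradient and Hessian oracles are particularly simple; the sketch below computes them, together with the Newton decrement, for a polytope {w : Aw ≥ b}. It is a minimal illustration, not the paper's implementation.

```python
import numpy as np

def log_barrier_oracles(A, b, w):
    """Gradient, Hessian, and Newton decrement of Phi(w) = -sum_i log(a_i^T w - b_i)."""
    s = A @ w - b                              # slack variables s_{w,i}
    assert np.all(s > 0), "w must lie in the interior of the polytope"
    grad = -A.T @ (1.0 / s)
    hess = A.T @ np.diag(1.0 / s**2) @ A       # A^T S_w^{-2} A
    newton_decrement = np.sqrt(grad @ np.linalg.solve(hess, grad))
    return grad, hess, newton_decrement
```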
§.§ Implications for Lipschitz Losses
We now discuss implications of Corollary <ref>, and compare our bound to those of existing algorithms for Lipschitz losses.
Dimension-free regret bound. We note that when the Euclidean norms of the sub-gradients are bounded, our algorithm achieves a dimension-free O(√(T)) regret bound. In contrast, the best dimension-free regret bound[The dependence in T can be improved under additional structure such as smoothness or strong-convexity of the losses.] achieved by existing projection-free algorithms is of order O(T^3/4) (see e.g. <cit.>). We also note that existing separation/membership-based algorithms that achieve a √(T) regret, for example those presented in <cit.>, are not dimension-free. Their regret bounds are of order O(κ√(T)), where κ= R/r with r,R>0 such that the feasible set contains a Euclidean ball of radius r and is contained in one of radius R. The asphericity parameter can depend on the dimension d <cit.>, and even after a pre-conditioning step (which would involve putting the set into near-isotropic position and can cost up to Ω(d^4) <cit.>), κ can be as large as d in the worst-case. Of course, to make a fair comparison with existing projection-free algorithms, we also need to take computational complexity into account. This is what we do next.
Computational cost. The computational cost in (<ref>) should be compared with that of existing projection-free algorithms. For linear optimization-based projection-free algorithms, the computational cost after T rounds is typically of order ^· T, where ^ is the cost of performing linear optimization over 𝒦 which, for a polytope, reduces to solving a linear program. Using state-of-the-art interior point methods for solving such a linear program would cost ^≤O(√(d)·^); see e.g. <cit.>. Thus, linear optimization-based projection-free algorithms[This only concerns algorithms that use an interior point method to implement linear optimization over 𝒦.] can have a cost that is a factor √(d) worse than that of our algorithm in the setting of Corollary <ref>. On the other hand, for separation/membership-based algorithms, the computational cost scales with O(^· T) after T rounds, where ^ is the cost of performing separation for the set 𝒦. For a general polytope in ℝ^d with m constraints, we have ^≤ O(m d), which may be smaller than ^ (the latter can be as large as m d^ω-1 in the worst case; see (<ref>)). Here, it may be more appropriate to compare against the computational guarantee of our algorithm given in Corollary <ref>; by (<ref>), we have that for T≥ d^ω-2√(m), the computational cost in the setting of the corollary is dominated by (m d + d^2)· T, which is comparable to that of existing separation-based algorithms.
§.§ Implications for Non-Lipschitz Losses
Another advantage our algorithm has over projection-free, and even projection-based, algorithms is that its regret bound scales with a bound on the local norms of the gradients—see (<ref>). We now showcase two online learning settings where this leads to non-trivial performance and computational improvements over existing OCO algorithms.
Online Portfolio Selection <cit.>. The portfolio selection problem is a classical online learning problem where the gradients of the losses can be unbounded. In this paragraph, we demonstrate how the guarantee of Corollary <ref> leads to a non-trivial guarantee for this setting both in terms of regret and computational complexity. In the online portfolio setting, at each round t, a learner (algorithm) chooses a distribution w_t∈Δ_d over a fixed set of d portfolios. Then, the environment reveals a return vector r_t ∈_≥0^d, and the learner suffers a loss
ℓ_t(w_t) - log w_t^⊤ r_t.
The goal of the learner is to minimize the regret Reg_T(w)≜∑_t=1^T( ℓ_t(w_t)-ℓ_t(w)) after T≥1 rounds. For this problem, it is known that a logarithmic regret is achievable, but the specialized algorithms that achieve this have a computational complexity that scales with min(d^3 T,d^2T^2) <cit.>. On the other hand, applying the generic Online Gradient Descent or the Online Newton Step to this problem leads to regret bounds that scale with the maximum norm of the gradient (which can be unbounded). Instantiating the guarantees of Corollary <ref> with Φ set to the standard log-barrier for the simplex[Technically, we need to use a barrier for the set {w̃∈^d_≥ 0|∑_i∈[d-1]w̃_i≤ 1}; see e.g. <cit.>.], in particular the bound in (<ref>), for the online portfolio selection problem leads to an O(√(d T)) regret bound, which does not depend on the norm of the observed gradients. Furthermore, we have ^≤O(d), and so by (<ref>) the computational complexity is essentially O(d^2 T) after T rounds. Technically, the bound in (<ref>) is only against comparators in the restricted set _c. However, by setting c=1/T, it is possible to extend this guarantee to all comparators in 𝒦, as explained in Remark <ref>, since the losses in this case satisfy (<ref>) <cit.>.
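The loss and sub-gradient of the portfolio problem are cheap to evaluate, as the sketch below shows; note that the Euclidean norm of the gradient blows up as w^⊤r_t → 0, which is why a bound in the local barrier norm, rather than the Euclidean norm, is the relevant quantity here.

```python
import numpy as np

def portfolio_loss_and_grad(w, r):
    """Round loss -log(w^T r) of online portfolio selection and its gradient."""
    wr = float(w @ r)
    loss = -np.log(wr)
    grad = -r / wr                 # unbounded in Euclidean norm as wr -> 0
    return loss, grad

# Feeding these sub-gradients to the barrier-regularized Newton scheme with the
# simplex log-barrier yields the O(sqrt(d*T)) regret bound discussed above.
```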
Linear prediction with the log-loss.
Another classical online learning problem with unbounded gradients is that of linear prediction with the log-loss <cit.>. For this problem, at each round t, the learner receives a feature vector x_t∈𝒳⊆ℝ^d, outputs w_t∈𝒦⊆ℝ^d, then observes a label y_t∈{-1,1} and suffers loss
ℓ_t(w_t) ≜ - 𝕀{y_t = 1}·log (1 + w_t^⊤ x_t) - 𝕀{y_t = -1}·log (1- w_t^⊤ x_t).
In the settings where (𝒦, 𝒳)=(Δ_d, 𝔹_∞(1)) and (𝒦, 𝒳)=(𝔹_∞(1), Δ_d), we have that ∇ℓ_t(w)_∇^-2Φ(w)≤ O(1) for all w∈𝒦, where Φ is set to the corresponding log-barrier for 𝒦. Thus, instantiating Corollary <ref> (in particular (<ref>)) in this setting implies that our algorithm achieves a regret bound of the form:
O(√(d T)),
and has computational complexity bounded byO(d^2 T), as long asT≥d. Again, we emphasize that the bound in (<ref>) does not depend on the norm of the gradients, which may be unbounded.
Finally, we note that there exist a few specialized algorithms that provide sublinear regret bounds for non-Lipschitz losses. This includes, for example, the Soft-Bayes algorithm <cit.>. However, this algorithm is specialized to the log-loss with a particular dependence on the predictions, and it is not clear, for example, what regret bound it would have in the linear prediction setting and other similar settings with non-Lipschitz losses.
It is possible to get away with this only because the Newton iterates are stable and use the fact that for a suitable choice of the barrier Φ, the Hessian of the potential function Φ_t defined in Equation (<ref>) does not differ by much for close by points w_1,w_2. Since the Hessian of Φ and Φ_t are the same, this in turn requires the Hessian of Φ to be stable, which justifies choosing Φ to be self-concordant.
Following a similar approach as in <cit.>, in Lemma <ref> we show that taking an approximate Newton step using approximate Hessian and gradient instead of the exact ones still can reduce the Newton decrement.
Nonetheless, because approximate Hessians and gradients are used instead of the exact ones, the decrease in the Newton decrement is not sufficiently large for our needs. The key idea we use here is that after observing each new sub-gradient g_t, we apply multiple approximate Newton steps, i.e. an approximate version of the step in Equation (<ref>) using our approximate gradient and Hessian oracles. Since we do not move too far from the landmark point by performing these Newton steps, the Hessian at these iterates does not differ too much from the Hessian at the most recent landmark. We then use Lemma <ref> to show that we can reduce our distance to the global optimum of Φ_t, w^⋆_t, exponentially fast by taking only Newton steps based on the approximate Hessian computed at the previous landmark point. We then use this high-precision proximity to w^⋆_t in Lemma <ref> to prove a guarantee on the regret of the algorithm. In particular, this bypasses the need for a Taylor expansion to approximate the inverse of the Hessian at the current point based on the previous landmark point, an idea exploited in <cit.> to amortize matrix inversions.
Based on this new idea, we inductively prove that recomputing the Hessian at only a O(1/√(T)) fraction of the iterates suffices. To show this, we crucially bound the overall sum of movements of w_t in the local Hessian norm by relating it to the local norm of the sub-gradients g_t in Lemma <ref>. To obtain an efficient bound on the local norms of the sub-gradients g_t_H_t^-1, which is important both for minimizing the regret and for bounding the number of recomputations of the Hessians at the landmarks (see Theorem <ref>), we need to carefully pick the barrier and the step size so that this quantity does not blow up too fast close to the boundary. Competing against vectors w that are pulled slightly toward the interior (by a 1/T factor) turns out to be sufficient to obtain a regret bound against any vector in 𝒦. In particular, given a self-concordant barrier Φ for 𝒦 as defined in Section <ref>, we bound the regret and amortized computational cost of the algorithm in Theorem <ref>, given a bound on the local norms of the sub-gradients with respect to Φ. Furthermore, when we only have a Euclidean norm bound on the sub-gradients, we introduce a hybrid barrier combining Φ and the squared Euclidean norm in Theorem <ref>, and bound the regret and computational cost of the algorithm using this barrier in terms of only the Euclidean norms of the sub-gradients and the diameter of 𝒦.
§ SELF-CONCORDANCE PROPERTIES
Throughout, for a twice-differentiable functionf→, we letλ(x,f) ∇f(x)_∇^-2f(x)denote the Newton decrement offatx∈.
Let f → be a self-concordant function with constant M_f≥ 1. Further, let x∈ and x_f∈_x∈ f(x). Then, I) whenever λ(x,f)<1/M_f, we have
x -x_f_∇^2 f(x_f)∨x -x_f_∇^2 f(x)≤λ(x,f)/(1-M_f λ (x,f));
and II) for any M≥ M_f, the Newton stepx^+ x - ∇^-2f(x)∇ f(x) satisfies x^+∈ and
λ(x^+,f)≤ M λ(x,f)^2/ ( 1 - M λ(x,f))^2.
Let f→ be a self-concordant function with constant M_f and x ∈. Then, for any w such that rw - x_∇^2 f(x) < 1/M_f, we have
(1-M_f r)^2∇^2 f(w) ≼∇^2 f(x) ≼ (1-M_f r)^-2∇^2 f(x).
The following result from <cit.> will be useful to show that the iterates of algorithms are always in the feasible set.
Let f→ be a self-concordant function with constant M_f≥ 1 and x ∈. Then, _x{w ∈^dw-x_x<1/M_f }⊆. Furthermore, for all w∈_x, we have w-x_w≤w- x_x/1-M_f w- x_x.
Finally, we will also make use of the following result due to <cit.>:
Let f→ be a self-concordant function with constant M_f>0. Then, for any x, w ∈ such that rx-w_∇^2 f(x)<1/M_f, we have
∇ f(x) - ∇ f(w)^2_∇^-2f(x)≤1/(1-M_f r )^2w- x^2_∇^2f(x).
§ TECHNICAL LEMMAS
Our analysis relies on the crucial fact that the Newton decrement can be sufficiently decreased by taking a Newton step using only approximate gradients and Hessians. We state this fact next; the proof is in <ref>.
Let Φ be a self-concordant function over with constant M_Φ>0, and let y∈^d be such that λ(y, Φ)≤ 1/(40M_Φ). Further, let H∈^d× d and ∇_y∈^d be such that
∇_y - ∇Φ(y)_∇^-2Φ(y)≤< 1/40 M_Φ,
(1-α)∇^2 Φ(y) ≼ H ≼ (1+α)∇^2 Φ(y),
for α < 1/5.
Then, for ỹ^+ y- H^-1∇Φ(y) and y^+ y - H^-1∇_y, we have
λ(ỹ^+, Φ) ≤ 9M_Φλ(y, Φ)^2 + 2.5αλ(y, Φ),
λ(y^+, Φ) ≤ 20(1+α) + (1+20(1+α)) ·λ(ỹ^+, Φ).
Next, we show that as long as the Newton decrement is small enough at the current iterate w_t-1, the “intermediate” Newton iterates (w^m_t) remain close to the landmark point u_t-1; this will be important for the proof of Theorem <ref>. The proof is in <ref>.
Let Φ be a self-concordant function over with constant M_Φ>0. Let b>0, m_Θ(log1/ M_Φ), α≤ 1/(1000M_Φ), <1/(20000M_Φ), α=0.001, and η≤ 1/(1000M_Φb). Further, let (w_t,w_t^m,u_t,g_t,H_t) be as in Algorithm <ref> with input (η, ,α, m_). Suppose that at round t-1 of Algorithm <ref>, we have
λ(w_t-1, Φ_t-1) ≤α and u_t-1 - w_t-1_∇^2 Φ(u_t-1)≤1/40M_Φ.
For t>1, if the sub-gradient g_t-1 at round t-1 satisfies g_t-1_∇^-2Φ(w_t-1)≤ b, then
λ(w^m_t, Φ_t) ≤(15/16)^m-1λ(w^1_t, Φ_t) + 500≤1/40 M_Φ.
Furthermore, we have for all m∈[m_]:
1/2∇^2 Φ(w^m_t) ≼∇^2 Φ(w^⋆_t) ≼ 2∇^2 Φ(w^m_t),
w_t^m - u_t-1_H_t-1≤1/10 M_Φ,
w_t^m - w_t^⋆_∇^2 Φ(w_t^⋆)≤
1/49M_Φ(15/16)^m-1 + 240,
|w^m_t - u_t-1_H_t-1 - w_t-1 - u_t-1_H_t-1|≤ 2ηg_t-1_∇^2 Φ(w_t-1) + 1/40M_Φ(15/16)^m-1 + 500 + 2α,
where w^⋆_t∈_w∈Φ_t(w) is the optimum solution of Φ_t.
We note that we have not made an attempt to optimize over the constants in Lemma <ref>.
Let Φ be a (M_Φ, ν)-self-concordant barrier for , and let w^⋆∈_w∈Φ(w). Further, suppose that the losses satisfy:
sup_w∈, t∈[T]ℓ_t((1-1/T)· w + 1/T· w^⋆)- ℓ_t(w)≤ O(1/√(T)),
Then, for any w ∈, there exists w̃∈_1/T (where _c is as in (<ref>)) such that
∑_t=1^T (ℓ_t(w_t)-ℓ_t(w)) ≤∑_t=1^T (ℓ_t(w_t)-ℓ_t(w̃)) + O(√(T)).
Fix w∈ and define w̃ = 1/ Tw^⋆ + (1 - 1/T)w∈_1/T. Then, by (<ref>), we have, for all t∈[T],
ℓ_t(w_t)- ℓ_t(w)≤ℓ_t(w_t) - ℓ_t(w̃) + O(T^-1/2).
Summing this over t=1,…, T leads to the desired result.
§ PROOFS OF THE MAIN RESULTS
Next, we present the proof of Theorem <ref>.
§.§ Proof of Theorem <ref>
The proof consists of three parts: I) First, we show that keeps the Newton decrements λ(w_t,Φ_t), t≥ 1, small—this is the main invariant of ; II) Then, we bound the regret of using this invariant and the results of Lemma <ref>; III) Finally, we bound the runtime of .
Bounding the Newton decrements. We will show that the Newton decrements satisfy
λ(w_s, Φ_s) ≤αmin{1/1000M_Φ, 1000},
for all s≥ 1.
We will show (<ref>) by induction over t≥ 1.
Base case. The base case follows by the facts that w_1 ∈_w∈Φ(w), Φ_1≡Φ and that the Newton decrement is zero at the minimizer.
Induction step.
Suppose that (<ref>) holds with s=t-1 for some t≥ 1. We will show that it holds for s=t.
First, note that by the update rule of landmark (see Lines <ref> and <ref> of Alg. <ref>), we have that
w_t-1 - u_t-1_H_t-1≤1/41 M_Φ,
where H_t-1 =^_α(u_t-1). Thus, by the choice of α in the theorem's statement, we have
u_t-1 - w_t-1_∇^2Φ(u_t-1)≤1/40M_Φ.
This, combined with the fact that (<ref>) holds with s=t-1 (the induction hypothesis) implies that the conditions of Lemma <ref> are satisfied. This in turn implies
λ(w_t, Φ_t) (a)=λ(w_t^, Φ_t) ≤(15/16)^λ(w^1_t, Φ_t) + 500≤50/M_Φ(15/16)^+ 500(b)≤α,
where (a) follows by the fact that w_t=w_t^ (by definition; see Algorithm <ref>) and (b) follows by the choice of in the theorem's statement.
This shows that (<ref>) holds for s=t and concludes the induction.
Bounding the regret. To bound the regret of , we make use of the FTRL iterates {w_t^⋆}, which are given by w_t^⋆∈_w∈Φ_t(w): By Lemma <ref>, we have that for all t∈[T],
w_t - w^⋆_t_∇^2 Φ(w^⋆_t) = w_t^m_ - w^⋆_t_∇^2 Φ(w^⋆_t)≤1/49M_Φ(15/16)^m_ + 240 = O(),
where the last inequality follows by the choice of m_ = Θ(log (1/( M_Φ))) in the theorem's statement. Using this and Hölder's inequality, we now bound the sum of linearized losses of the algorithm in terms of the sum of linearized losses with respect to {w_t^⋆}:
∑_t=1^T ⟨ w_t, g_t⟩ ≤∑_t=1^T ⟨ w^⋆_t, g_t⟩ + ∑_t=1^T w^⋆_t - w_t_∇^2 Φ(w^⋆_t)·g_t_∇^2 Φ(w^⋆_t)^-1.
Now, by (<ref>) in Lemma <ref> (which holds due to (<ref>) and (<ref>) with s=t-1 as we showed in the prequel), we have 1/2∇^2 Φ(w_t) ≼∇^2 Φ(w^⋆_t), which implies that g_t_∇^-2Φ(w^⋆_t)≤ 2g_t_∇^-2Φ(w_t).
Combining this with (<ref>) and (<ref>), we get that
∑_t=1^T ⟨ w_t, g_t⟩ ≤∑_t=1^T ⟨ w^⋆_t, g_t⟩ + O()∑_t=1^T g_t_∇^-2Φ(w_t).
Now fix w∈. Subtracting ∑_t=1^T ⟨ w, g_t⟩ from both sides of (<ref>) implies the following bound the regret of :
^_T(w) ≤∑_t=1^T ⟨ w^⋆_t, g_t⟩ - ∑_t=1^T ⟨ w, g_t⟩
+ O()∑_t=1^T g_t_∇^-2Φ(w_t) ,
≤1/ηΦ(w) + η∑_t=1^T g_t_∇^-2Φ(w_t)^2 + O()∑_t=1^T g_t_∇^-2Φ(w_t),
where the last inequality follows by the regret bound of FTRL (see e.g. <cit.>).
Bounding the run-time.
Note that updates the landmark points on the rounds where u_t - w_t_H_t > 1/(41M_Φ). Now, by (<ref>) in Lemma <ref> (which holds due to (<ref>) and (<ref>) with s=t-1 as we showed in the prequel), we have
|u_t - w_t_H_t - u_t - w_t-1_H_t| =
|u_t - w^m__t_H_t - u_t - w_t-1_H_t|,
≤ 2ηg_t-1_∇^-2Φ(w_t-1) + 1/40M_Φ(15/16)^m_ + 500 + 2α
≤ 2ηg_t-1_∇^-2Φ(w_t-1) + O(),
where the last inequality follows by the choice of m_ in the theorem's statement.
Hence, the quantity u_t - w_t_H_t increases each time by at most 2ηg_t-1_∇^-2Φ(w_t-1) + O(). Therefore, the number of times that the landmark u_t changes is bounded by
O(∑_t=1^T (2ηg_t_∇^-2Φ(w_t) + O())/1/(41 M_Φ)) = O(M_Φ T + M_Φ∑_t=1^T ηg_t_∇^-2Φ(w_t)).
Thus, the overall computational cost of recalculating the Hessians and their inverses at the landmark iterates is bounded by
^_α·(M_Φ T + M_Φ∑_t=1^T ηg_t_∇^-2Φ(w_t)).
where the multiplicative cost ^_α reflects the fact that the instance of in the theorem's statement needs 1±α accurate approximations of the Hessians and their inverses (in the sense of Definition <ref>) at the landmark iterates.
Moreover, needs to compute an -approximate gradient of Φ at every point w^m_t for all t ∈[T] and m ∈ [m_]. Thus, the cost of computing the gradients is ^_· Tlog1/ M_Φ. Finally, the matrix-vector product H_t^-1∇Φ_t(w) in costs O(d^2) work, and so overall the computational cost is
O( (^_ + d^2)· Tlog1/ M_Φ + ^_α·(M_Φ T + M_Φ∑_t=1^T ηg_t_∇^-2Φ(w_t)) ).
§.§ Proof of Theorem <ref>
Note that without having any effect on the algorithm, we can add an arbitrary constant to the barrier Φ. Thus, without loss of generality, we assume Φ(w^⋆) = 0, which implies Φ(w) ≥ 0, for all w ∈. We define the restricted comparator class
{ w∈ : Φ(w)≤Φ(w^⋆)+νlog c}.
By <cit.> and the fact that Φ is an (M_Φ,ν)-self-concordant barrier for , we have that
_c ⊆,
and so it suffices to bound the regret against comparators in . Fix w̃∈. Under the assumptions of the theorem, the preconditions of Theorem <ref> are satisfied and so we have,
∑_t=1^T (ℓ_t(w_t)- ℓ_t(w̃)) ≲1/ηΦ(w̃) + η∑_t=1^T g_t_∇^-2Φ(w_t)^2 + ∑_t=1^T g_t_∇^-2Φ(w_t),
= 1/ηνlog c + η b^2 T + Tb, (since w̃∈ and g_t_∇^-2Φ(w_t)≤ b)
= 2 b √(ν T log c) + b √(ν T),
where in the last step we used the choices of η and in (<ref>). Combining this with (<ref>) implies the desired regret bound. The bound on the computational complexity follows immediately from Theorem <ref>, the fact that g_t_∇^-2Φ(_t)≤ b, and the choices of η and in (<ref>).
§.§ Proof of Theorem <ref>
Similar to the proof of Theorem <ref>, and without loss of generality, we assume that Ψ is zero at its minimum, i.e. Ψ(w^⋆) = 0. We define the restricted comparator class
{ w∈ : Ψ(w)≤Ψ(w^⋆)+νlog T}.
By <cit.> and the fact that Ψ is an (M_Ψ,ν)-self-concordant barrier for , we have that
_1/T⊆.
On the other hand, by Lemma <ref> we have that
sup_w∈∑_t=1^T (ℓ_t(w_t)-ℓ_t(w)) ≤sup_w̃∈∑_t=1^T (ℓ_t(w_t)-ℓ_t(w̃)) + O(√(T)).
Combining this with (<ref>) implies that it suffices to bound the regret against comparators in . Fix w̃∈. Note that since Φ is equal to Ψ plus a quadratic, Φ is also a self-concordant function with constant M_Φ=M_Ψ<cit.>. Thus, under the assumptions of the theorem the preconditions of Theorem <ref> are satisfied and so we have,
∑_t=1^T (ℓ_t(w_t)- ℓ_t(w̃)) ≲1/ηΦ(w̃) + η∑_t=1^T g_t_∇^-2Φ(w_t)^2 + ∑_t=1^T g_t_∇^-2Φ(w_t).
Now, by the choice of Φ, we have that
g_t_∇^-2Φ(w_t)≤ Rg_t/√(ν)≤ RG/√(ν).
Moreover, from the condition that ⊆(R), (<ref>) and the fact that Ψ(w^⋆) = 0, we have for all w∈:
Φ(w) ≤νlog T + ν/2.
Plugging this into (<ref>) and using that w̃∈, we get
∑_t=1^T (ℓ_t(w_t)- ℓ_t(w̃)) = 1/ηνlog T + 1/2η + ηR^2 G^2/ν T + T RG/√(ν),
= 2 R G √(T log T) + 5/2R G √(T),
where in the last step we used the choices of η and in (<ref>). Combining this with (<ref>) implies the desired regret bound. The bound on the computational complexity follows from the computational complexity in Theorem <ref> and the fact a gradient Oracle _^(Φ) [resp. Hessian Oracle _α^(Ψ)] for Φ(·) = Ψ(·) + ν/2 R^2·^2 can be implemented with one call to _^(Ψ) [resp. _α^(Ψ)] plus d arithmetic operations.
§ PROOFS OF THE TECHNICAL LEMMAS
§.§ Proof of Lemma <ref>
Throughout, we let h be the Newton step based on the exact gradient ∇Φ(y):
h = -H^-1∇Φ(y).
Recall that ỹ^+ and y^+ from the lemma's statement satisfy
ỹ^+ = y + h and y^+ = y - H^-1∇_y.
Bounding the Newton decrement at ỹ^+. First, we bound the Newton decrement at ỹ^+.
By definition, the square of the Newton decrement at ỹ^+=y + h is
λ(ỹ^+,Φ) = ∇Φ(y + h)^⊤∇^-2Φ(y + h)∇Φ(y + h).
Now for the vector z defined below, we define the function F as
z ≜∇^-2Φ(y+h) ∇Φ(y+h) and
F(y) ∇Φ(y)^⊤ z.
The partial derivative of F in direction h is given by
DF(y)[h] = -h^⊤∇^2 Φ(y) z,
= -∇Φ(y)^⊤ H^-1∇^2 Φ(y) z,
= -∇Φ(y)^⊤ H^-1/2 H^-1/2∇^2 Φ(y) H^-1/2 H^1/2 z ,
= -∇Φ(y)^⊤ H^-1/2 (H^-1/2∇^2 Φ(y) H^-1/2 - I) H^1/2 z - ∇Φ(y)^⊤ z.
Now, by (<ref>), we have
H^-1/2∇^2 Φ(y) H^-1/2 - I≤α/1-α.
Thus, the first term on the right-hand side of (<ref>) can be bounded as
∇Φ(y)^⊤ H^-1/2 (H^-1/2∇^2 Φ(y) H^-1/2 - I) H^1/2 z
≤α/1-α∇Φ(y)^⊤ H^-1/2H^1/2 z,
=α/1-α∇Φ(y)_H^-1z_H,
≤α(1+α)/(1-α)^2∇Φ(y)_∇^-2Φ(y)z_∇^2 Φ(y),
= α(1+α)/(1-α)^2λ(y, Φ)·z_∇^2 Φ(y).
Plugging this into (<ref>) and using the definition z in (<ref>), we obtain
|DF(y)[h] + F(y)| ≤α(1+α)/(1-α)^2λ(y, Φ)·z_∇^2 Φ(y).
Now, let (s) y + sh and F∘(s) F(y(s)).
With this, we have
(F∘)'(s) -(F∘)'(0) = h^⊤(∇^2 Φ(y(s)) - ∇^2 Φ(y(0)))z.
On the other hand, by Lemma <ref> and our assumption on λ(y, Φ), we have
(s) - y_∇^2 Φ(y) = sh_∇^2 Φ(y)≤s/1-α∇Φ(y)_∇^-2Φ(y) ≤1/1-αλ(y,Φ),
< 1/30M_Φ.
Thus, by Lemma <ref>, we have
(1-M_Φ(s)-y_∇^2 Φ(y)^2)^2∇^2 Φ(y) ≼∇^2 Φ(y(s)) ≼1/(1-M_Φ(s)-y_∇^2 Φ(y))^2∇^2 Φ(y).
This, together with (<ref>) also implies that
(1 - 3 M_Φ(s)-y_∇^2 Φ(y))∇^2 Φ(y) ≼∇^2 Φ(y(s)) ≼ (1 + 3 M_Φ(s)-y_∇^2 Φ(y))∇^2 Φ(y).
After rearranging, this becomes
-3 M_Φ(s)-y_∇^2 Φ(y)∇^2 Φ(y) ≼∇^2 Φ(y(s)) - ∇^2 Φ(y) ≼ 3 M_Φ(s)-y_∇^2 Φ(y)∇^2 Φ(y).
Combining this with (<ref>) and the fact that α < 1/4 gives
-4 M_Φλ(y,Φ) ∇^2 Φ(y) ≼∇^2 Φ(y(s)) - ∇^2 Φ(y) ≼ 4 M_Φλ(y,Φ) ∇^2 Φ(y).
Finally, by Lemma <ref> and (<ref>), we obtain the following bound on the right-hand side of (<ref>):
(F∘)'(s) -(F∘)'(0)
≤
6 M_Φλ(y,Φ) h_∇^2 Φ(y)z_∇^2 Φ(y).
Integrating this over s gives
∇Φ(y + h)^⊤ z = (F∘)(1)
=(F∘)(0) + (F∘)'(0) + ∫_0^1 ((F∘)'(s) -(F∘)'(0)) ds
≤∇Φ(y)^⊤ z + DF(y)[z] + 6 M_Φλ(y,Φ) h_∇^2 Φ(y)z_∇^2 Φ(y),
≤∇Φ(y)^⊤ z + DF(y)[z] + 6 M_Φ/1-αλ(y,Φ)^2 z_∇^2 Φ(y),
where the last inequality follows by (<ref>). Now, note that from (<ref>) (with s=1) and the assumption that λ(y, Φ) ≤ 1/(40M_Φ), we have
∇^2 Φ(y) ≼10/9∇^2 Φ(y+h).
This implies
z_∇^2Φ(y)≤10/9z_∇^2Φ(y+h) = 10/9λ(y+h, Φ).
Plugging this into (<ref>) and using (<ref>), we get
∇Φ(y + h)^⊤ z ≤|∇Φ(y)^⊤ z + DF(y)[h]| + 20 M_Φ/3(1-α)λ(y,Φ)^2 λ(y+h,Φ)
≤α(1+α)/(1-α)^2λ(y, Φ)z_∇^2 Φ(y) + 20 M_Φ/3(1-α)λ(y,Φ)^2 λ(y+h, Φ),
≤10α(1+α)/9(1-α)^2λ(y, Φ)λ(y+h, Φ) + 20 M_Φ/3(1-α)λ(y,Φ)^2 λ(y+h, Φ).
Now, from the definition of z, we have
∇Φ(y + h)^⊤ z = λ(y+h, Φ)^2.
Thus, since α < 1/4, we finally get
λ(y+h, Φ) ≤ 9M_Φλ(y, Φ)^2 + 2.5αλ(y, Φ).
This proves the first part of the claim, i.e. (<ref>).
Bounding the Newton decrement at y^+. We now bound the Newton decrement at y^+= y - H^-1∇_y in terms of that of ỹ^+= y +h. Note that
ỹ^+ - y^+ = H^-1(∇Φ(y) - ∇_y).
On the other hand, from (<ref>) and (<ref>), we have
∇^2 Φ(ỹ^+) ≼ (1 + 3M_Φh)∇^2 Φ(y) ≼7/4∇^2 Φ(y) ≼7/4(1+α)H,
which implies that
∇^2 Φ(ỹ^+)^1/2H^-1∇^2 Φ(ỹ^+)^1/2≤7/4(1+α).
Therefore,
ỹ^+ - y^+_∇^2 Φ(ỹ^+)^2
= (∇Φ(y) - ∇_y)^⊤ H^-1∇^2 Φ(ỹ^+) H^-1 (∇Φ( y) - ∇_y),
=(∇Φ(ỹ^+) - ∇_y)^⊤∇^-1/2Φ(ỹ^+)(∇^2 Φ( ỹ^+)^1/2 H^-1∇^2 Φ(ỹ^+)^1/2)^2 ∇^-1/2Φ(ỹ^+)(∇Φ(y) - ∇_y),
≤49/16(1+α)^2 (∇Φ(y) - ∇_y)^⊤∇^-2Φ(ỹ^+)(∇Φ(y) - ∇_y),
=49/16(1+α)^2∇Φ(y) - ∇_y_∇^-2Φ(ỹ^+).
Combining this with (<ref>) and our assumption on ∇_y from (<ref>) implies
ỹ^+ - y^+_∇^2 Φ(ỹ^+)≤7/2(1+α)∇Φ(y) - ∇_y_∇^-2Φ(y)≤ 5(1+α).
Thus, by Lemma <ref>, we have
((1-5(1+α) M_Φ)^2 - 1) ∇^2 Φ(y^+) ≼∇^2 Φ(ỹ^+) - ∇^2 Φ(y^+) ≤(1/(1-5(1+α) M_Φ)^2 - 1)∇^2 Φ(y^+).
Since < 1/(40M_Φ), we get
-20(1+α)∇^2 Φ(y^+) ≼∇^2 Φ(ỹ^+) - ∇^2 Φ(y^+) ≼ 20(1+α)∇^2 Φ(y^+).
Now, by Lemma <ref> instantiated with x=ỹ^+ and w =y^+, we have
√((∇Φ(ỹ^+) - ∇Φ(y^+))^⊤∇^-2Φ(ỹ^+) (∇Φ(ỹ^+) - ∇Φ(y^+))) ≤y^+ - ỹ^+_∇^2 Φ(ỹ^+)/(1-M_Φy^+ - ỹ^+_∇^2 Φ(ỹ^+)),
≤ 10(1+α),
where in the last inequality we used (<ref>) and the fact that ≤ 1/(40M_Φ).
Using the triangle inequality, we can bound the Newton decrement at y^+ as
λ(y^+, Φ) ≤√((∇Φ(ỹ^+) - ∇Φ(y^+))^⊤∇^-2Φ(y^+)(∇Φ(ỹ^+) - ∇Φ(y^+)))
+ √(∇Φ(ỹ^+)^⊤∇^-2Φ(y^+)∇Φ(ỹ^+)),
≤ 2 √((∇Φ(ỹ^+) - ∇Φ(y^+))^⊤∇^-2Φ(ỹ^+)(∇Φ(ỹ^+) - ∇Φ(y^+)))
+ √(∇Φ(ỹ^+)^⊤∇^-2Φ(y^+)∇Φ(ỹ^+)), (by (<ref>) and ≤1/(40 M_Φ))
≤ 20(1+α) + (1+20(1+α)) ·λ(ỹ^+, Φ),
where the last inequality follows by (<ref>) and (<ref>). This completes the proof.
§.§ Proof of Lemma <ref>
By definition of (w_t^m) in Algorithm <ref>, we have w^1_t = w_t-1 and w_t = w_t^m_. We show properties (<ref>), (<ref>), (<ref>), and <ref> using induction over m =1,…, m_.
Base case. We start with the base case; m = 1. Note that from the assumption in (<ref>) and definition of w^1_t, we have
w^1_t - u_t-1_∇^2Φ(u_t-1)≤ 1/(40M_ϕ).
Now, by definition of the Oracle ^_α and the fact that H_t-1= ^_α(u_t-1) (see Algorithm <ref>) with α=0.001, we have
(1-0.001)∇^2 Φ(u_t-1) ≼ H_t-1≼ (1+0.001)∇^2 Φ(u_t-1).
Combining this with (<ref>) implies property (<ref>) for the base case. Furthermore, since w^1_t=w_t-1 (by definition), (<ref>) follows trivially for the base case.
Now, using that Φ_t(w) = Φ_t-1(w) + η g_t-1^⊤ w, we have
λ(w^1_t, Φ_t)^2 =λ(w_t-1, Φ_t)^2
= (∇Φ_t-1(w_t-1) + ηg_t-1)^⊤∇^-2Φ(w_t-1) (∇Φ_t-1(w_t-1) + ηg_t-1)
≤ 2∇Φ_t-1(w_t-1)^⊤∇^-2Φ(w_t-1) ∇Φ_t-1(w_t-1) + 2η^2g_t-1^⊤∇^-2Φ(w_t-1) g_t-1
= 2λ(w_t-1, Φ_t-1)^2 + 2η^2g_t-1^⊤∇^-2Φ(w_t-1) g_t-1
≤ 2α^2 + 2η^2 b^2 ≤ 1/(2500M_Φ^2),
where the last inequality follows by (<ref>) and the fact that g_t-1_∇^-2Φ(w_t-1)≤ b. This shows property (<ref>) for the base case. Thus, by Lemma <ref>, we have, for w^⋆_t∈_w∈Φ_t(w),
w^1_t - w^⋆_t_∇^2 Φ(w^1_t) = w^1_t - w^⋆_t_∇^2 Φ_t(w^1_t) ≤λ(w^1_t, Φ_t)/(1-M_Φλ(w^1_t, Φ_t)) ≤1/49M_Φ.
Now, combining (<ref>) with the fact that u_t-1 - w^1_t_∇^2 Φ(u_t-1) = u_t-1 - w_t-1_∇^2 Φ(u_t-1)≤ 1/(40M_Φ) (see (<ref>)) and Lemma <ref>, we obtain
(31/32)^2 H_t-1≼∇^2 Φ(w^1_t) ≼(32/31)^2 H_t-1,
Plugging (<ref>) into (<ref>), we get
w^1_t - w^⋆_t_H_t-1≤ 1/(40M_Φ).
Now, by the triangle inequality
u_t-1 - w^⋆_t_H_t-1 ≤u_t-1 - w^1_t_H_t-1 + w^1_t - w^⋆_t_H_t-1
≤ 1/(20 M_Φ),
where in the last inequality we used (<ref>) and (<ref>). Combining (<ref>) with (<ref>) and Lemma <ref>, we get
(4/5)∇^2 Φ(w^⋆_t) ≼ H_t-1≼ (5/4)∇^2 Φ(w^⋆_t).
Combining Equations (<ref>) and (<ref>) implies property (<ref>) for the base of Induction.
Furthermore, note that from Lemma <ref>:
w^1_t - w^⋆_t_∇^2 Φ(w^⋆_t)≤50/49λ(w_t^1, Φ_t) ≤1/49M_Φ,
which shows property (<ref>) for the base case.
Induction step.
Now, assume that properties (<ref>), (<ref>), (<ref>), and (<ref>) hold for m≥ 1. We will show that these properties hold for m+1. From the induction hypothesis, we have
w^m_t - u_t-1_H_t-1≤1/12 M_Φ,
which combined with (<ref>) and Lemma <ref> implies
0.84∇^2 Φ(w^m_t) ≼ H_t-1≼ 1.2∇^2 Φ(w^m_t).
Thus, by Lemma <ref> (instantiated with c = 1/5) and the fact that λ(w^m_t, Φ_t) ≤ 1/(40M_Φ) (by the induction hypothesis), we get that for w̃_t^m+1w^m_t - H_t-1^-1∇Φ_t(w^m_t):
λ(w̃_t^m+1, Φ_t) ≤ 9M_Φλ(w^m_t, Φ_t)^2 + 2.5cλ(w^m_t, Φ_t)
≤ (7/8)λ(w^m_t, Φ_t).
Again, by Lemma <ref> with c=1/5, we have
λ(w^m+1_t, Φ_t) ≤ 20(1+c) + (1+20(1+c)) λ(w̃_t^m+1, Φ)
≤ 25 + (15/16)λ(w^m_t, Φ_t)
≤ 1/(50 M_Φ).
By the induction hypothesis, we also have that λ(w_t^m, Φ_t) ≤(15/16)^m-1λ(w^1_t, Φ_t) + 500. Combining this with (<ref>), we get
λ(w_t^m+1, Φ_t) ≤(15/16)^mλ(w^1_t, Φ_t) + 500.
This shows that (<ref>) holds with m replaced by m+1.
Next, we show that (<ref>) holds with m replaced by m+1. Combining (<ref>) with Lemma <ref> implies
w^m+1_t - w^⋆_t_∇^2 Φ(w^⋆_t) ≤λ(w^m+1_t, Φ_t)/(1-M_Φλ(w^m+1_t, Φ_t)) ≤
1/(49M_Φ).
This, together with (<ref>) gives
w^m+1_t - w^⋆_t_H_t-1≤ 1/(32M_Φ).
Combining (<ref>) with (<ref>) gives:
w^m+1_t - u_t-1_H_t-1≤1/12 M_Φ,
which proves that (<ref>) holds with m replaced by m+1.
Next, we show that (<ref>) holds with m replaced by m+1. By (<ref>) and Lemma <ref>, we have
w^m+1_t - w^⋆_t_∇^2 Φ(w^⋆_t) ≤50/49(15/16)^mλ(w_t^1, Φ_t) + 240,
≤1/49M_Φ(15/16)^m + 240≤1/20M_Φ,
where in the last inequality we used (<ref>) and the bound on in the lemma's statement. This shows that (<ref>) holds with m replaced by m+1.
Next, we show that (<ref>) holds with m replaced by m+1. By plugging (<ref>) into (<ref>), we get
w^m+1_t - w^⋆_t_H_t-1≤1/40M_Φ(15/16)^m + 300.
On the other hand, by (<ref>), we have
λ(w^1_t, Φ_t) ≤√(∇Φ_t-1(w_t-1)^⊤ (∇^2 Φ(w_t-1))^-1∇Φ_t-1(w_t-1))
+ √(η^2 g_t-1^⊤ (∇^2 Φ(w_t-1))^-1 g_t-1)
= λ(w_t-1, Φ_t-1) + ηg_t-1_∇^-2Φ(w_t-1).
Plugging this into (<ref>) and using (<ref>), we get
w^1_t - w^⋆_t_∇^2 Φ(w^1_t) ≤50/49 (λ(w_t-1, Φ_t-1) + ηg_t-1_∇^-2Φ(w_t-1)),
≤50/49(α + ηg_t-1_∇^-2Φ(w_t-1)),
where the last inequality follows by (<ref>). Combining (<ref>) with (<ref>) (instantiated with m=1),
w^1_t - w^⋆_t_H_t-1≤ 2(α + ηg_t-1_∇^-2Φ(w_t-1)).
Now, by (<ref>), (<ref>), and the triangle inequality, we have
w^1_t - w^m+1_t_H_t-1≤ 2ηg_t-1_∇^-2Φ(w_t-1) + 1/40M_Φ(15/16)^m + 500 + 2α.
Via another triangle inequality, we get
|u_t-1 - w^m+1_t_H_t-1 - u_t-1 - w_t-1_H_t-1| ≤ 2ηg_t-1_∇^-2Φ(w_t-1) + 1/40M_Φ(15/16)^m
+ 500 + 2α.
This shows that (<ref>) holds with m replaced by m+1. Finally, combining (<ref>) with (<ref>) implies
1/2∇^2 Φ(w^m_t) ≼∇^2 Φ(w^⋆_t) ≼ 2∇^2 Φ(w^m_t),
which completes the proof.
§ HELPER LEMMAS
Let y∈ and H∈^d× d be such that (1-c)∇^2 Φ(y) ≼ H ≼ (1+c)∇^2 Φ(y). Then, for h -H^-1∇Φ(y), we have
h_∇^2 Φ(y)≤1/1-c∇Φ(y)_∇^-2Φ(y).
We can write
h_∇^2 Φ(y) =H^-1∇Φ(y)_∇^2 Φ(y)
= √(∇Φ(y)^⊤ H^-1∇^2 Φ(y) H^-1∇Φ(y))
= √(∇Φ(y)^⊤∇^2 Φ(y)^-1/2(∇^2 Φ(y)^1/2 H^-1∇^2 Φ(y)^1/2)^2 ∇^2 Φ(y)^-1/2∇Φ(y)).
For the middle matrix ∇^2 Φ(y)^1/2 H^-1∇^2 Φ(y)^1/2 we have that
1/1+cI ≼∇^2 Φ(y)^1/2 H^-1∇^2 Φ(y)^1/2≼1/1-cI,
since (1-c)∇^2 Φ(y) ≼ H ≼ (1+c)∇^2 Φ(y) by assumption. Plugging this back into (<ref>), we get
h_∇^2 Φ(y)≤1/1-c∇Φ(y)_∇^2 Φ(y)^-1.
If -B ≼ A ≼ B are symmetric matrices and B is PSD, then
x^⊤ A y ≤x_B y_B.
Linear Distance Metric Learning with Noisy Labels
Meysam Alishahi [email protected]
School of Computing
University of Utah
Salt Lake City, UT 84112, USA
Anna Little [email protected]
Department of Mathematics and the Utah Center For Data Science
University of Utah
Salt Lake City, UT 84112, USA
Jeff M. Phillips [email protected]
School of Computing
University of Utah
Salt Lake City, UT 84112, USA
July 31, 2023
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In linear distance metric learning, we are given data in one Euclidean metric space and the goal is to find an appropriate linear map to another Euclidean metric space which respects certain distance conditions as much as possible. In this paper, we formalize a simple and elegant method which reduces to a general continuous convex loss optimization problem, and for different noise models we derive the corresponding loss functions.
We show that even if the data is noisy, the ground truth linear metric can be learned with any precision provided access to enough samples, and we provide a corresponding sample complexity bound.
Moreover, we present an effective way to truncate the learned model to a low-rank model that can provably maintain the accuracy in loss function and in parameters – the first such results of this type. Several experimental observations on synthetic and real data sets support and inform our theoretical results.
Keywords: Linear metric learning, Mahalanobis distance, positive semi-definite matrix
§ INTRODUCTION
The goal of distance metric learning is to map data in a metric space into another metric space in such a way that the distance between points in the second space optimizes some condition on the data.
Early work in this area focuses mostly on the Euclidean to Euclidean setting, and specifically on the case of learning linear transformations. For data X ∈^n × d, it attempts to learn a Mahalanobis distance d_M : ^d ×^d → as
d_M(x, y) = x - y_M = √((x-y)^t M (x-y)).
This is a metric on the original space ^d as long as M is positive definite. We can decompose M as M = A A^t for A ∈^d × d. Then A can be used as a map so X' = X A is a point set and in the new space the standard Euclidean distance x' - y' = x - y_M, where x' = A^t x and y' = A^t y.
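For concreteness, the following small numpy check illustrates this equivalence between the Mahalanobis distance under M = A A^t and the Euclidean distance after applying A^t; the matrices here are arbitrary examples, not learned quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d))
M = A @ A.T                          # positive semi-definite by construction

x, y = rng.normal(size=d), rng.normal(size=d)

# Mahalanobis distance in the original space
d_M = np.sqrt((x - y) @ M @ (x - y))

# Euclidean distance after mapping with A^t
d_euclid = np.linalg.norm(A.T @ x - A.T @ y)

assert np.isclose(d_M, d_euclid)
```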
Linear metric learning has been studied in <cit.> for kNN classification, in <cit.> via margin/distance optimization, in <cit.> via discriminant analysis, and in <cit.> via Jeffrey divergence.
Many of these linear methods also propose kernelized versions, and kernelized metric learning was also considered in <cit.>.
But the current state of the art uses arbitrarily complex neural encoders that attempt to optimize the final objective with very little restriction on the form or structure of the mapping <cit.>. Merging with the area of feature engineering <cit.>, these approaches are an integral element of information retrieval <cit.>, natural language processing <cit.>, and image processing <cit.>. To access further details, readers can refer to two well-conducted surveys <cit.>.
In this work, we revisit linear distance metric learning. We posit that there are two useful extremes in this problem, the anything-goes non-linear approaches mentioned above, and the very restrictive linear approaches. The linear approaches exhibit a number of important properties which are essential for certain applications:
* When the original coordinates of the data points have meaning, but for instance are measured in different units (e.g., inches and pounds), then one may want to retain that meaning and interpretability while making the process invariant to the original underlying (and often arbitrary) choice of units.
* Many geometric properties such as linear separability, convexity, straight-line connectivity, vector translation (linear parallel transport) are preserved under affine transformations. If such features are assumed to be meaningful on the original data, then they are retained under a linear transformation.
* Some physical equations, such as those describing ordinary differential equations (ODEs) can be simulated through a linear transform <cit.>. We will demonstrate an example application of this (in Section <ref>) where because of changing units, it is not clear how to measure distance in the original space, and locality based learning can be more effectively employed after a linear transformation.
While several prior works have already explored linear distance metric learning <cit.>, they often reduce to novel optimization settings where specially designed solvers and analysis are required. For instance, <cit.> utilize a clever subgradient descent formulation to ensure the learned retains its positive-definiteness.
In our work we provide a simple and natural formulation that converts the linear distance metric learning task into a simple supervised convex gradient descent procedure, basically a standard supervised classification task where any procedure for smooth convex optimization can be employed.
Formulation.
Specifically, we assume N i.i.d. observations (x_i, y_i) ∈^d ×^d and each pair is given a label ℓ_i ∈{ Far, Close}. Our goal is to learn a positive semi-definite (p.s.d.) matrix M and a threshold τ≥ 0 so that x_i - y_i_M^2 ≥τ if ℓ_i = Far and x_i - y_i_M^2 < τ if ℓ_i = Close.
Towards solving this we formulate an optimization problem
min_[ τ≥ 0; M ≽ 0 ]
R_N(M,τ) =
min_[ τ≥ 0; M ≽ 0 ]1/N∑_i=1^N L(x_i,y_i,ℓ_i; M, τ)
where L(x_i,y_i,ℓ_i; M, τ) is a loss function that penalizes the mismatch between the observed label and the model-predicted label. Then we propose to optimize this in an (almost, except for τ≥ 0) unconstrained setting, where we can apply standard techniques like (stochastic) gradient descent:
min_[ τ≥ 0; A ∈ℝ^d × d ]
R_N(A A^t,τ)
Our core results.
We analyze this simple, flexible, and powerful formulation and show that:
* This optimization problem is convex over M, τ. Moreover, while optimizing over A is not convex, we can leverage an observation of <cit.> to show that the minimizer A^* of the unconstrained formulation generates A^* (A^*)^t, which is the minimizer of the convex, but (positive semi-definite) constrained, formulation over M.
* The sample complexity of this problem is N_d(ε,δ) = O(1/ε^2(log(1/δ) + d^2 log(d/ε))). More specifically, let f be the pdf of the distribution from which difference pairs x - y are drawn, let ℓ be the (noisy) label associated with a pair, and let R(M,τ) = 𝐄_x-y∼ f[L(x,y,ℓ; M, τ)] be the expected loss.
Then, given N_d(ε,δ) observations, |R(M̂,τ̂) - R(M^*,τ^*)|≤ε with probability at least 1-δ, where (M̂,τ̂) is the minimizer of R_N.
* If the labels ℓ_i
are observed with unbiased noise, and the loss function is chosen appropriately to match that noise distribution, then R_N still approximates R, and in fact, the minimizers M̂, τ̂ of R_N converge to the true minimizers M^*,τ^* of R.
* Returning a low-rank approximation M̂_k of M̂ can achieve bounded error with respect to |R(M̂_k,τ̂) - R(M^*, τ^*)| and M̂ - M^*_2+|τ̂- τ^*|, as elaborated on just below. To the best of our knowledge, this is the first dimensionality reduction result of this kind.
Reasonable choice for the loss function L: logistic noise.
We assume the labeling of {Close, Far} through the evaluation of x_i-y_i_M^*^2 is noisy.
We will prove that if the noise comes from the logistic distribution, then
L(x_i,y_i,ℓ_i; M, τ) = -logσ(ℓ_i(x_i-y_i_M^2- τ))
serves as an excellent theoretical choice; here σ(x) = 1/(1+e^-x) is known as the logistic function.
We note that prior work <cit.> also considered this special case of our formulation, and provided an empirical study on face identification;
however they did not theoretically analyze this formulation.
As mentioned, our work will show that this form of L is indeed optimal under a Logistic noise model. We also show that if one assumes a different noise model (e.g., Gaussian), then a different loss function would be more appropriate.
Furthermore, we show that irrespective to the amount of unbiased noise, we are able to recover the ground truth parameters if we observe enough noisy data, and we provide precise sample complexity bounds.
Section <ref> also confirms these theoretical results with careful experimental observations and demonstrates that the method is in fact robust to misspecification of the noise model.
Dimensionality Reduction.
It is natural to ask if linear distance metric learning approaches can be used for linear dimensionality reduction. That is, if one restricts to a rank-k positive semi-definite M_k, then we can write M_k = A_k A_k^t where A_k ∈^d × k. Hence A_k can be used as a linear map x' = A_k^t x from ^d →^k.
Yet optimizing R_N(A_k A_k^t, τ) with A_k ∈^d × k is not only non-convex, the optimization has non-optimal local minima <cit.>.
Another natural approach is to run the optimization with a full-rank factor, and then truncate it by rounding its smallest d-k singular values down to 0. As far as we know, no previous analysis of a linear DML approach has shown whether this is effective; a direction (singular vector) associated with a small singular value could potentially have out-sized relevance to the cost function R_N that we seek to optimize, and this step may induce uncontrolled error in R_N.
In this paper, we detail some reasonable assumptions on the data necessary for this singular value rounding scheme to have provable guarantees. The key assumption is that the width of the support of the data distribution is bounded by √(F) (for some parameter F),
and either this support must include measure in regions which assign labels of both Close and Far, or the unbiased noise must be large enough to generate some of each label.
Specifically we consider the following algorithm.
1) Sample N = N_d(ε,δ) pairs x, y from a Lebesgue measurable distribution with the width of support at most √(F) in each direction.
2) Solve for Â∈^d × d and τ̂≥ 0 in R_N(Â Â^t, τ) using any convex gradient descent solver.
3) For a positive integer k ≤ d, set the d-k smallest singular values of Â to 0, resulting in the low-rank matrix Â_k. Let γ be the value of the (d-k)th singular value of Â (the largest one rounded to 0).
4) Return M̂_k = Â_k Â_k^t, a rank-k positive semi-definite matrix; a short code sketch of the truncation in steps 3)-4) follows.
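Below is a minimal sketch of the truncation in steps 3)-4); the fitting routine for step 2) (here a hypothetical fit_linear_dml, e.g. a gradient-descent solver over the unconstrained factor as discussed later in the paper) is assumed to be available.

```python
import numpy as np

def truncate_map(A_hat, k):
    """Steps 3)-4): zero out the d-k smallest singular values of the learned
    factor and return the rank-k p.s.d. matrix A_k A_k^t, together with
    gamma, the largest singular value that was rounded to zero."""
    U, s, Vt = np.linalg.svd(A_hat)        # singular values in descending order
    gamma = s[k] if k < len(s) else 0.0
    s_k = s.copy()
    s_k[k:] = 0.0
    A_k = (U * s_k) @ Vt
    return A_k @ A_k.T, gamma

# usage sketch; fit_linear_dml is a hypothetical stand-in for step 2)
# A_hat, tau_hat = fit_linear_dml(Z, labels)
# M_k, gamma = truncate_map(A_hat, k=3)
```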
For this algorithm (formalized in Theorems <ref> and <ref>), we claim with probability at least 1-δ
| R(M̂_k, τ̂) - R(M^*, τ^*) | ≤ε + F γ^2.
Thus, for instance if we draw x - y from a unit ball (so F = 1), set ε' = 2ε, and assume that M^* has d-k eigenvalues less than ε'/4, then with probability at least 1-δ
| R(M̂_k, τ̂) - R(M^*, τ^*) | ≤ε'.
Outline.
In Section <ref> we more carefully unroll this model formulation, and in Section <ref> we show sample complexity and convergence results, including under noise.
In Section <ref> we discuss the optimization procedure and show that the unconstrained optimization approach is provably effective, and useful for dimensionality reduction tasks.
Finally, in Section <ref> we verify our theory on a variety of synthetic data experiments and demonstrate the utility of this linear DML framework on two real data problems that benefit from a learned Mahalanobis distance.
§ MODEL AND KEY OBSERVATIONS
Data and Model Assumptions.
We work under the following data model throughout the paper.
We assume there exists some positive semi-definite M^* ∈^d × d that defines a Mahalanobis distance that we seek to discover. We observe pairs of data points x_i, y_i ∈^d, but we only consider the differences of the pairs z_i = x_i - y_i. Moreover, we assume that all observations z_i ∈^d are i.i.d. from some unknown distribution with pdf f(z).
We also assume there exists a threshold τ^* which generates labels ℓ_i ∈{ Close, Far}; more specifically, z_i is Close if and only if
Label Assumption z_i_M^*^2 + η_i < τ^* ,
where η_i is a noise term. Each η_i is generated i.i.d. from a distribution Noise(η| 0, s) which is symmetric around 0 with scale parameter s>0 uniquely determining the distribution Noise(η| 0, s).
For different distributions, s can have a different meaning. For example, for the normal distribution, s is the standard deviation. For mathematical convenience, we overload “Close” = -1 and “Far” = +1 so ℓ_i ∈{-1,+1}.
Scaling ^*, τ^*, and s by a common parameter does not change the labeling distribution, so w.l.o.g., we remove this degree of freedom in the analysis presented in this paper by setting s=1. Thus what the techniques in this paper ultimately recover is actually ^*/s and τ^*/s.
Moreover, ignoring the issue of noise, each pair (,τ) is unique in how it labels points up to the ratio /τ. The following formalization is proved in Appendix <ref>:
Given two pairs (M_1, τ_1) and (M_2, τ_2), if the two indicator functions
_{z^2_M_1 - τ_1≥ 0} and _{z^2_M_2 - τ_2≥ 0} are pointwise equal, then
M_1/τ_1 = M_2/τ_2.
In order to be able to solve our optimization challenges and bound the sample complexity, we need to make a few simple assumptions on M^*, τ^* and the data distribution. We then have the following assumption on the model:
Model Assumption M^*_2 ≤β and τ^*∈[0, B] .
Note that as we have assumed that s =1, ^* and τ^*, as well as the upper bounds β and B, have been scaled by 1/s.
Since β and B later appear in the sample complexity (see Section <ref>), noise affects the sample complexity through these terms.
Next we assume something about the data we observe.
Assume _1,…,_N ∈ℝ^d are N i.i.d. samples from an unknown distribution with probability density function f() with respect to a Lebesgue measure.
Hence ∫_ℝ^df()x̣ = 1 which implies that the set { f() ≠ 0} is a Lebesgue nonzero measure set.
We also assume that the probability density function f() has bounded support, i.e.,
Data Assumptionmax{^2 f() ≠ 0}≤ F.
By this assumption, we know that almost always ^2≤ F.
Also, by <ref>, for each
∈ = {∈^d × d : is p.s.d., _2 ≤β},
we have
^2_≤_2^2≤β F.
When it is clear by context, we sometimes write ℳ for .
Accordingly, for ∼ f(), almost always
| _i^2_ - τ|≤max{B, β F} ∀ (, τ) ∈× [0,B].
Since the sign of _i^2_ - τ determines the label of , it is a reasonable assumption that
Meta Assumption
B≤β F,
which implies
| _i^2_ - τ|≤β F ∀ (, τ) ∈× [0,B].
Convexity.
Our core objective is optimizing R_N or R over (, τ) such that ≽ 0,τ≥0.
Similar to typical supervised learning problems, we consider loss functions L which for a data point (_i, ℓ_i) are convex functions of ℓ_i (_M^2 - τ). To show the space of valid parameter choices defining model (, τ) is convex, we can show a convex combination of two valid models is still valid.
Consider any two _1, _2 ≽ 0 and τ_1, τ_2 ≥ 0 and an interpolation parameter λ∈ [0,1], then
^2_λ_1 + (1-λ)_2 - (λτ_1 + (1-λ) τ_2)
=
λ (^2__1 - τ_1) + (1-λ)(^2__2 - τ_2).
Hence, the convex interpolation of the argument from those two models is equivalent to the convexly interpolated model ( = λ_1 + (1-λ)_2, τ = λτ_1 + (1-λ) τ_2). Coupled with a convex loss function, this implies that minimizing R_N or R over (, τ) with ≽ 0 and τ≥ 0 is a convex optimization problem. Hence any critical point is a global minimum.
However, restricting ≽ 0 under gradient descent is non-trivial since generically the gradient may push the solution out of that. While manifold optimization methods have been developed for other matrix optimization challenges <cit.>, we develop a simpler unconstrained approach in this work.
§ NOISE OBSERVATIONS AND OPTIMAL LOSS FUNCTIONS
Recall that z_1,…,z_N ∈ℝ^d are N i.i.d. samples from an unknown distribution with probability density function f(z) with bounded support. As the noise distribution Noise(η |0, 1) makes the labeling probabilistic,
for a given z ∼ f(z), the probability that the corresponding label is 1 can be computed as
p(ℓ=1|z; M, τ) = p(η > τ - z_M^2)
= ∫_-∞^z_M^2 - τ Noise(η| 0, 1) dη
= Φ_ Noise(z_M^2 - τ),
where Φ_ Noise(a) = ∫_-∞^a Noise(η| 0, 1) dη.
Observe that
p(ℓ=-1|z; M, τ) = 1 - p(ℓ=1|z; M, τ)
= Φ_ Noise(-1·(z_M^2 - τ)).
Consequently, we have
z, ℓ∼ g(z, ℓ; M, τ) = f(z)Φ_ Noise(ℓ(z^2_M-τ)).
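A small sketch of how labels are drawn under this model; the function name is illustrative, the default cdf is the logistic CDF (sigmoid), and any of the symmetric noise CDFs below can be substituted for it.

```python
import numpy as np

def sample_labels(Z, M_star, tau_star, rng,
                  cdf=lambda a: 1.0 / (1.0 + np.exp(-a))):
    """Draw ell in {-1, +1} with P(ell = +1 | z) = Phi_Noise(||z||_M*^2 - tau*).
    Z has shape (N, d) with rows z_i = x_i - y_i."""
    margins = np.einsum('ij,jk,ik->i', Z, M_star, Z) - tau_star
    p_far = cdf(margins)
    return np.where(rng.random(len(Z)) < p_far, 1, -1)
```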
We can consider various noise pdfs Noise(η| 0, 1) in this section. For the full uniform convergence result, we require some properties: the noise should be symmetric, continuous, non-zero everywhere, and -logΦ_ Noise should be convex and ζ-Lipschitz on [-β F, β F]. Throughout the paper, such a noise model is called simple.
Natural choices for the noise are the logistic, Normal, Laplace, and hyperbolic secant (HS) distributions; see Table <ref> for details and some important constants under our model and data assumptions (for verification of the entries in this table, see Appendix <ref>).
We also considered other noise models like Cauchy, but then -logΦ_ Noise is not convex.
In Figure <ref>, we compare the four simple noise distributions when they share the same mean and variance.
Thus by the <ref>,
(_1,ℓ_1),…,(_N,ℓ_N) i.i.d.∼ g(, ℓ;^*, τ^*) = f()Φ_ Noise(ℓ(^2_^*-τ^*)).
Since we have a probabilistic model, we can use the MLE method to approximate ^*, τ^*.
As one of the main contributions of this work, we will prove that this method works and deduce the corresponding sample complexity.
The average negative log-likelihood of the given data z_1,…,z_N and their labels ℓ_1,…,ℓ_N as a function of M and τ is
NLL(M, τ) = -1/N∑_i=1^N log g(z_i, ℓ_i; M, τ)
= -1/N∑_i=1^N[ log f(z_i) +log p(ℓ_i |z_i; M, τ)]
= -1/N∑_i=1^N log f(z_i)_independent of M,τ-1/N∑_i=1^N log p(ℓ_i |z_i; M, τ)_the loss function.
Therefore, to solve the MLE, we need to find a p.s.d. matrix M and a τ∈[0, B] minimizing
R_N(M,τ) = -1/N∑_i=1^N log p(ℓ_i |z_i; M, τ)
= -1/N∑_i=1^N logΦ_ Noise(ℓ_i(z_i^2_M - τ)).
Therefore, the optimization problem we are dealing with is
min_M≽ 0,τ≥0 R_N(M,τ),
where R_N(M, τ) = -1/N∑_i=1^N logΦ_ Noise(ℓ_i(z_i_M^2 - τ)).
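In code the empirical risk is a one-liner once log Φ_Noise is available; the sketch below uses illustrative names, and the log_cdf argument lets one plug in any of the simple noise models discussed in the following subsections.

```python
import numpy as np

def empirical_risk(M, tau, Z, labels, log_cdf):
    """R_N(M, tau) = -(1/N) sum_i log Phi_Noise(ell_i * (||z_i||_M^2 - tau)).
    For logistic noise, log_cdf = lambda a: -np.logaddexp(0.0, -a), which is
    a numerically stable log sigmoid(a)."""
    margins = np.einsum('ij,jk,ik->i', Z, M, Z) - tau
    return -np.mean(log_cdf(labels * margins))
```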
We will justify that solving this optimization problem with high probability ends up in a guaranteed approximation of (^*, τ^*).
For fixed arbitrary , τ, using Chebyshev's inequality, we know that
R_N(,τ) tends to (in measure induced by g(, ℓ;^*, τ^*))
R(,τ) = -𝔼_z,ℓ∼ g(, ℓ;^*, τ^*)logΦ_ Noise(ℓ(^2_-τ))
= -∫ g(, ℓ;^*, τ^*)logΦ_ Noise(ℓ(^2_-τ))d dℓ
provided that logΦ_ Noise(ℓ(^2_ - τ)) has a bounded variance which is the case since
f() has bounded support.
This observation suggests that we can see R_N(,τ) as the empirical risk function and R(,τ) as the true risk function.
However, what we have access to is only R_N(,τ).
The rest of this subsection can be summarized as follows:
* We prove that both functions R(,τ) and R_N(,τ) are convex, so we are dealing with convex optimization problems.
* We prove that R(,τ) uniquely minimized at (^*, τ^*).
* We show that R_N(,τ) converges uniformly in measure to R(,τ). We also bound the corresponding error for an arbitrarily given confidence bound.
* Combining these results, we conclude that minimizing R_N(,τ) is a good proxy for minimizing R(,τ).
The true loss R(,τ) is uniquely minimized at (^*, τ^*).
First, note that
R(,τ) - R(^*,τ^*)
= 𝔼_,ℓ∼ g(, ℓ;^*, τ^*)(logΦ_ Noise(ℓ(^2_^*-τ^*))/Φ_ Noise(ℓ(^2_-τ)))
= 𝔼_,ℓ∼ g(, ℓ;^*, τ^*)(log f() Φ_ Noise(ℓ(^2_^*-τ^*))/ f() Φ_ Noise(ℓ(^2_-τ)))
= D_KL(g(, ℓ;^*, τ^*) g(, ℓ;, τ))≥ 0.
This indicates that R(,τ) takes its minimum at (^*, τ^*). Moreover, for any (^+, τ^+) at which R(,τ) attains its minimum,
D_KL(g(, ℓ;^*, τ^*) g(, ℓ;^+, τ^+))= 0.
This implies that g(, ℓ;^*, τ^*) = g(, ℓ;^+, τ^+) almost everywhere
(according to the probability measure induced by g(, ℓ;^*, τ^*) over ℝ^d×{-1,1}).
Since μ_L({ f()>0})>0 (Lebesgue measure) and logΦ_ Noise(·) is one-to-one, we can conclude that there is a set S⊆ℝ^d
such that μ_L(S)>0 and for every ∈ S,
^2_^*-τ^* = ^2_^+-τ^+.
By Lemma <ref> in Appendix <ref>, we then conclude
^+ = ^* and τ^+=τ^*.
Although we have proved that the true loss R(,τ) is uniquely minimized at (^*, τ^*), in reality, we do not have the true loss.
Indeed, we only have access to the empirical loss R_N(,τ).
Next we will show that R_N(,τ) is uniformly close to R(,τ) as N gets large, and then conclude that instead of minimizing R(,τ), we can minimize R_N(,τ) to approximate (^*, τ^*). Note that
for two given p.s.d. _1 and _2,
|^2__1 - ^2__2|≤_1 -_2_2 ^2.
For proof, see Appendix <ref>: Observation <ref>.
In the next lemma, using this inequality, we prove that the true loss and empirical loss are both Lipschitz with respect to the metric
d((_1, τ_1), (_2, τ_2)) = _1- _2_2 + |τ_1 - τ_2|.
If logΦ_ Noise(·) is ζ-Lipschitz, then, for any given (_1, τ_1),(_2, τ_2)∈ℳ× [0,B],
* |R(_1, τ_1) - R(_2, τ_2)| < ζ(F+1) d((_1, τ_1), (_2, τ_2)),
* |R_N(_1, τ_1) - R_N(_2, τ_2)| < ζ(F+1) d((_1, τ_1), (_2, τ_2)).
Because of similarity, we only prove the first inequality which follows as below. The proof of the other works the same way.
|R(_1, τ_1) - R(_2, τ_2)|
= | 𝔼_,ℓ[logΦ_ Noise(ℓ(^2__1-τ_1)) - logΦ_ Noise(ℓ(^2__2-τ_2))]|
≤𝔼_,ℓ[ |logΦ_ Noise(ℓ(^2__1-τ_1)) - logΦ_ Noise(ℓ(^2__2-τ_2))|]
≤ζ𝔼_[ | (^2__1- τ_1) - (^2__2-τ_2)|] (logΦ_ Noise(·) is ζ-Lipschitz)
≤ζ𝔼_[ |^t(_1 -_2)|] + ζ𝔼[|τ_1 -τ_2 |]
≤ζ(_1 -_2)_2 𝔼[^2] + ζ |τ_1 -τ_2 | (using (<ref>))
≤ζ(_1 -_2)_2 F + ζ |τ_1 -τ_2 | (<ref>)
< ζ(F+1) d((_1, τ_1), (_2, τ_2)).
As -logΦ_ Noise(·) is a decreasing function, using Equation <ref>, we have
0≤ -logΦ_ Noise(ℓ_i(_i^2_ - τ))≤ -logΦ_ Noise(-β F) = T,
which indicates that the random variables -logΦ_ Noise(ℓ_i(_i^2_ - τ)) are bounded by a value T; see Table <ref>.
In the next theorem, we prove that, with high probability, the empirical loss R_N is everywhere close to the true loss R. We will sketch the proof; for the complete proof, see Appendix <ref>. The full bound for N_d(,δ) appears as (<ref>), it is poly(F,T, log(B), log(β)), which we omit here for simplicity, assuming those terms are constants.
Assume that the noise model Noise(η) is simple.
For any ε,δ>0, define
N_d(ε, δ) =O(1/ε^2[
log1/δ + d^2logd/ε]).
If
N>N_d(ε, δ), then with probability at least 1-δ,
sup_(,τ)∈ℳ× [0, B]|R_N(, τ) - R(, τ)|<ε.
Set α = ε/3ζ(F+1).
Consider ℰ={(_i, τ_i); i = 1,…,m=m(α)} as an α-cover for ℳ×[0, B].
By Lemma <ref>, for every (,τ), there exists an index i∈[m] such that
|R(, τ) - R(_i, τ_i)| < ε/3 and
|R_N(, τ) - R_N(_i, τ_i)| < ε/3
which concludes
|R_N(, τ) - R(, τ)| ≤2ε/3 + |R_N(_i, τ_i) - R(_i, τ_i)|.
Now, using an appropriate upper bound for m(α) = B/α(4 β d √(d) /α)^d^2 in Lemma <ref>, along with Chernoff-Hoeffding bound, and union bound, we conclude the desired result.
Note that combining Theorem <ref> and Theorem <ref>, we conclude that minimizing R_N(,τ) is a good proxy for minimizing R(,τ).
We restate this result in the next theorem.
Assume that the noise model Noise(η) is simple.
For any given ε, δ >0, if N > N(ε/2, δ), then, with probability at least 1-δ,
for any point (, τ̂) minimizing R_N(, τ), we have
0< R(, τ̂) - R(^*, τ^*) < ε.
As N > N(ε/2, δ), by Theorem <ref>, with probability at least 1-δ, we have
|R_N(, τ) - R(, τ)|< ε/2 for all (,τ)∈ℳ× [0,B].
Consequently, with probability at least 1-δ,
R(, τ̂) - ε/2 < R_N(, τ̂)
≤ R_N(^*, τ^*) R_N(, τ) minimized at (, τ̂)
< R(^*, τ^*) + ε/2,
implying that
0≤ R(, τ̂) - R(^*, τ^*) ≤ε.
In the next four subsections, we set the noise to follow the logistic, Gaussian, Laplace, and Hyperbolic secant distributions.
§.§ Logistic distribution as the noise
The probability density function of the logistic distribution is
L(x|μ, s) = 1/4s sech^2(x-μ/2s),
where μ is the mean and s is the scale parameter of this distribution.
The variance of the logistic probability density function is s^2π^2/3.
The logistic distribution looks very much like a normal distribution and is sometimes used as an approximation for it. In this subsection, we assume that the noise has a logistic distribution with μ = 0 and s = 1, i.e.,
Noise(η) = L(η|0, 1) (since scaling ^*, τ^*, and η does not change the labeling distribution, w.l.o.g. we may assume s=1). The Cumulative distribution function of L(x|0, 1) is the sigmoid function
σ(x) = 1/1+e^-x, i.e.,
Φ_ L(x) = σ(x).
As the logistic noise is simple with ζ = 1 (see Table <ref>), Theorem <ref> holds for this noise option.
Plugging it into R_N(M, τ) in Optimization Problem <ref>, we obtain
R_N(M, τ) = -1/N∑_i=1^N logσ(ℓ_i(z_i_M^2 - τ)).
In this setting where we have a closed form for Φ_ L(x) = σ(x), the loss function is computationally easier to work with.
So as the main setting for the paper, we assume that the noise comes from a logistic distribution. Thus although we consider other noise models and loss functions in our experiments, by default we work with the logistic noise and corresponding loss function.
§.§ Normal distribution as the noise
If we consider the noise having a Normal distribution instead of a Logistic distribution,
then we will end up with a probit function in place of the sigmoid function.
Indeed, if we set Noise(η) = 𝒩(η| 0, 1), then
Φ(a) = Φ_ Noise(a) = ∫_-∞^a 𝒩(η| 0, 1)η̣.
The function Φ(a) is known as the probit fuction.
Plugging it into R_N(, τ) in Optimization Problem <ref>, we obtain
R_N(, τ) = -1/N∑_i=1^N logΦ(ℓ_i(_i_^2 - τ)).
Unfortunately, the probit function has no close-form formula, so working with the probit function is not as simple as the logistic function. We will observe in Section <ref> that as the logistic and Gaussian distributions are very similar, the logistic loss does well under Normal noise.
§.§ Laplace distribution as the noise
As the third natural option for noise, we assume that
Noise(η) = Laplace(η | 0, 1) = 1/2e^-|η|.
In this setting,
Φ_ Laplace(a) = ∫_-∞^a 1/2e^-|η| dη= {[ 1/2e^a a≤ 0; ; 1 - 1/2e^-a a≥ 0. ].
Similar to the logistic noise, we have a closed-form formula for Φ_ Laplace(a) in this setting.
So this setting is also convenient to work with.
Note that
-logΦ_ Laplace(a) =
{[ -a + log 2 a≤ 0; ; -log(1 - 1/2e^-a) a≥ 0 ]. ,
which yields a closed-form formula for R_N.
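The piecewise form above translates directly into a short vectorized routine; a sketch with an illustrative name, which can be plugged into the empirical-risk sketch earlier as log_cdf = lambda a: -neg_log_cdf_laplace(a).

```python
import numpy as np

def neg_log_cdf_laplace(a):
    """-log Phi_Laplace(a) for unit-scale Laplace noise, following the
    piecewise closed form above (vectorized over arrays)."""
    a = np.atleast_1d(np.asarray(a, dtype=float))
    out = -a + np.log(2.0)                            # branch a <= 0
    pos = a > 0.0
    out[pos] = -np.log1p(-0.5 * np.exp(-a[pos]))      # branch a > 0
    return out
```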
§.§ Hyperbolic secant distribution as the noise
The hyperbolic secant distribution is a continuous probability distribution whose probability density function is
HS(η|μ, σ) = 1/2σ sech(π/2σ(η-μ))
and whose Cumulative distribution function is
Φ_ HS(η|μ, σ) = 2/πarctan(exp(π/2σ(η-μ))).
The mean and variance of this distribution are μ and σ^2 respectively.
As the last option for noise, we consider Noise(η) = HS(η|0, 1).
In this case, we have
Φ_ HS(a) = 2/πarctan(exp(π/2a)).
Plugging Φ_ HS into R_N(M, τ) in Optimization Problem <ref>, we obtain
R_N(M, τ) = -1/N∑_i=1^N log[
arctan(exp(π/2(
ℓ_i(z_i_M^2 - τ)
)))
]+ Constant ,
and we can ignore the constant term.
§ ALGORITHMS, APPROXIMATION, AND DIMENSIONALITY REDUCTION
§.§ How to solve Optimization problem <ref>
We will prove that solving Optimization Problem <ref> leads us to recover the parameters M^* and τ^*.
We restate that optimization problem here:
min_M≽ 0,τ≥0 R_N(M,τ).
As we need to maintain M to be p.s.d., using gradient descent directly is difficult.
Notice that M is p.s.d. if and only if M = A A^⊤ for some A_d× k where k≤ d; in this case M indeed has rank at most k.
Therefore, if we replace M by A A^⊤ and optimize over A, we no longer need to maintain the p.s.d. condition on M. Then the optimization problem can be rewritten as follows:
min_τ≥ 0, A∈ℝ^d× k R_N(A A^⊤,τ).
If we set k=d, then Optimization <ref> is equivalent to Optimization <ref>.
The only downside of this reformulation is that we lose the convexity by this variable change.
So, we are dealing with a non-convex optimization, and thus, there may be no guarantee that the gradient descent will converge to a global minimum.
Fortunately, the next theorem proved by <cit.> resolves this issue.
We remind the reader that for convex optimization problems, global minimums and stationary points are equivalent.
A local minimizer A^* of Problem <ref> provides a stationary point (global minimum) M = A^*(A^*)^⊤ of Problem <ref>
if A^* is rank deficient ( rank(A^*)<k).
Moreover, if d=k, then any local minimizer A^* of Problem <ref> provides a stationary
point (global minimum) M = A^*(A^*)^⊤ of Problem <ref>.
So, we can use gradient descent with k=d to find a local minimum A^* of Problem <ref>; then, using this theorem, we know that
M=A^*(A^*)^⊤ is a global minimum of Problem <ref>.
Another possible approach is to try some k<d, and if A^* is rank deficient, then again M=A^*(A^*)^⊤ is a global minimum of Problem <ref>.
However, even if we know that the solution to Problem <ref> has rank r< k, we might find A^* to be full rank. Indeed, setting k>r does not imply that A^*(A^*)^⊤ is the global minimum of Problem <ref>. Moreover, Problem <ref> is a generalization of Low-Rank Semi-definite Programming, which is known to be an NP-Hard problem <cit.> (weighted Max-Cut is a special case of it), which indicates that solving it when k<d might be a difficult task.
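As an illustration of the unconstrained approach with k = d, the following is a bare-bones, full-batch gradient-descent sketch for the logistic loss; the function name, initialization, step size, and iteration count are illustrative choices, and a practical implementation would use an off-the-shelf optimizer.

```python
import numpy as np

def fit_linear_dml(Z, labels, k=None, lr=0.1, n_iters=2000, seed=0):
    """Gradient descent on the unconstrained objective R_N(A A^T, tau) with
    the logistic loss.  Z has shape (N, d) with rows z_i = x_i - y_i and
    labels in {-1, +1}.  Returns the p.s.d. matrix A A^T and tau."""
    N, d = Z.shape
    k = d if k is None else k
    rng = np.random.default_rng(seed)
    A = 0.1 * rng.normal(size=(d, k))
    tau = 1.0
    for _ in range(n_iters):
        ZA = Z @ A                                   # rows are A^T z_i
        margins = labels * (np.sum(ZA * ZA, axis=1) - tau)
        s = 0.5 * (1.0 + np.tanh(0.5 * margins))     # stable sigmoid(margins)
        w = (s - 1.0) * labels   # d(-log sigmoid(m_i)) / d(||z_i||_M^2 - tau)
        grad_A = (2.0 / N) * (Z.T @ (w[:, None] * ZA))
        grad_tau = -np.mean(w)
        A -= lr * grad_A
        tau = max(tau - lr * grad_tau, 0.0)          # keep tau >= 0
    return A @ A.T, tau
```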
§.§ How well is (𝐌^*, τ^*) approximated?
Throughout this section, for simplicity of notation and w.l.o.g we assume that ζ F=1.
In this section, we will see that, with high probability, we can approximate (^*, τ^*) with any given precision if N is large enough.
Recall that Theorem <ref> establishes that R(, τ) is uniquely minimized at (^*,τ^*).
Theorem <ref> asserts that if N is large enough, the value of the true loss on parameters minimizing the empirical loss is close to the minimum of the true loss. Although we can infer from this theorem that the error at the ground truth parameters (^*, τ^*) is close to the error at (, τ̂), it is still possible that (^*, τ^*) and (, τ̂) are far from each other w.r.t. the metric
d((^*, τ^*), (, τ̂)) = ^*- _2 + |τ^* - τ̂|.
Recall that the random variable ∈ℝ^d is generated from an unknown distribution with probability density function f() with bounded support.
Let us define the L_1(f)-norm of (,τ) as
(, τ)_L_1(f) = ∫ f()|^t - τ|.
To see that this is a norm, note that if (, τ)_L_1(f) = 0, then Lemma <ref> along with the fact that μ_L({ f()>0})>0 implies that = and τ = 0. The other required properties follow by standard reductions.
This norm naturally induces the following L_1(f)-metric
(_1,τ_1)-(_2,τ_2)_L_1(f) = ∫ f()|(^t_1 - τ_1) - (^t_2 - τ_2)|.
Let Noise(η) be a simple noise and set ω = min_|η|≤β A Noise(η); see Table <ref>. Then for all (,τ)∈ℳ× [0, B]
R(,τ) - R(^*,τ^*) ≥1/2ω^2((,τ) - (^*,τ^*)_L_1(f))^2.
Define
μ__(,τ) to be the measure induced by the probability density function
g(, ℓ;, τ) = f()Φ_ Noise(ℓ(^2_-τ)).
Note that
R(,τ) - R(^*,τ^*)
= 𝔼_,ℓ∼ g(, ℓ;^*, τ^*)(logΦ_ Noise(ℓ(^2_^*-τ^*))/Φ_ Noise(ℓ(^2_-τ)))
= 𝔼_,ℓ∼ g(, ℓ;^*, τ^*)(logf()Φ_ Noise(ℓ(^2_^*-τ^*))/f()Φ_ Noise(ℓ(^2_-τ)))
= D_KL(g(, ℓ;^*, τ^*) g(, ℓ;, τ))
≥1/2(μ__(^*,τ^*) - μ__(,τ)_TV)^2
where μ__(^*,τ^*) - μ__(,τ)_TV is the total variation of the signed measure μ__(^*,τ^*) - μ__(,τ) and
the last line follows from Pinsker's inequality.
So to find a lower bound for |R(,τ) - R(^*,τ^*)|, it suffices to find a lower bound for μ__(^*,τ^*) - μ__(,τ)_TV.
To this end,
μ__(^*,τ^*) - μ__(,τ)_TV = 1/2∫ f()|[Φ_ Noise(ℓ(^2_-τ)) - Φ_ Noise(ℓ(^2_^*-τ^*)) ]|ℓ̣
= 1/2∫ f() |Φ_ Noise'(ξ(, ℓ))[ℓ(^2_-τ) - ℓ(^2_^*-τ^*)]|ℓ̣
= 1/2∫ f() Noise(ξ(, ℓ))|(^2_-τ) - (^2_^*-τ^*)|ℓ̣
≥1/2(min_|ξ|≤β A Noise(ξ))×∫ f() |(^2_-τ) - (^2_^*-τ^*)|ℓ̣
= ω∫ f() |(^2_-τ) - (^2_^*-τ^*)|
= ω(,τ) - (^*,τ^*)_L_1(f),
where the first step is true because of a simple fact relating the total variation distance to the L_1-norm known as Scheffé's Lemma, the second step comes from the Mean Value Theorem, and the fourth step is true since |ξ(, ℓ)|≤β F.
This is valid since ξ(, ℓ) is a value between ℓ(^2_-τ) and ℓ(^2_^*-τ^*) and for each and , |^2_-τ|≤β F. Therefore,
R(,τ) - R(^*,τ^*) ≥1/2ω^2 ·(,τ) - (^*,τ^*)_L_1(f)^2.
The next corollary is an immediate consequence of Theorems <ref> and <ref>. It indicates that, with high probability, (,τ̂) can
approximate (^*,τ^*) with any given precision with respect to the L_1(f)-metric defined in (<ref>) provided
that N is large enough with respect to that precision.
Assume that the noise model Noise(η) is simple.
For ε, δ>0, if N>N(1/2ε^2 ω^2,δ), then with probability 1-δ
(,τ̂) - (^*,τ^*)_L_1(f)≤ε.
The L_1(f)-metric is dependent on the distribution f(), which is unavoidable. The following lemma is more intuitive (for proof see Appendix <ref>).
If f()≥ c>0 for each ∈ B^d(1), then for all (,τ)∈ℳ× [0, B]
(, τ) - (^*,τ^*)_L_1(f)≥cπ^d/2/20Γ(d/2+1)(1/18)^d ((, τ), (^*,τ^*)).
In particular, if f() is uniform on the unit disk, then
(, τ) - (^*,τ^*)_L_1(f)≥1/20(1/18)^d ((,τ), (^*,τ^*)).
Combining Theorem <ref> and Lemma <ref>, we have the following result.
For simplicity of notation, set
C(d) = cπ^d/2/Γ(d/2+1).
Let Noise(η) be a simple noise and set ω = min_|η|≤β A Noise(η).
If f()≥ c>0 for each ∈ B^d(1), then
R(, τ) - R(^*,τ^*) ≥ω^2C(d)^2/800× 18^2d^̣2((, τ), (^*,τ^*)).
In particular, if f() is uniform on B^d(1), then C(d) = 1 and thus
R(,τ) - R(^*,τ^*) ≥ω^2/800× 18^2d^̣2((, τ), (^*,τ^*)).
Now, combining Theorems <ref> and <ref>, we obtain the following result.
Let Noise(η) be a simple noise and set ω = min_|η|≤β A Noise(η).
Also, assume f()≥ c>0 for each ∈ B^d(1).
For any given ε, δ >0, if N > N(ω^2ε^2 C^2(d)/800× 18^2d, δ), then with probability at least 1-δ,
for any point (, τ̂) minimizing R_N(, τ), we have
((,τ̂), (^*,τ^*)) < ε.
We remark that we have not attempted to optimize the constants which appear in the sample complexity bound in Theorem <ref>.
§.§ Rank-deficient case
No work prior to the present paper has proved that we can recover the matrix M^* when it is not full rank,
or has bounded the effect of truncating the derived M̂ to a low-rank M̂_k.
Theorem <ref> indicates that for any given ε>0, if N is large enough then M̂-M^*_2 < ε and |τ̂-τ^*|< ε.
This guarantees that there is a small eigenvalue of M̂ for every small eigenvalue of M^*.
Let α>0 be given and assume the ground truth ^* has k eigenvalues less than α-3ε and k' eigenvalues greater than α with k+k'≤ d.
If the noise model Noise(η) is simple and N > N(ω^2ε^2 C^2(d)/800× 18^2d, δ), then with probability at least 1-δ,
the number of eigenvalues of which are less than α-2ε is at least k and the number of eigenvalues of which are greater than α-ε is at least k'.
Assume that (, τ̂) minimizes R_N(, τ).
By Theorem <ref>, we know -^*_2 < ε
and |τ̂-τ^*|< .
Set _r = -^*. Because of the definition of the spectral norm,
max_≠0_r_2/_2< ε.
Write ^* = - _r and for each i∈[d], notice
σ _i(^*) =min _(W)=n-i+1max __2=1x∈ W( - _r)_2
≤min _(W)=n-i+1max __2=1x∈ W(_2 + _r_2)
≤min _(W)=n-i+1max __2=1x∈ W(_2 + ε)
= ε + min _(W)=n-i+1max __2=1x∈ W_2
= σ_i() + ε,
where σ_i() and σ_i(^*) refer to
the i-th singular values of and respectively.
With a similar approach, we can prove that
σ_i() ≤σ _i(^*) + ε,
which implies that
|σ_i() - σ _i(^*)| < ε ∀ i∈[d],
which completes the proof.
Applying this lemma with α =, we obtain the next Theorem.
Assume that the noise model Noise(η) is simple and ^* has rank 0< r < d.
For a given ε,δ >0, if ε< 1/4σ_r(^*) and N > N(ω^2ε^2 C^2(d)/800× 18^2d, δ), then
has exactly d-r eigenvalues less than 2ε and the rest r eigenvalues are at least 3/4σ_r(^*) with probability 1-δ.
So if we truncate the eigenvalues of which are less than 2ε to zero, we obtain _k of rank r for which
^* -_k_2 < 2ε.
As we are not given ^*, in practice we are unaware of such a gap in the eigenvalues of ^*. Since we only have access to ,
the next Theorem establishes that the loss function still converges under eigenvalue truncation when the corresponding eigenvalues of are small.
Assume that the noise model Noise(η) is simple.
For a given ε,δ >0, assume that (,τ̂) minimizes R_N(,τ) for N>N(ε/2,δ).
Also assume has d - k eigenvalues which are less than γ. Set _k as the rank k matrix obtained from by setting these d-k eigenvalues to zero. Then, with probability at least 1-δ,
0< R(_k, τ̂) - R(^*, τ^*) < γ + ε.
As -_k_2 < γ, using the proof of the first inequality in Lemma <ref>, we obtain
|R(,τ̂) - R(_k,τ̂)| < ζ Fγ = γ,
since we have assumed ζ F=1. On the other hand, by Theorem <ref>, we have
0< R(, τ̂) - R(^*, τ^*) < ε.
Combining these two inequalities implies the desired inequality.
Combining Theorems <ref> and <ref>, we have
((_k,τ̂), (^*,τ^*))< 20√(2)× 18^d/ω C(d)( γ + ε).
Thus, if γ≤εω C(d)/20√(2)× 18^d, then
((_k,τ̂), (^*,τ^*))< 2ε.
§.§ Invariance to Changes in Unit of Input
Clearly, the learned M̂ and τ̂ depend on the units of the feature space.
So, as an interesting question, we can study the behavior of M̂ and τ̂ if we change the units in the original feature space.
Mainly, we want to prove that if we change the units in feature space, we do not need to solve a
new optimization problem to learn a new M̂ and τ̂. Instead, we can recover these parameters from the already solved optimization problem.
Assume that we have a non-singular matrix G_d× d which changes the units and rotates the feature space, and
let z'_i = G z_i and ℓ_i' = ℓ_i. We
want to solve the following optimization problem
min_M≽ 0,τ≥0 R'_N(M,τ) ,
where
R'_N(M,τ) = -1/N∑_i=1^N logσ(ℓ'_i(z'_i^2_M - τ))
= -1/N∑_i=1^N logσ(ℓ_i(z_i^2_G^⊤ M G - τ))
Since G is non-singular, Optimization Problem <ref> is minimized at M̂, τ̂ if and only if Optimization Problem <ref>
is minimized at M̂' = (G^-1)^⊤M̂ G^-1, τ̂' = τ̂.
Hence, the solution is invariant to choice of units, given knowledge of the conversion.
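A quick numerical check of this recovery rule; the matrices below are arbitrary stand-ins for a learned M̂ and a unit-change G, not outputs of the solver.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
G = rng.normal(size=(d, d)) + d * np.eye(d)     # a non-singular change of units
B = rng.normal(size=(d, d))
M_hat = B @ B.T                                 # stand-in for a learned p.s.d. matrix
z = rng.normal(size=d)

# recover the metric in the new units without re-solving the problem
G_inv = np.linalg.inv(G)
M_prime = G_inv.T @ M_hat @ G_inv

# squared distances (and hence the objective R'_N) are unchanged
assert np.isclose((G @ z) @ M_prime @ (G @ z), z @ M_hat @ z)
```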
§ EXPERIMENTAL RESULTS
In Section <ref>, we described the optimal loss functions for four different noise distributions (Logistic, Normal, Laplace, and Hyperbolic Secant). As the Normal noise model ends up with a probit in the loss function, and the probit function has no closed-form formula, we will not use this model in the experiments. Also, since in the real world we may not know the noise distribution in advance, for each model, we consider a variety of possible noise distributions to check the robustness of each model.
We study accuracy for different amounts of noise, sample complexity, and robustness against the amount of noise.
We start with synthetic data, described in Section <ref>, so we can run precisely controlled experiments which are reported in Sections <ref> and <ref>.
In Section <ref>, we compare our model performance with DML-eig (<cit.>).
Then in Section <ref> we apply our methods to some real data experiments well suited to our proposed algorithm. We presented experiments in a manner that facilitates their replication, to make it easy to reproduce, see the GitHub repository by <cit.> containing data and source codes.
§.§ Data generation
We start with d random positive real values λ_1,…,λ_d and then we randomly generate a d× d covariance matrix Σ whose eigenvalues are λ_1,…,λ_d.
To this end, we first randomly generate an orthonormal matrix Q_d× d and then set Σ = QΛ Q^⊤, where Λ is the d× d diagonal matrix whose diagonal entries are λ_1,…,λ_d.
We then independently sample 2N points x_1, y_1,…, x_N, y_N from 𝒩(0,Σ) to generate N pairs (x_i, y_i) for i=1,…, N.
Next we select d nonnegative random real values γ_1,γ_2,…,γ_d≥ 0 as the eigenvalues of the ground truth M^*,
and randomly generate M^* to be a random positive semi-definite matrix with eigenvalues
γ_1,…,γ_d, as we did for Σ.
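A minimal numpy/scipy sketch of this construction is given below; the variable names are illustrative, and the eigenvalues are drawn fresh rather than being the exact seeded values reported later in this section.

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(1)
d, N = 10, 20000

def random_psd(eigenvalues, seed):
    """Random p.s.d. matrix Q diag(eigenvalues) Q^T with a random orthonormal Q."""
    Q = ortho_group.rvs(dim=len(eigenvalues), random_state=seed)
    return Q @ np.diag(eigenvalues) @ Q.T

Sigma = random_psd(rng.uniform(0.0, 1.0, size=d), seed=1)        # data covariance
gamma = np.concatenate([rng.uniform(0.0, 1.0, size=5), np.zeros(5)])
M_star = random_psd(gamma, seed=2)                                # rank-5 ground truth

X = rng.multivariate_normal(np.zeros(d), Sigma, size=N)           # x_1, ..., x_N
Y = rng.multivariate_normal(np.zeros(d), Sigma, size=N)           # y_1, ..., y_N
Z = X - Y                                                         # pair differences z_i = x_i - y_i
```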
We have
𝔼(x-y_M^*^2) = 2 tr(Σ M^*)
provided that x and y are drawn independently with x,y∼ h(x),
where h(x) is a pdf such that 𝔼(x) = 0 and Cov(x) = Σ.
We now choose τ^*>0 not far from 𝔼(x-y_M^*^2) so that we obtain a sufficient number of pairs (x,y) labeled as both Close and Far. More specifically, in Sections <ref>-<ref>, we consider the following setting.
* We assume that the rank of M^*_10× 10 is 5 and randomly and uniformly generate 5 nonzero eigenvalues from [0,1].
With two-digit precision, we obtain 0.32, 0.89, 0.59, 0.13, 0.14 as the 5 nonzero eigenvalues of ^*.
*
We randomly and uniformly select 10 nonzero numbers from (0,1] as the eigenvalues of Σ.
With two digit precision, we obtain
0.73, 0.7, 0.68, 0.59, 0.47, 0.45, 0.21, 0.19, 0.11, 0.04
as the eigenvalues of Σ.
* As we are dealing with fixed random seeds, we obtain 𝔼(x-y_M^*^2) ≈ 1.7.
* To obtain roughly balanced data, we set τ^* = 1.3 and generate 20000 data points. We split the data into 15000 training and 5000 test points.
We now describe the label generation. Note that in the theoretical formulation of the problem, we assume that the noisy labeling process depends on x-y_M^*^2.
However, one could also assume that the noise changes labels directly, independently of x-y_M^*^2. We thus study both of the following settings empirically.
* Noise affects the labeling through x-y_M^*^2 (<ref>).
We consider a noise distribution Noise(0, s) with zero mean and scale parameter s from the Logistic, Gaussian, Laplace, and Hyperbolic Secant distributions.
We then generate
η_1,…, η_N∼ Noise(0, s).
For each pair (x_i, y_i), we set ℓ_i = 1 (“Far”) if x_i-y_i_M^*^2 + η_i ≥τ^*, and we set ℓ_i = -1 (“Close”) if
x_i-y_i_M^*^2 + η_i < τ^*.
We save these labels as D_noisy.
We also save the non-noisy labels to check the model's robustness against noise.
However, we do not use these labels during training.
Indeed, for each pair (x_i, y_i), we set ℓ^*_i = 1 if x_i-y_i_M^*^2 ≥τ^* and we set ℓ^*_i = -1 if
x_i-y_i_M^*^2 < τ^*.
We save these labels as D^*.
* Noise directly affects the labeling (Noisy Labeling).
Here we assume that the noise affects the labels directly by randomly flipping them.
We first generate D^* as described in the previous paragraph.
Then for each i = 1,…, N, we flip a coin whose head chance is p. If the coin is tails, we set ℓ_i = ℓ_i^*; otherwise, we set
ℓ_i ∈{-1,1} randomly with the same chance. We save these labels as D_noisy.
In expectation, p/2 fraction of the labels are mislabeled in D_noisy.
Every other setting is kept the same as in the previous four cases. Although the amount of noise is the same as in the previous settings, i.e. the same number of mistakes are made, this regime is more challenging because in the first case the majority of mistakes occur close to the boundary, while the noisy labeling case results in “big" mistakes. We thus
expect performance to be worse than all four former settings.
As a default, in both settings we set the noise parameter so that 10% of the points are mislabeled.
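The two labeling schemes above can be summarized in a short sketch (assuming the arrays Z, M_star, and tau_star from the generation step; this is illustrative code, not the authors' exact script):

```python
import numpy as np

def labels_through_distance(Z, M_star, tau_star, noise, rng):
    """Setting 1: noise enters through ||x - y||^2_{M*} + eta >= tau*."""
    dist2 = np.einsum('ij,jk,ik->i', Z, M_star, Z)   # ||z_i||^2_{M*} for every pair
    eta = noise(rng, len(Z))                         # e.g. lambda rng, n: rng.logistic(0.0, s, n)
    noisy = np.where(dist2 + eta >= tau_star, 1, -1)      # D_noisy (used for training)
    clean = np.where(dist2 >= tau_star, 1, -1)            # D*      (used only for evaluation)
    return noisy, clean

def labels_random_flip(Z, M_star, tau_star, p, rng):
    """Setting 2: generate D*, then with probability p replace the label by a uniformly
    random one, so about p/2 of the labels are flipped in expectation."""
    dist2 = np.einsum('ij,jk,ik->i', Z, M_star, Z)
    clean = np.where(dist2 >= tau_star, 1, -1)
    replace_mask = rng.random(len(Z)) < p
    random_labels = rng.choice([-1, 1], size=len(Z))
    noisy = np.where(replace_mask, random_labels, clean)
    return noisy, clean
```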
§.§ Logistic Model with Different Noises
Recall the Logistic distribution has density function
L(x|μ, s) = 1/(4s) sech^2((x-μ)/(2s)).
In Subsection <ref>, we saw that if the noise comes from a Logistic distribution, then
R_N(M, τ) = -1/N∑_i=1^N logσ(ℓ_i(z_i_M^2 - τ)) serves as an optimal proxy for our objective.
In this section, we generate labels with different noise types including noisy labeling.
We set the corresponding noise parameter so that the number of mistakes is roughly 10%, and then investigate how the logistic loss function performs on all these types of noise.
We solve Optimization problem <ref> using gradient descent and setting
d = k = 10, learning_rate = 0.5, number_of_iterations = 30000, and learning_decay = .95. We summarize these results in Table <ref>. Note that the model uses only the noisy labels during training; the non-noisy labels are only used to evaluate the model.
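One natural way to implement this optimization, consistent with the d and k hyperparameters above, is plain gradient descent on the factorization M = A Aᵀ with τ projected to be nonnegative. The sketch below is illustrative; the initialization and decay schedule are assumptions, not the authors' exact training script.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -50.0, 50.0)))

def train_logistic_metric(Z, labels, k=10, learning_rate=0.5,
                          num_iters=30000, learning_decay=0.95,
                          decay_every=1000, rng=None):
    """Minimize R_N(M, tau) = -(1/N) sum log sigmoid(l_i (||z_i||^2_M - tau))
    over M = A A^T (A is d x k) and tau >= 0 by full-batch gradient descent."""
    rng = rng or np.random.default_rng(0)
    N, d = Z.shape
    A = 0.01 * rng.standard_normal((d, k))
    tau = 1.0
    for it in range(num_iters):
        ZA = Z @ A                                   # (N, k)
        dist2 = np.sum(ZA * ZA, axis=1)              # ||z_i||^2_{AA^T}
        margin = labels * (dist2 - tau)
        w = labels * (sigmoid(margin) - 1.0)         # derivative of -log sigmoid w.r.t. margin
        grad_A = 2.0 * (Z * w[:, None]).T @ ZA / N   # gradient w.r.t. A
        grad_tau = -np.mean(w)                       # gradient w.r.t. tau
        A -= learning_rate * grad_A
        tau = max(0.0, tau - learning_rate * grad_tau)
        if (it + 1) % decay_every == 0:
            learning_rate *= learning_decay
    return A @ A.T, tau
```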
The logistic model learned the labeling and ignored the noise very well. With logistic noise (first column), it reaches about 90% accuracy on noisy labels (as high as possible with 10% misclassification), and almost 99% accuracy with respect to the ground truth labels. This holds on both the training and test data sets, which indicates that the model is not overfitting. We also observe that as the noise becomes more and more different from the logistic model (Gaussian then Laplace then HS then Noisy Labeling),
the accuracy gets worse. This holds for both the noisy and ground truth labels, and on both the training and test data sets. The deterioration is most prominent in the “noisy labeling” setting, where about 5% accuracy is lost in comparison with the logistic noise.
Next, supporting Theorem <ref>, we summarize the recovery of the model parameters M^* / τ^* in Table <ref>. We observe that the error is fairly small with a relative error of about 0.07 for most noise types, but also that the error increases as the misspecification of the noise type increases.
For instance, noisy labeling achieves only about 0.2-relative error in a Frobenius or spectral sense.
We plot the eigenvalues of M^*/τ^* and M̂/τ̂ in Figure <ref>. The large left figure shows the eigenvalue recovery by the Logistic, Laplace, and HS loss functions when the labels are generated from logistic noise. All do about the same, and capture all eigenvalues fairly well. Four other plots are shown with other types of noise with similar results; the main exception is that with noisy labeling, the top eigenvalue is predicted to be much smaller than the true value. This experiment illustrates that although we focus on the logistic loss function, performance is robust with respect to misspecification of the noise model.
In Figure <ref>, we summarize the accuracy of the Logistic model for different noises as the number of iterations increases; the logistic noise plot is highlighted on the left. Each plot shows the progression of training on the train and test accuracy. As before, there is little difference between test and train accuracy. The accuracy with the noise-induced labels plateaus near 90% which is as good as expected with 10% noise. And the accuracy on the ground truth labels continues to increase (to about 99%) as training continues. The results are similar for other noise types, with convergence to lower plateaus, as expected.
To study the sample complexity behavior, we gradually increase the number of training samples and record the accuracy for each case in Figure <ref>. Again the left figure illustrates the logistic loss on labels generated with logistic noise, plotting results for training and test data, with respect to the noisy and ground truth labels.
Note that when the number of training points is too small (100 or less), the training accuracy is 1 while the test accuracy is low; this indicates overfitting.
However, when the number of training points increases to around 1000, the overfitting problem vanishes, and the training and test accuracy start to align closely. Between 5000 and 10,000 samples, they become indistinguishable in the plots. Similar results hold for the other noise types, again with somewhat lower overall accuracy depending on how close the noise model is to the logistic noise.
§.§ Further Ablation Study
How much noise can break the model.
Theoretically, we proved that the model can recover the ground truth parameters even if the labeling is noisy (under some assumptions).
This result is also supported by the experimental evidence in the previous section when the noise causes 10% mislabeling.
In this section we increase the effect of the noise and check the model resistance.
For a fixed number of training samples (18000 here), we increase the noise variance gradually and log the accuracy of the Logistic loss function when the noise also comes from a Logistic distribution; see Figure <ref>.
In this Figure, the x axis shows the fraction of points that are mislabeled, which depends directly on the variance of the noise distribution,
and the y axis indicates accuracy. We generally observe that the model ignores the noise and recovers true labeling even when the noise is high.
We can see that the train and test accuracies of the model for the noisy labels are aligned with the line y= 1-x, which is as expected.
It indeed indicates that with x amount of noise, the model cannot have better accuracy than 1-x on the noisy labels.
However, for the ground truth labeling, we observe that the model is pretty robust against noise, even when the amount of noise is pretty high. For instance, for around 40% mislabeling, we have around 95% of accuracy for unseen data. However, when the noise perturbs 45% of the labels, it starts to collapse.
When the noise disturbs 50% of the labels, we might assume that random guessing would achieve the best accuracy,
but the model still achieves around 65% accuracy for train and test points with respect to the ground truth labels.
Even though we have 50% mislabeling, the “extreme" examples are correct, so there is more information than purely random labels.
We will study this setting in the next paragraph.
Sample Complexity in High Noise Setting.
Now we focus on the setting where the loss function and the noise are compatible.
In other words, we only consider the Logistic model for Logistic noise, the Laplace model for Laplace noise, and the HS model for HS noise.
As explained in Section <ref>, the scale parameter s for the noise distribution Noise(η | 0,s) directly determines the portion of mislabeling
imposed on the data. In Figure <ref>, we observe that the accuracy drops when the noise gets more intense.
However, in theory, we proved that each model could overcome any amount of noise perturbation.
The noise scale parameter (variance) affects the sample complexity through the constants β and B (see Section <ref>: <ref>
and the discussion after). In Theorems <ref> and <ref>, we proved that irrespective of the amount of noise, we can recover the ground truth parameters if the number of samples is sufficiently large.
However in Figure <ref>, we saw that if the Logistic noise changes around 50% of the labels, then the test accuracy drops to 65%
when we have 15000 samples in the training set. Supported by the theoretical results, we should expect more and more accuracy
if we increase the number of training points.
To verify this, in this experiment we fix the amount of noise at 45% and gradually increase the number of training points to 2× 10^5 samples; Figure <ref> reports the resulting accuracy.
We observe that model accuracy with respect to the ground truth labels is approaching one. With 2× 10^5 training samples, we have around 97%
accuracy on the test data. This observation adheres to our theoretical results about the recovery power of the method.
For further experiments about the behavior of the loss function and a higher dimensional example, see Appendix <ref>.
§.§ Comparing to DML-eig
Inspired by a work of <cit.>, <cit.> developed an eigenvalue optimization framework (called DML-eig) for learning a Mahalanobis metric. They define an acceptable optimization problem and elegantly reduce it to minimizing the maximal eigenvalue of a symmetric matrix problem <cit.>. In their formulation, given pairs of similar data points and pairs of dissimilar data points, the goal is to learn a Mahalanobis metric which preserves similarity and dissimilarity.
More specific, they look for a p.s.d. matrix ^* to maximize the minimal squared distances between dissimilar pairs while the sum of squared distances between similar pairs is at most 1. This setting is comparable to ours since we also look for a matrix ^* (and also a threshold τ^*) to distinguish labeled far pairs of points from labeled close pairs of points. Their work did not study how noise can affect their model, nor if it could potentially recover a “ground truth" model that generates the dissimilar and similar labels.
However, we can compare our model to theirs empirically by passing our similarities and dissimilarities to their model and checking whether their model can handle noise or recover ground truth parameters.
We can also use the M̂_eig learned by their model with the best τ̂ to see how well they can predict the labels.
In Figure <ref>, we compare our model with DML-eig using the data generated in Section <ref>. We set the noise level at 0% or at 10% and use the logistic noise distribution. The number of training points are indicated as the sample size and the test size is always fixed at 5000 points. For the noisy data, we evaluate performance with respect to both the noisy and ground truth labels.
We can see that our model outperforms DML-eig in each setting and for any sample size.
Although the DML-eig model can neutralize the noise (the blue curves in the left and right images are about the same), its accuracy in the noiseless setting is only around 90% at best.
In comparison, our model quickly overcomes the noise and its accuracy approaches 100% (shown as the magenta curves).
In the noisy setting, we train on the noisy labels training data, and show results for both techniques. Then we compare against the test data with respect to both the noisy (dashed curves) and ground truth labels, and report the results. Our approach can recover the parameters even under noise (the magenta curve), and the noisy test data matches the noisy training data (so no over fitting).
On the other hand, DML-eig only achieves about 90% test accuracy with respect to the ground truth labels, and about 85% train and test accuracy with respect to the noisy labels.
Next we observe that the DML-eig model is far less scalable than our approach. This is because it takes several matrix multiplications and eigenvalue solves for each subgradient step.
In Figure <ref>, we compare its training time to our model. While our model always takes less than a second on a sample size up to 3500, we see that DML-eig quickly surpasses the 20 minute mark (1200 seconds) and starts to become intractable.
If we provide our model with 10,000 training points, after 17 seconds, it can reach a test accuracy of 90% with respect to the noisy labels and 99% with respect to the ground truth labels, while DML-eig after 3 hours and 45 minutes cannot do better than 85% and 90% respectively.
§.§ Real Data experiments
In this section we use our methods to solve some real data challenges which demand or benefit from a learned Mahalanobis distance. The first one is from a physical simulation where we aim to find a reduced order model that needs to pass through a linear projection. Thus the learned scaling must be linear, and is desired to be low rank.
The second one is a consumer satisfaction story, and instead of using pairs of data points = -, it directly uses data points as the input, where satisfaction is predicted as a Mahalanobis distance from the origin. In both cases we show our method achieves the objectives with high accuracy.
§.§.§ Finding a Low-Rank Metric for Equations of State Combustion Simulation
We first consider a dataset generated by <cit.> to represent instances of the equations of state of a thermo-chemical reacting system. The goal is to model a combustion process to produce more efficient fuels and for easier CO2 capture and sequestration. The data set consists of 30,000 data points in ℝ^9, with 8 dimensions capturing equations of state, that is, the fractional composition of various chemicals (like O2, H, CO2) present in the system, and the 9th coordinate being temperature <cit.>. The goal is to learn a reduced order model (ROM) which should linearly project and scale <cit.> to a lower dimension, so one can then learn a PDE modeling the physics. The PDE modeling on the ROM is only tractable through a linear transformation, so non-linear approaches are not permitted. And choosing an appropriate scaling is crucial since there is a difference of several orders of magnitude in coordinate ranges.
We consider two ways of generating labels to apply our methodology. The first is based on the best known engineered solution <cit.> which we attempt to replicate from the training part of a test/train split. The second is via how close data points are on a critical simulation value called “mixture fraction.”
For the best known engineered solution, we start with a provided “ground truth" feature transformation matrix A^* ∈ℝ^9 × 3 as found by <cit.>. We set M^* = A^* (A^*)^⊤.
Given the original and the projected data, we choose a threshold and label the disjoint pairs of the original data points as far and close based on their projected distance and the threshold. Now, we have 15,000 pairs of original points, and we divide them into a train set of size 10,000 and a test set of size 5,000. We only use the train pairs of data points and their labels to recover ^*.
Recovering the labels, we obtain 99.57% and 99.51% accuracy on the train and test points, respectively.
We have summarized the results in Tables <ref> and <ref>.
Normalizing data.
The method does not converge when we input the raw data as provided to our solver; this is due to the different scaling of coordinates.
The magnitudes of some coordinates of the data are huge (temperature) and some are very small (CO2 percentage).
In the gradient descent, the learning rate is the same for all variables, so the algorithm does not show convergence on the variables with very small values, even after a large number of steps. We can solve this problem by doing coordinate-wise normalization of the data, thus scaling all coordinates similarly, and this process is labeled Normalized data in Figure <ref>. However, even in this setting, we do not have good parameter recovery; see the last two rows of the second column of Table <ref> with about 0.75 relative error in M̂/τ̂.
To analyze this situation, we compute the covariance of the data and observe that the variance of the data in some directions is almost zero. Note that these directions are not along coordinates; they are linear combinations of the coordinates, so coordinate-wise normalization does not correct for this.
Note that in the theoretical results, we have the parameter recovery only if the support of data distribution has a nonzero Lebesgue measure which is effectively not the case here.
To address the remaining issue, note that if we change the behavior of A^*:ℝ^d⟶ℝ^k only along those directions in which the data variance is zero, then the distance in the projected space remains almost always the same and thus the labeling remains the same as well.
This implies that there is no unique M^* to recover.
However, if we rescale A^* by √(C) and the data by (√(C))^-1, where C is the covariance matrix of the data, we correct for this issue.
Indeed, if we set A^*_new = √(C)A^* and X_new = X_normal (√(C))^-1, then, in this setting, M^*_new = √(C)A^*(A^*)^⊤√(C) has a negligible effect in the directions where C has small variance. As the gradient descent initializes with very small entries and receives no substantial update in those directions, we finally recover M^*_new very well; see the third column of Table <ref> labeled Covariance normalized.
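A sketch of this whitening step is shown below (illustrative; X_normal and A_star are stand-ins for the coordinate-normalized data and the provided feature transformation):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
X_normal = rng.standard_normal((1000, 9))     # stand-in for the coordinate-normalized data
A_star = rng.standard_normal((9, 3))          # stand-in for the provided transformation A*

C = np.cov(X_normal, rowvar=False)            # covariance of the data
C_sqrt = np.real(sqrtm(C))
C_sqrt_inv = np.linalg.pinv(C_sqrt)           # pseudo-inverse, since C may be near-singular

X_new = X_normal @ C_sqrt_inv                 # X_new = X_normal (sqrt(C))^{-1}
A_star_new = C_sqrt @ A_star                  # A*_new = sqrt(C) A*
M_star_new = A_star_new @ A_star_new.T        # = sqrt(C) M* sqrt(C)
```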
Low-rank recovery.
Moreover, we can truncate the learned matrix M̂ to a rank-k matrix M̂_k by setting its last d-k eigenvalues to zero. In Table <ref>, we summarize the accuracies for the case where we use M̂_k instead of M̂ for k=1,2,3,4. We can see that for k=1 we still have 90% accuracy, and for k=2 we have 98% accuracy.
For k=3, M̂_k and M̂ are indistinguishable. Hence, we can recover the best Mahalanobis distance M^* up to very high classification accuracy and in parameters, even with a desired low-rank solution.
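The truncation used here amounts to zeroing the smallest eigenvalues of the learned matrix; a minimal sketch:

```python
import numpy as np

def truncate_to_rank(M_hat, k):
    """Return the rank-k matrix obtained by zeroing all but the k largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(M_hat)          # eigenvalues in ascending order
    eigvals[:-k] = 0.0                                # keep only the top k
    return (eigvecs * eigvals) @ eigvecs.T
```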
Mixture fraction labeling.
One of the features in the data is called the mixture fraction, which takes values between 0 and 1. We remove this feature from the data set and use it to label the points as far and close.
We first randomly extract 15,000 disjoint pairs from the data. Then we compute the absolute value of their mixture fraction difference and, based on an appropriate threshold, assign the far or close label to each pair. We choose a threshold τ^* such that the generated labels are balanced. We partition the data into 10,000 training points and 5,000 test points. Now, we try to see whether there is a matrix M^* and a threshold τ^* that can replicate this labeling.
Indeed, we are able to find M̂ and τ̂ for which we have 99.68% accuracy on the test set, which is basically as good as our recovery of the best engineered solution from <cit.>. Notably, we again only have this accuracy if we normalize the data. If we work with the raw data directly, the best performance is about 70%. We summarize the corresponding results in Table <ref>.
§.§.§ Airline Passenger Satisfaction
We consider a data set containing a training set of around 100,000 points and a test set of around 26,000 points from the
Airline Passenger Satisfaction <cit.> dataset.
Each data point contains 24 features; 20 are real-valued, and 4 are categorical features containing
Gender: Female, Male,
Customer Type: Loyal, disloyal,
Type of Travel: Business, Personal,
Class: Eco, Eco Plus, Business.
As the first three are binary, we simply convert them to 0 and 1. The fourth one is also ordinal, and we convert it to 0, 1, and 2, respectively.
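A sketch of this preprocessing with pandas is given below; the file and column names are assumptions about the Kaggle export and are not verified here.

```python
import pandas as pd

df = pd.read_csv("airline_passenger_satisfaction_train.csv")   # hypothetical file name

# Binary categorical columns -> {0, 1}; the ordinal "Class" column -> {0, 1, 2}.
for col in ["Gender", "Customer Type", "Type of Travel"]:
    df[col] = pd.factorize(df[col])[0]
df["Class"] = df["Class"].map({"Eco": 0, "Eco Plus": 1, "Business": 2})
```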
Here we assume that passenger satisfaction is determined by a Mahalanobis norm (distance from the origin).
Indeed, we model the problem as if there is some matrix M^* and threshold τ^* so that, for each data point x, x_M^*^2 + Noise_x ≥τ^*, for some unknown unbiased noise Noise_x, if and only if the corresponding passenger is satisfied with the airline. We would like to find the generating M^* and τ^*.
Compared to the theoretical setting, here we are given z = x - 0 = x; in other words, we can take y = 0 and use the data points themselves as inputs.
Using each of Logistic, Laplace, and HS models (learning_rate = 0.045 and number_iterations = 20,000), we can recover the satisfaction labeling with 93% accuracy on the training and test data.
It is notable that, similar to previous observations, we obtain these accuracies only for normalized data.
Other methods, such as random forest, can boost the labeling accuracy to 96% (see <cit.>).
However, our method gives us a feature transformation Â, which is more interpretable compared to something like the random forest.
We can find the most important directions for  to see what combination of features has the greatest effect on the satisfaction level of passengers.
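Concretely, these directions are the top eigenvectors of the learned M̂; a minimal sketch:

```python
import numpy as np

def top_directions(M_hat, num_directions=3):
    """Eigenvectors of M_hat with the largest eigenvalues: the feature combinations
    that contribute most to the learned satisfaction score ||x||^2_{M_hat}."""
    eigvals, eigvecs = np.linalg.eigh(M_hat)
    order = np.argsort(eigvals)[::-1][:num_directions]
    return eigvals[order], eigvecs[:, order]
```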
§ RELATED WORK ON LINEAR DML
A considerable amount of works have been devoted to distance metric learning (<cit.>). Although recent work has focused primarily on nonlinear distance metric learning, the works most relevant to this article are more classic linear approaches. The method we developed can be categorized as fully supervised linear metric learning in which the scalability is in terms of both number of examples and dimension. Ours has bounded sample complexity that is O(d^2 log d) in the dimension d, and in practice we run gradient descent, where each iteration is linear in the data size N and has quadratic dependence for dimension d. Related works do not provide sample complexity bounds.
Note that our method learns a Mahalanobis distance (a positive semi-definite matrix ) and a threshold τ. We next compare with the most similar prior work.
Constrained Optimization Approaches.
<cit.> provide the first method to learn a Mahalanobis distance
by maximizing the sum of distances between points in the dissimilarity set (𝒟) under the constraint that the sum of squared distance between points in the similarity set (𝒮) is upper-bounded:
max_M≽ 0 ∑_(x_i,x_j)∈𝒟 d_M(x_i,x_j)
s.t. ∑_(x_i,x_j)∈𝒮 d^2_M(x_i,x_j)≤ 1.
It can be shown that this is a convex optimization which was solved by a proximal gradient ascent which, in each step, takes a gradient ascent step of the objective function, then projects back to the set of constraints, the cone of p.s.d. matrices.
The projection to the p.s.d. cone uses full eigen-decomposition with O(d^3) time complexity. So, as the dimension gets large it quickly gets intractable, while in our method we only deal with computing the
Mahalanobis distance which takes O(d^2) time complexity.
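For contrast with our unconstrained formulation, the projection step that dominates the per-iteration cost of such constrained methods can be sketched as follows (a standard eigenvalue-clipping projection onto the p.s.d. cone, not the cited authors' exact implementation):

```python
import numpy as np

def project_to_psd(M):
    """Project a symmetric matrix onto the p.s.d. cone by clipping negative eigenvalues.
    The full eigendecomposition makes each projection cost O(d^3)."""
    M_sym = (M + M.T) / 2.0
    eigvals, eigvecs = np.linalg.eigh(M_sym)
    return (eigvecs * np.clip(eigvals, 0.0, None)) @ eigvecs.T
```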
Note that the model proposed by <cit.> takes into account all the information of similar and dissimilar pairs by aggregating all similarity constraints together as well as all dissimilarity constraints. In contrast, the DML-eig method proposed by <cit.> maximizes the minimum distance between dissimilar pairs instead of maximizing the sum of their distances. They develop a subgradient ascent procedure to optimize their formulation which does not require a projection, but still uses an O(d^3) eigendecomposition step.
Intuitively, this model prioritizes separating dissimilar pairs over keeping similar pairs close. Experimentally, they show that their method outperforms <cit.>. In Subsection <ref> we experimentally compare our model with DML-eig showing that our approach works better in terms of performance, accuracy, and dealing with noise.
Unconstrained Optimization over A.
Taking into account the fact that any p.s.d. matrix M can be decomposed as M = A^⊤A, <cit.> define the expected leave-one-out error of a stochastic nearest neighbor classifier in the projection space induced by A. They defined the probability that x_i is similar (close) to x_j as
p_ij(A) = exp(-x_i-x_j_M^2)/∑_k≠ iexp(-x_i-x_k_M^2)
, p_ii = 0
and the probability that x_i is correctly classified as
p_i = ∑_{j: (x_i,x_j)∈𝒮}p_ij.
To learn A, they solve max_A∑_ip_i.
This is not a convex optimization and thus leads to a local maximum rather than a global one.
Using an MLE approach, <cit.> define a logistic loss function which matches a special case of our loss function when the noise comes from a logistic distribution. They start with the assumption that the similarity is determined via a Bernoulli distribution whose success probability (being similar) is σ(τ-d_(x_i,x_j)). Maximizing the likelihood under this probabilistic model then yields an approximation of and τ. This is exactly the approach we take in Subsection <ref>. However, instead of directly assuming such a model, we derive our model from certain noise assumptions and thus obtain the best theoretical model possible under these assumptions. We also prove that alternate (nonlogistic) formulations of the loss function are the MLE solution under different noise assumptions. Moreover, we conclude that each of these models is capable of parameter recovery with a sufficiently large sample size.
In each of the above settings, we can rewrite the objective function in terms of A, where M = A^⊤A.
This makes the problem unconstrained, which is a good advantage since it eliminates the expensive sub-gradient projection; moreover, we can restrict A to be rectangular, inducing a low-rank M. The main limitation of this formulation (even when A is a square matrix) is that it is non-convex and thus subject to local optima. However, in our formulation, a result by <cit.> resolves this issue.
Empirical Loss Minimization Framework.
There are some other related works proposing an empirical loss minimization framework; for a thorough review see Chapter 8 of the book by <cit.>. The prediction performance of the learned metric has been studied in some works such as <cit.>.
Broadly speaking, for an unknown data probability distribution, they considered different meaningful constrained cost functions as the true risk and then they studied the convergence of the empirical risk to the true risk.
As their setting is a bit different from ours, we recall their data assumptions here.
Given labeled examples
{(x_i,y_i): i =1,…, N}, where x_i∈ℝ^d, x_i≤ F and y_i∈{1,…,m}, they define a similar (close) pair set 𝒮 and a dissimilar (far) pair set 𝒟 as follows:
𝒮 = {(x_i,x_j): y_i = y_j} and 𝒟 = {(x_i,x_j): y_i ≠ y_j}.
Note that the pairs here are not i.i.d. as in our formulation.
From O(n) given data points, we can feed our cost function with only n i.i.d. pairs while they can do it with O(n^2) pairs. It might give the impression that these methods should allow for much stronger convergence results, but it is not the case.
In the following we briefly review some of their results and then compare them to ours. The primary outcome of these findings is to demonstrate the overall reliability of a metric learning approach, rather than offering precise estimations of the generalization loss.
Following the idea of maximum margin classifiers, <cit.> adapted the uniform stability framework (<cit.> and McDiarmid's inequality) to metric learning to obtain a generalization bound. They considered
C(M) = 2c/(n(n-1))∑_i<j L(y_ij(1-x_i-x_j_M^2)) + 1/2M_F^2
as the regularized empirical cost function, where y_ij = 1 if (i,j)∈𝒮, y_ij = -1 if (i,j)∈𝒟, and L(z) is a standard loss function which is ζ-Lipschitz.
As a main result, they proved that the empirical loss C(M) converges in probability measure to the true cost 𝔼_x,x', y L(y(1-x-x'_M^2)) + 1/2M_F^2 with the sample complexity
O(s(d)^2 ln(1/δ)/ε^2)
where s(d) comes from the constraint trace(M)≤ s(d), and the hidden constant depends on ζ, F.
Note that if s(d) is considered a constant, this provides a sample complexity independent of dimension; but it may be that the best M minimizing the cost function does not follow this constraint. Comparing to our sample complexity (Theorem <ref>),
both bounds share almost the same dominant part (assuming d is fixed).
However, they use O(n^2) pairs in their optimization while we only use n; thus our optimization framework is more scalable in n.
<cit.> considered a similar loss (without the regularizer term).
They theoretically and empirically studied the convergence of the empirical risk in this setting for some appropriate choice of L, focusing on log loss and hinge loss.
They proved that the empirical loss converges (in probability measure) to the corresponding true loss with the rate O(1/√(n)). This bound does not resolve the sample complexity since it is presented as a function of n without working out the dependencies to the other parameters. They also concluded that the minimizer of the empirical loss (, τ̂) converges in measure to the minimizer of the true loss (^*, τ^*) as n goes to infinity, but does not identify specific conditions that must hold for this to be true, or finite sample bounds, as our work does.
In a sequel, <cit.> examined a data assumption that closely resembled ours, along with empirical and accurate losses like our own.
They demonstrated that the empirical loss converges to the true loss on the optimal model, with a sample complexity comparable to ours in terms of ,δ, and d.
However, it is worth noting that their study did not include noise in their setting, it was not proven that their optimization model is theoretically optimal under some generating model parameterized by ^* as is done in this article,
and they did not investigate the recovery of ground truth parameters as we intend to do.
Extending the robustness framework (<cit.>) to metric learning, <cit.> studied the deviation between the true and empirical loss.
The cost function they worked with is again similar to the one in Equation (<ref>). They proved that the empirical loss converges in probability measure to the corresponding true loss.
We can simplify their result to the sample complexity bound O((s(d)+ ln(1/δ))/ε^2), where s(d) can be exponentially large in terms of d.
It should be noted that the constant appearing in their data assumption impacts the constant in this sample complexity bound.
For a fixed d, comparing our complexity bound with theirs,
we again can see that the dominant parts are almost the same while their algorithm operates on n^2 distances for a set of n points. Afterwards <cit.>, employing a different similarity learning optimization problem,
established a comparable error bound in terms of Rademacher average which is upper bounded by a function of data bound F.
More precisely, under our data assumption, we can translate their error bound to a sample complexity bound which depends linearly on
d and has the dominant term O(ln(1/δ)/ε^2). It is similar to the other above-mentioned bounds.
In the following we briefly recall some advantages of our model compared to the above mentioned methods.
* All methods discussed above model metric learning as an optimization problem which penalizes mismatches, including using constrained optimization on M. However, they do not prove that these optimization problems are theoretically optimal under some generating model parameterized by M^* as is done in this article.
* These algorithms deal with a more expensive optimization problem which uses O(n^2) pairs for n points while our method uses only O(n) pairs. Despite the extra information, these algorithms do not lead to a more favorable scaling between the sample size n and the error compared to our method.
* These other methods do not provide recovery guarantees on generating parameters as we do. This in turn allows us to provide low-rank approximation and dimensionality reduction results, since we can bound the effects of truncating small model parameters.
* Furthermore, since we derive the loss functions from various noise models, we can recover these model parameters even in the presence of (correctly modeled) noise.
Information Theoretic Modeling.
<cit.> assume that
z = x-y | x,y∈𝒮∼𝒩(0,Σ_𝒮) and z = x-y | x,y∈𝒟∼𝒩(0,Σ_𝒟).
For any linear transformation z' = A^⊤z, this can be written
z' = x'-y' | x,y∈𝒮∼𝒩(0,A^⊤Σ_𝒮A) = f_A(z')
and
z' = x'-y' | x,y∈𝒟∼𝒩(0,A^⊤Σ_𝒟A) = g_A(z').
Their goal is to find A maximizing the Jeffrey divergence, i.e., to solve the following optimization problem:
max_A∈ℝ^d× k KL(f_A, g_A) + KL(g_A, f_A),
where KL stands here for the Kullback-Leibler divergence. As both distributions g_A and f_A are multivariate Gaussian, one can compute KL(f_A, g_A) + KL(g_A, f_A) as a function of A and reduce the optimization problem to
max_A∈ℝ^d× k tr((A^⊤Σ_𝒮A)^-1(A^⊤Σ_𝒟A) +
(A^⊤Σ_𝒟A)^-1(A^⊤Σ_𝒮A)) .
Setting the derivative of this objective function to zero, they present a solution to this non-convex optimization problem. In practice, they use MLE to replace Σ_𝒮 and Σ_𝒟 by their sample estimations, and since the formulation is not convex, the identified answer may be a local optimum.
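For reference, evaluating this objective for a candidate A from sample covariance estimates takes only a few lines. The sketch below is illustrative; Sigma_S and Sigma_D denote the estimated covariances of the pair differences for similar and dissimilar pairs.

```python
import numpy as np

def jeffreys_objective(A, Sigma_S, Sigma_D):
    """tr((A^T Sigma_S A)^{-1}(A^T Sigma_D A) + (A^T Sigma_D A)^{-1}(A^T Sigma_S A)).
    Assumes the projected covariances are invertible."""
    S = A.T @ Sigma_S @ A
    D = A.T @ Sigma_D @ A
    return np.trace(np.linalg.solve(S, D) + np.linalg.solve(D, S))
```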
§ CONCLUSION
In this paper we provide and analyze a simple and elegant method to solve the linear distance metric learning problem. That is, given a set of pairs of points labeled close or far, our method learns a Mahalanobis distance that maintains these labels for some threshold. This arises when in data analysis one needs to learn how to compare various coordinates, which may be in different units, but not introduce non-linearity for reasons of interpretability, equation preservation, or maintaining linear structure.
Our method reduces to unconstrained gradient descent, has a simple sample complexity bound, and shows convergence in a loss function and in parameter space. In fact, this convergence holds even under noisy observations.
Moreover, our method is the first approach to show that the learned Mahalanobis distance can be truncated to a low-rank model that can provably maintain the accuracy in the loss function and in parameters.
Finally, we demonstrate that this method works empirically as well as the theory predicts. We can obtain high accuracy (over 99%) and parameter recovery (less than 1.01 multiplicative error) on noiseless and noisy data, and on synthetic and real data. For instance, even if 45% of the data is mislabeled we can with very high accuracy recover the true model parameters.
Additionally we show this simple solution nearly matches the best engineered solutions on two real world data challenges.
JP thanks NSF IIS-1816149, CCF-2115677, and CDS&E-1953350; AL thanks NSF DMS-2309570 and NSF DMS-2136198.
§ PROOF OF LEMMA <REF>
W.l.o.g. we may assume that τ_1 = τ_2 = 1.
To prove Lemma <ref>, it suffices to show that z_M_1^2=z_M_2^2 for every z∈ℝ^d.
For an arbitrary z_0∈ℝ^d,
consider the two following different cases.
* z_0_M_1^2 = 0. In this case, if z_0_M_2^2 > 0, then for sufficiently large α, we should have αz_0_M_2^2 > 1
while αz_0_M_1^2 =0 < 1, contradicting the fact that
𝟙_{z_M_1^2 - τ_1≥ 0} and 𝟙_{z_M_2^2 - τ_2≥ 0} are pointwise equal.
* z_0_M_1^2 >0 and z_0_M_2^2 >0.
Consider α , β >0 such that αz_0_M_1^2 = βz_0_M_2^2 = 1.
We need to show that α = β. For a contradiction, consider c>0 such that α<c<β.
Now it is clear that
𝟙_{z_M_1^2 - τ_1≥ 0}(√(c)z_0) = 1 and 𝟙_{z_M_2^2 - τ_2≥ 0}
(√(c)z_0)= 0,
a contradiction.
§ BASIC PROPERTIES RELATED TO OPTIMIZATION PROBLEM <REF>
In this part, we derive some basic properties related to Optimization Problem <ref>.
In particular, we will see that it is a convex optimization. Using this as the main result of this subsection,
we prove that the true loss is uniquely minimized at the ground truth parameters.
First, note that since any convex combination of two p.s.d. matrices is still p.s.d,
using triangle inequality for spectral norm, we conclude that
the search space ℳ× [0, B] is convex.
For every fixed z∈ℝ^d and ℓ∈{-1,1}, -logσ(ℓ(z_M^2 - τ)) as a function of (M, τ) is convex.
To prove the assertion, it suffices to show that logσ(ℓ(z_M^2 - τ)) is concave.
Consider arbitrary M_1,M_2∈ℳ and τ_1,τ_2∈ [0,B].
We remind that logσ(·) is a concave function. Also, for each λ∈[0,1],
z_λ M_1+(1-λ)M_2^2 - (λτ_1 +(1-λ)τ_2)
= λ(z_M_1^2 - τ_1) +
(1-λ)(z_M_2^2 - τ_2).
Combining these two facts implies that
logσ(ℓ(z_M^2-τ)) as a function of (M, τ) is concave, completing the proof.
This observation immediately implies that both R_N(M,τ) and R(M,τ) are convex functions as well.
Thus
min_(M,τ)∈ℳ×[0,B] R(M,τ) and min_(M,τ)∈ℳ×[0,B] R_N(M,τ)
are both convex optimization problems. Although the following observation is quite technical, it is necessary for the proof of succeeding results.
Let S⊆ℝ^d_≥ 0 be a set with zero Lebesgue measure. Then
S^1/2 = {z∈ℝ^d: z^2∈ S}, where z^2 is taken coordinate-wise, also has zero Lebesgue measure.
For each I ={i_1,…,i_k}⊆ [d], define
S^1/2 _I = {z: ∃ x∈ S s.t. z_i = √(x_i) for i∈[d]∖ I and z_i = -√(x_i) for i∈ I}.
It is clear that
S^1/2 = ⋃_I⊆ [d]S^1/2 _I.
Accordingly, it suffices to show μ(S^1/2 _I) =0 for each I⊆ [d]. For an arbitrary I⊆ [d], define
f_I:ℝ^d_≥ 0⟶ℝ^d such that
f_I(x) = (l_1√(x_1),…,l_d√(x_d)) where
l_i =
{[ -1 if i∈ I; 1 if i∉ I. ].
It is clear that f_I is a one-to-one continuous function and f_I(S) = S^1/2 _I.
To fulfill the proof, we prove that for each L∈ℕ, μ_L(S^1/2 _I∩[-√(L),√(L)]^d) = 0.
This implies that
μ_L(S^1/2 _I) = μ_L(⋃_L=1^∞(S^1/2 _I∩[-√(L),√(L)]^d))
≤∑_L=1^∞μ_L(S^1/2 _I∩[-√(L),√(L)]^d)
= 0
Let L be a fixed positive integer and
consider an arbitrary ε>0.
For each i∈[d], consider the interval
J_i = [0,L]×⋯×[0, ε^2/2^2dL^d-1d^2]_the i-th interval×⋯×[0, L].
It is clear that the volume of each J_i is ε^2/2^2dd^2.
Also, for the image of each J_i, we have
f_I(J_i) ⊂ [-√(L),√(L)]×⋯×[-ε/2^dL^d-1/2d, ε/2^dL^d-1/2d]_the i-th interval×⋯×[-√(L),√(L)].
This implies that the volume of f_I(J_i) is at most ε/d.
Since, f_I(·) is Lipschitz over [0, L]×⋯×[0, L]∖⋃_i∈[d]J_i, the zero Lebesgue measure sets will be mapped to
zero Lebesgue measure sets by f_I.
Therefore,
μ_L(f_I(S ∩ ([0, L]^d∖∪_i∈[d]J_i))) = 0.
Consequently,
μ_L(S^1/2 _I∩[-√(L),√(L)]) = μ_L(f_I(S ∩ ([0, L]^d∖∪_i∈[d]J_i)))
+ μ_L(f_I(S ∩ (∪_i∈[d]J_i)))
≤ 0 + dε/d = ε.
Since ε is arbitrary, μ_L(S^1/2 _I∩[-√(L),√(L)]^d)=0, completing the proof.
Using this observation, we can prove the following useful lemma.
If there are Q⊆ℝ^d and c∈ℝ such that μ_L(Q)>0 (Lebesgue measure) and
z_M_1^2 = z_M_2^2 +c for each z∈ Q,
then M_1=M_2 and c=0.
For each z∈ Q, we clearly have z^t (M_1 - M_2) z= c.
Since D=M_1 - M_2 is a real valued symmetric matrix, D = PAP^t where P is an orthonormal matrix and A is a diagonal d× d matrix whose (i,i) element is a_i.
To prove the assertion, it suffices to show that A=0. For a contradiction, suppose that A≠ 0.
Set Q' = {P^tz: z∈ Q} and S = {x∈ℝ_≥ 0^d: ⟨x,(a_1,…,a_d)⟩ =c}. For each w=(z_1,…,z_d)∈ Q',
∑_i=1^d z_i^2a_i = w^tAw
= w^tP^tDPw
= z^tDz = c,
which concludes that
Q'⊆{w∈ℝ^d: ⟨w^2,(a_1,…,a_d)⟩ =c} = S^1/2.
Using Observation <ref>, since we know μ_L(S) = 0 we obtain μ_L(S^1/2) = 0 and
thus μ_L(Q') =0. On the other hand, μ_L(Q) = μ_L(Q') which implies μ_L(Q)=0, a contradiction.
§ Ε-COVER (Ε-NET) FOR ℳ×[0,B]
As we are going to prove a uniform convergence theorem between empirical and true losses, we need to define the ε-cover of a metric space.
For a metric space (𝒳, d), an ε-cover ℰ is a subset of 𝒳 such that for each x∈𝒳,
there is some y∈ℰ with d(x,y)<ε.
In the following, we introduce an ε-cover for ℳ×[0, B].
However, we should first define a metric over 𝒳 = ℳ×[0, B].
For each (M_1, τ_1), (M_2, τ_2)∈𝒳, we define
d((M_1, τ_1), (M_2, τ_2)) = M_1- M_2_2 + |τ_1 - τ_2|.
There exists an ε-cover ℰ of ℳ× [0,B] under metric d of size
B/ε( 4 β d √(d)/ε)^d^2.
The inequality
M_F ≤√(d)M_2≤β√(d)
indicates that ℳ is a subset of the d^2-dimensional Euclidean ball of radius β√(d) centered at the origin, i.e.,
ℳ={M_d× d: M is p.s.d. and M_2 ≤β}⊆{M∈ℝ^d^2: M_F≤β√(d)}.
It is known that a k-dimentional Euclidean ball of radius r can be covered by at most (2r√(k)/ε)^k number of balls of radius ε.
So, ℳ has an ε/2-cover ℰ_1 of size at most
(4β d√(d)/ε)^d^2.
As an ε/2-cover ℰ_2 for [0, B] (with respect to L_1-norm), we can partition [0, B] into B/ε intervals of length ε and consider the end points of those intervals as the ε/2-cover.
Now the cartesian product of these two ε/2-covers, ℰ_1×ℰ_2, is an ε-cover of size
B/ε(4β d√(d)/ε)^d^2.
for 𝒳 = ℳ× [0,B] with respect to metric d.
§ UNIFORM CONVERGENCE OF R_N TO R
Although we have proved in Theorem <ref> that the true loss R(M,τ) is uniquely minimized at (M^*, τ^*), in reality, we do not have the true loss.
Indeed, we only have access to the empirical loss R_N(M,τ).
In this part, broadly speaking, we will show that R_N(M,τ) is uniformly close to R(M,τ) as N gets large, and we conclude that, instead of minimizing R(M,τ), we can minimize R_N(M,τ) to approximately find (M^*, τ^*).
In the next lemma, we will see that if two p.s.d. matrices are close in the spectral norm, then the Mahalanobis norms defined by these two matrices are also close.
[Equation (<ref>)]
For two given p.s.d. matrices M_1 and M_2,
| z_M_1^2 - z_M_2^2|≤M_1 -M_2_2 z^2.
Using the Cauchy–Schwarz inequality and the definition of the spectral norm, we obtain
| z_M_1^2 - z_M_2^2| = |z^⊤(M_1 -M_2)z|
= |⟨z, (M_1 -M_2)z⟩|
≤z(M_1 -M_2)z
≤(M_1 -M_2)_2 z^2,
concluding the inequality.
As -logΦ_ Noise(·) is a decreasing function, using Equation <ref>, we have
0≤ -logΦ_ Noise(ℓ_i(z_i_M^2 - τ))≤ -logΦ_ Noise(-β F) = T,
which indicates that the random variables _i's are bounded.
Whenever we are dealing with a summation of bounded i.i.d. random variables,
one strong concentration inequality to use is Chernoff-Hoeffding bound.
This inequality states if X_1, ..., X_N are N independent random variables such that
X_i∈ [a_i,b_i] almost surely for all i, and
S_N=X_1+⋯ +X_N/N, then
P(|S_n-E[S_n]|≥α)≤ 2exp(-2N^2α^2/∑ _i=1^N(b_i-a_i)^2).
Since,
R_N(M,τ) = 1/N∑_i=1^N -logΦ_ Noise(ℓ_i(z_i_M^2 - τ))
and 𝔼(R_N(M,τ)) = R(M,τ),
we can use Chernoff-Hoeffding bound to control the probability that
|R_N(,τ) - R(,τ)| is large.
If ℰ={(M_i, τ_i): i = 1,…,m=m(α)} is an α-cover for ℳ× [0,B], then
P(|R_N(M_i, τ_i) - R(M_i, τ_i)|≥α for some i∈[m])
≤ 2me^-2Nα^2/T^2.
Consider a fixed i∈[m].
For simplicity of notation, set Z = -logσ(ℓ(z_M_i^2-τ_i)) and Z_j = -logσ(ℓ_j(z_j_M_i^2- τ_i)).
As we explained above,
Z∈[0, T], see Table <ref>, and, using the Chernoff-Hoeffding inequality, we obtain
P(|R_N(M_i, τ_i) - R(M_i, τ_i)|≥α) = P(|1/N∑_j=1^N Z_j - E(Z)|≥α)
≤ 2e^-2Nα^2/T^2.
Now, using union bound, we have the desired inequality.
In the next theorem, we prove that, with high probability, the empirical loss R_N is everywhere close to the true loss R.
For any ε,δ>0, assume parameters B, F, and β are constants, define
N_d(ε, δ) =O(1/ε^2[
log1/δ + d^2logd/ε]).
If
N>N_d(ε, δ), then with probability at least 1-δ,
sup_(M,τ)∈ℳ× [0, B]|R_N(M, τ) - R(M, τ)|<ε.
To prove the assertion, we can equivalently prove
P(sup_(M, τ)∈ℳ× [0,B]|R_N(M, τ) - R(M, τ)|≥ε)<δ.
Set α = ε/(3ζ(F+1)).
Consider ℰ={(M_i, τ_i); i = 1,…,m=m(α)} as an α-cover for ℳ×[0, B].
For an arbitrary (M, τ), there is an index i∈[m] such that
d((M, τ),(M_i, τ_i)) < α.
Consequently, using Lemma <ref>,
|R(M, τ) - R(M_i, τ_i)| < (F+1)ζα = ε/3
and
|R_N(M, τ) - R_N(M_i, τ_i)| < (F+1)ζα =ε/3.
So far, we have proved that
for every (M,τ), there exists an index i∈[m] such that
|R(M, τ) - R(M_i, τ_i)| < ε/3 and
|R_N(M, τ) - R_N(M_i, τ_i)| < ε/3.
Using the triangle inequality, we conclude
|R_N(M, τ) - R(M, τ)| ≤ |R_N(M, τ) - R_N(M_i, τ_i)| + |R_N(M_i, τ_i) - R(M_i, τ_i)|
+ |R(M_i, τ_i) - R(M, τ)|
≤ 2ε/3 + |R_N(M_i, τ_i) - R(M_i, τ_i)|.
Via Lemma <ref>, there is an
α-cover of size
m(α) = B/α(4β d√(d)/α)^d^2
= 3ζ(F+1)B/ε(12ζ(F+1)β d√(d)/ε)^d^2
for 𝒳 = ℳ× [0,B] with respect to metric d.
On the other hand,
T^2/(2α^2)log(2m/δ) = 9ζ^2(F+1)^2T^2/(2ε^2)[
log(6ζ(1+F)B/(εδ)) + d^2log(12ζ(1+F)β/ε) +3/2d^2log d
]
= O(1/ε^2[
log1/δ + d^2logd/ε])
As setting
N > T^2/2α^2log2m/δ
= O(1/ε^2[
log1/δ + d^2logd/ε])
implies 2me^-2Nα^2/T^2<δ, using Lemma <ref>, we obtain, with probability at least 1-δ,
|R_N(M_j, τ_j) - R(M_j, τ_j)|<α = ε/(3ζ(F+1))<ε/3 for all j∈[m].
Therefore, if N>O(1/ε^2[
log1/δ + d^2logd/ε]), with probability at least 1-δ,
for all (M, τ) we have
|R_N(M, τ) - R(M, τ)|≤ε,
as desired.
§ SIMPLE NOISE PROPERTIES IN TABLE <REF>
In Subsections <ref>, <ref>, <ref>, <ref>, we derived some properties of
Φ_ Noise(·) when noise is one of the simple noises listed in Table <ref>. This section can be seen as a complementary section for those sections. For noises listed in Table <ref>, one can
verify that each of those noise distributions is simple. Here, we verify some other information listed in that table.
When Noise(η) is simple, -logΦ_ Noise(η) is a decreasing convex function, which implies that
d/dη(-logΦ_ Noise(η)) = - Noise(η)/Φ_ Noise(η) is a negative increasing function.
Therefore, -logΦ_ Noise(η) is ζ-Lipschitz over [-β F, β B] for
ζ = Noise(-β F)/Φ_ Noise(-β F).
Also, from Equation <ref>, we know
T = -logΦ_ Noise(-β F).
In what follows, we approximate ζ and T for each noise choice.
* Logistic noise. In this case, it is easy to see that
ζ = σ(-β F)σ(β F)/σ(-β F) = σ(β F) <1
and T = -logσ(-β F) = log(1+e^β F)≤ 1 +β F.
Thus, Φ_ Logistic(η) in 1-log-Lipschitz and T= O(β F).
* Normal noise.
We start with an approximation of Φ_ Normal(η) (the lower bound comes from a webpage <cit.>).
We set Φ^c_ Normal(η) = 1 - Φ_ Normal(η) = Φ_ Normal(-η).
Set g(t) = Φ^c(t) - 1/√(2π)·t/(t^2 + 1)· e^-t^2/2.
As g(0) >0, g'(t) = -2/√(2π)· e^-t^2/2/(t^2 + 1)^2< 0 and lim_t→ +∞ g(t) = 0, we obtain
Φ^c(t) ≥1/√(2π)·t/(t^2 + 1)· e^-t^2/2.
Using the above formula for ζ, we have
ζ = (1/√(2π))e^-(β F)^2/2/Φ(-β F)
= (1/√(2π))e^-(β F)^2/2/Φ^c(β F)
≤((β F)^2 + 1)/(β F) = O(β F).
Also,
T = -logΦ_ Normal(-β F) = -logΦ^c_ Normal(β F)= O((β F)^2).
* Laplace noise.
In this setting,
ζ = Noise(-β F)/Φ_ Noise(-β F) = (1/2)e^-β F/((1/2)e^-β F) = 1
and T = -logΦ_ Laplace(-β F) = β F + log 2 = O(β F).
* Hyperbolic secant noise.
Remind that Φ^c(t) = ∫_t^∞1/2 sech(π/2η)η̣.
Set
g(t) = Φ^c(t) - 1/π sech(π/2t)
Also, note that g(0) >0, lim_t→ +∞ g(t) = 0, and
g'(t) =-1/2 sech(π/2t) + 1/2tanh(π/2t) sech(π/2t)
= 1/2 sech(π/2t)(-1 + tanh(π/2t))<0 ∀ t ∈.
This implies that, for each t∈,
Φ^c(t)> 1/π sech(π/2t).
Using this inequality, we obtain
ζ = Noise(-β F)/Φ_ Noise(-β F)
= (1/2) sech(πβ F/2)/Φ_ Noise^c(β F)
≤π/2.
Furthermore,
T = -logΦ_ HS(-β F)≤ -log((1/π) sech(πβ F/2))
= logπ + logcosh(πβ F/2) = O(β F).
§ CONNECTION BETWEEN L_1(F) NORM AND SPECTRAL NORM
In Theorem <ref> and Corollary <ref>, we
study the connection between the L_1(f) norm and the sample complexity of our problem. However, as the L_1(f)-metric depends on the distribution f(z), which is unavoidable, it is not very intuitive. We would prefer a more informative norm such as the spectral norm. To this end, however, we must restrict the distribution f(z). We assume the following.
* There is a positive constant c such that f(z)≥ c for each z∈ B^d(1); recall we assume almost surely z^2 ≤ F = 1 in this section, and B^d(1) = {z∈ℝ^d: z≤ 1 }.
We next prove a statement similar to Corollary <ref> but in terms of the d-metric instead of the L_1(f)-metric. The following definition is also needed for the next two results.
For 0≤ a < b≤ 1, where z_1 is the first coordinate of z, define
Cone(a,b) = {z = (z_1,…,z_d) | 3z_1^2≥ 2z_2^2 & a≤ z_1≤ b}.
If f(z)≥ c>0 for each z∈ B^d(1), then for all (M,τ)∈ℳ× [0, B],
(M, τ) - (M^*,τ^*)_L_1(f)≥cπ^d/2/(20Γ(d/2+1)) (1/18)^d · d((M,τ), (M^*,τ^*)).
In particular, if f(z) is uniform on the unit ball, then for all (M,τ)∈ℳ× [0, B],
(M, τ) - (M^*,τ^*)_L_1(f)≥1/20 (1/18)^d · d((M,τ), (M^*,τ^*)).
We remind that
(M, τ) - (M^*,τ^*)_L_1(f) = ∫ f(z) |(z_M^2-τ) - (z_M^*^2-τ^*)|
= ∫ f(z) |z^⊤ (M-M^*)z - (τ-τ^*) |
= ∫ f(z) |z^⊤M̅z - τ̅ |, where M̅ = M-M^* and τ̅ = τ-τ^*.
Note that M̅ is a symmetric matrix. So, there are a real valued orthonormal matrix Q and a real valued diagonal matrix Λ such that
M̅ = Q^⊤Λ Q. Let the vector λ denote the diagonal of Λ.
Without loss of generality, we assume that λ_1 = max{|λ_1|,…, |λ_d|}.
One can verify that λ_1 = M̅_2.
As Q is orthonormal, we have
∫ f(z) |z^⊤M̅z - τ̅ | = ∫ f(z) |(Qz)^⊤Λ (Qz) - τ̅ |
= ∫ f(Q^⊤z) |z^⊤Λz - τ̅ |
= ∫ f(Q^⊤z) | ∑_i=1^dz_i^2λ_i - τ̅ |
Note that
∑_i=1^d z_i^2 λ_i≤λ_1 z^2 and, for each z∈ Cone(0,1),
∑_i=1^d z_i^2 λ_i
≥λ_1 z_1^2 - ∑_i=2^d λ_1 z_i^2
= λ_1 (2z_1^2 - ∑_i=1^d z_i^2)
= λ_1 (2z_1^2 - z_2^2)≥ (λ_1/2) z_1^2.
Set q = λ_1 + |τ̅| = d((M,τ), (M^*,τ^*)).
We next consider two cases based on |τ̅| and q.
* |τ̅| ≤ 0.1 q. This implies λ_1≥ 0.9 q and thus, if z∈ Cone(1/√(3),1), then
∑_i=1^d z_i^2 λ_i - τ̅ ≥ (λ_1/2) z_1^2-τ̅
≥ q((9/20) z_1^2 - 1/10)
≥ q/20.
Therefore,
∫ f(Q^⊤z) | ∑_i=1^dz_i^2λ_i - τ̅ | ≥q/20∫_z∈ Cone(1/√(3),1) f(Q^⊤z)
= q/20μ_f(Q^⊤ Cone(1/√(3),1))
≥cq/20∫_z∈ Cone(1/√(3),1)∩ B^d(1)
= cq/20× Volume( Cone(1/√(3),1)∩ B^d(1) )
≥cq/20× Volume( Cone(1/√(3),√(2/3)) )
= cq/(20d)[
√(2/3) V^d-1(√(1/3)) -
√(1/3) V^d-1(√(1/6))]
= cq/20·π^(d-1)/2/(d· 3^d/2Γ(d+1/2))[
√(2) - 1/2^(d-1)/2].
* |τ̅| > 0.1 q. This implies λ_1 < 0.9 q and thus
|τ̅ - ∑_i=1^dz_i^2λ_i | ≥ |τ̅| - λ_1z_2^2
≥ q (1/10 - (9/10)z_2^2 )
≥ q/20 if z_2^2≤1/18.
Therefore,
∫ f(Q^⊤z) | ∑_i=1^dz_i^2λ_i - τ̅ | ≥q/20∫_z∈ B^d(1/18) f(Q^⊤z)
= q/20μ_f(B^d(1/18))
≥cq/20× Volume(B^d(1/18))
= cq/20·π^d/2/Γ(d/2+1)· (1/18)^d
If f(z) is a rotationally symmetric pdf, then μ_f(Q^⊤B) =μ_f(B) for any measurable set B.
Therefore, combining the two Lower bounds (<ref>) and (<ref>), we obtain the following lemma.
If f(z) is a rotationally symmetric pdf, then for all (M,τ)∈ℳ× [0, B],
(M, τ) - (M^*,τ^*)_L_1(f)/d((M, τ), (M^*,τ^*))≥1/20max (μ_f( Cone(1/√(3),1)), μ_f(B^d(1/18)) ) .
§ FURTHER EXPERIMENTAL STUDY
This section can be seen as a complementary section for
Section <ref>.
Loss Function Behavior.
In Subsection <ref>, we experimentally studied the logistic model with different noises. In Figures <ref> and <ref>, we evaluate the model for different noises in terms of eigenvalue recovery and accuracy. As complementary information to these figures,
in Figure <ref>, as the iteration increases, we track the value of the loss function R_N at (M̂, τ̂) and compare it with its value at (M^*/s, τ^*/s), which we do not expect to surpass. Observe that the loss at (M^*/s, τ^*/s) is a constant red line at the bottom at around 0.23. When considering Logistic noise (blue) we reach this loss around 700 iterations, and nearly do so when considering Gaussian noise. For other types of noise, the method does worse; Noisy labeling only achieves a loss value around 0.5.
Larger Dimension.
In Section <ref>, we dealt with small values of the dimension d (d=10 for synthetic and d=9, 24 for real data).
In this section, following the same approach as in Subsection <ref>, we generate synthetic data with d=100 and rank(M^*) =30. We also set the level of noise at 20%. In Figure <ref>, we observe how the sample complexity is affected by the dimension. When the sample size is less than 30K, the model overfits, which is completely natural as our model has d^2+1 parameters. But for larger values, we can see that the model starts to neutralize the noise and the non-noisy accuracy approaches 1 (blue and magenta curves).
We have 97.59% and 97.56% non-noisy train and test accuracy for a 600K sample size.
arXiv:2306.11889v1 [physics.flu-dyn], http://arxiv.org/abs/2306.11889v1
[email protected]
Mathematics Department, Seattle University, USA
[email protected]
Université de Toulon, Aix Marseille Univ, CNRS, IRD, MIO, Toulon, France
[email protected]
Aix-Marseille Université, CNRS, Centrale Marseille, IRPHE, UMR 7342, 13384, Marseille, France.
[email protected]
Department of Mathematics, PO Box 7800, 5020 Bergen, Norway
[email protected]
Aix-Marseille Université, CNRS, Centrale Marseille, IRPHE, UMR 7342, 13384, Marseille, France.
The Whitham equation is a model for the evolution of surface waves on shallow water that combines the unidirectional linear dispersion relation of the Euler equations with a weakly nonlinear approximation based on the KdV equation. We show that large-amplitude, periodic, traveling-wave solutions to the Whitham equation and its higher-order generalization, the cubic Whitham equation, are unstable with respect to the superharmonic instability (i.e. a perturbation with the same period as the solution). The threshold between superharmonic stability and instability occurs at the maxima of the Hamiltonian and ℒ_2-norm. We examine the onset of wave breaking in traveling-wave solutions subject to the modulational and superharmonic instabilities.
We present new instability results for the Euler equations in finite depth and compare them with the Whitham results. We show that the Whitham equation more accurately approximates the wave steepness threshold for the superharmonic instability of the Euler equations than does the cubic Whitham equation. However, the cubic Whitham equation more accurately approximates the wave steepness threshold for the modulational instability of the Euler equations than does the Whitham equation.
The superharmonic instability and wave breaking in Whitham equations
John D. Carter, Marc Francius, Christian Kharif, Henrik Kalisch, Malek Abid
July 31, 2023
====================================================================
§ INTRODUCTION
White-capping through spilling and micro breaking is a ubiquitous feature of ocean waves, is a key component of air-sea interaction, and is known to be a significant factor in the kinetic and thermal energy budgets of the ocean. While much of white-capping is driven by surface winds and wave group behavior, in many cases, early theoretical studies of wave hydrodynamics relevant to breaking considered the simplest approach (ideal fluid, irrotational flow and negligible wind effects) and focussed on a number of hydrodynamic instabilities of two-dimensional uniform wave trains in deep water. Now it is known that wave breaking can be induced by an instability in the crest of a steep wave, a so-called crest instability that corresponds to a form of the superharmonic instability of a progressive gravity wave. This instability has been studied in a great many works, usually within the framework of the fully nonlinear potential Euler equations (also known as the “water-wave problem”). Our aim is to show that the crest instability is captured by simplified models for water waves, assumed to be weakly nonlinear but fully dispersive. In the process, we also provide some new instability results for the full water-wave problem.
Longuet-Higgins <cit.> was the first to find that very steep Stokes wave trains are linearly unstable with respect to perturbations of the same wavelength and phase-locked to the basic wave. He conjectured the existence of an exchange of stability for the wave whose phase velocity is a maximum. Later on, Tanaka <cit.>, using a more accurate approach, found that the exchange of stability occurs at the maximum of the energy and not at the maximum of phase velocity. Using Zakharov's Hamiltonian formulation, Saffman <cit.> proved analytically that an exchange of stability occurs when the wave energy is an extremum as a function of the wave height. Furthermore, he confirmed the non-existence of superharmonic bifurcation predicted by Tanaka <cit.>. Recently, Sato & Yamada <cit.> revisited Saffman's theorem and showed that the exchange of stability occurs when the energy is stationary as a function of the wave velocity. Zufiria & Saffman <cit.> extended Saffman's theorem to the case of finite depth. Kataoka <cit.> revisited the work of Zufiria & Saffman analytically and numerically and found that the superharmonic instability threshold for periodic waves on fluids of finite depth occurs at the maximum of the Hamiltonian. Note that Tanaka <cit.> found that very steep solitary waves are subject to crest instability of superharmonic type, too. Within the framework of the potential Euler equations, Francius & Kharif <cit.> suggested and provided preliminary numerical results for a dimensionless depth of d=2 on the existence of the occurrence of the superharmonic instability at the maximum of the energy. Tanaka et al. <cit.> used a boundary integral method to show that the nonlinear evolution of the crest instability leads to the overturning of the solitary wave. Longuet-Higgins & Dommermuth <cit.> showed numerically that the nonlinear development of the crest instability of periodic gravity waves produces the overturning of the wave crest depending on the sign of the unstable perturbation.
Due to the computational complexity of the Euler equations, it has long been of interest to find a simpler model equation that allows smooth periodic and solitary waves, but also the existence of highest waves with singularities at the crest as observed with the Euler equations. In this vein, Whitham <cit.> was the first to propose a simplified nonlocal water-wave model, the so-called Whitham equation, by combining the unidirectional linear dispersion relation of the Euler equations with a weakly nonlinear approximation based on the KdV equation to improve the description of the dynamics of weakly nonlinear long waves. Much later on, Ehrnström & Kalisch <cit.> demonstrated rigorously the existence of traveling periodic wave solutions of the Whitham equation, and Ehrnström & Wahlén <cit.> proved that the highest traveling-wave solutions with maximal wave height are cusped.
On the one hand, Hur & Johnson <cit.> proved that small-amplitude traveling-wave solutions of the Whitham equation are stable with respect to the modulational instability (MI) if k<1.146 and are unstable with respect to the modulational instability if k>1.146, where k is the dimensionless wavenumber of the solution, equivalent to the dimensionless depth. Sanford et al. <cit.> and Carter & Rozman <cit.> numerically studied the stability of traveling-wave solutions to the Whitham equation. They corroborated the Hur & Johnson k=1.146 threshold and showed that large-amplitude traveling-wave solutions are unstable regardless of their wavelength. Adding a higher-order term, Carter et al. <cit.> studied the stability of solutions to the cubic Whitham equation and found results qualitatively similar to those in the Whitham equation. Applying the method described in Binswanger et al. <cit.> corroborates the Hur & Johnson threshold and shows that small-amplitude traveling-wave solutions to the cubic Whitham equation are unstable with respect to the MI when k>1.252. These values should be compared with the well-known critical value k_c=1.363 for small-amplitude gravity waves in the Euler equations. On the other hand, to the best of our knowledge, no information is available on the linear stability of periodic traveling-wave solutions subject to superharmonic disturbances within the framework of the Whitham equations. Nonetheless, we remark that, very recently, Bronski et al. <cit.> demonstrated analytically and numerically that periodic traveling waves of certain regularized long-wave models are linearly unstable to superharmonic perturbations. Examples analyzed by these authors include the regularized Boussinesq, Benney-Luke, and Benjamin-Bona-Mahony equations. The purpose of the present paper is therefore to analyze the superharmonic instability of traveling-wave solutions within the context of both the Whitham and cubic Whitham equations.
The remainder of this paper is organized as follows. Section <ref> contains a brief introduction to the Whitham and cubic Whitham equations. Section <ref> contains a numerical study of the linear stability for the traveling-wave solutions of these equations. Comparisons of the results from the Whitham and cubic Whitham equations with those from the Euler equations are also presented in this section. Section <ref> contains a numerical examination of the nonlinear stability and the onset of wave breaking in traveling-wave solutions perturbed by modulational and superharmonic instabilities. Section <ref> contains a summary of our results.
§ THE WHITHAM AND CUBIC WHITHAM EQUATIONS
In dimensionless variables, the Whitham equation is given by
u_t+𝒦*u_x+3/2uu_x=0,
where 𝒦 is the kernel of the convolution operator defined in terms of its Fourier transform
𝒦̂(κ)=√(tanh(κ)/κ),
where κ is the wavenumber in Fourier space. Here u=u(x,t) represents the dimensionless surface displacement. The Whitham equation can be converted to dimensional form via the transformation
x→ h_0x, t→√(h_0/g) t, u→ h_0u,
where h_0 is the dimensional undisturbed fluid depth and g represents the acceleration due to gravity.
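As a concrete illustration, the nonlocal term 𝒦*u_x can be evaluated pseudospectrally with the FFT. The following minimal Python sketch (illustrative only, not the code used to produce the results reported here; all function names are ours) applies the multiplier 𝒦̂ above to a periodic grid function, treating the removable singularity at κ=0 by setting 𝒦̂(0)=1.
import numpy as np

def whitham_symbol(kappa):
    # Fourier multiplier K_hat(kappa) = sqrt(tanh(kappa)/kappa), with K_hat(0) = 1.
    kappa = np.asarray(kappa, dtype=float)
    out = np.ones_like(kappa)
    nz = kappa != 0.0
    out[nz] = np.sqrt(np.tanh(kappa[nz]) / kappa[nz])
    return out

def nonlocal_term(u, L):
    # Evaluate K * u_x for a periodic grid function u sampled uniformly on [0, L).
    n = u.size
    kappa = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers of the grid
    return np.real(np.fft.ifft(1j * kappa * whitham_symbol(kappa) * np.fft.fft(u)))
The right-hand side of the Whitham equation is then u_t = -(nonlocal_term(u, L) + 1.5*u*u_x), with u_x computed spectrally in the same way; the cubic Whitham equation adds a further +0.375*u**2*u_x to this right-hand side.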
Whitham <cit.> conjectured that (<ref>) would be more suitable for describing the evolution of water waves since it does not have the long-wavelength restriction inherent in models such as the KdV and Boussinesq equations. Recent work has shown that the Whitham equation and some of its generalizations are able to describe surface waves more accurately than comparable long-wave models <cit.>.
Kharif & Abid <cit.> extended equation (13.131) of Whitham <cit.> for potential flows to flows of constant vorticity. Expanding this new generalized Whitham equation to second order in amplitude and setting the vorticity to zero gives the cubic Whitham equation
u_t+𝒦*u_x+3/2uu_x - 3/8 u^2u_x=0.
These two evolution equations possess a Hamiltonian structure. They can be written as
u_t=Jδℋ/δ u,
where J=-∂_x represents a skew-symmetric linear operator and δℋ/δ u is the variational derivative of the Hamiltonian functional. Equation (<ref>) has Hamiltonian
ℋ_W=1/2∫_-L/2^L/2( u𝒦*u+1/2u^3 )dx,
and equation (<ref>) has Hamiltonian
ℋ_cW=1/2∫_-L/2^L/2( u𝒦*u+1/2u^3-1/16u^4)dx,
where L=L_0/h_0 and L_0 is the dimensional spatial period of the solution. Note that with this scaling, the dimensionless wavenumber k=2π/L coincides with the dimensionless undisturbed depth d=h_0 k_0, where k_0=2π/L_0 is the dimensional wavenumber. As is well known from the Hamiltonian representation of evolution equations, the invariance of these equations under translations along the t-axis implies that both equations preserve their Hamiltonians in t. In addition, these equations have two other classical conserved quantities
ℳ=∫_-L/2^L/2u dx, ℒ_2=∫_-L/2^L/2u^2 dx,
which correspond to the mass and the impulse of the solution respectively. Note that the invariance in t of the impulse, namely the ℒ_2-norm of the solution, is due to the invariance of these Hamiltonian systems under translations along the x-axis.
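These conserved quantities can be evaluated with spectral accuracy by the periodic trapezoid rule, which provides a convenient consistency check on numerical solutions. A minimal sketch, reusing whitham_symbol from the code above and assuming a uniform grid over one period:
def conserved_quantities(u, L, cubic=False):
    # Mass M, impulse L_2 and Hamiltonian H of a periodic grid function u on one period.
    n = u.size
    dx = L / n
    kappa = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    Ku = np.real(np.fft.ifft(whitham_symbol(kappa) * np.fft.fft(u)))   # K * u
    mass = np.sum(u) * dx
    impulse = np.sum(u**2) * dx
    density = u * Ku + 0.5 * u**3
    if cubic:
        density -= u**4 / 16.0          # extra term of the cubic Whitham Hamiltonian
    hamiltonian = 0.5 * np.sum(density) * dx
    return mass, impulse, hamiltonian
Monitoring how well mass, impulse and Hamiltonian are preserved in t is a simple accuracy test for the time-stepping scheme used later.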
§ PERIODIC TRAVELING WAVES
§.§ Steady periodic traveling waves
We computed periodic traveling-wave solutions of the form u(x,t)=f(x-ct)=f(ξ), where f is a smooth function and c is a real constant, using the branch-following method described in Ehrnström & Kalisch <cit.> and Carter et al. <cit.>. We only considered solutions with zero mean since they are the most physically relevant. Plots of 2π-periodic solutions to the Whitham and cubic Whitham equations are included in Figure <ref>. The tallest solutions shown are close in wave height to the solutions with maximal height. The values of the wave speed, c; the wave height, defined as the vertical distance between the crest and the trough, H; the wave steepness, s=H/L; the Hamiltonian, ℋ; and the ℒ_2-norm, ℒ_2, for these solutions are included in Table <ref>.
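The branch-following computations themselves are described in the cited works. As a rough, illustrative substitute (not the method used for the results reported here), the sketch below computes an even, zero-mean traveling wave from the once-integrated steady equation -cf+𝒦*f+(3/4)f^2=B by a Galerkin truncation, pinning the first cosine coefficient to a prescribed amplitude a and solving for the remaining coefficients and the speed c with scipy; it is adequate only for small to moderate amplitudes, and the near-highest cusped waves require the more careful continuation of the cited works. It reuses whitham_symbol from the previous sketch, and all other names are ours.
from scipy.optimize import fsolve

def traveling_wave(L, a, N=64):
    # Even, zero-mean traveling wave of the Whitham equation with period L, with the
    # first cosine coefficient pinned to a. Returns the profile on a grid and the speed c.
    n = 4 * N                                    # grid with margin against aliasing
    xi = np.arange(n) * L / n
    kgrid = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    modes = np.arange(1, N + 1)
    cosines = np.cos(2.0 * np.pi * np.outer(modes, xi) / L)   # shape (N, n)
    def build(coeffs):
        return coeffs @ cosines                  # zero-mean, even profile
    def residual(x):
        coeffs, c = x[:-1], x[-1]
        f = build(coeffs)
        Kf = np.real(np.fft.ifft(whitham_symbol(kgrid) * np.fft.fft(f)))
        r = -c * f + Kf + 0.75 * f**2
        r -= r.mean()                            # absorb the integration constant B
        rhat = np.fft.fft(r) / n
        return np.concatenate([2.0 * np.real(rhat[1:N + 1]), [coeffs[0] - a]])
    c0 = float(whitham_symbol(np.array([2.0 * np.pi / L]))[0])   # linear phase speed
    guess = np.concatenate([[a], np.zeros(N - 1), [c0]])
    sol = fsolve(residual, guess)
    return build(sol[:-1]), sol[-1]
For example, traveling_wave(2*np.pi, 0.01) returns a nearly sinusoidal wave with c close to the linear phase speed 𝒦̂(1)=√(tanh 1)≈0.873, and gradually increasing a traces out the lower part of the solution branch.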
Figure <ref> contains plots of the Hamiltonians ℋ_W and ℋ_cW versus c for the 2π-periodic solution branches of the two equations. The colored dots correspond to the solutions plotted in those colors in Figure <ref>. For both equations, the Hamiltonians achieve local maxima at critical wave speeds, c=c^*. Figure <ref> contains plots of the ℒ_2-norm versus c for the 2π-periodic solution branches of both equations. Note that the ℒ_2-norms also achieve local maxima at the same critical value c=c^*. It is no coincidence that the extrema of the Hamiltonian and the ℒ_2-norm occur at the same critical value. In fact, traveling waves correspond to critical points of an augmented Hamiltonian functional ℋ_aug=ℋ(u)-cℒ_2(u)+bℳ(u) for some real b. For solutions with zero mean (i.e. ℳ=0), the corresponding Euler-Lagrange equation is
δℋ/δ u-cδℒ_2/δ u=0.
Using the notation ℋ=Ĥ(c,L) and ℒ_2=L̂_2(c,L) for the conserved quantities evaluated along the two-parameter branch of periodic traveling-wave solutions of the Whitham equation, it follows that
dĤ/dc-cdL̂_2/dc=0,
for a fixed wavelength L. Thus, the two quantities are stationary at the same value of c. See Benjamin <cit.> for details.
The values of the parameters corresponding to the maxima of the Hamiltonians (and ℒ_2-norms) are listed in Table <ref> as starred values. Due to the resolution required to resolve solutions near the (cusped) solutions with maximal wave height using a Fourier basis, we were unable to determine if the Hamiltonians continue to decrease monotonically after the local maxima or if local minima are achieved for some c>c^*. We note that the corresponding plot of the Hamiltonian for the Euler equations in the infinite-depth case oscillates many times <cit.>.
§.§ Linear stability analysis
We consider perturbed solutions of the form
u_pert(ξ,t)=u(ξ)+ϵ u_1(ξ,t)+𝒪(ϵ^2),
where u is a traveling-wave solution with period L, ξ=x-ct, ϵ is a small constant, and u_1 is the leading-order term of the perturbation. Using the Fourier-Floquet-Hill method described in Deconinck & Kutz <cit.>, assume
u_1(ξ,t)=e^iμξU(ξ)e^λ t+c.c.,
where μ∈ [-π/L,π/L] is known as the Floquet parameter, λ is a complex constant, c.c. stands for complex conjugate, and U(ξ) is a function with period L and Fourier series
U(ξ)=∑_j=-N^NÛ(j)e^2π i jξ/L.
Here 2N+1 is the total number of Fourier modes and the Û(j) are the complex amplitudes of the Bloch function U(ξ) associated with the perturbation. If μ=0, then the perturbation has the same ξ-period as the unperturbed solution. If there exists a perturbation with μ=0 and ℜ(λ)>0, then the solution is said to be linearly unstable with respect to the superharmonic instability (SI). If there exists a perturbation with μ close to zero, but nonzero, and ℜ(λ)>0, then the perturbation has a ξ-period that is larger than that of the unperturbed solution and the solution is said to be linearly unstable with respect to the modulational instability (MI). Modulational instabilities are sometimes referred to as subharmonic instabilities. The solution is said to be linearly stable if there does not exist any μ or U such that ℜ(λ)>0.
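In practice, substituting these expansions into the linearization of the governing equation about f in the traveling frame ξ=x-ct reduces the problem, for each Floquet parameter μ, to the finite-dimensional eigenvalue problem λÛ(j)=i(μ+k_j)[c-𝒦̂(μ+k_j)]Û(j)-i(μ+k_j)∑_mĝ(j-m)Û(m), where k_j=2π j/L, ĝ denotes Fourier coefficients, and g=(3/2)f for the Whitham equation or g=(3/2)f-(3/8)f^2 for the cubic Whitham equation. A short illustrative Python sketch (not the code used for the results reported here; it reuses whitham_symbol from the earlier sketches) that assembles this matrix and returns its eigenvalues is:
def stability_spectrum(f, c, L, mu, cubic=False):
    # Eigenvalues lambda of the linearization about a traveling wave f, sampled on a
    # uniform grid with an odd number of points n = 2N+1, for one Floquet parameter mu.
    n = f.size
    N = (n - 1) // 2
    j = np.arange(-N, N + 1)
    kj = 2.0 * np.pi * j / L
    g = 1.5 * f - 0.375 * f**2 if cubic else 1.5 * f
    ghat = np.fft.fftshift(np.fft.fft(g)) / n          # coefficients ordered from -N to N
    C = np.zeros((n, n), dtype=complex)                # convolution matrix ghat(j - m)
    for a in range(n):
        for b in range(n):
            d = j[a] - j[b]
            if -N <= d <= N:
                C[a, b] = ghat[d + N]
    prefac = 1j * (mu + kj)
    A = np.diag(prefac * (c - whitham_symbol(mu + kj))) - prefac[:, None] * C
    return np.linalg.eigvals(A)
Scanning μ over [-π/L,π/L] and recording the largest ℜ(λ) detects the modulational instability, while the μ=0 spectrum detects the superharmonic instability; additional resolution and care are needed for the steepest solutions.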
For a branch of solutions corresponding to a given wavenumber, as the wave height of the solutions increases, the stability spectra for both equations go through two bifurcations. First, the solutions become unstable with respect to the MI. The second bifurcation occurs when the solutions become unstable with respect to the SI. This second bifurcation point occurs at the maximum of the Hamiltonian and ℒ_2-norm. Table <ref> contains the values of the parameters where the first bifurcation occurs for solutions of both equations with four different periods. For the solutions with L=π, the first bifurcation occurs at H=0 because small-amplitude solutions to both equations with k=2 are known to be unstable with respect to the MI <cit.>. Table <ref> contains the values of the parameters where the second bifurcation occurs. Note that as the period of the solution increases, the values of the parameters at the threshold for the MI approach the values of the parameters at the threshold for the SI. Also note that in the solitary-wave limit (L→∞) there is SI, but not MI.
In Figure <ref>, the green curves are solutions with wave speeds slightly larger than c^†. These solutions are unstable with respect to the modulational instability, but not the superharmonic instability. Figure <ref> includes plots of the real and imaginary parts of U(ξ) corresponding to the unstable perturbations for these solutions when μ=0.1. In this case, the perturbations u_1(ξ,t) have a ξ-period of 20π and eigenvalues λ=0.001503+0.01814i (Whitham) and λ=0.002578+0.01694i (cubic Whitham). The nonzero portion of the MI perturbations is centered at the peak of the unperturbed solution. As the wave steepnesses of the solutions increase, the steepnesses of the MI perturbations also increase.
In Figure <ref>, the magenta curves are solutions with wave speeds slightly larger than c^*. These solutions are unstable with respect to the superharmonic instability. Figure <ref> includes plots of the superharmonic instability corresponding to these solutions. These perturbations have the same period as the underlying solutions (L=2π) and have purely real eigenvalues of λ=0.2517 (Whitham) and λ=0.2337 (cubic Whitham). This means that the perturbation is phase-locked with the basic wave. Similarly to the MI, the nonzero portions of these perturbations are centered at the peaks of the solutions. However, the superharmonic instabilities are significantly steeper than the modulational instabilities.
§.§ Comparisons with results from the Euler equations
In order to appreciate the differences between the Whitham and cubic Whitham equations and to determine whether one of them constitutes a better approximation of the full Euler equations, we now compare the results of these three equations.
Borluk et al. <cit.> carried out a similar comparison with the KdV and Whitham equations, for different wavelengths (L=π, 2π and 4π) and wave heights. Comparing the bifurcation curves of each model, they showed that the steady Whitham waves with L=π compare more favorably to the Euler waves than do the KdV waves. For larger wavelengths, L≥ 2π, the Whitham waves compare poorly to the Euler waves, with the KdV waves at the largest wavelength appearing to be a better approximation of the Euler waves.
Given these results, we have plotted in Figure <ref> the wave speed versus wave height curves for 2π-periodic traveling-wave solutions of the Euler, Whitham, and cubic Whitham equations. Here, the steady waves of the Euler model were obtained with the numerical method proposed by Longuet-Higgins <cit.>. For any given undisturbed depth d=2π/L, this method enables the computation of the bifurcation branch with very high accuracy, up to wave heights close to the maximum value and certainly beyond the maximal value of the Hamiltonian (or maximal total energy).
The plots in Figure <ref> show that the results from the cubic Whitham equation are in better agreement with those from the Euler equations, in particular for the large wave heights. However, the cubic Whitham equation admits solutions of significantly larger wave height and speed than do the Whitham and Euler equations. According to the Euler equations, the limiting wave with period L=2π has s^'=0.1030, H^'=0.6473 and c^'=0.9690. These estimates come from formulae (5.1) and (5.4) of Zhong & Liao <cit.>, who obtained very accurate results (especially for the wave profiles) for the limiting Stokes waves in arbitrary water depth.
Considering the linear stability analysis of the Euler waves with L=2π, it is expected that for large enough wave height they become unstable to 1-D modulational instabilities, as evidenced by Figure 3b of McLean <cit.>, which shows the bands of 1-D and 2-D instabilities for a wave with H=0.580 and c=0.9588. Extending this work and using the numerical method described in Francius & Kharif <cit.>, we found that the threshold steepness for the onset of the modulational instability in the Euler equations with d=1 is s^†=0.085 (c^†=0.9477). These values should be compared with the threshold value s^†=0.062 (c^†=0.925) for the Whitham equation and s^†=0.087 (c^†=0.953) for the cubic Whitham equation, see Table <ref>. Hence the cubic Whitham result is closer to the exact value, which suggests that the addition of the cubic nonlinearity in the Whitham equation provides an improvement for the L=2π case. The plots of Figure <ref> show that near this threshold value the bifurcation curve of the cubic Whitham equation is the closest one to that of the Euler equations.
For the Euler equations, the SI threshold occurs at the value s^*=0.099 (c^*=0.968). These values should be compared with those from the Whitham (s^*=0.103, c^*=0.974) and cubic Whitham (s^*=0.126, c^*=0.987) equations, see Table <ref>. This establishes that the Whitham equation more accurately reproduces the SI threshold and corresponding speed of the Euler equations than does the cubic Whitham equation. We compare the normalized profiles of the superharmonic instabilities corresponding to solutions with steepnesses slightly greater than the threshold value s^*. Figure <ref> shows plots of the superharmonic instabilities for the 2π-periodic solutions of the Whitham (from Figure <ref>(a) with s=0.109), cubic Whitham (from Figure <ref>(b) with s=0.131), and Euler equations (s=0.100). The superharmonic instabilities for the Whitham and cubic Whitham equations are essentially the same and cannot be distinguished at the scale used in the figure. However, the Euler instability is significantly less steep than the Whitham and cubic Whitham instabilities. Finally, we note that as the steepness of the solution increases, the steepnesses of the superharmonic instabilities also increase, as do their growth rates (not shown here).
At first glance it may seem surprising that approximate models capture the crest instability. However, the large-amplitude solutions to the Whitham and cubic Whitham (hereafter cWhitham) equations are very steep, and it is this steepness that triggers the superharmonic instability. The cWhitham equation more accurately predicts the onset of the MI than does the Whitham equation, while the Whitham equation more accurately predicts the onset of the SI. Although there is qualitative agreement between the three models, there is not strong quantitative agreement.
§ NONLINEAR INSTABILITY AND THE ONSET OF WAVE BREAKING
In order to study the nonlinear stability of periodic traveling-wave solutions of the Whitham and cWhitham equations perturbed by modulational and superharmonic instabilities, we consider initial conditions of the form
u_0(x,0)=u(x)+ϵ u_1(x,0),
where u(x) is a periodic traveling-wave solution, u_1(x,0) is a perturbation, and ϵ is a small real constant. Initial conditions of this form were used in codes that time-evolve solutions of the Whitham and cWhitham equations. The codes use a Fourier basis in space and the fourth-order operator-splitting technique introduced by Yoshida <cit.> in time.
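A minimal illustrative version of such a code (not the one used for the results reported here, and with no dealiasing or adaptive stepping) is sketched below: the linear nonlocal part is advanced exactly in Fourier space, the nonlinear advection part with a classical RK4 substep, and the two are composed with Strang splitting and Yoshida's fourth-order triple-jump weights. It reuses whitham_symbol from the earlier sketches.
def evolve(u0, L, dt, nsteps, cubic=False):
    # Pseudospectral time evolution of the (cubic) Whitham equation by operator splitting.
    n = u0.size
    kappa = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    lin = 1j * kappa * whitham_symbol(kappa)           # linear part: u_hat_t = -lin * u_hat
    def linear_step(u, h):
        return np.real(np.fft.ifft(np.exp(-lin * h) * np.fft.fft(u)))
    def nonlinear_rhs(u):
        ux = np.real(np.fft.ifft(1j * kappa * np.fft.fft(u)))
        return -(1.5 * u - 0.375 * u**2 if cubic else 1.5 * u) * ux
    def nonlinear_step(u, h):                          # one classical RK4 step
        k1 = nonlinear_rhs(u)
        k2 = nonlinear_rhs(u + 0.5 * h * k1)
        k3 = nonlinear_rhs(u + 0.5 * h * k2)
        k4 = nonlinear_rhs(u + h * k3)
        return u + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    def strang(u, h):                                  # symmetric second-order substep
        u = linear_step(u, 0.5 * h)
        u = nonlinear_step(u, h)
        return linear_step(u, 0.5 * h)
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))              # Yoshida's fourth-order weights
    w0 = 1.0 - 2.0 * w1
    u = u0.copy()
    for _ in range(nsteps):
        u = strang(u, w1 * dt)
        u = strang(u, w0 * dt)
        u = strang(u, w1 * dt)
    return u
For Simulation #1 below, for instance, the initial condition would be u0 = f + eps*u1, with f a traveling wave, u1 the superharmonic eigenfunction sampled on the grid and eps a small positive constant, and the conserved quantities defined earlier provide a running accuracy check.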
We ran a number of other simulations of special importance:
* Simulation #1: Here we considered the superharmonic instability. We used a single period of the magenta solutions in Figure <ref> as u(x), the superharmonic instabilities shown in Figure <ref> as u_1(x,0), and a positive value for ϵ in equation (<ref>). Initially, the perturbations grew exponentially with the rates predicted by linear theory (λ=0.2517 for Whitham and λ=0.2337 for cWhitham). Then, nonlinear effects began to play a role and the solutions evolved towards breaking. Figure <ref> contains plots of the perturbed solutions near the onset of breaking.
* Simulation #2: This simulation was the same as Simulation #1, except that we used a negative value for ϵ. In this case, the perturbations initially grew exponentially with the rates predicted by linear theory (λ=0.2517 for Whitham and λ=0.2337 for cWhitham). However, when nonlinear effects began to play a role, the solutions did not evolve towards breaking. Instead, they continued to evolve as perturbed traveling-wave solutions for a long period of time.
* Simulation #3: In this case, we considered the μ=1/3 modulational instability of solutions that are unstable with respect to both the modulational and superharmonic instabilities. (Note that the μ=1/2 modulational instability has the largest growth rate in this case. However, we were able to monitor the growth of the μ=1/3 instability because it was seeded as part of the initial condition.) We used three periods of the magenta solutions from Figure <ref> as u(x), one period of the μ=1/3 modulational instabilities shown in Figure <ref>, and a positive value for ϵ to construct the initial condition. Initially, the perturbations grew with the rates predicted by linear theory (λ=0.2571 for Whitham and λ=0.2428 for cWhitham). When nonlinear effects began to play a role, the rightmost and center peaks evolved towards breaking in a manner similar to what was observed in Simulation #1. The rightmost peaks tended towards breaking sooner than the center peaks. This is consistent with the fact that the magnitudes of the perturbations near that peak were larger than at the center peaks. The leftmost peaks evolved in a manner similar to what was observed in Simulation #2 where no trend towards breaking was observed. This is consistent with the fact that the perturbation essentially has a negative sign near that peak.
* Simulation #4: This simulation is the same as Simulation #3, except that we used a negative value for ϵ. Initially, the perturbations grew with the rates predicted by linear theory. When nonlinear effects begin to play a role, the leftmost peaks evolved towards breaking similarly to what was observed in Simulation #1. The rightmost and center peaks evolved in a manner similar to Simulation #2 where no trend towards breaking was observed. This again emphasizes that the sign and/or phase of the perturbation plays an important role in the onset of wave breaking.
* Simulations #5 and #6: In this case, we considered the μ=1/3 modulational instability of solutions that are not unstable with respect to the superharmonic instability, but are close in wave height to solutions that are unstable with respect to the superharmonic instability. These solutions behaved similarly to the solutions in Simulation #3 when ϵ was positive and similarly to the solutions in Simulation #4 when ϵ was negative. Simulation #5 demonstrates that the MI may trigger the SI, as shown by Longuet-Higgins & Cokelet <cit.>, who solved the Euler equations numerically. Nevertheless, Simulation #6 suggests the existence of a threshold value of the wave steepness of the basic wave above which the MI may trigger the SI.
Long-time simulations using initial conditions formed by perturbing the green solutions from Figure <ref> (solutions with just enough steepness to be unstable with respect to the MI, but not enough to be unstable with respect to the SI) with the MI perturbations shown in Figure <ref> did not tend towards breaking. Initially the perturbations grew with the growth rates predicted by linear theory (0.001503 for the Whitham case and 0.002578 for the cWhitham case), then nonlinear effects became important, and eventually the solutions nearly recurred to their initial states.
The Whitham breaking results are consistent with the recent work of McAllister et al. <cit.>. They showed that when the local surface slope surpasses u_x=0.577 in simulations of the Euler equations, the solution breaks. Our simulations of the Whitham equation show that the perturbed green solution, which has a maximal local steepness of u_x=0.1833, does not tend towards breaking, while the perturbed magenta solution, which has a maximal local steepness of u_x=1.90, tends towards breaking. Determining a precise local steepness cutoff for breaking in the Whitham equation remains an open question.
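The local-steepness diagnostic used in this comparison is straightforward to evaluate spectrally; an illustrative sketch (ours, reusing numpy as above) is:
def max_local_slope(u, L):
    # Maximum local surface slope max|u_x| of a periodic grid function.
    n = u.size
    kappa = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    ux = np.real(np.fft.ifft(1j * kappa * np.fft.fft(u)))
    return np.max(np.abs(ux))
A snapshot for which max_local_slope(u, L) greatly exceeds the Euler-based value of 0.577 can then be flagged as tending towards breaking, with the caveat noted above that a precise Whitham-specific cutoff is not known.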
§ SUMMARY
We have shown that periodic traveling-wave solutions with large enough amplitude to both the Whitham and cubic Whitham equations are unstable with respect to the superharmonic instability. This means that large-amplitude traveling-wave solutions of these equations with period L are unstable with respect to perturbations with period L. We showed that the threshold between superharmonic stability and instability occurs at the maxima of the Hamiltonian and ℒ_2-norm. This qualitatively aligns with the results from the Euler equations.
We presented new modulational and superharmonic instability wave steepness thresholds for the Euler equations in finite depth. We showed that the cubic Whitham equation more accurately approximates the Euler modulational instability threshold than does the Whitham equation. However, the Whitham equation more accurately approximates the Euler superharmonic instability threshold than does the cubic Whitham equation. The cWhitham equation works better for waves of moderate steepness, while the Whitham equation works better for waves of larger steepness.
We showed that the sign and/or phase of the perturbation determines whether a perturbed traveling-wave solution of the Whitham or cubic Whitham equation evolves towards breaking. This qualitatively aligns with the results from the Euler equations.
These results show that the relatively simple Whitham and cubic Whitham equations possess some of the same properties of the Euler equations. To our knowledge, these are the first superharmonic instability results for periodic solutions to approximate models of surface waves on finite depth.
10
BenjaminVariation
T.B. Benjamin.
Impulse, flow force and variational principles.
IMA Journal of Applied Mathematics, 32:3–68, 1984.
Binswanger
A.L. Binswanger, M.A. Hoefer, B. Ilan, and P. Sprenger.
Whitham modulation theory for generalized Whitham equations and a
general criterion for modulational instability.
Studies in Applied Mathematics, 147:724–751, 2021.
Borluk
H. Borluk, H. Kalisch, and D.P. Nicholls.
A numerical study of the Whitham equation as a model for steady
surface water waves.
Journal of Computational and Applied Mathematics, 296:293–302,
2016.
BronskiHurWester
J.C. Bronski, V.M. Hur, and S.L. Wester.
Superharmonic instability for regularized long-wave models.
Nonlinearity, 36:133–170, 2022.
WhithamComp
J.D. Carter.
Bidirectional Whitham equations as models of waves on shallow
water.
Wave Motion, 82:51–61, 2018.
CVWhitham
J.D. Carter, H. Kalisch, C. Kharif, and M. Abid.
The cubic-vortical Whitham equation.
Wave Motion, 110:102883, 2022.
STWhitham
J.D. Carter and M. Rozman.
Stability of periodic, traveling-wave solutions to the
capillary-Whitham equation.
Fluids, 4:58, 2019.
DK
B. Deconinck and J.N. Kutz.
Computing spectra of linear operators using Hill's method.
Journal of Computational Physics, 219:296–321, 2006.
EK
M. Ehrnström and H. Kalisch.
Traveling waves for the Whitham equation.
Differential and Integral Equations, 22:1193–1210, 2009.
ehrnstrom2013global
M. Ehrnström and H. Kalisch.
Global bifurcation for the Whitham equation.
Mathematical Modelling of Natural Phenomena, 8(5):13–30, 2013.
WhithamCusp
M. Ehrnström and E. Wahlén.
On Whitham's conjecture of a highest cusped wave for a nonlocal
dispersive equation.
Annales de l'Institut Henri Poincaré, Analyse Non Linéaire,
36:769–799, 2019.
emerald2021rigorous
L. Emerald.
Rigorous derivation from the water waves equations of some full
dispersion shallow water models.
SIAM Journal on Mathematical Analysis, 53(4):3772–3800, 2021.
FK1
M.J. Francius and C. Kharif.
On the disappearance of the lowest-order instability for steep
gravity waves in finite depth.
Physics of Fluids, 15(8):2445–2448, 2003.
FK2
M.J. Francius and C. Kharif.
Three-dimensional instabilities of periodic gravity waves in
shallow water.
Journal of Fluid Mechanics, 561:417–437, 2006.
HurJohnson2015
V.M. Hur and M.A. Johnson.
Modulational instability in the Whitham equation for water waves.
Studies in Applied Mathematics, 134(1):120–143, 2015.
Kataoka
T. Kataoka.
On the superharmonic instability of surface gravity waves on fluid of
finite depth.
Journal of Fluid Mechanics, 547:175–184, 2006.
Kharif2018
C. Kharif and M. Abid.
Nonlinear water waves in shallow water in the presence of constant
vorticity: A Whitham approach.
European Journal of Mechanics B/ Fluids, 72:12–22, 2018.
SHInstabStokesWaves
A.O. Korotkevich, P.M. Lushnikov, A.A. Semenova, and S.A. Dyachenko.
Superharmonic instability of Stokes waves.
Studies in Applied Mathematics, 151, 2023.
longuet1978instabilities
M.S. Longuet-Higgins.
The instabilities of gravity waves of finite amplitude in deep water
I. Superharmonics.
Proceedings of the Royal Society of London. A. Mathematical and
Physical Sciences, 360(1703):471–488, 1978.
LHCokelet
M.S. Longuet-Higgins and E.D. Cokelet.
Deformation of steep surface waves on water II. Growth of
normal-mode instabilities.
Proceedings of the Royal Society of London A, 364:1–28, 1978.
longuet1997crest
M.S. Longuet-Higgins and D.G. Dommermuth.
Crest instabilities of gravity waves. Part 3. Nonlinear
development and breaking.
Journal of Fluid Mechanics, 336:33–50, 1997.
McAllisterBreaking
M.L. McAllister, N. Pizzo, S. Draycot, and T.S. van den Bremer.
The influence of spectral bandwidth and shape on deep-water wave
breaking onset.
https://arxiv.org/abs/2305.08614, 2023.
McLean
J.W. McLean.
Instabilities of finite-amplitude gravity waves on water of finite
depth.
Journal of Fluid Mechanics, 114:331–341, 1982.
moldabayev2015whitham
D. Moldabayev, H. Kalisch, and D. Dutykh.
The Whitham equation as a model for surface water waves.
Physica D, 309:99–107, 2015.
saffman1985superharmonic
P.G. Saffman.
The superharmonic instability of finite-amplitude water waves.
Journal of Fluid Mechanics, 159:169–174, 1985.
Sanford2014
N. Sanford, K. Kodama, J.D. Carter, and H. Kalisch.
Stability of traveling wave solutions to the Whitham equation.
Physics Letters A, 378:2100–2107, 2014.
sato2019superharmonic
N. Sato and M. Yamada.
Superharmonic instability of nonlinear travelling wave solutions in
Hamiltonian systems.
Journal of Fluid Mechanics, 876:896–911, 2019.
tanaka1983stability
M. Tanaka.
The stability of steep gravity waves.
Journal of the Physical Society of Japan, 52(9):3047–3055,
1983.
tanaka1985stability
M. Tanaka.
The stability of steep gravity waves. Part 2.
Journal of Fluid Mechanics, 156:281–289, 1985.
tanaka1986stability
M. Tanaka.
The stability of solitary waves.
Physics of Fluids, 29(3):650–655, 1986.
tanaka1987instability
M. Tanaka, J.W. Dold, M. Lewy, and D.H. Peregrine.
Instability and breaking of a solitary wave.
Journal of Fluid Mechanics, 185:235–248, 1987.
Trillo
S. Trillo, M. Klein, G.F. Clauss, and M. Onorato.
Observation of dispersive shock waves developing from initial
depressions in shallow water.
Physica D, 333:276–284, 2016.
Whitham
G.B. Whitham.
Variational methods and applications to water waves.
Proceedings of the Royal Society of London, A, 299:6–25, 1967.
Whithambook
G.B. Whitham.
Linear and Nonlinear Waves.
John Wiley & Sons, Inc., New York, 1974.
yoshida
H. Yoshida.
Construction of higher order symplectic integrators.
Physics Letters A, 150:262–268, 1990.
Zhong
X. Zhong and S. Liao.
On the limiting Stokes wave of extreme height in arbitrary water
depth.
Journal of Fluid Mechanics, 843:653–679, 2018.
zufiria1986superharmonic
J.A. Zufiria and P.G. Saffman.
The superharmonic instability of finite-amplitude surface waves on
water of finite depth.
Studies in Applied Mathematics, 74(3):259–266, 1986.
Stratospheric dayside-to-nightside circulation drives the 3-D ozone distribution on synchronously rotating rocky exoplanets
Marrick Braam, Paul I. Palmer, Leen Decin, Maureen Cohen, Nathan J. Mayne
July 31, 2023
=============================================================================================================
Determining the habitability and interpreting future atmospheric observations of exoplanets requires understanding the atmospheric dynamics and chemistry from a 3-D perspective. Previous studies have shown significant spatial variability in the ozone layer of synchronously rotating M-dwarf planets, assuming an Earth-like initial atmospheric composition. We use a 3-D Coupled Climate-Chemistry model to understand this distribution of ozone and identify the mechanism responsible for it. We document a previously unreported connection between the ozone production regions on the photochemically active dayside hemisphere and the nightside devoid of stellar radiation and thus photochemistry. We find that stratospheric dayside-to-nightside overturning circulation can advect ozone-rich air to the nightside. On the nightside, ozone-rich air subsides at the locations of two quasi-stationary Rossby gyres, resulting in an exchange between the stratosphere and troposphere and the accumulation of ozone at the gyre locations. We identify the hemispheric contrast in radiative heating and cooling as the main driver of this ozone circulation. Dynamically-driven chemistry also impacts other tracer species in the atmosphere (gaseous and non-gaseous phase) as long as chemical lifetimes exceed dynamical lifetimes. These findings illustrate the 3-D nature of planetary atmospheres, predicting spatial and temporal variability that will impact spectroscopic observations of exoplanet atmospheres.
Planets and satellites: terrestrial planets – Planets and satellites: atmospheres – Planets and satellites: composition
§ INTRODUCTION
The past two decades have seen the discovery of numerous Earth-size exoplanets, with a substantial fraction of them orbiting in the circumstellar Habitable Zone <cit.>. Earth-size planets are preferentially discovered around M-dwarf stars <cit.>, because they are the most abundant stellar type, have relatively small radii, and are relatively cool, allowing for exoplanets in short-period orbits. The habitability of such exoplanets has been debated in light of the stellar and planetary environments <cit.>. Comprehensive numerical simulations that describe the physical and chemical properties of a planetary atmosphere in such environments are essential to understanding habitability and interpreting spectroscopic observations.
Since M stars are cooler and smaller than other stellar types, a planet in the Habitable Zone orbits at a small orbital distance and feels a strong gravitational pull from the host star. This can lead to spin-orbit resonances for the planet, so-called tidal locking, of which the most extreme case is the 1:1 resonant orbit or synchronous rotation <cit.>. Simulations with General Circulation Models (GCMs) help us understand how synchronous rotation affects the planetary atmosphere and surface habitability. First, synchronous rotation creates distinct hemispheric environments and a large temperature difference between the dayside and nightside <cit.>. Second, synchronous rotation leads to distinct photochemical environments, with strong photochemical production and destruction on the dayside and an absence of photochemistry on the nightside <cit.>. Depending on the rotation period, synchronous rotation can also lead to atmospheric circulation that is characterised by thermally direct circulation for slowly rotating planets <cit.>. The existence of this large-scale circulation requires the Rossby deformation radius to exceed the planetary radius <cit.>, which is the case for planets like Proxima Centauri b, Trappist-1 e to h, LHS-1140 b and GJ 667 C c, assuming an Earth-like atmosphere. The dayside-nightside contrast leads to an overturning circulation, with upwelling on the dayside and downwelling on the nightside <cit.>. This vertical motion results in a superposition of planetary-scale Rossby and Kelvin waves, which drives eddy momentum equatorward <cit.>. A typical part of this wave structure is a pair of quasi-stationary cyclonic gyres on the nightside <cit.>. The equatorward momentum feeds the superrotating jet <cit.>. The overturning circulation is a dominant component of the dayside-to-nightside heat transport <cit.>.
Atmospheric circulation impacts the spatial and temporal distribution of chemical species and other tracers such as clouds <cit.> and photochemical hazes <cit.>. On Earth, the Brewer-Dobson circulation controls the large-scale distribution of chemical tracers such as ozone (O_3) and water vapour in the atmosphere <cit.>. Ozone formation is initiated by photochemistry through the Chapman mechanism <cit.>, which is strongest at tropical latitudes. The Brewer-Dobson circulation describes the ascent of ozone-rich air in the tropics, followed by equator-to-pole transport and descending air motions at high latitudes, leading to meridional variations with a relatively enhanced ozone layer at high latitudes.
<cit.> simulated a tidally-locked Earth using a 3-D climate-chemistry model (CCM), which consists of a GCM coupled to a photochemical network to study the relation between (photo)chemistry, atmospheric dynamics and the thermal structure of the atmosphere. They find a breakdown of the Brewer-Dobson circulation, and instead predict that ozone accumulates on the nightside, where it has a long lifetime <cit.>. <cit.> investigated stratospheric circulation on tidally-locked exoplanets and the potential impact on the distribution of chemical species. For planets with short orbital periods (<25 days), tropical Rossby waves can induce strong equatorial jets in the stratosphere with pole-to-equator transport of airmasses <cit.>. <cit.> showed the meridional distribution of ozone from CCM simulations, confirming that this pole-to-equator circulation essentially confines photochemical species such as ozone to the equatorial regions. The existence of extratropical Rossby waves or damping of tropical Rossby waves prevents this equatorial confinement. Instead, a thermally-driven overturning circulation can drive equator-to-pole transport of photochemical species <cit.>, leading to meridional structure with enhanced ozone at high latitudes. For planets like Proxima Centauri b, <cit.> find a relatively weak tropical Rossby wave, with a thermally-driven equator-to-pole circulation existing in the stratosphere (see their Figure 12). For such planets, the enhanced ozone abundances at high latitudes were later also simulated by <cit.>.
The distribution of radiatively active species such as ozone impacts habitability <cit.>, and will determine what spectroscopic observations of the planetary atmosphere will look like <cit.>. Despite reporting a non-detection for the atmosphere, the observation of TRAPPIST-1 b illustrates the capability of JWST to characterise Earth-size exoplanets <cit.>. For the exoplanets that have an atmosphere we need to understand their 3-D nature, including circulation, clouds, and atmospheric chemistry, which motivates the application of 3-D CCMs to exoplanetary environments. Such simulations of synchronously rotating exoplanets predict a significant zonal structure in the ozone layer for planets around M-dwarfs like Proxima Centauri b <cit.> and haze distribution for hot Jupiters <cit.>. <cit.> found that ozone has a much longer chemical lifetime on the nightside as compared to the dayside of M-dwarf exoplanets. These long nightside lifetimes lead to accumulation of ozone in the nightside gyres, despite the absence of stellar radiation needed to initiate the relevant photochemistry. This spatially variable ozone layer indicates a connection between the photochemically active dayside regions and nightside gyres, which is currently not understood.
In this paper, we aim to understand the dayside-nightside connection and identify the physical and chemical mechanism that drives the spatially variable ozone layer on synchronously rotating exoplanets around M-dwarf stars. We use a 3-D CCM to investigate the spatial and temporal structure of atmospheric ozone, using a configuration for Proxima Centauri b. In Section <ref>, we briefly describe the CCM and introduce metrics used to diagnose atmospheric circulation. This will be followed by a description of the ozone distribution and its relation to atmospheric circulation in Section <ref>. In Section <ref>, we identify a possible driver of the circulation, investigate variability in our simulations and investigate potential observability. Finally, we present the conclusions of our study in Section <ref>.
§ METHODS
This section starts with a description of the 3-D coupled climate-chemistry model. This is followed by the introduction of useful metrics to diagnose the atmospheric circulation and its impact on chemistry in Section <ref>. Finally, we summarize the experimental setup in Section <ref>.
§.§ Coupled Climate-Chemistry Model
The 3-D CCM consists of the Met Office Unified Model (UM) as the GCM coupled with the UK Chemistry and Aerosol framework (UKCA), in the configuration described by <cit.>. UM-UKCA is used to simulate the atmospheric dynamics and chemistry for Proxima Centauri b, but the results apply to other planets in similar orbits around M-dwarf stars. We simulate an aquaplanet with 1 bar or 1000 hPa surface pressure <cit.> and use a horizontal resolution of 2^∘ by 2.5^∘ in latitude and longitude, respectively. The atmosphere extends up to 85 km in 60 vertical levels. We assume that Proxima Centauri b is in a 1:1 resonant orbit around its M-dwarf host star and use the orbital parameters as shown in Table <ref>. The substellar point is located at 0^∘ latitude (ϕ) and 0^∘ longitude (λ).
The UM is used in the Global Atmosphere 7.0 configuration <cit.>, including the ENDGame dynamical core to solve the non-hydrostatic fully compressible deep-atmosphere equations of motion <cit.>. Parametrized sub-grid processes include convection (a mass-flux approach, based on <cit.>), water cloud physics <cit.>, turbulent mixing <cit.> and the generation of lightning <cit.>. The incoming stellar radiation for 0.5 nm to 5.5 μm is described by the v2.2 composite spectrum for Proxima Centauri from the MUSCLES spectral survey <cit.> and extended to 10 μm using the spectrum from <cit.>. Radiative transfer through the atmosphere is treated by the Suite of Community Radiative Transfer codes based on Edwards and Slingo (SOCRATES) scheme <cit.>. The UM is one of the leading models in predicting the Earth's weather and climate and has been adapted for the study of several types of exoplanets, including terrestrial planets <cit.> but also Mini-Neptunes <cit.> and hot Jupiters <cit.>. Furthermore, the UM was part of the TRAPPIST-1e Habitable Atmosphere Intercomparison (THAI) project <cit.>.
We use UKCA to simulate the 3-D atmospheric chemical composition, by including its description of gas-phase chemistry. UKCA is fully coupled to the UM for large-scale advection, convective transport and boundary layer mixing of the chemical tracers <cit.>. The Fast-JX photolysis scheme is implemented within UKCA, to calculate photolysis rates of chemical species in the atmosphere <cit.>. By taking into account the varying optical depths of Rayleigh scattering, absorbing gases, and clouds from the UM, Fast-JX provides an interactive treatment of photolysis in calculating the 3-D distribution of chemical species in the atmosphere. We distribute the stellar flux from Proxima Centauri over the 18 wavelength bins of Fast-JX, as shown in <cit.> and their Figure 1. These fluxes are synchronised to the orbital distance of Proxima Centauri b, which provides an interactive calculation of photolysis rates over the planetary orbit. The chemistry included is a reduced version of UKCA's Stratospheric-Tropospheric scheme <cit.>, including the Chapman mechanism of ozone formation, and the hydrogen oxide (HO_x=H+OH+HO_2) and nitrogen oxide (NO_x=NO+NO_2) catalytic cycles. This results in 21 chemical species that are connected by 71 reactions. A full list of species and reactions can be found in the appendix of <cit.>.
§.§ Metrics
The meridional circulation is diagnosed using the mean meridional mass streamfunction (in kg s^-1), which calculates the northward mass flux above pressure P:
Ψ_m = 2π R_p cosϕ/g∫^P_0 vdP,
with R_p as the planetary radius, g as the gravitational acceleration and v as the zonal and temporal mean of the northward velocity component at latitude ϕ. Earlier studies using this metric for synchronously rotating exoplanets <cit.> showed 1) the existence of tropospheric Hadley and Ferrel cells transporting heat and mass from the equatorial to polar regions and 2) the impact of orbital configuration on the Brewer-Dobson circulation in the stratosphere <cit.>.
However, with the fixed substellar point of synchronously rotating planets, the mean meridional circulation varies depending on the position relative to the substellar point: for example, the hemispheric mean meridional circulation can vary significantly between the dayside and nightside. The zonal circulation is analogous to the Walker circulation cells on Earth, with rising motion at the location of the heat source, followed by eastward and westward flow aloft and, after descending on the nightside, a return flow along the surface back to the heat source <cit.>. The mean zonal mass streamfunction can be used to calculate the eastward mass flux above pressure P:
Ψ_z = 2π R_p/g∫^P_0 udP,
where u is the meridional mean of the zonal velocity component. For slow rotators, the mean zonal circulation connects the substellar and antistellar points <cit.>. The substellar-antistellar circulation also consists of a cross-polar flow <cit.>.
As elaborated in Section <ref>, the total wind flow on synchronously rotating exoplanets consists of several components. We perform a Helmholtz decomposition of the total wind flow, following <cit.>. This decomposes the total wind flow into its rotational, eddy rotational, and divergent components. The divergent wind mainly drives the substellar-antistellar overturning circulation <cit.>. Since the divergent component is roughly isotropic around the substellar point, we can move from the usual latitude-longitude or geographic coordinate system to a tidally-locked coordinate system <cit.>. The transformation between geographic coordinates and tidally-locked coordinates is illustrated in Figure <ref>. The tidally-locked latitude ϕ' is measured as the angle from the terminator and the tidally-locked longitude λ' is the angle about the substellar point, with the geographic North Pole located at (ϕ',λ')=(0,0) in tidally-locked coordinates. The substellar point and antistellar point correspond to ϕ'=90^∘ and -90^∘, respectively. It was shown by <cit.> that integrating the continuity equation in tidally-locked coordinates over λ' leads to the tidally-locked mean meridional mass streamfunction:
Ψ'_m = 2π R_p cosϕ'/g∫^P_0 v'dP,
where v' is the zonal mean of the meridional velocity component at tidally-locked latitude ϕ'. In this system, the meridional mass streamfunction calculates the mass flux toward the antistellar point (along lines of constant λ'), connecting the substellar and antistellar points and also taking cross-polar flow into account.
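As an illustration of this coordinate transformation, the short Python sketch below (illustrative only, not the diagnostic code used for the figures) maps geographic coordinates to tidally-locked coordinates for a substellar point at (ϕ,λ)=(0^∘,0^∘), following the convention stated above; the exact sign conventions of the cited definitions may differ in detail.
import numpy as np

def to_tidally_locked(lat, lon):
    # Geographic latitude/longitude (degrees) to tidally-locked coordinates (degrees),
    # assuming the substellar point is at (lat, lon) = (0, 0).
    phi = np.deg2rad(lat)
    lam = np.deg2rad(lon)
    phi_tl = np.arcsin(np.cos(phi) * np.cos(lam))                 # angle from the terminator
    lam_tl = np.arctan2(np.cos(phi) * np.sin(lam), np.sin(phi))   # angle about the substellar point
    return np.rad2deg(phi_tl), np.rad2deg(lam_tl) % 360.0
With this convention the substellar and antistellar points map to ϕ'=90^∘ and ϕ'=-90^∘, and the geographic North Pole to (ϕ',λ')=(0,0), as required.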
Since we are particularly interested in the transport of ozone around the planet, we weight the stream functions using the ozone mass mixing ratio (χ_O3), which is measured as the mass of ozone per unit mass of air in a parcel. This gives us the ozone mass streamfunction:
Ψ'_O_3 = Ψ'×χ_O_3,
which can be applied generally using any of the streamfunctions in Equations <ref>, <ref> or <ref> to give the ozone-weighted meridional, zonal, or the tidally-locked meridional mass streamfunction.
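As an illustration, Ψ'_m and its ozone-weighted counterpart can be estimated from gridded model output as follows (an illustrative sketch with assumed array layouts and a simple cumulative quadrature, not the diagnostic code used for the figures; numpy is imported as in the previous sketch):
def tl_meridional_streamfunction(v_tl, pressure, phi_tl, R_p, g):
    # Mean meridional mass streamfunction in tidally-locked coordinates (kg/s).
    # v_tl    : zonal-mean meridional velocity v' on a (pressure, phi') grid in m/s,
    #           with pressure increasing along axis 0 from the top of the atmosphere.
    # pressure: 1-D pressure levels in Pa; phi_tl: 1-D tidally-locked latitudes in degrees.
    # R_p, g  : planetary radius (m) and gravitational acceleration (m/s^2).
    dP = np.gradient(pressure)
    integral = np.cumsum(v_tl * dP[:, None], axis=0)   # approximates the integral of v' dP from 0 to P
    return 2.0 * np.pi * R_p * np.cos(np.deg2rad(phi_tl))[None, :] / g * integral

def ozone_weighted(psi, chi_o3):
    # Ozone mass streamfunction: any streamfunction weighted by the ozone mass mixing ratio.
    return psi * chi_o3
The geographic streamfunctions in Equations <ref> and <ref> follow the same pattern, differing only in the velocity average used and in the presence of the cosϕ factor.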
§.§ Experimental Setup
We use the final state of the `Chapman+HO_x+NO_x' simulation from <cit.> for the analysis. The atmosphere was initialized at an Earth-like atmospheric composition, using preindustrial values of N_2, O_2 and CO_2 <cit.>. Water vapour profiles are interactively determined by evaporation from the slab ocean. The HO_x and NO_x species are initialized at mass mixing ratios of 10^-9 and 10^-15, respectively. We report results from our simulation as a 600-day mean of the CCM output (equal to ∼50 orbits of Proxima Centauri b) after spinning up for 20 Earth years, to ensure the simulation had reached a dynamical and chemical steady state. The dynamical steady state was determined by the stabilisation of the surface temperature and radiative balance at the top of the atmosphere. The chemical steady state was determined by the stabilisation of ozone as a long-lived species, through the total column and volume mixing ratios. In diagnosing the impact of dynamical processes on the ozone distribution, parts of the spin-up period have also been used to plot the evolution of chemically inert tracers (see Figure <ref> below). The analysis of temporal variability in Section <ref> is based on a 6-day output over 900 days of simulation after reaching a steady state, to ensure we include potential variability at longer timescales.
§ RESULTS
In this section, we start with a brief description of the planetary climate and ozone layer. After that, we discuss the atmospheric circulation followed by its impact on the distribution of ozone around the planet, elaborating on the stratospheric overturning circulation. Lastly, we perform a comparison of relevant lifetimes in the atmosphere.
§.§ Planetary climate and atmospheric ozone
The simulated climate of Proxima Centauri b is broadly similar to that described by <cit.>. Furthermore, the formation of an ozone layer under quiescent stellar radiation is explained in detail by <cit.> and <cit.>. Here, we give a brief description of the details essential for this study. The simulated surface temperature of Proxima Centauri b is shown in Figure <ref>, using a geographic coordinate system in panel (a) and tidally-locked coordinate system in panel (b). Both panels show the dayside-to-nightside contrast characteristic of synchronous rotation, with dayside maxima in surface temperature of up to 289 K and minima of 157 K over the nightside Rossby gyres. Figure <ref>b demonstrates the usefulness of the tidally-locked coordinate system in identifying the dayside-to-nightside contrasts, with the terminator located at ϕ'=0^∘. The horizontal wind vectors are shown at P≈400 hPa, illustrating the tropospheric jet as well as the Rossby gyres on the nightside. The dayside-to-nightside circulation is part of an overturning circulation across multiple pressure levels that will be described in more detail in Section <ref>. At the locations of the nightside Rossby gyres <cit.>, we see the coldest areas on the planetary surface with air that is trapped and subject to radiative cooling. The atmospheric pressure in the gyres is relatively low, like the eye of tropical cyclones <cit.>. The gyres are relatively isolated from the rest of the hemisphere and their edges act as mixing barriers <cit.>. The gyres are a general feature of slowly rotating exoplanets in a synchronous orbit that have a single equatorial jet in the troposphere <cit.>.
We find a spatially variable distribution of ozone in Figure <ref>a, with a relatively thin dayside ozone layer and accumulation of ozone on the nightside. Typical values for the vertically-integrated ozone column on Earth are 200–400 Dobson Units (DU: 1 DU=2.687×10^20 molecules m^-2), with lower values over the equatorial regions and the ozone hole and higher values over high-latitude regions <cit.>. For synchronously rotating planets, most of the dayside ozone column falls within this range. The locations of the nightside Rossby gyres correspond to the maxima in the thickness of the ozone column, reaching up to 1401 DU. The gyres are not fully symmetric, evident from slightly different shapes and the average ozone columns: the area-weighted mean column of the low-λ' gyre (for λ'≤70 and λ'>320^∘) is equal to 626 DU and of the mid-λ' gyre (110<λ'≤220^∘) to 601 DU, both confined between tidally-locked latitudes -60<ϕ'<0^∘. Figure <ref>b shows that the accumulation of ozone at the gyre locations mostly occurs in the lower atmosphere, at pressure levels corresponding to the troposphere (>100 hPa).
The existence of such a spatially variable ozone layer depends on a complex interplay between photochemistry and atmospheric dynamics and changes as a function of incoming stellar radiation and planetary rotation state <cit.>. The production mechanisms for atmospheric ozone are relatively well-understood and due to photochemistry: in the presence of stellar radiation molecular oxygen will dissociate and form ozone through the Chapman mechanism <cit.>. The 3-D impact of M-dwarf radiation on the Chapman mechanism has been explored by previous studies, both in quiescent <cit.> and flaring conditions <cit.>. In all cases, an ozone layer develops around the planet. As such exoplanets are likely to rotate synchronously around their host star <cit.>, stellar radiation and the photochemical production of ozone are limited to the planetary dayside. This is illustrated in Figure <ref>, showing the time-averaged chemical tendency of ozone. The tendency denotes the balance between the production and loss of ozone due to chemical processes. We find that ozone production mainly occurs at high ϕ'>40^∘ (i.e., close to the substellar point), whereas ozone production is practically absent at the locations of the nightside gyres (-60<ϕ'<0^∘). Hence, another mechanism must be driving the relatively enhanced ozone abundances at the locations of the nightside Rossby gyres.
§.§ Overturning circulations
The relationship between the ozone distribution in Figure <ref> and the global atmospheric circulation becomes clear through the mass streamfunctions, as defined in Section <ref>. From left to right, Figure <ref> shows the mean meridional mass streamfunctions Ψ_m, Ψ'_m and Ψ'_m,O_3 that have been calculated from the divergent wind component. A positive streamfunction (red contours) indicates clockwise circulation, and a negative streamfunction (blue) indicates anticlockwise circulation.
From Figure <ref>a, we find strong poleward transport of air at tropospheric pressures (>100 hPa) in a single thermally driven circulation cell <cit.>. Moving up into the stratosphere, we find stacked layers of clockwise and anticlockwise circulation. The existence of poleward transport between ∼50 and ∼1.5 hPa indicates additional thermally-driven circulation cells. These cells transport aerosols and chemical tracers such as ozone from the equator to the poles through the stratosphere <cit.>. This equator-to-pole transport leads to an enhanced high latitude ozone layer on the dayside in geographic coordinates, with mean ozone columns of ∼490 DU above 80^∘ North and South as compared to a mean of ∼290 DU between 10^∘ North and 10^∘ South <cit.>. Since the stellar radiation at the poles is too weak to initiate the photochemistry responsible for ozone production, this polar enhancement has to be due to the poleward transport of ozone produced in the equatorial regions.
Moving to tidally-locked coordinates using Ψ'_m in Figure <ref>b, we find a single overturning circulation cell that dominates the troposphere and transports air and heat from the dayside towards the nightside. A weaker anticlockwise circulating cell is present between the antistellar point and ϕ'≈-30^∘, induced by the temperature gradient between those two points. The absence of anticlockwise motion when moving to lower pressure levels in Figure <ref>b indicates that a connection between the tropospheric cell and the stratospheric circulation exists. An overturning circulation covers essentially all of the stratosphere, connecting the dayside and nightside. Air ascends in the ozone production regions (between 0.2 and 100 hPa, see Figure <ref>) and moves through the stratosphere towards the nightside, where it subsides at the locations of the nightside gyres and thus the locations of ozone accumulation as shown in Figure <ref>.
To quantify the impact of this mass transport on the distribution of ozone, we calculate the tidally-locked ozone-weighted mass streamfunction Ψ'_m,O_3 (Equation <ref>) as shown in Figure <ref>c. From the ozone mass streamfunction we infer that the circulation of ozone through the stratosphere provides a significant contribution to the dayside-to-nightside transport. The downward ozone transport at the ϕ' of the Rossby gyres (-60<ϕ'<0^∘) indicates that this stratospheric dayside-to-nightside circulation drives ozone-rich air into the Rossby gyres and thus leads to ozone maxima on the nightside.
Figure <ref> again shows Ψ'_m,O_3, now separated into 4 ranges of λ'. Each of these λ' ranges corresponds to a distinct feature of the ozone distribution in Figure <ref>a. Figure <ref>a shows the λ'-range that contains the low-λ' gyre (λ'>320^∘ and λ'≤70^∘), and we can identify the dayside-to-nightside transport of ozone-rich air, followed by descending motion at ϕ' corresponding to the location of the Rossby gyres. The ozone is supplied from part of its production region (see Figure <ref>) between pressures of 0.3 hPa and 20 hPa. Figure <ref>b shows the low-λ'-range that does not contain the gyres and instead includes the nightside-to-dayside component of the equatorial jet. Ψ'_m,O_3 shows that there is a stratospheric clockwise circulation, but that this is separated from the lower parts of the atmosphere by an anticlockwise circulation at the ϕ' corresponding to the Rossby gyres and misses part of the ozone production regions between 10 and 100 hPa. Therefore, for 70<λ'≤110^∘, no ozone accumulation is found following the stratospheric overturning circulation. Figure <ref>c again indicates dayside-to-nightside transport of ozone-rich air, with ozone for the mid-λ' gyre (110<λ'≤220^∘) being supplied from the ozone production regions between pressures of 0.3 hPa and 15 hPa. Lastly, Figure <ref>d shows that in the final non-gyre range (220<λ'≤320^∘) there is a stratospheric overturning circulation transporting ozone-rich air, but this circulation misses part of the ozone production region between 0.3 and 10 hPa and is generally weaker than for the ranges containing the gyres. Furthermore, the air that descends below ∼10 hPa will meet the equatorial jet, leading to chemical destruction of ozone (due to HO_x-rich air from the dayside) or advection back to the dayside followed by photochemical destruction. Therefore, this λ'-range is not accumulating ozone in the lower part of the atmosphere.
Our interpretation of the atmospheric dynamics is supported by an age-of-air tracer experiment. In Figure <ref>, we show the zonally-averaged time evolution of the age-of-air-tracer during the model spin-up period. As a passive tracer, it is only affected by dynamical processes in the UM, including both advection and convection. The age-of-air tracer is initialised at 0 s and provides a measure of the amount of time that has passed since an air parcel was last found in the lowest layers of the atmosphere (below ∼2 km or 700 hPa). As such, the tracer measures the time it takes a parcel to rise from these lowest layers into the stratosphere. The tracer values are reset to 0 in the lowest layers at every model timestep. With the evolution of the age-of-air tracer over ϕ' in Figure <ref> we show that air rises over and around the substellar point, already providing much younger air to the dayside troposphere (<15 km) after 10 days of simulation. After 100 days, we find that most of the troposphere has been replenished with much younger air, except for the nightside gyres between -60^∘<ϕ'<0^∘. This picture persists after 500 days, showing that the age-of-air tracer in the nightside gyres is fed by older air from the stratosphere.
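The bookkeeping behind such an age-of-air tracer is simple; the following Python sketch is an illustrative stand-in (not the UM's actual tracer code) that assumes a 3-D tracer array and a boolean mask marking the lowest model layers below ∼700 hPa, with advection handled separately by the dynamical core.

import numpy as np

def step_age_of_air(age, lowest_layer_mask, dt):
    """Advance an age-of-air tracer (in seconds) by one model timestep.

    age               : tracer field of shape [n_lev, n_lat, n_lon]
    lowest_layer_mask : True where the level lies below ~2 km (~700 hPa)
    dt                : model timestep in seconds
    """
    age = age + dt                # every air parcel ages by one timestep
    age[lowest_layer_mask] = 0.0  # reset to zero in the lowest layers
    return age

# After advection by the resolved winds, old air only survives where it is not
# replenished from below, e.g. inside the nightside gyres.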
To further diagnose the nightside descent of ozone molecules indicated by the streamfunctions, we can define the vertical flux of ozone across pressure or altitude levels as:
F_O_3 = ∫_P_max^P_min (w · n_O_3) dP,
where w is the vertical wind velocity (m s^-1) and n_O_3 the ozone number density in molecules m^-3. Negative values correspond to downward transport and positive values to upward transport of ozone. The integration between pressure levels P_max and P_min is done to determine the total flux exchange between the stratosphere and troposphere. Using the streamfunctions in Figure <ref> and the ozone distribution in Figure <ref>b, we determine that downward transport between ∼200 and 8 hPa drives the ozone accumulation. Figure <ref> shows the vertical flux of ozone, integrated over pressures between 190 and 8 hPa. Generally, we find a relatively small but hemisphere-wide upward flux on the dayside. The nightside gyre locations stand out with a relatively strong downward flux. Hence, the ozone that was produced in the stratosphere will be transported downward into the troposphere at the gyre locations. Combining the streamfunctions, the tracer experiment and the vertical ozone flux, we find that the stratospheric overturning circulation provides a connection between the ozone production regions and the nightside gyres, leading to the accumulation of ozone in the latter. To the authors' knowledge, this is the first time this connection has been reported.
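As an illustration, the flux diagnostic in Equation <ref> can be evaluated from gridded model output with a few lines of Python. The sketch below follows the sign convention and the 190-8 hPa integration bounds described above, while the array layout and variable names are assumptions.

import numpy as np

def vertical_ozone_flux(w, n_o3, p_levels, p_min=8e2, p_max=190e2):
    """Integrate w * n_O3 over pressure between p_min and p_max (in Pa).

    w        : vertical wind [n_lev, n_lat, n_lon] in m s^-1, positive upward
    n_o3     : ozone number density [n_lev, n_lat, n_lon] in molecules m^-3
    p_levels : pressure levels in Pa matching the first array axis
    Returns a 2-D map of the flux; negative values indicate downward transport.
    """
    mask = (p_levels >= p_min) & (p_levels <= p_max)
    integrand = (w * n_o3)[mask]
    # trapezoidal integration over the selected pressure levels
    return np.trapz(integrand, x=p_levels[mask], axis=0)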
§.§ Dynamical and chemical timescales
In assessing the impact of atmospheric dynamics on chemical abundances, it is important to make a comparison between the timescales of processes that can control the ozone abundance. The dynamical lifetimes include the zonal (τ_u), meridional (τ_v), and vertical components (τ_w), and are calculated following <cit.>:
τ_u = L/u = 2π R_p/u,
τ_v = L/v = π R_p/v,
τ_w = H/w,
with L the relevant horizontal scale in terms of the planetary radius R_p, and H the vertical scale height. The zonal (u), meridional (v), and vertical (w) wind components are all in m/s. For the chemical lifetimes we use:
τ_chem = n_O_3/R_x,
where n_O_3 denotes the ozone number density (molecules m^-3) and R_x the loss of ozone (in molecules m^-3 s^-1) due to reaction x. Specifically, we use the termination reaction of the Chapman mechanism <cit.>:
O_3 + O(^3P) -> O_2 + O_2, (R1)
and the rate-limiting step of the dominant HO_x catalytic cycle <cit.>:
HO_2 + O_3 -> OH + 2O_2. (R2)
A detailed overview of the chemical reactions can be found in <cit.>. We calculate the lifetimes for sets of gridpoints centred at four distinct locations in the ozone distribution (see Figure <ref>), and subsequently take the meridional and zonal mean. These locations cover the substellar point (10 latitudes × 8 longitudes = 80 grid points), the nightside jet (10×7=70 points), and the two nightside gyres with 5×7=35 points each.
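A direct transcription of these timescale definitions into Python might look as follows; the planetary radius and scale height values are placeholders, and the wind and chemistry fields are assumed to be given as arrays in SI units.

import numpy as np

R_P = 7.16e6  # planetary radius in m (placeholder value)
H = 8.0e3     # vertical scale height in m (placeholder value)

def dynamical_timescales(u, v, w):
    """Zonal, meridional, and vertical dynamical timescales in seconds."""
    tau_u = 2.0 * np.pi * R_P / np.abs(u)
    tau_v = np.pi * R_P / np.abs(v)
    tau_w = H / np.abs(w)
    return tau_u, tau_v, tau_w

def chemical_timescale(n_o3, rate_x):
    """Chemical lifetime of ozone against loss reaction x, in seconds.

    n_o3   : ozone number density in molecules m^-3
    rate_x : ozone loss rate by reaction x in molecules m^-3 s^-1
    """
    return n_o3 / rate_x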
Figure <ref> shows the different lifetimes at each of the four locations. From Figure <ref>a we conclude that the dynamical lifetimes are shorter than the chemical lifetimes at all four locations, indicating that dynamics can be an important driver of disequilibrium abundances in this pressure range. In Figure <ref>b we highlight the differences between τ_u and τ_w, for the troposphere (<100 hPa) and lower stratosphere (between 100 hPa and 10 hPa), by using the fraction τ_u/τ_w. Vertical transport is the dominant process for τ_u/τ_w>1 (right of the vertical line) and horizontal transport for τ_u/τ_w<1 (left of the vertical line). Around the substellar point (solid lines), we determine that vertical mixing dominates the troposphere (τ_u/τ_w>1) and that zonal mixing (τ_u) starts to take over at P>80 hPa. Above this pressure, chemical abundances at the substellar point can be spread out zonally towards the nightside, connecting with the ozone-producing region that is part of the overturning circulation from Section <ref>. At the nightside location of the jet, τ_u/τ_w<1, and the zonal wind is capable of homogenising any vertically-driven disequilibrium. The circumnavigating jet then leads to the relatively thin ozone column for 70^∘<λ'<110^∘ and 220^∘<λ'<320^∘ in Figure <ref> (across all ϕ'). At the locations of the nightside gyres, Figure <ref>b shows that τ_u and τ_w are intermittently the smallest, indicating that both vertical and zonal mixing can drive disequilibrium abundances. However, as mentioned in Section <ref>, the edges of the gyres act as mixing barriers. Hence, the zonal transport leads to homogenisation within the gyres. Vertical mixing that is part of the overturning dayside-to-nightside circulation is dominant between ∼200 and 50 hPa at the gyre locations. This vertical mixing drives the observed disequilibrium abundances of tropospheric ozone at the gyre locations, and thus the maximum ozone columns in Figure <ref>a.
§ DISCUSSION
In this section, we start by describing the driving mechanism for the overturning circulation. We then show its impact on other long-lived tracers and discuss relevant temporal variability in the atmospheres of synchronously rotating exoplanets. Lastly, we produce synthetic emission spectra to investigate the observational impact of circulation-driven ozone chemistry.
§.§ Driving mechanism of the overturning circulation
The tropospheric overturning circulation for moist, rocky exoplanets in a synchronous orbit is driven by the absorption of incoming stellar radiation and latent heat release on the dayside, and longwave radiative cooling on the nightside <cit.>. <cit.> study dry, rocky planets rotating synchronously around an M-dwarf star and find that the overturning circulation is indirectly driven by the stellar radiation, in the form of nightside cooling by CO_2. They find that an overturning circulation forms in a N_2-CO_2 atmosphere, but not in a pure N_2 atmosphere <cit.>. Prescribed CO_2 distributions from <cit.> show that shortwave (SW) absorption on the planetary dayside only has a limited impact on the overturning circulation. CO_2 can cool an atmosphere when it is found in layers exhibiting a temperature inversion <cit.>. Enhanced infrared emission from increasing CO_2 levels cools the Earth's stratosphere <cit.>. On synchronously rotating planets, this can induce a downward motion on the nightside that subsequently drives dayside-to-nightside overturning circulation.
Since we focus on the stratosphere, which is relatively dry even for a moist climate of a rocky exoplanet in a synchronous orbit, we can build upon these results in identifying the driving mechanism. The SW atmospheric heating rates in Figure <ref>a show that CO_2 (the green line) acts as an important SW absorber on the dayside. The main absorber in the troposphere is H_2O, whereas CO_2 starts to become dominant above ∼170 hPa. In line with <cit.>, we find that heating due to SW absorption by CO_2 plays a minor role in the troposphere. However, in the stratosphere CO_2 absorption can become important because M-dwarfs emit their peak flux at near-infrared (NIR) wavelengths, which are relatively long compared to those of other stars. CO_2 (and H_2O) have strong NIR absorption bands <cit.>, which explains why CO_2 is the dominant absorbing species above ∼170 hPa, in contrast to ozone in the Earth's stratosphere. As expected, the total dayside heating rates (solid black line) greatly exceed the nightside values (dashed line), forming a direct driver for the overturning circulation. Additionally, Figure <ref>b shows the longwave (LW) heating rates, with negative values indicating cooling of the atmosphere. The black lines show stronger LW cooling on the nightside as compared to the dayside. Again, CO_2 is mainly responsible for these cooling rates, due to its presence in temperature inversion layers at ∼100 and ∼1 hPa. This radiative cooling on the nightside drives a large-scale downwelling which, together with SW heating on the dayside, supports the stratospheric overturning circulation <cit.>, and can explain the ozone maxima at the locations of the nightside gyres. The atmospheric pressure within the gyres is relatively low, analogous to the eye of tropical cyclones <cit.>. Such a pressure gradient naturally induces downward transport at the gyre locations. An important follow-up to this study is to investigate the ozone distribution for a variety of rotation states <cit.> in light of the circulation-driven chemistry proposed here.
§.§ Long-lived atmospheric tracers
The impact of the overturning circulation goes beyond the spatial distribution of ozone, as is also evident from the distribution of the age-of-air tracer as shown in Figure <ref>. Any tracer, gaseous or non-gaseous phase, can continue to circulate as long as its chemical lifetime is much longer than the dynamical timescales. Hence, the overturning circulation is relevant for any so-called long-lived atmospheric tracer. To illustrate this, we performed similar analyses using the species-weighted streamfunction as defined in Section <ref> on the distributions of nitric acid (HNO_3) and dinitrogen pentoxide (N_2O_5). Both of these species are signatures of lightning-induced chemistry in our simulations <cit.>. They are non-radical species with relatively long chemical lifetimes, mainly in the form of photolysis and wet deposition (rainout). In the dayside troposphere, the lifetimes against wet deposition are ∼10^-2-10^2 yr, while higher up in the atmosphere the lifetimes against photolysis are ∼10-10^2 yr. On the nightside, these loss processes are absent and thus their chemical lifetimes approach infinity. We calculate Ψ'_HNO_3 and Ψ'_N_2O_5 similar to Equation <ref>, and calculate the mean of each of the species-weighted streamfunctions over the troposphere (>10^2 hPa) and mid-to-lower stratosphere (1<P<10^2 hPa). The results are shown in Table <ref>.
The circulation cells weighted by HNO_3 and N_2O_5 are strongest in the troposphere, at ∼0.95 and ∼0.04 kg s^-1, respectively, because of the strong overturning circulation here (see Figure <ref>b). The troposphere is also the region where lightning flashes are predicted to occur and thus where HNO_3, N_2O_5, and their precursors are produced <cit.>. The difference of a factor of 10^6 and 10^7 with respect to the ozone-weighted streamfunction in Table <ref> is a consequence of the much lower predicted abundances of HNO_3 and N_2O_5. Moving up to the stratosphere, we find that the ozone-weighted streamfunction is similar to the streamfunction in the troposphere, providing the connection to the nightside gyres. For HNO_3 and N_2O_5, the streamfunction is ∼30 and ∼150 times lower in the stratosphere, due to low levels of stratospheric HNO_3 and N_2O_5 given the absence of lightning-induced chemistry at those pressure levels. Because of the lack of stratospheric HNO_3 and N_2O_5, the overturning circulation will not be able to accumulate these species at the locations of the nightside gyres (as is evident in the spatial distribution in Figure 10 of <cit.>).
In the presence of stellar flares, <cit.> show that the gyres are depleted in ozone (see their Figure 12). This can also be explained by the stratospheric overturning circulation, since flare-induced chemistry will result in a large amount of nitric oxide (NO) and nitrogen dioxide (NO_2) (together known as the NO_x chemical family) at stratospheric levels <cit.>. This NO_x can follow the stratospheric overturning circulation from the dayside to the nightside. Once on the nightside, it can be transported downward at the location of the gyres and locally deplete the ozone through the NO_x catalytic cycle <cit.>, given that flares produce sufficient NO_x.
The impact of the overturning circulation on the distribution of ozone has analogies with studies that simulate tracers in the atmospheres of synchronously rotating hot Jupiters. <cit.> identified dynamical mixing in hot Jupiter atmospheres as a process leading to cold trapping of condensible species on the planetary nightside. Their experiments involve gravitational settling as a source of these condensed particles, which leads to a gradient of tracer abundance, with fewer particles as we move up through the atmosphere. Upward mixing induced by the large-scale dynamics balances the settling of these particles, preventing the complete depletion of particles and inducing a strong spatial variation in the tracer abundances. The extent of the mechanism depends on the strength of frictional drag <cit.>. The mechanism does not require convection but follows the large-scale atmospheric motions that are ultimately driven by the dayside-nightside heating contrast <cit.>, as is the case for the circulation-driven ozone distribution discussed here. Another example of a long-lived tracer is photochemical haze, which is also expected to form at stratospheric altitudes <cit.> and, for synchronously rotating exoplanets, only on the dayside of a planet <cit.>. <cit.> show that the 3-D distribution of small photochemical hazes (≤10 nm) in hot Jupiter atmospheres is also driven by dynamical mixing. The highest tracer abundances are found above the production peak, indicating upwelling on the dayside. Then a divergent flow leads to transport towards the poles and the nightside. On the nightside, the haze particles are then advected downward and get trapped in the mid-latitude gyres <cit.>. These dynamically-induced asymmetries can produce distinctions between a planet's terminator regions, as shown for hot Jupiters <cit.>. Following up on the results presented here, we will investigate the potential terminator variability of the circulation-driven ozone distribution and its observability.
§.§ Time variability
Besides spatial variability in tracer distributions, simulations of synchronously rotating exoplanets exhibit several modes of temporal variability. The formation of the Rossby gyres is due to the thermal forcing asymmetries <cit.>. <cit.> show that these gyres oscillate over longitude λ, with the extent depending on the planet's rotation period and thus dynamical state. Planets with a slower rotation rate have longer oscillation periods, resulting in a 157.5-day oscillation for Proxima Centauri b, which was determined from the temporal evolution of the cloud cover <cit.>.
Since the stellar spectra are constant in time and the planet rotates in a 1:1 resonant orbit without eccentricity and/or obliquity, such variability has to be produced by internal atmospheric variability. <cit.> show that feedback between cloud cover and the incoming stellar radiation can influence the dynamics and drive zonal movement by the gyres, leading to variations in humidity and cloud cover over time. The accumulation of ozone (Figure <ref>) depends on the gyres so we expect there also to be a corresponding variation in atmospheric ozone. To verify this, in Figure <ref> we track the temporal evolution of the tidally-locked coordinates corresponding to the maximum in the ozone layer and the minimum in the vertical flux of ozone (F_O_3, thus corresponding to the strongest downward flux). Figure <ref>a shows ϕ' and Figure <ref>b λ' corresponding to these extrema, and the approximate extents of the gyres are indicated in yellow. The locations of the maximum ozone column and minimum vertical flux are not perfectly aligned, because the maximum ozone column corresponds to a long-term mean location of the gyre and thus depends on vertical fluxes over an extended period of time. The minimum vertical flux represents a snapshot in time and is also impacted by the upward flux from the gyre (see the red regions in Figure <ref>). From Figure <ref>a, we determine that the maximum ozone column is generally found at ϕ' corresponding to the gyre locations, with a small meridional variation over time. The minimum F_O_3 shows more variability in tidally-locked latitude, but the strongest downward flux is generally also located at the gyre locations. In Figure <ref>b, we see the variations in the tidally-locked longitude λ' over time. The low-λ' gyre typically hosts the maximum ozone column, but there are periods when the mid-λ' gyre hosts the maximum in the ozone column. The variations in the minimum F_O_3 broadly align with the maximum in the ozone column, following the gyre position that has the maximum ozone column at that time. The location of minimum F_O_3 shows more variability due to its instantaneous nature.
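The tracking of these extrema reduces to an argmax/argmin over the tidally-locked maps at each output time; a minimal Python sketch (assuming time-latitude-longitude arrays of the ozone column and of F_O_3 on the tidally-locked grid) is:

import numpy as np

def track_extrema(o3_column, f_o3, lat_tl, lon_tl):
    """Locate the maximum ozone column and the minimum (most negative)
    vertical ozone flux at every output time.

    o3_column, f_o3 : arrays of shape [n_time, n_lat, n_lon]
    lat_tl, lon_tl  : tidally-locked latitudes and longitudes in degrees
    Returns two lists of (phi', lambda') tuples, one entry per time step.
    """
    max_o3, min_flux = [], []
    for t in range(o3_column.shape[0]):
        i, j = np.unravel_index(np.argmax(o3_column[t]), o3_column[t].shape)
        max_o3.append((lat_tl[i], lon_tl[j]))
        i, j = np.unravel_index(np.argmin(f_o3[t]), f_o3[t].shape)
        min_flux.append((lat_tl[i], lon_tl[j]))
    return max_o3, min_flux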
We translate the temporal variability into simulated observables using the Planetary Spectrum Generator (PSG) <cit.>. To simulate an emission spectrum that includes half the planetary dayside and half the nightside, we extract the atmospheric pressure, temperature, and mixing ratios of relevant chemical species (N_2, O_2, CO_2, H_2O, O_3, N_2O, HNO_3 and N_2O_5) for these locations, take the zonal and meridional averages and compute radiative transfer with PSG. In Figure <ref> we show the resulting planet-to-star contrast for the JWST-MIRI wavelength range, along with a zoom-in that focuses on the ozone 9.6 μm feature. Using extrema in the gyre positions over time from Figure <ref>, we simulate the emission spectra of Proxima Centauri b for different 6-day intervals and indicate the maximum day in the legend of Figure <ref>. We find variations around the ozone features at 9.6 μm and between 14 and 16 μm that are due to absorption by CO_2, H_2O, and ozone. Hence, the region around 9.6 μm is the place to look for ozone variability. Focusing on the region around 9.6 μm shows that the maximum temporal variations are about 0.5 ppm. Spectroscopic characterisation of these absorption features to the level needed to identify these temporal variations is challenging, as detecting the features themselves would already require many days of co-added observations <cit.>. However, the recent photometric observations of the thermal emission from TRAPPIST-1 b with JWST indicate the telescope's capacity to observe favourable terrestrial exoplanets <cit.>. Mission concepts such as the Large Interferometer For Exoplanets <cit.> further utilise the mid-infrared in the characterisation of terrestrial exoplanets and will have to consider the impact of 3-D spatial and temporal variability in atmospheric dynamics and chemistry.
The hot Jupiter simulations of passive tracers by <cit.> also exhibit significant temporal variability. Oscillations in the equatorial jet and variations in the dayside-to-nightside flow produce large local variations, which could again impact the spectroscopic observations of the planets, both when conducting extended observations and when observing the same object at two different points in time.
Another mode of variability in the atmospheres of exoplanets in synchronous orbits around M-dwarfs is the Longitudinally Asymmetric Stratospheric wind Oscillation (LASO) <cit.>. Since this entails a stratospheric turnover of wind directions, it could be relevant for stratospheric ozone. Analysing ozone mixing ratios over time, we find variations in the ozone mixing ratios above ∼30 km (or ∼3.5 hPa) as a consequence of the LASO. However, these variations occur higher up in the atmosphere than the overturning circulation that feeds the gyres and thus do not affect the gyre abundances significantly. The variations are interesting from an observational perspective, which we plan to explore as part of an in-depth investigation of the observability of the circulation-driven ozone distribution.
§ CONCLUSIONS
We use a 3-D CCM (UM-UKCA) to study the spatial structure of the ozone layer on an exoplanet rotating in a 1:1 spin-orbit resonance around an M-dwarf star, using the parameters corresponding to Proxima Centauri b. Our results are relevant for similar M-dwarf orbiting planets, specifically for slowly rotating planets with a strong overturning circulation and a single equatorial jet in the troposphere. We investigate the spatial variability in the ozone layer and specifically the accumulation in two nightside ozone maxima, in the form of maximum ozone columns at the locations of the permanent Rossby gyres. Our work builds upon previous studies that have shown that M-dwarf radiation supports the emergence of a global ozone layer.
We show that stratospheric dayside-to-nightside circulation and downward motion over low-pressure nightside gyres can explain the spatial variability in ozone. The photochemistry required to initiate the Chapman mechanism of ozone formation is limited to the dayside hemisphere, with an absence of ozone production on the nightside. We find a connection between the ozone production regions on the dayside and the nightside hemisphere, using the transformation to the tidally-locked coordinate system. Meridional streamfunctions that we calculate from the divergent wind component illustrate the existence of a stratospheric dayside-to-nightside overturning circulation. This circulation consists of a single circulation cell characterized by upwelling motion in the ozone production regions, followed by stratospheric dayside-to-nightside transport and downwelling motions at the locations of the nightside gyres. The downwelling motion produces a flux of ozone from the stratosphere into the troposphere, leading to well-defined maxima in the ozone distribution. The circulation-driven ozone chemistry impacts spectroscopic observations, although the impact of temporal variability is limited to sub-ppm levels in emission spectra.
By investigating the impact of the stratospheric overturning circulation on lightning-induced chemical species (also limited to dayside production, but solely in the troposphere), we can explain why these species do not show a similar accumulation in the nightside gyres. The stratospheric overturning circulation also affects other tracer species, including gaseous chemical tracers and particulate components of photochemical haze, with the only requirement that the dynamical lifetimes are sufficiently short compared to chemical timescales.
We identify hemispheric contrasts in atmospheric heating and cooling rates as the driver for the overturning circulation. Dayside heating can directly drive the overturning circulation, and nightside cooling provides an indirect component by inducing local downward motion. The relatively low atmospheric pressure over the nightside gyres further induces downward motion here. Since the stratosphere is relatively dry, CO_2 absorption is the main contributor to these heating and cooling rates. Ozone absorption also contributes to the rates, but its contribution is weaker than that of CO_2 since M-dwarf fluxes peak close to absorption bands of CO_2.
For the first time, we find a connection between the ozone-producing dayside of synchronously rotating planets and the simulated ozone maxima on the nightside, covering hemispheric scales and multiple vertical levels in the stratosphere and troposphere. The role of the stratospheric dayside-to-nightside circulation in driving the ozone distribution around the planet illustrates the necessity of 3-D models to capture atmospheric processes correctly. Any robust interpretation of spectroscopic observations will need to understand the spatial and temporal variability of chemical species due to such circulation-driven chemistry.
§ ACKNOWLEDGEMENTS
We are very grateful to Denis Sergeev for his contribution to the coordinate transformations and valuable feedback on the manuscript. MB kindly thanks Ludmila Carone for discussing circulation regimes on synchronously rotating exoplanets.
MB, PIP and LD are part of the CHAMELEON MC ITN EJD which received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 860470. PIP acknowledges funding from the STFC consolidator grant #ST/V000594/1. LD acknowledges support from the KU Leuven IDN grant IDN/19/028 and from the FWO research grant G086217N. MC acknowledges the funding and support provided by the Edinburgh Earth, Ecology, and Environmental Doctoral Training Partnership and the Natural Environment Research Council [grant No. NE/S007407/1]. NM was supported by a UKRI Future Leaders Fellowship [grant number MR/T040866/1], a Science and Technology Facilities Council Consolidated Grant [ST/R000395/1] and the Leverhulme Trust through a research project grant [RPG-2020-82].
We gratefully acknowledge the use of the MONSooN2 system, a collaborative facility supplied under the Joint Weather and Climate Research Programme, a strategic partnership between the Met Office and the Natural Environment Research Council. Our research was performed as part of the project space ‘Using UKCA to investigate atmospheric composition on extra-solar planets (ExoChem)'. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
§ DATA AVAILABILITY
All the CCM data was generated using the Met Office Unified Model and UK Chemistry and Aerosol model (https://www.ukca.ac.uk/https://www.ukca.ac.uk/), which are available for use under licence; see http://www.metoffice.gov.uk/research/modelling-systems/unified-modelhttp://www.metoffice.gov.uk/research/modelling-systems/unified-model. The data underlying this article will be shared on reasonable request to the corresponding author, mainly motivated by the size of the data.
We used the iris <cit.> and aeolus <cit.> python packages for the post-processing of model output. Scripts to process and visualize the data are available on github: https://github.com/marrickb/o3circ_codehttps://github.com/marrickb/o3circ_code.
|
http://arxiv.org/abs/2306.11256v1
|
20230620032110
|
GUMSum: Multi-Genre Data and Evaluation for English Abstractive Summarization
|
[
"Yang Janet Liu",
"Amir Zeldes"
] |
cs.CL
|
[
"cs.CL"
] |
GUMSum: Multi-Genre Data and Evaluation for English Abstractive Summarization
Yang Janet Liu, Amir Zeldes
======================================================
Automatic summarization with pre-trained language models has led to impressively fluent results, but is prone to `hallucinations', low performance on non-news genres, and outputs which are not exactly summaries. Targeting ACL 2023's `Reality Check' theme, we present GUMSum, a small but carefully crafted dataset of English summaries in 12 written and spoken genres for evaluation of abstractive summarization.
Summaries are highly constrained, focusing on substitutive potential, factuality, and faithfulness. We present guidelines and evaluate human agreement as well as subjective judgments on recent system outputs, comparing general-domain untuned approaches, a fine-tuned one, and a prompt-based approach, to human performance.
Results show that while GPT3 achieves impressive scores, it still underperforms humans, with varying quality across genres. Human judgments reveal different types of errors in supervised, prompted, and human-generated summaries, shedding light on the challenges of producing a good summary.
§ INTRODUCTION
Recent advances in supervised summarization models as well as prompt-based approaches using large pre-trained language models have led to substantial improvements in summary fluency, with prompt-based outputs now surpassing supervised approaches in human evaluation <cit.>. At the same time, researchers in the field repeatedly note that the most commonly used datasets, such as CNN/DailyMail (CNN/DM, <cit.>) and Extreme Summarization (XSum, <cit.>), which are large-scale `found' datasets not designed to facilitate high-quality summarization, are problematic, and in many cases contain texts which are not summaries, are incomplete or unfaithful to the texts they relate to, add information not present in texts, or any combination of the above <cit.>.
Existing datasets are also limited to mainly newswire text (cf. <cit.>), which is a fraction of extant genres in general and on the Web.
The main contributions of this paper are in providing and evaluating a very genre-diverse dataset and guidelines for carefully crafted, rather than `found' summaries, which follow the same design across text types. Building on the UD English GUM treebank <cit.>, which contains 213 spoken and written texts balanced across 12 different genres, our summaries target three goals: 1) to be substitutive (i.e. informative, functioning as a substitute for reading a text, cf. <cit.>) rather than indicative (e.g. `clickbait' designed to attract readership); 2) to be faithful to the text, adhering to original formulations wherever possible; 3) to be hallucination-free, meaning summaries make a strong effort not to add any information (even if it is likely to be true), mentioning only entities and events actually contained in the text, thereby preventing typical errors associated with datasets such as XSum <cit.>.
Instructions on obtaining the dataset and responses from the human evaluation study as well as evaluation code can be found at <https://github.com/janetlauyeung/GUMSum4EVAL>.[Data is also available from the corpus website at <https://gucorpling.org/gum/> and guidelines at <https://wiki.gucorpling.org/en/gum/summarization>. ]
§ RELATED WORK
The problem of mitigating factuality and faithfulness issues in Natural Language Generation (NLG) has recently received considerable attention, with studies proposing auxiliary tasks using the Multi-Task Learning approach to constrain models, such as overlapping entities <cit.>, encoding of predicate triples from source documents <cit.> or encouraging systems to incorporate or copy entities from source documents <cit.>. In addition, <cit.> present a thorough investigation of factual errors in summarization and propose a taxonomy of error types with a focus on entity and predication errors, while <cit.> examine types of accuracy errors made by neural systems and contrast them with human errors.
These papers share concerns about the nature of widely used datasets for English, such as XSum and CNN/DM, but are limited by the lack of evaluation data specifically targeting genre-diverse texts with high-quality summaries: ones which ideally maximize faithfulness, rule out hallucinations, and follow consistent guidelines for what constitutes a summary. Although there are some non-news single-document summarization datasets covering Reddit <cit.> and Podcast data <cit.>, text types are still quite limited and data is often not publicly available <cit.>. This motivates our work to create open-access, multi-genre data with consistent guidelines across text types.
§ DATASET
Contents
GUMSum covers the 213 documents (amounting to ∼200K tokens) from the 12-genre UD English GUM corpus (<cit.>; specifically GUM V9), which provides gold syntax trees, entity types, coreference resolution, and discourse parses for the data. For this paper, we added summaries to each document in the corpus, by the authors and students in a Computational Linguistics course as part of a class-based project,[Consent to release data was given by all students.] guided by general and genre-specific instructions. Although the range of ∼20 human summarizers is broad as a result, we defined guidelines to constrain summaries and ensure they are maximally `reality-checked', i.e. faithful and factual, as evaluated below. Documents vary in length, ranging between 167 and 1,878 tokens (mean=957, SD=249.6), and cover the genres in Table <ref>.
Because of the classroom context in which summaries are collected and the natural variation in student styles and adherence to guidelines, all summaries are thoroughly checked by a teaching assistant and the course instructor. For the 24 documents in the UD treebank's official test set of GUM V9, we provide two summaries to support inter-annotator agreement and multiple-reference evaluation.
Guidelines
Previous literature has characterized `good' summaries primarily as ones that are concise, accurate, fluent, and coherent <cit.>. What these qualities mean varies depending on the summary's objective: whether it is domain-specific or general, indicative (enticing readers to read the text) or informative (aiming to substitute reading it, <cit.>) etc. GUMSum's summaries explicitly target a domain-general, substitutive, maximally concise format, which is therefore constrained to:
* have at most one sentence / 380 characters[We follow XSum in targeting 1-sentence summaries, and we aimed for a maximum of 5 lines in a PEP8-compliant IDE, but in practice no summary exceeded 380 characters. ]
* have the goal of replacing reading the text
* give participants/time/place/manner of events
* form a sentence rather than a fragment
* omit distracting information
* avoid entities or information not present in the text, even if we are fairly sure it is true
* reject synonyms for words in the text
For instance, the summary in <ref> for a story involving `robbers plundering a vault' follows the guidelines by providing a declarative sentence (criteria <ref>, <ref>), a synopsis of events, participants (exactly five robbers), time (a date) and place (Poughkeepsie) (<ref>), as well as additional details (exact name of the bank, mode of escape). <ref> is underspecified (we do not know when or where the event occurred, criterion <ref>). <ref> paraphrases the robbers' escape by introducing an entity not in the original text (uncaught by police, violating <ref>), and substitutes `robbed' for `plundered', a near synonym but a deviation from the original text's style (<ref>).
. On March 23, 1999, five bank robbers plundered the vault of First National Bank in Poughkeepsie, NY and escaped in a bus they had stolen.
. Bank robbers plundered a vault and escaped.
. Bank robbers who robbed a bank in Poughkeepsie were never caught by police.
Although these examples illustrate newswire language, GUMSum covers very different spoken and written text types as well:
. Some people debate whether the original 3 hour cut of Snyder's movie about Batman and Superman should have been released instead of the shorter version, which prioritized getting to the action faster in order to appeal to a general audience. (Reddit)
. Ash tells about her day, which includes a yoga class, marketing brand management class, doing some work while having coffee at Saxby's, and finally cooking pasta with peppers for dinner together with her boyfriend Harry. (YouTube CC-BY vlog)
The summary in <ref> follows the guidelines by not mentioning that the discussion is on Reddit (<ref>, the interlocutors are simply `people'), since Reddit is not mentioned in the source text. Similarly, while Zack Snyder's film Batman v Superman: Dawn of Justice is most likely being discussed, it is not named explicitly, leading to the formulation `Snyder's movie about Batman and Superman'. In <ref>, the summary focuses on conveying events which happen over the course of a vlog, but again, the unmentioned word `vlog' is avoided, while specific details about the participants and circumstances (people, but also the type of class) are prioritized. Summaries are thus highly constrained to remain faithful and avoid even minor potential hallucinations, such as completing the title of a film. For more on genre-specific guidelines and examples, see Appendix <ref>.
§ EVALUATION
Automatic Evaluation To evaluate how well current neural approaches produce `reality-checked' summaries approaching the ones in GUMSum, we obtain system outputs from two recent supervised systems, BRIO <cit.> and SimCLS <cit.>, as well as prompt-based outputs using a GPT3 model <cit.>, text-davinci-002 (GPT3-DV2), with the prompt `Summarize the text above in one sentence.'. We chose system models trained on the XSum dataset, since it has one-sentence summaries more in line with the GUMSum data. However, because systems have never seen data in many of GUMSum's genres, we add an additional experiment in which we fine-tune the higher-scoring supervised system, i.e. BRIO's trained model on XSum for generation,
by continuing to train it on the 165 documents in the UD treebank's train set of the underlying GUM V9 corpus (BRIO-FT in Table <ref>; details/splits and system output selection can be found in Appendix <ref>). Scores are compared to a second human-written summary obtained from a human evaluation study, using the same guidelines.
Table <ref> shows that while systems have impressive scores for ROUGE <cit.>, BERTScore (BS, <cit.>), MoverScore (MS, <cit.>), METEOR <cit.>, BLEURT <cit.>, and BLEU <cit.>, they still lag behind the human summaries across the board. Reproducing findings by <cit.>, GPT3-DV2 outperforms supervised systems trained on XSum, though our data contains much more diverse genres than those in that paper. However, fine-tuning on even a small amount of GUMSum data (165 documents) in this paper already outperforms GPT3-DV2. This strongly suggests that a major problem with supervised systems in domain-general settings is simply the training data itself. Qualitative inspection of outputs suggests fine-tuning was particularly helpful for summarizing conversations, Reddit, and how-to guides, on which all systems struggled. For humans, genre differences were much less pronounced, with lowest scores surprisingly for news.
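For reference, such scores can be computed with off-the-shelf packages; the Python sketch below uses Hugging Face's evaluate library for ROUGE and BERTScore on the example summaries above, and is only an approximation of the paper's setup (the exact metric implementations, model choices, and aggregation may differ).

import evaluate  # pip install evaluate rouge_score bert_score

predictions = ["Bank robbers plundered a vault and escaped."]
references = ["On March 23, 1999, five bank robbers plundered the vault of "
              "First National Bank in Poughkeepsie, NY and escaped in a bus "
              "they had stolen."]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))
# e.g. {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}

bertscore = evaluate.load("bertscore")
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
# METEOR, BLEU, and BLEURT can be loaded analogously via evaluate.load(...)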
Figure <ref> gives a detailed breakdown of BLEURT scores <cit.> by genre for each scenario. Human scores lead in every genre except academic, news, and interview, and generally vary less by genre than systems. BRIO-FT is improved especially on genres that diverge from XSum, such as conversations, travel guides from Wikivoyage, and how-to guides from Wikihow.
Finally, the human scores provide some numbers for ceiling performance as reflected by automatic metrics. Comparing human numbers to the best-system numbers suggests that there is a substantial gap for systems which have never been trained on in-domain data. However, for the fine-tuning (FT) scenario, we notice that ROUGE scores are neck-and-neck with the second human summary, likely because the system is trained with an objective averaging R1, R2, and R-L, on which it excels. By contrast, metrics more focused on verbatim overlap, such as BLEU, or semantic similarity, such as BLEURT, retain a more substantial gap, with FT results on BLEURT being close to GPT3-DV2 and still nearly 6 points below human performance.
It is an established finding, however, that metrics do not tell the whole story <cit.>. In fact, we regularly observe hallucinations, especially in XSum-trained systems, such as prefixing generic leads (e.g. `In our series of letters from British journalists ...', when there are no journalists involved) or inserting entities and events not mentioned in the text. We thus conduct a human evaluation of system outputs below, focusing on substitutivity, hallucinations, and faithfulness, and more importantly, apply the same evaluation criteria to the human-written summaries for a more targeted evaluation, as advocated by <cit.>.
Human Evaluation
We asked 12 Linguistics students to evaluate the full texts and the summaries of the 24 documents in the test set of the source GUM V9 corpus and to produce an additional summary for their assigned texts (see detailed instructions in Appendix <ref>).[The hourly pay is $20.29/hour based on the pay rate of the 2022 / 2023 academic year for graduate students at Georgetown University. It took about 1.5 hours in total for each annotator to complete all the tasks for the two documents.]
Figure <ref> shows humans overwhelmingly preferred the human-written summary (<ref>, 83%, with exceptions citing gold summaries as less pleasant to read), and also found it best at substituting reading the text (<ref>, 79%). Pretrained supervised systems were judged to be highly non-substitutive (88% for SimCLS, 79% for BRIO), while 71% of GPT3-DV2 outputs were judged moderately so.
While all systems exhibited some hallucinations and unfaithfulness, GPT3-DV2 performed best, in part because its outputs tended to be short (mean 138 characters vs. human 272 characters) and general, giving fewer chances for issues. At the same time, hallucination types varied substantially. Human violations in both categories were rare and subtle, resulting from evaluators adhering to guidelines very literally: for example, one evaluator proposed that a human summary's use of the pronoun `she' in reference to a vlogger whose pronouns had not been stated is a form of hallucination, while another pointed out that a mention of `Washington' in a news article was a faithfulness issue, since without specifying `DC', the place is ambiguous. Hallucinations from GPT3-DV2 were more pronounced (e.g. designating a speaker mentioning retirement as an attendee of a seminar about retirement, which was not mentioned), while XSum-trained systems had more extreme cases, such as incorrectly attributing a speech about New Zealand to its former Prime Minister John Key (BRIO), claiming a fictional short story is a BBC documentary (SimCLS), or adding to a textbook excerpt on the Civil War by calling it the longest, most expensive conflict in US history (BRIO and SimCLS). Below we provide a comparison of outputs for two documents and a qualitative analysis.
We also asked evaluators whether they could tell if summaries were NLG outputs, and learned that while `NLG' guesses were correct, and most human summaries were also recognized, humans could not tell for certain in 56% of the outputs they evaluated (incl. 8% of human-written cases).
Qualitative Analysis
Figure <ref> shows two human-written and several system-generated summaries, for a conversation in (a) and for a news text in (b).[The PDFs of the full-text of these two documents are provided in the repository of the paper for reference. ] Note the typical hallucinated lead about journalists in the first BRIO output, which disappears after fine-tuning, and a similar insertion about a Nigerian writer in the output for SimCLS. GPT3-DV2 does not show similar issues, but misses important context information, e.g. the purpose of the conversation revolving around whether speakers should go to a specific dance class, and why or why not.
The news output is substantially better for all systems. BRIO disagrees with SimCLS and GPT3 on the number of `remaining' space shuttles: three remained to be retired, but there were four total in the article, including the already retired shuttle Discovery. All pre-trained system outputs are substantially less detailed than the human summaries, which add information about time and place of the announcement, or the list of space shuttles. Human 2 commits a similar hallucination error to BRIO in identifying the already retired Discovery as being retired at document creation time. However, both human summaries agree that a prominent part of the story involves the disappointment or criticism from sites that were not selected to house retired shuttles, a topic to which most of the latter half of the original story is dedicated. The fine-tuned model successfully adds more details in line with the human summaries, but also fails to capture the site controversy in the second half of the document.
§ CONCLUSION
The dataset and guidelines introduced in this paper make a step towards consistent and constrained multi-genre evaluation of factual summarization. Our results show that domain-general summarization is still hampered by serious reliability and factuality problems, which may only become apparent when confronted with a dataset with strict `reality check' constraints and diverse text types. Even small amounts of such data can be used to fine-tune pre-trained systems, with measurable improvements for system outputs.
The human evaluation study also revealed that pre-trained systems are bad at delivering substitutive summaries, perhaps because, as pointed out in <cit.>, “summarisation datasets should contain summaries,” but often they do not. Meanwhile, human identification of possibly more minor hallucinations in human-written summaries also suggests that more work is needed in delimiting what a `reality check' for summaries should include.
§ LIMITATIONS
GUMSum is designed to constrain summaries to one sentence for all 12 genres, which raises the question of whether one-sentence summaries are useful for all possible genres or long-document summarization. This is a complex topic that needs in-depth investigation. For GUMSum, as mentioned in Section <ref>, document length is limited to 167–1,878 tokens. Moreover, in analyzing human evaluators' responses to two open-ended questions ([<ref>] and [<ref>] in Appendix <ref>), we noticed that virtually all evaluators mentioned that limiting the summary to one-sentence is very difficult and that some genres were easier than others. For example, one evaluator who was given a vlog and a travel guide commented that,
“The travel guide was much more difficult than the vlog, likely because it was longer and denser. [...] the travel guide packed a lot more information into its pages and within each sentence.”
This indicates that genre differences at the summary level are not trivial due to the style of the original text.
Additionally, this paper examined a specific subset of pre-trained systems and one version of GPT3's pre-trained language model (i.e. text-davinci-002), producing findings which may not generalize to other settings. The dataset used for the evaluation is also substantially smaller than those used in most work on summarization, due to the fact that it was carefully crafted based on both general and genre-specific guidelines to be substitutive and to avoid hallucinations and faithfulness issues, rather than originating in a found dataset, in order to conduct a more targeted evaluation, as recommended by <cit.>. While it is inevitable that more data would lead to different results, we do not believe that system rankings or overall findings would be substantially different, so long as the guidelines and genres examined here remain stable.
Finally, we must raise a further limitation involving text type and language: our study encompasses 12 specific written and spoken genres available in the UD English GUM corpus, but does not capture findings for other genres, or indeed other languages, which deserve more attention in future studies.
§ ETHICS STATEMENT
The data produced in this paper is made openly available in accordance with the original licenses of the underlying resources and academic fair use. While we are keenly aware that NLP, and particularly NLG technology, can be misused adversely, for example to generate fake news, we believe the risks posed by models which are not `reality-checked' outweigh those associated with improving models to prevent factuality and generalization issues across domains. The latter issue is particularly relevant, since technologies limited to particular domains and styles will primarily benefit actors in sectors engaged with that data (e.g. news or financial reporting), while underserving the public in other areas (e.g. computer-mediated communication). We therefore concur with this year's ACL theme that work towards `reality checking' our outputs is a net positive.
§ ACKNOWLEDGEMENTS
The human evaluation study was funded by a GSAS-GradGov Research Project Award (GRPA) towards graduate students' research and professional development endeavors at Georgetown University.
We thank the following participants for their valuable participation and insightful feedback in our human evaluation study (alphabetically ordered by last names): Kris Cook, Jessica Cusi, Helen Dominic, Luke Gessler, Caroline Gish, Lauren Levine, Cynthia Li, Kristina Lignell, Davide Locatelli, Emma Manning, and others who prefer to stay anonymous.
We thank Nathan Schneider and the anonymous reviewers for their feedback.
§ GENRE-SPECIFIC GUIDELINES
The following excerpts from genre-specific guidelines exemplify instructions which were given to annotators working on documents in those specific genres. The full guidelines can be viewed at <https://wiki.gucorpling.org/gum/summarization>.
§.§ Biographies
Summaries for biographies and other texts centered around an individual:
* typically take the form “Kim is/was a French X who ... ”
* typically include information about what this person is/was known for (“... best known for ...”)
* information about the time period and place is typically included (“a Japanese X”, “a German X living in France”, “a 19th century Kenyan X”)
Examples:
* Jared Padalecki is an award winning American actor who gained prominence in the series Gilmore Girls, best known for playing the role of Sam Winchester in the TV series Supernatural, and for his active role in campaigns to support people struggling with depression, addiction, suicide and self-harm.
* Jenna Nicole Mourey, better known as Jenna Marbles, is a very successful American YouTube personality, vlogger, comedian and actress, known for her videos "How To Trick People Into Thinking You're Good Looking" and "How To Avoid Talking To People You Don't Want To Talk To".
§.§ Fiction
* In non-metalinguistic texts (i.e. fiction itself, not texts about fiction), summarize the text as if it is a literal, true story; for example, “Huckleberry Finn is fishing”, not “In this extract from the novel Huckleberry Finn, fictional character Huck is...”
* Even if described events are factually incorrect, or involve science fiction or imaginary contexts, we summarize without commenting on this (e.g. “Three unicorns chat and decide to go fishing”)
* Unnamed active protagonists should be referred to as “a/the protagonist”
* An unnamed narrator who is not an agent in the story can be referred to as “a/the narrator”
Examples:
* Jacques Chalmers, a starfighter pilot for the Empire, is terrified of overwhelming enemy forces as he leaves his deployment carrier together with his comrades, and later narrowly escapes the Enemy after witnessing the destruction of the Kethlan system.
* Santa Claus's second wife, Betty Moroz, plays online video games with her friends Williams and Gomez while making dinner on Christmas Eve, and is then disappointed when Santa gets a call from his secretary Ginny and goes out to take care of the children of the world, missing dinner.
§.§ Vlogs
* Typically a present tense third person style is used, and events are ordered in sequence, for example: “Ash tells about her day, which includes a yoga class, marketing brand management class, doing some work while having coffee at Saxby's, and finally cooking pasta with peppers for dinner together with her boyfriend Harry.”
* As in conversations, people other than the vlogger who play a significant role in the vlog should be mentioned, but if their name is not mentioned within the excerpt being annotated, then they can only be referred to using generic terms (“a friend/relative/...”)
* If the vlogger does not mention that they are a vlogger in the video, or that this is a vlog, do not refer to them as such (e.g. “Jasmine tells about ...”, not “YouTube vlogger Jasmine tells ...”)
Examples:
* Jasmine tells about how she tested positive for Covid on December 16th after she spent time without a mask with her sister, who also tested positive, and recounts her symptoms over several days, starting from a sore throat, then fever and congestion, and finally a partial loss of smell and taste and shortness of breath.
§ EXPERIMENT DETAILS
§.§ Fine-tuning on BRIO
All three fine-tuning sessions were conducted using 1 NVIDIA A100 40GB GPU on Google Cloud Platform, which cost $2.8 per hour.[<https://cloud.google.com/compute/docs/gpus#a100-40gb>] The configurations of BRIO for XSum[<https://github.com/yixinL7/BRIO/blob/main/config.py#L37-L71>] were used, except that one default hyperparameter value was increased from 100 to 1000 in order to achieve better validation performance on GUMSum's dev set. Specifically, we take BRIO's generation model checkpoint on XSum from Huggingface's Transformers <cit.>.[<https://huggingface.co/Yale-LILY/brio-xsum-cased>] The average training time for a single run was about 7 hours. Table <ref> shows the validation performance of each run on the documents from the dev set of GUM V9.
Both dev and test partitions contain 24 documents, 2 for each genre, leaving 165 documents for training.[The complete list of train/dev/test document names is provided in the repository.]
§.§ GPT3 Output Selection
We use OpenAI's text-davinci-002[ was not available at the time.] with the prompt `Summarize the text above in one sentence.' and keep the default settings.
Due to the nondeterministic nature of the generation and in order to ensure a fair comparison, we generated 3 summaries for each text and computed average ROUGE scores (the mean of R-1/2/L) against the human-written summaries and selected the summary with the middle average ROUGE score. At the time, the Davinci model cost $0.0200 / 1K tokens. To avoid repetitive computation and to facilitate further research, we release all the GPT3-generated summaries for GUMSum. No post-editing was made on the GPT3-generated summaries.
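This selection step can be sketched in Python with the rouge-score package; the helper below assumes three candidate summaries and a single human reference per text, and is an illustration rather than the exact script used for the paper.

from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def pick_middle_candidate(candidates, reference):
    """Return the candidate whose mean ROUGE-1/2/L F1 is the median of the set."""
    def mean_rouge(candidate):
        scores = scorer.score(reference, candidate)
        return sum(s.fmeasure for s in scores.values()) / len(scores)
    ranked = sorted(candidates, key=mean_rouge)
    return ranked[len(ranked) // 2]  # middle element by average ROUGE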
§.§ BRIO-/SimCLS- Generated Summaries
We use BRIO's generation model checkpoint on XSum available on Huggingface (i.e. Yale-LILY/brio-xsum-cased) to obtain BRIO-generated summaries for GUMSum's texts. For SimCLS <cit.>, we use the checkpoint on XSum provided by the authors in their GitHub repository.[<https://github.com/yixinL7/SimCLS>] Although some BRIO-/SimCLS-generated summaries contain trailing punctuation, no post-editing was made on these system outputs.
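Generating a summary from the released checkpoint can be sketched as follows with Hugging Face Transformers; this assumes the checkpoint loads through the Auto classes and ships its own tokenizer, and the generation settings shown are illustrative rather than those used in the paper.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Yale-LILY/brio-xsum-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def summarize(text, max_input_tokens=512, max_summary_tokens=64):
    """Abstractive one-sentence-style summary from the XSum-trained checkpoint."""
    inputs = tokenizer(text, truncation=True, max_length=max_input_tokens,
                       return_tensors="pt")
    output_ids = model.generate(**inputs, num_beams=4, max_length=max_summary_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)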
§ HUMAN EVALUATION DETAILS
We recruited 12 students who are native speakers of English to participate in this human evaluation study. Each student was assigned two documents from two different genres.
They were given 4 weeks to work on a series of tasks for each document, as shown in Figure <ref> below.
Every student received a Google Form for each assigned text.
Tasks 1 and 2 Students were asked to review both general and genre-specific guidelines before writing their own one-sentence summary for the assigned document. We also asked for their consent to release their written summaries to GUMSum to facilitate multiple-reference evaluation and inter-annotator agreement, as shown in Figure <ref>.
Tasks 3 and 4 Students were presented with both system-generated and human-written summaries in order to evaluate various aspects of each summary candidate. The order of outputs shown to the evaluators was randomized for each source text, and we also asked them not to modify their written summary after viewing the presented ones. In addition, we asked the evaluators to justify their decisions in a few sentences for certain questions:
* Please choose your most and least preferred summaries respectively. You can select more than one for each category below if multiple summaries are equally most or least preferred by you.
* Please justify your decisions above in a few sentences below. For instance, you could say, "I prefer summary X over summary Y because X doesn't contain the main point (while a minor one is included) or Y contains incorrect information" etc. The more detailed the justifications, the better!
* How substitutive is each summary candidate? According to the guidelines, substitutive summaries replace reading the text as best as possible in one sentence - they are not just meant to attract readers to reading the text; they are meant to save you the trouble of reading it)
* Does the summary include information NOT PRESENT in the text even if you happen to know that it is factually correct?
* Please justify your decisions (esp. the ones you chose YES for) above in a few sentences below. For instance, you can list the relevant information below.
* Does the summary include INCORRECT information? (i.e. information PRESENT in the original text but used or interpreted in a different, misleading, or incorrect way in the summary; in other words, this summary is not faithful to the original text)
* Please justify your decisions (esp. the ones you chose YES for) above in a few sentences below. For instance, you can list the relevant information below.
* Is the summary written in good English? (e.g. no grammar errors or incomplete sentences etc.)
* Can you tell which summary is human-written and which one is computer-generated? If you are very unsure about this (confidence level at or below 50%), then choose the "can't tell" category.
* Please justify your decisions above in a few sentences below. In particular, if you have a very strong opinion about a specific summary or certain summaries, we'd highly appreciate it if you could share your valuable thoughts with us.
Wrapping-up The last part of the evaluation study asked evaluators to first rate the level of difficulty of the entire evaluation task on a scale of 1 to 5, where 1 means `Not difficult at all' and 5 means `Extremely difficult'. We also collected their responses to the following open-ended questions in order to get a better idea of the challenges of producing a good summary for various text types; these responses provide valuable insights to guide future research on designing more specifically defined guidelines and targeted evaluation.
* Based on your experience here, what's the most difficult or challenging thing you found when writing a one-sentence summary for the genre you are assigned?
* Is there anything else you would like to share regarding your experience of writing a summary and/or evaluating other existing summaries?
§.§ Additional Plots of Responses from the Human Evaluation Study
Figure <ref> shows additional responses on English fluency quality for selected systems vs. human performance, as well as a breakdown of annotators' guesses as to whether they were looking at human or system summaries.
|
http://arxiv.org/abs/2306.02869v1
|
20230605134334
|
Data-Driven Regret Balancing for Online Model Selection in Bandits
|
[
"Aldo Pacchiano",
"Christoph Dann",
"Claudio Gentile"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] |
On “Scientific Debt” in NLP: A Case for More Rigour
in Language Model Pre-Training Research
Made Nindyatama Nityasya^1, Haryo Akbarianto Wibowo^1, Alham Fikri Aji^2,
Genta Indra Winata^3, Radityo Eko Prasojo^4, Phil Blunsom^5,6, Adhiguna Kuncoro^7
^1Independent Researcher ^2MBZUAI ^3Bloomberg ^4Universitas Indonesia
^5Cohere.AI ^6University of Oxford ^7DeepMind
July 31, 2023
We consider model selection for sequential decision making in stochastic environments with bandit feedback, where a meta-learner has at its disposal a pool of base learners, and decides on the fly which action to take based on the policies recommended by each base learner. Model selection is performed by regret balancing but, unlike the recent literature on this subject, we do not assume any prior knowledge about the base learners like candidate regret guarantees; instead, we uncover these quantities in a data-driven manner. The meta-learner is therefore able to leverage the realized regret incurred by each base learner for the learning environment at hand (as opposed to the expected regret), and single out the best such regret.
We design two model selection algorithms operating with this more ambitious notion of regret and, besides proving model selection guarantees via regret balancing, we experimentally demonstrate the compelling practical benefits of dealing with actual regrets instead of candidate regret bounds.
§ INTRODUCTION
In online model selection for sequential decision making, the learner has access to a set of base learners and the goal is to adapt during learning to the best base learner that is the most suitable for the current environment. The set of base learners typically comes from instantiating different modelling assumptions or hyper-parameter choices, e.g., complexity of the reward model or the ϵ-parameter in ϵ-greedy. Which choice, and therefore which base learner, works best is highly dependent on the problem instance at hand, so that good online model selection solutions are important for robust sequential decision making. This has motivated an extensive study of model selection questions <cit.> in bandit and reinforcement learning problems.
While some of these works have developed custom solutions for specific model selection settings, for instance, selecting among a nested set of linear policy classes in contextual bandits (e.g., <cit.>), the relevant literature also provides several general-purpose approaches that work in a wide range of online model selection settings. Among the most prominent ones are FTRL-based (follow-the-regularized-leader) algorithms, including EXP4 <cit.>, Corral <cit.> and Tsallis-INF <cit.>, as well as algorithms based on regret balancing <cit.>.
These methods usually come with theoretical guarantees of the following form: the expected regret (or high-probability regret) of the model selection algorithm is not much worse than the expected regret (or high probability regret) of the best base learner. Such results are reasonable and known to be unimprovable in the worst-case <cit.>. Yet, it is possible for model selection to achieve expected regret that is systematically smaller than that of any base learner.
This may seem surprising at first, but it can be explained through an example when considering the large variability across individual runs of each base learner on the same environment.
The situation is illustrated in fig:expected_regret_motivation. On the left, we plot the cumulative expected regret of two base learners, along with the corresponding behavior of one of our model selection algorithms (ED^2RB – see sec:ED2RB below) run on top of them. On the right, we unpack the cumulative expected regret curve of one of the two base learners from the left plot, and display ten independent runs of this base learner on the same environment, together with the resulting expected regret curve (first 1000 rounds only).
Since the model selection algorithm has access to two base learners simultaneously, it can leverage a good run of either of two, and thereby achieve a good run more likely than any base learner individually, leading to overall smaller expected regret.
Such high variability in performance across individual runs of a base learner is indeed fairly common in model selection, for instance when base learners correspond to different hyper-parameters that control the explore-exploit trade-off. For a hyper-parameter setting that explores too little for the given environment, the base learner becomes unreliable: it either gets lucky and converges quickly to the optimal solution, or gets unlucky and remains stuck in a suboptimal one.
This phenomenon is a key motivation for our work. Instead of model selection methods that merely compete with the expected regret of any base learner, we design model selection solutions that compete with the regret realizations of any base learner, and have (data-dependent) theoretical guarantees that validate this ability.
While the analysis of FTRL-based model selection algorithms naturally lends itself to work with expected regret (e.g., <cit.>), the existing guarantees for regret balancing work with realized regret of base learners (e.g., <cit.>). Concretely, regret balancing requires each learner to be associated with a candidate regret bound, and the model selection algorithm competes with the regret bound of the best among the well-specified learners, i.e., those learners whose regret realization is below their candidate bound. Setting a-priori tight candidate regret bounds for base learners is a main limitation of existing regret balancing methods, as the resolution of these bounds is often the one provided by a (typically coarse) theoretical analysis.
As suggested in earlier work, we can create several copies of each base learner with different candidate bounds, but we find this not to perform well in practice due to the high number of resulting base learners. Another point of criticism for existing regret balancing methods is that, up to deactivation of base learners, these methods do not adapt to observations, since their choice among active base learners is determined solely by the candidate regret bounds themselves, which are set a-priori.
In this work, we address both these limitations, and propose two new regret balancing algorithms for model selection with bandit feedback that do not require knowing candidate regret bounds. Instead, the algorithms determine the right regret bounds sequentially in a data-driven manner, allowing them to adapt to the regret realization of the best base learner. We prove this by deriving regret guarantees that share the same form with existing results, but replace expected regret rates or well-specified regret bounds with realized regret rates, which can be much sharper (as in the example in fig:expected_regret_motivation).
From a theoretical standpoint, our work has to be contrasted with existing results where the model selection algorithm is provided with a set of candidate regret bounds for each of the base learners. As we said, our work removes this assumption and yields data-dependent model selection regret bounds. This is in contrast with existing black-box approaches such as Corral <cit.> and Regret Bound Balancing <cit.>.
From an empirical standpoint, we illustrate the validity of our approach by carrying out an experimental comparison with competing approaches to model selection via base learner pooling, and find that our new algorithms systematically outperform the tested baselines.
§ SETUP AND NOTATION
We consider a general sequential decision making framework that covers many important problem classes such as multi-armed bandits, contextual bandits and tabular reinforcement learning as special cases.
This framework or variations of it has been commonly used in the model selection literature <cit.>.
The learner operates with a policy class Π and a set of contexts 𝒳 over which is defined a probability distribution 𝒟, unknown to the learner.
In bandit settings, each policy π is a mapping from contexts to Δ_𝒜, where 𝒜 is an action space and Δ_𝒜 denotes the set of probability distributions over 𝒜. However, the concrete form of Π, 𝒳 or 𝒜 is not relevant for our purposes.
We only need that each policy π∈Π is associated with a fixed expected reward mapping μ^π: 𝒳→ [0, 1] of the form μ^π(x) = 𝔼[r | x, π],
which is unknown to the learner.
In each round t ∈ ℕ of the sequential decision process, the learner first decides on a policy π_t ∈Π. The environment then draws a context x_t ∼ 𝒟
as well as a reward observation r_t ∈ [0, 1] such that
𝔼[r_t | x_t, π_t] = μ^π_t(x_t). The learner receives (x_t, r_t) before the next round starts.
We call v^π = 𝔼_x ∼𝒟[μ^π(x)] the value of a policy π∈Π and define the instantaneous regret of π as
Reg(π) = v^⋆ - v^π = 𝔼_x ∼𝒟[μ^π_⋆(x) - μ^π(x)]
where π_⋆∈ argmax_π∈Π v^π is an optimal policy and v^⋆ its value. The total regret after T rounds of an algorithm that chooses policies π_1, π_2, … is
Reg(T) = ∑_t=1^T Reg(π_t).
Note that Reg(T) is a random quantity since the policies π_t selected by the algorithm depend on past observations, which are themselves random variables. Yet, we use in (<ref>) a pseudo-regret notion that takes expectation over reward realizations and context draws. This is most convenient for our purposes but we can achieve guarantees without those expectations by paying an additive O(√(T)) term, as is standard. We also denote by u_T = ∑_t=1^T v^π_t the total value accumulated by the algorithm over the T rounds.
Base learners.
The learner (henceforth called meta-learner) is in turn given access to M base learners that the meta-learner can consult when determining the current policy to deploy. Specifically, in each round t, the meta-learner chooses one base learner i_t ∈ [M] = {1,…, M} to follow and plays the policy suggested by this base learner. The policy that base learner i recommends in round t is denoted by π^i_t and thus π_t = π^i_t_t.
We shall assume that each base learner has an internal state (and internal clock) that gets updated only on the rounds where that base learner is chosen. After being selected in round t, base learner i_t will receive from the meta-learner the observation (x_t,r_t).
We use n^i_t = ∑_ℓ = 1^t 1{i_ℓ = i} to denote the number of times base learner i happens to be chosen up to round t, and by
u_t^i = ∑_ℓ = 1^t 1{i_ℓ = i} v^π_ℓ the total value accumulated by base learner i up to this point.
It is sometimes more convenient to use a base learner's internal clock instead of the total round index t. To do so, we will use subscripts (k) with parentheses to denote the internal time index of a specific base learner, while subscripts t refer to global round indices. For example, given the sequence of realizations (x_1,r_1), (x_2,r_2), …, π^i_(k) is the policy base learner i wants to play when being chosen the k-th time,
i.e., π^i_t = π^i_(n^i_t).
The total regret incurred by a meta-learner that picks base learners i_1,…, i_T can then be decomposed into the sum of regrets incurred by each base learner:
Reg(T) = ∑_t=1^T Reg(π_t) = ∑_i = 1^M ∑_k = 1^n^i_T Reg(π^i_(k)).
§.§ Data-Driven Model Selection
Our goal is to perform model selection in this setting: We devise sequential decision making algorithms
that have access to base learners as subroutines and are guaranteed to have regret that is comparable
to the smallest realized regret, among all base learners in the pool, despite not knowing a-priori which base learner will happen to be best for the environment at hand (𝒟 and μ^π), and the actual realizations (x_1,r_1), (x_2,r_2), …, (x_T,r_T).
In order to better quantify this notion of realized regret, the following definition will come in handy.
The regret scale of base learner i after being played k rounds is ∑_ℓ=1^k Reg(π_(ℓ)^i)/√(k).
For a positive constant d_min, the regret coefficient of base learner i after being played k rounds is defined as
d^i_(k) = max {∑_ℓ=1^k Reg(π_(ℓ)^i)/√(k), d_min}.
That is, d^i_(k)≥ d_min is the smallest number such that the incurred regret is bounded as ∑_ℓ=1^k Reg(π^i_(ℓ)) ≤ d^i_(k)√(k).
Further we define the monotonic regret coefficient of base learner i after being played k rounds as
d̅^i_(k) = max_ℓ∈ [k] d^i_(ℓ).
We use a √(k) rate in this definition since that is the most commonly targeted regret rate in stochastic settings. Our results can be adapted, similarly to prior work <cit.> to other rates but the √(T) barrier for model selection <cit.> remains of course.
It is worth emphasizing that both d^i_(k) and d̅^i_(k) in the def:regretcoeff are random variables depending on (x_1,r_1), (x_2,r_2), …, (x_ℓ,r_ℓ), where ℓ = min{t : n^i_t = k}. We illustrate them in fig:regret_coefficients.
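As a concrete illustration of def:regretcoeff, the following sketch computes d^i_(k) and d̅^i_(k) from a base learner's sequence of instantaneous regrets; these quantities are only computable in simulation, where the true regrets Reg(π^i_(ℓ)) are known, and the function name is ours.

import numpy as np

def regret_coefficients(inst_regrets, d_min=1.0):
    # inst_regrets[l-1] = Reg(pi^i_(l)) for l = 1, ..., K (known only in simulation).
    inst_regrets = np.asarray(inst_regrets, dtype=float)
    k = np.arange(1, len(inst_regrets) + 1)
    d = np.maximum(np.cumsum(inst_regrets) / np.sqrt(k), d_min)  # regret coefficient d^i_(k)
    d_bar = np.maximum.accumulate(d)                             # monotonic regret coefficient
    return d, d_bar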
§.§ Running Examples
The above formalization encompasses a number of well-known online learning frameworks, including finite horizon Markov decision processes and contextual bandits, and model selection questions therein. We now introduce two examples but refer to earlier works on model selection for a more exhaustive list <cit.>.
Tuning UCB exploration coefficient in multi-armed bandits. As a simple illustrative example, we consider multi-armed bandits where the learner chooses in each round an action a_t from a finite action set 𝒜 and receives a reward r_t drawn from a distribution with mean μ^a_t and unknown but bounded variance σ^2. In this setting, we directly identify each policy with an action, i.e., Π = 𝒜, and define the context set 𝒳 = {∅} as empty. The value of an action / policy a is simply v^a = μ^a.
The variance σ^2 strongly affects the amount of exploration necessary, thereby controlling the difficulty or “complexity” of the learning task. Since the explore-exploit trade-off of a learner is typically controlled through a hyper-parameter, it is beneficial to perform model selection among base learners with different trade-offs to adapt to the right complexity of the environment at hand.
We use a simple UCB strategy as a base learner that chooses the next action as argmax_a ∈𝒜 μ̂(a) + c √(ln(n(a) / δ)/n(a)), where n(a) and μ̂(a) are the number of pulls of arm a so far and the average reward observed. Here c is the confidence scaling and we instantiate different base learners i ∈ [M] with different choices c_1, …, c_M for c. The goal is to adapt to the best confidence scaling c_i_⋆, without knowing the true variance σ^2.[We choose this example for its simplicity. An alternative without model selection would be UCB with empirical Bernstein confidence bounds <cit.>. However, adaptation with model selection works just as well in more complex settings, e.g., linear bandits and MDPs, where empirical variance confidence bounds are not available or much more complicated.]
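A minimal sketch of such a UCB base learner is given below; pulling each arm once before using the index is our own initialization choice for well-definedness, not something specified in the text.

import numpy as np

class UCBBaseLearner:
    # Chooses argmax_a  mu_hat(a) + c * sqrt(ln(n(a) / delta) / n(a));
    # different confidence scalings c give different base learners.
    def __init__(self, n_actions, c, delta=0.1):
        self.c, self.delta = c, delta
        self.n = np.zeros(n_actions)      # pulls per arm
        self.mean = np.zeros(n_actions)   # empirical mean rewards

    def act(self):
        if np.any(self.n == 0):           # pull every arm once first (our choice)
            return int(np.argmin(self.n))
        bonus = self.c * np.sqrt(np.log(self.n / self.delta) / self.n)
        return int(np.argmax(self.mean + bonus))

    def update(self, a, r):
        self.n[a] += 1
        self.mean[a] += (r - self.mean[a]) / self.n[a]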
Nested linear bandits. In the stochastic linear bandit model, the learner chooses an action a_t ∈𝒜 from a large but finite action set 𝒜⊂ℝ^d, for some dimension d>0, and receives as reward r_t = a_t^⊤ω + white noise, where ω∈ℝ^d is a fixed but unknown reward vector.
This fits in our framework by considering policies of the form π_θ(x) = argmax_a ∈𝒜 ⟨ a, θ⟩ for a parameter θ∈ℝ^d, defining the context set 𝒳 = {∅} as empty and the mean reward as μ^π(x) = π(x)^⊤ω, which is also the value v^π.
We here consider the following model selection problem, which was also a motivating application in <cit.>. The action set 𝒜⊂ℝ^d^M has some maximal dimension d^M>0, and we have an increasing sequence of M dimensions d^1 < … < d^M. Associated with each d^i is a base learner that only considers policies Π_i of the form π_θ_i(x) = argmax_a ∈𝒜 ⟨ P_d^i[a], θ_i ⟩ for θ_i ∈ℝ^d^i and P_d^i[·] being the projection onto the first d^i dimensions. That is, the i-th base learner operates only on the first d^i components of the unknown reward vector ω∈ℝ^d^M. If we stipulate that only the first d^i_⋆ dimensions of ω∈ℝ^d^M are non-zero (d^i_⋆ being unknown to the learner), we are in fact competing in a regret sense against the base learner that operates with the policy class Π_i_⋆, the one at the “right" level of complexity for the underlying ω.
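The action choice of base learner i in this nested setting reduces to an argmax over projected actions, as the short sketch below illustrates (the rows of actions live in ℝ^{d^M}; the function name is ours).

import numpy as np

def nested_action(actions, theta_i, d_i):
    # Score each action a by <P_{d_i}[a], theta_i>, i.e., using only the
    # first d_i coordinates of a, and play the maximizer.
    return int(np.argmax(actions[:, :d_i] @ theta_i))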
Nested stochastic linear contextual bandits. We also consider a contextual version of the previous setting <cit.>, where contexts x_t ∈𝒳 are drawn i.i.d. and a policy maps each context to some action a_t ∈𝒜. The expected reward is then μ^π(x) = ψ(x,π(x))^⊤ω for a known feature embedding ψ : 𝒳×𝒜→ℝ^d and an unknown vector ω∈ℝ^d. Just as above, we consider the nested version of this setting where ψ and ω live in a large ambient dimension d^M but only the first d^i_⋆ entries of ω are non-zero.
§ DATA-DRIVEN REGRET BALANCING
We introduce and analyze two data-driven regret balancing algorithms.
§.§ Data-Driven Regret Balancing Through Doubling
We present our first meta-algorithm (Doubling Data Driven Regret Balancing (D^3RB)) in alg:doublebalancing, which serves as a warm up for our slightly more involved second meta-algorithm.
D^3RB maintains over time three main estimators: (1) regret coefficients d^i_t, meant to estimate the monotonic regret coefficients d̅^i_t from def:regretcoeff, (2) the average reward estimators u^i_t/n^i_t, and (3) the balancing potentials ϕ^i_t, which are instrumental in the implementation of the exploration strategy based on regret balancing (other instances of model selection via regret balancing can be found in earlier papers, e.g., <cit.>).
At each round t the meta-algorithm picks the base learner i_t with the smallest balancing potential so far (ties broken arbitrarily). The algorithm plays the policy π_t suggested by that base learner on the current context x_t, receives the associated reward r_t, and forwards (x_t,r_t) back to that base learner only. Then D^3RB performs a misspecification test, meant to check whether the current estimate of the regret of base learner i_t is compatible with the data collected so far. If that is not the case (the test “triggers"), the regret coefficient d^i_t is doubled, and the balancing potential of base learner i_t is updated.
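The following sketch summarizes one pass of this loop as we read the D^3RB pseudo-code (alg:doublebalancing), instantiated for a multi-armed bandit; the base-learner/environment interface (act, update, env.step returning a context–reward pair), the clamping inside the logarithm, and restricting the comparison to already-played learners are our simplifications, not part of the paper's pseudo-code.

import numpy as np

def d3rb(base_learners, env, T, c=1.0, d_min=1.0, delta=0.05):
    M = len(base_learners)
    d = np.full(M, d_min)      # regret coefficients d^i_t
    n = np.zeros(M)            # number of plays n^i_t
    u = np.zeros(M)            # accumulated observed reward per base learner
    phi = np.full(M, d_min)    # balancing potentials phi^i_t

    def radius(j):             # c * sqrt(ln(M ln n^j / delta) / n^j), clamped for small n^j
        return c * np.sqrt(np.log(M * max(np.log(max(n[j], 2.0)), 1.0) / delta) / n[j])

    for t in range(1, T + 1):
        i = int(np.argmin(phi))                  # regret balancing: smallest potential
        x, r = env.step(base_learners[i].act())  # play the suggested action
        base_learners[i].update(x, r)
        n[i] += 1
        u[i] += r
        played = np.where(n > 0)[0]
        best_lcb = max(u[j] / n[j] - radius(j) for j in played)
        ucb_i = u[i] / n[i] + d[i] / np.sqrt(n[i]) + radius(i)
        if ucb_i < best_lcb:                     # misspecification test triggers
            d[i] *= 2.0                          # doubling step
        phi[i] = d[i] * np.sqrt(n[i])            # update balancing potential
    return d, n, u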
The following result quantifies the regret properties of D^3RB in terms of the monotonic regret coefficients of the base learners at hand.
theoremmaindouble
With probability at least 1 - δ, the regret of alg:doublebalancing with parameters δ and d_min≥ 1 is bounded in all rounds T ∈ ℕ as[
Here and throughout, Õ hides log-factors.
]
Reg(T) = Õ( d̅^⋆_T M√(T) + (d̅^⋆_T)^2 √(MT))
where d̅^⋆_T = min_i ∈ [M]d̅^i_T = min_i ∈ [M]max_t ∈ [T] d^i_t is the smallest monotonic regret coefficient among all learners.
One way to interpret thm:maindouble is the following. If the meta-learner were given ahead of time the index of the base learner achieving the smallest monotonic regret coefficient d̅^⋆_T, then the meta-learner would follow that base learner from beginning to end. The resulting regret bound for the meta-learner would be of the form[
Yet, see thm:mainestimate, where d̅^⋆_T is replaced by the smaller d^⋆_T.
]
(d̅^⋆_T)√(T).
Then the price D^3RB pays for aggregating the M base learners is essentially a multiplicative factor of the form M + d̅^⋆_T √(M).
§.§ Data-Driven Regret Balancing Through Estimation
A more refined version of D^3RB is the ED^2RB algorithm (Estimating Data-Driven Regret Balancing), contained in alg:estimatebalancing. The main difference compared to D^3RB is that ED^2RB replaces the misspecification-test-plus-doubling operation with a data-dependent estimate d̂^i_t of the regret coefficients, coupled with a slightly more careful definition of the balancing potentials ϕ^i_t deployed for regret balancing. The function clip(x; a,b) therein clips the real argument x to the interval [a,b]. This more careful definition allows us to replace in the regret bound the monotonic regret coefficient d̅^⋆_T with the sharper regret coefficient d^⋆_T:
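The data-dependent estimate and the clipped potential update can be sketched as below, reconstructed from the estimator used in the analysis; the clamped logarithm, the array-based bookkeeping and the function name are ours, not the paper's pseudo-code.

import numpy as np

def ed2rb_update(i, u, n, phi, c=1.0, d_min=1.0, delta=0.05):
    # u, n, phi: per-learner accumulated observed reward, play counts and potentials;
    # i is the base learner that was just played.
    M = len(n)

    def radius(j):
        return c * np.sqrt(np.log(M * max(np.log(max(n[j], 2.0)), 1.0) / delta) / n[j])

    played = np.where(n > 0)[0]
    best_lcb = max(u[j] / n[j] - radius(j) for j in played)
    # d_hat^i_t = max{d_min, sqrt(n^i) * (best LCB - own average reward - own radius)}
    d_hat = max(d_min, np.sqrt(n[i]) * (best_lcb - u[i] / n[i] - radius(i)))
    # clip(x; a, b): the potential never shrinks and at most doubles per round
    phi[i] = np.clip(d_hat * np.sqrt(n[i]), phi[i], 2.0 * phi[i])
    return d_hat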
theoremmainestimate
With probability at least 1 - δ, the regret of alg:estimatebalancing with parameters δ and d_min≥ 1 is bounded in all rounds T ∈ ℕ as
Reg(T) = Õ(d^⋆_T M√(T) + (d^⋆_T)^2 √(MT))
where d^⋆_T = min_i ∈ [M]max_j ∈ [M]d̅^i_T_j is the smallest regret coefficient among all learners, and T_j is the last time t when base learner j was played and ϕ^j_t+1 < 2ϕ^j_t.
Up to the difference between d^⋆_T and d̅^⋆_T, the guarantees in thm:maindouble and thm:mainestimate are identical. Further, since d^⋆_T ≤d̅^⋆_T, the guarantee for ED^2RB is never worse than that for D^3RB. It can however be sharper, e.g., in environments with favorable gaps where we expect that a good base learner may achieve a O(log(T)) regret instead of a √(T) rate and thus d^i_t of that learner would decrease with time. The regret coefficient d^⋆_T can benefit from this while d̅^⋆_T cannot decrease with T, and thus provide a worse guarantee.
Importantly, both our data-dependent guarantees recover existing data-independent model-selection results up to the precise M dependency. Specifically, ignoring M factors, our bounds scale at most as (d̅^⋆_T)^2 √(T) while the previous literature on the subject (e.g., <cit.>, Corollary 2) scales as (d^i_⋆)^2√(T). In the case of existing regret balancing, d^i_⋆√(T) is the best well-specified regret bound. We always have d^i_⋆≥d̅^⋆_T but, as mentioned earlier, the regret bound has to be specified ahead of time which typically is informed by expected or high-probability regret guarantees of the base learners. These therefore do not leverage the favorable cases that our data-dependent bounds automatically adapt to. Similarly for FTRL algorithms <cit.>, d^i_⋆ is the expected regret scale and thus also never sharper than our d̅^⋆_T and not capturing favorable realizations. As we will see in the experimental evaluation in the following section, there is often a stark difference between the expected performance and the data-dependent performance which confirms that the improvement in our bounds is important in practice.
Proof technique. The proofs for both our regret bounds can be found in app:doubling_proofs and <ref>, respectively.
We build on the existing technique for analyzing regret balancing <cit.>. However, this analysis heavily relies on fixed candidate regret bounds, and removing those introduced several technical challenges. To overcome them, we had to disentangle the balancing potentials ϕ^i_t from the estimated regret coefficients d̂^i_t and to combine this with clipping or the doubling estimator. This allowed us to show the necessary monotonicity properties and generalized balancing conditions that enable our improved data-dependent bounds.
§ EXPERIMENTS
We evaluate our algorithms on several different synthetic benchmarks (environments, base-learners and model selection tasks), and compare their performance against existing meta-learners. For all details of the experimental setup and additional results, see app:experimental_details.
Environments and base-learners: As the first environment, we use a simple 5-armed multi-armed bandit problem (MAB) with standard Gaussian noise. We then use two linear bandit settings, as also described in sec:running_example: linear bandits with stochastic rewards, either with a stochastic context (CLB) or without (LB). As base learners, we
use UCB for the MAB environment (see also sec:running_example) and Linear Thompson Sampling (LinTS) <cit.> for the LB and CLB settings.
[Figure: Experiment 2.]
Model selection task: We consider 3 different model selection tasks. In the first, conf (“confidence"), we vary the explore-exploit trade-off in the base learners. For UCB, different base learners correspond to different settings of c, the confidence scaling that multiplies the exploration bonus (fig:confidence_MAB). Analogously, for LinTS, we vary the scale c of the parameter perturbation (see fig:confidence_linear).
For the second task dim (“dimension"), we vary the number of dimensions d_i the base learner considers when choosing the action (see second and third example in sec:running_example, as well as fig:nested_linear for results). Finally, we also consider a “self” task (fig:expected_regret_sample_runs), where all base learners are copies of the same algorithm.
Meta-learners: We evaluate both our algorithms, D^3RB from alg:doublebalancing and ED^2RB from alg:estimatebalancing. We compare them against the Corral algorithm <cit.> with the stochastic wrapper from <cit.>, as a representative of FTRL-based meta-learners. We also evaluate Regret Balancing from <cit.> with several copies of each base learner, each with a different candidate regret bound selected on an exponential grid (RB Grid). We also include in our list of competitors three popular algorithms: the Greedy algorithm (always selecting the best base learner so far with no exploration), UCB <cit.> and EXP3 <cit.>. These are legitimate choices as meta-algorithms, but they either do not come with theoretical guarantees in the model selection setting (UCB, Greedy) or enjoy worse guarantees <cit.>.
Discussion. An overview of our results can be found in tab:general_overview, where we report the cumulative regret of each algorithm at the end of each experiment. fig:confidence_MAB–<ref> contain the entire learning curves (as regret scale = cumulative regret normalized by √(T)).
We observe that D^3RB and ED^2RB both outperform all other meta-learners on all but the second benchmark. UCB as a meta-learner performs surprisingly well in benchmarks on MABs but performs poorly on the others.
Thus, our methods feature the smallest or close to the smallest cumulative regret among meta-learners on all benchmarks.
Comparing D^3RB and ED^2RB, we observe overall very similar performance, suggesting that ED^2RB may be preferable due to its sharper theoretical guarantee. While the model selection tasks conf and dim are standard in the literature, we also included one experiment with the self task where we simply select among different instances of the same base learner. This task was motivated by our initial observation (see also fig:expected_regret_motivation) that base learners often have very high variability between runs, which model selection can capitalize on. Indeed, fig:expected_regret_sample_runs shows that
our algorithms, as well as UCB, achieve much smaller overall regret than the base learner. This suggests that model selection can be an effective way to turn a notoriously unreliable algorithm like the greedy base learner (UCB with c=0 is Greedy) into a robust learner.
§ CONCLUSIONS AND LIMITATIONS
We proposed two new algorithms for model selection based on the regret balancing principle but without the need to specify candidate regret bounds a-priori. This calls for more sophisticated regret balancing mechanics, which make our methods data-driven and, as an important benefit, allow them to capitalize on the variability in a base learner's performance. We demonstrate this empirically, showing that our methods perform well across several synthetic benchmarks, as well as theoretically. We prove that both our algorithms achieve regret that is not much worse than the realized regret of any base learner. This data-dependent guarantee recovers existing data-independent results but can be significantly tighter.
In this work, we focused on the fully stochastic setting, with contexts and rewards drawn i.i.d. We believe an extension of our results to arbitrary contexts is fairly easy by replacing the deterministic balancing with a randomized version. In contrast, covering the fully adversarial setting is likely possible by building on top of <cit.> but requires substantial innovation.
abbrvnat
§ APPENDIX
The appendix contains the extra material that was omitted from the main body of the paper.
§ DETAILS ON FIG:EXPECTED_REGRET_MOTIVATION
We consider a 5-armed bandit problem with rewards drawn from a Gaussian distribution with standard deviation 6 and mean 10/10, 6/10, 5/10, 2/10, 1/10 for each arm respectively.
We use a simple UCB strategy as a base learner that chooses the next action as argmax_a ∈𝒜 μ̂(a) + c √(ln(n(a) / δ)/n(a)), where n(a) and μ̂(a) are the number of pulls of arm a so far and the average reward observed.
The base learners use δ = 1/10 and c = 3 or c = 4 respectively.
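A sketch of the simulation behind this figure is given below; the number of rounds, the number of runs and the random seed are illustrative choices rather than values reported in the paper.

import numpy as np

means = np.array([1.0, 0.6, 0.5, 0.2, 0.1])   # 10/10, 6/10, 5/10, 2/10, 1/10
sigma, delta, T, n_runs = 6.0, 0.1, 1000, 10
rng = np.random.default_rng(0)

def run_ucb(c):
    n = np.ones(len(means))          # one initial pull per arm
    mu = rng.normal(means, sigma)    # corresponding observed rewards
    regret = np.zeros(T)
    for t in range(T):
        a = int(np.argmax(mu + c * np.sqrt(np.log(n / delta) / n)))
        r = rng.normal(means[a], sigma)
        n[a] += 1
        mu[a] += (r - mu[a]) / n[a]
        regret[t] = means.max() - means[a]
    return np.cumsum(regret)

# Average cumulative (pseudo-)regret curves of the two base learners c = 3 and c = 4.
curves = {c: np.mean([run_ucb(c) for _ in range(n_runs)], axis=0) for c in (3.0, 4.0)}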
§ ANALYSIS COMMON TO BOTH ALGORITHMS
We define the event in which we analyze both algorithms as the event in which for all rounds t ∈ ℕ and base learners i ∈ [M] the following inequalities hold
- c √(n^i_t lnMln n^i_t/δ) ≤ û^i_t - u^i_t ≤ c √(n^i_t lnMln n^i_t/δ)
for the algorithm parameter δ∈ (0, 1) and a universal constant c > 0.
Event from def:evente has probability at least 1 - δ.
Consider a fixed i ∈ [M] and t and write
û^i_t - u^i_t
= ∑_ℓ = 1^t 1{i_ℓ = i}(r_ℓ - v^π_ℓ)
= ∑_ℓ = 1^t 1{i_ℓ = i}(r_ℓ - 𝔼[r_ℓ | π_ℓ] )
Let ℱ_t be the sigma-field induced by all variables up to round t before the reward is revealed, i.e., ℱ_t = σ( {x_ℓ, π_ℓ, i_ℓ}_ℓ∈ [t-1]∪{x_t, π_t, i_t}).
Then, X_ℓ = 1{i_ℓ = i}(r_ℓ - 𝔼[r_ℓ | π_ℓ] ) ∈ [-1, +1] is a martingale-difference sequence w.r.t. ℱ_ℓ. We will now apply a Hoeffding-style uniform concentration bound from <cit.>.
Using the terminology and definitions in this article, by case Hoeffding I in Table 4, the process S_t = ∑_ℓ=1^t X_ℓ is sub-ψ_N with variance process V_t = ∑_ℓ=1^t 1{i_ℓ = i}/ 4.
Thus by using the boundary choice in Equation (11) of <cit.>, we get
S_t ≤ 1.7 √(V_t ( lnln(2 V_t) + 0.72 ln(5.2 / δ) ) ) = 0.85√(n^i_t( lnln(n^i_t/2) + 0.72 ln(5.2 / δ)))
for all t where V_t ≥ 1 with probability at least 1 - δ.
Applying the same argument to -S_t gives that
| û^i_t - u^i_t |
≤ 3 ∨ 0.85√(n^i_t( lnln(n^i_t /2) + 0.72 ln(10.4 / δ)))
holds with probability at least 1 - δ for all t.
We now take a union bound over i ∈ [M] and rebind δ→δ / M. Then picking the absolute constant c sufficiently large gives the desired statement.
For each i ∈ [M], let F_i: ℕ∪{0}→ℝ_+ be a nondecreasing potential function that does not increase too quickly, i.e.,
F_i(ℓ) ≤ F_i(ℓ+1) ≤α· F_i(ℓ) ∀ℓ∈∪{0}
and that 0<F_i(0) ≤α· F_j(0) for all (i, j) ∈ [M]^2.
Consider a sequence (i_t)_t ∈ ℕ such that i_t ∈ argmin_i ∈ [M] F_i(n^i_t-1) and n^i_t = ∑_ℓ = 1^t1{i_ℓ = i}, i.e., i_t ∈ [M] is always chosen to have the smallest current potential. Then, for all t ∈ ℕ
max_i ∈ [M] F_i(n^i_t) ≤α·min_j ∈ [M] F_j(n^j_t).
Our proof works by induction over t. At t = 1, we have n^i_0 = 0 for all i ∈ [M] and thus, by assumption, the statement holds. Assume now the statement holds for t.
Notice that since n^i_t and F_i are non-decreasing, we have for all i ∈ [M]
min_i F_i(n^i_t) ≥min_i F_i(n^i_t-1).
Further, for all i ≠ i_t that were not chosen in round t, we even have F_i(n^i_t-1) = F_i(n^i_t) for all i ≠ i_t.
We now distinguish two cases:
Case i_t ∉ argmax_i F_i(n^i_t-1). Since the potential of all i ≠ i_t that attain the max is unchanged, we have
max_i F_i(n^i_t) = max_i F_i(n^i_t-1)
and therefore max_i F_i(n^i_t)/min_j F_j(n^j_t)≤max_i F_i(n^i_t-1)/min_j F_j(n^j_t-1)≤α.
Case i_t ∈ argmax_i F_i(n^i_t-1).
Since i_t attains both the maximum and the minimum, and hence all potentials are identical, we have
max_i F_i(n^i_t) = F_i_t(n^i_t_t) ≤ F_i_t(n^i_t_t-1 + 1) ≤α F_i_t(n^i_t_t-1) = αmin_j F_j(n^j_t-1).
§ PROOFS FOR THE DOUBLING ALGORITHM (ALGORITHM <REF>)
In event , for each base learner i and all rounds t ∈ ℕ, the regret multiplier d^i_t satisfies
d^i_t ≤ 2 d̅^i_t .
Note that instead of showing this for all rounds t, we can also show this equivalently for all number k of plays of base learner i.
If the statement is violated for base learner i, then there is a minimum number k of plays at which this statement is violated.
Note that by definition d̅_(0)^i = d_min and by initialization d^i_(0) = d_min, hence this k cannot be 0.
Consider now the round t where the learner i was played the k-th time, i.e., the first round at which the statement was violated.
This means d^i_t > 2 d̅^i_t but d^i_t-1≤ 2 d̅^i_t-1 still holds. Since d^i_t can be at most 2d^i_t-1, we have
d^i_t-1 > d̅^i_t. We will now show that in this case, the misspecification test could not have triggered and therefore d^i_t = d^i_t-1≤ 2 d̅^i_t-1≤ 2 d̅^i_t which is a contradiction. To show that the test cannot trigger, consider the LHS of the test condition and bound it from below as
u^i_t_t/n_t^i_t + d^i_t_t-1√(n^i_t_t)/n_t^i_t + c√(lnMln n^i_t_t/δ/n_t^i_t) ≥u^i_t_t/n_t^i_t + d^i_t_t-1√(n^i_t_t)/n_t^i_tEvent
≥u^i_t_t/n_t^i_t + d̅^i_t_t√(n^i_t_t)/n_t^i_td^i_t_t-1 > d̅^i_t_t
≥u^i_t_t + ∑_ℓ=1^n_t^i_t(π^i_t_(ℓ))/n_t^i_tdefinition of d^i_t
≥ v^⋆definition of regret
≥u^j_t/n_t^jdefinition of v^⋆
≥u^j_t/n_t^j - c √(lnMln n^j_t/δ/n_t^j).
Event
This holds for any j ∈ [M] and thus, the test does not trigger.
In event , for each base learner i and all rounds t ∈ ℕ, the number of times the regret multiplier d^i_t has doubled so far is bounded as follows:
log_2 (d^i_t/d_min) ≤ 1 + log_2 (d̅^i_t/d_min) .
The potentials in alg:doublebalancing are balanced at all times up to a factor 3, that is,
ϕ^i_t≤ 3 ϕ^j_t for all rounds t ∈ and base learners i, j ∈ [M].
We will show that lem:balancing_abstract_lemma with α = 3 holds when we apply the lemma to F_i(n^i_t-1) = ϕ^i_t.
First F_i(0) = ϕ^i_1 = d_min for all i ∈ [M] and, thus, the initial condition holds. To show the remaining condition, it suffices to show that ϕ^i_t is non-decreasing in t and cannot increase more than a factor of 3 per round.
If i was not played in round t, then ϕ^i_t = ϕ^i_t-1 and both conditions holds.
If i was played, i.e., i = i_t, then
ϕ^i_t = d^i_t √(n^i_t)≤ 2d^i_t-1√(n^i_t)≤
2d^i_t-1√(n^i_t - 1)√(n^i_t/(n^i_t - 1)) = 2 ϕ^i_t-1√(n^i_t/(n^i_t - 1))≤ 3 ϕ^i_t-1 if n^i_t > 1
2d_min√(1) = 2ϕ_t-1^i if n^i_t = 1
In event , the regret of all base learners i is bounded in all rounds T as
∑_k = 1^n^i_T Reg(π^i_(k)) ≤ 6(d̅^j_T)^2/d_min√(n^i_T)
+ 6d̅_T^j √(n_T^j)
+ (6 cd̅^j_T/d_min + 2c)√(n_T^i lnMln T/δ)+ 1 + log_2 d̅^i_T/d_min ,
where j ∈ [M] is an arbitrary base learner with n^j_T > 0.
Consider a fixed base learner i and time horizon T, and let t ≤ T be the last round where i was played but the misspecification test did not trigger. If no such round exists, then set t = 0. By corr:num_doublings, i can be played at most 1 + log_2 d̅^i_T/d_min times between t and T and thus
∑_k = 1^n^i_T Reg(π^i_(k)) ≤∑_k = 1^n^i_t Reg(π^i_(k)) + 1 + log_2 d̅^i_T/d_min.
If t = 0, then the desired statement holds. Thus, it remains to bound the first term in the RHS above when t > 0. Since i = i_t and the test did not trigger we have, for any base learner j with n^j_t > 0,
∑_k = 1^n^i_t Reg(π^i_(k)) = n^i_t v^⋆ - u^i_t definition of regret
= n^i_t v^⋆ - n^i_t/n^j_tu^j_t + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( n^j_t v^⋆ - u^j_t) + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( ∑_k = 1^n^j_t Reg(π^j_(k))) + n^i_t/n^j_tu^j_t - u^i_t definition of regret
≤n^i_t/n^j_t(d^j_t √(n^j_t)) + n^i_t/n^j_tu^j_t - u^i_t definition of regret rate
≤√(n^i_t/n^j_t) d^j_t √(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
We now use the balancing condition in lem:doubling_balanced to bound the first factor √(n^i_t / n^j_t). This condition gives that ϕ^i_t+1≤ 3ϕ^j_t+1. Since both n^j_t > 0 and n^i_t > 0, we have ϕ^i_t+1 = d^i_t √(n^i_t) and ϕ^j_t+1 = d^j_t √(n^j_t).
Thus, we get
√(n^i_t/n^j_t) = √(n^i_t/n^j_t)·d^i_t/d^j_t·d^j_t/d^i_t = ϕ^i_t+1/ϕ^j_t+1·d^j_t/d^i_t≤ 3 d^j_t/d^i_t≤ 6 d̅^j_t/d_min.
Plugging this back into the expression above, we have
∑_k = 1^n^i_t Reg(π^i_(k)) ≤6(d̅^j_t)^2/d_min√(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
To bound the last two terms, we use the fact that the misspecification test did not trigger in round t. Therefore
u^i_t ≥u^i_t - c√(n_t^i lnMln n^i_t/δ)event
=n^i_t ( u^i_t/n^i_t + c√(lnMln n^i_t/δ/n^i_t) + d^i_t/√(n^i_t)) - 2c√(n_t^i lnMln n^i_t/δ) - d_t^i √(n_t^i)
≥n^i_t/n^j_tu^j_t - √(n^i_t/n^j_t) c√(n^i_t lnMln n^j_t/δ) - 2c√(n_t^i lnMln n^i_t/δ) - d_t^i √(n_t^i)test not triggered
Rearranging terms and plugging this expression in the bound above gives
∑_k = 1^n^i_t Reg(π^i_(k)) ≤6( d̅^j_t)^2/d_min√(n^i_t) + √(n^i_t/n^j_t) c√(n^i_t lnMln n^j_t/δ) + 2c√(n_t^i lnMln n^i_t/δ) + d_t^i √(n_t^i)
≤6(d̅^j_t)^2/d_min√(n^i_t) + 6 d̅^j_t/d_min c√(n^i_t lnMln n^j_t/δ) + 2c√(n_t^i lnMln n^i_t/δ) + d_t^i √(n_t^i)eqn:dn_connection
≤6(d̅^j_t)^2/d_min√(n^i_t) + 6 d̅^j_t/d_min c√(n^i_t lnMln n^j_t/δ) + 2c√(n_t^i lnMln n^i_t/δ) + 3d_t^j √(n_t^j)eqn:dn_connection
≤6(d̅^j_t)^2/d_min√(n^i_t)
+ 3d_t^j √(n_t^j)
+ (6 cd̅^j_t/d_min + 2c)√(n_t^i lnMln t/δ)n^i_t ≤ t
≤6(d̅^j_t)^2/d_min√(n^i_t)
+ 6 d̅_t^j √(n_t^j)
+ (6 cd̅^j_t/d_min + 2c)√(n_t^i lnMln t/δ)lem:dbound
Finally, since t ≤ T and therefore d̅^j_t ≤d̅^j_T and n^j_t ≤ n^j_T (and similarly for i), the statement follows.
*
By lem:highprob, event from def:evente has probability at least 1 - δ. In event , we can apply lem:base_learner_regret for each base learner.
Summing up the bound from that lemma gives
Reg(T) ≤∑_i = 1^M [ 6(d̅^j_T)^2/d_min√(n^i_T)
+ 6 d̅_T^j √(n_T^j)
+ (6 cd̅^j_T/d_min + 2c)√(n_T^i lnMln T/δ)+ 1 + log_2 d̅^i_T/d_min]
≤ 6M d̅^j_T √(T) + M + M log_2 √(T)/d_min + [ 6(d̅^j_T)^2/d_min
+ 4 d̅^j_T/d_min 2c√(lnMln T/δ)]∑_i = 1^M√(n_T^i)
≤( 6√(M)d̅^j_T + 6(d̅^j_T)^2/d_min
+ 8c d̅^j_T/d_min√(lnMln T/δ))√(MT) + M + M log_2 T/d_min.
Plugging in d_min≥ 1 yields
Reg(T) ≤( 6√(M)d̅^j_T + 6(d̅^j_T)^2 + 8c d̅^j_T√(lnMln T/δ))√(MT) + M + M log_2 T
= O ( (M d̅^j_T + √(M)(d̅^j_T)^2 + d̅^j_T √(lnMln T/δ)) √(T)+ M ln(T))
= Õ(d̅^j_T M√(T) + (d̅^j_T)^2 √(MT)) ,
as desired.
§ PROOFS FOR THE ESTIMATING ALGORITHM (ALGORITHM <REF>)
In event , the regret rate estimate d̂^i_t in alg:estimatebalancing does not overestimate the current regret rate, that is, for all base learners i ∈ [M] and rounds t ∈ ℕ, we have
d̂^i_t ≤ d^i_t.
Note that the algorithm only updates d̂^i_t when learner i is chosen, and only then does d^i_t change. Further, the condition holds initially since d̂^i_1 = d_min≤ d^i_1. Hence, it is sufficient to show that this condition holds whenever d̂^i_t is updated.
The algorithm estimates d^i_t as
d̂^i_t = max{d_min, √(n_t^i)( max_j ∈ [M]û^j_t/n_t^j - c√(lnMln n^j_t/δ/n_t^j)
- û^i_t/n_t^i - c√(lnMln n^i_t/δ/n_t^i)) } .
If d̂^i_t = d_min, then the result holds since by definition d^i_t ≥ d_min. In the other case, we have
d̂^i_t = √(n_t^i)( max_j ∈ [M]û^j_t/n_t^j - c√(lnMln n^j_t/δ/n_t^j)
- û^i_t/n_t^i - c√(lnMln n^i_t/δ/n_t^i))
≤√(n_t^i)( max_j ∈ [M]u^j_t/n_t^j - u^i_t/n_t^i)event
≤√(n_t^i)( v^⋆ - u^i_t/n_t^i)definition of optimal value v^⋆
= (n_t^i v^⋆ - u^i_t)/√(n_t^i) = ∑_k=1^n_t^i Reg(π^i_(k))/√(n_t^i) regret definition
≤ d^i_t , definition of d^i_t
as claimed.
In event , the balancing potentials ϕ^i_t in alg:estimatebalancing satisfy for all t ∈ ℕ and i ∈ [M] where n^i_t ≥ 1
ϕ^i_t+1≤ d^i_t √(n^i_t).
If i ≠ i_t, then ϕ^i_t+1 = ϕ^i_t, d^i_t = d^i_t-1 and n^i_t = n^i_t-1. It is therefore sufficient to only check this condition for i = i_t.
By definition of the balancing potential, we have when i = i_t
ϕ^i_t+1 ≤max{ϕ^i_t, d̂^i_t √(n^i_t)}≤max{ϕ^i_t, d^i_t √(n^i_t)} ,
where the last inequality holds because of lem:dboundest. If n^i_t = 1, then ϕ^i_t = d_min and d^i_t √(n^i_t)≥ d_min√(1) by definition, and the statement holds. Otherwise, we can assume that ϕ^i_t≤ d^i_t-1√(n^i_t-1) by induction. This gives
ϕ^i_t+1≤max{d^i_t-1√(n^i_t-1), d^i_t √(n^i_t)}.
We notice that d^i_t √(n^i_t) = max{d_min√(n^i_t), ∑_k=1^n^i_t(π^i_(k))}. Since each term inside the max is non-decreasing in t, d^i_t √(n^i_t) is also non-decreasing in t, and therefore ϕ^i_t+1≤ d^i_t √(n^i_t), as anticipated.
In event , for all T ∈ ℕ and i ∈ [M], the number of times the balancing potential ϕ^i_t doubled until time T in alg:estimatebalancing is bounded by
log_2 (T max{1, 1 / d_min}).
The balancing potential ϕ^i_t is non-decreasing in t and ϕ^i_1 = d_min. Further, by lem:phibound, we have
ϕ^i_t+1≤ d^i_t √(n^i_t)≤max{d_min√(n^i_t) , n^i_t}.
Thus, the number of times ϕ^i_t can double is at most
log_2 ( max{√(n^i_t) , n^i_t/d_min}) ≤log_2 (t max{1, 1 / d_min}) .
The balancing potentials in alg:estimatebalancing are balanced at all times up to a factor 2, that is,
ϕ^i_t≤ 2 ϕ^j_t for all rounds t ∈ and base learners i, j ∈ [M].
We will show that lem:balancing_abstract_lemma with α = 2 holds when we apply the lemma to F_i(n^i_t-1) = ϕ^i_t.
First F_i(0) = ϕ^i_1 = d_min for all i ∈ [M] and, thus, the initial condition holds. To show the remaining condition, it suffices to show that ϕ^i_t is non-decreasing in t and cannot increase more than a factor of 2 per round. This holds by the clipping in the definition of ϕ^i_t+1 in the algorithm.
In event , the regret of all base learners i is bounded in all rounds T as
∑_k = 1^n^i_T Reg(π^i_(k))
≤2(d^j_t)^2/d_min√(n^i_t) + 2 d^j_t √(n^j_t) + 2c(1 + 2d^j_t/d_min)√(n^i_t lnMln t/δ) + log_2 max{T, T/ d_min} ,
where j ∈ [M] is an arbitrary base learner with n^j_T > 0 and t ≤ T is the last round where i = i_t and ϕ^i_t+1 < 2ϕ^i_t.
Consider fixed base learner i and time horizon T, and let t ≤ T be the last round where i was played and ϕ^i_t did not double, i.e., ϕ^i_t+1 < 2ϕ^i_t. If no such round exists, then set t = 0. By lem:est_num_double, i can be played at most log_2 (T max{1, 1 / d_min}) times between t and T and thus
∑_k = 1^n^i_T Reg(π^i_(k)) ≤∑_k = 1^n^i_t Reg(π^i_(k)) + log_2 (T max{1, 1 / d_min}).
If t = 0, then the desired statement holds. Thus, it remains to bound the first term above when t > 0. We can write the regret of base learner i up to t in terms of the regret of any learner j with n^j_t > 0 as follows:
∑_k = 1^n^i_t Reg(π^i_(k)) = n^i_t v^⋆ - u^i_t definition of regret
= n^i_t v^⋆ - n^i_t/n^j_tu^j_t + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( n^j_t v^⋆ - u^j_t) + n^i_t/n^j_tu^j_t - u^i_t
= n^i_t/n^j_t( ∑_k = 1^n^j_t Reg(π^j_(k))) + n^i_t/n^j_tu^j_t - u^i_t definition of regret
≤n^i_t/n^j_t(d^j_t √(n^j_t)) + n^i_t/n^j_tu^j_t - u^i_t definition of regret rate
≤√(n^i_t/n^j_t) d^j_t √(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
We now use the balancing condition in lem:estimating_balanced to bound the first factor √(n^i_t / n^j_t). This condition gives that ϕ^i_t+1≤ 2ϕ^j_t+1.
Since ϕ^i_t+1 < 2ϕ^i_t and, thus, the balancing potential was not clipped from above, we have ϕ^i_t+1≥d^i_t √(n^i_t). Further,
since n^j_t > 0 we can apply lem:phibound to get ϕ^j_t+1≤ d^j_t √(n^j_t).
Thus, we get
√(n^i_t/n^j_t) = √(n^i_t/n^j_t)·d^i_t/d^j_t· d^j_t/d^i_t≤ϕ^i_t+1/ϕ^j_t+1·d^j_t/d^i_t≤ 2 d^j_t/d^i_t≤ 2 d^j_t/d_min.
Plugging this back into the expression above, we have
∑_k = 1^n^i_t Reg(π^i_(k)) ≤2(d^j_t)^2/d_min√(n^i_t) + n^i_t/n^j_tu^j_t - u^i_t.
To bound the last two terms, we use the regret coefficient estimate:
n^i_t/n^j_tu^j_t - u^i_t
= n^i_t (u^j_t/n^j_t - u^i_t/n^i_t)
≤ n^i_t (û^j_t/n^j_t - û^i_t/n^i_t)
+ c√(n^i_t lnMln n^i_t/δ) + c n^i_t √(lnMln n^j_t/δ/n_t^j)event
= n^i_t (û^j_t/n^j_t - c √(lnMln n^j_t/δ/n_t^j) - û^i_t/n^i_t - c√(lnMln n^i_t/δ/n_t^i))
+ 2c√(n^i_t lnMln n^i_t/δ) + 2c n^i_t √(lnMln n^j_t/δ/n_t^j)
≤d^i_t √(n^i_t)
+ 2c√(n^i_t lnMln n^i_t/δ) + 2c n^i_t √(lnMln n^j_t/δ/n_t^j)definition of d^i_t
≤d^i_t √(n^i_t) + 2c(1 + √(n^i_t/n^j_t))√(n^i_t lnMln t/δ)n^i_t ≤ t and n^j_t ≤ t
≤d^i_t √(n^i_t) + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ)eqn:dn_connection_est
≤ϕ^i_t+1 + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ)ϕ^i_t+1≥d^i_t √(n^i_t)
≤ 2ϕ^j_t+1 + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ)lem:estimating_balanced
≤ 2 d^j_t √(n^j_t) + 2c(1 + 2 d^j_t/d_min)√(n^i_t lnMln t/δ). lem:phibound
Plugging this back into the expression above, we get the desired statement:
∑_k = 1^n^i_T Reg(π^i_(k))
≤2(d^j_t)^2/d_min√(n^i_t) + 2 d^j_t √(n^j_t) + 2c(1 + 2d^j_t/d_min)√(n^i_t lnMln t/δ) + log_2 max{T, T/ d_min} .
*
By lem:highprob, event from def:evente has probability at least 1 - δ. In event , we can apply lem:base_learner_regret_est for each base learner.
Summing up the bound from that lemma, with j = argmin_i' ∈ [M] max_i d̅^i'_T_i (the minimizer in the definition of d^⋆_T), gives
Reg(T) ≤∑_i = 1^M [ 2(d^j_T_i)^2/d_min√(n^i_T_i) + 2 d^j_T_i√(n^j_T_i) + 2c(1 + 2d^j_T_i/d_min)√(n^i_T_ilnMln T/δ) + log_2 max{T, T/ d_min}]
≤ 2M d^⋆_T √(T) + M log_2 max{T, T/ d_min} + [ 2(d^⋆_T)^2/d_min
+ 6 d^⋆_T/d_min c√(lnMln T/δ)]∑_i = 1^M√(n_T^i)
≤( 2√(M)d̅^j_T + 2(d̅^⋆_T)^2/d_min
+ 6c d^⋆_T/d_min√(lnMln T/δ))√(MT) + M log_2 max{T, T/ d_min}.
Plugging in d_min≥ 1 gives
Reg(T) ≤( 2√(M) d^⋆_T + 2(d^⋆_T)^2 + 6c d^⋆_T√(lnMln T/δ))√(MT) + M log_2 T
= O ( (M d^⋆_T + √(M)(d̅^⋆_T)^2 + d^j_T √(lnMln T/δ)) √(T)+ M ln(T))
= Õ( d^⋆_T M√(T) + (d^⋆_T)^2 √(MT)) ,
as claimed.
§ EXPERIMENTAL DETAILS
§.§ Meta-Learners
We now list the meta-learners used in our experiments.
Corral. We used the Corral algorithm as described in <cit.> and <cit.>. Since we work with stochastic base algorithms, we use the Stochastic Corral version of <cit.>, where the base algorithms are updated with the observed reward r_t, instead of the importance-sampling version required by the original Corral algorithm of <cit.>. The pseudo-code is in Algorithm <ref>. In accordance with theoretical results we set η = Θ(1/√(T) ). We test the performance of the Corral meta-algorithm with different settings of the initial learning rate η∈{ .1/√(T), 1/√(T), 10/√(T)}. In the table and plots below we call them CorralLow, Corral and CorralHigh respectively. In tab:exp3_overview_appendix we compare their performance on different experiment benchmarks. We see that Corral and CorralHigh achieve a better performance than CorralLow. The performance of Corral and CorralHigh is similar.
EXP3. At the beginning of each time step the EXP3 meta-algorithm samples a base learner index i_t ∼ p_t from its base learner distribution p_t. The meta-algorithm maintains importance-weighted estimators of the cumulative rewards R_t^i for each base learner i ∈ [M]. After receiving feedback r_t from base learner i_t, the importance-weighted estimators are updated as R_t+1^i = R_t^i + 1(i = i_t) r_t/p_t^i_t. The distribution is then p_t+1^i = (1-γ)exp( η R_t+1^i )/∑_i'exp(η R_t+1^i') + γ/M, where η and γ are a learning rate and a forced exploration parameter, respectively. In accordance with theoretical results (see for example <cit.>), in our experiments we set the learning rate to η = √(log(M)/(MT)) and set the forced exploration parameter γ = 0.1/√(T). We test the performance of the EXP3 meta-algorithm with different settings of the forced exploration parameter γ∈{0, .1/√(T), 1/√(T)}. In tab:exp3_overview_appendix we call them EXP3Low, EXP3 and EXP3High. All these different variants have a similar performance.
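A compact sketch of this EXP3 meta-learner is given below; the max-shift in the exponentials is a standard numerical-stability detail we add, not part of the description above.

import numpy as np

class EXP3Meta:
    def __init__(self, M, T, gamma_scale=0.1, seed=0):
        self.M = M
        self.eta = np.sqrt(np.log(M) / (M * T))   # learning rate
        self.gamma = gamma_scale / np.sqrt(T)     # forced exploration
        self.R = np.zeros(M)                      # importance-weighted cumulative rewards
        self.p = np.full(M, 1.0 / M)
        self.rng = np.random.default_rng(seed)

    def select(self):
        return int(self.rng.choice(self.M, p=self.p))

    def update(self, i, r):
        self.R[i] += r / self.p[i]                # importance weighting of the observed reward
        w = np.exp(self.eta * (self.R - self.R.max()))
        self.p = (1.0 - self.gamma) * w / w.sum() + self.gamma / self.M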
Greedy. This is a pure exploitation meta-learner. After playing each base learner at least once, the Greedy meta-algorithm maintains the same cumulative reward statistics {u^i_t }_i ∈ [M] as D^3RB and ED^2RB. The base learner i_t chosen at time t is i_t = _i∈ [M]u^i_t/n_t^i.
UCB. We use the same UCB algorithm as described in sec:running_example. We set the scaling parameter c = 1.
D^3RB and ED^2RB. These are the algorithms in Algorithm <ref> and <ref>. We set therein c = 1 and d_min = 1.
§.§ Base Learners
All base learners have essentially been described, except for the Linear Thompson Sampling (LinTS) algorithm, which was used in all our linear experiments.
In our implementation we use the algorithm described in <cit.>. On round t the Linear Thompson Sampling algorithm has played x_1, ⋯, x_t-1 ∈ ℝ^d with observed responses r_1, ⋯, r_t-1. The rewards are assumed to be of the form r_ℓ = x_ℓ^⊤θ_⋆ + ξ_ℓ for an unknown vector θ_⋆ and a conditionally zero-mean random variable ξ_ℓ. An empirical model of the unknown vector θ_⋆ is produced by fitting a ridge regression least squares estimator θ̂_t = argmin_θ λ‖θ‖^2 + ∑_ℓ=1^t-1 ( x_ℓ^⊤θ - r_ℓ)^2 for a user-specified parameter λ > 0. This can be written in closed form as θ̂_t = ( 𝐗^⊤𝐗 + λ𝕀)^-1𝐗^⊤ y, where 𝐗∈ℝ^(t-1)× d is the matrix whose row ℓ equals x_ℓ. At time t a sample model is computed as θ̃_t = θ̂_t + c √(d)( 𝐗^⊤𝐗 + λ𝕀)^-1/2η_t, where η_t ∼𝒩(0, 𝕀) and c > 0 is a confidence scaling parameter. This is one of the parameters that we vary in our experiments. If the action set at time t equals 𝒜_t (in the contextual setting 𝒜_t changes every time step, while in the fixed-action-set linear bandit case it remains the same), the chosen action is x_t = argmax_x ∈𝒜_t x^⊤θ̃_t. In our experiments λ = 1 and θ_⋆ is set to a scaled version of the vector (0, ⋯, d-1).
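One LinTS step as described above can be sketched as follows; computing the matrix inverse square root through an eigendecomposition is our implementation choice, and `actions` stands for the |𝒜_t| × d array of currently available actions.

import numpy as np

def lin_ts_action(actions, X, y, c, lam=1.0, rng=None):
    rng = rng or np.random.default_rng()
    d = actions.shape[1]
    if len(X):
        A = X.T @ X + lam * np.eye(d)
        theta_hat = np.linalg.solve(A, X.T @ y)      # ridge-regression estimate
    else:
        A, theta_hat = lam * np.eye(d), np.zeros(d)
    vals, vecs = np.linalg.eigh(A)                   # A is symmetric positive definite
    A_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    theta_tilde = theta_hat + c * np.sqrt(d) * (A_inv_sqrt @ rng.standard_normal(d))
    return int(np.argmax(actions @ theta_tilde))     # greedy action under the sampled model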
§.§ Detailed Experiments Description
Figure <ref> illustrates the overall structure of our experiments. Experiments 1 through 6 are those also reported in the main body of the paper. The table below contains a detailed description of each experiment, together with the associated evidence in the form of learning curves (regret scale vs. rounds). Finally, Table <ref> contains the final (average) cumulative regret for each meta-learner on each experiment.
|
http://arxiv.org/abs/2306.01925v2
|
20230602213044
|
Improving the generalizability and robustness of large-scale traffic signal control
|
[
"Tianyu Shi",
"Francois-Xavier Devailly",
"Denis Larocque",
"Laurent Charlin"
] |
cs.LG
|
[
"cs.LG"
] |
[email protected]
Department of Civil Engineering, University of Toronto, 35 St. George Street,Toronto, Ontario, M5S 1A4, Canada
Department of Decision Sciences at HEC Montreal,Quebec, Canada
[cor1]Corresponding author. Address: 35 St.George Street, Toronto, Ontario, M5S 1A4, Canada
A number of deep reinforcement-learning (RL) approaches propose to control traffic signals. Compared to traditional approaches, RL approaches can learn from higher-dimensionality input road and vehicle sensors and better adapt to varying traffic conditions resulting in reduced travel times (in simulation). However, these RL methods require training from massive traffic sensor data. To offset this relative inefficiency, some recent RL methods have the ability to first learn from small-scale networks and then generalize to unseen city-scale networks without additional retraining (zero-shot transfer). In this work, we study the robustness of such methods along two axes. First, sensor failures and GPS occlusions create missing-data challenges and we show that recent methods remain brittle in the face of these missing data. Second, we provide a more systematic study of the generalization ability of RL methods to new networks with different traffic regimes. Again, we identify the limitations of recent approaches.
We then propose using a combination of distributional and vanilla reinforcement learning through a policy ensemble. Building upon the state-of-the-art previous model which uses a decentralized approach for large-scale traffic signal control with graph convolutional networks (GCNs), we first learn models using a distributional reinforcement learning (DisRL) approach. In particular, we use implicit quantile networks (IQN) to model the state-action return distribution with quantile regression. For traffic signal control problems, an ensemble of standard RL and DisRL yields superior performance across different scenarios, including different levels of missing sensor data and traffic flow patterns. Furthermore, the learning scheme of the resulting model can improve zero-shot transferability to different road network structures, including both synthetic networks and real-world networks (e.g., Luxembourg, Manhattan). We conduct extensive experiments to compare our approach to multi-agent reinforcement learning and traditional transportation approaches. Results show that the proposed method improves robustness and generalizability in the face of missing data, varying road networks, and traffic flows.
Distributional reinforcement learning, Graph neural networks, Policy ensemble, Robustness, Generalizability, Traffic signal control.
§ INTRODUCTION
As the number of cars on our roads continues to rise it is imperative to adapt road networks to minimize congestion. Developing robust yet efficient traffic control strategies is a powerful mitigator <cit.>. Powerful traffic signal control (TSC) methods, for example, based on deep reinforcement learning <cit.>, now exist to optimize the control signal phase (e.g., red or green). They learn from and use available historical and real-time traffic and vehicle data <cit.>.
Real-time data can be collected from the built-in sensors of the vehicles and then transmitted to the control system to help in decision-making (e.g., to free busy lanes by changing the phase of the TSC) <cit.>. However, missing values in the data collected from vehicles (e.g., caused by GPS occlusions and transmission delays) are common <cit.>. Downstream, missing data introduces uncertainty in the observations of the system, which is then challenging for the decision-making module.
Controlling traffic signals under these exogenous sources of uncertainty requires robust control policies.
A second challenge is that traffic conditions can be non-stationary because of singular events such as accidents and construction and also due to recurring patterns (e.g., periodic daily and weekly ones). They can also evolve over time as a result of other infrastructure changes (e.g., new roads nearby). As a result, it is advantageous to use control policies that can adapt to new scenarios, varying traffic-flow patterns, and even allow deployment across networks of different scales.
The ability to obtain policies that are both robust (to sensor failures) and that can generalize to new situations (traffic and networks) is important for deploying control policies in complex road systems that are ubiquitous in our cities. Current methods do not yield policies with both desiderata (we show this below). This is the gap we address in this paper. Next, we introduce the classes of existing approaches for traffic signal control.
First, hand-crafted policies for TSCs form a class of traditional approaches. For example, fixed-time approaches <cit.> define a fixed cycle length and phase time for each intersection based on the road configuration. Greedy <cit.> maximizes the throughput of the road networks by greedily picking the phase that can maximize the pressure. In principle, hand-crafted policies generalize across networks and traffic conditions. However, they rely on unrealistic assumptions, such as assuming that road lanes have unlimited capacity and that traffic flow is constant. As a result, their application in real-world and complex road networks is limited <cit.>.
Reinforcement learning (RL), a formalism for sequential decision-making, is proving to be an effective tool to learn complex policies for diverse traffic-control problems <cit.>. RL models traffic signals as agents that use the current state of the environments (e.g., the position of all nearby vehicles) to control the light phase. Reinforcement learning agents are trained to maximize a utility function called a reward. For traffic-signal control, rewards are often taken to be proxies of the traffic efficiency, measured, for example, as the inverse (vehicle) delay or queue length. In simulation, RL has been trained to control traffic lights in real-world road networks and outperforms hand-crafted policies <cit.>.
RL has shown robustness in small-scale road networks (one to five intersections). In particular, the standard Deep Q-Networks (DQNs) for RL, using a replay buffer to store previous experiences, have demonstrated a level of generalizability for different traffic demands <cit.>. Figure <ref> shows that DQNs still suffer from a performance decrease when faced with missing data. The performance further decreases in larger road networks.
Generalizability is also important for RL policies since training RL agents is computationally costly even for small-scale networks.
To scale agents to larger-scale road networks (of the order of neighborhoods or whole cities) with different traffic flow patterns, <cit.> and <cit.> explore scalable and decentralized multi-agent reinforcement learning (MARL) approaches.
In particular, to encourage better utilization of the spatial-temporal information, researchers model the road network using graph neural networks <cit.> trained with RL to encourage cooperation <cit.> and improve transferability <cit.>.
We are interested in further studying these approaches. In particular, we investigate their robustness to missing data as well as their ability to generalize to larger-size networks with different traffic regimes.
We introduce an initial experiment to demonstrate the limitation of current deep-reinforcement learning approaches. We learn a traffic signal control agent based on decentralized independent deep reinforcement learning <cit.>. We also add a few standard Deep RL tricks: Double Q-Learning <cit.> to prevent overestimation and to stabilize the learning process, and parameter noise for exploration <cit.>. The experiment compares the performance of this Deep RL agent trained on a small network with 3 intersections and tested on the same small network as well as a larger one with 30 intersections. Sensor failures are also present in the test scenarios (the exact setup is described in <ref>).
As noted above, we find that faced with sensor failures, the RL agent performs comparatively worse in a large road network versus in a small one (Figure <ref>). Furthermore, we find that when demand surges,[The heavy traffic regime is simulated by doubling the number of cars in the network.] the performance decreases more in the large road network (Figure <ref>). This result demonstrates that a shift in the distribution of network architectures and the distribution of demand hinders the robustness of reinforcement learning approaches. These observations <ref> and <ref> motivate the development of robust and transferable Deep RL-based methods for traffic signal control.
In this work, we propose RGLight, a method that can further improve both the robustness and generalizability of traffic-signal controllers compared to previous works (as shown in Table <ref>). RGLight uses distributional RL (DisRL) <cit.>. Compared to standard RL that estimates the mean value of returns (actions in each state), DisRL constructs a (full) distribution over returns. DisRL tends to improve the stability of the learning process, i.e., improve convergence, especially in dynamic environments <cit.>. Until now, DisRL instantiations focus on the single-agent setting without exogenous uncertainty. We conjecture that DisRL can also improve the learning stability in multi-agent settings and in particular
in large-scale traffic signal control settings.
Building upon the prior work of IGRL <cit.>, we find that a policy ensemble that combines distributional and deterministic modeling further boosts the generalizability of IGRL across a number of scenarios.
We also propose several criteria to evaluate the robustness and generalizability of the learned policies and conduct extensive experiments to evaluate RGLight in both real-world settings and synthetic settings. Results show that RGLight improves the robustness and generalizability of traffic signal control compared to several state-of-the-art baselines.
To summarize, our main contributions are:
* A method based on a policy ensemble of distributional RL and standard graph-based RL for traffic signal control. Our approach focuses on improving the overall generalization performance and robustness of the trained RL policies.
* An empirical evaluation with different types of missing values, flow patterns, and network structures using both synthetic and real-world road networks. We compare approaches using an evaluation matrix to provide a more systematic analysis of the generalization ability of different models. We highlight that RGLight outperforms several state-of-the-art baselines.
§ BACKGROUND AND RELATED WORK
§.§ RL-based Traffic Signal Control
The very first implementation of RL in TSC uses tabular Q-Learning to learn from a single intersection <cit.>. <cit.> then uses RL with function approximations. However, most previous investigations are limited to toy scenarios. To develop RL methods for more realistic traffic data, researchers turned their attention to deep RL. <cit.> show that deep reinforcement learning can dynamically adjust to real-time traffic. However, the high dimension of the joint action space still limits the scalability of centralized RL approaches.
§.§ Large-Scale Traffic Signal Control
Multi-agent Reinforcement Learning (MARL) is introduced to improve the scalability of RL agents by using a decentralized control framework. <cit.> use advantage actor-critic (A2C) as a large-scale TSC method. To be specific, neighbors' information is adapted to improve sample efficiency and promote cooperative strategy. Furthermore, a spatial discount factor is introduced to improve the learning efficiency, i.e. to reduce fitting difficulty. To enable cooperation of traffic signals, recent works study how to encourage cooperation through graph representation learning. <cit.> propose to use a graph attention neural network in the setting of large-scale road networks with hundreds of traffic signals. They model each TSC as an agent. Agents learn to communicate by attending to the representations of neighboring intersections. Their results demonstrate the effectiveness of the attention mechanism to help cooperation and achieve superior performance over state-of-the-art methods. Concurrently, <cit.> further exploit the vehicular data at its finest granularity by representing every vehicle as a node. They demonstrate the flexibility of GCNs, which can enable transferability to unseen road networks. However, neither of these works evaluates their methods under exogenous uncertainties.
§.§ Robustness in Traffic Signal Control
There are several factors that could affect the model's robustness, such as sensor failures and demand surges.
In transportation research, a very straightforward way to solve the exogenous uncertainty problem from sensor failure is to use imputation methods <cit.>.
For example, recent work uses
a variational Bayes approach to predict missing values accurately <cit.>.
Graph Neural Network (GNN) can also be an efficient and effective tool for recovering information from malfunctioning sensors <cit.>.
Bayesian multiple imputation and bootstrap have also been used to approximate the distribution of the training set in order to estimate the state-action value function given missing data <cit.>.
Such methods are tailored to sensor failures and do not solve problems related to demand surges and different road networks. Therefore, we do not focus on imputation methods here.
Recently, deep RL has proved to be robust in small-scale networks under the impact of special events, such as demand surges, sensor failures, and partial detection. <cit.> developed a callback-based framework to enable flexible evaluation of different deep RL configurations under special events. They concluded that when trained in scenarios with sensor failures, the RL approach can be quite robust to widespread sensor failures and demand surges. <cit.> demonstrate that deep RL agents can be robust within partially detected intelligent transportation systems (PDITS), which are known as partially observable Markov decision processes (POMDPs) in the RL community,
in which only part of vehicle information can be acquired. They have conducted experiments under different detection rates and report that the RL-based control method can improve travel efficiency even with a low detection rate. However, their evaluation scenario is limited to one to five intersection cases. Most importantly, they have not further discussed how to improve the robustness based on previous reinforcement learning methods. Our model can be extended to a large-scale network. <cit.> introduces a model called OnCertain to improve decision-making in self-adaptive systems that interact with each other in dynamic environments. The proposed system can handle uncertainty caused by unpredictable and rare events while having limited information about the environment.
§.§ Generalization in Traffic Signal Control
The training mechanism for Deep RL follows a trial-and-error approach and is computationally expensive (see chapter 4 in <cit.>). For traffic signal control, training models on large-scale networks or using a variety of different traffic demands quickly becomes prohibitive <cit.>.
As a result, designing methods that can learn on smaller networks and transfer their knowledge to large-scale ones can be beneficial.
Recently, meta-RL[meta-RL: a learning-to-learn approach that involves learning on training tasks in order to ease training on test tasks drawn from the same family of problems.] has been applied to traffic signal control problems. <cit.> propose to use value-based meta-reinforcement learning for traffic signal control which includes periodically alternating individual-level adaptation and global-level adaptation. Based on the previous work <cit.>, <cit.> take the policies of neighbor agents into consideration and consider learning a latent variable to represent task-specific information to not only balance exploration and exploitation but also help learn the shared structures of reward and transition across tasks. <cit.> design a WGAN-based <cit.> flow generator to generate different traffic flows to improve the generalization ability of TSC models to different traffic flow environments. However, MetaLight <cit.> considers training on larger-scale networks, then testing on a subset of training networks or smaller networks. Recently, GNNs have demonstrated generalizability to different road structures and traffic flow rates or demands. <cit.> stack multiple GCN layers onto neural networks to improve the generalizability to different vehicle generation rates during training. <cit.> use graph attentional networks to facilitate communication and promote cooperation among intersections. <cit.> represent traffic entities as nodes in the graph to enable generalizability to new road networks, traffic distributions, and traffic regimes.
§.§ Summary of Previous Work on Robustness and Generalizability for Traffic Signal Control
Table <ref> summarizes and compares the previous works with respect to the following aspects: 1. Generalizability to different networks and traffic flows or demands, and 2. Robustness to sensor failures (noise).
Deep reinforcement learning methods have demonstrated robustness to sensor failures <cit.>. Furthermore, by using the transfer learning technique <cit.>, the trained model can also handle demand surges. However, the above methods do not adapt to new road networks. At best these methods require a fine-tuning step before being deployed on a new network.
Some work proposes using meta-learning to improve the generalizability to different road networks and traffic flow distributions <cit.>. However, the training data sets usually include more scenarios than the testing sets, or the testing sets are a subset of training sets <cit.>. Furthermore, MetaLight <cit.> still needs to re-train its model parameter on new intersections. As a result, they cannot perform zero-shot transfer to new road networks.
Recently, graph-convolutional networks have demonstrated their ability to further improve generalizability, enabling zero-shot transfer learning to new road structures and traffic settings that have never been experienced during training. In summary, IGRL <cit.> is the only work that can enable zero-shot transfer learning for new scenarios. Therefore, we choose the IGRL model and its variant as our reinforcement learning baseline methods.
In this work, we build upon the previous work <cit.> and systematically evaluate the transferability of IGRL. We are the first to jointly improve generalizability to different networks and robustness to sensor failures and demand surges.
§ METHODOLOGY
The proposed framework is shown in <Ref>. Like <cit.>, we first encode the road network around each TSC including the moving components as a graph with nodes and edges. We abstract each vehicle feature (V), lane feature (L), connection feature (C), and traffic signal controller (TSC) feature as nodes of the graph (<Ref>). Then a representation of the graph is learned using a graph convolutional network (GCN), see <Ref>.
We train the GCN to estimate state-action values (or returns) either using a standard RL objective (<Ref>) or a DisRL objective (<Ref>). In standard RL, the GCN provides a graph representation embedding ψ (<Ref> right branch). In DisRL, we combine the embedding with an embedding function ϕ(·) (<Ref> left branch). We then combine the values of the returns estimated by the DisRL and the standard RL objectives (<Ref>).
The combined estimated returns can then be decoded (greedily) to obtain the agent's action. Once an action a_t is executed, the environment changes (e.g., following a micro-traffic simulator) and the agent can then pick its next action (a_t+1). In practice, we assume that the agent can execute an action every second (i.e., a timestep lasts one second).
From Figure <ref>, we can see that on the right (traditional DQN/IGRL), pointwise estimates of state-action returns are used (one point per action/color), while on the left, multiple samples (i.e., multiple points per action/color) are drawn from quantiles and implicitly define the distribution of state-action returns for all actions.
§.§ Agent Design
§.§.§ State space
Given the state observation for each signal controller i, the state-action pairs for each TSC are denoted
(s_i, a_i) ∈ S × A, i= 1,… ,K.
We assume that there are K intersections in the system and each agent, i.e., TSC, can observe part of the system state s ∈ S. The number of layers in the GCN defines how large the observable part of the state space is for a given agent. For instance, when using only 2-3 layers, given the architecture of the GCN, only information regarding a local intersection (connectivity features corresponding to controllable connections and traffic features corresponding to immediately inbound and outbound lanes) is perceivable to that intersection's agent.
Based on <cit.>, we consider the following features in each entity:
* TSC feature: represents the state of a controller. Its feature is the number of seconds since the controller performed its last phase switch.
* Connection feature: represents the state of an existing link between an entry lane and an exit lane. For example, a connection exists between an entry lane A and an exit lane B if a vehicle on lane A is allowed to continue its travel to lane B. The connection features are: whether the connection is open under the current phase; whether the open connection has priority or not; the number of switches the controller has to perform before the next opening of the connection; and whether that next opening will have priority or not.
* Lane feature: represents the state of a lane. Its feature is the length of the lane.
* Vehicle feature: represents the state of a vehicle. Its features are the vehicle's current speed and its position on the current lane.
§.§.§ Action space
At every intersection of the road network, there is a predefined logical program, composed of a given number of phases, depending on the roads, lanes, and the connection information. The program is given by the road network. The binary action of the agent is either to switch to the next phase or prolong the current phase. This modelling is compatible with TSCs using different programs.
§.§.§ Reward function
Each agent i obtains a reward r^t_i at time t from the environment. In this paper, we want to minimize the travel time of the vehicles. The reward is defined as the negative sum of total queue lengths per intersection, r^t_i = -∑_l q^t_i,l, where
q^t_i,l is the queue length on lane l of intersection i at time t.
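As a minimal illustration of this reward (a sketch only; the per-lane queue lengths are assumed to come from the simulator):

```python
def reward(queue_lengths):
    """r_i^t = - sum_l q_{i,l}^t for intersection i at time t (sketch)."""
    return -float(sum(queue_lengths))
```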
§.§ Graph Representation Learning on Different Nodes
§.§.§ Graph representation using a GCN
As in <cit.>, we encode the state of the network as a graph. Traffic signal controllers, lanes, connections between lanes, and vehicles are nodes in this graph. Edges connect nodes that are adjacent on the road network (e.g., a vehicle node to its current lane node or a lane node to its connections with a neighbor lane).
The graph is encoded using its adjacency matrix A and it is processed by a graph convolutional network (GCN) <cit.>. The GCN propagates information between nodes to obtain a representation H^n at each layer n:
H^n+1=σ(D^-1/2 A D^-1/2 H^n W^n),
where D is a (diagonal) degree matrix (D_ii=∑_j A_ij) which normalizes A using its number of neighbors, W^n are learned parameters and σ is the sigmoid activation function <cit.>.
Along with the graph structure, nodes and edges can have features X.
These features are used to obtain the first-layer representation:
H^0 = σ(W^0⊤X+b^0)
where W^0 and b^0 are learned parameters.
Assuming N hidden layers, we use the last-layer representation H^N to predict a value function. Let ψ:
𝒳→ℝ^d
be an embedding function parameterized by the GCN layers. We add a subsequent fully-connected layer to map ψ(x) to the estimated action values, such that Q(x,a) ≡ f(ψ(x))_a, where a in f(·)_a indexes the output action.
We can get the estimated Q values as:
Q(s,a)= (H^N W_p+b_p)_(s,a),
where W_p ∈ R^c × p and b_p ∈ R^p are parameters of the neural networks, and p is the number of phases (action space).
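To make the propagation rule and the Q-value head concrete, the following is a minimal NumPy sketch of the three equations above; the helper names, the way parameters are passed in, and the layer count are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gcn_q_values(A, X, W0, b0, hidden_weights, Wp, bp):
    """Sketch of H^{n+1} = sigma(D^{-1/2} A D^{-1/2} H^n W^n) followed by Q = H^N W_p + b_p.

    A: (num_nodes, num_nodes) adjacency matrix of the traffic graph.
    X: (num_nodes, feat_dim) features of TSC, connection, lane and vehicle nodes.
    hidden_weights: list of per-layer matrices W^n, shared across all TSCs.
    """
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))   # D^{-1/2}
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt                          # D^{-1/2} A D^{-1/2}

    H = sigmoid(X @ W0 + b0)             # first-layer representation H^0
    for W in hidden_weights:             # hidden layers H^1, ..., H^N
        H = sigmoid(A_norm @ H @ W)
    return H @ Wp + bp                   # one Q value per phase action, for each node
```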
In Deep RL, the objective to optimize at each time step t is
ℒ( θ)= (y_t-Q (s_t, a_t;θ))^2,
where y_t = r_t + γ max_a' Q(s_t+1, a'), θ represents all trainable parameters (b^0, W^0, …, W^N-1, b_p, W_p), and γ is the (fixed) discount factor.
The (greedy) action associated with the value function can be obtained for each state as:
π(s) = arg max_a ∈𝒜 Q(s, a).
where π(s) denotes the policy in state s.
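The corresponding temporal-difference target and greedy decoding are summarized by the short sketch below; the array shapes are assumptions made for illustration.

```python
import numpy as np

def td_targets(rewards, q_next, gamma=0.99):
    """y_t = r_t + gamma * max_a' Q(s_{t+1}, a') for a batch of transitions.

    rewards: (batch,) immediate rewards; q_next: (batch, num_actions) estimates of Q(s_{t+1}, .).
    """
    return rewards + gamma * q_next.max(axis=1)

def greedy_action(q_values):
    """pi(s) = argmax_{a in A} Q(s, a) over the binary switch / prolong action space."""
    return int(np.argmax(q_values))
```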
§.§.§ Parameter sharing
Each TSC learns to maximize its local reward and as such TSCs are independent. However, the parameters of all TSCs are shared to encourage learning parameters that transfer to a variety of situations. In particular, nodes of the same type both within the same TSC and across TSCs share the same parameters. Parameter sharing also reduces the memory footprint of the system (since the number of parameters is now independent of the number of TSCs). The system can then scale to very large networks <cit.>.
§.§ Distributional RL
The previous section introduces standard RL for GCNs (<ref>). Now, we discuss learning the GCN model using distributional RL (DisRL). Compared to traditional RL, DisRL models the distribution over returns. The expectation of that distribution yields the standard value function. In this work, we use implicit quantile networks <cit.>, a distributional version of Deep Q-Networks <cit.>. Implicit quantile networks can approximate any distribution over returns and show superior performance compared to other DisRL methods <cit.>.
Implicit quantile networks define an implicit distribution using samples τ from a base distribution, τ∼ U([0,1]). The implicit distribution is parameterized using ϕ:[0,1] → R^d. The function ϕ provides the embedding for quantile τ. This embedding ϕ is combined with the GCN's output embedding ψ to form the approximation of the distributional Q-values (see Figure <ref> (a)):
Z_τ(s, a) ≡ f(ψ(s) ⊙ϕ(τ))_a,
where ⊙ represents the element-wise product, and the a on the RHS indexes the output of the function f. We use the same embedding function as in <cit.>:
ϕ_j(τ):=ReLU(∑_i=0^n-1cos (π i τ) w_i j+b_j),
where n is the size of the input embedding, j∈ 1,…,d indexes different units (neurons), and w_ij and b_j are parameters shared across all TSCs (much like parameters of the GCN <Ref> are also shared across TSCs).
As a result, the state-action value function can be represented as the expectation:
Q(s, a) := 𝔼_τ∼ U([0,1])[Z_τ(s, a)],
and its associated greedy policy can be obtained from <Ref>.
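The cosine quantile embedding and its element-wise combination with the GCN embedding can be sketched as follows; the sizes, the random parameters, and the output head are illustrative assumptions (in the paper, w_ij and b_j are learned and shared across TSCs).

```python
import numpy as np

def quantile_embedding(tau, w, b):
    """phi_j(tau) = ReLU( sum_{i=0}^{n-1} cos(pi * i * tau) w_ij + b_j ) (sketch)."""
    n = w.shape[0]
    cos_feats = np.cos(np.pi * np.arange(n) * tau)    # cos(pi * i * tau), i = 0, ..., n-1
    return np.maximum(cos_feats @ w + b, 0.0)         # ReLU

def z_tau(psi_s, tau, w, b, f_head):
    """Z_tau(s, a) = f( psi(s) ⊙ phi(tau) )_a: element-wise product, then output head f."""
    return f_head(psi_s * quantile_embedding(tau, w, b))

# Illustrative usage: d = 16 hidden units, 2 phase actions.
rng = np.random.default_rng(0)
w, b = rng.normal(size=(64, 16)), np.zeros(16)
W_out = rng.normal(size=(16, 2))
psi_s = rng.normal(size=16)                           # GCN state embedding psi(s)
tau = rng.uniform()                                   # tau ~ U([0, 1])
print(z_tau(psi_s, tau, w, b, lambda h: h @ W_out))   # sampled returns Z_tau(s, .)
```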
In DisRL, we want to minimize the distance between two distributions so as to minimize the temporal difference error (TD-Error). For two samples τ , τ' ∼ U([0,1]), and policy π, the TD-Error at time step t can be computed as:
δ_t^τ, τ^'=r_t+γ Z_τ^'(s_t+1, π(s_t+1))-Z_τ(s_t, a_t).
Furthermore, the random return is approximated by a uniform mixture of K Dirac delta functions:
Z(s,a) := 1/K∑_i=1^K δ_μ_i(s,a),
where each μ_i is assigned a fixed quantile target. The quantile targets' estimates are trained using the Huber loss <cit.> with threshold λ.
As a result, the distributional version of loss function is formulated as:
ℒ_dis( θ)=1/M^'∑_i=1^M∑_j=1^M^'ρ_τ_i^λ(δ_t^τ_i, τ_j^'),
where ρ_τ_i^λ is the quantile regression term <cit.>, and M and M' are the numbers of quantile samples used to evaluate the TD-error.
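Written for a matrix of pairwise TD-errors, the loss above can be sketched as follows; the tensor layout is an assumption made for illustration.

```python
import numpy as np

def quantile_huber_loss(td_errors, taus, lam=1.0):
    """L_dis = (1/M') * sum_i sum_j rho^lam_{tau_i}( delta_t^{tau_i, tau'_j} ) (sketch).

    td_errors: (M, M') pairwise TD-errors, rows indexed by tau_i, columns by tau'_j.
    taus:      (M,)    quantile samples tau_i ~ U([0, 1]).
    lam:       Huber threshold lambda.
    """
    d = td_errors
    huber = np.where(np.abs(d) <= lam,
                     0.5 * d ** 2,
                     lam * (np.abs(d) - 0.5 * lam))
    weight = np.abs(taus[:, None] - (d < 0.0).astype(float))   # |tau_i - 1{delta < 0}|
    return (weight * huber / lam).sum(axis=0).mean()           # sum over i, (1/M') sum over j
```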
§.§ RGLight
In the previous sections, we introduce two different reinforcement learning formulations for learning TSC policies (see <Ref>).
Our initial experiments
show important empirical differences between the two approaches.
First, we find that distributional RL converges faster than classical RL in our domain. We also note that the embeddings learned by both approaches are different (see Figure 6 in the supplementary material for an example).
We suspect a combination of the learned policies might yield the best of both worlds.
To do so, we train both approaches separately and then combine their (estimated) Q-values (during testing) (see Figure <ref>).
Given a set of actions A(s_t)={a[1],...,a[n]}, the estimated Q-value for action a_i at time t is Q(s_t,a_i).
We first normalize the Q values of both methods. We find that exponentiating the values first yields better results <cit.>:
Q̃(s,a)= e^Q(s,a)/T/∑_i e^Q(s,a_i)/T.
We then obtain Q̃^RG the Q-value used by RGLight as a convex combination of the normalized Q-values of the two methods:
Q̃^RG=κQ̃^deter+(1-κ)Q̃^dis,
where we dropped the s and a indexes for clarity and κ∈[0,1] is the relative importance of the standard RL approach.
We ensemble the prediction results from two frameworks to improve the robustness and generalizability of our model. Based on preliminary simulations, we find that κ=0.6 and T=5 offer more consistent and higher performance across experiments.
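The policy ensemble itself reduces to a few lines; the sketch below uses the κ and T values reported above and assumes the two heads' Q estimates for the current state are already available.

```python
import numpy as np

def rglight_q(q_deter, q_dis, kappa=0.6, T=5.0):
    """Combine standard and distributional Q estimates as in the two equations above (sketch)."""
    def normalize(q):
        z = np.exp(q / T)                  # exponentiate with temperature T
        return z / z.sum()                 # normalize over the action set
    return kappa * normalize(q_deter) + (1.0 - kappa) * normalize(q_dis)

# Greedy decoding of the combined values (action ordering is illustrative):
q = rglight_q(np.array([1.2, 0.7]), np.array([0.4, 0.9]))
action = int(np.argmax(q))                 # 0: prolong the phase, 1: switch to the next phase
```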
§ EXPERIMENTS
In this section, we study the effectiveness of the RGLight method for multi-agent TSC. We aim at answering the following questions:
* How does the proposed method perform compared with other state-of-the-art baselines? (<Ref> and <Ref>)
* Is the proposed method more robust to sensor failure problems compared to other baseline methods? (<Ref> and <Ref>)
* Can the proposed method generalize to different road network structures and traffic regimes? (<Ref>)
* How can we balance the trade-off between representation capacity and learning stability to improve the overall robustness and generalizability? (<Ref> and <Ref>)
§.§ Experiment Setup
The scenario we study is one where a system learns in a “controlled environment” on synthetic networks with no missing data. Then the performance, robustness, and generalizability of the system are tested by “deploying” it in a more realistic scenario that involves new networks (synthetic or from the real world), different traffic regimes (demand surges), and missing data. A visualization of the learning setup is shown in Figure <ref>.
To be more precise, we train RL methods (DGRL, IGRL, and GNN-TSC) on synthetic road networks for 60 episodes without missing data or demand surges. Then we test their performance either on other synthetic networks or, in a zero-shot generalization setting, by controlling the TSCs of two real-world networks (a part of Luxembourg and Manhattan). All of our studies use the Simulation of Urban MObility (SUMO) <cit.> micro-simulator.
§.§.§ Background and Assumption
* Sensor Failures: In all of our experiments, we assume that we know the lane each vehicle is in. We imagine, for example, that on each traffic signal controller, there would be a camera/detector that can sense which vehicle has entered which lane, and it is not likely to fail <cit.>.
The most common cause of missing data comes from the sensor failure of probed vehicles, which means that the system detects the vehicle, but does not get its current speed and exact position <cit.>. We assume faulty vehicle sensors provide a value of zero.
* Traffic flows: We consider different traffic flows as both different traffic distributions and traffic demands. Particularly, different traffic demands are based on the arrival rate. For all these experiments, the trip is generated by SUMO's trip generator.[https://sumo.dlr.de/docs/Tools/Trip.html] The arrival rate is controlled by the option period in SUMO <cit.>. By default, this generates vehicles with a constant period and arrival rate of (1/period) per second. Note that for different scales of road networks, the same arrival rate will end up with different traffic signal performances.[To obtain a fair comparison, we consider the heavy traffic regime as two times the normal traffic regime in simulated data. In our experiment, we set the normal traffic regime with period=4 and the heavy traffic regime with period=2.] For the trip distribution, the number of departures per second will be drawn from a binomial distribution. In our experiment setting, the trip distribution (the probability of a successful departure) will be changed every 120 seconds. As a result, both the traffic distribution and the traffic demands can be changed in our study.
* Evaluation metrics: We discuss the performance of the methods using several standard evaluation metrics <cit.>.
§.§.§ Travel time
The travel time is defined as the time duration between the real departure time and the time the vehicle has arrived. The information is generated for each vehicle as soon as the vehicle arrives at its destination and is removed from the network.
§.§.§ Queue length
The queue length is calculated at the lane level using the position of the last standing vehicle. This criterion measures congestion, indicating whether traffic has significantly slowed close to an intersection.
§.§.§ Delay
The delay d_t measures the gap between the current speed of each vehicle and its maximum theoretically reachable speed, which is constrained by the type of the vehicle and the maximum allowed speed on the current lane:
s_v^* = min(s_v^max, s_l),
d_t = ∑_v ∈ V (s_v^* - s_vt) / s_v^*,
where V is the set of vehicles traveling in the current network, s_v^max is the maximum speed that the vehicle's type allows, s_l is the speed limit of the current lane, s_v^* is the resulting maximum reachable speed, s_vt is the vehicle's speed at time step t, and d_t denotes the delay at time t. The instantaneous delay of a single vehicle is thus how far it currently is from its optimal theoretically reachable speed.
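A direct transcription of this metric, assuming the per-vehicle speeds, type-based maximum speeds, and lane speed limits are read from the simulator:

```python
def network_delay(current_speeds, max_speeds, lane_limits):
    """d_t = sum_{v in V} (s_v^* - s_vt) / s_v^*, with s_v^* = min(s_v^max, s_l) (sketch)."""
    d_t = 0.0
    for s_vt, s_max, s_l in zip(current_speeds, max_speeds, lane_limits):
        s_star = min(s_max, s_l)           # maximum theoretically reachable speed
        d_t += (s_star - s_vt) / s_star
    return d_t
```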
§.§.§ Datasets
We evaluate the different methods using both synthetic networks with synthetic data and real-world networks with real-world traffic routes.
* Synthetic networks:
We use the same approach to generate the synthetic networks as in IGRL <cit.>. The structure of the synthetic road networks is generated at random using the SUMO simulator, the number of intersections varies between two and ten; the length of every edge is between 100 and 300 meters, and the number of lanes per route is between one and four. Some examples of the generated networks can be seen in Figure <ref>. We try to maximize the variability of the training networks by generating random networks to cover the most typical cases in real-world networks.
* Real-world networks:
We use representative traffic data[Luxembourg: <https://github.com/lcodeca/LuSTScenario>, Manhattan: <https://traffic-signal-control.github.io/>] from part of Luxembourg and Manhattan to evaluate the performance of our model in real-world settings. Manhattan has a grid-like road network and contains 75 traffic lights and 550 intersections. The Luxembourg network contains 22 traffic lights and 482 intersections. It is also more irregular than Manhattan. Both networks have different traffic demand evolution characteristics as shown in Figure 1 and 2 in the supplementary material.
§.§.§ Baselines
We compare our method with several state-of-the-art methods, including both classical transportation methods and learned ones.
Transportation Methods:
* Fixed time Baseline <cit.>: It uses a predetermined plan for cycle length and phase time. This technique is widely used when the traffic flow is steady <cit.>.
* Max-moving-car-dynamic-heuristic (Greedy): This dynamic heuristic-based method aims at ensuring that as many vehicles as possible are moving on inbound lanes at any given time, in the spirit of the popular baseline Greedy <cit.> under a cyclic setting. Controllers switch to the next phase if, on inbound lanes, the number of stopped vehicles is superior to the number of moving vehicles, and prolongs the current phase otherwise.
Reinforcement Learning Methods:
* Inductive Graph Reinforcement Learning (IGRL) <cit.>: This recent approach uses graph convolutional networks with a decentralized RL objective. The authors show that their approach can scale and transfer to massive-scale networks. Our robust learning framework is based on IGRL. We compare against their best-performing model IGRL-V which models vehicles as nodes.
* Graph Neural Networks for TSC (GNN-TSC) <cit.>: Similar to IGRL, the authors propose a GNN-based RL-trained model. Compared to IGRL <cit.>, the method does not consider individual vehicles as nodes in the graph. Instead, they model information at the lane level.
With that in mind, we use IGRL-L, a version of IGRL that models lane nodes rather than vehicles as nodes. This version is similar to the CoLight method <cit.>.[The authors of <cit.> rely on the CityFlow simulator <https://cityflow-project.github.io/>, we use SUMO, which prevents a direct comparison without a major code rewrite.]
* Independent Reinforcement Learning (IRL): An independent deep Q-Learning (DQN) agent can be used to model each TSC. DQNs have some level of robustness given demand surges and sensor failures <cit.>. Further, the IRL baseline couples DQNs with recent developments for improved robustness: double Q-Learning <cit.>, a dueling architecture <cit.>, and noisy layers <cit.>.
§.§ Performance Comparison
In this section, we compare the performance of the above baselines to the performance of RGLight with respect to different traffic regimes and sensor failures. All experiments are repeated 30 times with different random seeds for trip generations and the average results are presented. For every evaluation metric, we report the sum of a 1,000-time-step simulation.
Note that for each criterion, for readability, the obtained value is divided by 100 in the tables. We also provide a video illustrating the different methods.[Simulation video link: <https://youtu.be/wTUkoXvVghs>]
§.§.§ Comparison under Different Traffic Regime in Synthetic Networks
Table <ref> reports the performance of different methods for both normal and heavy traffic regimes in synthetic networks.[We conduct the demand surge experiment in a synthetic network because it is difficult to control the demand parameter in real networks with real traffic demand.] We use the same road network (not seen in the training set) in tests for all methods with 30 random seeds for trips.
Overall, RGLight outperforms others in the normal regime across the three metrics except in terms of travel time where IGRL does as well. RGLight also shines in a heavy regime showing that it is more robust to demand surges.
We see that Fixed time does not perform as well as Greedy in normal traffic regimes but better than Greedy in heavy traffic regimes. In terms of travel time, RGLight performs about the same as IGRL in the normal regime.
As shown in Figure <ref>, although IGRL and RGLight provide similar average travel times, the empirical distribution of their difference is skewed to the right. This seems to indicate that under this evaluation RGLight is more equitable. In a heavy traffic regime, we see that RGLight outperforms IGRL by a large margin.
§.§.§ Comparison under Sensor Failures in Different Real-world Road Networks
In this experiment, we test our model's performance with two real-world road networks using real traffic demand (see Figure 1 and 2 in supplementary material). The IRL method does not scale to such large networks (the parameters increase linearly with the number of TSCs) and so we cannot report its performance. Transportation baselines do not consider speed or vehicle position and so their performance is robust to noisy sensors.
We first discuss the performance in the Manhattan road network from <ref>.
We find RGLight outperforms other methods. It is also more robust in scenarios with higher proportions of missing data compared to the other RL baselines.
Second, we study methods on Luxembourg's road network.
Results in <ref> are similar to previous ones. RGLight outperforms other methods, especially as missing data increases. However, given higher probabilities of missing data, i.e., 60%, both IGRL, and GAT-TSC perform worse than the Fixed time method, which might limit their usefulness.
Contrary to the Manhattan study, Greedy performs worse than the Fixed time method. This result suggests that when the road network becomes more irregular, as is the case for Luxembourg, Greedy tends to fail. To confirm, we tested the Greedy method on two synthetic networks with the same number of intersections, one with irregular road patterns (more similar to Luxembourg) and the second one laid out as a grid (similar to Manhattan). We confirm that Greedy performs better on the latter.
To visualize the performance of the learned policy, we collect the average delays per time step in two road networks. We select the best RL baseline and two transportation baselines.
In Figure <ref>, we see that RGLight better mitigates the effect of demand surge compared to other baselines. Moreover, from Figure <ref>, faced with a more challenging demand evolution in the Luxembourg road network, RGLight also demonstrates the overall best robustness.
§.§ Generalizability analysis
Now we test more systematically the ability of the models to generalize to networks of different shapes and scales and under different traffic demands.
This departs from most previous works <cit.> that keep training and testing conditions similar.
We also introduce DGRL, a pure distributional baseline version of IGRL, obtained by setting κ=0 in Equation <ref>.
We train models on irregular synthetic networks with 2 to 6 intersections. The horizontal direction on each sub-figure in Figures <ref> and <ref> represents different traffic demands (0.5, 1, 2, 4), and the vertical direction represents different grid network scales, that is, how many columns and rows in the grid network (4, 6, 8). In total, we test 16 different scenarios for each model to evaluate its generalizability.
We use the average delay over the whole simulation process to evaluate model performance. Furthermore, we normalize the average delay of each method for readability:
x_i' = (x_i - x_min)/(x_max - x_min) × 10,000
where x_i is the average delay calculated from method i, x_max and x_min are the maximum and minimum delay calculated across all methods given the specific scenario. Then we can use the normalized average delay to plot the colormap in Figure <ref>. The values of x_i' range between 0 and 10,000 and smaller values indicate better performances.
Figure <ref> shows that all methods tend to perform worse for heavy-traffic regimes in small networks (upper-left corner). This matches common knowledge about network traffic capacity <cit.>. We also find that the Greedy baseline performs relatively well in small-scale networks but performs worse in large-scale networks. We hypothesize it assumes that the downstream lanes have an unlimited capacity which makes it not very realistic in large-scale networks. As a result, we can see that the model's performance worsens when the network scale increases. This is similar to the finding in <cit.>. On the other hand, we find that RL-based methods (i.e., IGRL and DGRL) are less sensitive to network scale change compared to the transportation method. This result demonstrates that RL methods can better generalize to different network structures than standard transportation baselines.
We now focus on the reinforcement-learning methods. In the bottom right corner, IGRL performs better than DGRL, but DGRL performs better than IGRL in the upper-left corner (i.e., smaller network with higher demand). These results indicate the weaker generalization ability of IGRL since its performance tends to decrease in test scenarios that are very different from the training scenarios (e.g., a small network under a heavy-traffic regime). We also find that DGRL performs better than IGRL in a small network with a heavy-traffic regime. We suspect that since the distributional approach uses a robust loss it might be less sensitive to outliers.
However, in a normal traffic regime with a larger network, DGRL performs worse than IGRL.
These findings further motivate the policy ensemble approach. Overall, we find that the RGLight method performs best across most scenarios. This result indicates that an ensemble of policies can boost generalizability.
To further analyze the characteristics of the policies learned by the RL methods, we examine the switch rates of IGRL, DGRL, and RGLight. Recall that the actions are binary and correspond to either switching to the next phase in a signal's program (action 1) or not switching (action 0). The switching rate is the ratio of signals that perform a phase switch (action 1) in a single timestep across all intersections. Using a similar matrix across network scale and demand as before, Figure <ref> reports the average switch rate across methods.
Comparing Figure <ref> (b) and (c), we see that overall IGRL exhibits a higher switch rate compared to DGRL. In contrast, RGLight is often in-between IGRL and DGRL except when the demand is the highest (first column) and it switches more often than both. This seems to indicate that RGLight attains states that are different than the two other methods.
We further discuss the scenario with a 2x2 network and a demand of 1800 veh/h. By considering Figure <ref> (a) and Figure <ref> (a) together, we observe that RGLight does best. Further, its switch rate (58) is in-between IGRL's (109.4) and DGRL's (30.62). We provide a video demonstration of this simulation.[Simulation video link: <https://youtu.be/-n_LUbNjJUs>] In the video, we notice that a policy that switches too often (IGRL) leads to a shock wave or gridlock. On the other hand, switching too slowly (DGRL) ends up preventing significant traffic from passing to allow less busy lanes to advance. RGLight seems to have found a good compromise. We believe it is worth further investigating how to design the signal phase and the action space based on these types of results.
§ CONCLUSIONS AND DISCUSSION
Motivated by gaps in the current literature (Table <ref>), we propose RGLight, an RL approach that combines two reinforcement learning agents and that provides more generalizable and robust policies. Further, we conduct a series of experiments on two different real-world networks with real traffic demands and show that our method outperforms several state-of-the-art baselines.
In future work, we plan to study the empirical and theoretical properties of RGLight to model multi-agent systems in other similar domains. Such general multi-agent settings include connected and automated vehicles environment <cit.> and traffic junction environment <cit.>.
As a second avenue, we will investigate combinations of RGLight (model-free) and model-based reinforcement learning that can both improve performance and also (training) data efficiency <cit.>.
§ ACKNOWLEDGMENT
This research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, Mitacs Canada, the Canada Foundation for Innovation (CFI), and LC is supported by a Canada AI CIFAR Chair.
entry_id: http://arxiv.org/abs/2306.09790v2
published: 20230616120219
title: The Information Bottleneck's Ordinary Differential Equation: First-Order Root-Tracking for the IB
authors: ["Shlomi Agmon"]
primary_category: cs.IT
categories: ["cs.IT", "cs.LG", "math.IT"]
The Information Bottleneck's Ordinary Differential Equation: First-Order Root-Tracking for the IB
Shlomi Agmon
=================================================================
The Information Bottleneck (IB) is a method of lossy compression of relevant information. Its rate-distortion (RD) curve describes the fundamental tradeoff between input compression and the preservation of relevant information embedded in the input. However, it conceals the underlying dynamics of optimal input encodings. We argue that these typically follow a piecewise smooth trajectory when input information is being compressed, as recently shown in RD. These smooth dynamics are interrupted when an optimal encoding changes qualitatively, at a bifurcation. By leveraging the IB's intimate relations with RD, we provide substantial insights into its solution structure, highlighting caveats in its finite-dimensional treatments. Sub-optimal solutions are seen to collide or exchange optimality at its bifurcations.
Despite the acceptance of the IB and its applications, there are surprisingly few techniques to solve it numerically, even for finite problems whose distribution is known.
We derive anew the IB's first-order Ordinary Differential Equation, which describes the dynamics underlying its optimal tradeoff curve.
To exploit these dynamics, we not only detect IB bifurcations but also identify their type in order to handle them accordingly.
Rather than approaching the IB's optimal curve from sub-optimal directions, the latter allows us to follow a solution's trajectory along the optimal curve under mild assumptions.
We thereby translate an understanding of IB bifurcations into a surprisingly accurate numerical algorithm.
[0]
The author is grateful to Or Ordentlich for helpful conversations and for his support, and to Noam and Dafna Agmon for their relentless support throughout this journey. The author thanks the late Naftali Tishby for insightful conversations and Etam Benger for his involvement during the early stages of this work.
Keywords:
the Information Bottleneck,
Bifurcations,
Ordinary Differential Equation,
Numerical Approximation.
§ INTRODUCTION
The Information Bottleneck (IB) describes the fundamental tradeoff between the compression of information on an input X to the preservation of relevant information on a hidden reference variable Y.
Formally, let X and Y be random variables defined respectively on finite source and label alphabets 𝒳 and 𝒴, and let p_Y|X(y|x)p_X(x) be their joint probability distribution[ Without loss of generality, we may assume that p(x) > 0 for every x∈𝒳, and so p_Y|X is well-defined.], or p(y|x)p(x) for short.
One seeks <cit.> to maximize the information I(Y; X̂) over all Markov chains Y ⟷ X ⟷X̂, subject to a constraint on the mutual information I(X; X̂) := E_p(x̂|x) p(x)log p(x̂|x)/p(x̂),
I_Y(I_X) := max_p(x̂|x){ I(Y; X̂): I(X; X̂) ≤ I_X } .
The latter maximization is over conditional probability distributions or encoders p(x̂|x).
The graph of I_Y(I_X) is the IB curve.
We write T := |X̂|, for a codebook or representation alphabet X̂.
An encoder p(x̂|x) which achieves the maximum in (<ref>) is IB optimal or simply optimal.
Written in a Lagrangian[ Normalization constraints are omitted for clarity.] formulation ℒ := I(X; X̂) - β I(Y; X̂) with β > 0, <cit.> showed that a necessary condition for extrema in (<ref>) is that the IB Equations hold. Namely,
p(x̂|x) = p(x̂)/Z(x,β)exp{ -β D_KL[p(y|x) || p(y|x̂)] } ,
p(y|x̂) = ∑_x p(y|x) p(x|x̂) , and
p(x̂) = ∑_x p(x̂|x) p(x) .
In these, Z(x, β) := ∑_x̂ p(x̂) exp{ -β D_KL[p(y|x) || p(y|x̂)] } is the partition function, p(x|x̂) in (<ref>) is defined by the Bayes rule p(x̂|x) p(x)/p(x̂), and D_KL is the Kullback-Leibler divergence, D_KL[p||q] := ∑_i p(i) log p(i)/q(i).
The IB Equations (<ref>)-(<ref>) are a necessary condition for an extremum of ℒ also when it is considered as a functional in three independent families of normalized distributions {p(x̂|x)}, {p(y|x̂)} and {p(x̂)}, <cit.>, rather than in {p(x̂|x)} alone.
While satisfying them is necessary to achieve the curve (<ref>), it is not sufficient.
Indeed, Equations (<ref>)-(<ref>) have solutions that do not achieve curve (<ref>), and so are sub-optimal.
This results in sub-optimal IB curves, which intersect or bifurcate as the multiplier β varies (see Section 3.4 there).
Iterating over the IB Equations (<ref>)-(<ref>) is essentially Blahut-Arimoto's algorithm variant for the IB (BA-IB) due to <cit.>, brought here as Algorithm <ref>.
While the minimization problem (<ref>) can be solved exactly in special cases, <cit.>, exact solutions of an arbitrary finite IB problem whose distribution is known are usually obtained nowadays using BA-IB.
cf., <cit.> for a survey on other computation approaches.
We write BA_β for a single iteration of the BA-IB Algorithm <ref>.
Since BA_β encodes an iteration over the IB Equations (<ref>)-(<ref>), an encoder p(x̂|x) is its fixed point, BA_β[p(x̂|x)] = p(x̂|x), if and only if it satisfies the IB Equations.
Or equivalently, if p(|) is a root of the IB operator
F := Id - BA_β ,
in a manner similar to <cit.>. We shall then call it an IB root.
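To fix ideas, a single BA_β iteration over the IB Equations (<ref>)-(<ref>) can be sketched in NumPy as follows; the array layout and the numerical guards are assumptions of this sketch rather than of Algorithm <ref> itself. Iterating this map to a fixed point yields an IB root, i.e., a root of Id - BA_β.

```python
import numpy as np

def ba_ib_step(enc, p_x, p_y_x, beta, eps=1e-30):
    """One BA-IB iteration BA_beta over the IB Equations (sketch).

    enc:   (T, nX) encoder p(xhat|x); each column sums to 1.
    p_x:   (nX,)   source distribution p(x).
    p_y_x: (nY, nX) conditional distribution p(y|x).
    """
    p_xhat = enc @ p_x                                         # p(xhat) = sum_x p(xhat|x) p(x)
    # Bayes rule: p(x|xhat) = p(xhat|x) p(x) / p(xhat)
    p_x_xhat = enc * p_x[None, :] / np.maximum(p_xhat, eps)[:, None]
    p_y_xhat = p_x_xhat @ p_y_x.T                              # p(y|xhat) = sum_x p(y|x) p(x|xhat)
    # D_KL[ p(y|x) || p(y|xhat) ] for every pair (xhat, x)
    log_ratio = np.log(np.maximum(p_y_x.T[None, :, :], eps)) \
              - np.log(np.maximum(p_y_xhat[:, None, :], eps))  # shape (T, nX, nY)
    dkl = np.einsum('yx,txy->tx', p_y_x, log_ratio)
    # New encoder: proportional to p(xhat) exp(-beta * D_KL), normalized by Z(x, beta)
    new_enc = p_xhat[:, None] * np.exp(-beta * dkl)
    return new_enc / new_enc.sum(axis=0, keepdims=True)
```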
<cit.> used a similar formulation of rate-distortion (RD) and its relations to the IB <cit.> to show that BA-IB suffers from critical slowing down near critical points, where the marginal p(x̂) of a representor x̂ in an optimal encoder vanishes gradually.
That is, the number of BA-IB iterations required till convergence increases dramatically as one approaches such points.
Formulating fixed points of an iterative algorithm as operator roots can also be leveraged for computational purposes in a constrained-optimization problem, as noted recently by <cit.> for RD.
Indeed, let F(·, β) be a differentiable operator on R^n for some n > 0, F: R^n×R→R^n, where β is a (real) constraint parameter.
Suppose now that (x, β) is a root of F,
F(x, β) = 0 ,
such that x = x(β) is a differentiable function of β.
Write D_x F := ( ∂∂ x_jF_i )_i, j for its Jacobian matrix, and D_β F := ( ∂∂β F_i )_i for its vector of partial derivatives with respect to β. The point (x, β) of evaluation is omitted whenever understood.
As is often discussed along with the Implicit Function Theorem, e.g., <cit.>, applying the multivariate chain rule to F(x(β), β) in (<ref>) yields an implicit ordinary differential equation (ODE)
D_x F dxdβ = -D_β F ,
for the roots of F.
Plugging in explicit expressions for the first-order derivative tensors D_x F and D_β F, one can specialize (<ref>) to a particular setting, which allows one to compute the implicit derivatives dxdβ numerically.
While <cit.> discovered the RD ODE this way, they showed that (<ref>) can be generalized to arbitrary order under suitable differentiability assumptions.
Namely, they showed that the derivatives d^lxdβ^l implied by F = 0 (<ref>) can be computed via a recursive formula, for an arbitrary-order l > 0.
By specializing this with the higher derivatives of Blahut's algorithm <cit.>, they obtained a family of numerical algorithms for following the path of an optimal RD root (Part I there).
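To illustrate how (<ref>) is used numerically, the sketch below takes one Euler step along a root of a generic operator F(x, β); the finite-difference Jacobians here are a stand-in, assumed for illustration only, for the closed-form derivatives of Id - BA_β derived later in the paper.

```python
import numpy as np

def euler_root_step(F, x, beta, dbeta, fd_eps=1e-6):
    """One Euler step along a root of F(x, beta) = 0, using D_x F dx/dbeta = -D_beta F (sketch).

    F is a callable (x, beta) -> R^n, with x an n-vector (e.g., a flattened encoder).
    np.linalg.solve fails where D_x F is singular, i.e., at a suspected bifurcation.
    """
    n = x.size
    F0 = F(x, beta)
    DxF = np.empty((n, n))
    for j in range(n):                        # finite-difference Jacobian, one column at a time
        e = np.zeros(n)
        e[j] = fd_eps
        DxF[:, j] = (F(x + e, beta) - F0) / fd_eps
    DbF = (F(x, beta + fd_eps) - F0) / fd_eps
    dx_dbeta = np.linalg.solve(DxF, -DbF)     # the implicit first-order ODE at the current root
    return x + dbeta * dx_dbeta, beta + dbeta
```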
In this work, we specialize the implicit ODE (<ref>) to the IB.
Namely, we plug into (<ref>) the first-order derivatives of the IB operator Id - BA_β (<ref>) to obtain the IB ODE, and then use it to reconstruct the path of an optimal IB root, in a manner similar to <cit.>.
This is not to be confused with the gradient flow (of arbitrary encoders) towards an optimal root at a fixed β value, described at <cit.> by an ODE, which is a different optimization approach.
In contrast, the implicit Equation (<ref>) describes how a root evolves with β.
So, in principle, one may compute an optimal IB root once and then follow its evolution along the IB curve (<ref>).
While the discovery of the IB ODE is due to <cit.>, we derive it here anew in a form that is better suited for computational (and other) purposes, especially when there are fewer possible labels 𝒴 than input symbols 𝒳, as often is the case.
To that end, we consider several natural choices of a coordinate system for the IB in Section <ref> and compare their properties.
This allows us to make an apt choice for the ODE's variable x in (<ref>).
In Section <ref>, we present the IB ODE in these coordinates (Theorem <ref>).
This enables one to numerically compute the first-order implicit derivatives at an IB root, if it can be written as a differentiable function in β.
So long that an optimal root remains differentiable, a simple way to reconstruct its trajectory is by taking small steps at a direction determined by the IB ODE. This is Euler's method for the IB.
The error accumulated by Euler's method from the true solution path is roughly proportional to the step size, when small enough.
For comparison, reverse deterministic annealing <cit.> with BA-IB is nowadays common for computing IB roots.
The dependence of its error on the step size is roughly the same as in Euler's method.
This is discussed in Section <ref>, where we combine Euler's method with BA-IB to obtain a modified numerical method whose error decreases at a faster rate than either of the above.
However, the differentiability of optimal IB roots breaks where the solution changes qualitatively.
Such a point is often called a phase transition in the IB literature, or a bifurcation — namely, a point where there is a change in the problem's number of solutions.
e.g., <cit.> for basic definitions.
As noted already by <cit.>, their existence in the IB stems from restricting the cardinality of the representation alphabet X̂.
Indeed, the gap between achieving the IB curve (<ref>) to merely satisfying the fixed-point equations (<ref>)-(<ref>) lies in understanding the solution structure of the IB operator (<ref>), or equivalently its bifurcations.
While IB bifurcations were analyzed in several works, including <cit.> and others, little is known about the practical value of understanding them. <cit.> showed that they correspond to the onset of learning new classes, while <cit.> showed that they inflict a hefty computational cost to BA-IB.
Following <cit.>, this work demonstrates that understanding bifurcations can be translated to a new numerical algorithm to solve the IB.
To that end, merely detecting a bifurcation along a root's path does not suffice.
But rather, it is also necessary to identify its type, as this allows one to handle the bifurcation accordingly.
One can then continue following the path dictated by the IB ODE.
Almost all of the literature on IB bifurcations is based on a perturbative approach, in a manner similar to <cit.>.
That is, suppose that the first variation[ For finite IB problems, condition (<ref>) boils down to requiring that the gradient of ℒ vanishes, while condition (<ref>) is equivalent to requiring that its Hessian matrix has a non-trivial kernel, as both are conditions on directional derivatives. e.g., <cit.>. ]
∂/∂ϵℒ[ p(x̂|x) + ϵΔ p(x̂|x); β]|_ϵ=0
of the IB Lagrangian ℒ vanishes, for every perturbation Δ p(x̂|x).
This condition is necessary for extremality and implies the IB Equations (<ref>)-(<ref>), <cit.>.
Then, (p(x̂|x), β) is said to be a phase transition only if there exists a particular direction Δ q(x̂|x) at which p(x̂|x) can be perturbed without affecting the Lagrangian's value to second order,
∂^2/∂ϵ^2ℒ[ p(x̂|x) + ϵΔ q(x̂|x); β]|_ϵ=0 = 0 .
<cit.> and <cit.> take such an approach.
<cit.> similarly analyzes one type of IB bifurcation.
While a perturbative approach is common in analyzing phase transitions, it has several shortcomings when applied to the IB, as noted by <cit.>.
First, the IB's Lagrangian ℒ is constant on a linear manifold of encoders p(x̂|x), <cit.>, and so condition (<ref>) leads to false detections.
While this was considered there and in its sequel <cit.> by giving subtle conditions on the nullity of the second variation in (<ref>), in practice it is difficult to tell whether a particular direction Δ q(x̂|x) is in the kernel due to a bifurcation or due to other reasons, as they note.
Second, note that a finite IB problem can be written as an infinite RD problem, <cit.>.
As discussed in Section <ref>, representing an IB root by a finite-dimensional vector leads to inherent subtleties in its computation.
Among other things, these may well result in a bifurcation not being detectable under certain circumstances (Section <ref>).
To our understanding, many of the difficulties that hindered the understanding of IB bifurcations throughout the years are, in fact, artifacts of finite dimensionality.
Third, conditions (<ref>)-(<ref>) do not suffice to reveal the type of the bifurcation, information which is necessary for handling it when following a root's path.
While <cit.> give conditions for identifying the type, these partially agree with our findings and do not suggest a straightforward way for handling a bifurcation.
Rather than imposing conditions on the scalar functional ℒ, our approach to IB bifurcations follows that of <cit.> for RD.
That is, we rely on the fact that the IB's local extrema are fixed points of an iterative algorithm, and so they also satisfy a vector equation F = 0 (<ref>).
We shall now consider a toy problem to motivate our approach.
“Bifurcation Theory can be briefly described by the investigation of problem (<ref>) in a neighborhood of a root where D_x F is singular”, <cit.>.
Indeed, recall that if D_x F is non-singular at a root (x_0, β_0), then by the Implicit Function Theorem (IFT), there exists a function x(β) through the root, x(β_0) = x_0, which satisfies F(x(β), β) = 0 (<ref>) at the vicinity of β_0.
The function x(β) is then not only unique at some neighborhood of (x_0, β_0), but further, x(β) inherits the differentiability properties of F, <cit.>.
In particular, if the operator F is real-analytic in its variables — as with the IB operator (<ref>) — then so is its root x(β).
While a bifurcation can occur only if D_x F is singular, singularity is not sufficient for a bifurcation to occur.
For example, the roots of the operator
F(x,y; β) := (x - β, 0)
on R^2 consist of the vertical line x = β, {(β, y): y ∈R}, for every β∈R.
For a fixed y, each such root is real-analytic in β.
However, one cannot deduce this directly from the IFT, as the Jacobian ( 1 0 ; 0 0 ) of F (<ref>) is always singular.
Note, however, that in this particular example, the x coordinate alone suffices to describe the problem's dynamics, and so its y coordinate is redundant.
One can ignore the y coordinate by considering the “reduction” F̃(x; β) := x - β of F to R^1.
Further, discarding y also removes or mods-out the direction ( 0 ; 1 ) from D_x F, which does not pertain to a bifurcation in this case.
This results in the non-singular 1×1 Jacobian matrix ( 1 ) of F̃, and so it is now possible to invoke the IFT on the reduced problem.
The root guaranteed by the IFT can always be considered in R^2 by putting back a redundant y coordinate at some fixed value.
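The toy example can be verified numerically in a few lines; the explicit Jacobians below are written out by hand from F and its reduction.

```python
import numpy as np

# Toy operator F(x, y; beta) = (x - beta, 0): its roots form the vertical line x = beta.
DxF = np.array([[1.0, 0.0],
                [0.0, 0.0]])                  # Jacobian in (x, y): singular everywhere
print(np.linalg.matrix_rank(DxF))             # 1 < 2, so the IFT cannot be invoked directly

# Reducing to F~(x; beta) = x - beta discards the redundant y coordinate and mods out
# the kernel direction (0; 1), which does not pertain to a bifurcation here.
DxF_reduced = np.array([[1.0]])
print(np.linalg.matrix_rank(DxF_reduced))     # 1 = full rank: the IFT applies to the reduction
```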
<cit.> used a similarly defined reduction of finite RD problems to show that their dynamics are piecewise real-analytic under mild assumptions.
The intuition behind our approach is similar to <cit.>, who observed that “in the IB one can also get rid of irrelevant variables within the model”.
Nevertheless, the details differ.
Mathematically, we consider[ This formulation can be made precise by using the tangent space of a differentiable manifold, e.g., <cit.>. However, that shall not be necessary. ] the quotient V/W of a vector space V by its subspace W.
Elements of V are identified in the quotient if they differ by an element of W: v_1 ∼ v_2 ⇔ v_1 - v_2 ∈ W, for v_1, v_2∈ V.
This way, one “mods-out” W, collapsing it to a single point in the quotient vector space V/W.
The resulting problem is smaller and so easier to handle, whether for theoretical or practical purposes.
This is how the one-dimensional vector space D_x F in our toy example (<ref>) was reduced to the trivial D_xF̃ = {0}.
However, one needs to understand the solution structure, for example, to ensure that the directions in W are not due to a bifurcation.
We note in passing that V/W has a simple geometric interpretation as the translations of W in V, in a manner reminiscent of its better-known counterparts of quotient groups and rings. e.g., <cit.>.
To keep things simple, however, we shall not use quotients explicitly.
Instead, the reader may simply consider the sequel as a removal of redundant coordinates.
For, we shall only remove coordinates that the reader does not care about anyway, as in the above toy example.
To achieve this approach, one needs to consider the IB in a coordinate system that permits a simple reduction as in (<ref>), and to understand its solution structure.
We achieve these in Section <ref> by exploiting two properties of the IB which are often overlooked.
First, proceeding with the coordinates-exchange of Section <ref>, the intimate relations <cit.> of the IB with RD suggest a “minimally sufficient” coordinates system for the IB, just as the x axis is for problem (<ref>).
Reducing an IB root to these coordinates is a natural extension of reduction in RD, <cit.>.
Reduction of IB roots facilitates a clean treatment of IB bifurcations.
These are roughly divided into continuous and discontinuous bifurcations, in Subsections <ref> and <ref> respectively.
While understanding continuous bifurcations is straightforward, the IB's relations with RD allow us to understand the discontinuous bifurcation examples of which we are aware as a support switching bifurcation in RD, by leveraging <cit.>.
A second property is the analyticity of the IB operator (<ref>), which stems from the analyticity of the IB Equations (<ref>)-(<ref>).
By building on the first property, analyticity leads us to argue that the Jacobian of the IB operator (<ref>) is generally non-singular (Conjecture <ref>) when considered in reduced coordinates as above.
As an immediate consequence, the dynamics underlying the IB curve (<ref>) are piecewise real-analytic in β, in a manner similar to RD.
Indeed, the fact that there exist dynamics underlying the IB curve (<ref>) in the first place can arguably be attributed to analyticity; cf., the discussion following Conjecture <ref>.
Combining both properties sheds light on several subtle yet important practical caveats in solving the IB (Subsection <ref>) due to using finite-dimensional representations of its roots.
These subtleties are compatible with our numerical experience.
The results here suggest that, unlike RD, the IB is inherently infinite-dimensional, even for finite problems.
Finally, Section <ref> combines the modified Euler method of Section <ref> with the understanding of IB bifurcations in Section <ref>, to obtain Algorithm <ref> (IBRT1) for following the path of an optimal IB root, in Subsection <ref>. That is, First-order Root-Tracking for the IB.
For simplicity, we focus mainly on continuous IB bifurcations, as these are the ones most often encountered in practice;
cf., the comments in Subsection <ref>.
The resulting approximations in the information plane are surprisingly close to the true IB curve (<ref>), even on relatively sparse grids (i.e., with large step sizes), as seen in Figure <ref>.
See Subsection <ref> for the numerical results underlying the latter.
The reasons for this are discussed in Subsection <ref>, along with the algorithm's basic properties.
Unlike BA-IB, which suffers from an increased computational cost near bifurcations, our Algorithm <ref> suffers from a reduced accuracy there, in a manner similar to root-tracking for RD, <cit.>.
With that, we note that there are standard techniques in Bifurcation Theory for handling a non-trivial kernel of D_x F at a root.
For example, the Lyapunov-Schmidt reduction replaces the high-dimensional problem F = 0 (<ref>) on R^n by a smaller but equivalent problem Φ = 0, where Φ(·, β) maps vectors in the (right) kernel of D_x F to vectors in its left kernel.
To achieve this, it separates the kernel- and non-kernel directions of the problem, essentially handling each at its turn.
e.g., <cit.> or <cit.>.
This technique is generic, as it does not rely on any particular property of the problem at hand.
As such, it is considerably more involved than removing redundant coordinates[ Applied to our toy problem (<ref>) for instance, Lyapunov-Schmidt reduces F = 0 (<ref>) to choosing a continuously differentiable function Φ on the y-axis there, which is obtained by first solving for x = β (see the proof of <cit.> for details). However, since y is redundant in this example, then solving for Φ can provide no useful information on the dynamics of its roots. ], which requires an understanding of the solution structure.
In contrast, reduction in the IB is straightforward.
For the purpose of following a root's path, carrying on with redundant kernel directions is burdensome, computationally expensive, and sensitive to approximation errors.
<cit.> use a variant of the Lyapunov-Schmidt reduction to consider IB bifurcations due to symmetry breaking.
While our findings agree with theirs for continuous IB bifurcations, they differ for discontinuous bifurcations (see Subsections <ref> and <ref>).
*Notations.
Vectors are written in boldface x, scalars in a regular font x.
A distribution p pertaining to a particular Lagrange multiplier value β (in Equations (<ref>)-(<ref>)) is denoted with a subscript, p_β.
The probability simplex on a set S is denoted Δ[S] (see Section <ref>).
The support of a probability distribution p on S is supp p := {s ∈ S : p(s) ≠ 0}.
The source, label and representation alphabets of an IB problem are denoted 𝒳, 𝒴 and 𝒳̂, respectively; we write T:= |𝒳̂|.
δ denotes Kronecker's delta, δ_{i, j} = 1 if i = j and zero otherwise.
§ COORDINATES EXCHANGE FOR THE IB
Just as a point in the plane can be described by different coordinate systems, so can IB roots.
As demonstrated recently by <cit.> for the related rate-distortion theory, picking the right coordinates matters when analyzing its bifurcations.
The same holds also for the IB.
Our primary motivations for exchanging coordinates are to reduce computational costs and to mod-out irrelevant kernel directions, as explained in Section <ref>.
In this Section, we discuss three natural choices of a coordinate system for parametrizing IB roots and the reasoning behind our choice for the sequel before setting to derive the IB ODE in the following Section <ref>.
This work is complemented by the later Subsection <ref>, which facilitates a transparent analysis of IB bifurcations.
IB roots have been classically parameterized in the literature by (direct) encoders p(|), following <cit.>.
Considering the BA-IB Algorithm <ref> reveals two other natural choices, illustrated by Equation (<ref>) below.
First, an encoder p(|) determines a cluster marginal p() and an inverse encoder p(|), via the cluster-marginal and Bayes (inverse-encoder) equations of Algorithm <ref>, respectively.
These can be interpreted geometrically as p()-weighted points q_(x) in the simplex Δ[X] of X, so long that these are well-defined, ∀ p() ≠ 0.
No more than |𝒳| + 1 points in the simplex are required to represent an IB root, <cit.>.
The latter reference is readily seen to analyze the IB in these coordinates[ Although known among IB practitioners, this reference has generally escaped broader attention.], even though it pre-dates <cit.>.
Second, an inverse encoder determines a decoder p(|), via the decoder equation of Algorithm <ref>. Along with the cluster marginal, ( p(|), p() ) can be similarly interpreted as p()-weighted points r_(y) in the simplex Δ[𝒴] of Y.
This choice of coordinates is implied already by <cit.>.
Cycling around Equation (<ref>), a decoder pair ( p(|), p() ) determines a new encoder, via the partition-function and encoder equations of Algorithm <ref>; this new encoder may differ from the one with which we started.
For notational simplicity, we shall usually write ( p(|), p() ) rather than ( r_(), p() ) for decoder coordinates (similarly, for inverse-encoder coordinates).
encoder p(x̂|x)  ⟶  inverse-encoder pair ( p(x|x̂), p(x̂) )  ⟶  decoder pair ( p(y|x̂), p(x̂) )  ⟶  encoder p(x̂|x) ,
where the first arrow is via the cluster-marginal and Bayes equations of Algorithm <ref>, the second via its decoder equation, and the third via its partition-function and encoder equations.
The above allows us to define three BA operators as the composition of three consecutive maps in Equation (<ref>), encoding an iteration of Algorithm <ref>.
When starting at an encoder p(|), its output is a newly-defined encoder.
Similarly, when starting at one of the other two vertices, it sends an inverse-encoder pair ( p(|), p() ) or a decoder pair ( p(|), p() ) to a newly-defined one.
By abuse of notation, we denote all three compositions by BA_β, with the choice of coordinates system mentioned accordingly. Indeed, these are representations of a single BA-IB iteration in three different coordinate systems, and so may be considered as distinct representations of the same operator.
For completeness, BA_β in decoder coordinates is spelled out explicitly at Equation (<ref>) in Appendix <ref>.
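To fix ideas, the following Python sketch implements one such cycle, starting and ending at a decoder pair; the array names and shapes are our own conventions, and the exact operator is the one spelled out at Equation (<ref>) in Appendix <ref>.

```python
import numpy as np

def ba_ib_iteration(p_y_xhat, p_xhat, p_y_x, p_x, beta, eps=1e-300):
    """One BA-IB cycle in decoder coordinates (a sketch of the cycle (<ref>)).

    p_y_xhat : (|Y|, T)   decoder p(y|xhat)
    p_xhat   : (T,)       cluster marginal p(xhat)
    p_y_x    : (|Y|, |X|) problem data p(y|x)
    p_x      : (|X|,)     problem data p(x)
    """
    # Encoder from the decoder pair: p(xhat|x) ~ p(xhat) exp(-beta D_KL[p(y|x) || p(y|xhat)])
    log_ratio = np.log(p_y_x[:, :, None] + eps) - np.log(p_y_xhat[:, None, :] + eps)
    d_kl = np.einsum('yx,yxt->xt', p_y_x, log_ratio)                 # (|X|, T)
    log_enc = np.log(p_xhat + eps)[None, :] - beta * d_kl
    enc = np.exp(log_enc - log_enc.max(axis=1, keepdims=True))
    enc /= enc.sum(axis=1, keepdims=True)                            # p(xhat|x)

    # New cluster marginal and inverse encoder (Bayes).
    new_p_xhat = enc.T @ p_x                                         # p(xhat)
    inv_enc = (enc * p_x[:, None]) / (new_p_xhat[None, :] + eps)     # p(x|xhat)

    # New decoder: p(y|xhat) = sum_x p(y|x) p(x|xhat).
    new_p_y_xhat = p_y_x @ inv_enc
    return new_p_y_xhat, new_p_xhat
```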
A newly-defined encoder (or inverse-encoder or decoder) at a cycle's completion need not generally equal the one at which we have started.
These are equal precisely at IB roots, when the IB Equations (<ref>)-(<ref>) hold.
Therefore, the choice of a coordinates system does not matter then, and so moving around Equation (<ref>) from one vertex to another yields different parameterizations of the same root, at least when ∀ p() ≠ 0.
In particular, this shows that the inverse-encoders q_ in Δ[X] of an IB root are in bijective correspondence with its decoders r_ in Δ[Y], an observation which shall come in handy at Section <ref>.
Next, we consider how well each of these coordinate systems can serve for following the path of an IB root.
The minimal number of symbols needed to write down an IB root typically varies with the constraints; cf., <cit.> or <cit.>.
Therefore, inverse-encoder and decoder coordinates are better suited than encoder coordinates for considering the dynamics of a root with β, as they allow us to consider its evolution via a varying number of points in a fixed space, Δ[𝒳] or Δ[𝒴], respectively.
Indeed, a direct encoder p(|) can be interpreted geometrically as a point in the |𝒳|-fold product Δ[𝒳̂]^X of simplices Δ[𝒳̂], <cit.>.
So, if a particular symbol x̂ is not in use anymore, p(x̂) = 0, then one is forced to choose between replacing Δ[𝒳̂] by the smaller space Δ[𝒳̂ ∖ {x̂}] and carrying on with a redundant symbol x̂.
The latter leads to non-trivial kernels in the IB due to duplicate clusters[ In addition to the IB's “perpetual kernel”, <cit.>.] (e.g., Section 3.1 there), making it difficult to tell whether a particular kernel direction pertains to a bifurcation.
In contrast, when considered in decoder coordinates, for example, an IB root is nothing but p()-weighted paths r_1, …, r_T in Δ[𝒴], with β↦ r_(β) a path for each r_.
And so, once a symbol is not needed anymore, then one can discard the path r_ without replacing the underlying space Δ[𝒴].
This permits the clean treatment of IB bifurcations in Section <ref>.
The computational cost of solving the first-order ODE (<ref>) for dx/dβ numerically depends on the dimension of x.
Much of this cost is due to computing a linear pre-image under D_x F, which is of order O(dim(x)^3), <cit.>; cf., Section <ref>.
Representing an IB root on T clusters in encoder coordinates requires[ With the coordinates considered as independent variables, ignoring normalization constraints.] |𝒳|· T dimensions, in inverse-encoder coordinates (|𝒳| + 1)· T dimensions, and in decoder coordinates (|𝒴| + 1)· T dimensions.
Thus, when there are fewer possible labels 𝒴 than input symbols 𝒳, then the computational cost is lowest in decoder coordinates.
A-priori, one might expect that derivatives with respect to β vanish when the solution barely changes, regardless of the choice of coordinates system.
For example, at a very large “β = ∞” value, an obvious IB root is the diagonal encoder[ That is, set 𝒳̂ := 𝒳 and p(x̂|x) := 1 if x̂ = x and 0 otherwise. ], as can be seen by a direct examination of the IB Equations (<ref>)-(<ref>).
It consists of one IB cluster of weight (or mass) p(x) at p_Y|X=x ∈ Δ[𝒴] for each x ∈ 𝒳, and so one might expect that it would barely change so long that β is very large.
However, the logarithmic[ See Section <ref> below on the use of logarithmic derivatives.] derivative d log p_β(x̂|x)/dβ in encoder coordinates need not vanish even when the derivatives d log p_β(y|x̂)/dβ and d log p_β(x̂)/dβ in decoder coordinates do, as seen to the right of Figure <ref>.
Indeed, given the derivative in decoder coordinates, one can exchange it to encoder coordinates by
d log p_β(x̂|x)/dβ =
J_dec^enc · d log p_β(y|x̂)/dβ +
J_mrg^enc · d log p_β(x̂)/dβ
- D_KL[ p(y|x) || p_β(y|x̂) ]
+ ∑_{x̂'} p_β(x̂'|x) · D_KL[ p(y|x) || p_β(y|x̂') ] ,
where J_dec^enc and J_mrg^enc are the two coordinate exchange Jacobian matrices of orders (T· |𝒳|)× (T· |𝒴|) and (T· |𝒳|)× T respectively, given by Equations (<ref>) and (<ref>) in Appendix <ref>.
And so, d log p_β(x̂|x)/dβ would often be non-zero even if both d log p_β(y|x̂)/dβ and d log p_β(x̂)/dβ vanish.
This unintuitive behavior of the derivative in encoder coordinates is due to the explicit dependence of the IB's encoder Equation (<ref>) on β. This dependence is the source of the last two terms in Equation (<ref>) (see Equation (<ref>)).
The comparison between encoder and inverse-encoder coordinates can be seen to be similar. See Appendix <ref> for further details.
In light of the above, we proceed with decoder coordinates in the sequel.
§ IMPLICIT DERIVATIVES AT AN IB ROOT AND THE IB'S ODE
We now specialize the implicit ODE (<ref>) (of Section <ref>) to the IB, using the decoder coordinates of the previous Section <ref>.
This allows us to compute first-order implicit derivatives at an IB root (Theorem <ref>) with remarkable accuracy, under one primary assumption: namely, that the root is a differentiable function of β.
While differentiability breaks at IB bifurcations (Section <ref>), it allows a solution path to be reconstructed from its local approximations in the following Section <ref>, so long as it holds.
To simplify calculations, we take the logarithm (log p(|), log p() ) of the decoder coordinates of Section <ref> as our variables.
Exchanging the BA_β operator to log-decoder coordinates is immediate, by writing log BA_β[exp(log p(|)), exp(log p()) ].
For short, we denote it BA_β[log p(|), log p()] when in these coordinates, by abuse of notation.
Similarly, exchanging the IB ODE (below) back to non-logarithmic coordinates is immediate, via d log p/dβ = (1/p) · dp/dβ.
In Section <ref> we shall assume that p() never vanishes.
To ensure that taking logarithms is well-defined, we also require[ While a decoder p(|) may have a well-defined derivative p(|) even without this requirement, the calculation details below would differ. ] that no coordinate of p(|) vanishes.
A sufficient condition for that is that p(|) > 0 for every and (Lemma <ref> in Appendix <ref>).
Next, define a variable x ∈ R^{T·(|𝒴| + 1)} as the concatenation of the vector ( log p_β(y|x̂) )_{y∈𝒴, x̂∈𝒳̂} with ( log p_β(x̂) )_{x̂∈𝒳̂}.
Differentiation with respect to log-probabilities is given by ∂/∂ log p = p · ∂/∂p, by the chain rule[ Defining u := log p, the u-derivative of f(p) is given by df/du = (df/dp)·(dp/du), or equivalently df/d(log p) = p · df/dp. See also Appendix <ref> for a gentler treatment. ].
This gives meaning to the Jacobian matrix D_x (·) with respect to our logarithmic variable x.
The Jacobian D_log p(|), log p() BA_β of a single Blahut-Arimoto iteration in these log-decoder coordinates is a square matrix of order T· (|𝒴| + 1).
Its (T· |𝒴|)× (T· |𝒴|) upper-left block (below) corresponds to perturbations in BA's output log-decoder log p(|) due to varying an input log-decoder log p(|).
Since we prime input but not output coordinates, this is to say that the columns of this block are indexed[ Alternatively, one can enumerate the label and representation alphabets explicitly, 𝒴 := {_1, …, _|𝒴|} and 𝒳̂ := {_1, …, _T}. This allows to replace (, ) and (, ) throughout by (_i, _j) and (_k, _l), respectively, with i, k=1,…, |𝒴| and j, l = 1, …, T.
] by pairs (y', x̂') and its rows by (y, x̂).
Its (T· |𝒴|)× T upper-right block corresponds to perturbations in BA's output log-decoder log p(|) due to varying an input log-marginal log p().
That is, its columns are indexed by x̂' and rows by (y, x̂).
Similarly, for the bottom-left and bottom-right blocks, of respective sizes T × (T· |𝒴|) and T× T.
See (<ref>) ff., in Appendix <ref>, and the end-result at Equation (<ref>) there.
Explicitly, when evaluated at an IB root ( log p(|), log p() ), BA's Jacobian matrix is given by
D_log p(|), log p() BA_β[ log p(|), log p() ] =
[ upper-left block:    β·∑_, ( δ_, - δ_, )·[ 1 - δ_, p_β(|) ] C(, ; β)_, ,
  upper-right block:   (1 - β)·∑_ [ 1 - δ_, p_β(|) ] B(, ; β)_ ,
  bottom-left block:   β·[ δ_, p_β(|) - B(, ; β)_ ] ,
  bottom-right block:  (1 - β)·[ δ_, - A(, ; β) ] ] ,
where δ_i, j = 1 if i = j and is 0 otherwise.
As mentioned above, the primed coordinates y' and x̂' index the columns, and the un-primed coordinates y and x̂ the rows. Indices carrying more than a single prime are summation variables.
A, B and C are a scalar, a vector, and a matrix, each involving two IB clusters. They are defined by,
A(x̂, x̂'; β) := ∑_x p_β(x|x̂) p_β(x̂'|x) ,
B(x̂, x̂'; β)_y := ∑_x p(y|x) p_β(x|x̂) p_β(x̂'|x) , and
C(x̂, x̂'; β)_{y, y'} := ∑_x p(y|x) p(y'|x) p_β(x|x̂) p_β(x̂'|x) .
In these, y indexes B and the rows of C, y' the columns of C, and x is a summation variable.
These (x̂, x̂')-labeled tensors have only |𝒴| entries along each axis, thanks to the choice of decoder coordinates.
A and B can be expressed in terms of C via some obvious relations; see Equation (<ref>) and below in Appendix <ref>.
Appendix <ref> elaborates on the mathematical subtleties involved in calculating the Jacobian (<ref>).
See also Equation (<ref>) in Appendix <ref> for an implementation-friendly form of (<ref>).
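For concreteness, a numpy sketch of these tensors (array names are ours: `p_y_x` stands for p(y|x), `inv_enc` for p_β(x|x̂), and `enc` for p_β(x̂|x)); the relations just mentioned then amount to marginalizing C.

```python
import numpy as np

def ib_tensors(p_y_x, inv_enc, enc):
    """A, B and C of (<ref>), under the index conventions stated above (a sketch).

    p_y_x   : (|Y|, |X|)   p(y|x)
    inv_enc : (|X|, T)     p_beta(x|xhat)
    enc     : (|X|, T)     p_beta(xhat|x)
    """
    # C[xh, xh', y, y'] = sum_x p(y|x) p(y'|x) p_beta(x|xh) p_beta(xh'|x)
    C = np.einsum('yx,zx,xt,xs->tsyz', p_y_x, p_y_x, inv_enc, enc)
    B = C.sum(axis=3)   # B[xh, xh', y] = sum_{y'} C[xh, xh', y, y']
    A = B.sum(axis=2)   # A[xh, xh']    = sum_{y}  B[xh, xh', y]
    return A, B, C
```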
Together with D_β BA_β (Equations (<ref>) and (<ref>) in Appendix <ref>), we have both of the first-order derivative tensors of BA_β in log-decoder coordinates.
This allows us to specialize the implicit ODE (<ref>) (of Section <ref>) to the IB, in terms of our variable x. By abuse of notation, we write ( log p_β(|), log p_β() )_, for its |𝒴|· T + T coordinates, and similarly for its derivatives vector v (<ref>) below.
Let ( p(|), p() ) be an IB root,
and suppose that it can be written as a differentiable function β↦( p_β(|), p_β() ) in β.
If none of its coordinates vanish, then the vector
v := d/dβ ( log p_β(|), log p_β() )_,
of its implicit logarithmic derivatives is well defined and satisfies an ordinary differential equation in β,
( I - D_log p(|), log p() BA_β ) v =
[ top, (, )-indexed block:   - ∑_, [ 1 - p(|)/p_β(|) ]·[ δ_, - p_β(|) ]·p_β(|)·D_KL[ p(|) || p_β(|) ] ,
  bottom, -indexed block:      ∑_, [ δ_, - p_β(|) ]·p_β(|)·D_KL[ p(|) || p_β(|) ] ] ,
where I is the identity matrix of order T· (|𝒴| + 1), and the Jacobian matrix D_log p(|), log p() BA_β at the given IB root is given by Equation (<ref>).
The right-hand side of (<ref>) is indexed as in (<ref>), by (, ) at its top and at its bottom coordinates.
While the IB ODE was discovered by <cit.>, it is derived here anew in log-decoder coordinates due to the considerations in Section <ref>.
It is analogous to the RD ODE, due to <cit.>; Corollary <ref> and around (in Section <ref>) provides a relation between these two ODEs.
We emphasize that the first assumption of Theorem <ref>, that the IB root is a differentiable function of β, is essential. It is comprised of two parts: (i) that the root can be written as a function of β, and (ii) that this function is differentiable.
These are precisely the assumptions needed to compute the first-order implicit multivariate derivative v (<ref>) at the given root, <cit.>.
Continuous IB bifurcations violate (ii) (Subsection <ref>), while discontinuous ones violate (i) (Subsection <ref>).
In contrast, the requirement that no coordinate vanishes is a technical one, due to our choice of logarithmic coordinates.
It is not necessary for the Jacobian of the IB operator (<ref>) (to the left of (<ref>)) to be non-singular in order to solve the IB ODE numerically.
Nevertheless, non-singularity of the Jacobian will follow from the sequel (see Conjecture <ref> in Section <ref>).
With that, the derivatives v = d/dβ ( log p_β(|), log p_β() ) (<ref>) computed numerically from the IB ODE (<ref>) at an exact root are remarkably accurate, as demonstrated in Figure <ref>.
As in RD, <cit.>, calculating implicit derivatives numerically loses its accuracy when approaching a bifurcation because the Jacobian is increasingly ill-conditioned there.
For comparison, the BA-IB Algorithm <ref> also loses its accuracy near a bifurcation. This is a consequence of BA's critical slowing down, <cit.>, just as with its corresponding RD variant.
Each coordinate of ( p(|), p() ) is treated by the IB ODE (<ref>) as an independent variable. However, normalization of p(|) imposes one constraint[ Ignoring the normalization of the marginal p().] per cluster .
Thus, one might expect the behavior of BA's Jacobian (<ref>) to be determined by fewer than T·(|𝒴| + 1 ) coordinates, at least qualitatively.
This intuition is justified by the following Lemma <ref>, which allows the kernel of the IB operator (<ref>) to be considered via a smaller and simpler matrix S;
see Appendix <ref> for its proof.
Given an IB root as above, define a square matrix of order T · |𝒴| by
S_(, ), (, ) :=
∑_ p_β(|) [ β·p(|) p_β(|) + (1 - 2β) ]
p(|) [ δ_, - p_β(|) ] .
Then, the nullity of the Jacobian I - D_log p(|), log p() BA_β of the IB operator (<ref>) equals that of I - S,
where I is the identity matrix (of the respective order), and S is defined by (<ref>),
dim ker ( I - S ) =
dim ker ( I - D_log p(|), log p() BA_β ) .
Specifically, write v := (v_{y, x̂})_{y, x̂} for a left eigenvector of S corresponding to the eigenvalue 1.
Then, there is a bijective correspondence between the left kernels at both sides of (<ref>), mapping
v↦ (v, u) ,
where u := (u_x̂)_x̂ is defined by u_x̂ := (1 - β)/β · ∑_y v_{y, x̂}.
In addition to offering a form more transparent than BA's Jacobian in (<ref>), Lemma <ref> also reduces the computational cost of testing I - D_log p(|), log p() BA_β (<ref>) for singularity, by using the smaller I - S (<ref>) in its place.
This makes it easier to detect upcoming bifurcations (see Conjecture <ref> in Section <ref>).
Further, one can verify directly that the IB ODE (<ref>) indeed follows the right path.
Indeed, if the ODE is non-singular, then, by the Implicit Function Theorem, there is (locally) a unique IB root, which is a differentiable function of β.
And so, there is a unique solution path for a numerical approximation to follow.
Finally, we note that a relation similar to (<ref>) holds also for eigenvalues of D_log p(|), log p() BA_β (<ref>) other than 1. This can be seen either empirically or by tracing the proof of Lemma <ref>.
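In practice, one may therefore monitor for an approaching bifurcation via the spectrum of the smaller matrix I - S rather than that of the full Jacobian. A numpy sketch (assembling S itself from (<ref>) is assumed to be done elsewhere; the function name and threshold are ours):

```python
import numpy as np

def near_singular(S, tol=1e-6):
    """Flag an approaching bifurcation from the reduced matrix S of (<ref>).

    By Lemma <ref>, I - S has the same nullity as the Jacobian of the IB
    operator, so its smallest singular value measures proximity to singularity.
    """
    I_minus_S = np.eye(S.shape[0]) - S
    sigma_min = np.linalg.svd(I_minus_S, compute_uv=False).min()
    return sigma_min < tol, sigma_min
```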
In Section <ref>, we shall proceed with this line of thought of removing irrelevant coordinates.
In the following Section <ref>, we turn to reconstruct a solution path from implicit derivatives at a point, with bifurcations ignored for now.
§ A MODIFIED EULER METHOD FOR THE IB
We follow the path of a given IB root away of bifurcation by using its implicit derivatives computed from the IB ODE (<ref>), of Section <ref>.
We follow the classic Euler method for simplicity, modifying it slightly to get the most out of the calculated derivatives.
Improvements using more sophisticated numerical methods are left to future work.
The detection and handling of IB bifurcations are deferred to the next Section <ref>, and thus are ignored in this Section.
Let dx/dβ = f(x, β) and x(β_0) = x_0 define an initial value problem.
In numerical approximations of ordinary differential equations (ODEs), the Euler method for this problem is defined by setting
x_n+1 := x_n + Δβ· f(x_n, β_n) ,
where β_n+1 := β_n + Δβ, and | Δβ | is the step size.
The global truncation error max_n x_n - x(β_n)_∞ is the largest error of the approximations x_n from the true solutions x(β_n).
A numerical method for solving ODEs is said to be of order d if its global truncation error is of order O(|Δβ|^d), for step sizes |Δβ| small enough.
Euler's method error analysis is a standard result, e.g., <cit.> or <cit.>, brought as Theorem <ref> below.
It shows that Euler's method (<ref>) is of order d = 1, under mild assumptions, as demonstrated in Figure <ref>.
The immediate generalization of (<ref>) using derivatives till order d is Taylor's method, which is a method of order d.
Let an initial-value problem be defined on [β_0, β_f] by dx/dβ = f(x, β) as above[
The initial condition x_0 in (<ref>) is allowed to deviate from the true solution x(β_0).
], and suppose that f satisfies the Lipschitz condition with some constant L > 0. Namely, f(x, β) - f(x', β)_∞≤ L ·x - x'_∞ for every x, x' and β∈[β_0, β_f].
Then, Euler method's (<ref>) global truncation error satisfies
max_{β_0 ≤ β_n ≤ β_f} ‖ x_n - x(β_n) ‖_∞ ≤
e^{(β_f - β_0) L} ‖ x_0 - x(β_0) ‖_∞ +
( e^{(β_f - β_0) L} - 1 )/L · (|Δβ|/2) · max_{β_0 ≤ β ≤ β_f} ‖ d²x(β)/dβ² ‖_∞ .
Specializing Euler's method to our needs, replace x in (<ref>) above by the log-decoder coordinates of an IB root, as in Section <ref>.
So long that an IB root p_β := ( p_β(|), p_β() ) is a differentiable function of β in the vicinity of β_n, it can be approximated by
log p_β_n+1(|) ≈log p_β_n(|) + Δβ·d log p_β(|) /dβ|_p_β_n and
log p_β_n+1() ≈log p_β_n() + Δβ·d log p_β() /dβ|_p_β_n ,
where d log p_β(|)/dβ and d log p_β()/dβ are calculated from the IB ODE (<ref>).
Thus, applying (<ref>) repeatedly, we obtain an Euler method for the IB.
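A Python sketch of a single such step follows. The helpers `ib_jacobian` and `ib_rhs`, which assemble the left- and right-hand sides of the IB ODE (<ref>), are assumed rather than given here, and the variable names are ours.

```python
import numpy as np

def euler_ib_step(log_dec, log_marg, beta, d_beta, p_y_x, p_x):
    """One Euler step (<ref>) in log-decoder coordinates (a sketch).

    `ib_jacobian` and `ib_rhs` are hypothetical helpers returning the matrix
    I - D BA_beta of (<ref>) and the right-hand side of the IB ODE (<ref>).
    """
    x = np.concatenate([log_dec.ravel(), log_marg])          # the variable x of Section <ref>
    lhs = ib_jacobian(log_dec, log_marg, beta, p_y_x, p_x)   # I - D BA_beta, square
    rhs = ib_rhs(log_dec, log_marg, beta, p_y_x, p_x)
    v = np.linalg.solve(lhs, rhs)                            # implicit derivatives dx/dbeta
    x_next = x + d_beta * v                                  # Euler step; d_beta < 0 below
    log_dec_next = x_next[:log_dec.size].reshape(log_dec.shape)
    log_marg_next = x_next[log_dec.size:]
    return log_dec_next, log_marg_next
```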
We shall take only negative steps Δβ < 0 when approximating the IB, due to reasons explained in Subsection <ref> (after Proposition <ref>).
In contrast to the BA-IB Algorithm <ref>, Euler's method (<ref>) can be used to extrapolate intermediate points, yielding a piecewise linear approximation of the root.
The problem of tracking an operator's root belongs in general to a family of hard-to-solve numerical problems — known as stiff — if the problem has a bifurcation, <cit.>.
e.g., <cit.> or <cit.> on stiff differential equations.
Stopping early in the vicinity of a bifurcation restricts the computational difficulty and permits convergence guarantees.
Early-stopping in the IB shall be handled later, in Subsection <ref>.
<cit.> proves that Euler's method convergence guarantees (Theorem <ref>) hold for the closely related Euler method for RD with early stopping.
While Euler's method may inadvertently switch between solution branches, the latter guarantees ensure that it indeed follows the true solution path between bifurcations, if the step size |Δβ| is small enough and initializing close enough to the true solution.
While we do not dive into these details for brevity, we note that similar convergence guarantees can also be proven here.
Alternatively, Euler's method can be ensured to follow the true solution path by noting that an optimal IB root is (strongly) stable when negative steps Δβ < 0 are taken; these details are deferred to Subsection <ref>, as they depend on Section <ref>.
Following the discussion in Section <ref>, there is a subtle disadvantage in choosing decoder coordinates as our variables compared to the other two coordinate systems there.
Indeed, recall that the IB is defined as a maximization over Markov chains Y ⟷ X ⟷X̂.
An (arbitrary) encoder p(|) defines a joint probability distribution p(|) p(|) p() which is Markov.
An inverse-encoder pair also similarly defines a Markov chain.
In contrast, an arbitrary decoder pair (p(|), p()) need not necessarily define a Markov chain.
Rather, by invoking the error analysis of Euler's method, one can see that Markovity is approximated increasingly well as the step-size |Δβ| in (<ref>) becomes smaller.
To enforce Markovity, we shall perform a single BA iteration (in decoder coordinates) after each Euler method step.
This ensures that the newly generated decoder pair satisfies the Markov condition, as it is now generated from an encoder.
As a side effect, adding a single BA-IB iteration after each Euler method step improves the approximation's quality significantly.
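A sketch of the resulting modified step, reusing the hypothetical `euler_ib_step` above and the `ba_ib_iteration` sketch given earlier (Section <ref>):

```python
import numpy as np

def modified_euler_ib_step(log_dec, log_marg, beta, d_beta, p_y_x, p_x):
    """Euler step (<ref>) followed by a single BA-IB iteration (a sketch).

    The added BA iteration enforces Markovity, since the returned decoder
    pair is re-generated from an encoder, and also improves the accuracy."""
    log_dec, log_marg = euler_ib_step(log_dec, log_marg, beta, d_beta, p_y_x, p_x)
    dec, marg = np.exp(log_dec), np.exp(log_marg)
    dec, marg = ba_ib_iteration(dec, marg, p_y_x, p_x, beta + d_beta)
    return np.log(dec), np.log(marg)
```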
By linearizing BA_β around a fixed point, one can show that deterministic annealing with a fixed number of BA iterations per grid point is a first-order method, and so may arguably be placed on the same footing as Euler's method.
A similar argument shows that adding a single BA iteration after each Euler method step yields a second-order method.
However, while a larger number of added BA iterations obviously improves the approximation's quality, it does not improve the method's order.
See Appendix <ref> for an approximate error analysis.
The predicted orders are in good agreement with the ones found empirically, shown in Figure <ref>.
We note that while <cit.> did not attempt an added BA iteration, they do discuss a variety of other improvements to root-tracking (see Section 3.4 there).
§ ON IB BIFURCATIONS
For a bifurcation to exist, it is necessary that the Jacobian of the IB operator (<ref>) would be singular, as illustrated by Figure <ref>.
However, a priori, singularity is not sufficient to detect a bifurcation (cf., <cit.>), nor does it allow one to distinguish between bifurcations of different types.
In order to be able to exploit the IB's ODE (<ref>) (Section <ref>), we shall now take a closer look into its bifurcations.
These can be broadly classified into two types: where an optimal root is continuous in β and where it is not.
As noted after Theorem <ref>, each type violates an assumption necessary to compute implicit derivatives.
Sections <ref> and <ref> provide the means to identify bifurcations, distinguish between their types, and handle them accordingly (for continuous bifurcations).
To facilitate the discussion, Section <ref> considers the IB as a rate-distortion problem, following <cit.> and others.
This allows us to leverage recent insights on RD bifurcations, <cit.>, while suggesting a “minimally sufficient” choice of coordinates for the IB.
The latter permits a clean treatment of continuous IB bifurcations in Section <ref>.
Viewing the IB as an infinite-dimensional RD problem facilitates the understanding of its discontinuous bifurcations, which in turn highlight subtleties in its finite-dimensional coordinate systems (of Section <ref>).
These provide insight into the IB and are also of practical implications (Section <ref>), and so are necessary for our algorithms at Section <ref>.
§.§ The IB as a rate-distortion problem
We now explore the intimate relation between the IB and RD, following <cit.> and <cit.>. This leads to a “minimally sufficient” coordinate system for the IB, thereby completing the work of Section <ref>.
In this coordinate system, results <cit.> on the dynamics of RD roots are readily considered in IB context. This leads to Conjecture <ref>, that the IB operator (<ref>) in these coordinates is typically non-singular.
The discussion here facilitates the treatment of IB bifurcations in the following Sections <ref> and <ref>.
First, recall a few definitions.
A rate distortion problem on a source alphabet 𝒳 and a reproduction alphabet 𝒳̂ is defined by a distortion measure[ Also called a single-letter fidelity criterion. It is a non-negative real-valued function on 𝒳×𝒳̂, with no further requirements. Or equivalently, an |𝒳|-by-|𝒳̂| matrix. e.g., <cit.>. ] d:𝒳×𝒳̂→R_≥ 0 and a source distribution p_X(x).
One seeks the minimal rate I(X; X̂) subject to a constraint D on the expected distortion E[d(, )], <cit.>,
R(D) := min_{p(x̂|x)} { I(X; X̂) : E_{p(x̂|x) p_X(x)}[ d(x, x̂) ] ≤ D } ,
known as the rate-distortion curve.
The minimization is over test channels p(|).
A test channel that attains the RD curve (<ref>) is called an achieving distribution.
We say that an RD problem is finite if both of the alphabets 𝒳 and 𝒳̂ are finite.
Using Lagrange multipliers for (<ref>) with[ Normalization constraints omitted for clarity.] I(X; X̂) + β E[d(x, x̂)], one obtains a pair of fixed-point equations
p(x̂|x) = p(x̂) e^{-β d(x, x̂)} / ∑_{x̂'} p(x̂') e^{-β d(x, x̂')} and
p(x̂) = ∑_x p(x̂|x) p_X(x)
in the marginal p(x̂) and test channel p(x̂|x), similar to the IB Equations (<ref>) and (<ref>).
Iterating over these is Blahut's algorithm for RD, <cit.>, denoted BA_β^RD here.
As with the IB (<ref>), β parametrizes the slope of the optimal curve (<ref>) also for RD.
See <cit.> or <cit.> for an exposition of rate-distortion theory.
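For concreteness, a Python sketch of a single iteration over these fixed-point equations (array names are ours):

```python
import numpy as np

def blahut_rd_iteration(p_xhat, d, p_x, beta):
    """One iteration of Blahut's algorithm for RD, i.e. Equations (<ref>) (a sketch).

    p_xhat : (T,)      current reproduction marginal p(xhat)
    d      : (|X|, T)  distortion matrix d(x, xhat)
    p_x    : (|X|,)    source distribution p_X(x)
    """
    w = p_xhat[None, :] * np.exp(-beta * d)               # numerator of the test channel
    p_xhat_given_x = w / w.sum(axis=1, keepdims=True)     # p(xhat|x)
    new_p_xhat = p_xhat_given_x.T @ p_x                   # p(xhat) = sum_x p(xhat|x) p(x)
    return p_xhat_given_x, new_p_xhat
```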
We clarify a definition needed to rewrite the IB as an RD problem.
We define the simplex Δ[S] on a (possibly infinite) set S as the collection of finite formal convex combinations ∑_s a_s · s of elements of S.
That is, as the S-indexed vectors[ Equivalently, as functions mapping each element s of S to a real number a_s. ] (a_s)_s∈ S that satisfy ∑_s a_s = 1 and a_s ≥ 0, with a_s non-zero for only finitely many elements s (the support of (a_s)_s).
Addition and multiplication are defined pointwise, as in ∑_s a_s· s + ∑_s b_s· s = ∑_s (a_s+b_s)· s.
Δ[S] is closed under finite convex combinations because the sum of finitely supported vectors is finitely supported.
When taking S = {e_1, …, e_n} the standard basis vectors (e_i)_j = δ_i, j of R^n, then one can identify the formal operations with those in R^n, reducing the simplex Δ[S] to its usual definition.
We write r for an element of Δ[𝒴].
In particular, an element of Δ[Δ[𝒴]] is merely a finite convex combination ∑_ p() r_ of distinct[ Note that Δ[S] is a set.] probability distributions r_(y) ∈Δ[𝒴] on 𝒴.
When setting 𝒳̂⊂Δ[𝒴] to be a finite subset of distributions, |𝒳̂| < ∞, then Δ[𝒳̂] is a special case[ Unlike Δ[𝒳̂] here, the decoder coordinates of Section <ref> are not required to have their clusters r distinct.] of the decoder coordinates of Section <ref>.
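As an illustration of this parameterization, the following Python sketch encodes p(x̂)-weighted clusters as a finitely-supported element of Δ[Δ[𝒴]], here a dictionary keyed by the clusters themselves; note that duplicate clusters collapse automatically, a point we return to below.

```python
import numpy as np

def as_element_of_simplex_on_simplex(decoders, p_xhat, decimals=12):
    """Encode weighted clusters as an element of Delta[Delta[Y]] (a sketch).

    decoders : (|Y|, T)  columns are clusters r in Delta[Y];  p_xhat : (T,) their weights.
    Keys are the clusters themselves (rounded, as tuples) and values their total
    mass; identical clusters are collapsed automatically since keys are unique.
    """
    element = {}
    for t in range(decoders.shape[1]):
        r = tuple(np.round(decoders[:, t], decimals))
        element[r] = element.get(r, 0.0) + float(p_xhat[t])
    return element
```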
Now, let a finite IB problem be defined by a joint probability distribution p_Y|X p_X, as in Section <ref>.
To write it down as an RD problem, <cit.>, define the IB distortion measure by
d_IB(x, r) := D_KL[ p_Y|X=x || r ] ,
for x∈𝒳, r ∈Δ[𝒴], and p_Y|X=x∈Δ[𝒴] the conditional probability distribution at X = x.
The distortion measure d_IB (<ref>) and p_X define an RD problem on the continuous reproduction alphabet 𝒳̂ := Δ[𝒴].
Minimizing the IB Lagrangian ℒ (at Section <ref>) is equivalent to minimizing the Lagrangian of this RD problem, <cit.>.
That is, the IB is a rate-distortion problem when considered in these coordinates[
The astute reader might notice that the IB Equations (<ref>) and (<ref>) are then equivalent to RD's fixed-point Equations (<ref>), with (<ref>) implied by the IB's Markovity.
The IB's Y-information I(Y; X̂) equals the expected distortion E[d_IB(, )] at (<ref>) up to a constant, <cit.>, and so is linear in the test channel p(r|).
].
IB clusters r∈Δ[𝒴] assume the role of RD reproduction symbols, while an IB root (considered now as an RD root) is equivalently described either by the probabilities of each cluster — namely, by a point in Δ[Δ[𝒴]] — or, by a test channel p(r|).
Unlike the finite-dimensional coordinate systems of Section <ref>, this definition of the IB entails no subtleties due to finite-dimensionality, such as duplicate clusters (see more below).
However, while it allows to spell out the IB explicitly as an RD problem, handling an infinite reproduction alphabet is difficult for practical purposes.
Since no more than |𝒳| + 1 reproduction symbols are needed to write down an IB root (see therein), this motivates one to consider the IB's local behavior, with clusters fixed.
So instead, one may require the reproduction symbols of d_IB (<ref>) to be in a list[ We use here a tuple (r_{x̂_1}, …, r_{x̂_T}) rather than a set, since the points r_ need not be distinct a priori.] (r_)_ indexed by some finite set 𝒳̂, with each r_ in Δ[𝒴].
This defines a finite RD problem, for which d_IB (<ref>) is merely an |𝒳|-by-T matrix.
Yet, placing identical clusters in the list (r_)_ inadvertently introduces degeneracy to the matrix d_IB (<ref>), as discussed below.
<cit.> take (r_)_ to be the decoders defined by a given encoder p(|), as in Equation (<ref>) (Section <ref>). We shall then refer to d_IB (<ref>) as the distortion matrix defined by p(|).
When p_β_0(|) is an optimal IB root, the problem (d_IB, p_X) defined by it is called the tangent RD problem.
Indeed, its RD curve (<ref>) coincides[ Note that an optimal choice of IB clusters is already encoded into d_IB (<ref>) here. With d_IB and p_X given, solving the IB boils down to finding the optimal cluster weights p(), which is an RD problem. ] with the IB curve (<ref>) at this point. However, the curves differ outside this point since IB clusters usually vary with β, while the distortion of the tangent problem is defined at p_β_0(|) and so is fixed.
By definition (<ref>), it follows that the IB curve is a lower envelope of the curves of its tangent RD problems, <cit.>.
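For concreteness, a numpy sketch of the distortion matrix d_IB (<ref>) defined by a root's decoders, i.e., of the distortion underlying its tangent RD problem (array names are ours):

```python
import numpy as np

def ib_distortion_matrix(p_y_x, decoders, eps=1e-300):
    """The |X|-by-T matrix d_IB(x, r_xhat) = D_KL[ p(y|x) || r_xhat ] of (<ref>) (a sketch).

    p_y_x    : (|Y|, |X|)  conditional distributions p(y|x)
    decoders : (|Y|, T)    a root's clusters r_xhat in the simplex on Y
    """
    log_ratio = np.log(p_y_x[:, :, None] + eps) - np.log(decoders[:, None, :] + eps)
    return np.einsum('yx,yxt->xt', p_y_x, log_ratio)   # shape (|X|, T)
```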
We note that a similar construction can also be carried out in inverse-encoder coordinates; cf., <cit.>.
Regardless of the formulation used to rewrite the IB as an RD problem, the associated RD problem has an expected distortion E[d_IB] of I(X; Y) - I(X̂; Y) at an IB root, <cit.>, <cit.>.
That is, the IB is a lossy compression method that strives to preserve the relevant information I(X̂; Y).
Due to the Markov condition, information on Y is available only through X.
Thus, one may intuitively consider the IB as a lossy compression method of the information on Y that is embedded in X.
These intimate relations between the IB and RD suggest that studying bifurcations in either context could be leveraged to understand the other.
Bifurcations in finite RD problems are discussed at length in <cit.>.
To facilitate the study of IB bifurcations in the sequel (Sections <ref> and <ref>) using results from RD, we need a “minimally-sufficient” coordinate system for the IB.
Consider an IB root in decoder coordinates as finitely-many p()-weighted points r_() in Δ[𝒴], as in Section <ref>.
Exchanging to decoder coordinates (Equation (<ref>) there) is well-defined so long that there are no zero mass clusters, ∀ p() ≠ 0.
Yet, even then, the points r_ in Δ[𝒴] yielded by the cluster-marginal through decoder equations of Algorithm <ref> need not be distinct.
Namely, they may yield identical clusters r_ = r_ at distinct indices ≠.
This leads to a discussion of structural symmetries of the IB (its degeneracies), which is not of use for our purposes; cf., <cit.>.
To avoid such subtleties, we shall say that an IB root is reduced if it has no zero-mass clusters, ∀ p() ≠ 0, and all its clusters are distinct, = ⇔ r_ = r_.
A root that is not reduced is degenerate or represented degenerately.
An IB root can be reduced by removing clusters of zero mass and merging identical clusters of distinct indices — see Algorithm <ref> in Section <ref> below.
It is straightforward to see from the IB Equations (<ref>)-(<ref>) that reduction preserves the property of being an IB root.
Similarly, reducing a root does not change its location in the information plane.
So, a root achieves the IB curve (<ref>) if and only if its reduction does.
Therefore, reduction decreases the dimension in which the problem is considered while preserving all its essential properties.
This allows an IB root to be represented on the smallest number of clusters possible, its effective cardinality, by factoring out the IB's structural symmetries. cf., <cit.>, upon which this definition is based.
While the purpose of reduction is to mod-out redundant kernel coordinates (see Section <ref>), it highlights the differences between the various IB definitions found in the literature[ e.g., both of the IB formulations <cit.> do not impose an a priori restriction on the number of clusters. The former does not enable one to encode duplicate clusters, while the latter does. The formulation <cit.> ignores these subtleties altogether. <cit.> consider the IB on a pre-determined number of possibly duplicate clusters. ], bringing to light a subtle caveat of finite dimensionality.
To see this, note that reduction could have been defined above in terms of the other coordinate systems of the IB.
Its definition in inverse-encoder coordinates is nearly identical to that above, while defining it in encoder coordinates is a straightforward exercise.
Since the coordinate systems of Section <ref> are equivalent at an IB root (without zero-mass clusters), the precise definition does not matter then.
Each of these parameterizations encodes the coordinates r(y) of a root's clusters r using a finite-dimensional vector x (note Equation (<ref>)).
This enables one to represent duplicate clusters ≠ with r_ = r_, and obliges one to choose the order in which clusters are being encoded into the coordinates of x.
A finite-dimensional representation x of an IB root is invariant to interchanging clusters ≠ precisely when they are identical, r_ = r_.
The IB's functionals (e.g., its X and Y-information) are invariant to any cluster permutation.
cf., <cit.>.
Both of these structural symmetries result from using a finite-dimensional parameterization, with the former eliminated by reduction.
In contrast, the elements of Δ[𝒴] are distinct by definition (since Δ[𝒴] is a set), and so parametrizing the IB by points in Δ[Δ[𝒴]] does not permit identical clusters.
An element ∑_r p(r) r of Δ[Δ[𝒴]] assigns a probability mass p(r) to every point r in Δ[𝒴], with only finitely-many points r supported.
Thus, it implicitly encodes all the entries r(y) of every probability distribution r ∈Δ[𝒴] in a “one size fits all” approach, giving no room for the choices above.
This leads us to argue that the IB's structural symmetries are not an inherent property but rather an artifact of using its finite-dimensional representations.
In rate-distortion, the reduction of a finite RD problem is defined similarly, <cit.>, by removing a symbol from the reproduction alphabet 𝒳̂ and its column d(·, ) from the distortion matrix once it is not in use anymore (of zero mass).
A distortion matrix d is non-degenerate if its columns are distinct, d(·, ) ≠ d(·, ) for all ≠.
Non-degeneracy arises naturally when considering the RD problem tangent to a given IB root p(|).
Indeed, the distortion matrix d_IB (<ref>) defined by p(|) has duplicate columns if the root has identical clusters, while the other direction holds under mild assumptions[ If the |𝒳| vectors p_Y|X=x span R^|𝒴|, then D_KL[ p_Y|X=x || r_] = D_KL[ p_Y|X=x || r_] for all x implies that r_ = r_.].
Under these assumptions, the distortion matrix induced by an IB root p(|) is reduced and non-degenerate precisely when p(|) is a reduced IB root.
Reduction in RD provides the means to show that the dynamics underlying the RD curve (<ref>) are piecewise analytic in β, <cit.>, under mild assumptions.
Just as in definition (<ref>) of the IB operator, <cit.> similarly define the RD operator Id - BA_β^RD in terms of Blahut's algorithm for RD, <cit.>.
By using their Theorem 1, <cit.> observed that reducing a finite RD problem to the support[ The support of a distribution p() is defined by p() := { : p() > 0}. ] of a given RD root mods-out redundant kernel coordinates if the distortion measure is finite and non-degenerate.
That is, the Jacobian D(Id - BA_β^RD) of the RD operator on the reduced problem is then non-singular[ When considered in the right coordinates system; see therein for details.], just as with our toy problem (<ref>) in Section <ref>.
By the Implicit Function Theorem, there is therefore a unique RD root of the reduced problem through the given one; this root is real-analytic in β (details there).
Considering this for the RD problem tangent to a reduced IB root immediately yields the following,
Let p_β_0(|) be a reduced IB root of a finite IB problem defined by p_Y|X p_X, such that the matrix p_Y|X is of rank |𝒴|.
Then, near β_0, there is a unique function continuous in β, which is a root of the tangent RD problem through p_β_0(|); it is real-analytic in β.
Corollary <ref> shows that the local approximation of an IB problem (the roots of its tangent RD problem) is guaranteed to be as well-behaved as one could hope for, provided that the IB is viewed in the right coordinates system.
Note, however, that the RD root through p_β_0(|) of the tangent problem does not in general coincide with the IB root outside of β_0 since the IB distortion d_IB (<ref>) varies along with the clusters that define it.
However, when the IB clusters are fixed, then one might expect that the Jacobian (<ref>) of BA_β in log-decoder coordinates would be the same as the Jacobian of its RD variant.
Indeed, the Jacobian matrix of BA^RD_β is the T× T bottom-right sub-block of the Jacobian (<ref>) of BA_β, up to a multiplicative factor.
cf., Equations (5)-(6) in <cit.>, Equations (<ref>) and (<ref>) in Section <ref>, and (<ref>) in Appendix <ref>.
As in RD, we argue that reduction in the IB also provides the means to show that the dynamics underlying the optimal curve (<ref>) are piecewise analytic in β.
Corollary <ref> concludes that, under mild assumptions, through every reduced IB root passes a unique real-analytic RD root.
However, its crux is that the Jacobian of the RD operator Id - BA_β^RD is non-singular at a reduced root.
Due to the IB's close relations with RD, and since reduction in the IB is a natural extension of reduction in RD, we argue that the same is also to be expected of the IB operator Id - BA_β (<ref>) in decoder coordinates.
To see this, note that IB roots are finitely supported, <cit.>, and so one may take finitely-supported probability distributions Δ[Δ[𝒴]] for the IB's optimization variable.
Thus, the IB's BA_β operator in decoder coordinates (of Section <ref>) may be considered as an operator on Δ[Δ[𝒴]].
Next, consider the RD problem defined by p_X and d_IB (<ref>) on the continuous reproduction alphabet Δ[𝒴], as in <cit.>.
This defines on Δ[Δ[𝒴]] also the BA operator BA_β^RD for RD.
Now that both BA operators are considered on an equal footing, we note the following.
First, while BA_β^RD iterates over[ To see this, plug the IB distortion measure d_IB (<ref>) into the Equations (<ref>) defining BA_β^RD.] the IB Equations (<ref>) and (<ref>), its IB variant BA_β iterates also over the decoder Equation (<ref>).
The latter Equation is a necessary condition[ For comparison, only p(|) = ∑_x p(|, ) p(|) holds for an arbitrary triplet (Y, X, X̂) of random variables.] for Y → X →X̂ to be Markov, and so can be understood as enforcement of Markovity.
That is, IB roots are RD roots with an extra constraint.
Second, by <cit.>, reducing Id - BA_β^RD from the continuous reproduction alphabet Δ[𝒴] to a root of finite support renders it non-singular, under mild assumptions.
Due to the similarity between these operators, and since reduction in the IB is a natural extension of reduction in RD, this suggests that reducing Id - BA_β (<ref>) from Δ[Δ[𝒴]] to a root's effective cardinality should also render it non-singular.
In line with the discussion of Section <ref> on reduction, we therefore state the following,
The Jacobian matrix I - D_log p(|), log p() BA_β at (<ref>) of the IB operator (<ref>) in log-decoder coordinates is non-singular at reduced IB roots so long that it is well-defined, except perhaps at points of bifurcation.
The intuition behind this conjecture stems from analyticity, as follows.
The IB operator Id - BA_β (<ref>) is real-analytic, since each of the equations defining it in the BA-IB Algorithm <ref> (its cluster-marginal through encoder equations) is real-analytic in its variables.
For a root x_0 of a real-analytic operator F, one might expect that, in general, (i) no roots other than x_0 exist in its vicinity and that (ii) D_x F|_x_0 has no kernel.
That is, unless the operator is degenerate at x_0 in some manner or x_0 is a bifurcation.
To see this, recall <cit.> that a real-valued function F_i in x∈R^n is real analytic in some open neighborhood of x_0 if it is a power series in x = (x_1, …, x_n), within some[ While a strictly positive radius of convergence is needed here, we omit these details for clarity.] radius of convergence.
For every practical purpose, one may replace F_i by a polynomial in (x_1, …, x_n) when x is close enough to the base-point x_0, by truncating the power series.
Viewed this way, a root of an operator F(x) = (F_1(x), …, F_n(x)) is nothing but a solution of n polynomial equations in n variables. However, a square polynomial system typically has only isolated roots, which is (i).
This is best understood in terms of Bézout's Theorem; see <cit.> for example.
For (ii), a vector v is in D_x F precisely when it is orthogonal to each of the gradients ∇ F_i.
However, ∇ F_i is the vector of the first-order monomial coefficients of x_1, …, x_n in F_i.
In a general position, these n coefficient vectors ∇ F_1, …, ∇ F_n are linearly-independent, and so v must vanish as claimed.
If F is degenerate such that F_i = F_j for particular i ≠ j, for example, then both points fail, of course.
cf., also <cit.> for (i) and (ii).
This intuition accords with the comments of <cit.> on RD: “usually, each point on the rate distortion curve [...] is achieved by a unique conditional probability assignment. However, if the distortion matrix exhibits certain form of symmetry and degeneracy, there can be many choices of [a minimizer]”.
Indeed, the fact that the dynamics underlying the RD curve (<ref>) are piecewise real-analytic (under mild assumptions), <cit.>, can be similarly understood to stem from the analyticity of the RD operator Id - BA_β^RD.
Subject to Conjecture <ref>, a Jacobian eigenvalue of the IB operator (<ref>) must vanish gradually[ Note that BA's Jacobian (<ref>) is continuous in the root at which it is evaluated.] as one approaches a bifurcation, causing the critical slowing down of the BA-IB Algorithm <ref>, <cit.>.
When an IB root traverses a bifurcation in which its effective cardinality decreases, then it is not reduced anymore.
One can then handle the bifurcation by reducing the root anew.
To ensure proper handling by the bifurcation's type, we consider the latter closely in Sections <ref> and <ref> below.
In a nutshell, following the IB's ODE (<ref>) along with proper handling of its bifurcations is the idea behind our Algorithm <ref> (Section <ref>), for approximating the IB numerically.
Conjecture <ref> is compatible with our numerical experience. However, we leave its proof to future work.
To that end, one could examine closely the smaller matrix S (<ref>) (of Lemma <ref> in Section <ref>) for example.
However, even if Conjecture <ref> were violated, then one could detect that easily by inspecting the Jacobian's eigenvalues.
Conjecture <ref> also implies that IB roots are locally unique outside of bifurcations when presented in a reduced form. Non-uniqueness of optimal roots is detectable by inspecting the Jacobian's eigenvalues — see Corollary <ref> in Section <ref> and the discussion following it.
cf., <cit.> for the respective discussion in RD.
With that, most of the results in Sections <ref> and <ref> below do not depend on the validity of Conjecture <ref>.
§.§ Continuous IB bifurcations: cluster-vanishing and cluster-merging
Following <cit.>, we consider the evolution of IB roots which are a continuous function of β.
By representing an IB root in its reduced form (Section <ref>), it is evident that there are two types of continuous IB bifurcations.
We provide a practical heuristic (Algorithm <ref>) for identifying and handling such bifurcations.
The discussion here is complemented by Subsection <ref> below, which considers the case where continuity does not hold.
The evolution of an IB root in β obeys the ODE (<ref>) so long that it can be written as a differentiable function in β, as in Theorem <ref>.
Considering the root in decoder coordinates, this amounts to an evolution of a T-tuple of points r_ in Δ[𝒴] and their weights p(). These typically traverse the simplex smoothly as the constraint β is varied, as demonstrated in Figure <ref>.
We now consider two cases where this evolution does not obey the ODE (<ref>), due to violating differentiability.
Consider an optimal IB root in its reduced form (see Section <ref>). Namely, consider the reduced form of a root that achieves the IB curve (<ref>).
Suppose that its decoders r_ and weights p() are continuous in β.
Then, a qualitative change in the root can occur only if either (i) two (or more) of its clusters collide or (ii) the marginal probability p() of a cluster vanishes. In either case, the minimal number of points in Δ[𝒴] required to represent the root decreases. That is, its effective cardinality decreases[ A qualitative change where a root's effective cardinality increases is obtained by merely reversing the dynamics in β of (i) or (ii) above.].
We call the first a cluster-merging bifurcation and the second a cluster-vanishing bifurcation. Or, continuous bifurcations collectively.
Both types were observed already at <cit.> in the related setting of RD problems with a continuous source alphabet.
At a continuous bifurcation, IB roots of distinct effective cardinalities collide and merge into one, as discussed in Section <ref> below.
Specifically, one root achieves the minimal value of the IB Lagrangian and so is stable, while the other root is sub-optimal.
Thus, continuous IB bifurcations are pitchfork bifurcations[
Strictly speaking, several copies of the root of larger effective cardinality collide at a continuous bifurcation.
When two clusters r ≠ r' collide in (i), the root itself is invariant to interchanging their coordinates after the collision but not before it, breaking the IB's first structural symmetry discussed in Subsection <ref>.
Interchanging the coordinates of r and r' (and their marginals) before the collision yields two distinct copies of essentially the same root.
For (ii), the IB's functionals (e.g., its X and Y-information) do not depend on the coordinates ( r(y) )_y of a vanished cluster r, rendering these redundant. cf., <cit.>.
Before the cluster r vanishes, there is one copy of the root for each index , with r placed at its coordinates.
Considered in reduced coordinates, these coincide to a single copy after the cluster vanishes.
This breaks the second structural symmetry.
], e.g., <cit.>, in accordance with <cit.>.
Even though the optimal root is continuous in β (by assumption), its differentiability is lost at the point of bifurcation, as noted after Theorem <ref> and demonstrated in Figure <ref>.
Among the two, cluster-vanishing bifurcations are more frequent in practice than cluster-merging. This can be understood by considering cluster trajectories in the simplex. In a general position, one might expect clusters to seldom be at the same “time” and place (β and r∈Δ[𝒴]).
With that, we note that cluster vanishing bifurcations cannot be detected directly by standard local techniques (i.e., considering the derivative's kernel directions at the bifurcation), whether considering the Hessian of the IB's loss function as in <cit.> or the Jacobian of the IB operator (<ref>) as here.
The technical reason for this is as follows, while the root cause underlying it is discussed in Subsection <ref> (after Proposition <ref>).
Observe that the I(Y; X̂) and I(X; X̂) functionals do not depend on the coordinates ( r(y) )_y of clusters r of zero mass.
Thus, the directions corresponding to these coordinates are always in the kernel regardless of whether evaluating at a bifurcation or not, and so cannot be used to detect a bifurcation[ The direction corresponding to a cluster's marginal is useless when one does not know which cluster ( r(y) )_y to pick.].
Indeed, with its dynamics in β reversed, “a new symbol grows continuously from zero mass” in a cluster-vanishing bifurcation, as <cit.> comments in a related setting.
It is then not clear a priori which point in Δ[𝒴] should be chosen for the new symbol, rendering the perturbative condition at Equation (<ref>) difficult to test.
In accordance with this, <cit.> offers a perturbative condition for detecting arbitrary IB bifurcations, while <cit.> offers a condition for detecting cluster-merging bifurcations by analyzing cluster stability.
However, both conditions are equivalent (Appendix <ref>), and so must detect the same type of bifurcations.
In contrast, a cluster-splitting (or merging) bifurcation is straightforward to detect because the stability of a particular cluster is a property of the root itself — see Appendix <ref> and the references therein for details.
One may wonder whether bifurcations exist in the IB for the same reason as they do in RD.
As in the IB, RD problems typically have many sub-optimal curves, <cit.>.
While bifurcations in the IB stem from restricting the effective cardinality[ At least for continuous IB bifurcations.], <cit.>, in RD they stem from the various restrictions that a reproduction alphabet has. e.g., a reproduction alphabet 𝒳̂ := {r_1, r_2, r_3} of an RD problem may be restricted to the distinct subsets {r_1, r_2} and {r_2, r_3}, usually yielding distinct sub-optimal RD curves; cf., <cit.>.
In contrast to RD, the IB's distortion d_IB (<ref>) defined by a root's clusters is determined a posteriori by the problem's solution rather than a priori by the problem's definition.
As a result, both reasons for the existence of bifurcations coincide.
To see this, consider the IB as an RD problem whose reproduction symbols 𝒳̂ are a finite subset of Δ[𝒴] which is allowed to vary (e.g., as if defining the tangent RD problem anew at each β).
Distinct restrictions of a reproduction alphabet 𝒳̂ can be forced to agree by altering the symbols themselves, so long as they are of the same size.
For example, restricting the set {r_1, r_2, r_3} of reproduction symbols to {r_1, r_2} is the same as restricting it to {r_2, r_3} instead, and then replacing r_3 with r_1∈Δ[𝒴] in the restricted problem[ This is not to be confused with cluster permutations, which change the order among clusters but do not alter the symbols themselves.].
The dynamical point of view above, considering an IB root as weighted points traversing Δ[𝒴], offers a straightforward way to identify and handle continuous IB bifurcations. It is spelled out as our root-reduction Algorithm <ref>.
For cluster-vanishing bifurcations, one can set a small threshold value δ_1 > 0 and consider a cluster x̂ as vanished if p(x̂) < δ_1 (step algo:root-reductionalgo:root-reduction:nearly-vanished-cluster), as in <cit.>.
Similarly, for cluster-merging bifurcations, one can set a small threshold δ_2 > 0 and consider the clusters x̂ ≠ x̂' to have merged if ‖ r_x̂ - r_x̂' ‖_∞ < δ_2 (step algo:root-reductionalgo:root-reduction:distance-between-points).
A vanished cluster is then erased (and merged clusters replaced by one), resulting in an approximate IB root on fewer clusters.
This not only identifies continuous IB bifurcations but also handles them, since the output of the root-reduction Algorithm <ref> is a numerically-reduced root, represented in its effective cardinality.
To re-gain accuracy, we shall later invoke the BA-IB Algorithm <ref> on the reduced root, as part of Algorithm <ref> (in Section <ref>).
We note that one should pick the thresholds δ_1 and δ_2 small enough to avoid false detections, and yet not too small so as to cause mis-detections. Mis-detections are handled later, using the heuristic Algorithm <ref> (Subsection <ref>).
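To make the thresholding concrete, the following is a minimal numpy sketch of such a reduction step; the function name, the array layout (clusters as rows of the decoder matrix), and the choice to keep the first of two merging clusters' coordinates are our own, and the actual root-reduction algorithm may differ in such details.

```python
import numpy as np

def reduce_root(decoders, weights, delta1=1e-4, delta2=1e-3):
    """Numerically reduce an IB root given as (decoders, weights).

    decoders: (T, |Y|) array, row t is the cluster r_t in the simplex Delta[Y].
    weights:  (T,) array, the cluster marginal p(t).
    delta1:   mass threshold below which a cluster counts as vanished.
    delta2:   sup-norm threshold below which two clusters count as merged.
    """
    # Erase nearly-vanished clusters.
    keep = weights >= delta1
    decoders, weights = decoders[keep], weights[keep]

    # Greedily merge clusters that are delta2-close in sup-norm,
    # accumulating their mass onto a single representative.
    merged_dec, merged_w = [], []
    for r, w in zip(decoders, weights):
        for k, r_k in enumerate(merged_dec):
            if np.max(np.abs(r - r_k)) < delta2:
                merged_w[k] += w        # add the mass to the representative ...
                break                   # ... and drop the duplicate coordinates
        else:
            merged_dec.append(r.copy())
            merged_w.append(float(w))

    weights = np.array(merged_w)
    return np.array(merged_dec), weights / weights.sum()
```

The reduced pair returned this way is what BA-IB would then be invoked on, to re-gain accuracy.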
Using the root-reduction Algorithm <ref> allows one to stop early in the vicinity of a bifurcation when following the path of an IB root.
As mentioned in Section <ref>, early-stopping restricts the computational difficulty of root-tracking, <cit.>.
Further, reducing the root before invoking BA-IB (Algorithm <ref>) allows to avoid BA's critical slowing down, <cit.>.
This is because reduction removes the nearly-vanished Jacobian eigenvalues that pertain to the nearly-vanished (or nearly-merged) cluster(s), which are the cause of BA's critical slowing down.
cf., Proposition <ref> (Section <ref>) and the discussion around it.
See also <cit.> for the respective behavior in RD.
Finally, we comment that the root-reduction Algorithm <ref> can also be implemented in the other two coordinate systems of Section <ref>.
§.§ Discontinuous IB bifurcations and linear curve segments
In the previous Subsection <ref>, we considered continuous IB bifurcations — namely, when the clusters r_x̂ ∈Δ[𝒴] and weights p(x̂) of an IB root are continuous functions of β.
By exploiting the intimate relations between the IB and RD (Section <ref>), we now consider IB bifurcations where these cannot be written as a continuous function of β.
Although in our experience discontinuous bifurcations are infrequent in practice, the theory they evoke has several subtle yet important consequences, with practical implications for computing IB roots (in Section <ref>).
We start with several examples before diving into the theory.
The examples of discontinuous IB bifurcations of which we are aware can be understood in RD context as follows.
Consider the IB as an RD problem on the continuous reproduction alphabet Δ[𝒴], with IB roots parametrized by points in Δ[Δ[𝒴]] (see Section <ref>).
In RD, the existence of linear curve segments is well known, <cit.>. See, for example, Figure 2.7.6 in the latter and its reproduction <cit.>.
<cit.> offers an explanation of these in terms of a support-switching bifurcation.
Namely, a bifurcation where two RD roots of distinct supports exchange optimality at a particular multiplier value β_c.
Both roots evolve smoothly in β, while only exchanging optimality at the bifurcation.
At β_c itself, every convex combination of these two roots is also an RD root.
This is manifested by a linear segment of slope -β_c in the RD curve (see panels E and F in <cit.>).
In particular, the optimal RD root cannot be written as a continuous function of β.
The sudden emergence of an entire segment of RD roots is best understood in light of Bézout's Theorem; cf., point (i) in the discussion following Conjecture <ref>.
For one example of linear curve segments in the IB, say that a matrix M decomposes if it can be written (non-trivially) as a block matrix by permuting its rows or columns. In light of the above, we have the following refinement of <cit.>,
The IB curve (<ref>) has a linear segment at β = 1 if and only if the problem's definition p_Y|X p_X decomposes.
Figure <ref> demonstrates a simple decomposable problem, exhibiting a support-switching bifurcation at β_c = 1 between the trivial and non-trivial roots there.
For other examples, a symmetric binary erasure channel can also be seen to exhibit a support-switching bifurcation, <cit.>, which is manifested by a linear segment of slope 1/β_c, for β_c > 1.
Similarly, also for Hamming channels with a uniform input, <cit.> and <cit.>, whose problem definition p_Y|X p_X is of full support.
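For a nonnegative joint distribution, being permutable to a (non-trivial) block structure is the same as its support being disconnected when viewed as a bipartite graph on 𝒳 and 𝒴. The following sketch checks this directly; the function name, the tolerance argument, and the toy example are ours.

```python
import numpy as np

def decomposes(p_xy, tol=0.0):
    """Return True iff the support of p(x, y), viewed as a bipartite graph
    on X and Y, is disconnected (i.e., the problem definition decomposes)."""
    X, Y = p_xy.shape
    support = p_xy > tol
    seen_x, seen_y = {0}, set()           # breadth-first search from x = 0
    frontier_x, frontier_y = {0}, set()
    while frontier_x or frontier_y:
        frontier_y = {y for x in frontier_x
                      for y in np.flatnonzero(support[x]) if y not in seen_y}
        seen_y |= frontier_y
        frontier_x = {x for y in frontier_y
                      for x in np.flatnonzero(support[:, y]) if x not in seen_x}
        seen_x |= frontier_x
    return len(seen_x) < X or len(seen_y) < Y

# A block-structured joint distribution decomposes, so by the theorem above
# its IB curve has a linear segment at beta = 1; a fully-supported one does not.
p_block = np.array([[0.25, 0.25, 0.0],
                    [0.0,  0.0,  0.5]])
print(decomposes(p_block))               # True
print(decomposes(np.full((2, 3), 1/6)))  # False
```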
We argue that in the IB, support-switching bifurcations exhibit the same behavior as in RD.
That is, two roots that evolve smoothly in β and exchange optimality at the bifurcation.
While the sequel can justify this in general, there is a simple way to see this in practice.
Namely, following the two roots of Figure <ref> through the bifurcation[ That is, following the trivial root of Figure <ref> from left to right and the non-trivial one from right to left, through the bifurcation at β_c = 1 there.] by using BA-IB with deterministic annealing, <cit.>.
As deterministic annealing usually follows a solution branch continuously, this immediately reveals either root at the region where it is sub-optimal (not displayed).
A support-switching bifurcation evidently has similar characteristics[ Strictly speaking, the two roots do not intersect as in a classical transcritical, and so a support-switching bifurcation should perhaps be classified as an imperfect transcritical.] to a transcritical bifurcation; e.g., <cit.>.
This extends the results of <cit.>, who conclude that[Theorem 5 in <cit.> says that the bifurcations detected by their Theorem 3 are degenerate rather than transcritical. It is then concluded “that the bifurcation guaranteed by Theorem 3 is [generically] pitchfork-like”.] bifurcations in the IB “are only of pitchfork type”.
To see the reason for this discrepancy, note that they employ the mathematical machinery in <cit.> of bifurcations under symmetry.
As pitchfork bifurcations are “common in physical problems that have a symmetry”, <cit.>, then detecting only pitchforks by using the above machinery might not come as a surprise.
Both <cit.> and its sequel <cit.> consider the IB's symmetry to interchanging the coordinates of identical clusters[ e.g., <cit.>.].
However, this is a structural symmetry of the IB which stems from representing IB roots by finite-dimensional vectors (Subsection <ref>), and is broken at continuous IB bifurcations (Subsection <ref>).
On the other hand, discontinuous IB bifurcations need not break this symmetry, as can be seen by inspecting the roots of Figure <ref> closely[ The trivial solution to the left of β_c (Figure <ref>, left panel) may be given a degenerate bi-clustered representation, which is fully supported on p_Y but has a second cluster r of zero-mass. One may choose r ≠ p_Y, in which case the root possesses no symmetry to interchanging cluster coordinates, at either side of the bifurcation there.].
A few convexity results from rate-distortion theory are needed to consider discontinuous bifurcations in general.
These have subtle practical implications, which are of interest in their own right.
The set of conditional probability distributions p(x̂|x) which achieve a point (D, R(D)) on the rate-distortion curve (<ref>) is convex.
Viewing the IB as an RD problem as in <cit.> immediately yields an identical result for the IB,
The set of IB encoders that achieve a point (I_X, I_Y) on the IB curve (<ref>) is convex.
The proof is provided below for completeness.
We note that a version of Corollary <ref> in inverse-encoder coordinates can also be synthesized from the ideas leading to Theorem 2.3 in <cit.>.
Consider a finite IB problem p_Y|X p_X as an RD problem (d_IB, p_X) on the continuous reproduction alphabet Δ[𝒴], as defined by (<ref>) in Section <ref>.
As noted above, its encoders (or test channels) are conditional probability distributions p(r|x), with r ∈Δ[𝒴], supported on finitely many coordinates (r, x).
Let p_1(r|x) and p_2(r|x) be encoders achieving a point (I_X, I_Y) on the IB curve (<ref>).
By <cit.>, these may be considered as test channels achieving the curve (<ref>) of the RD problem (d_IB, p_X).
The reproduction symbols r∈Δ[𝒴] supporting[ Defined here by supp p(r|x) := supp p(r), where p(r) is defined from p(r|x) by marginalization, as in (<ref>). ] a convex combination p_λ := λ· p_1 + (1 - λ) · p_2, 0 ≤λ≤ 1, are contained in the supports of p_1 and p_2, supp p_λ ⊆ supp p_1 ∪ supp p_2, and so p_λ is finitely-supported.
Although Theorem <ref> assumes that the reproduction alphabet is finite, one can readily see that its proof works just as well when the distributions involved are finitely-supported.
Thus, by Theorem <ref>, p_λ achieves the above point on the RD curve (<ref>).
Since this point (I_X, I_Y) is on the IB curve (<ref>), then p_λ is an optimal IB root.
The RD curve (<ref>) is the envelope of lines of slope -β and intercept min_p(x̂|x)( I(X; X̂) + β E[d(X, X̂)] ) along the R-axis, e.g., <cit.>.
Thus, Theorem <ref> can be generalized by considering the achieving distributions that pertain to a particular slope value rather than to a particular curve point (D, R(D)) — see <cit.>.
For any β > 0 value, the set of distributions achieving the RD curve (<ref>) that correspond to β is convex.
As with Corollary <ref>, we immediately have an identical result for roots achieving the IB curve (<ref>),
For any β > 0 value, the set of optimal IB encoders that correspond to β is convex.
See also <cit.> for an argument in inverse-encoder coordinates.
In particular, note the duality technique leading to (b) and (c) in Theorem 4.1 there.
This duality boils down to describing a compact convex set in the plane by its lines of support, as in the observation leading to Theorem <ref>.
Commensurate with the IB being a special case of RD, Corollary <ref> can also be proven directly from the IB's definitions in direct-encoder terms, <cit.>.
Note that the requirement that the IB root indeed achieves the curve is necessary. Otherwise one could take convex combinations with the trivial IB root[ One can verify directly that this satisfies the IB Equations (<ref>)-(<ref>) for every β > 0.] p(r|x) = δ_r, p_Y. This yields absurd since the trivial root contains no information on either X or Y.
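For concreteness, the verification mentioned in the footnote amounts to the following: with a single cluster x̂_0 of full mass, p(x̂_0|x) = 1 = p(x̂_0), the decoder equation gives
p(y|x̂_0) = ∑_x p(y|x) p(x|x̂_0) = ∑_x p(y|x) p(x) = p_Y(y) ,
while the encoder equation gives
p(x̂_0|x) = p(x̂_0) exp{ -β D_KL[p(y|x) || p(y|x̂_0)] } / Z(x, β) = 1 ,
since the single exponential term is exactly Z(x, β). Hence the trivial root is a fixed point for every β > 0, while I(X; X̂) = I(Y; X̂) = 0.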
As in <cit.>, the convexity of optimal IB roots (Corollary <ref>) has several important consequences.
For one, unlike the (local) bifurcations we have considered so far, bifurcation theory also has global bifurcations.
These are “bifurcations that cannot be detected by looking at small neighborhoods of fixed points”, <cit.>.
From convexity, it immediately follows that
There are no global bifurcations in finite IB problems.
Indeed, if at a given β value there exists more than one optimal root, then the Jacobian of the IB operator Id - BA_β (<ref>) must have a kernel vector pointing along the line connecting these optimal roots, by Corollary <ref>.
With that comes an important practical caveat.
Corollaries <ref> and <ref> hold for the IB when parametrized by points in Δ[Δ[𝒴]].
However, the above kernel vector (which exists due to convexity) may not be detectable if an IB root is improperly represented by a finite-dimensional vector.
For example, consider the bifurcation in Figure <ref>, where a line segment at β_c connects the trivial (single-clustered) root to the 2-clustered root.
Obviously, the bifurcation there cannot be detected by the Jacobian of the IB operator (<ref>) when it is computed on T = 1 clusters (Jacobian of order 1· (|𝒴| + 1)).
Indeed, the root of effective cardinality two cannot be represented on a single cluster, and so the line segment connecting it to the trivial root does not exist in a 1-clustered representation.
This is demonstrated in Figure <ref>, which compares Jacobian eigenvalues at reduced representations to those at 2-clustered representations.
The same reasoning gives the following necessary condition,
A bifurcation at β_c in a finite IB problem which involves roots of effective cardinalities T_1 and T_2 is detectable by a non-zero vector in the kernel of (I - D_log p(y|x̂), log p(x̂) BA_β_c) only if the latter is evaluated at a representation on at least max{T_1, T_2} clusters.
Indeed, suppose that T_1 ≨ T_2 (the conclusion is trivial if T_1 = T_2).
By definition, a root of effective cardinality T_2 does not exist in representations with less than T_2 clusters. Thus, there is no bifurcation in a T-clustered representation if T < T_2, and so there is then nothing to detect.
As a special case of this argument, note that Conjecture <ref> (Section <ref>) implies that the Jacobian is non-singular in a T_1-clustered representation of the T_1-clustered root (namely, at its reduced representation).
With that, we have observed numerically that the eigenvalues of D_log p(y|x̂), log p(x̂) BA_β do not depend on the representation's dimension if computed on strictly[ Computing on one cluster more than the effective cardinality makes sense considering <cit.> or <cit.>, for example.] more clusters than the effective cardinality. Rather, only the eigenvalues' multiplicities vary by dimension.
We omit practical caveats on exchanging between the coordinate systems of Section <ref> for brevity.
To complete the discussion on continuous bifurcations (Section <ref>), we argue that cluster-merging and cluster-vanishing are indeed bifurcations, where IB roots of distinct effective cardinalities collide and merge into one.
We offer two ways to see this.
First, using the inverse-encoder[ Inverse-encoder and decoder coordinates are interchangeable here. Indeed, as noted in Section <ref>, the inverse-encoders of an IB root (with no zero-mass clusters) are in bijective correspondence with its decoders. ] formulation of the IB in <cit.>, one can consider an optimization problem in which the number of IB clusters is constrained explicitly.
By the arguments therein, the constrained problem has an optimal root (due to compactness), which achieves the optimal curve of the constrained problem. The latter curve must be sub-optimal if fewer clusters are allowed than needed to achieve the IB curve (<ref>).
Thus, whenever the effective cardinality of an optimal root (in the un-constrained problem) decreases, it must therefore collide with an optimal root of the constrained IB problem, by Corollary <ref>.
This accords with <cit.>, which describes IB bifurcations as a separation of optimal and sub-optimal IB curves according to their effective cardinalities.
Second, consider the reduced form of an IB root at the point of a continuous bifurcation.
Since its effective cardinality decreases there strictly, say from T_2 to T_1, then the root can be represented on T_1 clusters at the bifurcation itself.
However, the Jacobian of the IB operator (<ref>) in log-decoder coordinates is non-singular when represented on T_1 clusters, as noted after Proposition <ref>.
Thus, by the Implicit Function's Theorem, there is a unique IB root on T_1 clusters through this point. It exists at both sides of the bifurcation (above and below the critical point).
When represented on T_2 clusters, however, the latter intersects at the bifurcation with the root of effective cardinality T_2, and so the two roots collide and merge there to one.
This argument is identical to <cit.>, which proves that distinct RD roots collide and merge at cluster-vanishing bifurcations in RD.
The arguments above imply that cluster vanishing bifurcations cannot be detected directly by considering kernel directions of the IB operator (<ref>) at the bifurcation, as argued in Subsection <ref>.
Indeed, consider a continuous bifurcation, where roots p_1 and p_2 of respective effective cardinalities T_1 < T_2 intersect.
These are paths in Δ[Δ[𝒴]] that coincide at the bifurcation itself, p_1(β_c) = p_2(β_c), and so in particular are of the same effective cardinality T_1 there.
Asking whether a bifurcation is detectable amounts to considering the evaluation of D(Id - BA_β) at a finite-dimensional representation (or “projection”) of p.
The Jacobian D(Id - BA_β) of the IB operator (<ref>) is non-singular at β_c when evaluated on T_1-clusters in log-decoder coordinates, as noted after Proposition <ref>.
We argue that evaluating it on representations with more clusters T ≩ T_1 does not allow to detect the bifurcation, not even if T ≥ T_2.
See Appendix <ref> for a formal argument.
Intuitively, this follows because picking a degenerate representation amounts to duplicating clusters of the reduced representation or adding clusters of zero mass; cf., reduction in Subsection <ref>.
Introducing degeneracies to a reduced root adds no information about the problem at hand.
Due to the above, cluster-vanishing bifurcations cannot be detected by following a root p_1 of effective cardinality T_1 through a “cluster growing” bifurcation, but only by following a root p_2 with T_2 > T_1 till its collision with p_1.
As discussed after Conjecture <ref> (Subsection <ref>), the Jacobian of Id - BA_β in reduced log-decoder coordinates can then be used to indicate an upcoming collision of p_2 with p_1, in addition to the root-reduction Algorithm <ref>.
The exact same arguments as above apply also to cluster-merging bifurcations. However, as noted in Subsection <ref> (and Appendix <ref>), the stability of a particular IB cluster is a property of the root itself. Thus, these are detectable by standard local techniques at the point of bifurcation.
Unlike continuous bifurcations, discontinuous bifurcations are inherently detectable due to the line segment in Δ[Δ[𝒴]] connecting the roots at the bifurcation (Corollary <ref>), so long as the IB root is represented on sufficiently many clusters (Proposition <ref>) — see Figure <ref>.
These results make sense, considering that cluster-vanishing bifurcations seem to appear more frequently in practice than other types.
Intuitively, branching from a suboptimal root p_1 to an optimal one p_2 is harder than the other way around, just as learning new relevant information is harder than discarding it.
Cases where both directions are equally difficult are the exception, as one might expect.
This is consistent with the later discussion in Subsection <ref> on the stability of optimal IB roots (Appendix <ref>).
When following the path of a reduced IB root (as in Section <ref>), one would like to ensure that its bifurcations are indeed detectable by BA's Jacobian.
Due to the caveats above, it is computationally preferable[ While one can compute BA's Jacobian on more clusters than necessary, that increases computational costs and may introduce numerical subtleties. ] to follow its path as the effective cardinality decreases rather than increases.
As a result, we take only negative step sizes Δβ < 0, since the effective cardinality of an optimal IB root cannot decrease with β.
To see this, first note that the IB curve I_Y(I_X) (<ref>) is concave, and so its slope 1/β cannot increase with I_X. That is, β cannot decrease with I_X.
Second, note that allowing more clusters cannot decrease the X-information I(X; X̂) = H(X) - ∑_x̂ p(x̂) H( p(x|x̂) ) achieved by the IB's optimization variables. Indeed, a T-clustered variable ( p(x|x̂), p(x̂) ) (not necessarily a root) can always be considered as (T+1)-clustered, by adding a cluster of zero mass.
cf., the construction at <cit.>.
Thus, the effective cardinality of an optimal root cannot decrease as the constraint I_X on the X-information is relaxed.
When both points are combined, the effective cardinality cannot decrease with β, as argued.
In contrast to the IB, we note that the behavior of RD problems is more complicated, e.g., <cit.>, since the distortion of each reproduction symbol is fixed a priori.
Finally, we proceed with the argument of Section <ref> for the case of discontinuous IB bifurcations.
That is, consider the reduced form of an optimal IB root, and suppose that either its decoders or its weights (or both) cannot be written as a continuous function of β in the vicinity of β_c.
Write r^+_x̂ and r^-_x̂ for its distinct decoders as β→β_c^+ and β→β_c^-, respectively.
Similarly, write p^+(x̂) and p^-(x̂) for its non-zero weights.
Consider the tangent RD problem on the reproduction alphabet 𝒳̂ := {r^+_x̂}_x̂ ∪ {r^-_x̂}_x̂ ⊂Δ[𝒴], as in Section <ref>; cf., <cit.>, upon which this argument is based.
By construction, the IB coincides with its tangent RD problem at the two points (r^+_x̂, p^+(x̂)) and (r^-_x̂, p^-(x̂)).
Since both points achieve the optimal curve at the same slope value 1/β_c, then the linear segment of distributions connecting these points is also optimal, by Theorem <ref>.
Alternatively, one could apply Corollary <ref> directly to the IB problem.
Either way, there exists a line segment of optimal IB roots, which pertain to the given slope value.
In summary,
Let a finite IB problem have a discontinuous bifurcation at β_c > 0.
Then, its IB curve (<ref>) has a linear segment of slope 1/β_c.
Unless the decoder sets {r^+_x̂}_x̂ and {r^-_x̂}_x̂ are identical, then this is a support-switching bifurcation, as in Figure <ref>; cf., <cit.>.
A priori, the IB roots (r^+_x̂, p^+(x̂)) and (r^-_x̂, p^-(x̂)) may achieve the same point in the information plane, in which case the linear curve segment is of length zero. However, we are unaware of such examples.
Yet, even if such bifurcations exist, they would be detectable by the Jacobian of BA-IB (when represented on enough clusters), subject to Conjecture <ref>.
§ FIRST-ORDER ROOT-TRACKING FOR THE INFORMATION BOTTLENECK
Gathering the results of Sections <ref> through <ref>, we can now not only follow the evolution of an IB root along the first-order equation (<ref>), but can also identify and handle IB bifurcations.
This is summarized by our First-order Root-Tracking Algorithm <ref> for the IB (IBRT1) in Section <ref>, with some numerical results in Section <ref>.
Section <ref> discusses the basic properties of IBRT1, and mainly the surprising quality of approximations of the IB curve (<ref>) that it produces, as seen in Figure <ref>.
We focus on continuous bifurcations (Section <ref>), as in our experience, these are far more frequent than discontinuous ones and are straightforward to handle. cf., Section <ref>.
§.§ The IBRT1 Algorithm <ref>
To assist the reader, we first present a simplified version in Algorithm <ref>, with edge-cases handled at Algorithm <ref> — clarifications follow.
These two combined form our IBRT1 Algorithm <ref>, specified below.
We now elaborate on the main steps of the Simplified First-order Root-Tracking for the IB (Algorithm <ref>), which follows Root-Tracking for RD, Algorithm 3 in <cit.>.
Its purpose is to follow the path of a given IB root p_β_0(|) in a finite IB problem.
The initial condition p_β_0(|) is required to be reduced and IB-optimal.
Its optimality is needed below to ensure that the path traced by the algorithm is indeed optimal.
The step-size Δβ is negative, for reasons explained in Section <ref> (Proposition <ref> ff.).
The cluster-mass and cluster-merging thresholds are as in the root-reduction Algorithm <ref> (Section <ref>).
Denote p̃ (line <ref> of Algorithm <ref>) for the distributions generated from an encoder (cf., Equation (<ref>) in Section <ref>).
Algorithm <ref> iterates over grid points p̃, with each while iteration generating the reduced form of the next grid point, as follows.
On line <ref>, evaluate the IB ODE (<ref>) at the current root p̃, solving the linear equations numerically.
By Conjecture <ref> (Section <ref>), the IB ODE has a unique numerical solution v if p̃ is a reduced root and not a bifurcation.
Lines <ref> and <ref> approximate the root at the next grid point at β + Δβ, by exponentiating Euler-method's step (<ref>) (Section <ref>).
Normalization is enforced on line <ref>, since it is assumed throughout.
Off-grid points can be generated by repeating lines <ref> through <ref> for intermediate Δβ values if desired.
The approximate root at β + Δβ is reduced on line <ref>, by invoking the root-reduction Algorithm <ref> (Section <ref>).
Note that Algorithm <ref> returns its input root unmodified unless reducing it numerically.
If reduced, then the root is a vector of a lower dimension — either a cluster mass p() has nearly vanished or distinct clusters have nearly merged.
To re-gain accuracy, we invoke (on line <ref>) the Blahut-Arimoto Algorithm <ref> for the IB till convergence, on the encoder defined at line <ref> by the reduced root.
Although BA-IB is invoked near a bifurcation, this does not incur a hefty computational cost due to its critical slowing-down, <cit.> — see comments at the bottom of Section <ref>.
In contrast, invoking BA (on line <ref>) before reducing (on line <ref>) would have inflicted such a cost on BA-IB, due to the nearby bifurcation.
Finally, a single BA-IB iteration in decoder coordinates is invoked on the approximate root (line <ref>), whether reduced earlier or not.
This enforces Markovity while improving the algorithm's order (see Section <ref>, and Figure <ref> in particular).
Algorithm <ref> continues this way (line <ref>) until the approximate solution is trivial (single-clustered), or β is non-positive.
In the IB, the trivial solution is always optimal for tradeoff values β < 1. However, here β plays the role of the ODE's independent variable instead.
Thus, we allow Algorithm <ref> to continue beyond β = 1, so long as[ The condition β > |Δβ| is required on line <ref>, to ensure that the target β value of the next grid point is non-negative. ] β > 0 (which we assume throughout).
This shall be useful for overshooting — see below.
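To summarize the loop above in code, the following is a compact, self-contained numpy sketch. All names are ours; the Jacobian of BA_β and its β-derivative are approximated by central finite differences in log-decoder coordinates rather than computed from the appendix's closed-form tensors; only cluster-vanishing reduction is performed inline; and the singularity heuristic described below is omitted. The initial (dec, w) is assumed to be a reduced IB root at the initial β, e.g., obtained by running BA-IB to convergence there.

```python
import numpy as np

def kl_rows(P, Q):
    # Pairwise KL divergences D_KL[P_i || Q_j]; P: (n, Y), Q: (m, Y), strictly positive.
    return (P * np.log(P)).sum(1)[:, None] - P @ np.log(Q).T

def ba_step(dec, w, p_x, p_y_x, beta):
    # One BA-IB iteration in decoder coordinates: (p(y|t), p(t)) -> updated pair.
    logits = np.log(w)[None, :] - beta * kl_rows(p_y_x, dec)   # (X, T)
    enc = np.exp(logits - logits.max(1, keepdims=True))
    enc /= enc.sum(1, keepdims=True)                           # encoder p(t|x)
    w_new = enc.T @ p_x                                        # marginal p(t)
    inv = enc * p_x[:, None] / w_new[None, :]                  # inverse encoder p(x|t)
    return inv.T @ p_y_x, w_new                                # new decoder p(y|t), p(t)

def pack(dec, w):
    return np.concatenate([np.log(dec).ravel(), np.log(w)])    # log-decoder coordinates

def unpack(v, T, Y):
    return np.exp(v[:T * Y].reshape(T, Y)), np.exp(v[T * Y:])

def implicit_derivative(dec, w, p_x, p_y_x, beta, eps=1e-6):
    # Solve (I - D BA_beta) v = dBA_beta/dbeta, both sides by central differences.
    T, Y = dec.shape
    x0 = pack(dec, w)
    f = lambda v, b: pack(*ba_step(*unpack(v, T, Y), p_x, p_y_x, b))
    J = np.empty((x0.size, x0.size))
    for k in range(x0.size):
        e = np.zeros_like(x0); e[k] = eps
        J[:, k] = (f(x0 + e, beta) - f(x0 - e, beta)) / (2 * eps)
    dBA_dbeta = (f(x0, beta + eps) - f(x0, beta - eps)) / (2 * eps)
    return np.linalg.solve(np.eye(x0.size) - J, dBA_dbeta)

def ibrt1_simplified(dec, w, p_x, p_y_x, beta, d_beta=-0.05, delta1=1e-4, n_ba=200):
    path = [(beta, dec, w)]
    while beta + d_beta > 0 and len(w) > 1:
        v = implicit_derivative(dec, w, p_x, p_y_x, beta)       # implicit log-derivatives
        dec, w = unpack(pack(dec, w) + d_beta * v, *dec.shape)  # exponentiated Euler step
        dec /= dec.sum(1, keepdims=True); w /= w.sum()          # enforce normalization
        beta += d_beta
        if (w < delta1).any():                                  # cluster-vanishing reduction
            dec, w = dec[w >= delta1], w[w >= delta1]
            w = w / w.sum()
            for _ in range(n_ba):                               # BA-IB to re-gain accuracy
                dec, w = ba_step(dec, w, p_x, p_y_x, beta)
        dec, w = ba_step(dec, w, p_x, p_y_x, beta)              # the single added BA-IB iteration
        path.append((beta, dec, w))
    return path
```

Near a bifurcation the linear solve becomes ill-conditioned; that is exactly where the heuristic described next would take over.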
With that, there are caveats in Algorithm <ref>, which stem from passing too far or close to a bifurcation.
For one, suppose that the error accumulated from the true solution is too large for a bifurcation to be detected.
The approximations generated by the algorithm will then overshoot the bifurcation. Namely, proceeding with more clusters than needed until the conditions for reduction are met later on (see Section <ref> below), as demonstrated by the two sparse grids in Figure <ref> (Section <ref>).
For another, suppose that the current grid point p̃ is too close to a bifurcation.
This might happen due to a variety of numerical reasons — e.g., thresholds δ_1, δ_2 too small, or due to the particular grid layout.
The coefficients matrix[ That is, the Jacobian D ( Id - BA_β) of the IB operator (<ref>) in log-decoder coordinates.] I - D_log p(y|x̂), log p(x̂) BA_β of the IB ODE (<ref>) would then be ill-conditioned (cf., Conjecture <ref> ff. in Section <ref>), typically resulting in very large implicit numerical derivatives v (on line <ref>).
Any inaccuracy[ e.g., due to the accumulated approximation error or due to the error caused by computing implicit derivatives in the vicinity of a bifurcation (see Figure <ref> top, in Section <ref>).] in v might then send the next grid point astray, derailing the algorithm from there on.
Indeed, the derivatives dx/dβ = - (D_x F)^-1 D_β F defined by[ Note that D_x F here is always non-singular outside bifurcations, due to Conjecture <ref> and the use of reduced coordinates. ] the implicit ODE (<ref>) are in general unbounded near a bifurcation of F.
This can be seen in Figure <ref> (Section <ref>) for example, where the derivatives “explode” at the bifurcation's vicinity.
See also <cit.> on the computational difficulty incurred by a bifurcation.
While overshooting a bifurcation is not a significant concern for our purposes (see Section <ref>), passing too close to one is.
The latter is important, especially when the step size |Δβ| is small.
While decreasing |Δβ| generally improves the error of Euler's method, it also makes it easier for the approximations to come close to a bifurcation, thus potentially worsening the approximation dramatically if it derails.
This motivates one to consider how singularities of the IB ODE (<ref>) should be handled.
Next, we elaborate on our heuristic for handling singularities of the IB ODE (<ref>), brought as Algorithm <ref>.
The inputs of this heuristic are defined as in Algorithm <ref>.
It starts with the assumption that the coefficients matrix I - D_log p(y|x̂), log p(x̂) BA_β of the IB ODE (<ref>) is nearly-singular at the current grid point p̃ due to[ While a priori the Jacobian D_log p(y|x̂), log p(x̂) (Id - BA_β) may be singular also due to other reasons, by Conjecture <ref> it is non-singular at the approximations generated so far since they are assumed to be in their reduced form. cf., Section <ref>. ] a nearby bifurcation.
As a result, the implicit derivatives v at p̃ are not to be used directly to extrapolate the next grid point, as explained above.
Instead, we use them to identify the two[ While this can be refined to handle more than two fast-moving clusters at once, that is not expected to be necessary for typical bifurcations. ] fastest moving clusters, on line <ref> of Algorithm <ref>.
These are replaced by a single cluster (lines <ref> through <ref>), resulting in an approximate root on one fewer cluster.
To re-gain accuracy, the BA-IB Algorithm <ref> is then invoked (at line <ref>) on the encoder generated (at line <ref>) from the latter root, thereby generating the next grid point.
If the fast-moving clusters have merged (in the true solution) by the following grid point, then the output of Algorithm <ref> will be an IB-optimal root if its input grid point is so.
Namely, the branch followed by the algorithm remains an optimal one.
Otherwise, if these clusters merge shortly after the next grid point, then Algorithm <ref> yields a sub-optimal branch.
However, optimality is re-gained shortly afterward since the sub-optimal branch collides and merges with the optimal one in continuous IB bifurcations (Section <ref>).
Figure <ref> below demonstrates Algorithm <ref>.
cf., the similar heuristic <cit.> in root-tracking for RD, which may also lose optimality near a bifurcation and re-gain it shortly after.
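A minimal sketch of the merging step is given below; the function name is ours, we take a cluster's speed to be the sup-norm of its implicit log-decoder derivatives, and we replace the two fastest clusters by their mass-weighted average, one reasonable choice among several.

```python
import numpy as np

def handle_singularity(dec, w, v_log_dec):
    """Merge the two fastest-moving clusters when the IB ODE is nearly singular.

    dec: (T, Y) decoders, w: (T,) weights (T >= 2),
    v_log_dec: (T, Y) implicit derivatives d log p(y|t) / d beta at the current grid point.
    """
    speed = np.max(np.abs(v_log_dec), axis=1)      # per-cluster speed in log-coordinates
    a, b = np.sort(np.argsort(speed)[-2:])         # indices of the two fastest clusters, a < b
    dec, w = dec.copy(), w.copy()
    dec[a] = (w[a] * dec[a] + w[b] * dec[b]) / (w[a] + w[b])   # mass-weighted merge into a
    w[a] += w[b]
    dec, w = np.delete(dec, b, axis=0), np.delete(w, b)        # drop cluster b
    return dec, w / w.sum()
```

As in the heuristic above, BA-IB would then be run to convergence on the encoder generated from the returned pair, producing the next grid point.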
The heuristic Algorithm <ref> is motivated by cluster-merging bifurcations.
In these, the implicit derivatives are very large only[ Note that cluster masses barely change in the vicinity of a cluster-merging, till the point of bifurcation itself. ] at the coordinates d log p(y|x̂)/dβ of the points colliding in Δ[𝒴].
While intended for cluster-merging bifurcations, this heuristic works nicely in practice also for cluster-vanishing ones.
To see why, note that one can always add a cluster of zero-mass to an IB root without affecting the root's essential properties, regardless of its coordinates in Δ[𝒴]; cf., Section <ref> on reduction in the IB.
Therefore, a numerical algorithm may, in principle, do anything with the coordinates p(y|x̂) ∈Δ[𝒴] of a nearly-vanished cluster x̂, p(x̂) ≃ 0, without affecting the approximation's quality too much.
Thus, for numerical purposes, one may treat a cluster-vanishing bifurcation as a cluster-merging one.
Conversely, in a cluster-merging bifurcation, a numerical algorithm may, in principle, zero the mass of one cluster while adding it to the remaining cluster. Again, without affecting the approximation's quality too much.
To conclude, for numerical purposes, cluster-vanishing is very similar to cluster-merging.
A variety of treatments between these extremities may be possible by a numerical algorithm.
Empirically, we have observed that our ODE-based algorithm treats both as cluster-merging bifurcations.
To our understanding, this is because our algorithm operates in decoder coordinates, unlike the BA-IB Algorithm <ref>, for example, which operates in encoder coordinates.
Finally, we combine the simplified root-tracking Algorithm <ref> with the heuristic Algorithm <ref> for handling singularities, yielding our IBRT1 Algorithm <ref>.
It follows the lines of simplified Algorithm <ref>, except that after solving for the implicit derivatives on line <ref>, we test the IB ODE (<ref>) for singularity.
To that end, we propose to use the matrix S (<ref>) (from Lemma <ref> in Section <ref>), since its order T· |𝒴| is smaller than the order T·( |𝒴| + 1 ) of the ODE's coefficients matrix.
This might make it computationally cheaper to test for singularity (on lines <ref> and <ref>).
Our heuristic Algorithm <ref> is invoked (on line <ref>) if the ODE (<ref>) is found to be nearly-singular, otherwise proceeding as in Algorithm <ref>.
§.§ Numerical results for the IBRT1 Algorithm <ref>
To demonstrate the IBRT1 Algorithm <ref>, we present the numerical results used to approximate the IB curve in Figure <ref> (Section <ref>) — see Section <ref> below on the approximation quality and the algorithm's basic properties.
This example was chosen both because it has an analytical solution (Appendix <ref>) and because it allows one to get a good idea of the bifurcation handling added (in Section <ref>) on top of the modified Euler method (from Section <ref>).
The source code used to generate these results is provided for readers who wish to examine the details (bottom of Section <ref>).
We discuss the numerical examples of this Section in light of the explanations provided in the previous Section <ref>.
The error of the IBRT1 Algorithm <ref> generally improves as the step-size |Δβ| becomes smaller, as expected.
The single BA-IB iteration added to Euler's method (in Section <ref>) typically allows one to achieve the same error by using much fewer grid points, thus lowering computational costs.
For example, the two denser grids in Figure <ref> require about an order of magnitude fewer points to achieve the same error compared to Euler's method for the IB; this can be seen from Figure <ref> (Section <ref>).
In sparse grids, the approximations often pass too far away from a bifurcation for the root-reduction Algorithm <ref> to detect it.
When overshooting it, the conditions for numerical reduction are generally met later on, as discussed in Section <ref> below.
Decreasing |Δβ| further often leads the approximations too close to a bifurcation, as can be seen in the densest grid of Figure <ref>.
The implicit derivatives are typically very large in the proximity of a bifurcation, while being the least accurate there (see Section <ref>).
As these might send subsequent grid points off-track, the heuristic Algorithm <ref> is invoked to handle the nearby singularity (see inset of Figure <ref>).
As noted earlier, the computational difficulty in tracking IB roots (or root-tracking in general) stems from the presence of a bifurcation, manifested here by large approximation errors in its vicinity.
While the algorithm's error peaks at the bifurcation, it typically decreases afterward when overshooting, as seen in Figure <ref>. See Section <ref> for details.
§.§ Basic properties of the IBRT1 Algorithm <ref> and why does it work
Apart from presenting the basic properties of the IBRT1 Algorithm <ref>, the primary purpose of this section is to understand the following: why does it approximate the problem's true IB curve (<ref>) so well, despite its apparent errors in approximating the IB roots?
While shown here only in Figures <ref> and <ref> (Sections <ref> and <ref>), this behavior is consistent in the few numerical examples that we have tested. We offer an explanation why this may be true in general.
To understand why the IBRT1 Algorithm <ref> approximates the true IB curve (<ref>) so well, we first explain why overshooting is not a significant concern, as noted earlier in Section <ref>.
To that end, consider the implicit ODE (<ref>)
dxdβ = - (D_x F)^-1 D_β F ,
from Section <ref>.
So long as D_x F and D_β F on its right-hand side are well-defined, it defines a vector field on the entire phase space of admissible x values, at least when D_x F is non-singular.
That is, even for x's which are not roots (<ref>) of F.
Ignoring several technicalities, the IB ODE (<ref>) therefore defines a vector field also outside IB roots.
Indeed, due to Conjecture <ref>, the Jacobian of the IB operator Id - BA_β (<ref>) is non-singular in the vicinity of a reduced root[ By Equation (<ref>) (in Section <ref>), D_log p(y|x̂), log p(x̂) BA_β is continuous in the distributions defining it, under mild assumptions. cf., Lemma <ref> in Appendix <ref>. Thus, so are its eigenvalues. ].
Now, suppose that p_β is an optimal IB root, and consider a point p' ≠p_β in its vicinity.
An argument based on a strong notion of Lyapunov stability (in Appendix <ref>) shows that p' flows along the IB's vector field towards p_β in regions that do not contain a bifurcation, though only if flowing in decreasing β as done by our IBRT Algorithm <ref>.
An approximation p' would then be “pulled” towards the true root.
Stability in this direction of β is very reasonable, considering that p_β follows a path of decreasingly informative representations as β decreases.
Indeed, all the paths to oblivion lead to one place — the trivial solution, whose representation in reduced coordinates is unique.
As a result, a numerical approximation p' would gradually settle in the vicinity of the true root p_β as seen in Figures <ref> and <ref>, so long as p_β does not change much and the step-size |Δβ| is small enough.
While this explanation obviously breaks down near a bifurcation, it does suggest that the approximation error should decrease when overshooting one (see Section <ref>), once the true reduced root has settled down.
In a sense, overshooting is similar to being in the right place but at the wrong time.
The above suggests that the IBRT1 Algorithm <ref> should generally approximate the true IB curve (<ref>) well, despite its errors in approximating IB roots.
To see this, note that while β^-1 is the slope of the optimal curve (<ref>) of the IB, <cit.>, for the IB ODE (<ref>) it is merely a “time-like” independent variable.
When solving for the optimal curve (<ref>), one is not interested in an optimal root or at its β value, but rather at its image ( I(X; X̂), I(Y; X̂) ) in the information plane.
As a result, achieving the optimal roots at the wrong β values still yields the true IB curve (<ref>), as required.
This is the reason that the true curve (<ref>) is achieved in Figure <ref> (Section <ref>) even on sparse grids, despite the apparent approximation errors in Figures <ref> and <ref> (Section <ref>).
With that, one should expect the approximate IB curve produced by the IBRT1 Algorithm <ref> to be of lesser quality when there are more than two possible labels y.
To see this, note that the space Δ[𝒴] traversed by the approximate clusters is not one-dimensional then, and so it is possible to maneuver around clusters of an optimal IB root.
Next, we briefly discuss the basic properties of the IBRT1 Algorithm <ref>.
Its computational complexity is determined by the complexity of a single grid point.
The latter is readily seen to be dominated by the complexity O(T^2 · |𝒴|^2 ·( |𝒳| + T· |𝒴| ) ) of computing the coefficients matrix of the IB ODE (<ref>) and of solving it numerically (on line <ref>).
To that, one should add the complexity of the BA-IB Algorithm <ref> each time a root is reduced.
However, the critical slowing down of BA-IB <cit.> is avoided since we reduce the root before invoking BA-IB (see Section <ref>).
The complexity is only linear in |𝒳| thanks to the choice of decoder coordinates. Had we chosen one of the other coordinate systems in Section <ref>, then solving the ODE would have been cubic in |𝒳| rather than linear (see there).
The computational difficulty in following IB roots stems from the existence of bifurcations (Section <ref>), as it generally is with following an operator's root, <cit.>.
As noted in Section <ref>, convergence guarantees can be derived for Euler's method for the IB away from bifurcations, in terms of the step-size |Δβ|, in a manner similar to <cit.> for RD.
These imply similar guarantees for the IBRT1 Algorithm <ref>, as adding a single BA-IB iteration in our modified Euler method improves its order (see there).
These details are omitted for brevity, however.
For a numerical method of order d > 0 (see Section <ref>) with a fixed step-size |Δβ| and a fixed computational cost per grid point, the cost-to-error tradeoff is given by
error ∝ cost^-d ,
as in <cit.>, when |Δβ| is small enough. See <cit.> for example.
Figure 3.4 in <cit.> demonstrates for RD that methods of higher order, such as the fixed-order Taylor methods employed there, achieve a better tradeoff, as expected.
Since computing implicit derivatives of higher orders requires the calculation of many more derivative tensors of Id - BA_β (<ref>) than done here, <cit.>, we have used only first-order derivatives for simplicity.
However, while the vanilla Euler method for the IB is of order d = 1, the discussion in Section <ref> (and Figure <ref> in particular) suggests that the order d of the modified Euler method used by the IBRT1 Algorithm <ref> is nearly twice that.
cf., Section <ref>.
With that, we comment on the behavior of the IBRT1 Algorithm <ref> at discontinuous bifurcations.
Consider the problem in Figure <ref> (Section <ref>), for example.
When Algorithm <ref> follows the optimal 2-clustered root there, the Jacobian's singularity (in Figure <ref>) is detectable by it because the step size Δβ is negative.
cf., the discussion in Section <ref> there.
Indeed, due to Conjecture <ref> ff., the algorithm can detect discontinuous bifurcations in general.
Whether a particular discontinuous bifurcation is detected by Algorithm <ref> in practice depends on the details[ e.g., on the threshold value δ_3 for detecting singularity and on the precise grid points layout.], of course, as with continuous bifurcations.
Indeed, the details may or may not cause a particular example to be detected by the conditions on lines <ref> and <ref> (in Algorithm <ref>).
If missed, Algorithm <ref> will continue to follow the 2-clustered root in Figure <ref> to the left of the bifurcation, where it is sub-optimal, just as BA-IB with reverse deterministic annealing would.
Once detected, though, one may wonder whether the heuristic Algorithm <ref> works well also for discontinuous bifurcations.
The example of Figure <ref> has just one single-clustered root to the left of the bifurcation.
Thus, the BA-IB Algorithm <ref> invoked on line <ref> (of Algorithm <ref>) must converge to it.
However, there may generally be more than a single root of smaller effective cardinality to the left of the bifurcation, to which BA-IB may converge.
The handling of discontinuous bifurcations is left to future work.
Such handling is expected to be easier in the IB than in RD, since, in contrast to RD, the effective cardinality of an optimal IB root cannot decrease with β (bottom of Section <ref>).
See <cit.> for counter-examples in RD.
This makes detecting discontinuous bifurcations easier in the IB and is also expected to assist with their handling.
We list the assumptions used along the way for reference.
These are needed to guarantee the optimality of the IBRT1 Algorithm <ref> at the limit of small step-sizes |Δβ |, except at a bifurcation's vicinity.
In Section <ref>, it was assumed without loss of generality[ Otherwise, one may remove symbols with p_X(x) = 0 from the source alphabet.] that the input distribution p_X is of full support, p(x) > 0 for every x.
The requirement p(y|x) > 0 was added in Section <ref> as a sufficient technical condition for exchanging to logarithmic coordinates (Lemma <ref> in Appendix <ref>), and could perhaps be alleviated in alternative derivations.
Together, these are equivalent to having a never-vanishing IB problem definition, p(y|x) p(x) > 0 for every x and y.
The algorithm's initial condition is assumed to be a reduced and optimal IB root, as reduction is needed by Conjecture <ref> in Section <ref>.
Finally, the given IB problem is assumed to have only continuous bifurcations, except perhaps for its first (leftmost) one.
While these assumptions are sufficient to guarantee optimality, we note that milder conditions might do in a particular problem.
§ CONCLUDING REMARKS
The IB is intimately related to several problems in adjacent fields, <cit.>, including coding problems, inference, and representation learning.
Despite its importance, there are surprisingly few techniques to solve it numerically.
This work attempts to fill this gap by exploiting the dynamics of IB roots.
The end result of this work is a new numerical algorithm for the IB, which follows the path of a root along the IB's optimal tradeoff curve (<ref>).
A combination of several novelties was required to achieve this goal.
First, the dynamics underlying the IB-curve (<ref>) obeys an ODE, <cit.>.
Following the discussion around Conjecture <ref> (in Section <ref>), the existence of such a dynamics stems from the analyticity of the IB's fixed-point Equations (<ref>)-(<ref>), thus typically resulting in piece-wise smooth dynamics of IB roots.
Several natural choices of a coordinate system for the IB were considered, both for computational purposes and to facilitate a clean treatment of IB bifurcations below.
The IB's ODE (<ref>) was derived anew in appropriate coordinates, allowing an efficient computation of implicit derivatives at an IB root.
Combining BA-IB with Euler's method yields a modified numerical method whose order is higher than either.
Second, one needs to understand where the IB ODE (<ref>) is not obeyed, thereby violating the differentiability of an optimal root with respect to β.
To that end, one not only needs to detect IB bifurcations but also needs to identify their type in order to handle them properly.
Unlike standard techniques, our approach is to remove redundant coordinates, following root-tracking for RD, <cit.>; cf., Section <ref>.
To achieve a reduction, we follow the arguably better definition of the IB in <cit.>.
Namely, a finite IB problem is an RD problem on the continuous reproduction alphabet Δ[𝒴].
Therefore, the IB may be intuitively considered as a method of lossy compression of the information on Y embedded in X.
Viewing a finite IB problem as an infinite RD problem suggests a particular choice of a coordinate system for the IB, which enables reduction in the IB; this extends reduction in RD, <cit.>.
Furthermore, this point of view highlights subtleties due to computing finite-dimensional representations of IB roots.
To our understanding, these subtleties hindered the understanding of IB bifurcations throughout the years.
Combining the above allows us to translate an understanding of IB bifurcations to a new numerical algorithm for the IB (the IBRT1 Algorithm <ref>).
There are several directions that one could consider to improve our algorithm.
Near bifurcations, one could improve its handling of discontinuous bifurcations.
While we used implicit derivatives only of the first order for simplicity, higher-order derivatives generally offer a better cost-to-error tradeoff when away from bifurcations.
See also <cit.> on possible improvements for following an operator's root.
PART:
*Appendix
§ THE BA-IB OPERATOR IN DECODER COORDINATES
For reference, we give an explicit expression for the BA-IB operator in decoder coordinates, defined in Section <ref>.
Denote by p_Y|X̂ and p_X̂ the vectors whose coordinates are ( p(y|x̂), p(x̂) ).
We denote the evaluation of BA_β at this point by BA_β[p_Y|X̂, p_X̂].
Its output is again a decoder-marginal pair, whose coordinates are denoted respectively BA_β[p_Y|X̂, p_X̂](y|x̂) and BA_β[p_Y|X̂, p_X̂](x̂).
Explicitly, BA_β in decoder coordinates is given by,
BA_β[p_Y|X̂, p_X̂](y|x̂) :=
1/BA_β[p_Y|X̂, p_X̂](x̂) ·∑_x p(y|x) p(x̂) p(x)/Z(x, β)exp{ -β D_KL[p(y|x) || p(y|x̂)] } and
BA_β[p_Y|X̂, p_X̂](x̂) :=
∑_x p(x̂) p(x) /Z(x, β)exp{ -β D_KL[p(y|x) || p(y|x̂)] } ,
where Z(x, β) is defined in terms of p(y|x̂) and p(x̂) as in the IB's encoder Equation (<ref>) (Section <ref>).
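In numpy terms, writing Z(x, β) out explicitly as the encoder's normalizer, the two formulas above read as follows; array names and shapes are ours, and a log-sum-exp trick would be advisable for large β.

```python
import numpy as np

def ba_decoder_coords(dec, w, p_x, p_y_x, beta):
    """Evaluate BA_beta at a decoder-marginal pair.

    dec: (T, Y) rows p(y|xh);  w: (T,) marginal p(xh);
    p_x: (X,) input marginal;  p_y_x: (X, Y) rows p(y|x);  all strictly positive.
    """
    # D_KL[p(y|x) || p(y|xh)] for every pair (x, xh): shape (X, T).
    kl = (p_y_x * np.log(p_y_x)).sum(1)[:, None] - p_y_x @ np.log(dec).T
    weights = w[None, :] * np.exp(-beta * kl)         # p(xh) e^{-beta KL}, shape (X, T)
    Z = weights.sum(1, keepdims=True)                 # the normalizer Z(x, beta)
    enc = weights / Z                                 # the encoder p(xh|x) of the formulas
    new_w = enc.T @ p_x                               # BA_beta[...](xh)
    new_dec = (p_y_x.T @ (enc * p_x[:, None])).T / new_w[:, None]   # BA_beta[...](y|xh)
    return new_dec, new_w
```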
The following lemma is handy when exchanging to logarithmic coordinates in Section <ref>.
Let p(y|x) p(x) define a finite IB problem, such that p(y|x) > 0 for every x and y.
Let p(y|x̂) be the decoder of an IB root, and such that p(x̂) > 0.
Then p(y|x̂) > 0 for every y.
This follows immediately from the IB's decoder Equation (<ref>): p(x|x̂) is a well-defined normalized conditional probability distribution whenever p(x̂) > 0, and so p(y|x̂) is a convex combination of the strictly positive p(y|x).
§ THE FIRST-ORDER DERIVATIVE TENSORS OF BLAHUT-ARIMOTO FOR THE IB
We calculate the first-order derivative tensors of the Blahut-Arimoto operator BA_β in log-decoder coordinates (see Sections <ref> and <ref>).
Namely, its Jacobian matrix D_log p(|), log p() BA_β, and the vector D_β BA_β of its partial derivatives with respect to β.
cf., Appendix <ref> for explicit formulae of BA_β in decoder coordinates.
While these are “just” differentiations, many subtleties are involved in getting the math right.
For example, one needs to correctly identify the inputs and outputs of BA_β, when considered as an operator on log-decoder coordinates.
For another, one must take special care as to which variable depends on which, and especially on which it does not depend, as multiple variables are involved.
Above all, these calculations require a deep understanding of the chain rule.
With that, a common caveat in such calculations is that the BA_β operator (and the equations defining it) should be differentiated before they are evaluated.
While this is obvious for real functions, where f'(3) stands for the derivative function of f(x) evaluated at x=3, for the BA_β operator, this might get obfuscated by the myriad of variables and variable-dependencies of which it is comprised.
Although calculating the derivative of BA_β (at an arbitrary point) first and only then evaluating at a fixed point might appear as a mere technical necessity, it is required by this work. For example, when considering the vector field defined by the IB operator (<ref>) at Section <ref>.
cf., <cit.>, for the derivative tensors of Blahut's algorithm <cit.> for RD, of arbitrary order.
The subtleties involved in these differentiations are discussed in Appendix <ref>, with the bulk of the calculations carried out in <ref>.
The latter are gathered and simplified in Appendix <ref> to obtain the Jacobian matrix D_log p(|), log p() BA_β, and in Appendix <ref> to obtain the partial-derivatives vector D_β BA_β.
The results provided here naturally depend on the choice of coordinate system.
To compare results between log-decoder and log-encoder coordinates in Section <ref> (e.g., in Figure <ref>), we derive in Appendix <ref> the coordinate-exchange Jacobians between these coordinate systems.
§.§ Calculation setups and partial derivatives of unnamed functions
We explain the mathematical subtleties relevant to the sequel.
As we are interested in the derivatives of the Blahut-Arimoto Algorithm <ref> for the IB (in Section <ref>), we shall follow its notation.
Namely, distributions are subscripted i or i+1 by the algorithm's iteration number.
A subscript i is usually considered an input distribution, and a subscript i+1 is usually considered an output distribution.
e.g., p_i() or p_i+1(|).
These need not be IB roots but rather are arbitrary distributions.
On the other hand, a subscript β denotes a distribution of an IB root at a tradeoff value β, as in p_β(|) for a root's decoders.
To avoid subtleties due to zero-mass clusters, we usually assume p_i(x̂) ≠ 0 in the sequel, for any x̂.
cf., Sections <ref> and <ref> on root-reduction in the IB.
It is important to distinguish which variables are dependent and which are independent in a particular calculation.
e.g., in Appendix <ref>.
Since this task is easier for a single real variable (as opposed to distributions, for example), we consider simplifications to the real case.
Note that each of the equations algo:BA-IBeq:IB-BA-cluster_marginal through algo:BA-IBeq:IB-BA-new-direct-enc defining the BA-IB Algorithm <ref> yields a new distribution in terms of already-specified ones.
These define unnamed functions, whose variables and values are probability distributions. For example, one could have formally defined p_i(x|x̂) in algo:BA-IBeq:IB-BA-bayes-for-computing-inverse-enc by the function
ℱ[ p_i(x̂|x), p_i(x̂) ](x, x̂) :=
p_i(x̂|x) p(x)/p_i(x̂) ,
where p_i(x̂|x) and p_i(x̂) are the variables of ℱ, and its output is a conditional probability distribution, with x conditioned upon x̂.
As the input and representation alphabets 𝒳 and 𝒳̂ are finite, N := |𝒳| and T := |𝒳̂|, the arguments p_i(x̂|x), p_i(x̂) and values p_i(x|x̂) of ℱ (<ref>) are merely real vectors.
Thus, enumerating the variables x_1, …, x_N and x̂_1, …, x̂_T allows to spell-out (<ref>) by its coordinates,
ℱ[ p_i(x̂_1|x_1), p_i(x̂_1|x_2), …, p_i(x̂_1|x_N), …, p_i(x̂_T|x_N), p_i(x̂_1), …, p_i(x̂_T) ](x, x̂) :=
p_i(x̂|x) p(x)/p_i(x̂) .
While (<ref>) is too cumbersome to work with, it does highlight that ℱ is merely a vector of N· T real-valued coordinate functions, in T + N· T real variables.
This allows us to use partial derivatives rather than their infinite-dimensional counterparts (namely, variational derivatives), as in
∂ℱ[ p_i(x̂|x), p_i(x̂) ]/∂ p_i(x̂_j|x_k) :=
lim_h→ 0ℱ[p_i(x̂_1|x_1), …, p_i(x̂_j|x_k) + h, …, p_i(x̂_T) ] - ℱ[…, p_i(x̂_j|x_k), …]/h .
This is the derivative of ℱ (<ref>) with respect to a particular (j, k)-entry of its argument, by definition.
However, to maintain a concise notation, we shall carry on with un-named function definitions, writing ∂ p_i(x|x̂)/∂ p_i(x̂_j|x_k) for the partial derivative of (<ref>) rather than its explicit form (<ref>).
If disoriented, the reader is encouraged to return to the definitions (<ref>).
We often exchange variables implicitly to logarithmic coordinates, as in Section <ref>.
For example, ∂ℱ[ p_i(x̂|x), p_i(x̂) ]/∂log p_i(x̂_i|x_j) is to be understood as exchanging variables to u_i(x̂, x) := log p_i(x̂|x), with 𝒢[u_i(x̂,x), u_i(x̂) ] := ℱ[ exp u_i(x̂,x), exp u_i(x̂) ] now differentiated with respect to its variables u_i(x̂,x) and u_i(x̂),
∂ℱ[ p_i(x̂|x), p_i(x̂) ]/∂log p_i(x̂_i|x_j) = ∂ℱ[ exp u_i(x̂,x), exp u_i(x̂) ]/∂ u_i(x̂,x) =:
∂𝒢[u_i(x̂,x), u_i(x̂) ]/∂ u_i(x̂,x)
The output of ℱ may similarly be exchanged to logarithmic coordinates, as in logℱ[ exp u_i(x̂,x), exp u_i(x̂) ].
To proceed, carefully note the dependencies between the various variables in a BA-IB iteration, at algo:BA-IBeq:IB-BA-cluster_marginal through algo:BA-IBeq:IB-BA-new-direct-enc. These are summarized compactly by the following diagram,
… ⟶ p_i(x̂|x) ⟶ p_i(x̂) ⟶ p_i(x|x̂) ⟶ p_i(y|x̂) ⟶ p_i+1(x̂|x) ⟶ …
together with the edges p_i(x̂|x) → p_i(x|x̂), p_i(x̂) → p_i+1(x̂|x), p_i(x̂) → Z_i(x, β), p_i(y|x̂) → Z_i(x, β) and Z_i(x, β) → p_i+1(x̂|x), with the nodes ordered
by their order of appearance in the BA-IB Algorithm <ref>.
This diagram proceeds to both sides by the iteration number i.
Each node in (<ref>) serves both as a function of the nodes preceding it and as a variable for those succeeding it, and so it is a “function-variable”.
To differentiate along the dependencies graph (<ref>), we shall need the multivariate chain rule
df/dy = ∂ f/∂ y + ∂ f/∂ z · dz/dy ,
for a function f(y, z(y)).
As the dependencies graph (<ref>) involves multiple function-variables, such as z(y), we pause on the definition's subtleties.
The partial derivative of a function g in several variables x_1, …, x_N with respect to its i-th entry is defined by
∂ g/∂ x_i := lim_h→ 0g(x_1, …, x_i + h, …, x_N) - g(x_1, …, x_i, …, x_N)/h .
We emphasize that variables x_1, …, x_i-1, x_i+1, …, x_N other than x_i are fixed when calculating ∂ g/∂ x_i. And so, it makes no difference in (<ref>) whether or not they depend on x_i, as in x_j = x_j(x_i) for j≠ i.
Next, suppose we would like to calculate how changing an input distribution affects some output distribution.
This is relevant in Appendix <ref> for example, when considering how does a change in a coordinate of an input decoder p_i(|) or marginal p_i() affect a particular coordinate of the output decoder or marginal.
For exposition's simplicity, though, suppose that we would like to calculate how a change in the (k_1, k_2) coordinate p_i(x̂_k_1|x_k_2) of an input encoder affects the (j_1, j_2) coordinate p_i+1(x̂_j_1|x_j_2) of the output encoder.
That is, deriving the rightmost node in (<ref>) with respect to a coordinate of the leftmost one,
dlog p_i+1(x̂_j_1|x_j_2)/dlog p_i(x̂_k_1|x_k_2) ,
where we have exchanged to logarithmic coordinates to simplify calculations.
To calculate (<ref>), one needs to apply the multivariate chain rule (<ref>) along all the possible dependencies of the output log p_i+1(x̂_j_1|x_j_2) on the input coordinate log p_i(x̂_k_1|x_k_2).
This amounts to following all the paths in (<ref>) connecting these two nodes, summing the contributions of every possible path.
For example, traversing from the input p_i(x̂_k_1|x_k_2) rightwards at (<ref>) to p_i(x̂), then downwards to Z_i(x, β) and then to the output p_i+1(x̂_j_1|x_j_2) yields the term
∂log p_i(x̂”)/∂log p_i(x̂_k_1| x_k_2)∂log Z_i(x, β)/∂log p_i(x̂”) p_i+1(x̂_j_1|x_j_2)/ Z_i(x, β)
corresponding to this path, at particular x and x̂” coordinates.
To collect the contribution from every intermediate function-variable coordinate, we need to sum the latter over x and x̂”.
Writing down all such paths, one has for (<ref>),
dlog p_i+1(x̂_j_1|x_j_2)/dlog p_i(x̂_k_1 | x_k_2)
=
∂log p_i(x̂”)/∂log p_i(x̂_k_1| x_k_2)·{ p_i+1(x̂_j_1|x_j_2)/ Z_i(x, β)·∂log Z_i(x, β)/∂log p_i(x̂”)
+ ∂log p_i+1(x̂_j_1 | x_j_2)/∂log p_i(x̂”)
+ [ p_i+1(x̂_j_1|x_j_2)/ Z_i(x, β)· Z_i(x, β)/ p_i(y|x̂) + p_i+1(x̂_j_1|x_j_2)/ p_i(y|x̂)] · p_i(y|x̂)/ p_i(x'|x̂')· p_i(x'|x̂')/ p_i(x̂”)}
+ [ p_i+1(x̂_j_1|x_j_2)/ Z_i(x, β)· Z_i(x, β)/ p_i(y|x̂) + p_i+1(x̂_j_1|x_j_2)/ p_i(y|x̂)] · p_i(y|x̂)/ p_i(x'|x̂')· p_i(x'|x̂')/ p_i(x̂_k_1|x_k_2)
Repeated unbound variables are understood to be summed over, as in Einstein's summation convention.
§.§.§ Differentiating along the dependencies graph
Next, we differentiate each edge in (the logarithm of) the dependency graph (<ref>).
These are necessary to evaluate derivatives along dependency paths, that underlie the subsequent sections' calculations.
Equation algo:BA-IBeq:IB-BA-cluster_marginal in the BA-IB Algorithm <ref> defines the cluster marginal in terms of the direct encoder,
p_i(x̂)/ p_i(x̂'|x')algo:BA-IBeq:IB-BA-cluster_marginal=1/p_i(x̂)∑_ p() ∂/ p_i(x̂'|x') p_i(x̂|x)
=
1/p_i(x̂)∑_ p() p_i(x̂|x) p_i(x̂|x)/ p_i(x̂'|x')algo:BA-IBeq:IB-BA-bayes-for-computing-inverse-enc=
p_i(x'|x̂) ·δ_x̂, x̂'
In the first and second equalities we have used the identity ∂ y/∂ x = y · ∂ (log y)/∂ x for the differentiation of a function's logarithm, when y is a function of x.
Following the comments around the definition (<ref>) of a partial derivative, note that algo:BA-IBeq:IB-BA-bayes-for-computing-inverse-enc defines the inverse encoder log p_i(x|x̂) as a function of the variables log p_i(x̂|x) and log p_i(x̂) (and p(), which we ignore under differentiation).
Thus, differentiating this equation with respect to an entry of the variable log p_i(x̂|x) implies that the entries of the other variable log p_i(x̂) are held fixed, and vice versa.
So, for the Bayes rule algo:BA-IBeq:IB-BA-bayes-for-computing-inverse-enc we have
p_i(x_j_1|x̂_j_2)/ p_i(x̂) = ∂/ p_i(x̂)[ log p_i(x̂_j_2|x_j_1) - log p_i(x̂_j_2) ] = -δ_x̂, x̂_j_2
where log p_i(x̂_j_2|x_j_1) at the right-hand side is different from the variable log p_i(x̂) of differentiation, and so its partial derivative vanishes.
Next, differentiating algo:BA-IBeq:IB-BA-bayes-for-computing-inverse-enc with respect to a coordinate of its other variable log p_i(x̂|x),
p_i(x_j_1|x̂_j_2)/ p_i(x̂'|x') =
p_i(x̂_j_2|x_j_1)/ p_i(x̂'|x') - p_i(x̂_j_2)/ p_i(x̂'|x') = δ_x_j_1, x'·δ_x̂_j_2, x̂'
Using again the logarithmic derivative identity ∂ y/∂ x = y · ∂ (log y)/∂ x, by the decoder Equation algo:BA-IBeq:IB-BA-decoder-eq we have
p_i(y|x̂”)/ p_i(x_k_1|x̂_k_2) =
1/p_i(y|x̂”)∑_x”' p(y|x”') ∂/ p_i(x_k_1|x̂_k_2) p_i(x”'|x̂”)
= 1/p_i(y|x̂”)∑_x”' p(y|x”') p_i(x”'|x̂”) δ_x̂_k_2, x̂”·δ_x_k_1, x”'
= δ_x̂_k_2, x̂”·p(y|x_k_1) p_i(x_k_1|x̂”)/p_i(y|x̂”)
Next, consider the KL-divergence term in the definition algo:BA-IBeq:IB-BA-partition-func of the partition function Z_i,
∂/ p_i(y|x̂”) D_KL[p(y|x”) || p_i(y|x̂)]
= -∑_y' p(y'|x”) δ_x̂, x̂”·δ_y, y'∂/ p_i(y|x̂”)log p_i(y'|x̂)
= -δ_x̂, x̂”· p(y|x”)
Since the partition function algo:BA-IBeq:IB-BA-partition-func depends on the decoder p_i(y|x̂) only via the KL-divergence,
∂ Z_i(x”, β)/ p_i(y|x̂”) =
∂/ p_i(y|x̂”)∑_x̂ p_i(x̂) exp{ -β D_KL[p(y|x”) || p_i(y|x̂)] }
=
-β∑_x̂ p_i(x̂) exp{ -β D_KL[p(y|x”) || p_i(y|x̂)] }∂/ p_i(y|x̂”) D_KL[p(y|x”) || p_i(y|x̂)]
(<ref>)=β p_i(x̂”) exp{ -β D_KL[p(y|x”) || p_i(y|x̂”)] } p(y|x”)
algo:BA-IBeq:IB-BA-new-direct-enc=β p_i+1(x̂”|x”) Z_i(x”, β) p(y|x”)
Hence,
Z_i(x”, β)/ p_i(y|x̂”) =
β p_i+1(x̂”|x”) p(y|x”)
For the derivative of the partition function with respect to the marginal p_i(x̂),
∂ Z_i(x,β)/ p_i(x̂')algo:BA-IBeq:IB-BA-partition-func=∂/ p_i(x̂')∑_x̂ p_i(x̂) exp{ -β D_KL[p(|) || p_i(y|x̂)] }
=
∑_x̂ p_i(x̂) exp{ -β D_KL[p(|) || p_i(y|x̂)] } p_i(x̂)/ p_i(x̂')
=
∑_x̂ p_i(x̂) exp{ -β D_KL[p(|) || p_i(y|x̂)] }·δ_x̂, x̂'algo:BA-IBeq:IB-BA-new-direct-enc=
Z_i(x, β) · p_i+1(x̂'|x)
where the second equality follows from the logarithmic derivative identity. Hence,
Z_i(x,β)/ p_i(x̂') = p_i+1(x̂'|x)
Finally, for the encoder Equation algo:BA-IBeq:IB-BA-new-direct-enc,
log p_i+1(x̂'|x') :=
log p_i(x̂') - log Z_i(x',β) - β D_KL[p(y|x') || p_i(y|x̂')]
The first two terms to the right, p_i(x̂) and Z_i(x, β), each take the role of a variable in Equation algo:BA-IBeq:IB-BA-new-direct-enc.
In contrast, we consider the last divergence term as a shorthand for summing over p_i(y|x̂). Thus, the latter is a variable of (<ref>).
With (<ref>), we thus have
p_i+1(x̂'|x')/ p_i(y|x̂”) =
β δ_x̂', x̂”· p(y|x') .
For the other derivatives of the encoder equation algo:BA-IBeq:IB-BA-new-direct-enc,
p_i+1(x̂'|x')/ Z_i(x”, β) =
-∂log Z_i(x', β)/ Z_i(x”, β) = -δ_x', x”
And,
p_i+1(x̂|x)/ p_i(x̂') =
p_i(x̂)/ p_i(x̂') - Z_i(x, β)/ p_i(x̂') -β∂ D_KL[p(|) || p_i(y|x̂)]/ p_i(x̂') =
δ_x̂, x̂'
where the variable p_i(x̂) of Equation algo:BA-IBeq:IB-BA-new-direct-enc differs from the variables Z_i and p_i(y|x̂), on which the crossed-out terms depend.
We summarize the calculations of this subsection in the following diagram:
log p_i(x̂|x) → log p_i(x̂) :  p_i(x'|x̂) δ_x̂, x̂'
log p_i(x̂|x) → log p_i(x|x̂) :  δ_x, x' δ_x̂, x̂'
log p_i(x̂) → log p_i(x|x̂) :  -δ_x̂, x̂'
log p_i(x̂) → log p_i+1(x̂|x) :  δ_x̂, x̂'
log p_i(x̂) → log Z_i(x, β) :  p_i+1(x̂'|x)
log p_i(x|x̂) → log p_i(y|x̂) :  p(y|x') p_i(x'|x̂)/p_i(y|x̂) · δ_x̂', x̂
log p_i(y|x̂) → log p_i+1(x̂|x) :  β δ_x̂, x̂' p(y'|x)
log p_i(y|x̂) → log Z_i(x, β) :  β p_i+1(x̂'|x) p(y'|x)
log Z_i(x, β) → log p_i+1(x̂|x) :  -δ_x, x'
In each line, the value to the right of the colon is the derivative along that edge; the differentiation variable (the arrow's source) carries the primed indices, while the coordinate of the differentiated function (the arrow's end) carries the unprimed ones.
For example, the second line gives the value of ∂ log p_i(x|x̂)/∂ log p_i(x̂'|x').
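As a quick numerical sanity check (ours, not part of the text), the first of these edge derivatives can be verified by a finite difference; note that, as discussed around the definition of the partial derivative, a single coordinate of log p_i(x̂|x) is perturbed as a free variable, without re-normalizing the encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
X, T = 5, 3
p_x = rng.dirichlet(np.ones(X))
enc = rng.dirichlet(np.ones(T), size=X).T            # p_i(xhat|x), shape (T, X)

def log_marginal(log_enc):
    # log p_i(xhat) as a function of the (unconstrained) variables log p_i(xhat|x)
    return np.log(np.exp(log_enc) @ p_x)

log_enc = np.log(enc)
t_p, x_p = 1, 2                                      # perturbed coordinate (xhat', x')
h = 1e-6
pert = log_enc.copy()
pert[t_p, x_p] += h
numeric = (log_marginal(pert) - log_marginal(log_enc)) / h

marg = enc @ p_x
inv_enc = enc * p_x[None, :] / marg[:, None]          # p_i(x|xhat)
analytic = np.zeros(T)
analytic[t_p] = inv_enc[t_p, x_p]                     # p_i(x'|xhat) * delta_{xhat, xhat'}
print(np.max(np.abs(numeric - analytic)))             # small, of order h
```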
§.§ The Jacobian matrix of BA-IB in log-decoder coordinates
By gathering the results of Appendix <ref> and following the lines of <ref>, we calculate the Jacobian matrix (<ref>) (in Section <ref>) of the Blahut-Arimoto operator BA_β in log-decoder coordinates, defined in Section <ref>.
The derivative of BA_β in decoder coordinates boils down to the four quantities:
the effect dlog p_i+1(|)d log p_i(|) that varying a coordinate log p_i(|) of an input cluster has on a coordinate log p_i+1(|) of an output cluster,
the effect dlog p_i+1(|)/d log p_i() that varying an input marginal coordinate log p_i() has on a coordinate log p_i+1(|) of an output cluster, and so forth.
And so, the Jacobian D_log p(|), log p() BA_β is a block matrix,
(
 [ dlog p_i+1(|)/d log p_i(|)        dlog p_i+1(|)/d log p_i() ;
   dlog p_i+1()/d log p_i(|)         dlog p_i+1()/d log p_i() ]
)
Its rows correspond to the output coordinates of BA_β.
We index its upper rows by ∈𝒴 and ∈{1, …, T}, while its lower rows are indexed by alone. Similarly, its columns correspond to the input coordinates of BA_β.
We index its leftmost columns by and , and its rightmost columns by alone.
Each block in (<ref>) is comprised of contributions along all the distinct paths connecting two vertices in the dependencies graph (<ref>).
For example, the lower-left block in (<ref>) is comprised of the contributions along all the paths in (<ref>) connecting p_i(|) to p_i+1().
We now spell out the paths contributing to each block in (<ref>), with repeated dummy indices understood to be summed over.
Afterward, we shall calculate the contributing paths explicitly, carrying out the summations. The upper-left block of (<ref>) is comprised of
dlog p_i+1(|)/d log p_i(|) =
p_i+1(|)/ p_i+1(x_1|x̂_2)·[
p_i+1(x_1|x̂_2)/ p_i+1(x̂_3) p_i+1(x̂_3)/ p_i+1(x̂_4|x_5) +
p_i+1(x_1|x̂_2)/ p_i+1(x̂_4|x_5)]
·[
p_i+1(x̂_4|x_5)/ p_i(|) +
p_i+1(x̂_4|x_5)/ Z_i(x_6, β) Z_i(x_6, β)/ p_i(|)]
This Equation (<ref>) encodes the fours paths connecting the vertex p_i(|) to p_i+1(|) in (<ref>).
When accumulating the contributions in (<ref>), one must carefully sum only over repeated dummy indices that appear in the given term.
e.g., the two paths in (<ref>) which traverse the edge p_i+1(x_1|x̂_2) p_i+1(x̂_4|x_5) (pointing from p_i+1(|x) to p_i+1(x|)) do not involve a summation over x̂_3. In contrast, the two paths involving p_i+1(x_1|x̂_2) p_i+1(x̂_3) p_i+1(x̂_3) p_i+1(x̂_4|x_5) there do entail a summation over x̂_3.
This is relevant for the calculations below, as in (<ref>) for example.
Similarly, for the upper-right block of (<ref>),
dlog p_i+1(|)/d log p_i() =
p_i+1(|)/ p_i+1(x_1|x̂_2)·[
p_i+1(x_1|x̂_2)/ p_i+1(x̂_3) p_i+1(x̂_3)/ p_i+1(x̂_4|x_5) +
p_i+1(x_1|x̂_2)/ p_i+1(x̂_4|x_5)]
·{ p_i+1(x̂_4|x_5)/ p_i() +
p_i+1(x̂_4|x_5)/ p_i(y_7|x̂_8) p_i(y_7|x̂_8)/ p_i(x_9|x̂_10) p_i(x_9|x̂_10)/ p_i().
. +
p_i+1(x̂_4|x_5)/ Z_i(x_6, β)[
Z_i(x_6, β)/ p_i() +
Z_i(x_6, β)/ p_i(y_7|x̂_8) p_i(y_7|x̂_8)/ p_i(x_9|x̂_10) p_i(x_9|x̂_10)/ p_i()]
}
For the lower-left block of (<ref>),
dlog p_i+1()/d log p_i(|) =
p_i+1()/ p_i+1(x̂_1|x_2)[
p_i+1(x̂_1|x_2)/ p_i(|) +
p_i+1(x̂_1|x_2)/ Z_i(x_3, β) Z_i(x_3, β)/ p_i(|)]
Last, for the lower-right block of (<ref>),
dlog p_i+1()/d log p_i()
=
p_i+1()/ p_i+1(x̂_1|x_2)·{ p_i+1(x̂_1|x_2)/ p_i() +
p_i+1(x̂_1|x_2)/ p_i(y_3|x̂_4) p_i(y_3|x̂_4)/ p_i(x_5|x̂_6) p_i(x_5|x̂_6)/ p_i().
. +
p_i+1(x̂_1|x_2)/ Z_i(x_7, β)[
Z_i(x_7, β)/ p_i() +
Z_i(x_7, β)/ p_i(y_3|x̂_4) p_i(y_3|x̂_4)/ p_i(x_5|x̂_6) p_i(x_5|x̂_6)/ p_i()]
}
Next, by using the intermediate results summarized in (<ref>) (Section <ref>), we calculate each of the four blocks of (<ref>) explicitly.
For the upper-left block (<ref>) we have
dlog p_i+1(|)/d log p_i(|) =
p(|x_1) p_i+1(x_1|)/p_i+1(|) δ_, x̂_2·[
( - δ_x̂_2, x̂_3)
p_i+1(x_5|x̂_3) δ_x̂_3, x̂_4 +
δ_x_1, x_5δ_x̂_2, x̂_4]
·[
βδ_x̂_4, p(|x_5) +
( -δ_x_5, x_6 )
β p_i+1(|x_6) p(|x_6)
]
For clarity, we elaborate on each step needed to complete the calculation of the upper-left block (<ref>) while providing only the main steps for the other blocks.
To carry out the summations over the dummy variables x_1, x̂_2, x̂_3, x̂_4, x_5 and x_6 in (<ref>), we carefully sum only over repeated dummy indices, as explained after (<ref>).
We carry out one summation at a time, starting with x̂_2. This yields,
β p(|x_1) p_i+1(x_1|)/p_i+1(|)·[
- δ_, x̂_3 p_i+1(x_5|x̂_3) δ_x̂_3, x̂_4 +
δ_x_1, x_5δ_, x̂_4]
·[
δ_x̂_4, p(|x_5)
-δ_x_5, x_6 p_i+1(|x_6) p(|x_6)
]
=
β·[
- δ_, x̂_3 p_i+1(x_5|x̂_3) δ_x̂_3, x̂_4 +
δ_, x̂_4p(|x_5) p_i+1(x_5|)/p_i+1(|)]
·[
δ_x̂_4, p(|x_5)
-δ_x_5, x_6 p_i+1(|x_6) p(|x_6)
]
=
β· p_i+1(x_5|) [
- δ_, x̂_4 +
δ_, x̂_4p(|x_5) /p_i+1(|)] ·[
δ_x̂_4, p(|x_5)
-δ_x_5, x_6 p_i+1(|x_6) p(|x_6)
]
=
- β· p_i+1(x_5|) [
1 - p(|x_5) /p_i+1(|)] ·[
δ_, p(|x_5)
-δ_x_5, x_6 p_i+1(|x_6) p(|x_6)
]
=
- β· p(|x_5) p_i+1(x_5|) [
1 - p(|x_5) /p_i+1(|)] ·[
δ_,
- p_i+1(|x_5)
]
=
-β∑_ p(|x) p_i+1(x|) ·[
1 -
p(|x) /p_i+1(|)] ·[
δ_,
- p_i+1(|x)
]
In the first equality above we carried out the summation over x_1, in the second over x̂_3, in the third over x̂_4, in the fourth over x_6, and in the fifth over x_5.
To simplify the notation, we replace summations over x with definitions as in Equation (<ref>) (Section <ref>),
C(, ; i)_, := ∑_ p(|x) p(|x) p_i(|x) p_i(x|)
B(, ; i)_ := ∑_ p(|x) p_i(|x) p_i(x|)
= ∑_ C(, ; i)_,
A(, ; i) := ∑_ p_i(|x) p_i(x|) =
∑_ B(, ; i)_
D(; i)_, := 1/p_i(|) ∑_ p(|x) p(|x) p_i(x|) =
1/p_i(|) ∑_ C(, ; i)_,
and note that
∑_, C(, ; i)_, = p_i(|) .
The quantities A, B, and C involve two IB clusters. They are a scalar, a vector, and a matrix, respectively.
The definition of D involves only one IB cluster and coincides with C_Y in <cit.>.
The relations to the right of (<ref>) show that each can be expressed in terms of C(, ; i)_,.
Equation (<ref>) shows that the latter can be rewritten as a right-stochastic matrix, up to trivial manipulations.
As seen below, the Jacobian matrix (<ref>) of a BA-IB step in log-decoder coordinates can be computed in terms of the quantities in (<ref>).
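A sketch of how the quantities A, B, C and D may be evaluated in practice is given below; the array names and index ordering (C has shape (T, T, |𝒴|, |𝒴|), B is (T, T, |𝒴|), A is (T, T), D is (T, |𝒴|, |𝒴|)) are our own reading of the definitions above, with the encoder stored as a (T, |𝒳|) array:

```python
import numpy as np

def abcd_quantities(enc, p_x, p_y_given_x):
    """enc[t, x] = p_i(xhat_t|x);  p_x[x];  p_y_given_x[y, x]."""
    marg = enc @ p_x                                  # p_i(xhat)
    inv_enc = enc * p_x[None, :] / marg[:, None]      # p_i(x|xhat)
    dec = p_y_given_x @ inv_enc.T                     # p_i(y|xhat), shape (Y, T)
    # C[t, t', y, y'] = sum_x p(y|x) p(y'|x) p_i(xhat'|x) p_i(x|xhat)
    C = np.einsum('yx,zx,sx,tx->tsyz', p_y_given_x, p_y_given_x, enc, inv_enc)
    B = C.sum(axis=3)                                 # B[t, t', y]
    A = B.sum(axis=2)                                 # A[t, t']
    D = C.sum(axis=1) / dec.T[:, :, None]             # D[t, y, y']
    return A, B, C, D, dec, marg, inv_enc
```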
With the latter definitions (<ref>), (<ref>) can be rewritten as,
dlog p_i+1(|)/d log p_i(|)(<ref>)=
-β∑_[
δ_, p(|x) p_i+1(x|)
- p(|x) p_i+1(|x) p_i+1(x|)
.
.
- δ_, 1p_i+1(|)
p(|x) p(|x) p_i+1(x|) +
1p_i+1(|)
p(|x) p(|x) p_i+1(|x) p_i+1(x|)
]
(<ref>)=
-β[
δ_, p_i+1(|)
- B(, ; i+1)_.
.
- δ_, D(; i+1)_,
+ 1p_i+1(|) C(, ; i+1)_, ]
=
β∑_, ( δ_, - δ_, )
( 1 - δ_, p_i+1(|) )
C(, ; i+1)_,
The third equality above follows from (<ref>), the identities to the right of (<ref>) and simple algebra.
For the upper-right block (<ref>),
dlog p_i+1(|)/d log p_i() =
p(|x_1) p_i+1(x_1|x̂_2) /p_i+1(|x̂_2)δ_x̂_2, ·[
( - δ_x̂_2, x̂_3)
p_i+1(x_5|x̂_3) δ_x̂_3, x̂_4 +
δ_x_1, x_5δ_x̂_2, x̂_4]
·{δ_x̂_4, +
βδ_x̂_4, x̂_8 p(y_7|x_5)
p(y_7|x_9) p_i(x_9|x̂_8) /p_i(y_7|x̂_8)δ_x̂_8, x̂_10( -δ_x̂_10, )
.
. +
( -δ_x_5, x_6 ) [
p_i+1(|x_6) +
β p_i+1(x̂_8|x_6) p(y_7|x_6)
p(y_7|x_9) p_i(x_9|x̂_8) /p_i(y_7|x̂_8)δ_x̂_8, x̂_10( -δ_x̂_10, )
]
}
In a manner similar to (<ref>), summing over all ten dummy variables other than x_1 and x_5 yields,
( 1 - β) ·p(|x_1) p_i+1(x_1|) /p_i+1(|)·( δ_x_1, x_5 - p_i+1(x_5|) ) ·( δ_, - p_i+1(|x_5) )
=
( 1 - β) ·(
-1p_i+1(|)∑_ p(|x) p_i+1(|x) p_i+1(x|)
+ ∑_ p_i+1(|x) p_i+1(x|)
)
=
( 1 - β) ·∑_( 1 - p(|x) p_i+1(|)) p_i+1(|x) p_i+1(x|)
The two terms involving δ_, cancel out when summing over x_1 and x_5 at the first equality. Rewriting with the definitions (<ref>) of A and B further simplifies (<ref>) to,
( 1 - β) ·[
A(, ; i+1) -
1p_i+1(|) B(, ; i+1)_]
=
( 1 - β) ·∑_[
1 - δ_, p_i+1(|)] B(, ; i+1)_
For the lower-left block (<ref>),
dlog p_i+1()/d log p_i(|) =
p_i+1(x_2|) δ_, x̂_1[
βδ_x̂_1, p(|x_2) +
( -δ_x_2, x_3)
β p_i+1(|x_3) p(|x_3)
]
Summing over dummy variables and simplifying yields,
β·[
δ_, p_i+1(|) -
∑_ p(|x) p_i+1(|x) p_i+1(x|)
]
In terms of definitions (<ref>), this simplifies to
β·[
δ_, p_i+1(|) -
B(, ; i+1)_]
Finally, for the lower-right block (<ref>),
dlog p_i+1()/d log p_i()
=
p_i+1(x_2|) δ_, x̂_1·{δ_x̂_1, +
βδ_x̂_1, x̂_4 p(y_3|x_2)
p(y_3|x_5) p_i(x_5|x̂_4) /p_i(y_3|x̂_4)δ_x̂_4, x̂_6( -δ_x̂_6, )
.
. +
( -δ_x_2, x_7) [
p_i+1(|x_7) +
β p_i+1(x̂_4|x_7) p(y_3|x_7)
p(y_3|x_5) p_i(x_5|x̂_4) /p_i(y_3|x̂_4)δ_x̂_4, x̂_6( -δ_x̂_6, )
]
}
This simplifies to,
(1 - β)
(δ_, - ∑_ p_i+1(|x) p_i+1(x|) )
With definitions (<ref>), this can be written as
(1 - β)
( δ_, - A(, ; i+1) )
Collecting the results from (<ref>), (<ref>), (<ref>) and (<ref>) back into (<ref>), BA's Jacobian in these coordinates is
(
 [ β ∑_, ( δ_, - δ_, ) ·(1 - δ_, p_i+1(|)) C(, ; i+1)_,        ( 1 - β) ·∑_ [ 1 - δ_, p_i+1(|) ] B(, ; i+1)_ ;
   β·[ δ_, p_i+1(|) - B(, ; i+1)_ ]        (1 - β) ( δ_, - A(, ; i+1) ) ]
)
When evaluated at an IB root, this is Equation (<ref>) of Section <ref>.
Equivalently, it can be written in the following form, which is more convenient for implementation
(
 [ β [ B(, ; i+1)_ - δ_, p_i+1(|) + δ_, D(; i+1)_, - 1/p_i+1(|) C(, ; i+1)_, ]        ( 1 - β) ·[ A(, ; i+1) - 1/p_i+1(|) B(, ; i+1)_ ] ;
   β·[ δ_, p_i+1(|) - B(, ; i+1)_ ]        (1 - β) ( δ_, - A(, ; i+1) ) ]
)
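Assuming the abcd_quantities helper from the sketch above, the implementation-friendly block form can be assembled as follows; the concrete index placement reflects our reading of the formula, with rows and columns ordered as all (y, x̂) decoder coordinates first and the T marginal coordinates last (at a root, the i- and (i+1)-quantities coincide):

```python
import numpy as np

def ba_jacobian_log_dec(enc_next, p_x, p_y_given_x, beta):
    """Block Jacobian of BA_beta in log-decoder coordinates, built from the
    explicit form above.  Rows/columns: all (y, xhat) decoder coordinates
    first (y-major), then the T marginal coordinates."""
    A, B, C, D, dec, marg, inv_enc = abcd_quantities(enc_next, p_x, p_y_given_x)
    Y, T = dec.shape
    I_T = np.eye(T)
    # upper-left[(y,t),(y',t')] = beta*(B[t,t',y'] - d_{t,t'} p(y'|t)
    #                                   + d_{t,t'} D[t,y,y'] - C[t,t',y,y']/p(y|t))
    UL = beta * (B.transpose(0, 2, 1)[None, :, :, :]
                 - dec.T[None, :, :, None] * I_T[None, :, None, :]
                 + D.transpose(1, 0, 2)[:, :, :, None] * I_T[None, :, None, :]
                 - C.transpose(2, 0, 3, 1) / dec[:, :, None, None])
    # upper-right[(y,t), t''] = (1-beta)*(A[t,t''] - B[t,t'',y]/p(y|t))
    UR = (1.0 - beta) * (A[None, :, :] - B.transpose(2, 0, 1) / dec[:, :, None])
    # lower-left[t,(y',t')] = beta*(d_{t,t'} p(y'|t) - B[t,t',y'])
    LL = beta * (dec.T[:, :, None] * I_T[:, None, :] - B.transpose(0, 2, 1))
    # lower-right[t,t''] = (1-beta)*(d_{t,t''} - A[t,t''])
    LR = (1.0 - beta) * (I_T - A)
    return np.block([[UL.reshape(Y * T, Y * T), UR.reshape(Y * T, T)],
                     [LL.reshape(T, Y * T), LR]])
```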
§.§ The partial β-derivatives of BA-IB in log-decoder coordinates
We calculate the vector D_β BA_β of partial derivatives of the BA_β operator in log-decoder coordinates (of Section <ref>), which appears at the right-hand side of the IB-ODE (<ref>) (in Section <ref>).
To that end, we differentiate backward along the dependencies graph (<ref>) (in Appendix <ref>) with respect to β, starting at the output coordinates p_i+1(|) and p_i+1() of BA_β.
After differentiating, we keep track of which variables are independent.
Here, these are β and the input coordinates p_i(|) and p_i() of BA_β.
The derivatives of these with respect to β vanish (except for dβ/dβ = 1), as they are independent.
Finally, we compose the differentiations to obtain the effect D_β BA_β of changing β on BA's output.
We note that, in principle, one can differentiate the explicit formulae (<ref>) of BA_β in decoder coordinates (Appendix <ref>) with respect to β.
However, we find that to be cumbersome and far more error-prone than our approach, and so proceed in the spirit of the previous Appendix <ref>.
We start by differentiating each of the equations defining the Blahut-Arimoto Algorithm <ref> with respect to β, as if all its variables are dependent.
For the cluster marginal Equation algo:BA-IBeq:IB-BA-cluster_marginal,
p_i() = ∑_ p() p_i(|x)
For the inverse-encoder Equation algo:BA-IBeq:IB-BA-bayes-for-computing-inverse-enc,
p_i(|) =
p()/p_i()p_i(|) -
p_i(|)p()/p_i()^2p_i()
For the decoder Equation algo:BA-IBeq:IB-BA-decoder-eq
p_i(|) =
∑_ p(|x) p_i(x|)
For the KL-divergence,
D_KL[ p(|) || p_i(|) ] =
∑_ p(|) logp(|) /p_i(|) =
- ∑_p(|) /p_i(|) p_i(|)
And its exponent,
exp{ -β D_, } =
- ( D_, + βD_, ) ·exp{ -β D_, }
(<ref>)=
- ( D_, - β∑_p(|) /p_i(|) p_i(|) ) ·exp{ -β D_, }
where we have written D_, := D_KL[ p(|) || p_i(|) ] for short.
Thus, for the partition function's Equation algo:BA-IBeq:IB-BA-partition-func we have,
Z_i(, β) =
∑_ p_i() exp{ -β D_, }
(<ref>)=∑_( p_i() - p_i() D_, + β p_i() ∑_p(|) /p_i(|) p_i(|) )
·exp{ -β D_, }
Finally, for the encoder Equation algo:BA-IBeq:IB-BA-new-direct-enc we have
p_i+1(|) =
( p_i() e^-β D_, /Z_i(,β))
(<ref>)=
p_i() e^-β D_, /Z_i(,β)[ 1/p_i() p_i()
- ( D_, - β∑_p(|) /p_i(|) p_i(|) ) - 1/Z_i(,β)Z_i(,β)]
algo:BA-IBeq:IB-BA-new-direct-enc=
p_i+1(|) ·[ 1/p_i() p_i()
- ( D_, - β∑_p(|) /p_i(|) p_i(|) ) - 1/Z_i(,β)Z_i(,β)]
Next, picking β and the inputs log p_i(|) and log p_i() of BA_β as our independent variables, we compose the differentiations above to obtain D_β BA_β at an output coordinate.
That is, we seek log p_i+1(|) and log p_i+1().
By the chain rule, we trace the dependencies graph (<ref>) (Section <ref>) backwards, from the output nodes p_i+1(|) and p_i+1() back to the input nodes.
The derivatives of the latter with respect to β vanish, as these are our independent variables.
Starting with a decoder output coordinate,
log p_i+1(|) =
1/p_i+1(|) p_i+1(|) (<ref>)=1/p_i+1(|) ∑_ p(|x) p_i+1(x|)
(<ref>)=1/p_i+1(|) ∑_ p(|) [ p()/p_i+1()p_i+1(|) -
p_i+1(|)p()/p_i+1()^2p_i+1()]
(<ref>)=∑_p(|) p()/p_i+1(|) p_i+1()[ p_i+1(|) -
p_i+1(|) /p_i+1()∑_ p() p_i+1(|) ]
(<ref>)=∑_p(|) p()/p_i+1(|) p_i+1(){
p_i+1(|) ·[ 1/p_i() p_i()
- ( D_, - β∑_p(|) /p_i(|) p_i(|) ) - 1/Z_i(,β)Z_i(,β)] .
.
- p_i+1(|) /p_i+1()∑_ p() (
p_i+1(|) ·[ 1/p_i() p_i()
- ( D_, - β∑_p(|) /p_i(|) p_i(|) ) - 1/Z_i(,β)Z_i(,β)])
}
Since p_i(|) and p_i() are independent input variables, their derivatives with respect to the independent variable β vanish, yielding
- ∑_p(|) p()/p_i+1(|) p_i+1(){
p_i+1(|) ·[ D_, + 1/Z_i(,β)Z_i(,β)]
.
.
- p_i+1(|) /p_i+1()∑_ p()
p_i+1(|) ·[ D_, + 1/Z_i(,β)Z_i(,β)]
}
To complete the calculation at (<ref>), note that the same argument can be used for two of the three summands in (<ref>), reducing it to
Z_i(, β) =
- ∑_ p_i() D_, e^ -β D_,
since p_i(|) and p_i() are considered as independent variables. Therefore,
log p_i+1(|) (<ref>)(<ref>)=
- ∑_p(|) p()/p_i+1(|) p_i+1(){
p_i+1(|) ·[ D_, - ∑_(p_i() /Z_i(,β) e^ -β D_, ) D_, ]
.
.
- p_i+1(|) /p_i+1()∑_ p()
p_i+1(|) ·[ D_, - ∑_(p_i() /Z_i(,β) e^ -β D_, ) D_, ]
}
algo:BA-IBeq:IB-BA-new-direct-enc=
- ∑_p(|) p()/p_i+1(|) p_i+1(){
p_i+1(|) ·[ D_, - ∑_ p_i+1(|) D_, ]
.
.
- p_i+1(|) /p_i+1()∑_ p()
p_i+1(|) ·[ D_, - ∑_ p_i+1(|) D_, ]
}
algo:BA-IBeq:IB-BA-decoder-eqalgo:BA-IBeq:IB-BA-bayes-for-computing-inverse-enc=∑_ p_i+1(|) D_,
- ∑_p(|)/p_i+1(|) p_i+1(|) D_,
+ ∑_, p(|)/p_i+1(|) p_i+1(|) p_i+1(|) D_,
- ∑_, p_i+1(|) p_i+1(|) D_,
=
∑_[ 1 - p(|)/p_i+1(|) ] p_i+1(|) D_,
- ∑_, [ 1 - p(|)/p_i+1(|) ] p_i+1(|) p_i+1(|) D_,
At the second equality to the bottom we started with the third summand, then with the first, and only then with the third and fourth summands.
And so,
log p_i+1(|) =
∑_, [ 1 - p(|)/p_i+1(|) ] ·[ δ_, - p_i+1(|) ] · p_i+1(|) D_,
Next, consider a cluster marginal output coordinate,
log p_i+1() =
1/p_i+1() p_i+1() (<ref>)=1/p_i+1()∑_ p() p_i+1(|)
(<ref>)=1/p_i+1()∑_ p() p_i+1(|) ·[ 1/p_i() p_i()
- ( D_, - β∑_p(|) /p_i(|) p_i(|) ) - 1/Z_i(,β)Z_i(,β)]
Since p_i(|) and p_i() are independent variables, their derivatives with respect to β vanish, yielding
- 1/p_i+1()∑_ p_i+1(|) p() [
D_, + 1/Z_i(,β)Z_i(,β)]
(<ref>)=
- 1/p_i+1()∑_ p_i+1(|) p() [ D_,
- ∑_( p_i() /Z_i(,β) e^ -β D_, ) D_, ]
algo:BA-IBeq:IB-BA-bayes-for-computing-inverse-encalgo:BA-IBeq:IB-BA-new-direct-enc=
- ∑_, [ δ_, - p_i+1(|) ] · p_i+1(|) D_,
Thus, for the marginals' coordinates, we have obtained
log p_i+1() =
- ∑_, [ δ_, - p_i+1(|) ] · p_i+1(|) D_,
When evaluated at an IB root, Equations (<ref>) and (<ref>) form respectively the decoder and marginal coordinates of D_β BA_β, which appears at the right-hand side of the IB ODE (<ref>) (note the extra minus sign in the implicit ODE (<ref>)).
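The two expressions above can be evaluated directly from the updated encoder; the sketch below (our own naming) takes the KL matrix kl[t, x] = D_KL[p(y|x) || p_i(y|x̂_t)] of the same BA step as input, and returns the decoder and marginal coordinates of D_β BA_β:

```python
import numpy as np

def dbeta_ba_log_dec(enc_next, p_x, p_y_given_x, kl):
    """Partial beta-derivatives of BA_beta in log-decoder coordinates.
    enc_next[t, x] = p_{i+1}(xhat_t|x);  kl[t, x] = D_KL[p(y|x) || p_i(y|xhat_t)]."""
    marg = enc_next @ p_x
    inv_enc = enc_next * p_x[None, :] / marg[:, None]      # p_{i+1}(x|xhat)
    dec = p_y_given_x @ inv_enc.T                          # p_{i+1}(y|xhat)
    T, X = enc_next.shape
    # w[t, s, x] = [delta_{t,s} - p_{i+1}(s|x)] * p_{i+1}(x|t) * KL_{x,s}
    delta = np.eye(T)[:, :, None]
    w = (delta - enc_next[None, :, :]) * inv_enc[:, None, :] * kl[None, :, :]
    d_log_marg = -w.sum(axis=(1, 2))                       # marginal coordinates
    # d_log_dec[y, t] = sum_{s,x} [1 - p(y|x)/p_{i+1}(y|t)] * w[t, s, x]
    ratio = 1.0 - p_y_given_x[:, None, :] / dec[:, :, None]
    d_log_dec = np.einsum('ytx,tsx->yt', ratio, w)         # decoder coordinates
    return d_log_dec, d_log_marg
```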
§.§ The coordinates exchange Jacobians between log-decoder and log-encoder coordinates
Following the discussion in Section <ref> on the pros and cons of each coordinate system, we leverage the observations of Appendix <ref> in order to derive the coordinate exchange Jacobians, between the log-decoder and log-encoder coordinate systems.
Exchanging between the other coordinate system pairs adds little to the below and thus is omitted.
Given the encoder's logarithmic derivative log p_β(|), we would like to compute from it the logarithmic derivative (log p_β(|), log p_β()) in decoder coordinates, and vice versa.
To that end, recall that an (arbitrary) encoder p(|) determines a decoder-marginal pair ( p(|), p() ) and vice versa (e.g., Equation (<ref>) in Section <ref>).
So, one can follow the dependencies graph (<ref>) (in Appendix <ref>) backward between these coordinate systems to exchange the coordinates of an implicit derivative.
For example, consider p_i(|) and p_i() as functions of the encoder p_i(|) preceding it in the graph (<ref>).
When at an IB root, multiplying by the coordinates exchange Jacobian yields
log p_β(|) = dlog p_β(|)/dlog p_β(|)log p_β(|) and
log p_β() = dlog p_β()/dlog p_β(|)log p_β(|) .
Similarly, considering an encoder p_β(|) as a function of p_β(|) and p_β(),
log p_β(|) =
dlog p_β(|)/dlog p_β(|)log p_β(|) +
dlog p_β(|)/dlog p_β()log p_β() +
log p_β(|) .
The last term log p_β(|) in (<ref>) stems from the fact that the encoder Equation algo:BA-IBeq:IB-BA-new-direct-enc depends explicitly on β, unlike the decoder and marginal Equations
algo:BA-IBeq:IB-BA-decoder-eq and algo:BA-IBeq:IB-BA-cluster_marginal.
cf., the comments around (<ref>) in Appendix <ref>.
The matrices dlog p_β(|)/dlog p_β(|) and dlog p_β()/dlog p_β(|) for exchanging from encoder to decoder coordinates follow from the chain rule, and are calculated in Appendix <ref> below, at Equations (<ref>) and (<ref>).
Similarly, the matrices dlog p_β(|)/dlog p_β(|) and dlog p_β(|)/dlog p_β() and the partial derivative log p_β(|) for exchanging from decoder to encoder coordinates are Equations (<ref>), (<ref>) and (<ref>), in Appendix <ref>.
§.§.§ Exchanging from encoder to decoder coordinates
An input encoder p_i(|) determines a decoder p_i(|) and a marginal p_i().
As in previous subsections, we follow the dependencies graph (<ref>) along all the paths between these.
Using diagram (<ref>) from Section <ref>, for the marginal one has
dlog p_i()/dlog p_i(|) =
p_i(|) δ_, .
While for the decoder,
dlog p_i(|)/dlog p_i(|) =
∂log p_i(|)/∂log p_i(x_1|x̂_2)[
∂log p_i(x_1|x̂_2)/∂log p_i(|) +
∂log p_i(x_1|x̂_2)/∂log p_i(x̂_3)∂log p_i(x̂_3)/∂log p_i(|)]
=
p(|x_1) p_i(x_1|x̂_2)/p_i(|x̂_2) δ_x̂_2, [
δ_x_1, δ_x̂_2,
-δ_x̂_2, x̂_3 p_i(|x̂_3) δ_x̂_3, ]
Summing over the three dummy variables as before, the latter simplifies to
dlog p_i(|)/dlog p_i(|) =
[
p(|) /p_i(|) - 1
] p_i(|) δ_, .
§.§.§ Exchanging from decoder to encoder coordinates
In the other way around, a decoder p_i(|) and a marginal p_i() determine the subsequent encoder p_i+1(|).
Using diagram (<ref>), one has
dlog p_i+1(|)/dlog p_i(|) =
∂log p_i+1(|)/∂log p_i(|) +
∂log p_i+1(|)/∂log Z_i(x_1)∂log Z_i(x_1)/∂log p_i(|)
=
β δ_, p(|) -
δ_, x_1 β p_i+1(|x_1) p(|x_1)
Summing over the dummy variable x_1, this is the coordinates exchange Jacobian J_dec^enc mentioned in Section <ref>,
dlog p_i+1(|)/dlog p_i(|) =
β p(|) [ δ_, - p_i+1(|) ]
Next, for the derivative with respect to the marginal,
dlog p_i+1(|)/dlog p_i() =
∂log p_i+1(|)/∂log p_i() +
∂log p_i+1(|)/∂log Z_i(x_1)∂log Z_i(x_1)/∂log p_i()
+
[∂log p_i+1(|)/∂log Z_i(x_1)∂log Z_i(x_1)/∂log p_i(y_2|x̂_3) + ∂log p_i+1(|)/∂log p_i(y_2|x̂_3)]
∂log p_i(y_2|x̂_3)/∂log p_i(x_4|x̂_5)∂log p_i(x_4|x̂_5)/∂log p_i()
=
δ_, -
δ_, x_1 p_i+1(|x_1)
+
[ -δ_, x_1 β p_i+1(x̂_3|x_1) p(y_2|x_1) + β δ_x̂_3, p(y_2|) ]
p(y_2|x_4) p_i(x_4|x̂_3)/p_i(y_2|x̂_3) δ_x̂_3, x̂_5· (-δ_x̂_5, )
Summing over the five dummy variables, this is the coordinates exchange Jacobian J_mrg^enc from Section <ref>,
dlog p_i+1(|)/dlog p_i() =
( 1 - β) [ δ_, - p_i+1(|) ]
Finally, note that the encoder Equation algo:BA-IBeq:IB-BA-new-direct-enc depends on β explicitly, rather than indirectly only via its other variables.
So, to calculate the partial derivative term log p_i+1(|) in (<ref>), write as follows for log Z,
∂/∂β Z_i(x, β) algo:BA-IBeq:IB-BA-partition-func=∑_x̂ p_i(x̂) ∂/∂βexp{ -β D_KL[p(|) || p_i(y|x̂)] }
=
-∑_x̂ p_i(x̂) D_KL[p(|) || p_i(y|x̂)] exp{ -β D_KL[p(|) || p_i(y|x̂)] }
Thus,
∂/∂βlog Z_i(x, β) =
1/Z_i(x, β)∂/∂β Z_i(x, β)
(<ref>)=
-∑_x̂p_i(x̂) exp{ -β D_KL[p(|) || p_i(y|x̂)] }/Z_i(x, β) D_KL[p(|) || p_i(y|x̂)]
algo:BA-IBeq:IB-BA-new-direct-enc=
-∑_x̂ p_i+1(x̂|x) D_KL[p(|) || p_i(y|x̂)] .
And so, from the encoder Equation algo:BA-IBeq:IB-BA-new-direct-enc we have
log p_i+1(|) =
log p_i()
- log Z_i(x, β)
- (β D_KL[p(|) || p_i(|)] )
(<ref>)=∑_ p_i+1(|) D_KL[p(|) || p_i(|)]
- D_KL[p(|) || p_i(|)]
where the term log p_i() vanishes since it is considered as an independent variable here.
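The coordinate-exchange Jacobians of this appendix, together with the explicit β-partial of the encoder, can be written down as follows; the index orders and function names are our own convention, and kl again denotes the matrix of KL divergences D_KL[p(y|x) || p_i(y|x̂_t)]:

```python
import numpy as np

def enc_to_dec_jacobians(enc, p_x, p_y_given_x):
    """d log p_i(xhat)/d log p_i(xhat'|x')  -> J_marg[t, t', x'],
       d log p_i(y|xhat)/d log p_i(xhat'|x') -> J_dec[y, t, t', x']."""
    marg = enc @ p_x
    inv_enc = enc * p_x[None, :] / marg[:, None]
    dec = p_y_given_x @ inv_enc.T
    T = enc.shape[0]
    I_T = np.eye(T)
    J_marg = inv_enc[:, None, :] * I_T[:, :, None]           # p_i(x'|xhat) delta
    ratio = p_y_given_x[:, None, :] / dec[:, :, None] - 1.0   # p(y|x')/p_i(y|xhat) - 1
    J_dec = ratio[:, :, None, :] * J_marg[None, :, :, :]
    return J_dec, J_marg

def dec_to_enc_jacobians(enc_next, p_y_given_x, kl, beta):
    """J_dec_enc[t, x, y, t'], J_marg_enc[t, x, t'] and the explicit beta-partial
       of log p_{i+1}(xhat|x);  kl[t, x] = D_KL[p(y|x) || p_i(y|xhat_t)]."""
    T, X = enc_next.shape
    I_T = np.eye(T)
    diff = I_T[:, :, None] - enc_next[None, :, :]             # delta - p_{i+1}(xhat'|x)
    J_dec_enc = beta * p_y_given_x.T[None, :, :, None] * diff.transpose(0, 2, 1)[:, :, None, :]
    J_marg_enc = (1.0 - beta) * diff.transpose(0, 2, 1)
    dbeta_log_enc = (enc_next * kl).sum(axis=0)[None, :] - kl
    return J_dec_enc, J_marg_enc, dbeta_log_enc
```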
§ PROOF OF LEMMA <REF>, ON THE KERNEL OF THE JACOBIAN OF THE IB OPERATOR IN LOG-DECODER COORDINATES
We prove Lemma <ref> from Section <ref>, using the results of Appendix <ref>.
In the first direction, suppose that ( (v_, )_, , (u_)_) is a vector in the left kernel of the Jacobian of the IB operator (<ref>) in log-decoder coordinates, I - D_log p(|), log p() BA_β, as in (<ref>) in Section <ref>.
Using the Jacobian's implicit form (<ref>) (Appendix <ref>), this is to say that
v_, = ∑_, v_, dlog p_i+1(|)/d log p_i(|) +
∑_ u_dlog p_i+1()/d log p_i(|) and
u_ = ∑_, v_, dlog p_i+1(|)/d log p_i() +
∑_ u_dlog p_i+1()/d log p_i()
hold, for every and .
We spell out and manipulate these equations to obtain the desired result.
By the Jacobian's explicit form (<ref>) from Appendix <ref>, Equation (<ref>) spells out as
v_, =
β·∑_, v_, ∑_, ( δ_, - δ_, )
·(1 - δ_, p_i+1(|))
C(, ; i+1)_,
+ β·∑_ u_[ δ_, p_i+1(|) - B(, ; i+1)_] ,
while the second Equation (<ref>) spells out as
u_ =
( 1 - β) ·∑_, v_, ∑_[
1 - δ_, p_i+1(|)] B(, ; i+1)_
+
(1 - β) ·∑_ u_( δ_, - A(, ; i+1) ) .
Next, we expand and simplify each of the terms in (<ref>) and (<ref>), using the definition (<ref>) of A, B and C from Appendix <ref>.
For the first summand to the right of (<ref>),
β·∑_, v_, ∑_, ( δ_, - δ_, )
(1 - δ_, p_i+1(|))
C(, ; i+1)_,
(<ref>)=β·∑_, v_, ∑_, ( δ_, - δ_, )
(1 - δ_, p_i+1(|))
∑_ p(|) p(|) p_i+1(|) p_i+1(|)
We simplify each of the four addends to the right of (<ref>) while temporarily ignoring the β coefficient.
For the δ_, · 1 term,
∑_, v_, ∑_, δ_, ∑_ p(|) p(|) p_i+1(|) p_i+1(|)
=
∑_, v_, ∑_ p(|) p_i+1(|) p_i+1(|)
For the - δ_, ·δ_, /p_i+1(|) term,
- ∑_, v_, ∑_, δ_, δ_, ∑_1p_i+1(|) p(|) p(|) p_i+1(|) p_i+1(|)
=
- ∑_, v_, ∑_1p_i+1(|) p(|) p(|) p_i+1(|) p_i+1(|)
For the -δ_, · 1 term,
-∑_, v_, ∑_, δ_, ∑_ p(|) p(|) p_i+1(|) p_i+1(|)
=
-∑_ v_, ∑_ p(|) p_i+1(|) =
-∑_ v_, p_i+1(|)
And for the last -δ_, ·-δ_, p_i+1(|) term,
∑_, v_, ∑_, δ_, ·δ_, p_i+1(|)∑_ p(|) p(|) p_i+1(|) p_i+1(|)
=
∑_v_, /p_i+1(|)∑_ p(|) p(|) p_i+1(|)
Collecting (<ref>), (<ref>), (<ref>) and (<ref>) back into (<ref>), we obtain
β·∑_, v_, ∑_ p(|) p_i+1(|) p_i+1(|) [ 1 - p(|) p_i+1(|)]
+
β·∑_ v_, 1 p_i+1(|)∑_ p(|) p(|) p_i+1(|)
- β· p_i+1(|) ∑_ v_,
for the first summand to the right of (<ref>).
The second summand to the right of (<ref>) equals,
β·∑_ u_[ δ_, p_i+1(|) - B(, ; i+1)_]
(<ref>)=β· u_ p_i+1(|) -
β·∑_ p(|) p_i+1(|) ∑_ u_ p_i+1(|)
Combining (<ref>) and (<ref>), Equation (<ref>) is equivalent to
1β· v_,
+ p_i+1(|) ∑_ v_,
- u_ p_i+1(|)
= ∑_, v_, ∑_ p(|) p_i+1(|) p_i+1(|) [ 1 - p(|) p_i+1(|)]
+ ∑_ v_, ∑_ p(|) p_i+1(|) p(|) p_i+1(|)
- ∑_ p(|) p_i+1(|) ∑_ u_ p_i+1(|)
for any and .
Summing (<ref>) over and simplifying, we obtain
1 β·∑_ v_, - u_
=
∑_, v_, ∑_ p_i+1(|) p_i+1(|) [ 1 - p(|) p_i+1(|)]
- ∑_ p_i+1(|) ∑_ u_ p_i+1(|)
for any .
Next, we expand and simplify Equation (<ref>). Using the definition (<ref>) of B, the first summand to its right can be written as
( 1 - β) ·∑_, v_, ∑_ p_i+1(|) p_i+1(|) [
1 - p(|) p_i+1(|)] .
Similarly, the second summand to the right of (<ref>) can be written as
(1 - β) ·[ u_ - ∑_ p_i+1(|) ∑_ u_ p_i+1(|) ] .
Combining (<ref>) and (<ref>), Equation (<ref>) can now be written explicitly,
β1 - β· u_ =
∑_, v_, ∑_ p_i+1(|) p_i+1(|) [ 1 - p(|) p_i+1(|)]
- ∑_ p_i+1(|) ∑_ u_ p_i+1(|)
for every .
Next, subtracting (<ref>) from (<ref>), we obtain
u_ = 1 - ββ·∑_ v_,
for any .
Substituting (<ref>) into (<ref>) and using the decoder Equation algo:BA-IBeq:IB-BA-decoder-eq to expand p_i+1(|) there,
1β· v_,
= ∑_, v_, ∑_ p_i+1(|) p(|) p_i+1(|) [ 2β - 1β - p(|) p_i+1(|)]
- ∑_ v_, ∑_ p(|) p_i+1(|) ·2β - 1β
+ ∑_ v_, ∑_ p(|) p_i+1(|) p(|) p_i+1(|)
Next, inserting ∑_δ_, into the sums on the last line,
1β· v_,
= ∑_, v_, ∑_ p_i+1(|) p(|) p_i+1(|) [ 2β - 1β - p(|) p_i+1(|)]
- ∑_, v_, ∑_δ_, p(|) p_i+1(|) [ 2β - 1β - p(|) p_i+1(|)]
Finally, this simplifies to
v_,
= ∑_, v_, ∑_ p(|) [ δ_, - p_i+1(|) ] p_i+1(|) [ β·p(|) p_i+1(|) + (1 - 2β) ]
The latter is to say that (v_, )_, is a left-eigenvector of the eigenvalue 1 of the matrix to the right. At an IB root, this is precisely the matrix S (<ref>) from the Lemma's statement, as desired.
As a side note, we comment that Equations (<ref>) and (<ref>) also imply
∀ ∑_ v_, = 0
and ∑_ u_ = 0 ,
which can be seen by summing (<ref>) and (<ref>) respectively over , and simplifying.
In the other direction, let v := (v_, )_, be a left-eigenvector of the eigenvalue 1 of S (<ref>). That is, assume that Equation (<ref>) holds. Define a vector u := (u_)_ by Equation (<ref>).
Reversing the algebra, (<ref>) is equivalent to (<ref>).
Substituting (<ref>) into the latter yields back (<ref>), which is equivalent to the explicit form (<ref>) of Equation (<ref>).
Next, summing (<ref>) over and simplifying yields (<ref>). Adding the latter to (<ref>) yields back (<ref>), which is equivalent to Equation (<ref>), the explicit form of (<ref>).
To conclude, both of the Equations (<ref>) and (<ref>) hold, as claimed.
§ APPROXIMATE ERROR ANALYSIS FOR DETERMINISTIC ANNEALING AND FOR EULER'S METHOD WITH BA
Complementing the results of Section <ref>, we provide an approximate error analysis for two computation methods for the IB: deterministic annealing and Euler's method combined with a fixed number of BA iterations.
First, we recap the linearization argument around <cit.>.
Denote repeated BA iterations initialized at p_0 by
p_k+1 := BA_β[p_k] .
Linearizing around a fixed-point p_β of BA,
BA [p_k] ≃p_β + D BA_β|_p_β·( p_k - p_β) ,
where D BA_β|_p_β denotes the Jacobian matrix of BA_β evaluated at p_β.
Rewriting in terms of the error δp_k := p_k - p_β of the k-th iterate,
δp_k+1≃ D BA_β|_p_β·δp_k .
Thus, to first order, repeated applications of BA_β reduce the initial error according to
δp_k+1≃(D BA_β|_p_β)^k ·δp_0 .
Next, consider k > 0 applications of BA_{β+Δβ} to a root p_β at β. This is similar to deterministic annealing, but with a capped number of BA iterations.
Plugging the initial error δp_0 := p_β - p_{β+Δβ} ≃ -Δβ dp/dβ|_β into Equation (<ref>) shows that this method is of the first order,
δp_{k+1} ≃
|Δβ| · (D BA_{β+Δβ}|_{p_{β+Δβ}})^k dp/dβ|_β .
Finally, we combine BA with Euler's method for the IB, Equation (<ref>).
Consider k > 0 applications of BA_{β+Δβ} to the approximation p_β + Δβ dp/dβ|_β produced by an Euler method step.
Its initial error is
δp_0 :=
p_β + Δβ dp/dβ|_β - p_{β+Δβ} =
-(1/2) (Δβ)^2 d^2p/dβ^2|_{β'} ,
where the last equality follows from the second-order expansion p_{β+Δβ} = p_β + Δβ dp/dβ|_β + (1/2) (Δβ)^2 d^2p/dβ^2|_{β'}, with β' ∈ [β, β + Δβ].
Similar to before, plugging this into Equation (<ref>) shows that this method is of the second order,
δp_{k+1} ≃ (1/2) |Δβ|^2 · (D BA_{β+Δβ}|_{p_{β+Δβ}})^k d^2p/dβ^2|_{β'} .
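The two schemes compared above can be sketched as follows; ba_ib_step refers to the single-iteration sketch given earlier, the Euler step is taken in encoder coordinates, and the derivative denc_dbeta is assumed to be supplied by a solver for the IB ODE (the simplex projection after the predictor is a practical safeguard of ours, not part of the analysis):

```python
import numpy as np

def anneal_step(enc_root, p_x, p_y_given_x, beta, dbeta, k):
    """k BA iterations at beta+dbeta, initialized at the previous root
    (deterministic annealing with a capped number of iterations)."""
    enc = enc_root.copy()
    for _ in range(k):
        enc = ba_ib_step(enc, p_x, p_y_given_x, beta + dbeta)[0]
    return enc                      # error of first order in |dbeta|

def euler_ba_step(enc_root, denc_dbeta, p_x, p_y_given_x, beta, dbeta, k):
    """Euler predictor followed by k BA iterations as a corrector."""
    enc = enc_root + dbeta * denc_dbeta
    enc = np.clip(enc, 1e-12, None)                 # project back to the simplex
    enc = enc / enc.sum(axis=0, keepdims=True)      # (practical safeguard, ours)
    for _ in range(k):
        enc = ba_ib_step(enc, p_x, p_y_given_x, beta + dbeta)[0]
    return enc                      # error of second order in |dbeta|
```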
§ AN EXACT SOLUTION FOR A BINARY SYMMETRIC CHANNEL
Define an IB problem by Y ∼ Bernoulli(1/2) and X := Y ⊕ Z for Z ∼ Bernoulli(α) independent of Y, 0 < α < 1/2, where ⊕ denotes addition modulo 2. Explicitly, it is given by p_Y|X = [ 1-α, α; α, 1-α ] and p_X = (1/2, 1/2). We synthesize exact solutions for this problem using Mrs. Gerber's Lemma <cit.> and by following <cit.>.
Let h(p) := -p log p - (1-p) log (1-p) be the binary entropy, with h(0) := h(1) := 0. It is injective on [0, 1/2], with a maximal value of log 2 at p = 1/2. So, its inverse function h^-1 is well defined on [0, log 2].
Given a constraint I_X∈ [0, log 2] on I(X̂; X), I(X̂; X) ≤ I_X, define a random variable V ∼Bernoulli(δ) and set X̂ := X ⊕ V, where δ is defined by h(δ) = log 2 - I_X or equivalently in terms of h^-1 by δ := h^-1(log 2 - I_X).
Explicitly, p(x̂|x) = [ 1-δ, δ; δ, 1-δ ], with its rows indexed by x̂ and columns by x.
X̂ is also a Bernoulli(1/2) variable since X is, and so
I(X̂; X) =
H(X̂) - H(X̂|X) =
log 2 - h(δ) = I_X,
showing that the constraint on I(X̂; X) holds.
The chain X̂→ X → Y of random variables is readily seen to be Markov.
By <cit.>, it follows that I(X̂; Y) ≤log 2 - h(α * δ), where a*b := a(1-b) + b(1-a). Finally, equality follows by Theorem 1 there.
Thus, the above p(|x) is IB-optimal.
The above defines an IB solution p(|x) as a function of I_X. However, our numerical computations are phrased in terms of the IB's Lagrange multiplier β. To that end, <cit.> show that
β· (1 - 2α) log1 - α * δ/α * δ =
log1 - δ/δ ,
and that the bifurcation of this problem occurs at
β_c = 1/(1 - 2α)^2 .
To conclude, we have β = β(δ) as a function of δ, δ = δ(I_X) as a function of I_X, and the encoder p(|x) as a function of δ.
These functional dependencies are summarized as follows,
δ = δ(I_X), p(|x) = p(|x)(δ), and β = β(δ) ;
that is, δ is a function of I_X, while both the encoder p(|x) and the multiplier β are functions of δ.
Writing p = (p(|x))_, x, its derivative with respect to β can be calculated by the chain rule,
dp/dβ =
d/dβ( p(β^-1(δ)) ) =
dp/dδ(dβ/dδ)^-1 ,
where we have applied the derivative of an inverse function (f^-1)' = 1/f' to β(δ) in (<ref>), to differentiate δ(β).
From the argument around (<ref>), dp/dδ = [ -1, 1; 1, -1 ].
While this yields an analytical expression for the derivative dpdβ, both of the terms to the right of (<ref>) are evaluated at δ(β), for a given β value.
Although it is straightforward to compute δ(β) numerically from (<ref>), this entails numerical error, especially as δ approaches 1/2 near the bifurcation.
For the solution with respect to decoder coordinates, an immediate application of the Bayes rule shows that
p(x̂) = 1/2 and
p(y|x̂) = [ α*(1-δ), α*δ; α*δ, α*(1-δ) ] ,
where the rows of p(y|x̂) are indexed by y, and its columns by x̂.
Along with dp(y|x̂)/dδ = (2α-1)·[ 1, -1; -1, 1 ], its derivatives with respect to β follow as in (<ref>).
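A small numerical sketch of this construction is given below: β(δ) is inverted by bisection on (0, 1/2), which is justified for β above the critical value β_c = 1/(1-2α)^2, and dβ/dδ is approximated by a central finite difference rather than differentiating analytically; the function names are ours, and accuracy degrades near the bifurcation, as noted above:

```python
import numpy as np

def star(a, b):
    """Binary convolution a*b := a(1-b) + b(1-a)."""
    return a * (1 - b) + b * (1 - a)

def beta_of_delta(alpha, d):
    ad = star(alpha, d)
    return np.log((1 - d) / d) / ((1 - 2 * alpha) * np.log((1 - ad) / ad))

def delta_of_beta(alpha, beta, iters=200):
    """Invert beta(delta) on (0, 1/2) by bisection; a sign change exists for
    beta > beta_c = 1/(1-2*alpha)^2.  Accuracy degrades near the bifurcation."""
    lo, hi = 1e-12, 0.5 - 1e-12
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if beta_of_delta(alpha, mid) > beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bsc_ib_root(alpha, beta):
    d = delta_of_beta(alpha, beta)
    enc = np.array([[1 - d, d], [d, 1 - d]])                    # p(xhat|x)
    dec = np.array([[star(alpha, 1 - d), star(alpha, d)],
                    [star(alpha, d), star(alpha, 1 - d)]])      # p(y|xhat)
    marg = np.array([0.5, 0.5])                                 # p(xhat)
    # dp/dbeta = (dp/ddelta) / (dbeta/ddelta); dbeta/ddelta here by a central
    # finite difference (an approximation, valid away from the bifurcation).
    h = 0.5 * min(1e-6, d, 0.5 - d)
    dbeta_dd = (beta_of_delta(alpha, d + h) - beta_of_delta(alpha, d - h)) / (2 * h)
    denc_dbeta = np.array([[-1.0, 1.0], [1.0, -1.0]]) / dbeta_dd
    return d, enc, dec, marg, denc_dbeta

# Example: alpha = 0.3 gives beta_c = 6.25; beta = 8.0 lies above it.
# d, enc, dec, marg, denc_dbeta = bsc_ib_root(0.3, 8.0)
```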
§ EQUIVALENT CONDITIONS FOR CLUSTER-MERGING BIFURCATIONS
We briefly discuss the equivalent conditions for cluster-merging bifurcations in the IB (Subsection <ref>) found in the literature.
<cit.> derive a condition for cluster-splitting phase transitions (Equation (17) there) in the context of fuzzy clustering.
Following this, <cit.> derives an analogous condition for cluster splitting in the IB,
( I - β C_X(; β) ) u = 0 ,
which is Equation (12) there.
Namely, for a cluster to split it is necessary that 1/β would be an eigenvalue of an |𝒳|-by-|𝒳| matrix C_X(; β), whose entries at an IB root are given by
C_X(; β)_, := ∑_p(|) p(|) p_β(|)/p_β(|) ,
and I is the identity.
While the coefficients matrix (<ref>) for the IB differs from the one for fuzzy clustering, inter-cluster interactions are explicitly neglected in both derivations (see therein).
Indeed, the definition (<ref>) of C_X involves the coordinates of cluster alone, as one might expect when considering a root in either decoder or in inverse-encoder coordinates (Section <ref>).
Reversing the dynamics in β, condition (<ref>) characterizes cluster-merging bifurcations in the IB (Subsection <ref>).
<cit.> notes that (<ref>) is closely related to the bifurcation analysis of <cit.>.
The latter provides a condition to identify the critical β values of IB bifurcations, given in their Theorem 5.3.
Indeed, their condition is equivalent to (<ref>), and therefore it also characterizes cluster-merging bifurcations.
To see this, the necessary condition they give for a phase transition at β is that 1/β must be an eigenvalue of a matrix V (Equation (21) there). When written in our notation, this matrix is given by
V(; β)_, :=
∑_p(, ) p(, ) p_β( | ) / p_β(, ) p() .
However, V (<ref>) is readily seen to be the transpose of C_X (<ref>), and so they have the same eigenvalues.
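A possible numerical check of this condition is sketched below; the index placement in C_X is our reading of the displayed definition (rows and columns indexed by x and x', with the root entering through p_β(x'|x̂) and p_β(y|x̂)), and the tolerance is arbitrary:

```python
import numpy as np

def cx_matrix(t, dec, inv_enc, p_y_given_x):
    # C_X[x, x'] = sum_y p(y|x) p(y|x') p_beta(x'|xhat_t) / p_beta(y|xhat_t)
    return np.einsum('yx,yz,y,z->xz',
                     p_y_given_x, p_y_given_x, 1.0 / dec[:, t], inv_enc[t])

def critical_clusters(beta, dec, inv_enc, p_y_given_x, tol=1e-6):
    """Return the clusters xhat for which 1/beta is (numerically) an eigenvalue
    of C_X(xhat; beta), i.e. the candidates for merging at this beta."""
    T = inv_enc.shape[0]
    hits = []
    for t in range(T):
        eig = np.linalg.eigvals(cx_matrix(t, dec, inv_enc, p_y_given_x))
        if np.min(np.abs(eig - 1.0 / beta)) < tol:
            hits.append(t)
    return hits
```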
§ LYAPUNOV-STABILITY OF AN OPTIMAL IB ROOT
We provide the essential parts of a proof that an optimal IB root is Lyapunov uniformly asymptotically stable on closed intervals which do not contain a bifurcation when following the flow dictated by the IB's ODE (<ref>) in decreasing β.
Definitions for the below are as in <cit.> (see especially Section 4.2 there).
See Subsection <ref> for a discussion of the results below.
Let p^*(β) be an optimal IB root.
We start by rewriting it as an equilibrium of a non-autonomous ODE, as in <cit.>.
Consider the implicit ODE (<ref>), dp/dβ = -(D_p F)^{-1} D_β F, specialized to the IB by setting F := Id - BA_β (<ref>).
Denote δp := p - p^*, for an arbitrary p.
Subtracting the ODE at p from that at p^* yields a non-autonomous ODE in the error δp from the optimal root,
d(δp)/dβ =
(D_p F)^{-1} D_β F|_{p^*} - (D_p F)^{-1} D_β F|_{p^* + δp}
This rewrites the given root p^* as an equilibrium δp = 0 of this ODE (<ref>), simplifying the below.
Next, we define a Lyapunov function for the flow of the equilibrium δp = 0 along the ODE (<ref>), when its dynamics in β is reversed.
Consider the IB's Lagrangian ℒ_β := I(X; X̂) - β· I(Y; X̂) as a functional in p, and let ℒ_β^* := ℒ_β[p^*] be its optimal value at β. Then,
(ℒ_β - ℒ_β^*)(δp)
is the desired Lyapunov function.
Specifically, (i) ℒ_β - ℒ_β^* is positive definite and (ii) (ℒ_β - ℒ_β^*) is negative definite, when the dynamics in β are reversed.
Theorem 4.1 in <cit.> then implies that δp = 0 is uniformly asymptotically stable, <cit.>.
For (i), ℒ_β - ℒ_β^* (<ref>) is immediately seen to be positive semi-definite from the definition of ℒ_β^*, up to technicalities ignored here[ cf., <cit.>.].
The results of Subsection <ref> (after Proposition <ref>) imply that representing p in reduced log-decoder coordinates renders (<ref>) strictly positive definite.
Indeed, D(Id - BA_β) is non-singular in a reduced representation in these coordinates, as mentioned there, and so an optimal root p^* is locally unique.
As for condition (ii), from the definition of ℒ_β we have
d/dβ ℒ_β =
d/dβ I(X; X̂) - β d/dβ I(Y; X̂) - I(Y; X̂) =
- I(Y; X̂) ,
where d/dβ I(X; X̂) = β d/dβ I(Y; X̂) in the last equality follows by direct calculations similar to those in the Appendix of <cit.>.
Thus, for the β-derivative of (<ref>) we have
d/dβ (ℒ_β - ℒ_β^*)(δp) =
I(Y; X̂)|_{p^*} - I(Y; X̂)|_p .
The latter is always positive semi-definite around p^*, since by definition (<ref>) p^* yields the maximal Y-information subject to a constraint on the X-information.
The same argument as above shows that it is strictly positive definite.
Finally, reversing the dynamics in β leaves the ODE (<ref>) unaffected but flips the sign of (<ref>), rendering it negative definite as required.
§ INTRODUCING DEGENERACIES CANNOT INCREASE THE NULLITY OF THE IB OPERATOR IN DECODER COORDINATES
We show that evaluating the kernel of the IB operator on a degenerate representation cannot increase its nullity rank.
Let p∈Δ[Δ[𝒴]] be an IB root of effective cardinality T_1.
A T-clustered representation of a root (e.g., in decoder coordinates) is a function π: Δ[Δ[𝒴]] →R^(|𝒴| + 1)· T, defined on some neighborhood of the root.
In the other way around, one can consider the inclusion i: R^(|𝒴| + 1) · T→Δ[Δ[𝒴]], defined on normalized decoder coordinates in the obvious way.
Let π be a representation of p in its effective cardinality T_1, and π̃ a degenerate one on T_2 > T_1 clusters.
These satisfy
π = reduc ∘π̃
where reduc is the reduction map[ Defined similar to the root-reduction Algorithm <ref>, by setting its thresholds to zero, δ_1 = δ_2 = 0, and replacing its strict inequalities with non-strict ones. Note that Algorithm <ref> has a well-defined output for every input.].
In the other way around, one can pick a particular degenerating map degen (e.g., “split the third cluster to two copies of probability ratio 1:2”). Applying a particular degeneracy and then reducing is the identity,
reduc ∘ degen = Id ,
though not the other way around.
Let i and ĩ be the inclusions corresponding to π and π̃ respectively.
Similar to (<ref>), introducing degeneracy to a root has no effect before including it in Δ[Δ[𝒴]],
i = ĩ∘ degen
Recall from Subsection <ref> (before Conjecture <ref>) that BA_β in decoder coordinates may be considered as an operator on Δ[Δ[𝒴]].
To summarize, we have the following diagram,
@C=4em@R=.5emR[rr]^(.43)p Δ[Δ[𝒴]] @(ul, ur)^BA_β@(r,l)[drr]^π̃@(dr,l)[ddddrr]^π
R^(|𝒴| + 1) · T_2@<5pt>[ddd]^(.45)reduc@(u,ur)[llu]_ĩ
R^(|𝒴| + 1) · T_1@(dl,dl)[uuuull]^i @<5pt>[uuu]^(.55)degen
Next, consider the representations of the IB operator Id - BA_β (<ref>) on T_1 and T_2 clusters.
These amount to pre-composing with the inclusions and post-composing with the representation maps.
Denote by Id_i the identity operator on R^(|𝒴| + 1) · T_i.
By identities (<ref>), (<ref>) and (<ref>), we have
Id_1 - π∘ BA_β∘ i =
reduc ∘ degen - reduc ∘π̃∘ BA_β∘ĩ∘ degen
=
reduc ∘[ Id_2 - π̃∘ BA_β∘ĩ] ∘ degen
Differentiating, by the chain rule we have
D( Id_1 - π∘ BA_β∘ i ) =
D(reduc) D(Id_2 - π̃∘ BA_β∘ĩ) D(degen) .
Multiplying matrices can only enlarge the kernel, dim ker(AB) ≥ dim ker(A), and so
dim ker D( Id_1 - π∘ BA_β∘ i ) ≥ dim ker D(Id_2 - π̃∘ BA_β∘ĩ)
Thus, introducing degeneracies to the IB operator in decoder coordinates cannot increase its nullity rank.
entry_id: http://arxiv.org/abs/2306.10938v1
published: 20230619135631
title: Quantitative Parameter Reconstruction from Optical Coherence Tomographic Data
authors: Leopold Veselka, Peter Elbau, Leonidas Mindrinos, Lisa Krainz, Wolfgang Drexler
primary_category: math.NA
categories: math.NA, cs.NA
^1 University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria
^2 Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna, Austria
^3 Agricultural University of Athens, Department of Natural Resources, Development and Agricultural Engineering, Athens, Greece
Quantitative tissue information, like the light scattering properties, is considered a key factor in the detection of cancerous cells in medical diagnosis. A promising method to obtain these data is optical coherence tomography (OCT). In this article, we will therefore discuss the reconstruction of the refractive index from OCT data, employing a Gaussian beam based forward model.
We consider in particular samples with a layered structure, meaning that the refractive index as a function of depth is well approximated by a piece-wise constant function. For the reconstruction, we present a layer-by-layer method where in every step the refractive index is obtained via a discretized least squares minimization. For an approximated form of the minimization problem, we present an existence and uniqueness result.
The applicability of the proposed method is then verified by reconstructing refractive indices of layered media from both simulated and experimental OCT data.
§ INTRODUCTION
Optical coherence tomography is a non-invasive, high-precision imaging modality with micrometer resolution based on the interferometric measurement <cit.> of backscattered light. Since its invention in the 1990s <cit.>, different optical coherence tomographic systems, see <cit.>, for example, have been developed. All share the same basic working principle, which can be described as follows:
Laser light in the near infrared region is sent into the system. A beamsplitter then splits the light into two parts: one is directed into the sample arm and the second into the reference arm. Depending on the type of the system, the object in the sample arm is then either raster scanned, meaning that depth profiles on a lateral grid across the object are obtained, or illuminated at once. In both cases, the light backscattered by the object is coupled into the system again and transported to the detector. There, the combined intensity of the light from the sample arm and the reference arm, where the light is backreflected by a perfect mirror, is detected.
Due to its (ultra-)high resolution and its very fast acquisition rate of up to more than 50000 raster scans per second, OCT, together with its adaptations and extensions like polarization-sensitive OCT (PS-OCT) <cit.>, optical coherence angiography <cit.> or optical coherence elastography (OCE) <cit.>, has become an outstanding technology for imaging of biological tissues, especially in the field of ophthalmology.
Nowadays, the structural information which is provided by such an OCT system is already successfully used for medical diagnosis. However, the obtained information is mainly of a qualitative nature. For medical diagnosis it is worthwhile to have supplementary quantitative information, like the optical (scattering) properties of the object of interest, which are considered future key markers in the field of medical diagnosis.
The quantification of such physical properties, in our case the refractive index, from OCT data, commonly represents an ill-posed problem, which is interesting from a mathematical perspective. This is the scope of this article. In its most general form, the parameter quantification in (Fourier domain based) OCT is a severely ill-posed problem dealing with the reconstruction of the optical properties, which we describe by the refractive index which is a function of space (three dimensions) and wavenumber (one dimension), from maximally three-dimensional (two spatial dimensions and one for the wavenumber) measurement data <cit.>.
The insufficient amount of available data has driven the need for modeling possibilities of the direct problem, which are commonly expressed by assumptions on the object. A general overview of existing modeling possibilities for the direct problem in OCT is presented, for example, in <cit.> or <cit.>.
The inverse problem in OCT can be seen as an inverse electromagnetic scattering problem, especially if one considers the ideal case of having direct access to the backscattered field. It has been treated, also non-specifically related to OCT, in a theoretical context, for example, in <cit.> and under a number of (simplifying) assumptions, like having a weakly scattering medium, in <cit.>.
The aim of this article is to provide a reconstruction method from experimental data obtained by a swept-source OCT system, a specific form of Fourier domain OCT, which is based on a raster scanning of the object with focused and practically monochromatic laser light centered at multiple wavelengths. Due to this raster scanning process, the (inverse) scattering problem can be restricted to the narrow illuminated region, where the refractive index is typically considered only as a depth dependent function.
One of the very first attempts on reconstructing a depth dependent refractive index from an OCT depth profile by Fourier transform has been presented in <cit.>. The reconstruction under the, in this case, crucial assumption of a weakly scattering object (so that the Born approximation of the electromagnetic wave equation is valid) has also been treated in <cit.>, for example, where additionally a Gaussian beam model for the incident laser light has been employed.
Another assumption on the object, which substantially simplifies the mathematical model, is that the object shows a multi-layer structure. This case typically can be identified in OCT images of the human retina <cit.> and in human skin imaging. For such a multi-layer structure, at least locally in the illuminated spot, the depth-dependent refractive index can be simplified to a piecewise constant function. The corresponding inverse problem has been examined in <cit.> and in <cit.> where additional inclusions have been discussed.
In this article, we consider the multi-layer structure of the sample with a Gaussian beam model, presented in <cit.>, which resembles more efficiently the laser light illumination. Within this context, we treat the inverse problem of quantifying the refractive index by using a layer-by-layer method, where in each step the reconstruction concentrates on a pair of parameters, the refractive index and the width of the layer.
Hereby, the reconstruction is formulated as an ℓ^2-minimization problem in the Fourier domain between the Gaussian beam prediction model and the data. That is, we match the absolute values of both in order to avoid high-frequency components which are typically present. In <cit.>, the reconstruction within this setting for the case of a plane wave incident field only gave a unique solution when artificially adding plausible bounds on each parameter. We can omit these bounds in our analysis on the existence and uniqueness of solutions of the present minimization problem.
The analysis itself is complicated by the compact form of the Gaussian model, where the direction-dependent reflection coefficients containing the information about the refractive indices hinder a deeper investigation. For this reason, the analysis is divided into two parts: Firstly, we consider an approximation of the original model, where the reflection coefficient is assumed to be direction-independent. We show that the modeling error caused by this approximation is small and bounded from above. For the approximation we then show the existence and the uniqueness of a solution under the strong but necessary condition that the data is in the range of the prediction.
The distance in every step is obtained by the biggest overlap of the prediction model with the data once the refractive index has been calculated.
In order to support our arguments, we show the functionality of the proposed method by reconstructing the refractive indices from both simulated and experimental data. While reconstructions from simulated data, partially under very theoretical assumptions, have been discussed several times, the reconstruction from experimental data has been treated fairly rarely.
The assumption that the object has a layered structure is a very strong one, which pushes the inverse problem toward well-posedness. Hence, regularization methods, which are typically associated with noisy data, are not considered in this context.
The outline of this article is as follows: In <ref> and <ref> the major parts of the Gaussian beam model introduced in <cit.> are summarized. The section ends by giving an explicit representation of a single OCT depth profile. Based on this representation, we introduce the inverse problem which we want to treat layer-by-layer. We discuss this in <ref>. That the actual reconstruction can be formulated as a layer-by-layer method is justified in <ref> and its reformulation as ℓ^2-minimization problem is shown in <ref>. Before we present in <ref> a characterization of existence and uniqueness of the solutions to an approximated version of the original problem, we show that the approximation error between the functionals is almost negligible. We conclude the section on the refractive index quantification by presenting a purely theoretical method to also retrieve uniqueness. The analysis on the minimum with respect to the width is presented in <ref> in form of purely visual arguments by showing the behavior of the minimization functional with respect to the width. There, we consider simulated data with and without noise.
Finally, in <ref> the results of the reconstruction from simulated and experimental data of a three-layer object are presented.
§ OCT FORWARD MODEL
The image formation in standard OCT systems nowadays is based on raster scanning the object of interest. This means that strongly focused laser light is directed by movable galvanometric mirrors to spots located on an imaginary lateral (horizontal) grid on the object's top surface. For every position then an OCT measurement, a depth profile along the vertical axis through the spot showing the microstructure inside the object, is recorded.
Recent publications in the field of OCT have pointed out that modeling such focused laser light by a single plane wave is not sufficient to capture several system-relevant aspects. The strong influence of the focus position and of the beam width, the diameter of the laser beam in the focal spot, for example, are modeling aspects which are lost when simulating the measurements of an OCT system using single plane waves. For this reason we base our analysis on a Gaussian beam model for the incident field, which models laser light accurately. Hereby, we adapt for our purpose the forward model which has been presented in <cit.>. There, a model is provided which includes all relevant system parameters and allows a good approximation of an actual measurement.
In this section we summarize briefly the main parts of an OCT system, namely the light scattering, the fiber coupling and finally the measurement. Hereby, we use the fact that the object is raster scanned, which allows us to model in the following the light propagation for each raster scan independently. At the end of this section we provide an equation for a single raster scan (A-scan) data. This will be used as the data for the corresponding inverse problem.
The main parts are specifically modeled for a swept-source OCT system <cit.>, since the data used for the numerical experiments in <ref> has been obtained by this system type. Within such a system almost monochromatic laser light centered at multiple wavelengths in a certain spectrum is used for the illumination of the object. For each of these wavelengths the scattered light is detected by a narrow-bandwidth interferometer. This finally gives a complete measurement for each wavelength within the spectrum.
§.§ Field of incidence
We model the light propagation from the point where the laser light has entered the sample or the reference arm, respectively. The presentation is restricted to the modeling of the sample arm. The reference arm, which is modeled analogously with the difference that the object is a perfectly reflecting mirror, is considered as a special case.
For each raster scan we model the incident illumination as an electromagnetic wave Ê:×^3→^3, a function of the wavenumber k∈ and the spatial coordinate x∈^3, satisfying the system of Maxwell's equations in vacuum, that is (in Fourier space)
ΔÊ(k,x) + k^2 n_0^2 Ê(k,x) = 0 and ∇·Ê(k,x)=0 for all k∈, x∈^3,
where n_0 = 1 is the free-space refractive index. Since we consider a swept-source OCT system, Ê is assumed to fulfill the support condition
supp Ê( · ,x) ⊂ (k_0-ϵ_0,k_0+ϵ_0) for all x∈^3,
for a sufficiently small parameter ϵ_0 > 0. The experiment is repeated for multiple wavelengths k_0 in a spectrum 𝒮 = [k_1, k_2].
The delta-like support in wavenumber allows us to model each experiment, including the backscattering of the light by the sample, independently. This means we can solve the Helmholtz equation for the electric field for each k∈𝒮 separately. However, there is no difference to the mathematical formulation for a broadband illumination, an electric field which solves the Helmholtz equation for all k∈𝒮 simultaneously. Hence, we formulate everything simply for a broadband illumination.
In order to specify the Gaussian beam for the field of incidence, we impose additionally an initial condition at a hyperplane {x∈^3 | x_3 = r_0}, which we locate in the focus of the beam. Hence, by r_0∈ we denote the focus position, which we always assume to be along the vertical line e_3, where (e_i)_i=1^3 denotes the standard basis of ^3. For a function f_k:^2→ with compact support in D_k(0)⊂^2, the ball with radius k and center zero, and a vector η∈𝕊^1×{0}, we then write
ℱ_x̅(Ê)(k,κ,r_0) = f_k(κ) η for all κ∈^2,
where x̅ = (x_1, x_2) and where
ℱ_x̅(u)(κ) = ∫_^2 u(x̅) e^-i x̅·κ d x̅
denotes the two-dimensional Fourier transform with respect to the variable x̅.
We consider in the following illumination from the top only. Hence, we eliminate one major propagation direction from the obtained solution Ê <cit.> to the combined problem of <ref> and <ref> and consider only downward propagating waves, that is in direction -e_3. We denote this part by E, which for any fixed wavenumber k∈𝒮 is then represented by
E(k,x̅,x_3) = 1/4π^2∫_D_k(0) g(κ) e^-i √(k^2-|κ|^2)(x_3-r_0) e^i κ·x̅ d κ, (x̅,x_3)∈^3
where
g(κ) = 1/2 f_k(κ) (η - (η_3 - κ·η̅/√(k^2-|κ|^2))e_3), η̅=(η_1,η_2).
The incident wave in <ref> represents a weighted superposition of plane waves e^i K· x, where the wave vector K represents the propagation direction of each plane wave. We note that every wave direction K in <ref> is implicitly given as a function of κ, that is
K(κ) = (κ,-√(k^2-|κ|^2)).
We suppress this dependence in the following.
We complete the Gaussian beam incident field by specifying the weight function f_k in the focal plane. Hereby, we use Gaussian weights only.
In the following, we restrict our attention to Gaussian distributions, meaning that we consider in <ref> functions of the form
f_k(κ) = e^-|κ|^2 a, for all κ∈^2,
where a>0 is such that ‖f_k - χ_D_k(0)f_k‖_L^1(^2), where χ_𝒰 is the characteristic function of a set 𝒰, is almost negligible.
The Gaussian distribution in this case emphasizes only those directions κ∈ D_k(0) which have a small radial deviation from zero; all others are effectively suppressed in the integral in <ref>.
Hence, we can consider the quotient |κ|/k as a small parameter for which the paraxial approximation expressed by the linearization of the square-root
√(k^2-|κ|^2) = k - |κ|^2/2k + o(|κ|^2/k)
is valid. We want to include this fact also in the representation of the polarization vector.
Restricting to η = e_2, we consider an approximated polarization vector
g̃(κ) = 1/2 f_k(κ) e_2, κ∈^2,
with f_k as in <ref> in our model, which reduces the electric field to its transverse components.
We combine <ref> with <ref> and <ref> and denote the resulting incident field by E^(0) = Ẽ^(0) e_2, with scalar complex field
Ẽ^(0)(k,x) = 1/8π^2∫_D_k(0) e^-|κ|^2 ae^i √(k^2-|κ|^2)r_0 e^i K· x d κ, k∈, x∈^3.
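To make the plane-wave superposition above concrete, the following small Python sketch evaluates the scalar field Ẽ^(0) by a direct quadrature over the disk D_k(0). The wavenumber k, the beam parameter a, the focus position r_0 and the quadrature resolution are illustrative assumptions, not the calibrated system values.

import numpy as np

def incident_field(x, k=4.83e6, a=2.5e-11, r0=0.0, n_quad=401):
    """Scalar Gaussian-beam field E~^(0)(k, x) via midpoint quadrature of the
    plane-wave superposition over the disk D_k(0); x = (x1, x2, x3) in metres."""
    # the Gaussian weight exp(-|kappa|^2 a) is negligible beyond about 5/sqrt(a)
    kap_max = min(k, 5.0 / np.sqrt(a))
    kap = np.linspace(-kap_max, kap_max, n_quad)
    dkap = kap[1] - kap[0]
    K1, K2 = np.meshgrid(kap, kap, indexing="ij")
    kz = np.sqrt(np.maximum(k**2 - K1**2 - K2**2, 0.0))
    integrand = (np.exp(-(K1**2 + K2**2) * a)          # Gaussian weight f_k
                 * np.exp(1j * kz * r0)                # phase fixing the focus
                 * np.exp(1j * (K1 * x[0] + K2 * x[1] - kz * x[2])))
    integrand[K1**2 + K2**2 >= k**2] = 0.0             # restrict to D_k(0)
    return integrand.sum() * dkap**2 / (8 * np.pi**2)

# field amplitude on the beam axis in the focal plane
print(abs(incident_field(np.array([0.0, 0.0, 0.0]))))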
§.§ Light scattering
The field in <ref> irradiates the sample, which we denote by Ω. We characterize the optical properties of Ω by the refractive index represented by a function n:Ω⊂^3→ [1,∞). The assumption that n is a real function refers to the case of a sample where the scattering dominates over the (thus neglected) absorption. Additionally, because of the small bandwidth of the laser, we assume that n is constant with respect to the wavenumber in the spectrum 𝒮.
The main assumption however is that Ω shows a multi-layered structure, meaning that Ω consists of a finite union of subsets Ω_j, where each is characterized by a homogeneous refractive index n_j≥ 1, with n_j≠ n_l, l∈{j-1, j+1}. We claim that all layers are parallel and therefore share a single unit normal vector ν_Ω = (sinθ_Ω,0,cosθ_Ω), for a small angular value of θ_Ω∈, which we assume for simplicity to be orthogonal to the polarization vector η=e_2 and which is pointing outward the object.
To summarize, we consider a sample given by
Ω = ⋃_j=1^J Ω_j, Ω_j = {x∈^3 | a_j+1≤( x·ν_Ω)≤ a_j}, n(x) = ∑_j=1^J χ_Ω_j(x) n_j,
for a sequence of coefficients (a_j)_j⊂. The total number of layers J∈ hereby is an unknown but arbitrary finite number. We additionally assign to every layer Ω_j its width, which is given by the positive real number d_j = a_j - a_j+1.
The light E:×^3→^3 in presence of the sample, <ref>, is then modeled as a solution of the vectorial Helmholtz equation for all x∈^3:
∇×(∇× E)(k,x) - k^2 ñ(x)^2 E(k,x) = 0 with ñ(x)= n_0, x∈^3∖Ω,
n(x), x∈Ω.
By taking the divergence of this equation, we see that the assumption of piecewise homogeneous layers implies that ∇· E=0 in every set Ω_j, so that ∇×(∇× E)=-Δ E and the equation reduces to the simpler Helmholtz equation (see <ref>) for the second component E_j (the first and third components vanish by our choice of incident field) of the field E inside the layer Ω_j, similar to <cit.>, where k^2n_0^2 is replaced by k^2 ñ^2. At the boundaries between the single layers, we claim that the homogeneous parts of the electric field fulfill the continuity conditions
E_j(k,x) = E_j+1(k,x), ∇ E_j(k,x)·ν_Ω = ∇ E_j+1(k,x) ·ν_Ω,
for all x∈∂Ω_j ∩∂Ω_j+1 and j∈{1,…,J}. We finally denote by E^(s) the second component of the backscattered field E - E^(0) from the object.
Since the Fresnel equations allow us to get an explicit solution for the case of one discontinuity, we can iteratively construct a solution for a finite number of layers as a series of Fresnel solutions, which physically corresponds to multiple light reflections at the boundaries, see <cit.>.
Since multiply reflected light contributes only marginally, we neglect it in the model for the backreflected sample field. However, we keep in mind that these multiple reflections, at least up to a finite order, can be added to the model at any time. The assumption of a single reflection model makes it possible to use the transmitted part of the light at a certain layer directly as an incident field for the next layer.
A layer-by-layer scheme, similar to the one in <cit.>, allows us to give an explicit representation of the backscattered field E^(s). The incident field hereby is decomposed into its plane wave parts. For each, we use the Fresnel formulas, see <cit.>, to determine the reflection coefficients (as functions of the propagation direction of the incoming plane wave) which we denote by r_j:^2→ for every interface ∂Ω_j-1∩∂Ω_j (with Ω_0=Ω_J+1=^3∖Ω).
Finally, the reflected fields are combined again and we obtain for x, with x_3 > x_Ω,3, where x_Ω is a point on the top surface {x_Ω∈^3:x_Ω·ν_Ω=a_1}, the backscattered field
E^(s)(k,x) = 1/8π^2∑_j=1^J+1∫_^2 r_j(κ) (r_≤ j-1(κ) e^i k Ψ_j(κ)) e^-|κ|^2 a
× e^i √(k^2-|κ|^2)r_0 e^i (K-Φ(K))· x_Ω e^i Φ(K)· x d κ,
with
r_j(κ) = n_j-1cosθ^j-1_t(κ) - √(n_j^2-n_j-1^2+n_j-1^2 cos^2θ^j-1_t(κ))/n_j-1cosθ^j-1_t(κ) + √(n_j^2-n_j-1^2+n_j-1^2 cos^2θ^j-1_t(κ))
and where we denote by K=K(κ)=(κ,-√(k^2-|κ|^2)) the wave vector (introduced in <ref>) and by Φ(K)=K - 2( K·ν_Ω)ν_Ω the wave direction of the reflection of a plane wave with incident vector K at an interface with unit normal vector ν_Ω. Moreover, we introduced for j≥ 1 the transmission coefficients and the (transmission) phase factors
r_≤ j-1(κ) = ∏_l=1^j-1 (1-r_l^2(κ)), Ψ_j(κ) = 2∑_l=1^j-1 n_l d_l cosθ^l_t(κ),
where the angles θ^j_t(κ) of transmission between the layers j and j+1 for an incident plane wave with wave vector K(κ) can be iteratively calculated via Snell's law
θ^j_t(κ) = arcsin( (n_j-1/n_j) sin(θ^j-1_t(κ)) ) with θ^0_t(κ) = arccos(-(K/|K|)·ν_Ω).
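For later reference, the following hedged Python helper sketches this recursion, returning for each interface the reflection coefficient r_j, the combined transmission factor r_≤ j-1 and the phase factor Ψ_j for one fixed direction κ/k; the exit interface at the bottom of the sample is omitted for brevity. The glass/water/glass values used in the example are the phantom values quoted in the numerical experiments; everything else is an illustrative assumption.

import numpy as np

def interface_quantities(kappa_tilde, n, d, theta_Omega=0.0):
    """Reflection coefficient r_j, cumulative transmission r_{<=j-1} and phase
    factor Psi_j at the interfaces j = 1..J for one direction kappa_tilde = kappa/k.
    n = [n_0, n_1, ..., n_J] (ambient index first), d = [d_1, ..., d_J] in metres."""
    nu = np.array([np.sin(theta_Omega), 0.0, np.cos(theta_Omega)])
    k_dir = np.array([kappa_tilde[0], kappa_tilde[1],
                      -np.sqrt(1.0 - kappa_tilde[0]**2 - kappa_tilde[1]**2)])
    theta = [np.arccos(-k_dir @ nu)]            # theta_t^0: angle of incidence
    r, r_leq, psi = [], [1.0], [0.0]
    for j in range(1, len(n)):
        c = np.cos(theta[j - 1])
        root = np.sqrt(n[j]**2 - n[j - 1]**2 + (n[j - 1] * c)**2)
        r.append((n[j - 1] * c - root) / (n[j - 1] * c + root))   # r_j
        # Snell's law: angle of transmission inside layer j
        theta.append(np.arcsin(n[j - 1] / n[j] * np.sin(theta[j - 1])))
        r_leq.append(r_leq[-1] * (1.0 - r[-1]**2))                # r_{<= j}
        psi.append(psi[-1] + 2.0 * n[j] * d[j - 1] * np.cos(theta[j]))
    return np.array(r), np.array(r_leq[:-1]), np.array(psi[:-1])

# three-layer glass/water/glass phantom from the experiments section
n = [1.0, 1.5088, 1.3225, 1.5088]
d = [0.174e-3, 0.186e-3, 0.173e-3]
print(interface_quantities(np.array([0.01, 0.0]), n, d))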
We remark that since |K(κ)| = k, the quotient
K(κ)/|K(κ)| = (κ/k,-√(1-|κ|^2/k^2))
in the definition of the angle of incidence θ_t^0 (in <ref>) is actually a function of the dimensionless parameter κ/k. This implies that the same is true also for the angle of transmission θ_t^j and the phase factor Ψ_j. Hence, we will always write Ψ_j(κ/k) in the following.
§.§ Fiber coupling
The backscattered light is then coupled into a single-mode fiber and later transferred to the detector. This coupling is done by using a scan lens which discards all (plane wave) parts in <ref> whose wave direction strongly deviates from the third unit vector e_3. The maximal deviation is described by the (maximal) angle of acceptance, which we denote by θ. The set of incident directions yielding an accepted wave vector is then given by
ℬ = {κ∈^2 | arccos( 1/k(Φ(K)· e_3 ) ) ≤θ}⊂^2.
This means that the area of integration in <ref> is reduced to the set ℬ⊂^2.
Because of the restriction to this set, which is small in typical cases, the deviation in the directions κ is limited. Hence, the reflection coefficients also vary only slightly with κ, which motivates us to consider an approximated form of <ref>, especially in the context of the inverse problem in <ref> and <ref>, where the reflection coefficients are direction-independent.
For example, if θ_Ω = 0, the accepted directions are given by ℬ = D_k sinθ(0), the ball centered at zero with radius ksinθ. The inclination angle of K(κ) with ν_Ω is maximal if |κ| = k sinθ, which means that κ is on the boundary of ℬ. The corresponding reflection coefficient is given by
r(κ) = n cosθ^0_t(κ) - √((n')^2-n^2sin^2(θ_t^0(κ)))/n cosθ^0_t(κ) + √((n')^2-n^2sin^2(θ_t^0(κ))) = r̃(θ^0_t(κ))
(for a single boundary reflection between n and n'), which we then can write as a function of the inclination angle θ^0_t. For |κ| = ksinθ, we get θ_t^0=θ and we can use the approximation
r̃(θ) ≈n cosθ - n' + n^2n'(1-cos^2θ)/n cosθ + n' - n^2n'(1-cos^2θ).
Under the assumption of cosθ≈ 1, which is a good approximation in our setting as θ is around 2^∘, the reflection coefficient is approximated by the constant n-n'/n+n', the classical Fresnel coefficient <cit.>.
In general, the angular values of θ and θ_Ω are considered to be small, which allows us to assume that the variation of the reflection coefficient on ℬ is also well approximated by a constant coefficient. Next, we show that for small angular values the difference between the reflection coefficient (as a function of the angle of incidence), see <ref>, and its approximating constant tends to zero quadratically.
Let n,n'∈ and let r^† = n-n'/n+n'∈. Further, let r:[-π/2,π/2]→ be defined by
r(y) = ncos(y) - √(n'^2-n^2 + n^2 cos^2(y))/ncos(y) + √(n'^2-n^2 + n^2 cos^2(y)).
Then for small values of y we have that
lim_y→ 0|r^†- r(y)/y| = 0.
We determine the Taylor expansion of r locally around the point y_0 = 0. Obviously, r(y) = r(-y), for all y∈, which makes r an even function. Thus, for its Taylor expansion we expect only even exponents.
Estimating now the difference between r^† and r for small values of y gives
|r^†- r(y)| ≤|n^3-2n n'^2|/2|n'|(n+n')^2 y^2,
which proves the result.
Applying this formula in our context, we obtain the strongest deviation from r^† on the (closed) set ℬ in the boundary point κ with maximal angular deviation θ_0= θ + 2 θ_Ω from the unit vector -ν_Ω. Thus, the maximal error when approximating r by r^† is of the order (θ +2 θ_Ω)^2. We silently assume that the refraction at the deeper interfaces does not change the angle too much, so that this approximation remains valid at all interfaces.
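As a small numerical illustration of this quadratic decay (not part of the original analysis), the following snippet evaluates |r^† - r(y)|/y^2 for an air-glass interface at a few small angles; the ratio stays bounded, in line with the estimate above.

import numpy as np

n, n_prime = 1.0, 1.5088
r_dag = (n - n_prime) / (n + n_prime)

def r(y):
    root = np.sqrt(n_prime**2 - n**2 + (n * np.cos(y))**2)
    return (n * np.cos(y) - root) / (n * np.cos(y) + root)

for deg in (1.0, 2.0, 4.0):
    y = np.deg2rad(deg)
    print(f"y = {deg:3.0f} deg :  |r_dag - r(y)| / y^2 = {abs(r_dag - r(y)) / y**2:.4f}")
# the ratio stays bounded, consistent with the O(y^2) estimate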
§.§ OCT measurement
At the detector, an interference pattern of the backscattered field from the sample combined with a reference field is measured. This reference field is obtained by the reflection of the incident light by a perfectly reflecting non-tilted mirror, which is modeled as a semi-infinite object with (infinitely) large refractive index. We assume that the mirror is located around the focus of the beam where the far-field approximation <cit.> is considered to be valid <cit.>.
For the (measurement) direction s = e_3 and the distance ρ∈, 1≪ρ, we then obtain an interference pattern
𝒞(k)=(E^(s)(k,ρ e_3)E_∞^(0)(k,ρ e_3))
where E_∞^(0)(k,ρ e_3) denotes the far field approximation of the reflected incident field. Using the representation of the backscattered field, we obtain
𝒞(k) = -k/16π^3 ρ∑_j=1^J+1∫_ℬ(r_j(κ) r_≤ j-1(κ)) e^-|κ|^2 a sin(k(-|κ|^2 ψ_0/(2k^2) + (κ_1/k)ψ_1 + Δ_0 + Ψ_j(κ/k)))d κ,
where we used the paraxial approximation, see <ref>. The elements
ψ_0 = r_0 - ρ - 2 cos^2(θ_Ω) (x_Ω,3-ρ), ψ_1 = sin(2θ_Ω)(x_Ω,3-ρ)
and Δ_0 describe the position of the object with respect to the focus and the phase difference between the sample and the reference mirror respectively, see <cit.>.
The backscattered light is recorded for each wavenumber k∈𝒮 independently. Collecting these single measurements, leads to the measurement 𝒞 as a function of the wavenumber.
The interference pattern in <ref> hereby represents the OCT measurement comprising one A-scan of the object, which is the center of interest in the OCT forward and inverse problem.
We want to assume now that all the parameters of the system are known and that the unknown is the refractive index function n of the medium.
With the forward operator ℐ^J+1×^J→ L^2(𝒮)
ℐ[(n_j)_j=1^J+1,(d_j)_j=1^J](k) = -k/16π^3 ρ∑_j=1^J+1∫_ℬ(r_j(κ) r_≤ j-1(κ)) e^-|κ|^2 a
×sin(k (-|κ|^2ψ_0/(2k^2) + (κ_1/k)ψ_1 +Δ_0 + Ψ_j(κ/k)))d κ,
we can thus write the inverse problem of OCT as the determination of (n_j)_j=1^J+1 and (d_j)_j=1^J from the measurements 𝒞 via the non-linear equation
ℐ[(n_j)_j=1^J+1,(d_j)_j=1^J](k) = 𝒞(k), k∈𝒮.
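A minimal numerical sketch of this forward map is given below for the special case θ_Ω = 0 (then ℬ is the disk of radius k sinθ and ψ_1 = 0, so the κ-integral becomes radial). All system parameters, the quadrature and the spectrum are illustrative assumptions; only the phantom indices and widths are the values quoted later in the experiments.

import numpy as np

def forward_A_scan(k_grid, n, d, a=2.5e-11, psi0=0.05, Delta0=2.0e-3,
                   rho=0.1, theta=np.deg2rad(2.0), n_rad=200):
    """OCT spectrum C(k) on k_grid for a layered object with indices
    n = [n_0,...,n_J] (ambient first) and widths d = [d_1,...,d_J]."""
    C = np.zeros_like(k_grid)
    for i, k in enumerate(k_grid):
        s = np.linspace(0.0, k * np.sin(theta), n_rad)        # |kappa|
        sin0 = s / k                                          # sin(theta_t^0)
        val = 0.0
        r_leq, Psi = np.ones_like(s), np.zeros_like(s)
        for j in range(1, len(n) + 1):
            n_prev = n[j - 1]
            n_next = n[j] if j < len(n) else n[0]             # exit into ambient
            cos_prev = np.sqrt(1.0 - (sin0 * n[0] / n_prev) ** 2)
            root = np.sqrt(n_next**2 - n_prev**2 + (n_prev * cos_prev) ** 2)
            r_j = (n_prev * cos_prev - root) / (n_prev * cos_prev + root)
            phase = -s**2 * psi0 / (2 * k**2) + Delta0 + Psi
            val += np.sum(r_j * r_leq * np.exp(-s**2 * a)
                          * np.sin(k * phase) * s) * (s[1] - s[0])
            if j < len(n):                                    # descend one layer
                cos_j = np.sqrt(1.0 - (sin0 * n[0] / n_next) ** 2)
                Psi = Psi + 2 * n_next * d[j - 1] * cos_j
                r_leq = r_leq * (1.0 - r_j**2)
        C[i] = -k / (16 * np.pi**3 * rho) * 2 * np.pi * val
    return C

# three-layer glass/water/glass phantom, swept-source spectrum around 1300 nm
n = [1.0, 1.5088, 1.3225, 1.5088]
d = [0.174e-3, 0.186e-3, 0.173e-3]
k_grid = np.linspace(2 * np.pi / 1313.76e-9, 2 * np.pi / 1282.86e-9, 1498)
C = forward_A_scan(k_grid, n, d)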
§ THE PHYSICAL PARAMETERS
So far <ref> together with <ref> has been formulated as a general inverse scattering problem for interferometric measurement data. We now want to simplify the equations by plugging in the typical values of the parameters in an OCT system and keeping only terms of significant size.
In <ref>, we listed the working parameters of the OCT system used for our data acquisition. These are kept fixed for the whole modeling process and also the inverse problem.
In particular, we have a rather narrow bandwidth, a small beam width, an almost normal incident angle, a small angle of acceptance, and we measure at a comparatively large distance to the object. This leads to some combinations of and relations between these parameters, which are small compared to one and which we will often simply neglect in the following analysis.
We assume that the following quantities can be considered sufficiently small so that they can be safely neglected in the model.
* In general, we assume the angular values θ and θ_Ω to differ only slightly from zero, which allows us to consider both quantities as small parameters and to keep in the following only terms which are linear with respect to them.
* We assume that the position of the detector is sufficiently far away from the object, which implies that in the relation in <ref> the distance between the focus and the object is dominated by the distance between the object and the detector, yielding that ψ_0 ≈ρ-x_Ω,3, with ρ≫ x_Ω,3. In particular, we assume that the ratio k a/ψ_0 between half the beam width k√(a) (measured in multiples of the averaged wave length) and the distance ψ_0/√(a) to the detector (in multiples of half the beam width) is negligible.
* Similarly, we assume that the distance Δ_0 between the object in the sample arm and the mirror in the reference arm is so large that the difference k_2Δ_0-k_1Δ_0 in multiples of the wave lengths between measuring it with the maximal wave vector k_2 and the minimal k_1 is large, so that we can assume ( kΔ_0)^-1 to be small.
* The ratio between the bandwidth 2 k of the spectrum and its center k is assumed to be so small that it is enough to keep the zeroth order in k k^-1.
* Finally, we assume that the beam width is so small that the deviation k√(a)sin(θ) of the tilt measured with respect to the different wave lengths in the spectrum is sufficiently small to neglect all terms of higher than linear order therein.
We summarize all these small quantities and their values in our experimental setup in <ref>.
§ A LAYER-BY-LAYER METHOD FOR THE INVERSE PROBLEM
To avoid having to solve for all the 2J+1 parameters in <ref> at once, we want to try to split the reconstruction, as it was also done in <cit.>, for example, into the subproblems
ℐ_j[n_j,d_j-1](k) = 𝒞^j(k), for j=1,…,J,
where ℐ_j×→ L^2(𝒮) is the jth term in the sum of <ref>, which corresponds to the contribution from the reflection at the boundary between the layers Ω_j-1 and Ω_j:
ℐ_j[n_j,d_j-1](k) = -k/16π^3 ρ∫_ℬ r_j(κ) r_≤ j-1(κ)
× e^-|κ|^2 a sin(k (-|κ|^2ψ_0/(2k^2) + (κ_1/k)ψ_1 +Δ_0 + Ψ_j(κ/k)))d κ.
Since also the coefficients (n_l)_l=1^j-1 and (d_l)_l=1^j-2 from the previous interfaces appear in ℐ_j via the combined reflection coefficients r_≤ j-1 and the combined phase factor Ψ_j of the transmissions at the previous interfaces, we want to proceed iteratively by recovering first n_1 from <ref> for j=1, and then obtain (n_j,d_j-1) from the jth problem in <ref> after having already recovered (n_l)_l=1^j-1 and (d_l)_l=1^j-2 from the previous ones.
The difficulty hereby is, however, that we a priori do not have access to the corresponding measurements 𝒞^j. To get an approximation for these, we perform a Fourier transform of <ref> with respect to the wave number k (we extend the function from 𝒮 to by zero)
ℱ_k(u)(z) = 1/√(2π)∫_ u(k) e^-ikz dk
and obtain
k√(2/π)∑_j=1^J+1(sinc( k . )e^-i k . )*_zℱ_k(ℐ_j) = k√(2/π)(sinc( k . )e^-i k . )*_zℱ_k(𝒞),
with k and k as in <ref> and where we call the dual variable to the wave number k the optical distance z and write *_z for the convolution with respect to z, see <ref>. This is a combination of sinc-functions (defined by sinc(z) = sin(z)/z), which are centered at the frequencies
Δ̃_j(κ̃) = -|κ̃|^2ψ_0/2 + κ̃_1ψ_1 + Δ_0 +Ψ_j(κ̃), κ̃= κ/k, j=1,…,J, κ∈ℬ,
of the sine under the integral in <ref>. Since ℬ is a small disk close to the origin, we get in a very rough approximation
Δ̃_j(κ̃)-Δ̃_j-1(κ̃) ≈Δ̃_j(0)-Δ̃_j-1(0) = 2n_j-1d_j-1,
so that the distance between the peaks of the sinc-functions corresponds in zeroth order to twice the optical path length of the light between the two interfaces.
If in addition to the width of the layers, the size of the spectrum 𝒮, described by k, is sufficiently large (which is the basis of OCT), these peaks can be nicely separated from each other so that we find intervals 𝒰_j around the points Δ̃_j(κ/k), κ∈ℬ, so that
k√(2/π)∑_l=1^J+1(sinc( k . )e^-i k . )*_zℱ_k(ℐ_l) ≈ k√(2/π)(sinc( k . )e^-i k . )*_zℱ_k(ℐ_j) on 𝒰_j.
We will therefore assume that we can recast the inverse problem from <ref> as the iterative procedure, where we start from the top and then go layer by layer deeper inside the object and recover in the jth step from the knowledge of (n_l)_l=1^j-1,(d_l)_l=1^j-2 the parameters (n_j,d_j-1) from
k√(2/π)(sinc( k .)e^-i k .)*_zℱ_k(ℐ_j[n_j,d_j-1]) = k√(2/π)(sinc( k .)e^-i k .)*_zℱ_k(𝒞^j), z∈𝒰_j,
where ℐ_j:^2→ L^2(𝒮) is the forward operator for the jth light-layer interface interaction, defined by <ref> and 𝒰_j is a suitably chosen interval around the peaks of the data 𝒞^j = 𝒞 - ∑_l=1^j-1ℐ_l[n_l,d_l-1].
We note that the operator ℐ_1, corresponding to the reflection at the top boundary, only takes the refractive index as an argument and no distance.
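The following toy computation illustrates the mechanism with plane-wave cosines standing in for the full Gaussian-beam model (all numbers are illustrative): the Fourier transform of the spectrum exhibits well separated peaks near the optical distances Δ̃_j(0), around which the windows 𝒰_j can be placed.

import numpy as np
from scipy.signal import find_peaks

k = np.linspace(2 * np.pi / 1313.76e-9, 2 * np.pi / 1282.86e-9, 1498)
Delta0 = 2.0e-3
Delta = Delta0 + np.cumsum([0.0, 2 * 1.5088 * 0.174e-3, 2 * 1.3225 * 0.186e-3])
refl = np.array([0.20, -0.07, 0.07])            # rough reflection amplitudes
C = sum(r * np.cos(k * D) for r, D in zip(refl, Delta))

# discrete analogue of F_k(C chi_S): FFT after zero padding
N = 2 ** 16
spec = np.fft.rfft(C, n=N)
z = 2 * np.pi * np.fft.rfftfreq(N, d=k[1] - k[0])   # optical distance axis

mag = np.abs(spec)
peaks, _ = find_peaks(mag, height=0.25 * mag.max(), distance=200)
windows = [(z[p] - 5e-5, z[p] + 5e-5) for p in peaks]   # the intervals U_j
print("peak positions  :", z[peaks])
print("expected Delta_j:", Delta)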
§ ALMOST NORMAL INCIDENCE
We will thus look at a specific step j of the inverse problem <ref>. To discuss its properties, we want to approximate in ℐ_j the reflection coefficient r_j by the expression r_j^†=(n_j-1-n_j)/(n_j-1+ n_j), which corresponds to the reflection coefficient a plane wave with normal incidence on the interface would experience. This yields the simplified forward operator
ℐ^*_j[n_j,d_j-1](k) = -kr^†_j/16π^3 ρ∫_ℬ r_≤ j-1(κ)e^-|κ|^2 a sin(k (-|κ|^2ψ_0/(2k^2) + (κ_1/k)ψ_1 +Δ_0 + Ψ_j(κ/k)))d κ.
Since the angle θ_Ω is assumed to be small, we can expand this around the value θ_Ω=0, at which we have ℬ=D_ksinθ(0) and ψ_1=0, and get an analytic expression for the leading order term of the forward operator. We will do the calculations for the operator ℐ^*_1 corresponding to the first interface, which is a bit simpler than those for deeper interfaces, since also the terms r_≤ 0=1 and Ψ_1=0, defined in <ref>, disappear.
Let θ_Ω = 0 and let ℐ^*_1 be defined as in <ref>. Then, we have that
ℐ^*_1[n_1](k) = i (r_1^† k^2/16π^2ρ)∑_ϵ∈{-1,1}ϵ e^iϵ kΔ_0/(2 a k + iϵψ_0)( 1- e^-k^2γ e^-iϵ k ξ)
with
γ = asin^2(θ) and ξ = ψ_0/2sin^2(θ).
For θ_Ω = 0, the set of accepted wave vectors ℬ is defined by the ball with radius ksinθ and center zero. By using
sin(x) = 1/2i(e^ix - e^-ix)
we rewrite <ref> as
ℐ^*_1[n_1] = i (r_1^†/16π^3ρ) k/2∫_D_ksinθ(0) e^-|κ|^2 a( e^-|κ|^2i/2kψ_0e^i k Δ_0 - e^|κ|^2 i/2kψ_0e^-i k Δ_0) dκ
= i (r_1^†/16π^3ρ) k/2∑_ϵ∈{-1,1}ϵ e^iϵ k Δ_0∫_D_ksinθ(0) e^-|κ|^2(a+ϵi/2kψ_0)dκ.
By switching to polar coordinates for κ, we can explicitly calculate this integral and find the desired representation.
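The closed form can be cross-checked against a direct radial quadrature of the defining integral; the short script below does this for a single wavenumber with illustrative parameter values (the two printed numbers agree up to the quadrature error).

import numpy as np

a, psi0, Delta0, rho = 2.5e-11, 0.05, 2.0e-3, 0.1
theta, r1 = np.deg2rad(2.0), -0.2028
gamma, xi = a * np.sin(theta) ** 2, psi0 / 2 * np.sin(theta) ** 2

def closed_form(k):
    out = 0.0 + 0.0j
    for eps in (-1.0, 1.0):
        out += (eps * np.exp(1j * eps * k * Delta0) / (2 * a * k + 1j * eps * psi0)
                * (1.0 - np.exp(-k**2 * gamma) * np.exp(-1j * eps * k * xi)))
    return 1j * r1 * k**2 / (16 * np.pi**2 * rho) * out

def quadrature(k, n_rad=4000):
    s = np.linspace(0.0, k * np.sin(theta), n_rad)
    integrand = np.exp(-s**2 * a) * np.sin(k * (-s**2 * psi0 / (2 * k**2) + Delta0)) * s
    return -k * r1 / (16 * np.pi**3 * rho) * 2 * np.pi * np.sum(integrand) * (s[1] - s[0])

k = 4.83e6
print(closed_form(k).real, quadrature(k))   # agreement up to quadrature error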
Taking the Fourier transform of <ref> with respect to the wavenumber k, we obtain in leading order a representation of ℱ_k(ℐ^*_1[n_1]) as a sum of sinc-functions.
Let θ_Ω = 0 and for k∈𝒮=[k_1, k_2] let ℐ^*_1 be defined as in <ref>. Then its bandlimited Fourier transform is given as the linear combination
ℱ_k(ℐ^*_1[n_1] χ_𝒮)(z) = -1/8π^2√(2π) k k^2 r_1^†/ρψ_0∑_ϵ∈{-1,1}e^-i k(z-ϵΔ_0)[u_2(z-ϵΔ_0) - 2a kϵ/ψ_0 u_3(z-ϵΔ_0)
- e^-k̅^2 γe^-i k̅ϵξ( (u_2(z-ϵ(Δ_0-ξ)) +𝒪( k√(γ)))- 2a kϵ/ψ_0(u_3(z-ϵ(Δ_0-ξ)) +𝒪( k√(γ))) )].
where the functions u_j:→ are defined by
u_j(z) = k^-jd^j/d z^j(( k z)e^-i k̅ z)e^i k̅ z
and we define the parameters γ and ξ as in <ref>.
The Fourier transform of <ref> yields
ℱ_k(ℐ^*_1[n_1]χ_𝒮)(z) =
i(r_1^†/16π^3ρ)√(π/2)∑_ϵ∈{-1,1}ϵ∫_k_1^k_2k^2(2 a k- iϵψ_0)/(4 a^2 k^2 + ψ_0^2)e^-i k (z-ϵΔ_0)( 1 - e^-k^2γ e^ -iϵ k ξ) dk
Because of <ref> we have that ak is small compared to ψ_0 and we linearize the fraction
(2ak - iϵψ_0)/(4a^2k^2 +ψ_0^2) = ((2ak - iϵψ_0)/ψ_0^2)(1+𝒪(a^2k^2/ψ_0^2)).
We split this into the linear and the constant term in k and see that the integrals in <ref> reduce to the two types
I_j,ϵ^(1)(z) = ∫_k_1^k_2 k^j e^-i k (z-ϵΔ_0) dk and
I_j,ϵ^(2)(z) = ∫_k_1^k_2 k^j e^-i k (z-ϵΔ_0) e^-k^2γ e^ -iϵ k ξdk
of integrals for j∈{2,3} and ϵ∈{-1,1}.
* We calculate the first type by writing k^j as derivative with respect to the variable z and obtain with k=1/2(k_1+k_2) and k=1/2(k_2-k_1) that
I_j,ϵ^(1)(z) = i^j∫_k_1^k_2∂_z^j(e^-i k (z-ϵΔ_0)) dk = 2i^j k ∂_z^j(( k(z-ϵΔ_0))e^-i k(z-ϵΔ_0)).
* For the second integral, we proceed analogously which gives us together with the Fourier convolution theorem
I_j,ϵ^(2)(z) = -i^j-2∫_k_1^k_2∂_z^j(e^-i k (z-ϵ(Δ_0-ξ)))e^-k^2γdk
= -i^j-2 k/√(πγ)∫_e^-ζ^2/4γ∂_z^j(( k(z-ζ-ϵ(Δ_0-ξ)))e^-i k(z-ζ-ϵ(Δ_0-ξ))) dζ.
We substitute ζ̃=ζ/√(γ) to rewrite this in the form
I_j,ϵ^(2)(z) = -i^j-2 k/√(π)∫_e^-1/4ζ̃^2∂_z^j(( k(z-ζ̃√(γ)-ϵ(Δ_0-ξ)))e^-i k(z-ζ̃√(γ)-ϵ(Δ_0-ξ))) dζ̃.
Using that, according to <ref>, the term k√(γ) is much smaller than one and the integral is due to the Gaussian factor e^-1/4ζ̃^2 restricted to a domain of order one, we approximate the sinc-function in the integral by its zeroth order in terms of ζ̃ k√(γ). The mean value theorem for
|^(j̃)( k(z-ζ̃√(γ)-ϵ(Δ_0-ξ)))-^(j̃)( k(z-ϵ(Δ_0-ξ)))| ≤ k|ζ̃|√(γ)^(j̃+1)_∞
where ^(j̃+1)_∞≤ C, yields
∂_z^j(( k(z-ζ̃√(γ)-ϵ(Δ_0-ξ)))e^-i k(z-ζ̃√(γ)-ϵ(Δ_0-ξ)))
= ∂_z^j(( k(z-ϵ(Δ_0-ξ)))e^-i k(z-ζ̃√(γ)-ϵ(Δ_0-ξ))) + |ζ̃| k^j∑_j̃=0^j𝒪(( k k^-1)^j̃ k√(γ)).
Again by <ref>, using that k k^-1 is small, we only keep terms of zeroth order and obtain
I_j,ϵ^(2)(z) = -i^j-2 k/√(π)(∫_e^-1/4ζ̃^2∂_z^j(( k(z-ϵ(Δ_0-ξ)))e^-i k(z-ζ̃√(γ)-ϵ(Δ_0-ξ))) dζ̃
+ k^j𝒪( k√(γ))∫_ e^-1/4ζ̃^2|ζ̃| dζ̃ )
= -2i^j-2 k (e^- k^2γ∂_z^j(( kz-ϵ(Δ_0-ξ))e^-i k(z-ϵ(Δ_0-ξ)))+ k^j𝒪( k√(γ)) ).
Introducing the functions u_j as in <ref>, we obtain for
ℱ_k(ℐ^*_1[n_1] χ_𝒮)(z) = (r_1^†/16π^3ρ)√(π/2)∑_ϵ∈{-1,1}(1/ψ_0(I_2,ϵ^(1)(z)-I_2,ϵ^(2)(z))+2aϵ i/ψ_0^2(I_3,ϵ^(1)(z)-I_3,ϵ^(2)(z)))
=-1/8π^2 √(2π) k k^2 r_1^†/ρψ_0∑_ϵ∈{-1,1}e^-i k(z-ϵΔ_0)[u_2(z-ϵΔ_0) - 2a kϵ/ψ_0 u_3(z-ϵΔ_0)
- e^-k̅^2 γe^-i k̅ϵξ( (u_2(z-ϵ(Δ_0-ξ)) +𝒪( k√(γ)))- 2a kϵ/ψ_0(u_3(z-ϵ(Δ_0-ξ)) +𝒪( k√(γ))) )].
By <ref> we have that kΔ_0≫1, which holds true for the experimental data. We can then ignore for z>0 those sinc terms in <ref> which are centered on the negative axis. The function ℱ_k(ℐ^*_1[n_1]χ_𝒮) is then approximated by
ℱ_k(ℐ^*_1[n_1]χ_𝒮)(z) = -1/8π^2√(2π) k k^2 r_1^†/ρψ_0 e^-ik̅(z-Δ_0)( u_2(z-Δ_0) - 2a k/ψ_0 u_3(z-Δ_0)
- e^-k̅^2 γe^-i k̅ξ( (u_2(z-(Δ_0-ξ)) +𝒪( k√(γ)))- 2a k/ψ_0(u_3(z-(Δ_0-ξ)) +𝒪( k√(γ))) )+𝒪(1/ kΔ_0) ),
for z>0, which reflects the minor influence of the sinc-functions u_j which are centered around -Δ_0 at the function value close to the point z=Δ_0.
* For z+Δ_0>0, we have the asymptotic behavior
(ℱ_k(ℐ^*_1[n_1]χ_𝒮))(z + Δ_0) e^ik̅ z
= -1/8π^2√(2π) k k^2r_1^†/ρψ_0[ -( k z)+ e^-k̅^2γe^-i k̅ξ( k (z+ξ))
- 2i ( k k^-1('( k z) - e^-k̅^2γe^-i k̅ξ'( k (z+ξ)))+ a k/ψ_0(( k z) -e^-k̅^2γe^-i k̅ξ( k (z+ξ))))
+𝒪(( k^-1 k+a k/ψ_0)^2)+ 𝒪( k√(γ)) ( 1+ a k/ψ_0)+𝒪(1/ kΔ_0)].
* In particular, we find for the norm in the highest order
|(ℱ_k(ℐ^*_1[n_1]χ_𝒮))(z + Δ_0)|^2 = 1/2^7π^5( k k^2r_1^†/ρψ_0)^2 | -( k z)+ e^-k̅^2γe^-i k̅ξ( k (z+ξ))
+𝒪(1/ kΔ_0)+𝒪( k^-1 k)+𝒪(a k/ψ_0)+𝒪( k√(γ))(1+a k/ψ_0)|^2.
* We remark that
u_j(z) = k^-j∂_z^j(( kz)e^-i kz)e^i kz
= k^-j∑_l=0^j jl∂_z^l(( kz))∂_z^j-l(e^-i kz)e^i kz
= (-i)^j(( kz)+ij k^-1 k'( kz)+𝒪( k^-2 k^2)).
With this, we find
(u_2( z) - 2a k/ψ_0 u_3(z) ) -e^-k̅^2 γe^-i k̅ξ( u_2(z+ξ) - 2a k/ψ_0u_3(z+ξ) )
=-( kz)-2i k^-1 k'( kz)-2a/ψ_0(i k( kz)-3 k'( kz))
+e^-k̅^2 γe^-i k̅ξ(( k(z+ξ))+2i k^-1 k'( k(z+ξ))+2a/ψ_0(i k( k(z+ξ))-3 k'( k(z+ξ))))
+ 𝒪( k^-2 k^2).
If we neglect herein the fourth and eighth term
6a k/ψ_0('( kz)-e^-k̅^2 γe^-i k̅ξ'( k(z+ξ))) = 𝒪(a k/ψ_0 k^-1 k)
as they are of the order of the small quantity a k/ψ_0 (which is of order 10^-3 in our setting) and plug this into our expression in <ref> for ℱ_k(ℐ^*_1[n_1]χ_𝒮), we end up with <ref>.
* We see that the second term in <ref> is of lower order compared to the first one:
k k^-1('( k z) - e^-k̅^2γe^-i k̅ξ' ( k(z+ξ)))
+ a k/ψ_0(( k z) - e^-k̅^2γe^-i k̅ξ( k (z+ξ))) = 𝒪( k^-1 k)+𝒪(a k/ψ_0).
If we thus neglect this term, we obtain <ref>.
In order to apply these formulas for j ≥ 2, we notice that the only difference from <ref> is the term ∑_l=1^j-1 2 n_l d_lcosθ^l_t,
where the components are calculated iteratively by using the linearization of the square-root for κ∈ℬ=D_ksinθ(0) making use of the smallness of the angle of acceptance θ:
2n_1d_1 cosθ_t^1 = 2n_1d_1 √(1 - |κ|^2 n_0^2/n_1^2k^2) = 2n_1 d_1 - n_0^2/n_1|κ|^2 /k^2 d_1+𝒪(θ^4), l=1,
2n_ld_l cosθ_t^l = 2n_l d_l - n_0^2/n_l|κ|^2 /k^2 d_l+𝒪(θ^4), l ≥ 2.
Thus, we can adapt <ref> to the contribution ℐ^*_j from the jth interface by introducing the layer dependent variables
Δ_j = Δ_0 + 2 ∑_l=1^j-1 n_l d_l,
ψ_0,j = ψ_0 + 2∑_l=1^j-1d_l n_0/n_l,
ξ_j = ψ_0,j/2sin^2(θ),
for all j=1,2,…,J, where the coefficients for the first interface coincide with the system parameters: Δ_1=Δ_0, ψ_0,1=ψ_0, and ξ_1=ξ.
Moreover, we use <ref> to approximate the term r_≤ j-1(κ), defined in <ref>, in the integral of ℐ_j^* by
r_≤ j-1(κ) = r_≤ j-1^†+𝒪(θ^2) with r_≤ j-1^†=∏_l=1^j-1(1-(r^†_l)^2).
With this, we obtain more generally the formula
|(ℱ_k (ℐ^*_j[n_j]χ_𝒮))(z)|^2
= 1/2^7π^5( k k^2r_j^† r_≤ j-1^†/ρψ_0,j)^2 | U_j(z)+𝒪(1/ kΔ_j)+𝒪( k^-1 k)+𝒪(a k/ψ_0,j)+ 𝒪( k√(γ))(1+a k/ψ_0,j) |^2
+( k k^2/ρψ_0,j)^2𝒪(θ^2)
with the function
U_j(z) = -sinc( k (z-Δ_j)) + e^-k̅^2γ e^-i kξ_j sinc( k (z-(Δ_j - ξ_j))),
for the Fourier transform of the forward operator ℐ_j^*. The last error term 𝒪(θ^2) herein accounts for the approximation of the combined reflection coefficient r_≤j-1 by its zeroth order.
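To make the reduced model concrete, the following hedged Python sketch evaluates U_j together with the layer-dependent quantities Δ_j, ψ_0,j and ξ_j; all system parameters are illustrative assumptions, and only the squared modulus up to the constant amplitude factor in front of U_j is computed.

import numpy as np

def sinc(x):                       # sin(x)/x with the removable singularity
    return np.sinc(x / np.pi)

def U_j(z, Delta_j, xi_j, kbar, dk, gamma):
    return (-sinc(dk * (z - Delta_j))
            + np.exp(-kbar**2 * gamma) * np.exp(-1j * kbar * xi_j)
            * sinc(dk * (z - (Delta_j - xi_j))))

def layer_variables(j, n, d, Delta0, psi0, theta):
    """Delta_j, psi_{0,j}, xi_j for interface j (1-based); n = [n_0,...], d = [d_1,...]."""
    Delta_j = Delta0 + 2 * sum(n[l] * d[l - 1] for l in range(1, j))
    psi0_j = psi0 + 2 * sum(d[l - 1] * n[0] / n[l] for l in range(1, j))
    xi_j = psi0_j / 2 * np.sin(theta) ** 2
    return Delta_j, psi0_j, xi_j

# illustrative evaluation around the second interface of the phantom
n, d = [1.0, 1.5088, 1.3225, 1.5088], [0.174e-3, 0.186e-3, 0.173e-3]
kbar, dk = 4.83e6, 5.7e4               # centre and half bandwidth of S (assumed)
a, psi0, Delta0, theta = 2.5e-11, 0.05, 2.0e-3, np.deg2rad(2.0)
gamma = a * np.sin(theta) ** 2
D2, p2, x2 = layer_variables(2, n, d, Delta0, psi0, theta)
z = np.linspace(D2 - 2e-4, D2 + 2e-4, 400)
model = np.abs(U_j(z, D2, x2, kbar, dk, gamma)) ** 2   # up to the amplitude factor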
In particular, we can use this explicit expression to estimate how much the signal ℱ_k(ℐ^*_jχ_𝒮) from the jth interface influences the values ℱ_k(ℐ^*_lχ_𝒮) from the lth interface in an interval 𝒰_l around the peak Δ_l of that interface. We show the estimate for simplicity only for the interaction between the first and the second interface.
There exists an interval 𝒰_2 such that
|ℱ_k(ℐ^*_1χ_𝒮)(z)|/|ℱ_k(ℐ^*_2χ_𝒮)(z)| ≤ (1/( k n_1d_1)) (4/(1-e^- k^2γ)) (|r_1^†|/|r_2^† r_≤1^†|)(1+2n_0d_1/(n_1ψ_0)) = 𝒪(1/( k d_1))
for all z∈𝒰_2.
The behavior of the two Fourier transforms is mainly described by the functions U_1 and U_2 from <ref>. We therefore estimate the contribution from U_1 at a point z=y+Δ_2=y+Δ_1+2n_1d_1 around the position Δ_2, where U_2 has its peak, from above and the contribution of U_2 there from below.
To this end, we first choose a small parameter τ∈(0,n_1d_1) such that
|U_2(z)| = |sinc( k y) - e^-k̅^2γ e^-i k ξ_2 sinc( k (y +ξ_2))| ≥ (1-e^-k̅^2γ)/2
holds for all y∈(-τ,τ). (Such a parameter has to exist, since at y=0 the term is bounded from below by 1-e^- k^2γ.)
Since y∈(-τ,τ), we can then also bound U_1 with
|U_1(z)|=|sinc( k (y+2n_1 d_1)) - e^-k̅^2γ e^-i k ξ_1 sinc( k (y+2 n_1 d_1 +ξ_1))| ≤ (1+e^-k̅^2γ)/( k(2n_1 d_1-τ)) ≤ 2/( k n_1 d_1)
from above.
Therefore, we get with ψ_0,1=ψ_0 and ψ_0,2=ψ_0+2n_0d_1/n_1
|ℱ_k(ℐ^*_1χ_𝒮)(z)|/|ℱ_k(ℐ^*_2χ_𝒮)(z)| = |(r_1^†/(r_2^† r_≤ 1^†))(ψ_0,2/ψ_0,1)(U_1(z)/U_2(z))| ≤ (1/( k n_1d_1)) (4/(1-e^- k^2γ)) (|r_1^†|/|r_2^† r_≤ 1^†|)(1+2n_0d_1/(n_1ψ_0)).
The above arguments justify the method to reduce the all-at-once approach to the inverse problem to the simpler layer-by-layer reconstruction presented in <ref>. However, the implicit integral form of the operators ℐ_j (and ℐ_j^*, respectively) still prevents us from obtaining an analytic expression for the reconstruction. Hence, we formulate our problem as a least squares minimization problem.
§ LEAST SQUARES MINIMIZATION FOR THE REFRACTIVE INDEX
To numerically obtain a solution of the inverse problem in <ref> for a fixed j, we write it as a discrete least squares minimization problem of the functional
𝒥_j(n_j,d_j-1) = ∑_m=1^M_j(|ℱ_k((∑_l=1^j-1ℐ_l+ ℐ_j[n_j,d_j-1])χ_𝒮)(z_j,m)|^2
- y_j,m^(δ))^2,
for the parameters n_j and d_j-1.
To simplify the analysis, we will, however, replace herein the full forward operator ℐ_j by the reduced forward operator ℐ_j^* and consider thus the functional
𝒥^*_j(n_j,d_j-1) = ∑_m=1^M_j(|ℱ_k((∑_l=1^j-1ℐ_l + ℐ^*_j[n_j,d_j-1])χ_𝒮)(z_j,m)|^2
- y_j,m^(δ))^2,
Hence, for every interface j we want to minimize 𝒥^*_j(n_j,d_j-1) with respect to n_j and d_j-1. Hereby, the data is provided on a discretized grid of M_j points which we denote by {z_j,m}_m=1^M_j⊂𝒰_j in the interval 𝒰_j selected in <ref>. We silently assume that the distance between the interfaces is sufficiently large so that the influence from the values ℱ_k(ℐ_l[n_l,d_l-1]χ_𝒮) for l>j in the interval 𝒰_j is negligible (as estimated in <ref>).
Finally, we declare the set of admissible values for n and d (in every step) by
𝒜_j = {(n,d)∈^2 | 1≤ n<∞, n≠ n_j-1, 0< d <∞}⊂^2.
The reconstruction is based on the isolation of the clearly visible peaks in the data, which originates from a relatively high refractive index contrast between the single layers. If no contrast is present, meaning that n_j=n_j-1, the method considers the layer as a unit and goes on to the next layer. Hence, we may exclude – for the step j – n_j-1, the refractive index from the previously reconstructed layer, from the admissible space 𝒜_j.
Calculating both values from this minimization functional is a slight overkill, since the width d_j-1 in the jth step can be found from the largest overlap between the sinc-function in the forward model and the peak in the data, independently of its height, which itself is determined by the refractive indices of the different media. Thus, we can separate the extraction of the pair of parameters (n,d) into two parts, similar to how it is done for stepwise-gradient methods. Firstly, the width d of the layer is reconstructed, and secondly the refractive index.
We delay the justification of our arguments to <ref>. There we show (visually) that the functionals (in a step j) actually allow a precise prediction of the width d_j-1 without the knowledge of the refractive index n_j.
We will therefore in the following consider the minimization problems with respect to n_j for a known value d_j-1. We thus end up with the reduced minimization problem
min_n∈𝒜_j𝒥^(*)_j(n), 1≤ j ≤ J+1, ⁎
for the functionals defined in <ref> and <ref>. We hereby assume that the value of d_j-1 has been reconstructed already and restrict the analysis to the refractive index argument. The second argument of the functional is therefore omitted.
To simplify the notation, we define for j and m
Λ_j,m = ℱ_k(∑_l=1^j-1ℐ_lχ_𝒮)(z_j,m), Γ_j,m(n_j) = ℱ_k(ℐ_j[n_j]χ_𝒮)(z_j,m), Γ^*_j,m = ℱ_k(ℐ^*_j[n]/r^†_jχ_𝒮)(z_j,m),
so that we can write
|ℱ_k((∑_l=1^j-1ℐ_l + ℐ_j[n_j])χ_𝒮)(z_j,m)|^2 = | Λ_j,m + Γ_j,m(n_j)|^2 = F_j,m(n_j)
and
|ℱ_k((∑_l=1^j-1ℐ_l + ℐ^*_j[n_j])χ_𝒮)(z_j,m)|^2 = | Λ_j,m + r_j^†(n_j) Γ^*_j,m|^2 = F^*_j,m(n_j).
We note that by definition Γ^*_j,m is independent of the refractive index, since ℐ^*_j carries the directional independent reflection coefficient r_j^†, see <ref>.
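A minimal sketch of the reduced minimization over n for one interface j ≥ 2 is given below; the complex arrays standing in for Λ_j,m and Γ^*_j,m are synthetic placeholders, the data is generated to lie in the range of the reduced model, and the correct half of the admissible set is assumed to be known.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
M = 401
Lambda = 0.02 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
Gamma_star = 0.50 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

n_prev = 1.5088                       # refractive index of the previous layer
r_dag = lambda n: (n_prev - n) / (n_prev + n)

n_true = 1.3225
y = np.abs(Lambda + r_dag(n_true) * Gamma_star) ** 2      # data in the range

def J_star(n):
    return np.sum((np.abs(Lambda + r_dag(n) * Gamma_star) ** 2 - y) ** 2)

# search on the half (1, n_prev) of the admissible set (assumed to be the correct one)
res = minimize_scalar(J_star, bounds=(1.0 + 1e-6, n_prev - 1e-6), method="bounded")
print("reconstructed n_j =", res.x)   # should be close to 1.3225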
Before we consider the minimization problem, we want to justify that the modeling error introduced by replacing 𝒥_j with the simplified functional 𝒥_j^* can be controlled.
For 2≤ j≤ J+1 let 𝒥_j and 𝒥^*_j be defined by <ref> and <ref>, respectively. Then for a fixed value d, the following estimate holds true for any n
|𝒥_j(n) - 𝒥^*_j(n)| ≤ C ‖θ_t^j-1‖^2_∞,ℬ̃,
where θ_t^j-1 is the (small) angle of transmission (defined in <ref>) and where we defined the supremum norm ‖h‖_∞,ℬ̃ = sup_κ̃∈ℬ̃|h(κ̃)| on the reduced domain of integration ℬ̃, which is defined by κ̃= κ/k for κ∈ℬ.
We first calculate the difference, which using F_j,m = F_j,m-F^*_j,m+F^*_j,m is given by
𝒥_j(n) - 𝒥^*_j(n) = ∑_m=1^M_j (F_j,m(n) - F^*_j,m(n))(F_j,m(n) + F^*_j,m(n) - 2 y_j,m^(δ))
= ∑_m=1^M_j (F_j,m(n) - F^*_j,m(n))^2 + 2(F^*_j,m(n) - y_j,m^(δ))(F_j,m(n) - F^*_j,m(n)) .
By using the definition of the forward model, we find that the essential part is to find an upper bound for
F_j,m - F^*_j,m = 2 {Λ_j,m(Γ_j,m-r_j^†Γ^*_j,m)} + (|Γ_j,m|^2 - |r_j^†Γ^*_j,m|^2 )
≤ 2 |Λ_j,m||Γ_j,m-r_j^†Γ^*_j,m| + (|Γ_j,m|^2 - |r_j^†Γ^*_j,m|^2 ).
First, we need to estimate the integral
1/16 π^3 ρ∫_𝒮k∫_ℬ e^-|κ|^2 ad κ d k.
After a change of coordinates κ = k κ̃, where κ̃ is in the reduced set ℬ̃, we can replace ℬ̃ by the disk D_sin(2θ_Ω+θ) in the integral and find an upper bound
1/16 π^3 ρ∫_𝒮k∫_ℬ̃ k^2 e^-k^2|κ̃|^2 ad κ̃d k ≤ 1 /32 π^2 √(2π)ρ a( k^2|_∂𝒮 + e^-sin^2(2θ_Ω+θ)k^2 a|_∂𝒮/sin^2(2θ_Ω+θ) a) = U_p.
For any value of n it holds that the supremum norm r_≤ j-1_∞,ℬ̃ = sup_κ̃∈ℬ |r_≤ j-1(κ̃)|≤ 1. Thus, we obtain
|Γ_j,m(n)| ≤sup_n r_j(. ,n)_∞,ℬ̃ U_p, |r_j^†(n)Γ^*_j,m| ≤sup_n |r^†_j(n)| U_p .
The difference between the actual function and its approximated form then is estimated by
∫_𝒮|ℐ_j[n]- ℐ^*_j[n]| d k ≤1/16π^3ρ∫_𝒮 k ∫_ℬ|r_j(κ,n)-r_j^†(n)| e^-|κ|^2 a |r_≤ j-1(κ)| dκ
≤θ_t^j-1^2_∞,ℬ̃/2 U_p.
We combine all the above estimates to find
|Γ_j,m|^2 - |r_j^†Γ^*_j,m|^2 ≤|Γ_j,m-r_j^†Γ^*_j,m| (|Γ_j,m|+|r_j^†Γ^*_j,m|)
≤(θ_t^j-1^2_∞,ℬ̃/2(sup_n r_j(. ,n)_∞,ℬ̃ +sup_n |r_j^†(n)|))U_p^2.
Finally, we find an upper bound for
2|Λ_j,m| |Γ_j,m-r_j^†Γ^*_j,m| = 2 | ∑_l=1^j-1ℱ_k(ℐ_lχ_𝒮)(z_l,m)| |Γ_j,m-r_j^†Γ^*_j,m|
≤(∑_l=1^j-1r_l_∞,ℬ̃θ_t^j-1^2_∞,ℬ̃) U_p^2.
Thus, we get
|𝒥_j(n) - 𝒥^*_j(n)| ≤∑_m=1^M_j (F_j,m(n) - F^*_j,m(n))^2 + 2 ∑_m=1^M_j|(F^*_j(n) - y_j^(δ))_m| |F_j,m(n) - F^*_j,m(n)|
≤ M_j ((∑_l=1^j-1r_l_∞,ℬ̃θ_t^j-1^2_∞,ℬ̃) + θ_t^j-1^2_∞,ℬ̃/2(sup_n r_j(. ,n)_∞,ℬ̃ +sup_n |r_j^†(n)|))^2 U_p^4
+ 2 U_p^2 M_j sup_m (sup_n |(F^*_j(n) - y^(δ)_j)_m|) ((∑_l=1^j-1r_l_∞,ℬ̃θ_t^j-1^2_∞,ℬ̃)
+θ_t^j-1^2_∞,ℬ̃/2(sup_n r_j(. ,n)_∞,ℬ̃ +sup_n |r_j^†(n)|)).
We summarize all coefficients of θ_t^j-1^2_∞,ℬ̃ in a constant C and obtain the desired result.
The estimate and its proof for j=1 is similar
|𝒥_1(n) - 𝒥^*_1(n)| ≤ M_1 ( θ_t^0^2_∞,ℬ̃/2(sup_n r_1(. ,n)_∞,ℬ̃ +sup_n |r_1^†(n)|))^2 U_p^4
+ 2 U_p^2 M_1 sup_m (sup_n |(F^*_1(n) - y^(δ)_1)_m|) ( θ_t^0^2_∞,ℬ̃/2(sup_n r_1(. ,n)_∞,ℬ̃ +sup_n |r_1^†(n)|)).
§ EXISTENCE AND UNIQUENESS RESULTS
We finally turn to the question if the minimization problem (<ref>) has a unique minimizer. Actually, under the assumption that the data is an element of the range of the forward model, that is for any m we have
y_j,m = |Λ_j,m + r^†_j(ñ) Γ^*_j,m|^2, for some ñ∈𝒜_j,
we can show that there exists a unique solution to the minimization problem (<ref>) corresponding to the approximated functional in <ref>.
For j=1, we have that Λ_1,m = 0, for all m, which reduces the functional 𝒥^*_1 to
𝒥^*_1(n) = ∑_m=1^M_1(|r^†_1(n)Γ^*_1,m|^2 -y^(δ)_1,m)^2.
This functional, even in the best case, where the data is in the range of the forward model, attains two global minima. In order to exclude one of the possible solutions, we use the fact that the object is surrounded by air and that the refractive index of the first layer thus has to satisfy n_1 > 1.
For j≥ 2, we obtain the uniqueness from the definition of the functional, which allows – in contrast to j=1 – the calculation of the reflection coefficient from a second order polynomial equation.
For this reason, we discuss the cases j=1 and j≥ 2 separately.
For j=1 let 𝒥^*_1 be defined as in <ref> and let y_1 satisfy the range condition <ref> for some ñ_1 >1. Then there exists a unique solution to the minimization problem (<ref>).
We take the derivative with respect to n and find
∂_n 𝒥^*_1(n) = 4 r_1^†(n) ∂_n r_1^†(n) ∑_m=1^M_1(|Γ^*_1,m|^2 (|r_1^†(n)|^2 - |r_1^†(ñ_1)|^2)) |Γ^*_1,m|^2.
Since we have ∂_n r_1^†(n) = -2/(1+n)^2≠ 0 the derivative of 𝒥^*_1 is zero if and only if r_1^†(n) = 0 which implies n = 1 or if r_1^† satisfies
|r_1^†(n)|^2 = |r_1^†(ñ_1)|^2.
Since we have excluded n = 1 (as refractive index from the previous layer) from the possible solutions, we find that r_1^†(n) = ± r_1^†(ñ_1), which further yields
n_± = (1∓ r_1^†(ñ_1))/(1± r_1^†(ñ_1)).
Since sgn(r_1^†(ñ_1)) = -1, due to the assumption that ñ_1 >1, we can also exclude n_- from the set of possible solutions. Otherwise it would hold that n<1.
The non-negativity of the functional 𝒥^*_1 and the fact that 𝒥^*_1(n_+) = 0 implies that n_+ is actually the global minimum.
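As a small worked check of the closed-form minimizer, using the glass index of the phantom as the assumed true value:

n_tilde = 1.5088
r = (1.0 - n_tilde) / (1.0 + n_tilde)          # r_1^dag(n_tilde) < 0
n_plus = (1.0 - r) / (1.0 + r)
print(n_plus)                                  # recovers 1.5088
n_minus = (1.0 + r) / (1.0 - r)
print(n_minus)                                 # rejected: smaller than 1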
Let n be a minimum of 𝒥^*_1, then we have ∂^2_n𝒥^*_1(n) = 𝒫 >0.
Let n satisfy ∂_n 𝒥^*_1(n) = 0. Then according to the proof of <ref> the second derivative of 𝒥^*_1 with respect to n is given by
∂^2_n𝒥^*_1(n) = 8∑_m=1^M_1|Γ^*_1,m|^4 (r_1^†∂_n r_1^† )^2 = 𝒫.
Thus, together with r_1^†∂_n r_1^† = 1-n/(1+n)^3≠ 0, we immediately find that
∂^2_n𝒥^*_1(n) = 𝒫 >0.
We formulate the uniqueness for the deeper interfaces in the case j=2. The generalization to general j is straightforward.
Let the functional 𝒥^*_2 be defined as in <ref> and let the data y_2 satisfy the range condition <ref> for some r̃_2:=r_2^†(ñ_2)∈[-1, 1]. Then, for a suitable choice of discretization points z_2,m, m∈{1,…,M_2} with M_2>1, the value ñ_2 is the unique minimizer of 𝒥^*_2:
𝒥^*_2(ñ_2) = 0 and 𝒥^*_2(n) > 0 for all nñ_2.
Using the abbreviations from <ref>, the non-negative functional 𝒥^*_2 can be written as
𝒥^*_2(n) = ∑_m=1^M_2(|Γ^*_2,m|^2((r_2^†(n))^2 - r̃_2^2) + 2{Λ_2,mΓ^*_2,m}(r_2^†(n) - r̃_2) )^2
= (r_2^†(n) - r̃_2)^2∑_m=1^M_2(|Γ^*_2,m|^2(r_2^†(n) + r̃_2) + 2{Λ_2,mΓ^*_2,m})^2
Thus, 𝒥^*_2 vanishes if either r_2^†(n)=r̃_2 or
∑_m=1^M_2(|Γ^*_2,m|^2(r_2^†(n) + r̃_2) + 2{Λ_2,mΓ^*_2,m})^2 = 0.
Since all terms in this sum are non-negative, it can only vanish if all of them are zero, that is, if we have
r_2^†(n) + r̃_2 = -2{Λ_2,mΓ^*_2,m}/|Γ^*_2,m|^2 = -2Λ_2,m/Γ^*_2,m
for all m∈{1,…,M_2} where Γ^*_2,m0.
Inserting the definitions of Λ_2,m and Γ^*_2,m, this becomes the condition that the function
z↦(ℱ_k(ℐ_1χ_𝒮))(z)/(ℱ_k(ℐ_2^*[n]/r_j^†(n)χ_𝒮))(z)
is constant, at least restricted to the values z_2,m. However, we see from the asymptotic expression <ref> for ℱ_k(ℐ^*_1[n_1]χ_𝒮) and the correspondingly adapted expression for the second interface (as in <ref>) that this function (in the leading order terms) is not constant. Therefore, for a suitable choice of discretization points z_2,m, we can guarantee that there are at least two values m_1, m_2 for which
2Λ_2,m_1/Γ^*_2,m_12Λ_2,m_2/Γ^*_2,m_2,
thus excluding any zero values of 𝒥_2^* other than r̃_2.
So far we have considered the case where the data is in the range of the forward model without any noise. Noisy data is treated similarly. We note that since the positive functional 𝒥^*_j (also with noise) remains a fourth order polynomial in r^†_j, the existence of the local minima with respect to n is guaranteed. One is located on the half (1,n_j-1), the other one on (n_j-1,∞). The functional 𝒥_j^* (as a continuous function of n) changes also continuously depending on how the data is disturbed. This means, that up to a certain (small) level of noise, the position of the global minimum remains in the correct position. However, cases where the noise is such that the position of the global minimum might jump to the other side cannot be excluded.
§.§ Sign determination
A difficulty, which we are facing in the reconstruction of the first layer in <ref> (and which has been identified also in the reconstruction method shown in <cit.>) is that the minimum of the functional can only be determined up to its absolute value. In <cit.>, we had in each step two local (and also global) minima, where we excluded one by using a priori information on the refractive indices of the layers.
Additional noise on the data may lead to the case where the functional (for the refractive reconstruction from layers deeper inside the object) attains equal values for the two local minima. In this case, a distinction of the correct solution to the minimization problem (<ref>) is no longer possible. Now we want to show from a very theoretical point of view how one may exclude one of the possible solutions.
In detail, let j≥ 2 and let the refractive indices and the widths n_l,d_l-1, 1≤ l≤ j-1, be determined and assume that the data in the jth step is within the range of the forward operator ℐ^*_j for some
r̃_j = r_j^†(ñ_j) = n_j-1-ñ_j/n_j-1+ñ_j.
Hence, r̃_j (or ñ_j) is the objective to be reconstructed in this step. What we have seen so far is that the minima with respect to n are distributed such that one is located on each side of the point n_j-1, the refractive index from the previous layer.
Then, the position of ñ_j with respect to n_j-1, in detail the difference n_j-1-ñ_j, determines
sgn(r̃_j)=1, if ñ_j < n_j-1,
sgn(r̃_j)=-1, if ñ_j > n_j-1.
Thus, the correct solution ñ_j of the minimization problem – in step j – is completely connected to the sign of the reflection coefficient.
If one is able to determine the sign of the reflection coefficient r̃_j from the data, one already determines the position of the exact solution with respect to n_j-1. Therefore, while searching for the minimum of the functional, we may restrict to a certain half, (1,n_j-1) or (n_j-1,∞), of the admissible values.
Basically, the goal is to read off the sign from the measurement data while avoiding the highly oscillatory parts originating from the exponential factor e^-i k ., see <ref>. To do so, we multiply the equation by the exponential factor with opposite sign and obtain the sign of r̃_j in the neighbourhood of z = Δ_j.
We present this method for small values θ_Ω, where we by using <ref> can calculate the Fourier transform of the integral in <ref> explicitly in the leading order.
For simplicity, we consider in the following the single reflection from the first interface, meaning we consider the case j=1.
Let the asymptotic behavior as in <ref> hold true, meaning that the Fourier transform of the data, ℱ_k(𝒞^1χ_𝒮), is given by <ref> for a r_1^†. Further, let Δ_1 and ξ_1 be defined as in <ref>. Then we obtain
Re{ℱ_k(𝒞^1χ_𝒮) (z+Δ_1)e^i k̅ z} = 1/8π^2√(2π) k k^2 r_1^†/ρψ_0(sinc( kz) - e^- k^2γcos( kξ_1)sinc( k(z+ξ_1))
+ 𝒪( k√(γ))(1+a k/ψ_0) + 𝒪(1/ kΔ_1) +𝒪(a k/ψ_0 k^-1 k) + 𝒪( k^-1 k))
for z+Δ_1>0. Furthermore, in a sufficiently small neighbourhood of z=0 we have that
sgn(r_1^†) = sgn(Re{ℱ_k(𝒞^1 χ_𝒮)(z+Δ_1) e^ik̅ z}).
Using the real part of <ref>, we see that
k^-1 k (sinc'( k(z+ξ_1)) + sinc( k(z+ξ_1)))sin( k ξ_1)e^- k^2γ = 𝒪( k^-1 k)
is of lower order compared to
-sinc( kz) + e^- k^2γcos( kξ_1)sinc( k(z+ξ_1)).
Hence, we have that
Re{ℱ_k(𝒞^1χ_𝒮) (z+Δ_1)e^i k̅ z} = 1/8π^2√(2π) k k^2 r_1^†/ρψ_0(sinc( kz) - e^- k^2γcos( kξ_1)sinc( k(z+ξ_1))
+ 𝒪( k√(γ))(1+a k/ψ_0) + 𝒪(1/ kΔ_1) +𝒪(a k/ψ_0 k^-1 k) + 𝒪( k^-1 k))
Then, in a sufficiently small neighbourhood of z=0, the sign of the dominant term is determined by the sign of the first sinc-function, meaning that
sgn(sinc( k z) - e^-k̅^2γcos( k ξ_1) sinc( k(z +ξ_1))) = sgn(sinc( k z)) = 1.
Finally, due to the assumption that ρ, ψ_0, k, k >0, together with <ref>, we can conclude that within a sufficiently small domain around z = 0
sgn(r_1^†) = sgn(Re{ℱ_k(𝒞^1 χ_𝒮)(z+Δ_1) e^ik̅ z}).
In the best case possible, where the measurement data (on basis of a plane wave model) is known for all wavenumbers k∈, the position Δ_1 is perfectly determined by the duality of the Fourier transform
𝒞^1(k) = r_1^†cos(k Δ_1) ⟷ ℱ_k(𝒞^1)(z)χ__+(z) = r_1^†/2√(2π)δ(z-Δ_1).
However, for the Gaussian model, see <ref>, the situation is different. In order to determine Δ_1, the position of the peak, from the measurement, the absolute value of <ref> is used. There, the sinc-term centered around z = Δ_1 - ξ_1 overlaps (and interacts) with the one centered around z=Δ_1 and therefore eventually shifts the position of the global maximum slightly to Δ_1 + μ_0 for a μ_0≪ 1. Even for a small change, say k̅μ_0 > π/2, we then get from <ref>
sgn(Re{ℱ_k(𝒞^1 χ_𝒮)(z) e^ik̅ (z-(Δ_1+μ_0))}) = sgn(Re{sinc( k (z-Δ_1))e^-i kμ_0}) = -sgn( sinc( k (z-Δ_1)) ).
An increasing bandwidth 2 k of wavenumbers, which yields that
k ( k z) ⟶δ(z), k →∞,
increases the precision with which Δ_1 can be determined. However, for an actual OCT setup, the precision remains limited and yields only insufficient reconstructions.
§ BEHAVIOR OF THE FUNCTIONAL WITH RESPECT TO THE WIDTH
In the previous sections, we showed that in every step the minimization problem (<ref>) for the functional 𝒥^*_j in <ref> attains a unique solution. Hereby, we have assumed that the determination of the width d_j-1 (in step j) can be carried out independently of n and therefore can be assumed to be reconstructed before the jth step, j>1.
In this section we want to justify our argument that this reconstruction may be separated, by presenting the behavior of the corresponding functionals for different values of d. That is, we want to argue that the functional 𝒥_j^(*)(n,d) in the jth step yields, (almost) independently of the refractive index n, a unique minimum with respect to d.
We have seen that the actual data is nicely predicted by the forward model, showing also the sinc-function structure. To this end, let us assume that the data y is given and lies in the range of the forward model, meaning that for j and m we have
y_j,m = |Λ_j,m + ℱ_k(ℐ^*_j[ ñ_j,d̃_j-1 ]χ_𝒮)(z_j,m)|^2,
for some (ñ_j,d̃_j-1)∈𝒜_j and let r̃_j = r_j^†(ñ_j) denote the corresponding directional independent reflection coefficient. Clearly, we obtain the best match, meaning that the according minimization functional is zero, if the amplitude, which is determined by the refractive index n, and the width d satisfy (n,d) = (ñ_j,d̃_j-1).
In the actual situation the refractive index in the jth step is reconstructed after the width of the layer is determined. This means that the reflection coefficient corresponding to the forward model ℐ_j^* in <ref>, which further determines the absolute value after Fourier transform, is certainly not matched. We want to show that even though the amplitude is not matched perfectly, the functional (and the corresponding minimization problem with respect to d) still yields a unique minimum at the correct position.
We cannot explicitly calculate the extremal values of the functional 𝒥^(*)_j. Instead, for a better understanding, we first provide in <ref> a comparison between the simulated data y for a two-layer model and the forward model. This is shown for different values of d and r^†_2, and the L^2-error between the data and the model is calculated, see <ref>.
The value of r^†_2, especially when it is large, has a more severe impact on the L^2-error than an incorrectly chosen distance in the model. In order to retrieve the width correctly, we have to use a test model with a suitably chosen refractive index, since the correct one is not determined at this stage. In order to minimize the L^2-error, the test model with respect to r^†_2 (and n) should satisfy |r^†_2| < |r̃_2|, which guarantees that the second peak of the model stays below the one of the data.
In <ref>, we show the (approximated) functional for the same two layer medium on a grid of values d. From left to right, the number of grid points M_2 for the integration used in the functional in <ref> increases. In order to point out the independence of the existence of a (global) minimum on the refractive index, we plotted for each M_2,i, i=1,2,3, the functional for different values of r_2^†, satisfying |r^†_2| ≤ |r̃_2|, where |r̃_2| is the upper bound for the peak height in the data.
Clearly, for the actual value r_2^† = r̃_2, the functional is zero for all M_2,i. By increasing the number of grid points we additionally obtain a sharpening effect which pronounces the minimum even better. Further, we see that for r_2^† with the wrong sign, where the simulated peak is ultimately on the level of the side lobes of the sinc-functions (the brown curve in <ref>), the minimum is shifted slightly for larger values of M_2,i.
The location of the global minimum of the functional for each M_2,i and every r_2,l^†, l∈{1,…,4}, is presented in the first three lines of <ref>. There, we see that for an intermediate number of grid points, the localization of the minimum position is not shifted for reasonable values of r_2^†.
Finally, we consider the case where the data is disturbed
y_j,m = |Λ_j,m + r̃_j Γ^*_j,m + Φ|^2, 1≤ m≤ M_j,
for some Φ representing possible noise or a reflection from deeper inside the sample. We use the forward model
|Λ_j,m + r_j^†(n) Γ^*_j,m|^2
for the functional 𝒥^*_j in <ref>, and we still obtain a unique global minimum. For the simulation of the data, we used for Φ the forward model (for an additional layer reflection) with r̃_3 = -r̃_2 and distance d_2 = d_1. In <ref>, we plot the functional for different values of r_2^† and M_2,i∈{150,300}. In <ref> the two bottom rows show the position of the global minimum for the disturbed data. Here, again, an intermediate value of M_2 is the most effective one.
Aside from the global minimum, which we obtain in every case above, there exist also local minima. These occur if the model peak overlaps with the side lobes of the data sinc-function and therefore yields a smaller L^2-error. Hence, for the minimization of the functionals, the initial guess is crucial. Otherwise the iteration may get stuck in the neighbourhood of one of these local minima.
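The following self-contained toy computation mimics this width search: the windowed data around the second peak is compared with a test model whose reflection coefficient deliberately satisfies |r_test| < |r̃_2|, and a coarse grid over d provides the initial guess for the subsequent local minimization. The sinc-shaped signals stand in for the full Gaussian-beam model and every number is an illustrative assumption.

import numpy as np

sinc = lambda x: np.sinc(x / np.pi)               # sin(x)/x
dk_half, n1 = 5.7e4, 1.5088                       # half bandwidth, first layer index
Delta1, d_true = 2.0e-3, 0.174e-3                 # first peak position, true width
r1, r2_tilde = -0.20, 0.066                       # first / second reflection

z = np.linspace(Delta1 + 2e-4, Delta1 + 9e-4, 401)          # window U_2
data = np.abs(r1 * sinc(dk_half * (z - Delta1))
              + r2_tilde * sinc(dk_half * (z - Delta1 - 2 * n1 * d_true))) ** 2

def J(d, r_test):
    model = np.abs(r1 * sinc(dk_half * (z - Delta1))
                   + r_test * sinc(dk_half * (z - Delta1 - 2 * n1 * d))) ** 2
    return np.sum((model - data) ** 2)

d_grid = np.linspace(0.10e-3, 0.25e-3, 1501)
r_test = 0.05                                     # |r_test| < |r_tilde_2|
d0 = d_grid[np.argmin([J(d, r_test) for d in d_grid])]
print("initial guess for d_1:", d0)               # close to the assumed 0.174 mm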
§ NUMERICAL EXPERIMENTS
§.§ OCT setup and phantom description
In the following examples we illustrate the applicability of the proposed method for both simulated and experimental data. The OCT experimental data, shown in <ref>, was generated by a swept-source OCT system with a laser light source centered at the wavelength λ_0 = 1300nm. The bandwidth around the central wavelength ranges from approximately 1282.86nm to 1313.76nm. During a single experiment the object was raster scanned on a grid of 1024 × 1024 points in horizontal direction and for each raster scan a spectrum consisting of 1498 data points, equally spaced with respect to the wavenumber, was recorded. Due to the relatively small bandwidth of the laser, the approximation that n does not depend on the wavenumber is valid. For the reconstruction, a single depth profile was chosen from the data set and no averaging has been done.
Additionally, for the calibration of all necessary system parameters, introduced in <cit.>, the object was shifted multiple times along the axial direction, where for each position a full measurement was recorded.
In order to examine the correctness of the recovered values, the object needs to consist of materials for which the ground-truth refractive indices on the used spectrum are available. The sample is a three-layer medium in which two cover glasses enclose water. The refractive indices of both materials, at the central wavelength, are given by n_g ≈ 1.5088 and n_w ≈ 1.3225. The thickness of every layer is estimated to be between 0.14mm and 0.19mm.
The simulated data were generated for the same three-layer sample with refractive indices n_0 = 1, n_1 = 1.5088, n_2 = 1.3225, n_3 = 1.5088 and widths d = [0.174, 0.186, 0.173]mm by using the forward model presented in <ref>.
Given the list of system and object parameters described in <ref>, the formula for the reflected field <ref> was implemented where the domain of accepted wave directions, see <ref>, was replaced by its discretized version.
In addition we use a sample showing characteristic lateral structure, which is then visible in the outcome of the different raster scans across the object. On a grid of 20× 20, we define the object for every (l,m)∈{1,…,20}^2 as a two-layer object with n(l,m) = [1, 1.5088, n̂(l,m)], where the distribution n̂ is shown in <ref>. For every grid element, we collected the simulated measurement data. For the reconstruction 5% uniformly distributed noise was added to the data.
Multiple reflections, due to their minor contribution to the final measurement, have been omitted.
§.§ Inverse problem and minimization
For the inverse problem, the functionals 𝒥_j and 𝒥^*_j, based on the direct model, defined in <ref> and <ref> respectively, were implemented. The discretized integral uses M_j = M = 401 equally spaced grid points in Fourier domain, which have been chosen symmetrically around every exposed local maximum in the data.
Simulated vs. Real Data
In <ref>, we discussed the minimization problem for the data y_j, 1≤ j≤ J+1, which so far, has been considered as an element of the range of the forward operator (plus a certain noise level δ). However, experimental data fails to satisfy this relation because of the unknown intensity in the focal plane:
In <ref> we have set the amplitude of the Gaussian distribution in our forward model equal to one, which further yields that the maximal intensity of the incident field in <ref>, that is |E^(0)|^2 in the focal spot x_3 = r_0, is also one.
For the experimental data the maximal intensity represents the power of the laser light in the focal spot and is in general an unknown quantity. We consider this quantity as a real positive number Q_0. Even for a single layer reflection, the factor Q_0 causes incorrect reconstructions if it is not included in the model. Let the data be given by
ỹ = Q_0 y, for y = |r^†(ñ_1) Γ^*_1,m_0|^2,
for a single grid point z_1,m_0. Then from <ref>, we deduce that
|r^†(n) Γ^*_1,m_0|^2 = Q_0 |r^†(ñ_1) Γ^*_1,m_0|^2,
which then yields |r^†(n)| = √(Q_0) |r^†(ñ_1)|.
We correct the mismatch between the model and the experimental data by quantifying Q_0 from the first air-glass reflection. Hereby, we use the coverglass plate as a calibration layer. Instead of the refractive index n_1, which we assume in this case to be determined perfectly beforehand, we recover the quantity Q_0 as the quotient between the data and the forward model for j=1 in <ref>.
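A hedged sketch of this calibration step (the peak values below are placeholders, not measured numbers):

n_glass = 1.5088                                    # calibration layer
r1 = (1.0 - n_glass) / (1.0 + n_glass)              # assumed known first reflection

peak_data = 6.1e-4                                  # |F_k(data)|^2 at the first peak (placeholder)
peak_model = 1.48e-3 * r1**2                        # |r_1 Gamma*_{1,m0}|^2 for unit power (placeholder)

Q0 = peak_data / peak_model
print("estimated source power Q_0 =", Q0)
# subsequent steps then work with data / Q0 (equivalently, amplitudes scale with sqrt(Q0))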
Minimization in Two Steps: In <ref> (which is carried out in <ref>) we proposed a layer-by-layer method where in each step a single pair of parameters (n_j,d_j-1), 1≤ j≤ J+1, is recovered. Each step itself is then again split into two steps. After the minimum of the functional 𝒥^*_j is determined, the output is used as an input in form of an initial guess for the actual minimization problem corresponding to the functional 𝒥_j. In <ref> and <ref> a comparison of the two functionals (for the minimum value of d) for every reconstruction step is provided for simulated and experimental data, respectively.
§.§ Results
The outcomes of both minimization problems, for simulated and experimental data, are presented in this section. In <ref> we plot the functionals 𝒥_j, j=1,2,3, on an (n,d)-grid over the half of the admissible set 𝒜_j in which the global minimum is located.
For the experimental data, the functionals evaluated over an (n,d)-grid are shown in <ref>. Additionally, contour plots of the close neighbourhoods of each pair of minima are shown in <ref>. The reconstructed values are plotted in <ref>. The laterally reconstructed refractive index distribution is shown in <ref>.
§ CONCLUSION
The inverse problem of quantitative optical coherence tomography for layered media has been discussed. Based on a Gaussian-beam forward model, a discrete least-squares minimization problem for the reconstruction of the refractive index and the thickness of each layer was formulated, and the existence and uniqueness of its solutions were discussed. The method is validated by numerical examples on the refractive index reconstruction of a three-layer object from both simulated and experimental data.
§ ACKNOWLEDGEMENT
This work was made possible by the greatly appreciated support of the Austrian Science Fund (FWF) via the special research programme SFB F68 “Tomography Across the Scales”:
Peter Elbau and Leopold Veselka have been supported via the subproject F6804-N36 “Quantitative Coupled Physics Imaging”, Lisa Krainz and Wolfgang Drexler have been supported via the subproject F6803-N36 “Multi-Modal Imaging”.
|
http://arxiv.org/abs/2306.07787v1
|
20230613140823
|
Quantum coherent feedback control of an N-level atom with multiple excitations
|
[
"Haijin Ding",
"Guofeng Zhang"
] |
quant-ph
|
[
"quant-ph",
"cs.SY",
"eess.SY",
"physics.atom-ph"
] |
Quantum coherent feedback control of an N-level atom with multiple excitations
Haijin Ding, Guofeng Zhang
Haijin Ding is with the Laboratoire des Signaux et Systèmes (L2S), CNRS-CentraleSupélec-Université Paris-Sud, Université Paris-Saclay, 3, Rue Joliot Curie, 91190, Gif-sur-Yvette, France (e-mail: [email protected]).
Guofeng Zhang is with the Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, SAR, China, and The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, 518057, China (e-mail: [email protected]).
Corresponding author: Guofeng Zhang.
July 31, 2023
The purpose of this paper is to study the coherent feedback control dynamics of a network in which an N-level atom is coupled with a cavity and the cavity is coupled with a single waveguide or with multiple parallel waveguides through two semitransparent mirrors.
When the atom is initially excited at the highest energy level, it can emit multiple photons into the cavity via spontaneous emission; the photons in the cavity can be transmitted into the waveguide and then re-interact with the cavity quantum electrodynamics (cavity-QED) system through the feedback channel. When the cavity is coupled with a single waveguide, the generation of multi-photon states in the waveguide can be characterized by the exponential stability of a linear control system whose feedback delay is determined by the feedback loop length. By tuning the feedback loop length, there can be zero or multiple photons in the waveguide. Moreover, when the cavity-QED system is coupled with multiple parallel waveguides, the emitted photons oscillate among the waveguides, and this process is influenced by the feedback loop length and the coupling strengths among the waveguides.
quantum coherent feedback control, multi-photon state, cavity-waveguide interaction;
§ INTRODUCTION
Quantum feedback control has varied applications in quantum information processing (QIP) <cit.>. In quantum measurement feedback by means of classical controllers, the sensors can perform measurements on the quantum system by detecting the coherent fields such as the fluorescence field, then based on the measurement results, the actuators can supply semiclassical potentials to regulate the quantum states, and this can work as gate operations in quantum computations <cit.>. Besides, in coherent feedback control, the quantum system can be controlled by a quantum controller consisting of other simple quantum systems such as ions, photons, or spins <cit.>. For example, to control a trapped ion, another ion can work as a quantum controller, then the system and controller ions can create a joint system. By separately focusing light on the system ion or the controller ion, the system ion can be modulated via the feedback mediation of the vibrational modes between two ions <cit.>.
For coherent feedback based on the cavity quantum electrodynamics (cavity-QED) system, the atoms are coupled with the cavity, and the cavity can be coupled with the waveguide through semi-transparent mirrors <cit.>. The photons in the cavity can then be transmitted into the waveguide, and the photons in the waveguide can re-interact with the cavity-QED system after being transmitted through the waveguide, which can be regarded as a coherent feedback channel; this coherent feedback process can stabilize the Rabi oscillation in the cavity <cit.> and control the number of available photons by tuning the feedback loop length <cit.>. When an atom is directly coupled with a semi-infinite waveguide, the photon emitted by the atom can also enter the waveguide and be reflected by the terminal mirror, and the reflected photon wave packet can then re-interact with the atom to construct the feedback loop <cit.>. When there are two or multiple two-level atoms coupled with the waveguide, the reflection of the photon packet by the atoms can induce additional feedback interactions, which can be used to create entangled quantum states <cit.> and generate multi-photon states <cit.> for quantum state engineering <cit.>. Take the multi-photon cluster state as an example <cit.>: it can be generated with an atomic emitter which is driven repeatedly, and the emitted photons can be reflected by the terminal mirror of the waveguide to further generate multi-dimensional photonic networks. In both the coherent feedback network based on cavity-waveguide interactions <cit.> and that based on atom-waveguide interactions <cit.>, the generated photonic states depend on the transmission delay between the nodes of the quantum network, which is similar to traditional time-delay systems <cit.>; therefore it is meaningful to explore delay-dependent quantum coherent feedback systems from the perspective of control theory.
For the typical quantum coherent feedback control based on the cavity-waveguide coupled network <cit.>, the two- or multi-level atom is coupled with the cavity via the Jaynes-Cummings model, and the cavity is coupled with the waveguide with a series of continuous modes via two semitransparent mirrors. In this architecture,
the transparency of the cavity mirrors can influence the coupling strength between the cavity and the waveguide, and the feedback loop length can influence the evolution of the atomic states and the number of available photons in the waveguide <cit.>. Additionally, the cavity-QED system can be coupled with an array of parallel waveguides <cit.>; the photons can then spread out over multiple waveguides, which can serve as a beam splitter and further construct logic gates in quantum computation <cit.>. In all the architectures above, the total available number of photons is determined by the physical design of the feedback network, and the multi-photon source can be multiple excited two-level atoms <cit.>, an excited multi-level atom <cit.>, or a single two-level atom which is driven repeatedly <cit.>.
Practically, multi-photon states can be generated with various platforms. For example, a two-photon state can be generated by re-exciting an initially excited quantum dot after its first spontaneous emission, during which the width of the driving pulse influences the efficiency of two-photon generation <cit.>. Alternatively, two independent photons can be generated from two isolated emitters, such as two remotely trapped ions <cit.> or two independent superconducting qubits <cit.>; then either the two emitters or the generated photons can be entangled via photon interference and atom-photon interactions. Based on this, multi-photon states can be experimentally realized by repeatedly driving an atomic ensemble to a collective excited state, which then emits multiple entangled photons <cit.>. Apart from spontaneous emission, parametric down-conversion is another method to generate multi-photon Fock states and can be further used to generate entangled quantum states <cit.>, a photon source based on a Sagnac interferometer can generate multiple tunable correlated photons using a silicon nanophotonic waveguide <cit.>, and multiple micro-ring resonators with a pulsed laser input can generate entangled photon pairs, which can be further used in chip-to-chip teleportation <cit.>. Additionally, multi-photon states widely exist in closed quantum feedback loops and have wide potential applications in quantum optics. For example, by designing the feedback control via detecting photons emitted by the atom, we can estimate and track the motion trajectory of the atom in the cavity, and then steer the atomic motion <cit.>. In the generation of single-photon states with pump lasers and spontaneous four-wave mixing, the pump field can be fed back onto the resonator to stabilize the central frequency of the single photon, which enhances the efficiency of generating high-quality continuous single photons <cit.>. Besides, in quantum-dot-based single-photon sources, the photon stream can be transmitted via an optical Sagnac loop with delays, so that the photon statistics of the quantum states become tunable <cit.>. In this experimental scheme, the function of the Sagnac loop is similar to that of a feedback loop with delays, and it provides a method to control the multi-photon statistics by designing the length of the Sagnac loop. In summary, multi-photon states are experimentally available with or without a feedback loop, while feedback control with multi-level atoms has not been explored; it should provide another tunable and efficient scheme, obtained by replacing the two-level atom in Ref. <cit.> with a multi-level atom.
Once the atom in the cavity is initially excited, the number of generated photons is determined by the energy level structure of the atom. When the cavity is further coupled with the waveguide to construct a feedback loop, the whole quantum system can be modeled as a linear control system with delays <cit.>. In this system, the number of photons in the cavity and the waveguide is related to the oscillation properties of the amplitudes of the eigenstates, that is, to the stability properties of the quantum linear system <cit.>. For example, when the eigenstates in the cavity stably converge to zero, all the emitted photons will eventually be in the waveguide. Similarly, in a classical linear control system with delays, the stability can be analyzed with the linear matrix inequality (LMI) approach combined with the Lyapunov method <cit.>, and the convergence of the states can be evaluated in terms of exponential stability for time-invariant systems <cit.> and time-varying systems <cit.>. Additionally, for a quantum system with a classical feedback controller, the quantum states can be driven by the external controller subject to system delays, the control precision can be evaluated in terms of the convergence to the target state <cit.>, and the stability of the quantum states can be assessed with exponential estimates and the Lyapunov method <cit.>. It is thus meaningful to generalize the traditional stability theory to the coherent feedback loop with photons, which is useful for the controllable generation of multi-photon states.
In this paper, we first study how to generate multi-photon states with an N-level atom coupled with a cavity, which is coupled with the waveguide to construct a feedback loop, as shown in Fig. <ref>. Then we introduce the exponential stability analysis to the quantum coherent feedback network. The photons emitted by the multi-level atom can be transmitted to the waveguide or stored in the cavity, and this can be evaluated with the exponential stability of the evolution of the eigenstates. Then we generalize the architecture in Fig. <ref> with one waveguide feedback loop to the circumstance in Fig. <ref> with parallel waveguides, study the relationship between the distribution of the photonic states and the system design, and further examine how the feedback loop length influences the evolution of the high-dimensional quantum system.
The rest of the paper is organized as follows. Section <ref> concentrates on the feedback interaction between one waveguide and a cavity coupled with a multi-level atom which is initially excited, especially on the distributions of photons influenced by the feedback design and the exponential stability of this coherent feedback network. In Section <ref>, the feedback system is time varying when the detunings between the cavity and multi-level atom are considered. In Section <ref>, we generalize to the circumstance that the cavity-QED system is coupled with an array of parallel waveguides, and the photons can oscillate among the cavity and waveguides. Section <ref> concludes this paper.
The reduced Planck constant ħ is set to be 1 in this paper.
§ COHERENT FEEDBACK CONTROL OF AN N-LEVEL ATOM WITH ONE WAVEGUIDE
As illustrated in Fig. <ref>, a ladder-type (also called Ξ-type) N-level atom is coupled with a cavity and the cavity is coupled with a waveguide of length 2L. The initially excited atom can emit a photon into the cavity; the photon can then be transmitted from the cavity to one side (upper or lower) of the waveguide, and enter back into the cavity from the other side after travelling through the waveguide. The waveguide with continuously transmitted photons thus constructs a feedback loop for the cavity-QED system, and the quantum states both in and out of the cavity can be manipulated. The length of the cavity is l, and it is assumed that l≪ L. The Hamiltonian of this quantum system can be written as:
H = H_A + ω_c a^†a +∫ω_k d_k^†d_k dk+ H_I.
Here, ω_c is the resonant frequency of the cavity, ω_k = kc refers to the continuous mode k of the waveguide with c being the velocity of the field, a(a^†) and d_k(d_k^†) are the annihilation(creation) operators of the cavity and the waveguide with mode k, respectively. The continuous modes k in the waveguide are integrated within [0,+∞). For the N-level atom, the n-th level has the energy ħω̃_n, and the Hamiltonian of the N-level atom reads H_A = ∑_n=0^N-1ħω̃_n |n⟩⟨ n|. Denote ω_n= ω̃_n - ω̃_n-1 for n=1,2,⋯,N-1 to be the energy gap, as shown in Fig. <ref>, see also <cit.>. The interaction Hamiltonian H_I includes the interaction between the waveguide and cavity as well as that between the cavity and the N-level atom, and can be concisely represented in the interaction picture when ω_1 = ω_2 =… = ω_N-1≡ω_a=ω_c as
H̃_I = -∑_n=1^N-1γ_n (σ_n^-a^† + σ_n^+a )- ∫dk [G(k,t) a^†d_k + G^*(k,t)ad^†_k ],
where σ_n^- = |n-1⟩⟨ n| and σ_n^+ = |n⟩⟨ n-1| are respectively the lowering and raising operators of the n-th energy level of the atom, γ_n is the coupling strength between the cavity and the atomic transition between states |n⟩ and |n-1⟩. Finally, G(k,t) = G_0sin(kL)e^-i(ω-Δ_0)t is the coupling strength between the cavity and the waveguide of the mode k where G_0 is the amplitude and Δ_0 = ω_a is the central mode of the emitted photon.
Assume that initially the atom is at the highest energy level |N-1⟩, and the cavity and waveguide are both empty. The evolution of the quantum state |Ψ(t)⟩ of the quantum in Fig. <ref> is governed by the Schrödinger equation
d/d t|Ψ(t)⟩ = -i H |Ψ(t)⟩.
As there are at most N-1 excitations in the whole system, the system state |Ψ(t)⟩ assumes the following form
|Ψ(t)⟩ = c_0(t)|N-1,0,0⟩ + ∑_j = 1^N-1∑_m=0^j∫⋯∫ c_j,k^m(t,k_1,⋯,k_j-m) |N-1-j,m,j-m⟩dk_1 ⋯dk_j-m,
where |N-1-j,m,j-m⟩ means that the atom is at the (N-1-j)th level, there are m photons in the cavity and j-m photons in the waveguide, and
c_j,k^m(t,k_1,⋯,k_j-m) is the time-varying amplitude of the state |N-1-j,m,j-m⟩. Especially when m=j, all the generated photons are in the cavity and accordingly c_j,k^m(t,k_1,⋯,k_j-m) is only a function of the time t, which thus can be written as c_j,k^j(t). The normalization condition of the state |Ψ(t)⟩ is
|c_0(t)|^2 + ∑_j=1^N-1∑_m=0^j∫⋯∫ |c_j,k^m(t,k_1,⋯,k_j-m)|^2 dk_1 ⋯dk_j-m =1.
Initially, |Ψ(0)⟩ =|N-1,0,0⟩. Substituting Eq. (<ref>) into the Schrödinger equation with H being the interaction Hamiltonian H̃_I in Eq. (<ref>), we can derive the ordinary differential equations (ODEs) for the amplitudes as
ċ_0(t) = iγ_N-1 c_1,k^1(t),
ċ_j,k^m(t,k_1,⋯,k_j-m) = i√(m)γ_N-j c_j-1,k^m-1(t,k_1,⋯,k_j-m) + i√(m+1)γ_N-j-1 c_j+1,k^m+1(t,k_1,⋯,k_j-m)
+i∑_p=1^j-m+1∫ G(k_p,t) c_j,k^m-1(t,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+1) dk_p
+i∑_p=1^j-m G^*(k_p,t) c_j,k^m+1(t,k_1,⋯,k_p-1,k_p+1,…, k_j-m-1), m>0,
ċ_j,k^0(t,k_1,⋯,k_j) = iγ_N-j-1 c_j+1,k^1(t,k_1,⋯,k_j)
+i∑_p=1^j G^*(k_p,t) c_j,k^1(t,k_1,⋯,k_p-1,k_p+1,…, k_j-1), m=0.
Here, Eq. (<ref>) means that the atom can absorb one photon from the cavity to reach the highest excited level if the system is in the state |N-2,1,0⟩. The first two terms on the right-hand side of Eq. (<ref>) represent the exchange of a single photon between the atom and the cavity; the third term shows that the cavity can absorb one photon from the waveguide; and the last term means that one photon can be transmitted from the cavity to the waveguide. The superscript “0” on the left-hand side of Eq. (<ref>) indicates that all the photons are in the waveguide; this situation can be reached either by the absorption of one photon from the cavity by the atom, as shown by the first term on the right-hand side of Eq. (<ref>), or by the emission of the single cavity photon into the waveguide, as shown by the second term on the right-hand side of Eq. (<ref>).
For example, when N=2 there is only one excitation in the system, so there is at most one photon in the cavity or the waveguide, and the system in Eq. (<ref>) reduces to Eqs. (10-12) in Ref. <cit.>. In general, for an N-level ladder-type atom, according to the mathematical calculations in the Appendix, Eq. (<ref>) can be rewritten as a system of ODEs with time delays:
ċ_0(t) = iγ_N-1 c_1k^1(t),
ċ_j,k^m(t,k_1,⋯,k_j-m) = i√(m)γ_N-j c_j-1,k^m-1(t,k_1,⋯,k_j-m) + i√(m+1)γ_N-j-1 c_j+1,k^m+1(t,k_1,⋯,k_j-m)
-G_0^2/4c∑_p=1^j-m+1[c_j,k^m(t,k_1,⋯,k_p-1,k_p+1,…, k_j-m) - e^iΔ_0τ c_j,k^m(t-τ,k_1,⋯,k_p-1,k_p+1,…, k_j-m) ]
+i∑_p=1^j-m G^*(k_p,t) c_j,k^m+1(t,k_1,⋯,k_p-1,k_p+1,…, k_j-m-1), m>0,
ċ_j,k^0(t,k_1,⋯,k_j) = iγ_N-j-1 c_j+1,k^1(t,k_1,⋯,k_j)
+i∑_p=1^j G^*(k_p,t) c_j,k^1(t,k_1,⋯,k_p-1,k_p+1,…, k_j-1), m=0,
where τ = 2L/c denotes the round-trip delay of the transmission of photons in the waveguide.
Compared with Eq. (<ref>), the third term on the right-hand side of Eq. (<ref>) shows that the amplitude c_j,k^m(t,k_1,⋯,k_j-m) is not only influenced by its present value, but also the past value due to the round-trip delay τ. This indicates how the feedback loop mediated by the waveguide influences the dynamics of the atom.
§.§ Example: feedback control of the ladder-type three-level atom
We take the three-level atom as an example to demonstrate the system modeling in the previous section. Assume the initial system state is |Ψ(0)⟩ = |2,0,0⟩. Take N=3 in Eq. (<ref>), then c_1,k^m with m=0,1 indicates that there is one photon either in the cavity or the waveguide, and c_2,k^m with m=0,1,2 indicates that there are overall two photons in the cavity and waveguide. According to Eq. (<ref>), the quantum state is
|Ψ(t)⟩ = c_0(t)|2,0,0⟩ + c_1,k^1(t) |1,1,0⟩ + ∫ c_1,k^0(t,k) |1,0,1⟩dk
+ c_2,k^2(t) |0,2,0⟩ + ∫ c_2,k^1(t,k) |0,1,1⟩dk + ∫∫ c_2,k^0(t,k_1,k_2) |0,0,2⟩dk_1 dk_2.
The interaction Hamiltonian in Eq. (<ref>) reads
H̃_I = - γ_1 (σ_1^-a^† + σ_1^+a) - γ_2 (σ_2^-a^† + σ_2^+a) - ∫dk [G(k,t) a^†d_k + G^*(k,t)ad^†_k].
By Eqs. (<ref>,<ref>),
ċ_0(t) = iγ_2 c_1,k^1(t),
ċ_1,k^1(t) = iγ_2 c_0(t) + i γ_1 c_2,k^2(t) -G_0^2/4c [c_1,k^1(t) -e^iΔ_0τ c_1,k^1(t-τ)],
ċ_2,k^2(t) = iγ_1 c_1,k^1(t) - G_0^2/4c [ c_2,k^2(t) -e^iΔ_0τ c_2,k^2(t-τ)],
ċ_1,k^0(t,k) =i G^*(k,t) c_1,k^1(t) + iγ_1 c_2,k^1(t,k),
ċ_2,k^1(t,k) = iγ_1 c_1,k^0(t,k) + i G^*(k,t) c_2,k^2(t) -G_0^2 /2c [c_2,k^1(t,k) - e^iΔ_0τc_2,k^1(t-τ,k) ],
ċ_2,k^0(t,k_1,k_2) = iG^*(k_2,t) c_2,k^1(t,k_1) + iG^*(k_1,t) c_2,k^1(t,k_2).
Denote κ = G_0^2/4c. Applying the Laplace transform to Eqs. (<ref>-<ref>), we get
sC_0(s) -1 = iγ_2 C_1,k^1(s),
sC_1,k^1(s) = iγ_2 C_0(s) + i γ_1 C_2,k^2(s) -κ[C_1,k^1(s) -e^iΔ_0τ e^-sτC_1,k^1(s)],
sC_2,k^2(s) = iγ_1 C_1,k^1(s) - κ[ C_2,k^2(s) -e^iΔ_0τ e^-sτ C_2,k^2(s) ],
where C_0(s), C_j,k^j(s) are the frequency counterparts of the time-domain functions c_0(t), c_j,k^j(t), respectively. According to Eq. (<ref>), we have
C_0(s) = iγ_2 C_1,k^1(s) + 1/s,
sC_1,k^1(s) = -γ_2^2 C_1,k^1(s) + iγ_2/s - γ_1^2/s+ κ(1- e^(iΔ_0-s)τ) C_1,k^1(s) -κ(1- e^(iΔ_0-s)τ)C_1,k^1(s) ,
C_2,k^2(s) = iγ_1/s+ κ(1- e^(iΔ_0-s)τ) C_1,k^1(s).
Finally, Eq. (<ref>) can be rewritten as
[s^2+ γ_2^2 + sγ_1^2/s+ κ (1- e^(iΔ_0-s)τ ) + κ (1- e^(iΔ_0-s)τ)s ]C_1,k^1(s) = iγ_2.
In what follows we discuss two scenarios depending on the length of the waveguide.
§.§.§ Feedback with a short waveguide
In a practical design, the length 2L is finite and the velocity of the field in the waveguide is large, thus τ = 2L/c ≪ 1 and e^-isτ≈ 1 (note that the inverse Laplace transform is taken by integrating over the right half of the complex plane close to the imaginary axis). As a result, Eq. (<ref>) can be simplified as:
[(s^2+ γ_2^2) (s+ κ (1- e^iΔ_0τ)) + sγ_1^2 + κ (1- e^iΔ_0τ)s (s+ κ (1- e^iΔ_0τ)) ]C_1,k^1(s) = iγ_2 (s+ κ (1- e^iΔ_0τ)).
Consequently,
C_1,k^1(s) = iγ_2 (s+ κ (1- e^iΔ_0τ))/[(s^2+ γ_2^2) (s+ κ (1- e^iΔ_0τ)) + sγ_1^2 + κ (1- e^iΔ_0τ)s (s+ κ (1- e^iΔ_0τ)) ]
=iγ_2 (s+ κ (1- e^iΔ_0τ))/s^3 +2κ (1- e^iΔ_0τ)s^2 + (γ_1^2 +γ_2^2 + κ^2(1- e^iΔ_0τ)^2)s + γ_2^2 κ (1- e^iΔ_0τ).
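A quick numerical check of the resulting pole locations is possible with NumPy; the parameter values and the normalization c = 1 in κ = G_0^2/4c are illustrative assumptions:

```python
import numpy as np

def cavity_poles(g1, g2, kappa, phase):
    """Roots of s^3 + 2z s^2 + (g1^2 + g2^2 + z^2) s + g2^2 z,
    with z = kappa*(1 - e^{i Delta_0 tau}) and phase = Delta_0 * tau."""
    z = kappa * (1.0 - np.exp(1j * phase))
    return np.roots([1.0, 2.0 * z, g1**2 + g2**2 + z**2, g2**2 * z])

g1 = g2 = 0.3
kappa = 0.2**2 / 4.0                 # G_0^2/(4c) with G_0 = 0.2 and c set to 1
print(cavity_poles(g1, g2, kappa, 2 * np.pi))  # z = 0: poles 0, +/- i*sqrt(g1^2+g2^2)
print(cavity_poles(g1, g2, kappa, 3 * np.pi))  # for these parameters: negative real parts
```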
Denote c_0(∞)=lim_t→∞c_0(t) and c_j,k^m(∞) = lim_t→∞c_j,k^m(t) as the steady values of the amplitudes. We have the following results.
When Δ_0τ≠ 2nπ, there are eventually two photons in the waveguide. When Δ_0τ = 2nπ, the excitation oscillates between the atom and the cavity and there are no photons in the waveguide.
When Δ_0τ≠ 2nπ, by Eq. (<ref>) and the final value theorem, we have c_1,k^1(∞) = lim_s→ 0sC_1,k^1(s) = 0. Moreover, as 1- e^(iΔ_0-s)τ≠ 0, by Eqs. (<ref>) and (<ref>), we get
c_0(∞) = lim_s→ 0sC_0(s) = lim_s→ 0 [ iγ_2 C_1,k^1(s) + 1 ] = -γ_2^2 κ (1- e^iΔ_0τ) /γ_2^2 κ (1- e^iΔ_0τ) + 1 =0,
and
c_2,k^2(∞) = lim_s→ 0sC_2,k^2(s) = lim_s→ 0 s iγ_1/s+ G_0^2/4c (1- e^(iΔ_0-s)τ) C_1,k^1(s) =0.
Thus, the waveguide eventually contains two photons.
On the other hand, when Δ_0τ = 2nπ where n=0,1,2,⋯, 1- e^iΔ_0τ = 0.
Then
C_1,k^1(s) =iγ_2 s/s^3 + (γ_1^2 +γ_2^2 )s =iγ_2 /s^2 + (γ_1^2 +γ_2^2 ) .
Apply the inverse Laplace transform to C_1,k^1(s) in Eq. (<ref>) yields
c_1,k^1(t) = iγ_2/√(γ_1^2 +γ_2^2)sin(√(γ_1^2 +γ_2^2) t).
Moreover, by Eqs. (<ref>) and (<ref>),
C_0(s) = iγ_2 C_1,k^1(s) + 1/s
=γ_1^2/γ_1^2 + γ_2^21/s + γ_2^2/γ_1^2 + γ_2^2s/s^2 + γ_1^2 + γ_2^2.
Therefore, applying the inverse Laplace transform we get
c_0(t) =γ_1^2/γ_1^2 + γ_2^2Θ(t) + γ_2^2/γ_1^2 + γ_2^2cos (√(γ_1^2 + γ_2^2) t ),
where Θ(t) represents the Heaviside step function. Finally, by Eqs. (<ref>) and (<ref>),
C_2,k^2(s) =iγ_1/s+ G_0^2/4c (1- e^(iΔ_0-s)τ) C_1,k^1(s)
=-γ_1γ_2/γ_1^2 +γ_2^2 1/s + γ_1γ_2/γ_1^2 +γ_2^2 s/s^2 + (γ_1^2 +γ_2^2 ).
Applying the inverse Laplace transform to C_2,k^2(s) yields
c_2,k^2(t) =-γ_1γ_2/γ_1^2 +γ_2^2 Θ(t) + γ_1γ_2/γ_1^2 +γ_2^2 cos (√(γ_1^2 +γ_2^2)t )
=γ_1γ_2/γ_1^2 +γ_2^2 [cos (√(γ_1^2 +γ_2^2)t )-1].
Notice that
|c_0(t)|^2+ |c_1,k^1(t)|^2 +|c_2,k^2(t)|^2
=γ_1^4/ ( γ_1^2 +γ_2^2 )^2 + γ_1^2γ_2^2/ ( γ_1^2 +γ_2^2 )^2 + γ_2^2/γ_1^2 +γ_2^2 + [γ_2^4/ ( γ_1^2 +γ_2^2 )^2 + γ_1^2γ_2^2/ ( γ_1^2 +γ_2^2 )^2 - γ_2^2/γ_1^2 +γ_2^2 ]cos^2 (√(γ_1^2 +γ_2^2)t )
=1.
The normalization condition of populations yields
c_2,k^1(t,k)= c_2,k^0(t,k) = c_2,k^2(t,k_1,k_2) = 0.
In words, the atom oscillates in the cavity while there are no photons in the waveguide.
In particular, when γ_1= γ_2, the population |c_1,k^1(t)|^2 oscillates at twice the frequency of |c_0(t)|^2 and |c_2,k^2(t)|^2, reflecting that the system can reach the middle-level state |1,1,0⟩ either by emitting one photon from |2,0,0⟩ or by absorbing one photon from |0,2,0⟩. In the following, we demonstrate Proposition <ref> with an example. Set Δ_0 =50, G_0 = 0.2, γ_1 = γ_2 =0.3. In Fig. <ref> the simulation results are compared according to the number of photons in the waveguide. In the upper three subfigures, labeled (1-1), (1-2) and (1-3), Δ_0τ = 2π. The populations of the states with no photons in the waveguide oscillate, as shown in Fig. <ref> (1-1), while the populations of the states with one or two photons in the waveguide converge to zero by t= 200τ, as shown in Fig. <ref> (1-2) and (1-3). Hence, asymptotically there are no photons in the waveguide. On the other hand, in the lower three subfigures, labeled (2-1), (2-2) and (2-3), Δ_0τ = 3π. |c_0(t)|^2, |c_1,k^1(t)|^2 and |c_2,k^2(t)|^2 converge to zero, as shown in Fig. <ref> (2-1), the single-photon population converges to zero by t= 200τ, as shown in Fig. <ref> (2-2), and finally there are two photons in the waveguide, as shown in Fig. <ref> (2-3).
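The behaviour described above can also be reproduced by integrating the closed set of delay ODEs for (c_0, c_1,k^1, c_2,k^2) given above with a simple history buffer; the forward-Euler scheme, the step size and the normalization c = 1 below are our own simplifications:

```python
import numpy as np

# Parameters of the example: Delta_0 = 50, G_0 = 0.2, gamma_1 = gamma_2 = 0.3;
# the field velocity is set to c = 1, so kappa = G_0^2/4.
Delta0, g1, g2 = 50.0, 0.3, 0.3
kappa = 0.2**2 / 4.0

def simulate(phase, t_end=250.0, dt=1e-3):
    """Forward-Euler integration of the closed delay ODEs for
    (c_0, c_1k^1, c_2k^2); phase = Delta_0 * tau fixes the loop length."""
    tau = phase / Delta0
    lag = max(1, int(round(tau / dt)))
    steps = int(t_end / dt)
    c = np.zeros((steps + 1, 3), dtype=complex)
    c[0] = [1.0, 0.0, 0.0]                      # atom initially at |2>, cavity empty
    fb = np.exp(1j * phase)                     # e^{i Delta_0 tau}
    for k in range(steps):
        c0, c1, c2 = c[k]
        d1, d2 = (c[k - lag][1:] if k >= lag else c[0][1:])
        dc = np.array([1j * g2 * c1,
                       1j * g2 * c0 + 1j * g1 * c2 - kappa * (c1 - fb * d1),
                       1j * g1 * c1 - kappa * (c2 - fb * d2)])
        c[k + 1] = c[k] + dt * dc
    return np.abs(c)**2                         # populations |c_0|^2, |c_1k^1|^2, |c_2k^2|^2

print(simulate(2 * np.pi)[-1])   # Delta_0*tau = 2*pi: populations keep oscillating
print(simulate(3 * np.pi)[-1])   # Delta_0*tau = 3*pi: cavity/atom populations decay
```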
§.§.§ Feedback control with an infinitely long waveguide
When the waveguide is so long that the induced round-trip delay τ is much larger than the lifetime of the atom, neglecting the delay-induced part, Eq. (<ref>) of the Ξ-type three-level atom can be written as
ċ_0(t) = iγ_2 c_1,k^1(t),
ċ_1,k^1(t) = iγ_2 c_0(t) + i γ_1 c_2,k^2(t) -κc_1,k^1(t) ,
ċ_2,k^2(t) = iγ_1 c_1,k^1(t) - κc_2,k^2(t) ,
ċ_1,k^0(t,k) =i G^*(k,t) c_1,k^1(t) + iγ_1 c_2,k^1(t,k),
ċ_2,k^1(t,k) = iγ_1 c_1,k^0(t,k) + i G^*(k,t) c_2,k^2(t),
ċ_2,k^0(t,k_1,k_2) = iG^*(k_2,t) c_2,k^1(t,k_1) + iG^*(k_1,t) c_2,k^1(t,k_2).
Applying the Laplace transform to c_0(t), c_1,k^1(t) and c_2,k^2(t) yields
C_0(s) = iγ_2 C_1,k^1(s) + 1/s,
sC_1,k^1(s) = -γ_2^2 C_1,k^1(s) + iγ_2/s - γ_1^2/s+ κ C_1,k^1(s) -κC_1,k^1(s) ,
C_2,k^2(s) = iγ_1/s+ κ C_1,k^1(s),
respectively, which can be rewritten as
C_1,k^1(s)
=iγ_2 (s+κ)/s^3 + 2κ s^2 + s(γ_1^2 + γ_2^2 + κ^2) + κγ_2^2,
C_0(s) = s^2 + 2κ s +γ_1^2 + κ^2/s^3 + 2κ s^2 +s(γ_1^2 + γ_2^2 +κ^2) + κγ_2^2,
and
C_2,k^2(s) =-γ_1γ_2 /s^3 + 2κ s^2 + s(γ_1^2 + γ_2^2 + κ^2) + κγ_2^2,
respectively.
Clearly, according to the final value theorem we get
lim_t→∞ c_0(t) = lim_t→∞ c_1,k^1(t)=lim_t→∞ c_2,k^2(t) = 0.
That is, when the feedback loop is infinitely long, eventually the atom settles to its ground state, the cavity is empty, and there are two photons in the waveguide.
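This decay can also be read off numerically from the poles of the rational functions above, e.g. with the same illustrative parameters as before:

```python
import numpy as np

g1 = g2 = 0.3
kappa = 0.01                                   # G_0^2/(4c) with G_0 = 0.2, c = 1
poles = np.roots([1.0, 2.0 * kappa, g1**2 + g2**2 + kappa**2, kappa * g2**2])
print(poles, np.all(poles.real < 0))           # all poles in the open left half-plane
```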
§ COHERENT FEEDBACK CONTROL WITH DETUNINGS
For a general N-level ladder-type atom, the interaction Hamiltonian H̃_I in Eq. (<ref>) should be generalized to <cit.>,
H̃_I = -∑_n=1^N-1γ_n (e^-iδ_ntσ_n^-a^† + e^iδ_ntσ_n^+a )- ∫dk [G(k,t) a^†d_k + G^*(k,t)ad^†_k ],
where δ_n = ω_n-ω_c, (n = 1,2,⋯,N-1), represents the detuning between the n-th level of the atom and the cavity. Accordingly, Eq. (<ref>) should be modified as
ċ_0(t) = iγ_N-1 e^iδ_N-1t c_1k^1(t),
ċ_j,k^m = i√(m)γ_N-j e^-iδ_N-jt c_j-1,k^m-1(t,k_1,⋯,k_j-m) + i√(m+1)γ_N-j-1 e^iδ_N-j-1t c_j+1,k^m+1(t,k_1,⋯,k_j-m)
-κ∑_p=1^j-m+1 [c_j,k^m(t,k_1,⋯,k_p-1,k_p+1,…, k_j-m) - e^iΔ_0τ c_j,k^m(t-τ,k_1,⋯,k_p-1,k_p+1,…, k_j-m) ]
+i∑_p=1^j-m G^*(k_p,t) c_j,k^m+1(t,k_1,⋯,k_p-1,k_p+1,…, k_j-m), m>0,
ċ_j,k^0(t,k_1,⋯,k_j) = iγ_N-j-1 e^iδ_N-j-1t c_j+1,k^1(t,k_1,⋯,k_j)
+i∑_p=1^j G^*(k_p,t) c_j,k^1(t,k_1,⋯,k_p-1,k_p+1,…, k_j), m=0.
When j=m, all the emitted photons are in the cavity, then c_j,k^m (t,k_1,⋯,k_j-m) is only the function of time t. Thus we can define a time-domain vector
X(t) = [c_0(t), c_1,k^1(t),⋯, c_N-1,k^N-1(t)]^⊤∈𝐂^N,
where the superscript ⊤ represents the transpose of a vector. Then from Eq. (<ref>) we can get
ċ_0(t) = iγ_N-1 e^iδ_N-1t c_1k^1(t),
ċ_1,k^1(t) = iγ_N-1 e^-iδ_N-1tc_0(t) + i √(2)γ_N-2 e^iδ_N-2t c_2,k^2(t) -κ[c_1,k^1(t) -e^iΔ_0τ c_1,k^1(t-τ)],
ċ_j,k^j(t) = i√(j)γ_N-j e^-iδ_N-jt c_j-1,k^j-1(t) + i√(j+1)γ_N-j-1 e^iδ_N-j-1t c_j+1,k^j+1(t) -κ[c_j,k^j(t) - e^iΔ_0τ c_j,k^j(t-τ) ],
ċ_N-1,k^N-1(t) = i√(N-1)γ_1 e^-iδ_1t c_N-2,k^N-2(t) -κ[c_N-1,k^N-1(t) - e^iΔ_0τ c_N-1,k^N-1(t-τ) ],
where j =2,3,⋯, N-2. We define two matrices
A(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)≜
[ 0 iγ_N-1 e^iδ_N-1t ⋯ 0 0 0 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ i√(j)γ_N-j e^-iδ_N-jt -κ i√(j+1)γ_N-j-1 e^iδ_N-j-1t ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 0 0 0 ⋯ i√(N-1)γ_1 e^-iδ_1t -κ; ],
and
B ≜[ 0 0 0 ⋯ 0; 0 κ e^iΔ_0τ 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ κ e^iΔ_0τ; ].
Clearly, A(t) is time-varying, and the constant matrix B is determined by the round trip delay τ and the coupling strength κ between the waveguide and cavity.
With the aid of these two matrices, Eq. (<ref>) can be rewritten in a more compact form as:
Ẋ(t) = A(t) X(t) + B X(t-τ),
X(t) = φ(t), ∀t∈[-τ,0],
where φ(t) ≡ [1, 0,⋯, 0 ]^⊤ for all t∈ [-τ,0].
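For numerical work it is convenient to assemble the matrices A(t) and B defined above directly for an arbitrary N; the helper below (function name and argument conventions are ours) is a minimal sketch:

```python
import numpy as np

def feedback_matrices(t, gammas, deltas, kappa, phase):
    """A(t) and B of X'(t) = A(t) X(t) + B X(t - tau) for an N-level ladder atom;
    gammas[n-1] = gamma_n, deltas[n-1] = delta_n (n = 1,...,N-1), phase = Delta_0*tau."""
    N = len(gammas) + 1
    A = np.zeros((N, N), dtype=complex)
    for j in range(1, N):
        A[j, j] = -kappa
        # lowering term of row j: i sqrt(j) gamma_{N-j} e^{-i delta_{N-j} t}
        A[j, j - 1] = 1j * np.sqrt(j) * gammas[N - j - 1] * np.exp(-1j * deltas[N - j - 1] * t)
        # raising term of row j-1: i sqrt(j) gamma_{N-j} e^{+i delta_{N-j} t}
        A[j - 1, j] = 1j * np.sqrt(j) * gammas[N - j - 1] * np.exp(1j * deltas[N - j - 1] * t)
    B = kappa * np.exp(1j * phase) * np.eye(N, dtype=complex)
    B[0, 0] = 0.0                       # the c_0 equation carries no delayed term
    return A, B

# e.g. a four-level atom with equal couplings, no detuning, Delta_0*tau = 3*pi:
A, B = feedback_matrices(0.0, [0.6, 0.6, 0.6], [0.0, 0.0, 0.0], kappa=0.01, phase=3 * np.pi)
```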
When γ_1, γ_2,⋯,γ_N-1≠ 0, there is no such X(t) that lim_t→∞Ẋ(t) = 0 and lim_t→∞X(t) ≠ 0.
Noticing that
A(t)+B =
[ 0 iγ_N-1 e^iδ_N-1t ⋯ 0 0 0 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ i√(j)γ_N-j e^-iδ_N-jt κ(e^iΔ_0τ-1) i√(j+1)γ_N-j-1 e^iδ_N-j-1t ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 0 0 0 ⋯ i√(N-1)γ_1 e^-iδ_1t κ(e^iΔ_0τ-1); ].
Denote the determinant |A(t)+B| = D_N. Clearly, D_N ≠ 0 for all t≥0 when γ_1, γ_2,⋯,γ_N-1≠ 0.
Assume that lim_t→∞X(t) = X̅≠ 0. Then by Eq. (<ref>) we get lim_t→∞Ẋ(t) = lim_t→∞(A(t)+B)X̅. However, as D_N ≠ 0 for all t≥0, lim_t→∞Ẋ(t)=0 if and only if X̅=0, which contradicts the condition that X̅≠ 0.
Notice that when an amplitude c_ψ(t) of a quantum state |ψ⟩ satisfies that lim_t→∞ċ_ψ(t) = 0, then the state will not oscillate. We have the following proposition.
The oscillation property of the quantum feedback control system described in Eq. (<ref>) is equivalent to the oscillation property of the amplitude c_1,k^1(t) under the condition that γ_1, γ_2,⋯,γ_N-1≠ 0.
(1) As illustrated in Proposition <ref>, the condition lim_t→∞Ẋ(t) = 0 implies that lim_t→∞X(t) = 0, and obviously lim_t→∞c_1,k^1(t) = 0.
(2) When c_1,k^1(t) oscillates, lim_t→∞c_1,k^1(t) does not exist. Then c_0(t) oscillates according to Eq. (<ref>).
§.§ Example: feedback control for a three-level atom with detunings
For a Ξ-type three-level atom with detunings, Eq. (<ref>) becomes
ċ_0(t) = iγ_2 e^iδ_2t c_1,k^1(t),
ċ_1,k^1(t) = iγ_2 e^-iδ_2tc_0(t) + i γ_1 e^iδ_1t c_2,k^2(t) -κ[c_1,k^1(t) -e^iΔ_0τ c_1,k^1(t-τ)],
ċ_2,k^2(t) = iγ_1 e^-iδ_1tc_1,k^1(t) - κ[ c_2,k^2(t) -e^iΔ_0τ c_2,k^2(t-τ)],
ċ_1,k^0(t,k) =i G^*(k,t) c_1,k^1(t) + iγ_1 e^iδ_1tc_2,k^1(t,k),
ċ_2,k^1(t,k) = iγ_1 e^-iδ_1tc_1,k^0(t,k) + i G^*(k,t) c_2,k^2(t) -2κ[c_2,k^1(t,k) - e^iΔ_0τc_2,k^1(t-τ,k)],
ċ_2,k^0(t,k_1,k_2) = iG^*(k_2,t) c_2,k^1(t,k_1) + iG^*(k_1,t) c_2,k^1(t,k_2).
Applying the Laplace transform to Eqs. (<ref>-<ref>) we get
sC_0(s) -1 = iγ_2 C_1,k^1(s-iδ_2),
sC_1,k^1(s) = iγ_2 C_0(s+iδ_2) + i γ_1 C_2,k^2(s-iδ_1) -κ[C_1,k^1(s) -e^iΔ_0τ e^-sτC_1,k^1(s)],
sC_2,k^2(s) = iγ_1 C_1,k^1(s+iδ_1) - κ[ C_2,k^2(s) -e^iΔ_0τ e^-sτ C_2,k^2(s)].
When τ≪ 1, e^-sτ≈ 1. Solving Eqs. (<ref>) we have
sC_1,k^1(s) = -γ_2^2 C_1,k^1(s) + iγ_2/s+iδ_2 - γ_1^2/s-iδ_1+ κ (1- e^iΔ_0τ) C_1,k^1(s) -κ (1- e^iΔ_0τ)C_1,k^1(s) ,
and
C_1,k^1(s) = iγ_2/s+iδ_2/ [s+κ (1- e^iΔ_0τ) + γ_2^2/s+iδ_2 + γ_1^2/s-iδ_1+ κ (1- e^iΔ_0τ)].
In particular, when Δ_0τ = 2nπ, 1-e^iΔ_0τ = 0. In this case Eq. (<ref>) reduces to
[s + γ_2^2/s+iδ_2 + γ_1^2/s-iδ_1 ]C_1,k^1(s) = iγ_2/s+iδ_2 .
Consequently, Eq. (<ref>) becomes
C_1,k^1(s) = iγ_2(s-iδ_1)/s^3 +i(δ_2-δ_1)s^2 +(γ_1^2 + γ_2^2 +δ_1δ_2)s + i(δ_2γ_1^2 - δ_1γ_2^2).
When δ_1 = δ_2 = 0, Eq. (<ref>) reduces to Eq. (<ref>). Then we have the following result.
When Δ_0τ = 2nπ and δ_2/δ_1 = γ_2^2/γ_1^2, c_1,k^1(t) oscillates persistently.
When Δ_0τ = 2nπ and δ_2/δ_1 = γ_2^2/γ_1^2, δ_2γ_1^2 - δ_1γ_2^2 = 0 in Eq. (<ref>). Hence,
C_1,k^1(s) = iγ_2(s-iδ_1)/s^3 +i(δ_2-δ_1)s^2 +(γ_1^2 + γ_2^2 +δ_1δ_2)s
= A_0/s + A_1s+A_2/s^2 +i(δ_2-δ_1)s +(γ_1^2 + γ_2^2 +δ_1δ_2)
= A_0/s + A_1s+A_2/(s+i(δ_2-δ_1)/2)^2 + [γ_1^2 + γ_2^2 + (δ_1+δ_2/2)^2],
where A_0 =γ_2δ_1/γ_1^2 + γ_2^2 + δ_1δ_2, and A_1 and A_2 are nonzero constants whose specific values are irrelevant. Due to the second term in the last line of Eq. (<ref>), c_1,k^1(t) persistently oscillates around γ_2δ_1/γ_1^2 + γ_2^2 + δ_1δ_2.
The conclusion of Proposition <ref> means that when there are detunings between the multi-level atom and the cavity, there can be a dark state in the waveguide. However, this is unlikely to occur because the condition δ_2/δ_1 = γ_2^2/γ_1^2 is difficult to satisfy in practice.
§.§ Stability of the N-level system
Generalizing the definition of exponential stability in the real domain <cit.>, we give its definition in the complex space.
The system (<ref>) is exponentially stable if there exist χ≥ 1 and α > 0 such that for every solution X(t), the following exponential estimate holds:
‖X(t)‖≤χ e^-α t |φ|_τ^*,
where ‖·‖ represents an arbitrary vector norm and |φ|_τ^* = max_t∈[-τ,0]‖φ(t)‖.
In our system, φ(t) ≡ 1 for t∈ [-τ, 0] according to the initial condition in Eq. (<ref>).
The distribution of the photons between the cavity and the waveguide of the coherent feedback network in Fig. <ref> can be studied by means of exponential stability. Specifically, if the system is exponentially stable, there will eventually be no photons in the cavity, so a multi-photon state is finally established in the waveguide. If the system oscillates instead of being exponentially stable, the photons are continually exchanged between the waveguide and the cavity.
Now we rewrite Eq. (<ref>) by separating the real and imaginary parts as:
X̃(t) = [c̅_0(t), č_0(t), c̅_1,k^1(t),č_1,k^1(t),⋯, c̅_N-1,k^N-1(t),č_N-1,k^N-1(t)]^⊤∈𝐑^2N,
where c̅_0(t) represents the real part of c_0(t), and č_0(t) represents its imaginary part,
similarly for that of c_j,k^j(t) for all j = 1,2,⋯,N-1.
Define matrices
Ã(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)≜
[ 0 γ_N-1𝐑_N-1 ⋯ 0 0 0 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ √(j)γ_N-j𝐑_N-j -κ I √(j+1)γ_N-j-1𝐑_N-j-1 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 0 0 0 ⋯ √(N-1)γ_1𝐑_1 -κ I; ],
where
𝐑_j≜[ sin(δ_jt) -cos(δ_jt); cos(δ_jt) sin(δ_jt) ],
and
B̃ ≜[ 0 0 0 ⋯ 0; 0 κ𝐏(τ) 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ κ𝐏(τ); ],
where
𝐏(τ)≜[ cos(Δ_0τ) -sin(Δ_0τ); sin(Δ_0τ) cos(Δ_0τ) ].
Then the system (<ref>) can be written as
Ẋ̃̇(t) = Ã(t) X̃(t) + B̃ X̃(t-τ),
X̃(t) = φ̃(t), ∀t∈[-τ,0].
in the real state-space configuration.
When δ_j = 0, the energy differences between neighboring levels are all equal to ω_c. In this case à in Eq. (<ref>) is time-invariant because 𝐑_j in Eq. (<ref>) reduces to 𝐑_j= [ 0 -1; 1 0 ].
In the following, we investigate the dynamics of the quantum coherent feedback according to whether or not the detunings δ_j=0.
a) The case of δ_j = 0.
When δ_j = 0,
applying the Laplace transform to Eq. (<ref>) gives
sX̃(s) - X̃(0) = ÃX̃(s) + B̃X̃(s)e^-sτ.
Hence
X̃(s) = (sI -B̃e^-sτ - Ã)^-1X̃(0),
where I is the 2N×2N identity matrix.
It can be seen that
(sI -B̃e^-sτ - Ã)^-1
= [ s I -γ_N-1𝐑 ⋯ 0 0 0 ⋯ 0 0; -γ_N-1𝐑 𝐃 ⋯ 0 0 0 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ -√(j)γ_N-j𝐑 𝐃 - √(j+1)γ_N-j-1𝐑 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 0 0 0 ⋯ -√(N-1)γ_1𝐑 𝐃 ]^-1
=1/|sI -B̃e^-sτ - Ã| (sI -B̃e^-sτ - Ã)^*,
where the superscript * represents the adjugate operation, 𝐑=[ 0 -1; 1 0 ], and 𝐃 = (s+κ) I - κ e^-sτ𝐏(τ).
When Δ_0τ = 2nπ and δ_j = 0, X̃(t) oscillates with the frequency determined by the coupling strengths between the multi-level atom and the cavity.
The dynamics of the quantum system in Eq. (<ref>) is determined by its poles, which are roots of the equation |sI -B̃e^-sτ - Ã|=0. When Δ_0τ = 2nπ, 𝐏(τ)= [ 1 0; 0 1 ] in Eq. (<ref>), 𝐑_j= [ 0 -1; 1 0 ], and τ≈ 0 because Δ_0 ≫ 1 in practical systems. Then in Eq. (<ref>), 𝐃≈ s I + κ( I - 𝐏(τ)) = s I.
After some tedious calculation we find the determinant
|sI -B̃e^-sτ - Ã|
=|s𝐃 + γ_N-1^2 I | |𝐃 ^2 +2γ_N-2^2 I | ·⋯· |𝐃 ^2 +(N-1)γ_1^2 I |
=(s^2+γ_N-1^2)^2 (s^2+2γ_N-2^2)^2 ·⋯· (s^2+(N-1)γ_1^2)^2.
Obviously, all the roots of the equation |sI -B̃e^-sτ - Ã|=0 are on the imaginary axis of the complex plane, which means that X̃(t) oscillates with the frequency determined by γ_1,γ_2,⋯,γ_N-1.
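This can be verified numerically: with δ_j = 0, Δ_0τ = 2nπ and τ≈ 0, the delayed term cancels the damping -κ, and the poles reduce to the eigenvalues of the remaining coupling matrix, which is i times a real symmetric tridiagonal matrix and hence has purely imaginary eigenvalues. A small sketch for N = 4 with illustrative couplings:

```python
import numpy as np

N = 4
gammas = [0.6, 0.6, 0.6]                        # gamma_1, gamma_2, gamma_3
M = np.zeros((N, N), dtype=complex)             # A + B for delta_j = 0, Delta_0*tau = 2*n*pi
for j in range(1, N):
    g = np.sqrt(j) * gammas[N - j - 1]          # sqrt(j) * gamma_{N-j}
    M[j, j - 1] = M[j - 1, j] = 1j * g          # the -kappa damping is cancelled by B
eig = np.linalg.eigvals(M)
print(np.max(np.abs(eig.real)))                 # ~ 0: all poles on the imaginary axis
```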
<cit.> For the real time-delayed system (<ref>) with δ_j = 0, if there exist real positive definite matrices P, Q and a positive constant β such that the inequality
ℳ(P,Q) + 2β𝒩(P) <0
holds, where
ℳ(P,Q) = [ P Ã +Ã^⊤ P+Q PB̃; B̃^⊤ P -e^-2βτQ ],
𝒩(P) =[ I; 0 ]P [ I 0 ] =[ P 0; 0 0 ],
then
‖X̃(t,φ̃(t))‖≤√(α_2/α_1) e^-β t |φ̃|_τ^*,
where the positive constants α_1 and α_2 are defined as
α_1 = λ_min(P),
α_2 = λ_max(P) + τλ_max(Q).
According to Proposition <ref>, when Δ_0τ = 2nπ and δ_j = 0, the system oscillates, and therefore the inequality in Eq. (<ref>) cannot be satisfied.
A generalized version of Proposition <ref> will be given later.
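The matrix inequality above can be posed as a semidefinite feasibility problem and checked numerically. The sketch below assumes that cvxpy with an SDP-capable solver (e.g. SCS) is available and uses a two-level atom with δ = 0 and Δ_0τ = 2π; by the corollary, the problem should be reported infeasible in this case. The parameter values and the explicit matrix construction are illustrative:

```python
import numpy as np
import cvxpy as cp

# Real 2N-dimensional matrices (N = 2) for delta = 0 and Delta_0*tau = 2*pi,
# so that R = [[0,-1],[1,0]] and P(tau) = I.
gamma, kappa, Delta0, beta = 0.3, 0.01, 50.0, 0.01
R = np.array([[0.0, -1.0], [1.0, 0.0]])
I2, Z2 = np.eye(2), np.zeros((2, 2))
A = np.block([[Z2, gamma * R], [gamma * R, -kappa * I2]])
B = np.block([[Z2, Z2], [Z2, kappa * I2]])
tau, n = 2 * np.pi / Delta0, 4

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
M = cp.bmat([[P @ A + A.T @ P + Q, P @ B],
             [B.T @ P, -np.exp(-2 * beta * tau) * Q]])
Zn = np.zeros((n, n))
N_P = cp.bmat([[P, Zn], [Zn, Zn]])
S = M + 2 * beta * N_P
eps = 1e-6
constraints = [P >> eps * np.eye(n), Q >> eps * np.eye(n),
               0.5 * (S + S.T) << -eps * np.eye(2 * n)]  # S is symmetric; symmetrized form
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)   # the corollary predicts infeasibility for Delta_0*tau = 2*n*pi
```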
b) The case of δ_j ≠ 0.
When δ_j ≠ 0, denote
Ã(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)= Ã_0 + Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1),
where
Ã_0 =
[ 0 γ_N-1𝐑 ⋯ 0 0 0 ⋯ 0 0; γ_N-1𝐑 -κ I ⋯ 0 0 0 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ √(j)γ_N-j𝐑 -κ I √(j+1)γ_N-j-1𝐑 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 0 0 0 ⋯ √(N-1)γ_1𝐑 -κ I; ],
and
Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)=
[ 0 γ_N-1𝐑̃_N-1 ⋯ 0 0 0 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ √(j)γ_N-j𝐑̃_N-j 0 √(j+1)γ_N-j-1𝐑̃_N-j-1 ⋯ 0 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 0 0 0 ⋯ √(N-1)γ_1𝐑̃_1 0; ],
with 𝐑=[ 0 -1; 1 0 ], 𝐑_j= [ sin(δ_jt) -cos(δ_jt); cos(δ_jt) sin(δ_jt) ], and 𝐑̃_j = 𝐑_j-𝐑.
The induced 2-norm of Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1) is the square root of the maximum eigenvalue of Υ^⊤Υ. It can be found that
Υ^⊤ (t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1) Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)
=[ γ_N-1^2 𝐑̃_N-1^⊤𝐑̃_N-1 0 ⋯ 0 0 ⋯ 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ jγ_N-j^2 𝐑̃_N-j^⊤𝐑̃_N-j + (j+1)γ_N-j-1^2 𝐑̃_N-j-1^⊤𝐑̃_N-j-1 0 ⋯ 0; ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ ⋯ ⋯ ⋯ (N-1)γ_1^2 𝐑̃_1^⊤𝐑̃_1 ],
with
𝐑̃_j^⊤𝐑̃_j = [sin^2(δ_jt) + (1-cos(δ_jt))^2] I = 2(1-cos(δ_jt)) I .
Hence, the induced 2-norm of Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1) reads
‖Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)‖_2 =max_j √( 2(N-j) γ_j^2 (1-cos(δ_jt)) + 2(N-j-1) γ_j+1^2 (1-cos(δ_j+1t))),
which is finite when 1≤ j ≤ N-1, and is determined by the coupling strength and detunings between the atom and the cavity.
The following result generalizes Proposition <ref> by allowing nonzero detunings.
For the quantum control system described by Eq. (<ref>), if there exist real positive-definite matrices P̃, Q̃, and a positive constant β such that
ℳ(P̃,Q̃) + 2 λ_max(P̃) ‖Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)‖_2 I + 2β𝒩(P̃) <0,
where ℳ and 𝒩 are defined in Eq. (<ref>), then X̃(t) is exponentially stable.
We consider the Lyapunov-Krasovskii functional
V(X̃_t) = X̃^⊤(t)P̃X̃(t) + ∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ,
where P̃ and Q̃ are real positive-definite matrices. Denote
α̃_1 = λ_min(P̃),
α̃_2 = λ_max(P̃) + τλ_max(Q̃).
Then obviously
V(X̃_t) ≥α̃_1 X̃_t ^2 .
Notice that
V(X̃_t) = [ X̃(t); X̃(t-τ) ]^⊤𝒩(P̃)[ X̃(t); X̃(t-τ) ] + ∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ,
where the matrix 𝒩 is that defined in Eq. (<ref>). Differentiating both sides of Eq. (<ref>) yields
d/dt V(X̃_t) = X̃^⊤ (t)P̃Ẋ̃̇(t) + Ẋ̃̇(t) ^⊤P̃X̃(t) + d/dt∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ.
Denote u =t+θ, and F(t) ≜∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ= ∫_t-τ^t X̃^⊤ (u) e^2β(u-t)Q̃X̃(u) du. We have
d/dt∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ
= -2β∫_t-τ^t X̃^⊤ (u) e^2β(u-t)Q̃X̃(u) du + X̃^⊤ (t) Q̃X̃(t) - X̃^⊤ (t-τ) e^-2βτQ̃X̃(t-τ)
=-2β∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ + X̃^⊤ (t) Q̃X̃(t) - X̃^⊤ (t-τ) e^-2βτQ̃X̃(t-τ).
Substituting Eq. (<ref>) into Eq. (<ref>) yields
d/dt V(X̃_t)
= X̃^⊤ (t)P̃Ẋ̃̇(t) + Ẋ̃̇^⊤ (t) P̃X̃(t) -2β∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ + X̃^⊤ (t) Q̃X̃(t) - X̃^⊤ (t-τ) e^-2βτQ̃X̃(t-τ)
= X̃^⊤ (t)P̃ [(Ã_0 + Υ(t)) X̃(t) + B̃X̃(t-τ)] + [(Ã_0 + Υ(t)) X̃(t) + B̃X̃(t-τ)]^⊤P̃X̃(t)
-2β∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ + X̃^⊤ (t) Q̃X̃(t) - X̃^⊤ (t-τ) e^-2βτQ̃X̃(t-τ)
=[ X̃(t); X̃(t-τ) ]^⊤{[ P̃Ã_0 +Ã^⊤ _0P̃+Q̃ P̃B̃; B̃^⊤P̃ -e^-2βτQ̃ ] + [ Υ(t)^⊤; 0_2(N-1)*1 ]P̃ [I 0_2(N-1)*1] .
. +[ I; 0_2(N-1)*1 ]P̃ [Υ(t) 0_2(N-1)*1] }[ X̃(t); X̃(t-τ) ]
-2β∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ + X̃^⊤ (t) Q̃X̃(t) - X̃^⊤ (t-τ) e^-2βτQ̃X̃(t-τ)
≤[ X̃(t); X̃(t-τ) ]^⊤ ([ P̃Ã_0 +Ã^⊤ _0P̃+Q̃ P̃B̃; B̃^⊤P̃ -e^-2βτQ̃ ] + 2 λ_max(P̃) Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1) _2 ) [ X̃(t); X̃(t-τ) ]
-2β∫_-τ^0 X̃^⊤ (t+θ) e^2βθQ̃X̃(t+θ) dθ + X̃^⊤ (t) Q̃X̃(t) - X̃^⊤ (t-τ) e^-2βτQ̃X̃(t-τ).
Consequently,
d/dt V(X̃_t) + 2β V(X̃_t)
≤[ X̃(t); X̃(t-τ) ]^⊤ ([ P̃Ã_0 +Ã^⊤ _0P̃+Q̃ P̃B̃; B̃^⊤P̃ -e^-2βτQ̃ ] + 2 λ_max(P̃) Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1) _2 +2β𝒩(P̃) ) [ X̃(t); X̃(t-τ) ].
Once the condition in Eq. (<ref>) is satisfied, one has
d/dt V(X̃_t) + 2β V(X̃_t) ≤ 0,
and this, together with Eq. (<ref>), gives
α̃_1 X̃_t ^2 ≤ V(X̃_t) ≤ e^-2β t V (X̃_0 ),
where V(X̃_0) = X̃^⊤ (0)P̃X̃(0) + ∫_-τ^0 X̃^⊤ (θ) e^2βθQ̃X̃(θ) dθ. Thus
V(X̃_t) ≤ e^-2β tλ_max(P̃) X̃^⊤ (0)X̃(0) + ∫_-τ^0 X̃^⊤ (θ) e^2β(θ-t)Q̃X̃(θ) dθ
≤ e^-2β tλ_max(P̃) |X̃_t |_τ^*^2 + e^-2β tτλ_max(Q̃) |X̃_t |_τ^*^2 = α̃_2 e^-2β t |X̃_t |_τ^*^2,
where |X̃_t |_τ^* = max_t∈[-τ,0]{φ̃(t)}, as defined in Definition <ref>. That is, the system is exponentially stable according to Definition <ref>.
When δ_j = 0 for j =1,2,⋯, N-1, ‖Υ(t,γ_1,⋯,γ_N-1,δ_1,⋯,δ_N-1)‖_2 = 0 in Eq. (<ref>). In this case Proposition <ref> reduces to Proposition <ref>.
We take the four-level atom as an example and compare the feedback control performance with and without atom-cavity detunings. As illustrated in Fig. <ref>, where Δ_0 = 50, G_0 = 0.2 and γ_j = 0.6 for j=1,2,3, the solid lines represent the evolution of the populations when δ_j = 0 and the dashed lines represent the case δ_j = 1. The comparison reveals that detunings between the cavity and the multi-level atom can degrade the stability of the coherent feedback control, in agreement with Proposition <ref>, whose stability inequality is more difficult to satisfy in the presence of such detunings.
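A sketch of the corresponding numerical experiment, reusing the general matrix structure above with a forward-Euler history buffer (the step size, the final time and the normalization c = 1 are our own choices), is given below:

```python
import numpy as np

# Four-level atom: Delta_0 = 50, G_0 = 0.2, gamma_j = 0.6; detunings delta_j = 0
# versus delta_j = 1; c = 1 so kappa = G_0^2/4.
Delta0, kappa, gammas = 50.0, 0.2**2 / 4.0, [0.6, 0.6, 0.6]
phase = 3 * np.pi                     # Delta_0 * tau, chosen away from 2*n*pi
tau = phase / Delta0

def A_of_t(t, deltas, N=4):
    """Time-varying matrix A(t) of the compact delay system."""
    A = np.zeros((N, N), dtype=complex)
    for j in range(1, N):
        g, d = gammas[N - j - 1], deltas[N - j - 1]       # gamma_{N-j}, delta_{N-j}
        A[j, j] = -kappa
        A[j, j - 1] = 1j * np.sqrt(j) * g * np.exp(-1j * d * t)
        A[j - 1, j] = 1j * np.sqrt(j) * g * np.exp(1j * d * t)
    return A

B = kappa * np.exp(1j * phase) * np.diag([0.0, 1.0, 1.0, 1.0])

def populations(deltas, t_end=200.0, dt=1e-3):
    steps, lag = int(t_end / dt), int(round(tau / dt))
    X = np.zeros((steps + 1, 4), dtype=complex)
    X[0, 0] = 1.0                                          # atom initially at |3>
    for k in range(steps):
        Xd = X[k - lag] if k >= lag else X[0]
        X[k + 1] = X[k] + dt * (A_of_t(k * dt, deltas) @ X[k] + B @ Xd)
    return np.abs(X) ** 2

print(populations([0.0, 0.0, 0.0])[-1])   # resonant case
print(populations([1.0, 1.0, 1.0])[-1])   # detuned case: compare how completely the populations decay
```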
§ COHERENT FEEDBACK CONTROL WITH MULTIPLE PARALLEL WAVEGUIDES
Apart from the configuration in which one waveguide is coupled with a cavity to construct a feedback loop, in many practical optical systems a cavity can be coupled to multiple parallel waveguides. Such a configuration has abundant potential applications, for example in photon routing <cit.> and in creating correlated photonic states <cit.>.
Therefore in this section, we study the quantum coherent feedback network as shown in Fig. <ref> (a), which generalizes the one in Fig. <ref> by allowing multiple waveguides. Specifically, the cavity is coupled to the inner red waveguide with coupling strengths G^*(k,t) and G(k,t), and each waveguide is only coupled with its nearest neighbors. Thus the photons in the inner red waveguide can be transmitted into the outer black waveguides through the cascade coupling designated by coupling strengths K_12,K_23,⋯,K_(W-1)W, and vice versa. This process can be equivalently represented in Fig. <ref> (b), where the parallel waveguides are coupled to each other designated by coupling strengths K_12,K_23,⋯,K_(W-1)W,K_W(W-1),⋯,K_32,K_21, thus forming a coherent feedback network so that photons are transmitted back and forth between the cavity and parallel waveguides.
According to the conclusion in Ref. <cit.> and Section <ref> above, when δ_n ≪ω_c and γ_n ≪ω_c, once there are observable photons in a waveguide, its photonic modes will be Lorentzian around the central mode ω_c; thus the coupling among the waveguides in the array is largely determined by the central mode ω_c of the Lorentzian spectrum, see also Refs. <cit.>. Due to this, we make the following assumption.
The coupling strengths K_𝐰(𝐰+1) and K_(𝐰+1)𝐰 between the two neighboring waveguides are constant, for all 𝐰=1,2,… W-1.
Generalized from Eq. (<ref>), the Hamiltonian of the system in Fig. <ref> reads
H = H_A + ω_c a^†a +∑_𝐰=1^W∫ω_kd_k^†𝐰d_k^𝐰dk+ H_I,
where the waveguide array Hamiltonian is a summation of all the W waveguides with continuous modes. The interaction picture Hamiltonian reads <cit.>
H̃_I = -∑_n=1^N-1γ_n (e^-iδ_ntσ_n^-a^† + e^iδ_ntσ_n^+a)
- ∫dk [G^*(k,t)ad^† 1_k + G(k,t) a^†d_k^1 ]
-∑_𝐰=1^W-1 K_𝐰(𝐰+1) d_k^𝐰 d_k^† (𝐰+1)
-∑_𝐰=2^W K_𝐰(𝐰-1) d_k^𝐰 d_k^† (𝐰-1) -∑_𝐰=1^W β_𝐰 d_k^†𝐰d_k^𝐰,
where the real number β_𝐰 is the propagation constant of the modes in the 𝐰-th waveguide and β_𝐰 d_k^†𝐰d_k^𝐰 represents that the photon number in the 𝐰-th waveguide is not changed. For simplicity, it is assumed that K_𝐰(𝐰+1) = K_(𝐰+1)𝐰^* for 𝐰 = 1,2,⋯,W-1, as has been done in <cit.>.
§.§ Photon states in the parallel waveguides
If there are n_𝐰 photons in the 𝐰-th waveguide, then the n_𝐰-photon state in the 𝐰-th waveguide can be represented as the linear superposition of the unnormalized basis states
|{k_⊙ n_𝐰^𝐰}⟩≜
d^†𝐰_k_n_𝐰⊙ n_𝐰⋯ d^†𝐰_k_1⊙ n_𝐰 |0⟩,
where d^†𝐰_k_p⊙ n_𝐰|0⟩ with p=1,2,⋯,n_𝐰 creates a single photon of mode k_p in the 𝐰-th waveguide, and the symbol ⊙ n_𝐰 indicates there are altogether n_ w photons in the 𝐰-th waveguide. Of course, |{k_⊙ n_𝐰^𝐰}⟩ is the vacuum state |0⟩ if n_𝐰=0, namely no photons in the 𝐰-th waveguide.
Assume the total number of photons in the waveguide array is N_W ≜∑_𝐰 = 1^W n_𝐰.
Then the basis states for the photons in the waveguide array read
|{k_⊙ n_1^1}, ⋯, {k_⊙ n_𝐰^𝐰}, ⋯,{k_⊙ n_W^W}⟩
≜ |{k_1⊙ n_1^1, ⋯,k_n_1⊙ n_1^1}, ⋯,{k_1⊙ n_𝐰^𝐰, ⋯,k_n_𝐰⊙ n_𝐰^𝐰} , ⋯,{k_1⊙ n_W^W⋯,k_n_W⊙ n_W^W}⟩ ,
where {k_1⊙ n_𝐰^𝐰, ⋯,k_n_𝐰⊙ n_𝐰^𝐰} represents that the basis states {k_⊙ n_𝐰^𝐰} in the 𝐰-th waveguide is composed with n_𝐰 photons.
Because of the exchange of photons between the neighboring waveguides, the photon state in the parallel waveguides is not simply the product of the states of different channels; instead it is a superposition of the tensor-product basis states given in Eq. (<ref>).
§.§ Quantum states of the feedback system
For the quantum system in Fig. <ref>, when the atomic state is |N-1-j⟩ and there are m photons in the cavity, there are j-m photons in the parallel waveguides. Assuming that the number of photons in the i-th waveguide is n_i, i=1,2,⋯,W, the quantum state can be represented as <cit.>:
|Ψ(t)⟩ =∑_j=0^N-1 c_j^j(t)|N-1-j,j,{0}⟩
+ ∑_j = 1^N-1∑_m=0^j-1∑_N_W = j-m∫⋯∫ c_j,⊙ n_1⋯⊙ n_W^m(t,k_⊙ n_1^1,⋯,k_⊙ n_W^W) |N-1-j,m,{k_⊙ n_1^1}, ⋯,{k_⊙ n_W^W}⟩dk_1⊙ n_1^1⋯dk_n_W⊙ n_W^W,
where
c_j,⊙ n_1⋯⊙ n_W^m is the amplitude of the state with overall j excitations including m photons in the cavity and n_1,n_2,⋯,n_W photons in the first, second, ⋯, W-th waveguide, respectively.
Obviously, 1 ≤max{n_𝐰}≤ j-m and N_W ≡∑_𝐰 = 1^W n_𝐰 = j-m. Initially, the atom is excited at |N-1⟩ and n_1 = n_2 = ⋯ = n_W = 0.
The normalization condition of the state in Eq. (<ref>) is
∑_j=1^N-1 |c_j^j(t)|^2 + ∑_j = 1^N-1∑_m=0^j-1∑_N_W = j-m∫⋯∫ |c_j,⊙ n_1⋯⊙ n_W^m(t,k_⊙ n_1^1,⋯,k_⊙ n_W^W)|^2 dk_1⊙ n_1^1⋯dk_n_W⊙ n_W^W=1.
The population representing that there are n photons in the 𝐰-th waveguide can be represented as:
P_n^𝐰 = ∑_j = n^N-1∑_m=0^j-n∑_n_1 + ⋯ + n_𝐰-1 + n_𝐰+1 + n_W = j-m-n∫⋯∫ |c_j,⊙ n_1⋯⊙ n_W^m(t,k_⊙ n_1^1,⋯,k_⊙ n_W^W)|^2 dk_1⊙ n_1^1⋯dk_n_W⊙ n_W^W,
because there can be n photons in a single waveguide only when the atom is at the state |N-1-j⟩ with j = n,n+1,⋯,N-1.
Substituting Eq. (<ref>) into the Schrödinger equation with the Hamiltonian in Eq. (<ref>), we can get
ċ_j^j(t) = i√(j)γ_N-j e^-iδ_N-jt c_j-1^j-1(t) + i√(j+1)γ_N-j-1 e^iδ_N-j-1t c_j+1^j+1(t)
-κ[c_j,k^j(t) - e^iΔ_0τ c_j,k^j(t-τ) ],
ċ_j,⊙n_1⋯⊙n_W^m(t,k_⊙n_1^1,⋯,k_⊙n_W^W) = i√(m)γ_N-j e^-iδ_N-jt c_j-1,⊙n_1⋯⊙n_W^m-1(t,k_⊙n_1^1,⋯,k_⊙n_W^W)
+ i√(m+1)γ_N-j-1 e^iδ_N-j-1t c_j+1,⊙n_1⋯⊙n_W^m+1(t,k_⊙n_1^1,⋯,k_⊙n_W^W)
+i∑_p=1^n_1+1∫G(k_p,t) c_j,⊙n_1+1⋯⊙n_W^m-1(t,k_1⊙n_1+1^1,⋯,k_p⊙n_1+1^1,⋯,
k_n_1+1⊙n_1+1^1;k_1⊙n_2^2,⋯,k_n_2⊙n_2^2;⋯; k_1⊙n_W^W;⋯,k_n_W⊙n_W^W) dk_p
+i∑_p=1^n_1 G^*(k_p,t) c_j,⊙n_1-1⋯⊙n_W^m+1(t,k_1⊙n_1-1^1,⋯,k_p-1⊙n_1-1^1,k_p+1⊙n_1-1^1,⋯, k_n_1-1⊙n_1-1^1;
k_1⊙n_2^2,⋯,k_n_2⊙n_2^2;⋯; k_1⊙n_W^W;⋯,k_n_W⊙n_W^W)
+ i∑_𝐰 =1^W-1 K_𝐰 (𝐰+1) c_j,⊙n_1⋯⊙n_𝐰+1⊙n_𝐰+1-1⋯⊙n_W^m(t,k_⊙n_1^1,⋯,k_⊙n_𝐰+1^𝐰,k_⊙n_𝐰+1-1^𝐰+1,⋯,k_⊙n_W^W)
+ i∑_𝐰 =2^W K_𝐰(𝐰-1)c_j,⊙n_1⋯⊙n_𝐰-1-1⊙n_𝐰+1⋯⊙n_W^m(t,k_⊙n_1^1,⋯,k_⊙n_𝐰-1-1^𝐰-1,k_⊙n_𝐰+1^𝐰,⋯,k_⊙n_W^W)
+i∑_𝐰 =1^W n_𝐰β_𝐰 c_j,⊙n_1⋯⊙n_W^m(t,k_⊙n_1^1,⋯,k_⊙n_W^W), m>0,
ċ_j,⊙n_1⋯⊙n_W^0(t,k_⊙n_1^1,⋯,k_⊙n_W^W) = iγ_N-j-1 e^iδ_N-j-1t c_j+1,⊙n_1⋯⊙n_W^1(t,k_⊙n_1^1,⋯,k_⊙n_W^W)
+i∑_p=1^n_1-1 G^*(k_p,t) c_j,⊙n_1-1⋯⊙n_W^1(t,k_1⊙n_1^1, ⋯, k_p-1⊙n_1^1,k_p+1⊙n_1^1,⋯, k_n_1⊙n_1^1,k_⊙n_2^2 ·k_⊙n_W^W)
+ i∑_𝐰 =1^W-1 K_𝐰 (𝐰+1) c_j,k^0(t,k_⊙n_1^1,⋯,k_⊙n_𝐰+1^𝐰,k_⊙n_𝐰+1-1^𝐰+1,⋯,k_⊙n_W^W)
+ i∑_𝐰 =2^W K_𝐰(𝐰-1)c_j,k^0(t,k_⊙n_1^1,⋯,k_⊙n_𝐰-1-1^𝐰-1,k_⊙n_𝐰+1^𝐰,⋯,k_⊙n_W^W)
+i∑_𝐰 =1^W n_𝐰 β_𝐰c_j,⊙n_1⋯⊙n_W^0(t,k_⊙n_1^1,⋯,k_⊙n_W^W), m=0.
The dynamics of the states that all the generated photons are in the cavity (Eq. (<ref>)) is not influenced by the parallel waveguide architecture because the cavity is only coupled with the nearest waveguide.
For the third component of the RHS of Eq. (<ref>), we have
i∑_p=1^n_1+1∫ G(k_p,t) c_j,k^m-1(t,k_1⊙ n_1+1^1,⋯,k_p⊙ n_1+1^1,⋯, k_n_1+1⊙ n_1+1^1;k_1⊙ n_2^2,⋯,k_n_2⊙ n_2^2;⋯; k_1⊙ n_W^W;⋯,k_n_W⊙ n_W^W) dk_p
=i∑_p=1^n_1+1∫ G(k_p,t) ∫_0^t ċ_j,k^m-1(u,k_1⊙ n_1+1^1,⋯,k_p⊙ n_1+1^1,⋯, k_n_1+1⊙ n_1+1^1;k_1⊙ n_2^2,⋯,k_n_2⊙ n_2^2;⋯; k_1⊙ n_W^W;⋯,k_n_W⊙ n_W^W) du dk_p
=i∑_p=1^n_1+1∫ G(k_p,t) ∫_0^t i∑_q=1^n_1+1 G^*(k_q,u) c_j,k^m(u,k_1⊙ n_1^1,⋯,k_q-1⊙ n_1^1,k_q+1⊙ n_1^1,⋯, k_n_1-1⊙ n_1^1;
k_1⊙ n_2^2,⋯,k_n_2⊙ n_2^2;⋯; k_1⊙ n_W^W;⋯,k_n_W⊙ n_W^W) du dk_p
=-∑_p=1^n_1+1∫ G(k_p,t) ∫_0^t G^*(k_p,u) c_j,k^m(u,k_1⊙ n_1^1,⋯,k_p-1⊙ n_1^1,k_p+1⊙ n_1^1,⋯, k_n_1-1⊙ n_1^1;
k_1⊙ n_2^2,⋯,k_n_2⊙ n_2^2;⋯; k_1⊙ n_W^W;⋯,k_n_W⊙ n_W^W) du dk_p
=-∑_p=1^n_1+1∫_0^t ∫ G(k_p,t) G^*(k_p,u) c_j,k^m(u,k_1⊙ n_1^1,⋯,k_p-1⊙ n_1^1,k_p+1⊙ n_1^1,⋯, k_n_1-1⊙ n_1^1;
k_1⊙ n_2^2,⋯,k_n_2⊙ n_2^2;⋯; k_1⊙ n_W^W;⋯,k_n_W⊙ n_W^W) dk_p du
=-G_0^2 ∑_p=1^n_1+1∫_0^t ∫sin^2(k_p L) e^-i(ω_p-Δ_0)(t-u) c_j,k^m(u,k_1⊙ n_1^1,⋯,k_p-1⊙ n_1^1,k_p+1⊙ n_1^1,⋯, k_n_1-1⊙ n_1^1;
k_1⊙ n_2^2,⋯,k_n_2⊙ n_2^2;⋯; k_1⊙ n_W^W;⋯,k_n_W⊙ n_W^W) dk_p du
=-κ n_1 [c_j,k^m(t,k_⊙ n_1^1,⋯,k_⊙ n_W^W) - e^iΔ_0τ c_j,k^m(t-τ,k_⊙ n_1^1,⋯,k_⊙ n_W^W) ],
where the derivation of the influence by the round trip delay τ is the same as the single waveguide circumstance in Eq. (<ref>).
§.§ Transmission property of photons in the parallel waveguides
The transmission of photons between the cavity and the parallel waveguides is determined by the design of the feedback loop and the coupling strengths among the parallel waveguides. By tuning the length of the feedback loop, the photons can be controlled to be emitted or not emitted from the cavity into the waveguides.
When Δ_0τ = 2nπ and δ_j =0, there are no photons in the parallel waveguides.
When Δ_0τ = 2nπ and δ_j =0, the photons cannot be transmitted from the cavity to the first waveguide according to Eq. (<ref>) for the same reason as in Proposition <ref>. Thus there are no photons in the parallel waveguides.
Notice that the dynamics of Eq. (<ref>) is the same as that of Eq. (<ref>) under the same initial condition, namely that the atom is at the highest energy level and there are no photons in the cavity or the waveguides. When the system in Eq. (<ref>) is exponentially stable, lim_t→∞c_j^j(t) = 0 for every j in Eq. (<ref>), and all the emitted photons end up in the parallel waveguide array. Therefore, we consider the following simplified model for the photonic states in the parallel waveguides:
ċ_j,⊙ n_1⋯⊙ n_W^0(t,k_⊙ n_1^1,⋯,k_⊙ n_W^W) =
i∑_𝐰 =1^W-1 K_𝐰 (𝐰+1) c_j,k^0(t,k_⊙ n_1^1,⋯,k_⊙ n_𝐰+1^𝐰,k_⊙ n_𝐰+1-1^𝐰+1,⋯,k_⊙ n_W^W)
+ i∑_𝐰 =2^W K_𝐰(𝐰-1)c_j,k^0(t,k_⊙ n_1^1,⋯,k_⊙ n_𝐰-1-1^𝐰-1,k_⊙ n_𝐰+1^𝐰,⋯,k_⊙ n_W^W)
+i∑_𝐰 =1^W n_𝐰β_𝐰c_j,⊙ n_1⋯⊙ n_W^0(t,k_⊙ n_1^1,⋯,k_⊙ n_W^W).
In particular, in the single-excitation case N_W = 1, denote by c_𝐰(t) the amplitude of the state with one photon of the given mode k in the 𝐰-th waveguide. Then Eq. (<ref>) can be rewritten as
ċ_𝐰(t) = iK_(𝐰-1) 𝐰 c_𝐰-1(t) + iK_𝐰 (𝐰+1)c_𝐰+1(t) + i β_𝐰c_𝐰(t).
Denote C_W(t) = [c_1(t), ⋯, c_𝐰(t), ⋯,c_W(t)]^⊤∈𝐂^W. Then
Ċ_W(t) = iG_W C_W(t),
where
G_W =
[ β_1 K_12 0 ⋯ 0 0; K_12 β_2 K_23 ⋯ 0 0; 0 K_23 β_3 ⋯ 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 ⋯ β_W-1 K_(W-1)W; 0 0 0 ⋯ K_(W-1)W β_W; ].
As all the eigenvalues of the real symmetric matrix G_W are real, the eigenvalues of iG_W must be purely imaginary. Thus C_W(t) will oscillate, i.e., the photon is superposed over the parallel waveguides.
§.§ A simulation example with N=2, W=3
Take the circumstance that a two-level atom is coupled with a cavity, and the cavity is coupled with three parallel waveguides. Let the initial state be |1,0,0⟩. Then there is at most one photon in the cavity or the waveguides for all time. The quantum state of the system can be represented as
|Ψ(t)⟩ = c_e(t)|1,0,0⟩ + c_g(t)|0,1,0⟩ + ∫ c_k^1(t,k)|0,0,k_⊙ 1^1⟩dk
+ ∫ c_k^2(t,k)|0,0,k_⊙ 1^2⟩dk + ∫ c_k^3(t,k)|0,0,k_⊙ 1^3⟩dk,
where |1,0,0⟩ means that the atom is excited and there are no photons in the cavity or waveguides, |0,1,0⟩ means that the atom is at the ground state and there is one photon in the cavity, |0,0,k_⊙ 1^𝐰⟩ means that the atom is at the ground state, there is no photon in the cavity, and there is one photon in the 𝐰th waveguide. Additionally, we denote the time-varied amplitude of the central mode of the photon in the 𝐰th waveguide as c_𝐰(t) with 𝐰 = 1,2,3. The simulation results when G_0 = 0.25, γ =1, Δ_0τ = π, K_12 = K_23 = 0.5 and β_1 = β_2 = β_3 = 0 are shown in Fig. <ref>. The oscillation of the photons in waveguides is determined by the zero points of the following determinant according to Eq. (<ref>)
|sI-iG_W| = s(s^2+K_12^2 + K_23^2 ),
which reveals that the steady photon amplitudes oscillate around different nonzero mean values with the frequency determined by √(K_12^2 + K_23^2) when β_𝐰 = 0.
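A direct numerical check of this single-excitation picture (taking the photon to start in the inner waveguide and ignoring the residual cavity coupling, as in the reduced model above; parameters as in the example) can be sketched as follows:

```python
import numpy as np
from scipy.linalg import expm

K12 = K23 = 0.5                                  # couplings of the example; beta_w = 0
G = np.array([[0.0, K12, 0.0],
              [K12, 0.0, K23],
              [0.0, K23, 0.0]])
print(np.linalg.eigvalsh(G))                     # 0 and +/- sqrt(K12^2 + K23^2)

C0 = np.array([1.0, 0.0, 0.0], dtype=complex)    # photon initially in the inner waveguide
ts = np.linspace(0.0, 20.0, 400)
pops = np.array([np.abs(expm(1j * G * t) @ C0) ** 2 for t in ts])
# The amplitudes oscillate at the angular frequency sqrt(K12^2 + K23^2) ~ 0.707,
# i.e. the photon is coherently exchanged among the three parallel waveguides.
```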
§.§ A simulation example with N=3, W=2
Take the circumstance that a three-level ladder type atom is coupled with the cavity, and the cavity is coupled with two parallel waveguides. Then the state of the quantum system can be represented as
|Ψ(t)⟩ = c_0^0(t)|2,0,0⟩ + c_1^1(t)|1,1,0⟩ + c_2^2(t)|0,2,0⟩
+ ∫ c_1,⊙ 1⊙ 0^0(t,k_1,1⊙ 1^1)|1,0,{k_⊙ 1^1},{k_⊙ 0^2}⟩dk_1,1⊙ 1^1 + ∫ c_1,⊙ 0⊙ 1^0(t,k_1,1⊙ 1^2)|1,0,{k_⊙ 0^1},{k_⊙ 1^2}⟩dk_1,1⊙ 1^2
+ ∫ c_2,⊙ 1⊙ 0^1(t,k_2,1⊙ 1^1)|0,1,{k_⊙ 1^1},{k_⊙ 0^2}⟩dk_2,1⊙ 1^1 + ∫ c_2,⊙ 0⊙ 1^1(t,k_2,1⊙ 1^2)|0,1,{k_⊙ 0^1},{k_⊙ 1^2}⟩dk_2,1⊙ 1^2
+ ∫∫ c_2,⊙ 2⊙ 0^0 (t,k_2,1⊙ 2^1,k_2,2⊙ 2^1)|0,0,{k_⊙ 2^1},{k_⊙ 0^2}⟩dk_2,1⊙ 2^1 dk_2,2⊙ 2^1
+ ∫∫ c_2,⊙ 0⊙ 2^0 (t,k_2,1⊙ 2^2,k_2,2⊙ 2^2)|0,0,{k_⊙ 0^1},{k_⊙ 2^2}⟩dk_2,1⊙ 2^2 dk_2,2⊙ 2^2
+ ∫∫ c_2,⊙ 1⊙ 1^0 (t,k_2,1⊙ 1^1,k_2,1⊙ 1^2)|0,0,{k_⊙ 1^1},{k_⊙ 1^2}⟩dk_2,1⊙ 1^1 dk_2,1⊙ 1^2 ,
where the meanings of the state vectors are as follows:
(1) |2,0,0⟩: the atom is at the excited state |2⟩ and there are not any photons in the cavity or waveguides;
(2) |1,1,0⟩: the atom is |1⟩ and there is one photon in the cavity but no photons in the waveguides;
(3) |0,2,0⟩: the atom is at the ground state |0⟩ and there are two photons in the cavity but no photons in the waveguides;
(4) |1,0,{k_⊙ 1^1},{k_⊙ 0^2}⟩: the atom is |1⟩ and there is one photon in the first waveguide, whose mode is k_1,1⊙ 1^1; the subscripts indicate that there is one excitation in total and that the photon is in the first waveguide;
(5) |1,0,{k_⊙ 0^1},{k_⊙ 1^2}⟩: the atom is |1⟩ and there is one photon in the second waveguide;
(6) |0,1,{k_⊙ 1^1},{k_⊙ 0^2}⟩: the atom is |0⟩, there is one photon in the cavity and one photon in the first waveguide;
(7) |0,1,{k_⊙ 0^1},{k_⊙ 1^2}⟩: the atom is |0⟩, there is one photon in the cavity and one photon in the second waveguide;
(8) |0,0,{k_⊙ 2^1},{k_⊙ 0^2}⟩: the atom is |0⟩, there are two photons in the first waveguide;
(9) |0,0,{k_⊙ 0^1},{k_⊙ 2^2}⟩: the atom is |0⟩, there are two photons in the second waveguide;
(10) |0,0,{k_⊙ 1^1},{k_⊙ 1^2}⟩: the atom is |0⟩, there is one photon in the first waveguide and the other in the second waveguide.
The meanings of the photon modes in the waveguides are as follows:
(1) k_1,1⊙ 1^1, the mode of the photon in the first waveguide when j=1 and there is one photon in the first waveguide;
(2) k_1,1⊙ 1^2, the mode of the photon in the second waveguide when j=1 and there is one photon in the second waveguide;
(3) k_2,1⊙ 1^1, the mode of the photon in the first waveguide when j=2, there is one photon in the cavity and the other is in the first waveguide;
(4) k_2,1⊙ 1^2, the mode of the photon in the second waveguide when j=2, there is one photon in the cavity and the other is in the second waveguide;
(5) k_2,1⊙ 2^1,k_2,2⊙ 2^1, the modes of the two photons in the first waveguide when j=2 and there are two photons in the first waveguide;
(6) k_2,1⊙ 2^2,k_2,2⊙ 2^2, the modes of the two photons in the second waveguide when j=2 and there are two photons in the second waveguide;
(7) k_2,1⊙ 1^1,k_2,1⊙ 1^2, when j=2 and there is one photon in the first and second waveguide, respectively, k_2,1⊙ 1^1 is for the photon mode in the first waveguide, k_2,1⊙ 1^2 for the photon in the second waveguide.
The population corresponding to one photon in the first or the second waveguide can be computed according to Eq. (<ref>).
The dynamics of the quantum states in Eq. (<ref>) is governed by Eq. (<ref>) with N=3 and W = 2. In the following numerical simulations, Δ_0 =50, G_0 = 0.2, γ_1 = γ_2 =0.3, Δ_0τ = 3π, β_1 = β_2 = 0.1, K_12 = K_21 = 0.5.
As illustrated in Fig. <ref>, the excited three-level atom can emit two photons, and finally the two-photon state can oscillate in the parallel waveguide array.
§ CONCLUSION
In this paper, we have studied a coherent feedback scheme in an architecture where a multi-level atom is coupled to a cavity and the cavity is coupled to a single waveguide or to multiple parallel waveguides. The available number of photons in the waveguide is determined by the evolution of the quantum states, especially by their exponential stabilities. By tuning the length of the feedback loop, the photons can be emitted into the waveguides, in which case the eigenstates representing that there are photons in the cavity or that the atom is excited simultaneously converge to zero; or, conversely, the multi-level atom exchanges photons with the cavity via Rabi oscillations and there are no photons in the waveguides. The photonic states in the parallel waveguides can be represented in the tensor format, and the coupling parameters among the parallel waveguides can further influence the distribution of photons in the waveguide array.
§ APPENDIX
In this Appendix, we derive the delay-dependent ODE system in Eq. (<ref>) of the main text.
The third term of the right-hand side of Eq. (<ref>) can be represented as
i∑_p=1^j-m+1∫ G(k_p,t) c_j,k^m-1(t,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+1) dk_p
=i∑_p=1^j-m+1∫ G_0sin(k_pL)e^-i(ω_p-Δ_0)t c_j,k^m-1(t,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+1) dk_p
=i∑_p=1^j-m+1∫ G_0sin(k_pL)e^-i(ω_p-Δ_0)t∫_0^t ċ_j,k^m-1(u,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+1) du dk_p
=i∑_p=1^j-m+1∫∫_0^t G_0sin(k_pL)e^-i(ω_p-Δ_0)tċ_j,k^m-1(u,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+1) du dk_p,
where
ċ_j,k^m-1(u,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+1)
= i√(m-1)γ_N-j c_j-1,k^m-2(u,k_1,⋯,k_j-m+1) + i√(m)γ_N-j-1 c_j+1,k^m(u,k_1,⋯,k_j-m+1)
+i∑_p=1^j-m+2∫ G(k_p,u) c_j,k^m-2(u,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+2) dk_p
+i∑_p=1^j-m+1 G^*(k_p,u) c_j,k^m(u,k_1,⋯,k_p-1,k_p+1,…, k_j-m), m>0.
First, let us look at the first term on the RHS of Eq. (<ref>). Denote the round trip delay τ = 2L/c, then
-√(m-1)γ_N-j G_0 ∫∫_0^t sin(k_pL)e^-i(ω_p-Δ_0)t c_j-1,k^m-2(u,k_1,⋯,k_j-m+1) du dk_p
= -√(m-1)γ_N-j G_0 ∫∫_0^t e^ik_pL - e^-ik_p L/2ie^-i(ω_p-Δ_0)t c_j-1,k^m-2(u,k_1,⋯,k_j-m+1) du dk_p
=i√(m-1)γ_N-j G_0/2∫∫_0^t (e^-i(ω_p-Δ_0)t + ik_pL -e^-i(ω_p-Δ_0)t - ik_pL) c_j-1,k^m-2(u,k_1,⋯,k_j-m+1) du dk_p
=i√(m-1)γ_N-j G_0/2e^iΔ_0 t∫ ( e^-iω_p(t-τ/2) -e^-iω_p(t+τ/2)) ∫_0^t c_j-1,k^m-2(u,k_1,⋯,k_j-m+1) du dk_p.
Denote c̃(t,k_p) = ∫_0^t c_j-1,k^m-2(u,k_1,⋯,k_j-m+1) du, then Eq. (<ref>) reads
i√(m-1)γ_N-j G_0/2 e^iΔ_0 t∫( e^-iω_p(t-τ/2) -e^-iω_p(t+τ/2)) c̃(t,k_p)dk_p
=i√(m-1)γ_N-j G_0/2 e^iΔ_0 t[δ(t-τ/2) -δ(t+τ/2)]∫_0^t c_j-1,k^m-2(u,k_1,⋯,k_j-m+1) du
- i√(m-1)γ_N-j G_0/2 e^iΔ_0 t[δ(t-τ/2) -δ(t+τ/2)] ∫∫_0^t ∂ c_j-1,k^m-2(u,k_1,⋯,k_j-m+1)/∂ k_pdu dk_p
=0,
because the integration equals zero when t≠τ/2 and the amplitude is continuous in the time domain.
Similarly the second term on the RHS of Eq. (<ref>) equals zero too.
Substituting the third component of Eq. (<ref>) into Eq. (<ref>), the integration reads
-G_0^2 ∑_p=1^j-m+1∫sin(k_pL)e^-i(ω_p-Δ_0)t∫_0^t ∑_q=1^j-m+2∫sin(k_qL)e^-i(ω_q-Δ_0)t c_j,k^m-2(u,k_1,⋯,k_q-1,k_q,k_q+1…,k_j-m+2) dk_q du dk_p.
When p≠ q, the above integration equals zero. When p=q, by the calculations in Eq. (<ref>)
-G_0^2 ∑_p=1^j-m+1∫sin(k_pL)e^-i(ω_p-Δ_0)t∫_0^t ∫sin(k_pL)e^-i(ω_p-Δ_0)t c_j,k^m-2(u,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+2) dk_p du dk_p
=-G_0^2 ∑_p=1^j-m+1∫∫sin^2(k_pL)e^-i2(ω_p-Δ_0)t∫_0^t c_j,k^m-2(u,k_1,⋯,k_p-1,k_p,k_p+1…,k_j-m+2) du dk_p dk_p
=-G_0^2 ∑_p=1^j-m+1∫∫sin^2(k_pL)e^-i2(ω_p-Δ_0)tc̅(t,k_p) dk_p dk_p
=-G_0^2 ∑_p=1^j-m+1∫∫(1/2 -1/4e^iω_pτ -1/4e^-iω_pτ)e^-i2(ω_p-Δ_0)tc̅(t,k_p) dk_p dk_p
=0.
Substituting the fourth term on the RHS of Eq. (<ref>) into Eq. (<ref>), the integration reads
-∑_p=1^j-m+1∫∫_0^t G_0sin(k_pL)e^-i(ω_p-Δ_0)t∑_q=1^j-m+1 G^*(k_q,u) c_j,k^m(u,k_1,⋯,k_q-1,k_q+1,…, k_j-m) du dk_p
=-G_0^2 ∑_p=1^j-m+1∫_0^t ∫sin^2(k_pL) e^-i(ω_p-Δ_0)(t-u) c_j,k^m(u,k_1,⋯,k_p-1,k_p+1,…, k_j-m) dk_p du
=-G_0^2/4c∑_p=1^j-m+1∫_0^t ∫[2e^-i(ω_p-Δ_0)(t-u) - e^iΔ_0τ e^-i(ω_p-Δ_0)(t-u-τ) - e^-iΔ_0τe^-i(ω_p-Δ_0)(t-u+τ)]
c_j,k^m(u,k_1,⋯,k_p-1,k_p+1,…, k_j-m) dk_p du
=-G_0^2/4c∑_p=1^j-m+1∫_0^t [2δ(t-u) - e^iΔ_0τδ(t-u-τ) - e^-iΔ_0τδ(t-u+τ)]c_j,k^m(u,k_1,⋯,k_p-1,k_p+1,…, k_j-m) du
=-G_0^2/4c∑_p=1^j-m+1[c_j,k^m(t,k_1,⋯,k_p-1,k_p+1,…, k_j-m) - e^iΔ_0τ c_j,k^m(t-τ,k_1,⋯,k_p-1,k_p+1,…, k_j-m) ],
which is the component with delay in Eq. (<ref>) in the main text.
http://arxiv.org/abs/2306.04441v1 | 2023-06-07 | STEPS: A Benchmark for Order Reasoning in Sequential Tasks | Weizhi Wang, Hong Wang, Xifeng Yan | cs.CL
STEPS: A Benchmark for Order Reasoning in Sequential Tasks
Weizhi Wang, Hong Wang, Xifeng Yan
==========================================================
Various human activities can be abstracted into a sequence of actions in natural text, e.g. cooking, repairing, manufacturing, etc. Such action sequences heavily depend on the execution order, and disorder in action sequences leads to failure of further task execution by robots or AI agents. Therefore, to verify the order reasoning capability of current neural models in sequential tasks, we propose a challenging benchmark named STEPS. STEPS involves two subtask settings, focusing on determining the rationality of a given next step in recipes and selecting the reasonable step from a multi-choice question, respectively. We describe the data construction and task formulations, and benchmark most of the significant Large Language Models (LLMs). The experimental results demonstrate that 1) commonsense reasoning about action order in sequential tasks is challenging to resolve via zero-shot prompting or few-shot in-context learning for LLMs; 2) prompting methods still significantly lag behind tuning-based methods on STEPS. The benchmarking dataset will be open-sourced at <https://github.com/Victorwz/STEPS>.
§ INTRODUCTION
Human tasks are universally described and abstracted into sequences of actions. Such action sequences are mostly recorded and spread in the form of natural text, e.g. recipes, product manuals, service manuals, etc. Humans have a generalized and flexible capability to understand such action sequences and to execute the actions in order. Humans can also easily infer whether a given step is a reasonable next step without exposure to a large amount of prior knowledge. With such reasoning capability, humans can avoid executing disordered next steps or actions and thus prevent failures or accidents in the whole task. For example, boiling water should always come before adding pasta to the pot; otherwise the pasta will get burned. Therefore, reasoning about the plausibility of next steps is essential for humans to accomplish both daily and production tasks.
Large Language Models (LLMs) <cit.> have significantly promoted the state-of-the-art on benchmarks of natural language understanding and generation <cit.>. Enabled by self-supervised learning on large-scale high-quality training corpora and billions of parameters, LLMs are found to be capable of completing downstream tasks as few-shot or zero-shot learners. Via simple prompting with task-specific natural language templates, LLMs can achieve state-of-the-art performance on text classification, sentiment analysis, reading comprehension, language modeling, etc. without any further tuning on task-specific data. In addition, by using in-context learning (ICL) to expose LLMs to several task-specific prompting examples, LLMs can harvest the task-specific knowledge in the given local context and achieve human-parity performance on downstream tasks <cit.>.
Humans can easily draw an answer to commonsense questions via access to and memory of world knowledge and daily observations. However, LLMs only encode and acquire commonsense knowledge via pre-training on large text corpora. Such knowledge is implicitly encoded in their trainable parameters, which weakens their reasoning capability without explicit memory of and access to commonsense knowledge. For example, humans can easily order sequential actions in a recipe, such as pre-heating the oven before putting the pizza into it, because humans are heavily exposed to daily cooking scenarios and to large knowledge bases like the Internet or books. In contrast, LLMs can only acquire such simple commonsense knowledge via neural memorization of an extremely small split of recipes in web-crawled text datasets. To robustly and effectively evaluate the action order reasoning capability of language models, we propose a novel benchmark, STEPS. STEPS involves two subtask settings: classification, which verifies the reasoning capability in determining the rationality of the candidate next step given previous steps in a recipe, and a multi-choice setting, which focuses on differentiating the correct next step given two candidate next steps and the previous steps in the recipe. First, we formulate the sequence order reasoning evaluation into two evaluation tasks, classification and multi-choice questions. Then we present the data resources, data construction, and evaluation settings in detail. Based on that, we benchmark three groups of state-of-the-art LLMs (GPT-2, OPT, BLOOM) as baselines to evaluate their sequence order reasoning capabilities on recipes.
§ RELATED WORK
Large Language Models for Reasoning Tasks.
LLMs are becoming the dominant methods on all natural language processing tasks in the few-shot or zero-shot learning manner, while their capabilities in commonsense reasoning remain under-explored. <cit.> classifies the various reasoning benchmarks into categories based on the required reasoning skills: arithmetic reasoning <cit.>, commonsense reasoning <cit.>, logical reasoning <cit.>, symbolic reasoning <cit.>, and multimodal reasoning <cit.>. The proposed sequence order reasoning benchmark lies in the research line of commonsense reasoning for LLMs. In addition to the conventional direct-answering methods of full fine-tuning, zero-shot prompting, or in-context learning, Chain-of-Thought (CoT) Prompting <cit.> guides LLMs to generate explicit intermediate reasoning steps to get the final answer.
§ STEPS
§.§ Task Formulation
Classification Setting. The task of next step reasoning requires the LLMs to figure out whether the given step is a reasonable next step consistent with the previous recipe steps. The task can be naturally formulated as a binary classification task on the textual concatenation of the previous steps and the candidate next step, as follows:
Assume a recipe R_i∈𝒟_R contains N_i action steps described in textual sentences {S^(i)_1, S^(i)_2, ⋯, S^(i)_N_i}. For each j∈ [2,⋯,N_i-1], given the previous steps {S^(i)_1, S^(i)_2, ⋯, S^(i)_j-1}, each candidate step in the set {S^(i)_j, S^(i)_j+1, ⋯, S^(i)_N_i} is classified according to whether it is the correct next step given the previous steps. The ground-truth next step S^(i)_j should be classified into the label "Yes" while the other candidates S^(i)_j+1, ⋯, S^(i)_N_i should be classified into "No".
Multi-Choice Question Setting. Parallel to the classification task setting, which follows the natural language understanding pattern, we propose a second task setting, the multi-choice question setting, which is more aligned with the causal language modeling manner of LLMs. Given the textual sequence of the previous steps {S^(i)_1, S^(i)_2, ⋯, S^(i)_j-1}, LLMs are required to choose the correct, reasonable next step between two candidate steps. The correct choice of next step is the step S^(i)_j, while the false choice is a step randomly selected from the steps {S^(i)_j+1, ⋯, S^(i)_N_i}.
§.§ Dataset Construction
The benchmark for sequence order reasoning is constructed based on the Food.com Recipes dataset, a web-crawled dataset of recipes from "food.com" over 18 years, collected and released by <cit.>. We keep the original train/dev/test splits for recipes and filter out recipes with fewer than four or more than ten action steps to avoid exceeding the input context length limitation of LLMs. To construct the classification dataset, for each original recipe with N steps, the true samples are constructed as (S_[1:i-1], S_i), ∀ i∈ [2,N-1], and the false samples are constructed as (S_[1:i-1], S_k), ∀ i∈ [2,N-1], ∀ k∈ [i+1, N]. To construct the dataset for the multi-choice question setting, for each original recipe with N steps, each sample is constructed as the tuple (S_[1:i-1], S_i, S_k), ∀ i∈ [2,N-1], in which k is a random selection within [i+1, N].
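The construction above can be summarized with a short sketch. The snippet below is our illustrative reading of the procedure (function and variable names are ours, not from the released benchmark code): for a recipe with steps S_1, ..., S_N it emits the positive pair, the negative pairs, and one multi-choice tuple per prefix.

```python
import random

def build_samples(steps, seed=0):
    """Build classification pairs and multi-choice tuples from one recipe.

    steps: list of step strings S_1..S_N (4 <= N <= 10 after filtering).
    Returns (classification, multi_choice) where
      classification: list of (previous_steps, candidate_step, label)
      multi_choice:   list of (previous_steps, true_next, distractor)
    """
    rng = random.Random(seed)
    n = len(steps)
    classification, multi_choice = [], []
    for i in range(1, n - 1):                        # steps[i] is the true next step
        prefix = steps[:i]
        classification.append((prefix, steps[i], "Yes"))
        for k in range(i + 1, n):                    # every later step is a negative
            classification.append((prefix, steps[k], "No"))
        distractor = steps[rng.randrange(i + 1, n)]  # one random later step
        multi_choice.append((prefix, steps[i], distractor))
    return classification, multi_choice

recipe = ["boil water", "add pasta", "simmer 8 minutes", "drain", "add sauce"]
cls, mc = build_samples(recipe)
print(len(cls), "classification samples,", len(mc), "multi-choice samples")
```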
The statistics for the constructed datasets on two subtask settings are presented in Table <ref>.
§.§ Baselines
For the baselines of the proposed benchmark, we evaluate three groups of significant large language models, including 1) four sizes of GPT-2 <cit.> (Small, Medium, Large, and XL); 2) three sizes of Open-Pretrained-Transformer Language Models (OPT) <cit.> (1.3B, 13B, and 30B); 3) two sizes of the BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) <cit.> (3B, 7B1). We include the model architecture details for all baseline LLMs in Table <ref>.
§.§ Evaluation Setting
Classification Evaluation Setting. All baseline large language models are evaluated in the zero-shot, one-shot, and few-shot in-context learning manner. For the few-shot in-context learning evaluation, we set the number of demonstration examples K in the prompt context to 4. The K demonstration examples are balanced: K/2 positive samples and K/2 negative samples are randomly selected from the positive and negative training samples, respectively. For each K we evaluate with six random seeds for the random selection of the demonstrations, and the mean and standard deviation of the classification accuracy are reported. For each test sample with (previous-steps, next-step, label), we deploy the textual task template "[previous-steps] . Is the next step of [next-step] a reasonable step in this recipe ?" to concatenate the previous steps and the candidate step into a prompt query. The prediction label is then chosen based on the relation between P(Yes|query) and P(No|query).
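A minimal sketch of this zero-shot scoring is given below; it is our reading of the setup, not the authors' released code, and it simply compares the log-likelihood the model assigns to "Yes" versus "No" after the task template (shown here with the smallest GPT-2 checkpoint).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def label_logprob(query: str, answer: str) -> float:
    """Sum of log-probabilities of the answer tokens given the query."""
    q_ids = tok(query, return_tensors="pt").input_ids
    a_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([q_ids, a_ids], dim=1)
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(dim=-1)
    # Position t predicts token t+1, so score only the answer positions.
    scores = logprobs[0, q_ids.shape[1] - 1:-1].gather(1, a_ids[0].unsqueeze(1))
    return scores.sum().item()

previous_steps = "boil a large pot of salted water. add the pasta."
candidate = "drain the pasta"
query = f"{previous_steps} . Is the next step of {candidate} a reasonable step in this recipe ?"
# Leading spaces make " Yes"/" No" single GPT-2 word tokens in the usual case.
pred = "Yes" if label_logprob(query, " Yes") > label_logprob(query, " No") else "No"
print(pred)
```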
In addition, we fully fine-tune three sizes of GPT-2 models on the training set of the classification subtask to verify the task-specific adaptation capability of LLMs; the fine-tuning details are presented in Appendix <ref>. As the constructed dataset for the classification task is imbalanced (78.7% negative versus 21.3% positive samples), we deploy class-wise accuracy as the evaluation metric for the classification task setting. Specifically, the classification accuracy on the positive class (Sensitivity), the classification accuracy on the negative class (Specificity), and the geometric mean of the two (G-Mean) are deployed for the proposed imbalanced classification task. To avoid imbalanced fine-tuning, we adopt up-sampling of the positive samples in the training set to match the number of negative samples. For each fine-tuned baseline LLM, we truncate the whole resampled training set into segments of 1024 tokens. We fine-tune each LLM for 6000 updates in total with a batch size of 8. We perform validation every 300 updates and save the checkpoint with the best performance on the validation set. The LLM checkpoints are accessed via <cit.>. We deploy the Adam <cit.> (β_1=0.9,β_2=0.98) optimizer and train all models with lr=0.0003.
Multi-Choice Question Evaluation Setting. For each test sample with the previous steps and two options, each query q into the LLM is the concatenation of the previous steps and one option. We follow <cit.> in using the language modeling perplexity as the scorer for each potential query, score(q)=PPL(q). The option with the lower language modeling score is then selected as the solution. The multi-choice answering accuracy is used as the evaluation metric in this task setting.
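The perplexity-based option scoring can be sketched as follows (our illustration; the exact concatenation format of the query and option is an assumption).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Language-modeling perplexity of the full query text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)      # HF shifts labels internally
    return torch.exp(out.loss).item()

previous_steps = "preheat the oven to 200 C. roll out the dough."
options = ["spread tomato sauce on the dough", "serve the pizza and enjoy"]
scores = [perplexity(previous_steps + " " + opt) for opt in options]
print("selected option:", options[scores.index(min(scores))])
```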
§.§ Benchmark Results
The evaluation results of baseline LLMs on the proposed two subtasks of benchmark are presented in Table <ref> and Table <ref>.
Classification Results. In the zero-shot prompting evaluation, we find that the GPT-2 series LLMs partially fail at classifying the rationality of given next steps. The model predictions all lie in the positive class for the 1000 testing samples, leading to perfect Sensitivity but almost zero Specificity. In addition, it is hard to conclude that performance strictly increases with the scaling up of the model. Scaling up the model is beneficial for the BLOOM models, where BLOOM-7B1 significantly outperforms its 3B counterpart by 38.6% in G-Mean score. However, such a scaling law does not hold for the GPT-2 and OPT series of models, and the largest baseline LLM, OPT-30B, performs worse than the smaller OPT-13B. Secondly, providing demonstrations to perform in-context learning helps LLMs avoid fully biased predictions on the positive class: three sizes of GPT-2 models (M, L, XL) gain large performance improvements on both Specificity and G-Mean score. However, the demonstration examples do not help the LLMs that already perform well in zero-shot learning, including OPT-1.3B, OPT-30B, and BLOOM-7B1. Finally, the tuning-based method still achieves the best performance compared with the prompting-based (non-parametric) methods for each LLM. The balanced fine-tuning can effectively fix the issues of biased prediction and zero Specificity for the GPT-2 (S, M, XL) models, leading to an increase of 73% in Specificity for GPT-2-Large compared with zero-shot learning.
Multi-Choice Results. The results on the multi-choice answering subtask strictly follow the scaling law: the largest model, OPT-30B, outperforms all others with 71.7 answering accuracy. In addition, we find that such a scaling law even holds across different groups of LLMs, with BLOOM-3B achieving better performance than OPT-1.3B. These experimental results suggest that the multi-choice question is a more effective, accurate, and robust format for evaluating LLMs because it is evaluated via language modeling perplexity scoring, which matches the pre-training objective of LLMs.
§ CONCLUSIONS AND DISCUSSIONS
In this paper, we propose a novel commonsense reasoning benchmark for order reasoning in sequential tasks. We present the two evaluation subtask settings for STEPS, the classification task and the multi-choice question answering task, as well as the task formulation and data construction. We benchmark most of the state-of-the-art LLMs on STEPS for further comparison.
Overall, the experimental results demonstrate that the performance of LLMs on both the classification and multi-choice question settings lies in the interval of 70%-80% under accuracy-style metrics, which might be a potential performance upper bound for LLMs pre-trained on large-scale corpora via self-supervision. To go beyond this bound, more commonsense knowledge bases and effective chain-of-thought prompting methods should be introduced into LLM reasoning on the proposed sequence order reasoning benchmark.
§ ACKNOWLEDGEMENTS
This research was partly sponsored by the DARPA PTG program (HR001122C0009). Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of funding agencies.
http://arxiv.org/abs/2306.07991v1 | 2023-06-11 | Description of the three phases of water regarded as Bose-Einstein condensates | François Fillaux | cond-mat.stat-mech, physics.chem-ph
[][email protected]
Sorbonne Université, CNRS, MONARIS, 4 place Jussieu, Paris, F-75005 France.
The pivotal reason why many physical properties of water are notoriously at odds with Boltzmann's statistics is that the states of equilibrium of H_2O are Bose-Einstein condensates, free of quantum and statistical fluctuations, protected against depletion by the modest dissociation threshold of the H-bonds. Indeed, condensates support a line of reasoning proceeding from the laws of quantum mechanics to macroscopic phenomena, with no quantum-to-classical discontinuity and no arbitrary hypothesis. The phases of water are two-level systems, with or without tunneling splittings of second order. The heat capacity indicates departure from equipartition of energy due to entanglement. The thermal energy of every phase is determined by occupation numbers, and a phase transition occurs whenever molecules bunch into either level. The latent heats mirror changes of the number of degenerate condensates and of the H-bonds. The structures of the three phases are isomorphic hexagonal lattices. This case study demonstrates the existence of condensates for the three phases of matter in an everyday environment.
Description of the three phases of water regarded as Bose-Einstein condensates
François Fillaux
July 31, 2023
==============================================================================
Water, the matrix of life, covers two-thirds of our planet. It is also one of the most abundant constituents of the universe. This molecule is of central importance in physics and chemistry, earth and life sciences, cosmology, and technologies. No other substance is found as a solid, a liquid, and a gas at normal pressure. At the microscopic level, the most popular descriptions of the three phases are statistical in nature: <cit.> Hexagonal ice Ih is viewed as a frustrated dynamical lattice hosting an exponential number of proton configurations in conformity with the “ice-rules”; <cit.> Liquid water is a tetrahedral network of cooperative H-bonds in a jumble of molecular clusters which continually break and form; Steam is thought of as random dimers linked through fleeting H-bonds. In every case, nuclear quantum effects indicate that the working of water is quantum in nature. <cit.>
Although water is likely the most extensively studied material, <cit.> we still do not have a satisfactory theory, in the sense that there is no line of argument proceeding from the laws of microscopic physics to macroscopic phenomena, that is convincing in all respects, with no arbitrary assumption. The lack of theory has brought modeling and computer simulations to the forefront of knowledge, but these approaches are not conclusive, for they are notoriously sensitive to how forces are defined. So far, there is no compelling explanation of the thermal properties of water, nor of its many “abnormal” properties, and the innumerable variety of models indirectly underscores their lack of success. <cit.> This failure jeopardizes our ability to capture the functioning of more complicated systems like those of biological interest. <cit.>
Here, I present a quantum description of water that is a follow-up to a previous work which showcased the quantum nature of the phase transitions. <cit.> The conceptual twist introduced in the present work is that water molecules are H-bonded bosons whose number of stationary eigenstates is drastically limited by the dissociation threshold of the H-bonds, that is D_0 = (13.2 ± 0.12)10^3 J.mol^-1 (≈ 1.6× 10^3 K) for (H_2O)_2. <cit.> These bosons necessarily cluster in a stationary state below D_0 that is a Bose-Einstein condensate (BEC). An ensemble of N H_2O confined inside a box in diathermal contact with a black-body at T is in one and the same state with perfect phase correlation, irrespective of the separation in real space. (Molecular spin-states and boundary effects are ignored.) A BEC can be described with the first-order correlation function depending on the off-diagonal elements of the reduced one-body density matrix, <cit.> but, here, it is not necessary to pursue this formalism because we are exclusively concerned with generic features. (i) The lack of fluctuation means that the thermodynamic temperature is not an internal variable and the entropy-related laws of thermodynamics are irrelevant. (ii) The spatial coherence means that every tetrahedrally coordinated molecule sits at the center of an hexagonal ring of 6 equidistant nearest neighbors isomorphic to ice Ih. <cit.> (iii) The fluctuation-free wavefunction (aka the order parameter) interlinking macroscopic classical-like properties and quantum physics is Φ(t) = ∑_k(D_0)√(N_k)Φ_kexp iω_k t. <cit.> N_k is the occupation number of the eigenstate |Φ_k⟩ and ∑_k N_k = N for E_k < D_0.
The benchmark of the theory is the molar heat-capacity effectively measured throughout the phase diagram (Fig. <ref> and Table <ref>). This is a glaring illustration of the departure of water from statistical physics on several points. First consider the liquid: C_W ≈ 9ℛ is the classical limit for dipolar relaxation. Then, the first question is why the heat capacity is divided by 2 at the freezing point T_F, as well as at the boiling point T_B? The second set of questions is why C_I (ice) vanishes below T_0, in defiance of Debye's T^3-law, <cit.> and why C_S = 9/2ℛ is a constant for steam? The answer to these questions is that entangled H-bonds violate the equipartition theorem.
The heat capacity is rationalized in Sec. <ref> and Sec. <ref> describes the three phases of water. The internal energies and the partition functions are gathered in Table <ref>. Table <ref> gives the relationships between the critical temperatures and the eigenenergies.
§ THE HEAT CAPACITY
Because there is no fundamental theory for H-bonds HO_d-H⋯O_aH_2 between a donor H_2O_d and an acceptor H_2O_a, the interpretation of spectroscopic data rests on models which are either classical or quantum in nature.
The classical model consists of a dimensionless proton moving in an asymmetric double-well along the stretching coordinate. The asymmetry is the energy difference between configurations HO_d-H⋯O_aH_2 (L) and HO_d⋯H-O_aH_2 (R) with opposite electric dipole moment (EDM) orientations. Accordingly, Bove et al. <cit.> extracted quasi-elastic profiles from the inelastic neutron scattering (INS) spectra of ice, from which they deduced proton relaxation rates via thermally activated over-barrier proton jumps. However, they noted that the observed lack of temperature effect is not consistent with classical jumps and concluded that quantum effects are likely.
In the quantum representation the eigenstates and eigenenergies of L below D_0 read: <cit.>
[ |ψ_L0⟩ = cosϕ |ψ_L⟩ + sinϕ |ψ_R⟩; E_0;; |ψ_L1⟩ = sinϕ |ψ_L⟩ - cosϕ |ψ_R⟩; E_0+hν_1.; ]
|ψ_L⟩ and |ψ_R⟩ are the zeroth-order local states and the mixing angle ϕ≪π determines the relaxation rate. The eigenenergies are invariant via L ⟷ R permutation. In the momentum representation the Fourier transform FTψ_L0 and FTψ_L1 are quite different and so the kinetic energy depends on T, in conformity with the equipartition theorem.
For isolated dimers superposition of L and R configurations is favored by the EDM interaction: <cit.>
[ |ψ_0±⟩ = 1/√(2) [|ψ_L0⟩± |ψ_R0⟩] ; E_0 + 1/2 (- E_μ± hν_t);; |ψ_1±⟩ = 1/√(2) [|ψ_L1⟩± |ψ_R1⟩]; E_0 + hν_1 + 1/2 (E_μ± hν_t).; ]
hν_t ≈ 2ϕ hν_1 ≪ E_μ. |ψ_0±⟩ is protected against decoherence by the energy gain -1/2 E_μ for antiparallel EDM in the zeroth-order ground state |ψ_0⟩ = 1/√(2) [|ψ_L⟩ + |ψ_R⟩]. This gain counterbalances the energy cost 1/2 E_μ for parallel EDM in |ψ_1⟩ = 1/√(2) [|ψ_L⟩ - |ψ_R⟩]. Therefore, (<ref>) ⟷ (<ref>) is energy free. In the momentum representation, FTψ_0+ = FTψ_1+ and FTψ_0- = FTψ_1-. So the kinetic energy is frozen and equipartition is precluded.
At the microscopic level, (<ref>) and (<ref>) explain neutron Compton scattering (NCS) data reported for ice and liquid water. <cit.> This technique probes the mean kinetic-energy of protons, say K̅(T). In the case of equipartition K̅(T) = K̅_0 + 3/2 k_B T, where K̅_0 is the zero-point energy—that is of first order T-independent—and 3/2 k_B ≈ 0.12 meV.mol^-1.K^-1. The data presented by Senesi et al. (Fig. 4) <cit.> depart from equipartition regarding two points. For ice, the slope of (0.02 ± 0.02) meV.mol^-1.K^-1 is within error bars. K̅ is of first order T-independent, in conformity with (<ref>). For the liquid, K̅(T) ≈K̅_0 + 3/2k_B (T-T_F). The slope accords with equipartition but the temperature law is not 3/2 k_BT. (This is explained in Sec <ref>.) As a consequence, the heat capacity of the condensate is proportional to either 9 ℛ for (<ref>) in the liquid or 9/2ℛ for (<ref>) in ice (and steam).
Regarding the interpretation of the INS spectra, the dilemma is whether they consist of the broad quasi-elastic profile contemplated by Bove et al., or, alternatively, consist of separate features on either side of the elastic peak. In this respect, the spectra taken at face value are ambiguous but the quantum nature of ice, the lack of temperature effect, the split probability density of protons <cit.> and NCS data definitively discredit (<ref>). Contrariwise, hν_t/k_B = (1.2 ± 0.2) K (k_B is Boltzmann's constant) explains with no objection the humps of intensity clearly visible at ± (0.10 ± 0.01) meV. <cit.>
§ THE PHASES OF WATER
§.§ Ice
The tunneling splitting of H_14O_7 made out of 7 non-interacting pairs (<ref>) explains T_0 ≈ 7 hν_t / k_B (Table <ref>). This unit is involved as a whole in heat transfer. The eigenenergies (frequencies) are 7hν_t (ω_t), 7(hν_1 + E_μ) (ω_1+), 7(hν_1 + E_μ + hν_t) (ω_1-), and the Φ_k's are identical to the ψ_i's (<ref>).
Below T_0 the time-independent ground-state precludes heat transfer (C ≡ 0). Ice is a “super-insulator” and k_BT_0 is the lowest state attainable upon cooling.
In the range T_0 -T_F:
[ N_0-/N = 1-Θ_I ;; N_1/N = Θ_I;; Φ_I = √(N_0-)ψ_0- e^iω_tt + √(N_1+)ψ_1+ e^iω_1+t; + √(N_1-)ψ_1- e^iω_1-t .; ]
Θ_I is the partition function (Table <ref>) and N_1 = N_1+ + N_1-. Of first order, heat transfer via coherent oscillations of the EDM at ω_t = ω_1- - ω_1+ explains the heat capacity 9/2ℛΘ_I proportional to (T-T_0) in Table <ref>. According to Planck's law, the relative power radiated at ω_1+ -ω_t is (ω_t/ω_1+)^3 ∼ 10^-9. The impact on the heat capacity is negligible, but this channel is essential for thermalization.
Above T_F (Table <ref>), the frozen kinetic energy (<ref>) excludes occupation of the fleeting states above D_0 and the phase transition to the liquid state made out of disentangled dimers (<ref>) is necessary to unlock heat transfer.
§.§ Liquid water
Let Ω_W and Ω_I be the numbers of condensates in liquid water and ice, respectively. At T_F, energy-free disentanglement of H_14O_7 into 7 L and 7 R (<ref>) gives Ω_W = 14 Ω_I and
[ Δ H = ℛT_FlnΩ_W/Ω_I≈ 5993 J.mol^-1 ]
explains the empirical heat of fusion Δ H_F≈ 6007 J.mol^-1. <cit.> The mixture of 14 HCP condensates also is in line with diffraction data. <cit.> The lack of coherence of the whole explains heat transfer via dielectric relaxation independent of Θ_W (Table <ref>). The lowest energy-state is E_W0 = 9 ℛ T_F (Table <ref>) and the kinetic energy proportional to T-T_F explains NCS data.
The absolute value of Ω_W includes the degeneracy of OH-groupings equally coordinated to every four orbitals of 6 equidistant O. The number of configurations is 3/2 per H-bond, (3/2)^2 per molecule and so Ω_W = 14(3/2)^2. For ice, the number of condensates is 3/2 per eigenstate and so Ω_I = 3/2(1-Θ_I + 3/2Θ_I ). Note in passing that Ω_I = 3/2 at T_0 resembles Pauling's residual disorder, <cit.> but this is fortuitous.
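As a quick arithmetic check of the fusion latent heat quoted above, the following lines reproduce Δ H = ℛ T_F ln(Ω_W/Ω_I) with Ω_W/Ω_I = 14 (our sketch; R and T_F are the standard values).

```python
# Latent heat of fusion from the ratio of condensate numbers, Omega_W / Omega_I = 14.
import math

R = 8.314          # J mol^-1 K^-1
T_F = 273.15       # K
dH = R * T_F * math.log(14)
print(f"{dH:.0f} J/mol")   # ~5993 J/mol, close to the empirical 6007 J/mol
```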
§.§ Steam
At T_B, heat transfer above D_0 dissociates dimers and vaporization takes place. The energy available, E_B, comprises the latent heat, Δ H_B = (40660 ± 80) J.mol^-1, <cit.> and the thermal energy of the liquid:
[ E_B = Δ H_B + ℛ T_B[ 9 + ln 14 + 2ln3/2 ] ;; = (79270 ± 80) J.mol^-1;; E_B/D_0 = 6.01 ± 0.06.; ]
Dissociation of 6/7 H-bonds yields a low density HCP condensate (Ω_S =1) of (<ref>). The lowest energy state is E_S0 = 9ℛT_B (Table <ref>).
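The vaporization balance can be checked the same way; the small sketch below recomputes E_B and the ratio E_B/D_0 from the quantities quoted above (the constants are standard values, so the result agrees with the quoted numbers only to within their stated uncertainty).

```python
# Energy available at the boiling point and its ratio to the H-bond dissociation threshold.
import math

R, T_B = 8.314, 373.15          # J mol^-1 K^-1, K
dH_B = 40660.0                  # J mol^-1, latent heat of vaporization
D0 = 13.2e3                     # J mol^-1, dissociation threshold of (H2O)_2
E_B = dH_B + R * T_B * (9 + math.log(14) + 2 * math.log(1.5))
print(f"E_B = {E_B:.0f} J/mol, E_B/D_0 = {E_B / D0:.2f}")   # ~79.3 kJ/mol, ~6.0
```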
The eigenstates of steam and ice are basically identical. The tunnelling splitting, ≈ 0.94 K, <cit.> is close to 1.2 K and T_c - T_B≈ 273.8 K≈ T_F. Overlooking such tiny differences, Θ_S = Θ_I gives:
[ N_0/N = 1- Θ_S ;; N_1/N = Θ_S;; Φ_S = e^iω_S0t [√(N_0+)Ψ_0+ + Φ_I]; ]
ω_S0 = 7E_S0/ħ and N_0 = N_0+ + N_0-. Coherent heat transfer via oscillation of the EDM at ω_t is independent of Θ_S and C_S = 9/2ℛ up to T_c (Table <ref>).
Pursuing the analogy with ice, entangled dimers in steam likely separate at T_c into a supercritical mixture of two condensates repeating the scenario of the liquid up to T_c + 100 K. Beyond this limit, dissociation of the remaining H-bonds likely leads to depletion.
§ CONCLUSION
The thermal properties and the structures of the three phases of water are inferred from Bose-Einstein condensates without arbitrary hypothesis and statistical ignorance. Every phase is a superposition of two macroscopically occupied eigenstates, with or without tunneling splittings of second order. Pairwise entanglement, or not, explains the heat capacity. The eigenenergies determine the critical temperatures. Changes of the number of degenerate condensates and that of H-bonds, if any, determine the latent heats. Ice is a super-insulator at low temperature.
This nonlocal description of water is at variance with received wisdom, according to which: (i) BECs are exotic states of matter observed exclusively in highly controlled environments; (ii) Entanglement is prone to decoherence in open systems; (iii) Apart from the ice-rules, there is no steric constraint for molecules linked via “flexible” H-bonds. On the contrary: (i) The dissociation threshold of the H-bond ensures the stability of BECs in an everyday environment; (ii) EDM interaction precludes decoherence of entangled dimers; (iii) The spatial coherence of BECs overwrites the ice-rules.
As compared with monoatomic gas, the case of water extends by orders of magnitude the size- and temperature-scale of condensates for a relatively complex molecule in the three states of the matter in an everyday environment. Apparently, there is no upper limit in size, complexity and temperature for the quantum theory and condensates could be relevant for various molecular systems.
[1] P. Ball, Nature 452, 291 (2008).
[2] M. Chaplin, <https://water.lsbu.ac.uk/water/> (2023).
[3] J. D. Bernal and R. H. Fowler, J. Chem. Phys. 1, 515 (1933).
[4] L. Pauling, J. Am. Chem. Soc. 57, 2680 (1935).
[5] O. Benton, O. Sikora, and N. Shannon, Phys. Rev. B 93, 125143 (2016).
[6] H. Weingärtner and C. A. Chatzidimitriou-Dreismann, Nature 346, 548 (1990).
[7] C. A. Chatzidimitriou-Dreismann, U. K. Krieger, A. Moiler, and M. Stern, Phys. Rev. Lett. 75, 3008 (1995).
[8] F. N. Keutsch and R. J. Saykally, PNAS 98, 10533 (2001).
[9] L. E. Bove, S. Klotz, A. Parciaroni, and F. Sacchetti, Phys. Rev. Lett. 103, 165901 (2009).
[10] A. Pietropaolo, R. Senesi, C. Andreani, and J. Mayers, Braz. J. Phys. 39, 318 (2009).
[11] R. Senesi, G. Romanelli, M. Adams, and C. Andreani, Chem. Phys. 427, 111 (2013).
[12] F. Fillaux, Europhys. Lett. 119, 4008 (2017).
[13] H. E. Stanley, S. V. Buldyrev, M. Campolat, S. Havlin, O. Mishima, M. R. Sadr-Lahijani, A. Scala, and F. W. Starr, Physica D 133, 453 (1999).
[14] T. J. Giese and D. M. York, J. Phys.: Condens. Matter 29, 383002 (2017).
[15] P. Ball, PNAS 19, 13329 (2017).
[16] B. E. Rocher-Casterline, L. C. Ch'ng, A. K. Mollner, and H. Reisler, J. Chem. Phys. 134, 211101 (2011).
[17] A. Shank, Y. Wang, A. Kaledin, B. J. Braams, and J. M. Bowman, J. Chem. Phys. 130, 144314 (2009).
[18] O. Penrose and L. Onsager, Phys. Rev. 104, 576 (1956).
[19] A. K. Soper, Chem. Phys. 258, 121 (2000).
[20] A. J. Leggett, Rev. Mod. Phys. 73, 307 (2001).
[21] S. J. Smith, B. E. Lang, S. Liu, J. Boerio-Goates, and B. F. Woodfieldt, J. Chem. Thermodyn. 39, 712 (2007).
[22] M. P. Verma, Computers Geosci. 29, 1155 (2003).
[23] S. V. Lishchuk, N. P. Malomuzh, and P. V. Makhlaichuk, Phys. Lett. A 375, 2656 (2011).
[24] D. M. Murphy and T. Koop, Q. J. R. Meteorol. Soc. 131, 1539 (2005).
[25] R. Feistel and W. Wagner, J. Phys. Chem. Ref. Data 35, 1021 (2006).
[26] F. Fillaux, J. Tomkinson, and J. Penfold, Chem. Phys. 124, 425 (1988).
[27] F. Fillaux and A. Cousson, Chem. Phys. 479, 26 (2016).
[28] F. Fillaux and A. Cousson, Eur. Phys. J. B 89, 72 (2016).
[29] W. F. Kuhs and M. S. Lehmann, J. Phys. Chem. 87, 4312 (1983).
[30] A. H. Narten and H. A. Levy, Science 165, 447 (1969).
[31] K. N. Marsh, ed., Recommended Reference Materials for the Realization of Physicochemical Properties (Blackwell, Oxford, 1987).
[32] J. A. Odutola, T. A. Hu, D. Prinslow, S. E. O'Dell, and T. R. Dyke, J. Chem. Phys. 88, 5352 (1988).
http://arxiv.org/abs/2306.02000v1 | 2023-06-03 | Context-TAP: Tracking Any Point Demands Spatial Context Features | Weikang Bian, Zhaoyang Huang, Xiaoyu Shi, Yitong Dong, Yijin Li, Hongsheng Li | cs.CV
Context-TAP: Tracking Any Point Demands Spatial Context Features
Weikang Bian, Zhaoyang Huang, Xiaoyu Shi, Yitong Dong, Yijin Li, Hongsheng Li
=================================================================================
We tackle the problem of Tracking Any Point (TAP) in videos, which specifically aims at estimating persistent long-term trajectories of query points in videos. Previous methods attempted to estimate these trajectories independently in order to incorporate longer image sequences, therefore ignoring the potential benefits of incorporating spatial context features.
We argue that independent video point tracking also demands spatial context features.
To this end, we propose a novel framework, Context-TAP, which effectively improves point trajectory accuracy by aggregating spatial context features in videos.
Context-TAP contains two main modules: 1) a SOurce Feature Enhancement (SOFE) module, and 2) a TArget Feature Aggregation (TAFA) module.
Context-TAP significantly improves PIPs across the board, reducing the Average Trajectory Error of Occluded Points (ATE-Occ) on CroHD by 11.4% and increasing the Average Percentage of Correct Keypoints (A-PCK) on TAP-Vid-Kinetics by 11.8%. Demos are available at <https://wkbian.github.io/Projects/Context-TAP/>.
§ INTRODUCTION
Video particles are a set of sparse point trajectories in a video that originate from the first frame (the source image) and move across the following frames, which are regarded as the target images.
In contrast to optical flow estimation that computes pixel-wise correspondences between a pair of adjacent video frames,
Tracking Any Point (TAP) <cit.> or Persistent Independent Particles (PIPs) <cit.> is interested in tracking the points in the follow-up frames that correspond to the original query points even when they are occluded in some frames.
Video particles provide long-term motion information for videos and can support various downstream tasks, such as video editing <cit.> and Structure-from-Motion <cit.>.
Long-range temporal information is essential for video particles especially when the particles are occluded
because the positions of the occluded particles can be inferred from the previous and subsequent frames where they are visible.
However, simultaneously encoding long image sequences brings larger computational and memory costs.
Previous methods <cit.> learn to track individual points independently because dense video particles are unnecessary in most scenarios.
Inspired by optical flow estimation from visual similarities, they learn to predict point trajectories from the similarities between the query point and the subsequent target images.
Specifically, given a query point at the source image, PIPs encodes T feature maps from T consecutive target images and builds a T× H× W correlation volume by computing the feature similarity between the feature of the query point and the feature maps.
The T particle positions are iteratively refined with the 3D correlation volume through a shared MLP-Mixer <cit.>.
In other words, PIPs trades the spatial context features of the particle for longer temporal feature encoding.
PIPs achieves great performance on the DAVIS dataset, which contains particles with large movements and weakly textured images (e.g., fast-moving dogs and black bears).
We argue that independent point tracking still demands spatial context features.
Intuitively, although PIPs only accounts for specified query points, spatial context features around them provide informative cues for point trajectory refinement.
For example,
video particles on the same objects always share similar motions over time.
In some video frames where the target particles are occluded, their surrounding particles may be visible and provide guidance for the position estimation of the target particles.
However, PIPs only takes correlations and features belonging to the particles while ignoring abundant spatial context features around them.
In this work, we propose Tracking Any Point with Context (Context-TAP) to improve independent point tracking with spatial context features.
Context-TAP contains two key modules for better point trajectory refinement: 1) a source feature enhancement (SOFE) module that learns to adopt more spatial context features in the source image and builds a guidance correlation volume, and 2) a target feature aggregation (TAFA) module that aggregates spatial context features in the target image guided by the correlation information.
In the source image, points that possess similar appearances are supposed to move in similar trajectories in subsequent frames.
Such an assumption has also been used in GMA <cit.> for optical flow estimation.
Given a query point, SOFE computes the correlation between the query point and the source image feature map, which is its self-similarity map. Guided by this correlation (self-similarity), SOFE predicts M offsets relative to the query point and samples at the corresponding M auxiliary points to collect source context features.
During the iterative point trajectory refinement, the correlation information between the M auxiliary features and T target feature maps is injected into the MLP-Mixer, which provides strong guidance and shows evident performance improvement.
Existing methods for optical flow and video particle estimation ignore features in target images for iterative refinement.
To better utilize the context of target images,
in each iteration, our TAFA module collects target features surrounding the previous iteration's resulting movements.
TAFA for the first time shows that context features in target images also benefit correspondence estimation and further improve the point tracking accuracy.
Our contributions can be summarized as threefold: 1) We propose a novel framework to improve independent video particle tracking with context features from both source and target features. 2) We design a novel source feature enhancement module that builds a guidance correlation volume with spatial context features in the source image, and a novel target feature aggregation module that extracts context features from target images. 3) Our Context-TAP ranks 1st on the four benchmarks and shows clear performance superiority.
§ RELATED WORK
Optical Flow.
Optical flow estimates the dense displacement field between image pairs and has traditionally been modeled as an optimization problem that maximizes the visual similarity between image pairs with regularizations <cit.>.
Since FlowNet <cit.>, learning optical flow with neural networks presents superior performance over traditional optimization-based methods and is fast progressing with more training data obtained by the renderer and better network architecture <cit.>.
In recent years, iterative refining flow with all-pairs correlation volume presents the best performance.
The most successful network designs are RAFT <cit.> and FlowFormer <cit.>, which achieves state-of-the-art accuracy.
Typical optical flow estimation only takes image pairs but longer image sequences can provide more information that benefits optical flow estimation.
The Kalman filter <cit.> has been adopted to deal with the temporal dynamics of motion and to estimate multi-frame optical flow.
Recent learning-based methods also attempted to exploit multi-frame information and perform multi-frame optical flow estimation.
PWC-Fusion <cit.> is the first method that learns to estimate optical flow from multiple images.
However, it only fuses information from previous frames in a U-Net and yields little performance improvement.
The "warm start" technique <cit.>, which warps the previous flow to initialize the next flow, was first proposed in RAFT and shows a clear accuracy increase.
Recently, VideoFlow <cit.> achieves state-of-the-art performance by iteratively fusing multi-frame information in a three-frame and five-frame structure, which reveals that longer temporal information benefits pixel tracking.
Tracking Any Point.
Optical flow methods merely focus on tracking points between image pairs but ignore tracking points across multiple consecutive frames, which is still challenging.
Harley et al. <cit.> studied pixel tracking in the video as a long-range motion estimation problem inspired by Particle Video <cit.>.
They propose a new dataset FlyingThings++ based on FlyingThings <cit.> for training and Persistent Independent Particles (PIPs) to learn to track single points in consecutive frames with fixed lengths.
Doersch et al. <cit.> is a parallel work, which formalized the problem as tracking any point (TAP).
They also propose a new dataset Kubric <cit.> for training and a network TAP-Net to learn point tracking.
Moreover, they provide the real video benchmarks that are labeled by humans, TAP-Vid-DAVIS <cit.> and TAP-Vid-Kinetics <cit.>, for evaluation.
PIPs and TAP solve the video particle tracking problem in a similar manner, i.e., recurrently refining multi-frame point trajectory via correlation maps.
In this paper, our Context-TAP follows the training paradigm of PIPs and improves the network architecture design of PIPs. We also take the TAP-Vid-DAVIS and TAP-Vid-Kinetics benchmarks from TAP-Net for evaluation.
§ METHOD
In contrast to optical flow methods <cit.> that track dense pixel movement between an image pair,
the problem of Tracking Any Point (TAP) takes T consecutive RGB images with a single query point 𝐱_src∈ℝ^2 at the first frame as input, and estimates T coordinates 𝐗 = {𝐱_0, 𝐱_1, …, 𝐱_T-1} at the video frames where every 𝐱_t indicates the point's corresponding location at time t. Persistent Independent Particles (PIPs) <cit.> is the state-of-the-art network architecture for TAP.
It iteratively refines the point trajectory by encoding correlation information that measures visual similarities between the query point and the T video frames.
The query points to be tracked are easily lost when the network only looks at them and ignores spatial context features.
We propose a novel framework Context-TAP (Fig. <ref>) that improves PIPs with a SOurce Feature Enhancement (SOFE) module and a TArget Feature Aggregation (TAFA) module.
In this section,
we first briefly review PIPs and then elaborate our Context-TAP.
§.§ A Brief Revisit of PIPs
PIPs <cit.> processes T video frames containing N independent query points simultaneously and then extends the point trajectories to more video frames via chaining rules <cit.>.
Given a source frame with a query point 𝐱_src∈ℝ^2 and T-1 follow-up target video frames,
PIPs first extracts their feature maps 𝐈_0, 𝐈_1, …, 𝐈_T-1∈ℝ^C × H × W through a shallow convolutional neural network and bilinearly samples to obtain the source point feature 𝐟_src=𝐈_0(𝐱_src) from the first feature map at the query point 𝐱_src. C, H, W are the feature map channels, height, and width.
Inspired by RAFT <cit.>, PIPs initializes the point trajectory and point visual features at each frame with the same 𝐱_src and 𝐟_src:
𝐗^0 = {𝐱_0^0, 𝐱_1^0, …, 𝐱_T-1^0|𝐱_t^0=𝐱_src,t=0,…,t=T-1},
𝐅^0 = {𝐟_0^0, 𝐟_1^0, …, 𝐟_T-1^0|𝐟_t^0=𝐟_src,t=0,…,t=T-1},
and iteratively refines them via correlation information. 𝐱_t^k and 𝐟_t^k respectively denote the point trajectory and point features in the t-th frame and k-th iteration.
Intuitively, the point features store the visual feature at the currently estimated query point location in all the T frames.
Specifically, in each iteration k, PIPs constructs multi-scale correlation maps <cit.> between the guidance feature {𝐟_t^k}_t=0^T-1 and the target feature maps {𝐈_t^k}_t=0^T-1, which constitutes T correlation maps 𝐂^k = {𝐜_0^k, 𝐜_1^k, …, 𝐜_T-1^k}
of size T× H× W, and crops correlation information inside the windows centered at the point trajectory: 𝐂^k(𝐗^k)={𝐜_0^k(𝐱_0^k), 𝐜_1^k(𝐱_1^k), ..., 𝐜_T-1^k(𝐱_T-1^k)}, where 𝐜_t^k(𝐱_t^k)∈ℝ^D× D denotes that we crop D× D correlations from 𝐜_t^k inside the window centered at 𝐱_t^k.
The point features 𝐅^k, point locations 𝐗^k, and the local correlation information 𝐂^k(𝐗^k) are fed into a standard 12-layer MLP-Mixer that produces Δ𝐅 and Δ𝐗 to update the point feature and the point trajectory:
Δ𝐅, Δ𝐗 = MLPMixer(𝐅^k, 𝐂^k(𝐗^k), Enc(𝐗^k-𝐱_src )),
𝐅^k+1 = 𝐅^k + Δ𝐅, 𝐗^k+1 = 𝐗^k + Δ𝐗.
PIPs iterates K times for updates and the point trajectory in the last iteration 𝐗^K is the output.
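The overall refinement loop can be sketched as follows. This is a schematic stand-in, not the released PIPs code: the 12-layer MLP-Mixer is replaced by a tiny MLP and the window crop uses integer indexing instead of the bilinear sampling sketched above, so only the structure of the iteration is faithful.

```python
import torch
import torch.nn as nn

D, C, T, K = 7, 64, 8, 6   # window size, channels, frames, refinement iterations

class TinyMixer(nn.Module):
    """Stand-in for the 12-layer MLP-Mixer: maps (features, corr, rel. pos.) to updates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(C + D * D + 2, 256), nn.GELU(),
                                 nn.Linear(256, C + 2))
    def forward(self, feats, corr, rel_pos):
        out = self.net(torch.cat([feats, corr, rel_pos], dim=-1))
        return out[:, :C], out[:, C:]                     # (dF, dX)

def crop(corr_map, center):
    """Take a DxD window from an (H, W) correlation map around center = (x, y)."""
    H, W = corr_map.shape
    x = int(center[0].clamp(D // 2, W - 1 - D // 2))
    y = int(center[1].clamp(D // 2, H - 1 - D // 2))
    return corr_map[y - D // 2:y + D // 2 + 1, x - D // 2:x + D // 2 + 1].reshape(-1)

feat_maps = torch.randn(T, C, 48, 64)                     # T target feature maps
x_src = torch.tensor([32.0, 24.0])
f_src = feat_maps[0, :, 24, 32]

X = x_src.repeat(T, 1)                                    # trajectory init: all frames at x_src
Fts = f_src.repeat(T, 1)                                  # point-feature init: source feature
mixer = TinyMixer()
for _ in range(K):
    corr = torch.stack([
        crop(torch.einsum("c,chw->hw", Fts[t], feat_maps[t]) / C ** 0.5, X[t])
        for t in range(T)])
    dF, dX = mixer(Fts, corr, X - x_src)
    Fts, X = Fts + dF, X + dX
print(X.shape)                                            # (T, 2) refined trajectory
```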
PIPs achieves state-of-the-art accuracy on point tracking by utilizing longer temporal features. However, the previous method ignores informative spatial context features which are beneficial to achieve more accurate point tracking. Context-TAP keeps all modules in PIPs and is specifically designed to enhance the correlation information 𝐂^k and
point features 𝐅^k as 𝐂̂^k and 𝐅̂^k with the proposed SOFE and TAFA.
§.§ Source Feature Enhancement
Given the query point 𝐱_src and feature map 𝐈_0 of the source image, PIPs simply samples a source feature 𝐟_src at the query point location to obtain the point visual features 𝐅^k. Although the point features are updated via the iterative refinement, their receptive field is limited to a single point and is easily compromised in harsh scenarios.
The correlation maps 𝐂^k in the k-th iteration provide vague information when the query point is in a less textured area.
Moreover, the correlation map 𝐜_t^k at frame t is ineffective once the particle is occluded in the t-th frame.
To enhance the source feature, as shown in Fig. <ref>, we propose SOurce Feature Enhancement (SOFE) that accepts spatial context features in the source image as auxiliary features to guide the point trajectory refinement.
The MLP-Mixer can infer the point locations via the auxiliary features even when the points are occluded or on less textured regions, which improves the point tracking accuracy and robustness.
Directly adopting all features in the source image brings large computational costs. SOFE learns to sample a small number of auxiliary features to enhance the source feature.
Specifically, SOFE improves the point features in three steps. Firstly, SOFE learns to predict M offsets δ𝐱_0,δ𝐱_1, …, δ𝐱_M-1∈ℝ^2 with an MLP-based sampler to sample M auxiliary features G={𝐠_0, 𝐠_1, …, 𝐠_M-1|𝐠_m=𝐈_0(𝐱_src+δ𝐱_m)} around the query point 𝐱_src in the source image.
Motivated by GMA that aggregates pixel flows from pixels that are likely to belong to the same object through self-similarity, our proposed sampler also learns the locations of the auxiliary features based on local self-similarities 𝐜^0_0(𝐱_src) which store the correlations cropped from the first frame at the query point location.
Secondly, we construct the correlation maps 𝐜'_m,t=<𝐠_m,𝐈_t>∈ℝ^H× W that measure the visual similarities between the m-th auxiliary feature and the t-th frame feature map. 𝐜'_m,t provides additional correlation information to guide the iterative point trajectory refinement.
In each iteration k, we crop the additional correlation information 𝐜'_m(𝐱^k_t) according to the point locations 𝐱^k_t and concatenate them with the original point correlation information 𝐜^k_t(𝐱^k_t), where 𝐜'_m(𝐱^k_t) denotes the same cropping operation as 𝐜_t^k(𝐱^k_t).
Finally, for each frame t, we reduce the augmented correlations to a correlation feature vector 𝐜̂_t of length 196 through a correlation encoder CorrEnc.
𝐜̂_t^k = CorrEnc( Concat(𝐜'_0(𝐱^k_t), 𝐜'_1(𝐱^k_t), …, 𝐜'_M-1(𝐱^k_t), 𝐜^k_t(𝐱^k_t))),
and inject 𝐂̂^k={𝐜̂_0^k,𝐜̂_1^k, …, 𝐜̂_T-1^k} into the MLP-Mixer.
Compared with PIPs, which adopts only 𝐜^k_t(𝐱^k_t), SOFE provides more informative correlations to the MLP-Mixer through spatial context features but does not increase its parameters or computation.
The additional auxiliary features, drawn from the self-similarity map of the source image, enhance the original source point features and significantly improve tracking accuracy.
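For concreteness, the construction of the auxiliary correlation maps can be sketched in Python as follows. This is a minimal illustration, not the released implementation: the tensor shapes, the bilinear sampling of the auxiliary features, and the function name are assumptions.

import torch
import torch.nn.functional as F

def auxiliary_correlations(feat_src, feat_tgt, x_src, offsets):
    """Sketch: correlate M auxiliary source features with one target feature map.

    feat_src, feat_tgt: (C, H, W) source / target frame feature maps.
    x_src: (2,) query point (x, y) in pixel coordinates.
    offsets: (M, 2) sampling offsets predicted by the MLP-based sampler.
    Returns (M, H, W) correlation maps c'_{m,t} = <g_m, I_t>.
    """
    C, H, W = feat_src.shape
    pts = x_src.view(1, 2) + offsets                       # (M, 2) auxiliary locations
    gx = 2 * pts[:, 0] / (W - 1) - 1                       # normalize x to [-1, 1]
    gy = 2 * pts[:, 1] / (H - 1) - 1                       # normalize y to [-1, 1]
    grid = torch.stack([gx, gy], dim=-1).view(1, 1, -1, 2)
    g = F.grid_sample(feat_src[None], grid, align_corners=True)  # (1, C, 1, M)
    g = g.view(C, -1).t()                                  # (M, C) auxiliary features g_m
    return torch.einsum('mc,chw->mhw', g, feat_tgt)        # (M, H, W) correlation maps

In practice these maps are built once per video and then cropped at the current point estimates in every refinement iteration, as described above.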
§.§ Target Feature Aggregation
Inspired by existing optical flow methods, PIPs iteratively refines the point trajectory with correlation information and context features.
It also iteratively updates the point visual features, 𝐅^k+1 = 𝐅^k + Δ𝐅, after initializing them with the source feature, which benefits the point trajectory refinement.
We observe that the only input supporting the point feature update is the correlations 𝐂^k.
However, such correlations 𝐂^k are calculated only as cosine distances between the source point visual feature 𝐅^k and the target features around the currently estimated point locations 𝐗^k, which provides limited information on how to update the visual features.
Can we better guide the point feature update with context features in target images?
We, therefore, propose TArget Feature Aggregation (TAFA) to augment point features with target image features nearby the point trajectory.
Specifically, for each target frame t, a patch of shape D × D is cropped from the corresponding target feature map 𝐈_t, centered at 𝐱_t^k, to generate keys and values.
The augmented correlation features 𝐂̂^k in Eq. <ref> encode abundant visual similarities.
Therefore, we generate a query from them to extract target context features and adopt cross-attention with relative positional encoding to obtain the target context feature
𝐟'_t^k, which is added to the original source point feature:
𝐟̂_t^k=𝐟_t^k+𝐟'_t^k.
Finally, such augmented point features 𝐅̂^k={𝐟̂_0^k,𝐟̂_1^k,…, 𝐟̂_T-1^k} are injected into the MLP-Mixer.
Similar to our proposed SOFE, TAFA keeps the parameters and computation of the MLP-Mixer identical to PIPs while providing additional target context features, further improving PIPs.
Although context features from the source image have been used since RAFT <cit.>, no previous method adopts context features from target images. TAFA validates for the first time that target images also contain critical context features that benefit point movement refinement.
SOFE improves PIPs with auxiliary features in the source image while TAFA absorbs more target image features. Equipping SOFE and TAFA to PIPs constitutes our final model, Context-TAP.
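The cross-attention at the core of TAFA can be sketched as below: the query comes from the augmented correlation feature of each tracked point, while keys and values come from the target-feature patch cropped around the current point estimate. The layer widths are assumptions, and the relative positional encoding is omitted for brevity.

import torch
import torch.nn as nn

class TAFASketch(nn.Module):
    """Minimal sketch of TArget Feature Aggregation via cross-attention."""
    def __init__(self, corr_dim=196, feat_dim=128, attn_dim=64):
        super().__init__()
        self.to_q = nn.Linear(corr_dim, attn_dim)
        self.to_k = nn.Linear(feat_dim, attn_dim)
        self.to_v = nn.Linear(feat_dim, feat_dim)

    def forward(self, corr_feat, target_patch, point_feat):
        # corr_feat:    (N, corr_dim) augmented correlation features (one per point)
        # target_patch: (N, D*D, feat_dim) target features cropped around x_t^k
        # point_feat:   (N, feat_dim) current point features f_t^k
        q = self.to_q(corr_feat).unsqueeze(1)                    # (N, 1, attn_dim)
        k = self.to_k(target_patch)                              # (N, D*D, attn_dim)
        v = self.to_v(target_patch)                              # (N, D*D, feat_dim)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        context = (attn @ v).squeeze(1)                          # (N, feat_dim) f'_t^k
        return point_feat + context                              # augmented point feature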
§.§ Loss Functions
We compute the L1 distance between the trajectory 𝐗^k estimated in iteration k and the ground truth 𝐗_gt, and weight the per-iteration terms exponentially with γ = 0.8:
ℒ_TAP = ∑_k=1^Kγ^K - k ||𝐗^k - 𝐗_gt||_1
In addition, we predict the visibility/occlusion 𝐕 with a linear layer applied to the 𝐅̂^K obtained after the iterative updates.
A binary cross-entropy loss is used to supervise 𝐕 with the ground truth 𝐕_gt:
ℒ_Vis = -(𝐕_gtlog𝐕 + (1 - 𝐕_gt) log (1 - 𝐕)).
The final loss is the weighted sum of the two losses:
ℒ_total = w_1ℒ_TAP + w_2ℒ_Vis.
We use w_1=1 and w_2=10 during training.
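A direct transcription of the two losses in Python (PyTorch) is sketched below; the tensor layout noted in the comments is an assumption.

import torch
import torch.nn.functional as F

def trajectory_loss(traj_preds, traj_gt, gamma=0.8):
    # traj_preds: list of K tensors X^k of shape (T, N, 2); traj_gt: (T, N, 2)
    K = len(traj_preds)
    loss = traj_preds[0].new_zeros(())
    for k, x_k in enumerate(traj_preds, start=1):
        loss = loss + gamma ** (K - k) * (x_k - traj_gt).abs().mean()
    return loss

def visibility_loss(vis_logits, vis_gt):
    # vis_logits: (T, N) raw scores from the linear head; vis_gt: (T, N) in {0, 1}
    return F.binary_cross_entropy_with_logits(vis_logits, vis_gt.float())

def total_loss(traj_preds, traj_gt, vis_logits, vis_gt, w1=1.0, w2=10.0):
    return w1 * trajectory_loss(traj_preds, traj_gt) + w2 * visibility_loss(vis_logits, vis_gt)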
§ EXPERIMENTS
We evaluate our Context-TAP on four benchmarks: FlyingThings++ <cit.>, CroHD <cit.>, TAP-Vid-DAVIS, and TAP-Vid-Kinetics <cit.>.
Following PIPs <cit.>, we train Context-TAP on Flyingthings++ only and evaluate it on other benchmarks without finetuning.
Context-TAP achieves state-of-the-art performance on all benchmarks and significantly improves PIPs.
Moreover, we show that by utilizing spatial context features in Context-TAP, we achieve on-par performance with PIPs when using only 40.2% of its parameters.
Datasets
Flyingthings++ is a synthetic dataset based on Flyingthings3D <cit.>, which contains 8-frame trajectories with occlusion.
The video resolution is 384 × 512 for both training and evaluation.
Crowd of Heads Dataset (CroHD) is a high-resolution crowd head tracking dataset.
Following PIPs, RAFT <cit.>, PIPs, and our Context-TAP are evaluated at 768 × 1280 resolution. DINO <cit.> and TAP-Net <cit.> are evaluated at 512 × 768 and 256 × 256 resolution, respectively.
TAP-Vid-DAVIS and TAP-Vid-Kinetics are two evaluation datasets in the TAP-Vid benchmark, both of which consist of real-world videos with accurate human annotations for point tracking.
Note that TAP-Vid provides two distinct query sampling strategies, i.e., first and strided.
The “first” sampling contains only the initially visible query points (tracked until the last frame), while the “strided” sampling samples all visible query points every 5 frames.
RAFT, TAP-Net, PIPs, and our Context-TAP are evaluated with both sampling strategies at 256 × 256 resolution.
Kubric-VFS-like <cit.> and COTR <cit.> are evaluated only with the “first” sampling strategy at the same resolution.
Experiment Setup
We use the average trajectory error (ATE) metric <cit.> for evaluation on FlyingThings++ and CroHD.
ATE measures the average L2 distances between the coordinates of all predicted points in the trajectories and the corresponding ground truth coordinates.
According to the ground-truth visibility of the points, we calculate ATEs for visible and occluded points separately, i.e., ATE-Vis and ATE-Occ.
We use the average Jaccard (AJ) <cit.> and the average percentage of correct keypoints (A-PCK) metrics for TAP-Vid-DAVIS and TAP-Vid-Kinetics.
The Jaccard metric measures the ratio of “true positive” visible points within a given threshold.
Average Jaccard (AJ) averages Jaccard across the different thresholds.
PCK measures the percentage of the predicted coordinates whose L2 distances from the ground truth are smaller than a given threshold.
A-PCK refers to calculating PCK according to multiple different thresholds and then taking the average.
We set the thresholds as 1, 2, 4, 8, and 16.
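For reference, the metrics can be computed roughly as in the following sketch; the array shapes are assumptions.

import numpy as np

def ate(pred, gt, visible):
    # pred, gt: (T, N, 2) trajectories; visible: (T, N) boolean ground-truth visibility
    err = np.linalg.norm(pred - gt, axis=-1)
    return err[visible].mean(), err[~visible].mean()       # ATE-Vis, ATE-Occ

def a_pck(pred, gt, thresholds=(1, 2, 4, 8, 16)):
    err = np.linalg.norm(pred - gt, axis=-1)
    return float(np.mean([(err < t).mean() for t in thresholds]))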
Implementation Details
We train our Context-TAP with a batch size of 4 and 100,000 steps on Flyingthings++ with horizontal and vertical flipping.
We use the one-cycle learning rate scheduler. The highest learning rate is set as 5 ×10^-4.
During training, we set the convolution stride to 8 and the resolution of the input RGB images to 384 × 512, and randomly sample N = 128 visible query points for supervision.
To limit the length of the input videos, we set T = 8 and apply the trajectory linking mechanism <cit.> at test time, similarly to PIPs.
To align to the PIPs paper, the PIPs compared in Tab. <ref> and Tab. <ref> is trained with K = 6.
In the ablation study, all models are trained with K = 4 while tested with K=6 for a fair comparison.
§.§ Quantitative Comparison
FlyingThings++ and CroHD As shown in Tab. <ref>, Context-TAP ranks 1st on all metrics and presents significant performance superiority compared with previous methods. Specifically, Context-TAP achieves 7.06 ATE-Occ and 4.28 ATE-Vis on the CroHD dataset, 11.4% and 9.5% error reductions from PIPs, the runner-up. On the FlyingThings++ dataset, our Context-TAP decreases the ATE-Vis and ATE-Occ by 0.96 and 2.18, respectively.
TAP-Vid-DAVIS and TAP-Vid-Kinetics (first) A-PCK, the average percentage of correct keypoints, is the core metric. Context-TAP ranks 1st in terms of A-PCK on both benchmarks.
Specifically, Context-TAP outperforms TAP-Net by 24.1% on the TAP-Vid-DAVIS benchmark and improves PIPs by 11.8% on the TAP-Vid-Kinetics benchmark.
TAP-Vid-DAVIS and TAP-Vid-Kinetics (strided)
Tab. <ref> compares methods in the “strided” sampling setting. Our Context-TAP also achieves the best performance on both AJ and A-PCK metrics for the two datasets.
§.§ Qualitative Comparison
We visualize the trajectories estimated by TAP-Net, PIPs, and our Context-TAP in Fig. <ref> to qualitatively demonstrate the superior performance of our method.
By incorporating additional spatial context features for point tracking, Context-TAP surpasses the compared methods in accuracy and robustness. Specifically, the first row shows the case of large-scale variation. The trajectory predicted by TAP-Net deviates considerably from the ground truth. TAP-Net also outputs jittery predictions when the query pixel is on the texture-less area as shown in the second row. Our Context-TAP generates more accurate results than PIPs in these two hard cases. Furthermore, as depicted in the third row, PIPs struggles to distinguish the front wheel and the rear wheel due to the changing lighting conditions. However, our Context-TAP achieves consistent tracking, thanks to the rich context information brought by the SOFE and TAFA modules.
§.§ Efficiency Analysis
We train our Context-TAP and PIPs with different MLP-Mixer depths, i.e., the number of layers in the MLP-Mixer, to show the extraordinary efficiency and effectiveness of our proposed Context-TAP.
Context-TAP improves PIPs with SOFE and TAFA, which introduce minor additional parameters and time costs.
We show that the accuracy benefits do not come trivially from the increased parameter count.
As displayed in Tab. <ref>, we increase the MLP-Mixer depth to 16, which significantly increases the parameters but does not bring performance gain.
We also decrease the MLP-Mixer depth in our Context-TAP.
Even with only a 3-layer MLP-Mixer, Context-TAP achieves better performance than the best PIPs (MLP-Mixer depth=12). Context-TAP outperforms PIPs with only 40.2% parameters, which reveals high efficiency.
§.§ Ablation Study on Modules
We conduct a module ablation study on the proposed modules as presented in Tab. <ref>. The errors of Context-TAP consistently decrease when we sequentially add the SOFE and TAFA modules, which reveals the effectiveness of SOFE and TAFA.
To demonstrate the necessity of the cross-attention mechanism used in TAFA, we attempt to predict a matrix of weights over the feature window of radius r_a and directly weight and sum the features to obtain δ F.
Cross-attention performs better than this direct weight prediction.
§.§ Ablation Study on Parameters
We conduct a series of ablation experiments (Tab. <ref>) to demonstrate the significance of each module and explain the rationality of the settings.
All ablation experiments are trained on Flyingthings++.
Starting from the PIPs baseline, we first add the SOFE module and explore the two related hyperparameters, i.e., the correlation radius r_c and the number of samples M.
Then, we further add the TAFA module and also adjust the attention window radius r_a.
We additionally conduct a comparison between the prediction and attention mechanisms in TAFA.
In the experiments below, we set N = 64 and the learning rate to 3 ×10^-4, and train for 20,000 steps.
Below we describe the details.
Correlation Radius in SOFE
We crop a multi-scale correlation of size (2r_c+1)× (2r_c+1) from the first correlation map to predict the auxiliary feature offsets in SOFE.
The correlation radius r_c determines the cropping patch size.
We fix M = 3, and gradually increase r_c from 1 to 4.
The model achieves the best performance when r_c = 2.
Number of Samples in SOFE
SOFE learns to sample M additional auxiliary features to enhance the source feature.
Given r_c = 2, we further experiment with different numbers of samples M. The model achieves the best performance on both the FlyingThings++ and CroHD datasets when M=9.
Attention Radius in TAFA
TAFA aggregates target features surrounding the currently estimated corresponding point locations to enhance the context feature via cross-attention.
The radius of the attention window r_a determines how far the attention can reach.
We gradually increase r_a from 1 up to 5, and find that r_a = 3 performs best.
§ CONCLUSION
We have presented a novel framework Context-TAP that improves PIPs with spatial context features, including a SOurce Feature Enhancement (SOFE) module and a TArget Feature Aggregation (TAFA) module.
Experiments show that Context-TAP achieves the best tracking accuracy on four benchmark datasets with significant superiority. This technology has broad applications in video editing, 3D reconstruction, and other fields.
Limitations. Following PIPs, Context-TAP tracks points in videos with a sliding window. A target point cannot be re-identified once it is lost. In future work, we will explore re-identifying lost points when they become visible again.
§ APPENDIX
§ MORE IMPLEMENTATION DETAILS
SOFE sampler. While sampling auxiliary features, SOFE learns to predict offsets with an MLP-based sampler that consists of 5 linear layers interleaved with ReLU activations.
The local self-similarities 𝐜_0^0 at location 𝐱_src are first projected into 128 feature channels.
A 3-layer feedforward network with 4 × 128 channels follows, outputting features with 128 channels.
The final linear layer predicts the M × 2 offsets from the 128-channel features.
SOFE CorrEnc. SOFE reduces the (M+1) × 196 augmented correlations to a correlation feature vector 𝐜̂^k_t
through a correlation encoder CorrEnc that contains only 2 linear layers.
The first linear layer reduces the feature channels to 4 × 196. After a ReLU, the second linear layer further reduces the feature channels to 196, yielding 𝐜̂^k_t.
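Read literally, the two modules amount to the following small MLPs in PyTorch. This is a sketch: the hidden widths of the intermediate feedforward layers are assumptions beyond what is stated above.

import torch.nn as nn

def make_sofe_sampler(self_sim_dim, num_samples):
    """Sampler: local self-similarities -> M x 2 offsets (5 linear layers with ReLUs)."""
    return nn.Sequential(
        nn.Linear(self_sim_dim, 128), nn.ReLU(),        # project to 128 channels
        nn.Linear(128, 4 * 128), nn.ReLU(),             # 3-layer FFN with 4 x 128 channels
        nn.Linear(4 * 128, 4 * 128), nn.ReLU(),
        nn.Linear(4 * 128, 128), nn.ReLU(),
        nn.Linear(128, num_samples * 2),                # predict the M x 2 offsets
    )

def make_corr_encoder(num_samples, corr_dim=196):
    """CorrEnc: (M + 1) x 196 augmented correlations -> a single 196-d vector."""
    return nn.Sequential(
        nn.Linear((num_samples + 1) * corr_dim, 4 * corr_dim), nn.ReLU(),
        nn.Linear(4 * corr_dim, corr_dim),
    )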
§ MORE QUANTITATIVE COMPARISONS
PIPs Re-implementation.
There are two official PIPs <cit.> versions. PIPs (Paper) and PIPs (Released) respectively denote the model reported in the paper and the model provided in the released code.
There are many misalignments between the paper description and the released code.
We follow the parameters suggested in the paper and the released code but fail to reproduce the results.
We, therefore, re-implement two PIPs as the baselines,
according to the settings provided in the paper (K = 6) and the released code (K = 4).
K denotes the number of refinement iterations in training.
The underscored PIPs (Re-imp.), i.e. K=6, is the chosen baseline for comparison in the main paper.
The performance of the re-implemented model is better than the numbers reported in the paper (Tab. <ref>).
Although our re-implemented model presents inferior performance to the released model on FlyingThings++ and CroHD, the two are comparable on TAP-Vid-DAVIS, and the re-implemented model is even better than the released model on TAP-Vid-Kinetics (Tab. <ref>).
We add our proposed SOFE and TAFA modules to the re-implemented baselines to obtain our Context-TAP models.
We list the results for Flyingthings++ and CroHD in Tab. <ref>, the results for TAP-Vid-DAVIS (first) and TAP-Vid-Kinetics (first) in Tab. <ref>, and the results for TAP-Vid-DAVIS (strided) and TAP-Vid-Kinetics (strided) in Tab. <ref>.
“first” and “strided” are two distinct query sampling strategies proposed by TAP-Vid <cit.>, where “first” sampling only contains the initial visible query points, while “strided” sampling would contain all visible query points in every 5 frames.
The released PIPs model tends to overfit on FlyingThings++: although it obtains the lowest error on FlyingThings++, it is inferior on TAP-Vid-DAVIS and TAP-Vid-Kinetics.
Although we only show the Context-TAP with K=4 in the main paper, our K=6 version achieves the best performance, outperforming PIPs K=6 by 9.44% and 11.76% on DAVIS and Kinetics.
Moreover, Context-TAP trained with K=4 and 3-layer MLP-Mixer achieves even better results than PIPs trained with K=6 and 12-layer MLP-Mixer (Tab. <ref>).
|
http://arxiv.org/abs/2306.03316v1
|
20230605235840
|
CoSiNES: Contrastive Siamese Network for Entity Standardization
|
[
"Jiaqing Yuan",
"Michele Merler",
"Mihir Choudhury",
"Raju Pavuluri",
"Munindar P. Singh",
"Maja Vukovic"
] |
cs.CL
|
[
"cs.CL"
] |
CoSiNES: Contrastive Siamese Network for Entity Standardization
================================================================
Entity standardization maps noisy mentions from free-form text to standard entities in a knowledge base. The unique challenge of this task relative to other entity-related tasks is the lack of surrounding context and numerous variations in the surface form of the mentions, especially when it comes to generalization across domains where labeled data is scarce. Previous research mostly focuses on developing models either heavily relying on context, or dedicated solely to a specific domain. In contrast, we propose CoSiNES, a generic and adaptable framework with Contrastive Siamese Network for Entity Standardization that effectively adapts a pretrained language model to capture the syntax and semantics of the entities in a new domain.
We construct a new dataset in the technology domain, which contains 640 technical stack entities and 6,412 mentions collected from industrial content management systems. We demonstrate that CoSiNES yields higher accuracy and faster runtime than baselines derived from leading methods in this domain. CoSiNES also achieves competitive performance in four standard datasets from the chemistry, medicine, and biomedical domains, demonstrating its cross-domain applicability.
Code and data is available at <https://github.com/konveyor/tackle-container-advisor/tree/main/entity_standardizer/cosines>
§ INTRODUCTION
The automatic resolution of mentions in free-form text to entities in a structured knowledge base is an important task for understanding and organizing text. Two well-recognized tasks tackle entity mentions in text. Entity matching concerns resolving data instances that refer to the same real-world entity <cit.>. The data instances usually comprise a specific schema of attributes, such as product specifications. Entity linking, also known as entity disambiguation, associates ambiguous mentions from text with entities in a knowledge base, where precise attributes and relationships between entities are curated <cit.>. Both tasks involve rich context surrounding the mention and the underlying entity <cit.>. Much effort in deep learning approaches focuses on ways to leverage and encode the context surrounding mentions in text and attributes associated with entities in the knowledge base. However, little work has been done on scenarios where such rich context and precise information are not available. In domains such as finance, biology, medicine, and technology, mentions involve specialized jargon, where no context is associated with the mentions and often no attribute of the entities is available other than the mentions themselves.
We tackle the challenge of missing context for entity standardization (ES), which maps mentions to entities in a knowledge base across multiple domains. Due to the lack of a public dataset for ES and to foster research on the problem, we manually construct a dataset in the technology domain geared to application modernization. We propose an approach called CoSiNES for the dataset and then evaluate the generalization of CoSiNES in the biomedical domain.
Application modernization consists in migrating legacy applications to the cloud. It relies on a faithful assessment of the technical components of such applications. Much technical information is contained in free-form textual application descriptions, but automatic extraction of such knowledge is nontrivial due to variations in how the same entities are mentioned <cit.>.
Compared to the two aforementioned tasks of entity matching and linking, ES presents unique challenges. First, the mentions could have acronyms, numbers, symbols, alias, punctuation, and misspellings. Figure <ref> shows two examples of multiple mentions referring to the same entity. Second, there is a lack of context surrounding the mentions, and there are no attributes or relationships for entities in the knowledge base, which the previous approaches heavily rely on. Third, large deep learning models require massive training datasets, which are not available for specialized domains. Therefore, architectures that are suited for zero-shot or few-shot learning are of great value for this task.
Another challenge is how to perform entity standardization at scale. A naive approach is to exhaustively compare every possible mention and entity pair, which is inefficient. Previous deep learning models for entity matching and entity linking usually have multiple stages <cit.>: the first stage, such as blocking in entity matching, reduces the number of comparison pairs via a coarse-grained criterion so that the later stages can focus on the filtered candidate pairs. This multistage approach leads to globally inferior performance due to the errors accumulated along the pipeline.
We tackle these challenges with a generic framework based on Contrastive Siamese Network which efficiently adapts domain-agnostic pretrained language models (PLMs) to specific domains using a limited number of labeled examples. Language models have shown great capacity to capture both syntactic and semantic variations of text. Our framework decouples the comparison of mention-entity pairs for training and inference so that the model can be used as a standalone encoder after training. Therefore, the embeddings of the entity from the knowledge base can be precomputed and hashed. At inference time, the running time is linear in the size of query mentions, and we can leverage existing tools, such as FAISS,[https://github.com/facebookresearch/faiss] for efficient and large-scale similarity search.
Our contributions are the following.
* A generic, scalable, and adaptable framework that leverages domain-agnostic pretrained language models.
* A method for generating anchored contrastive groups and a training scheme with a hybrid of batch-all and batch-hard online triplet mining.
* A dataset curated for application modernization, where various mentions for technical components are manually labeled.
We validate these contributions via comprehensive experiments with various hyperparameters, loss functions, and training schemes and show the robustness and effectiveness of the framework on our custom dataset in the technology domain. With optimal settings on our dataset, we further evaluate the framework on four datasets from the biomedical domain. We show that the framework can be adapted to other domains with minimal changes.
§ RELATED WORK
Various forms of entity-related tasks have been studied by previous research, of which three are most relevant to our task.
Entity Matching (EM) identifies whether different mentions refer to the same real-world entity, and is an important step in data cleaning and integration <cit.>. The targets of EM are records from a database, where records follow a specific schema of attributes. The goal is to find pairs of records from two databases that refer to the same entity. Whereas early approaches to EM mostly apply rule-based heuristics, recent research often relies on deep neural networks <cit.>. As the number of pairwise comparisons grows quadratically, a preprocessing step (blocking) is usually applied to reduce the number of candidate matches. The matcher then takes a pair of a mention and an entity as input and produces a probability of a match. In contrast, entity standardization comes with a predefined set of standard entities, and the mentions come with no attributes. Our method involves learning a metric function, where the model can be used as an encoder to embed mentions and entities in the same space.
Entity Linking (EL) is the process of linking a mention in context with an entity in a knowledge base. Unlike entity standardization, the entities in the knowledge base, such as WikiData <cit.> and Freebase <cit.>, usually have well-structured attributes and precisely defined relationships between them. The mention comes with rich context and unstructured raw text. To leverage these two different types of contextual information, separate context-mention and graph-entity encoders are designed to produce embeddings respectively, and another neural network is used to combine and project these two embeddings to the same space <cit.>. Due to the lack of context for both the mention and entity for entity standardization, we propose to use a single unified model as the encoder, which can reduce the complexity of the pipeline.
Entity Normalization (EN) is widely used in the biomedical domain. The task is to map noisy mentions to entities in a well-defined reference set, such as ontologies and taxonomies <cit.>. The mentions usually have no context, and the entities come with no attributes, but there is a hierarchical structure in the reference set. Unlike entity standardization in the technology domain, the variations of mentions in life science are fairly standardized and synonyms are rare. The task can be well addressed with a sufficient number of training examples for each entity category, which is not the case in our setting. <cit.> propose a similar idea using a Siamese neural network for EN. Our approach differs in the following aspects: the designed training batch-generation algorithm, the computation of the contrastive loss, and the usage of PLMs in our specialized training scheme.
§ METHODOLOGY
§.§ Problem Formulation
We denote the set of query mentions as 𝒬≡{m_q}, and the set of standard entities as 𝒮≡{e_s}. Each entity in 𝒮 is associated with zero or more mentions referring to it e_s ←{m_s}. Importantly, there should be no overlap between the query mention set 𝒬 and the mentions associated with the standard entity set 𝒮. The task is to retrieve an entity e ∈𝒮 given m ∈𝒬 such that e is the entity m refers to.
We tackle this task with contrastive learning by learning an embedding encoder such that mentions and entities are encoded to the same high-dimensional embedding space. The property of the embedding space is that the cosine distance between mentions of the same entity is smaller than mentions of different entities.
We design a BERT-based Siamese neural network architecture, which acts as the embedding encoder after training. The training is conducted with a hybrid of batch-all and batch-hard online triplet mining schemes. Figure <ref> gives an overview of CoSiNES. The training (top) phase has the goal of pulling similar mentions together and pushing dissimilar mentions far away in the embedding space. After training, the inference (bottom) phase has the goal of using a Siamese neural network to project entities in the knowledge base and query mentions to the same embedding space. At inference time, nearest neighbor search algorithms can be used to retrieve the target entity.
§.§ Contrastive Learning and Triplet Loss
Contrastive Learning <cit.> aims to group similar data points together and push dissimilar data points far apart in a high-dimensional embedding space. Equation <ref> shows the core idea of contrastive learning. Here x represents any data point in the domain, x^+ is a positive sample that is similar to x (or from the same class as x), and x^- is a negative sample that is dissimilar to x. E is an encoder, which could be any neural network. And, dis is a distance measure between the embedding vectors.
dis(E(x), E(x^+)) ≪dis(E(x), E(x^-))
As shown in Equation <ref>, triplet loss is calculated based on triplets {x, x^+, x^-}, which consist of two samples from the same class and a third sample from a different class.
The intuition is that the distance d(x, x^-) should be larger than the distance d(x, x^+) by a margin. The margin is a hyperparameter that needs to be tuned.
ℒ = max(d(x, x^+)-d(x, x^-)+ margin, 0)
Based on the difference between d(x, x^-) and d(x, x^+), we can classify triplets into three categories: easy, semihard, and hard. See appendix <ref> for detailed definitions.
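A plain implementation of the triplet loss with cosine distance, as used in our experiments, is sketched below in Python; the default margin value is illustrative, not a recommendation.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # anchor, positive, negative: (B, d) embeddings produced by the encoder E
    d_pos = 1.0 - F.cosine_similarity(anchor, positive, dim=-1)   # d(x, x+)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative, dim=-1)   # d(x, x-)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()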
§.§ Online Triplet Mining
There are two different strategies of mining triplets for contrastive learning. Offline mining generates triplets at the beginning of training. The embeddings of the whole training dataset are computed, then hard and semihard triplets are mined based on the embeddings. Offline mining is highly inefficient. First, it requires computing the embeddings for all the training data to mine the triplets. Second, as the model starts to learn, the hard and semihard triplets may turn into easy triplets. Therefore, at least for a few epochs, we need to update the triplet set frequently.
Online triplet mining <cit.> seeks to generate triplets on the fly within a batch. There are two strategies to mine triplets from a batch, i.e., batch all and batch hard. We adopt the same idea in our model and propose a hybrid online mining scheme which is shown to be superior to single-mining strategy.
§.§.§ Batch–All
To form valid triplets, a batch of training data should always include samples from more than one class, and each class should contain at least two samples. Suppose the size of the batch is B and the number of all possible triplets is B^3. However, not all of these triplets are valid as we need to make sure each triplet comprises two distinct samples from the same class and one sample from another class. For all valid triplets in the batch, we simply select all hard and semihard triplets and compute the average loss over them. We do not include easy triplets in computing the average as it will make the loss too small. The calculations are based on the embeddings of the batch after they pass through the model.
§.§.§ Batch–Hard
This strategy always selects the hardest positive and negative for each anchor in the batch. Each data instance in the batch can be used as an anchor. Therefore, the number of triplets is always equal to the size of the batch. The hardest positive has the largest d(x, x^+) among all positives, and the hardest negative has the smallest d(x, x^-) among all negatives.
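The batch–hard variant can be written compactly from the pairwise distance matrix. The following sketch uses Euclidean distance only for brevity and assumes every anchor has at least one positive and one negative in the batch, which the group-based batch generation described next guarantees.

import torch

def batch_hard_triplet_loss(embeddings, labels, margin=1.0):
    # embeddings: (B, d); labels: (B,) integer class ids
    dist = torch.cdist(embeddings, embeddings, p=2)            # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask, neg_mask = same & ~eye, ~same
    hardest_pos = dist.masked_fill(~pos_mask, 0.0).max(dim=1).values
    hardest_neg = dist.masked_fill(~neg_mask, float('inf')).min(dim=1).values
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0.0).mean()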
§.§.§ Contrastive Group Generation
Based on the above discussion, a batch should include multiple samples from multiple classes. We sample batches with two steps. First, we randomly generate groups of samples from the same class with size g, and second, we randomly sample b classes of groups to form a batch. Therefore, the effective batch size would be B = g*b.
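The two-step batch construction can be sketched as follows; sampling with replacement for entities with fewer than g mentions is an assumption.

import random

def sample_contrastive_batch(mentions_by_entity, group_size=10, groups_per_batch=16):
    # mentions_by_entity: dict mapping entity id -> list of mention strings
    eligible = [e for e, ms in mentions_by_entity.items() if len(ms) >= 2]
    chosen = random.sample(eligible, k=min(groups_per_batch, len(eligible)))
    batch, labels = [], []
    for e in chosen:
        ms = mentions_by_entity[e]
        picks = random.sample(ms, k=group_size) if len(ms) >= group_size \
            else random.choices(ms, k=group_size)    # with replacement if too few mentions
        batch.extend(picks)
        labels.extend([e] * group_size)
    return batch, labels                             # effective batch size B = g * b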
§.§ BERT-Based Siamese Neural Network
The canonical Siamese neural network is an architecture that consists of two towers with shared weights working in parallel on two different inputs. The outputs are passed on to a distance function to learn comparable output vectors. We extend the same idea to a batch of inputs instead of a pair of inputs. We sample the batch as described in Section <ref> and feed the sampled triplets through the network. The output embeddings of the batch are used to generate valid triplets and compute the loss. The backbone of the Siamese model could be any neural network. We use the pretrained language model BERT <cit.> as the backbone.
§.§ Hashing and Retrieval
Once the Siamese model is trained, it can be used as a standalone encoder to compute the embeddings of entities and mentions. We precompute the embeddings for all entities and save them for comparisons at inference time. For each query mention, we use the same Siamese model to get the embedding and our task is to retrieve the entity with the closest distance to the mention in the embedding space. For a query set of size q, we need to run the Siamese model only q times, avoiding exhaustive pairwise running of the Siamese model. Potentially, we still need to conduct a pairwise nearest neighbor search over the mention and entity embeddings. Tools such as FAISS can be leveraged to efficiently perform large-scale nearest neighbor search.
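At inference time, retrieval reduces to a cosine nearest-neighbor search over the precomputed entity embeddings, as in the following sketch; a FAISS index can replace the brute-force search for larger knowledge bases. The function signature is illustrative.

import numpy as np

def retrieve(entity_emb, entity_ids, query_emb, top_k=1):
    # entity_emb: (S, d) precomputed embeddings of the standard entities
    # query_emb:  (Q, d) embeddings of query mentions from the same encoder
    e = entity_emb / np.linalg.norm(entity_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sims = q @ e.T                                   # (Q, S) cosine similarities
    top = np.argsort(-sims, axis=1)[:, :top_k]
    return [[entity_ids[j] for j in row] for row in top]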
§ EXPERIMENTAL SETUP
§.§ Dataset
We curate a dataset (ESAppMod) on application modernization that comprises named entities from the technical stack of business applications. There are a total of 640 unique entities, covering a variety of technical component categories, such as Operating System (OS), Application Server, Programming Language, Library, and Runtime. We manually extract and label 6,412 unique mentions associated with the entities in ESAppMod from real application descriptions. All annotations are done by domain experts. We split the mentions 60–40 into train and test sets, which yields 3,973 and 2,439 mentions in the training and testing splits, respectively. The mentions associated with each entity are not evenly distributed, ranging from one to over a hundred.
§.§ Hyperparameter Tuning
Implementing our framework involves many design choices and hyperparameters. To facilitate performance at scale, the tradeoff between accuracy and inference time is crucial. We experimented with different sizes of BERT as the backbone of CoSiNES, including BERT-tiny, BERT-mini, BERT-small, BERT-medium, and BERT-base. For triplet mining, we evaluated batch–all, batch–hard, and a hybrid of the two. For the measure of distance, we investigated cosine, Euclidean, and squared Euclidean distance. For the hyperparameters, we evaluated different values of margin, learning rate, and batch size detailed in appendix <ref>. All training experiments were carried out on an NVIDIA A100 GPU with 40GB memory. We use the tool Ray.tune[https://docs.ray.io/en/latest/tune/index.html] for hyperparameter tuning. Inference times were computed as the cumulative time to predict all 2,439 mentions in the test set on the CPU of Macbook pro with 2.3 GHz Quad-Core Intel Core i7, 32 GB 3733 MHz LPDDR4X RAM. We report the median inference time of 10 runs.
§.§ Baselines
We compare CoSiNES with four baselines.
TF-IDF A model that computes TF-IDF embeddings learned from training data <cit.>.
GNN A graph neural network that treats each entity or mention as a chain. Each character represents a node in the graph, and its embedding representation is learned during training. The average of the character embeddings is used to represent entity names and mentions <cit.>.
BERT We use the mean of last layer outputs of all tokens from BERT_small <cit.> to represent entities and mentions. This is the same backbone used to train CoSiNES.
GPT3[https://beta.openai.com/docs/guides/embeddings/] We use the embedding GPT-3 api from OpenAI to compute the embeddings using model .
§ RESULTS AND DISCUSSIONS
Table <ref> shows the comparative results on our dataset. Our model outperforms all baselines by a significant margin in terms of top–1 retrieval accuracy: 10.46% over TF-IDF, 13.2% over GNN, 47.76% over BERT, and 3.16% over GPT3. Through comprehensive experimentation, we observe that the best performance model has the BERT-small as the backbone. The learning rate is set to 1e-5, contrastive group size is 10, and the batch size of groups is 16, which makes the effective batch size 160. We set the margin to 2.
§.§ Learning Rate
To investigate how different learning rates affect the convergence of the Siamese model on our dataset, we run five-fold cross-validation with four learning rates (1e-4, 5e-5, 1e-5, and 1e-6) on the training data, as shown in Figure <ref>. For each learning rate, we experiment with different numbers of epochs, ranging from 10 to 200 with an interval of 10. The X axis is the number of epochs for each experiment and the Y axis is the top–1 accuracy. The average of the five-fold top–1 accuracy is shown for each dot in the figure, together with the standard deviation across the five folds. As we can see, the learning rate affects how fast and stably the model converges, and most settings reach similar performance when trained for a sufficient number of epochs. This indicates that the Siamese model is robust with respect to the learning rate.
We set the learning rate to be 1e-5 as it tends to have a smaller deviation of performance.
§.§ Hybrid Triplet Mining
We propose a hybrid of batch–all and batch–hard triplet mining during training. Figure <ref> shows the training process with 200 epochs with the above three learning rates, of which the first 100 epochs apply batch–all triplet sampling and the second 100 epochs employ batch–hard triplet sampling. The result shows that for the first batch–all 100 epochs, the training of 1e-4 and 5e-5 is unstable and performance oscillates greatly. When batch–hard mining comes into play, the training becomes much smoother and the performance continues to improve steadily for all three learning rates. This experiment shows that the hybrid mining scheme improves the top–1 accuracy by around 2% compared to the single-mining strategy.
§.§ Model Size
Normally, there is a tradeoff between model accuracy and efficiency.
Therefore, we experiment with different sizes of BERT as the backbone to find a balance between performance and running time. Figure <ref> shows the inference time on the testing set together with top–1 accuracy. The results show that CoSiNES with BERT-small achieves the best performance and fast inference time. Although the GPT3 embeddings achieve performance close to CoSiNES, running inference through the GPT3 OpenAI API is inefficient.
§.§ ROC Curve
For a comprehensive comparison between our model and the baselines, we conduct an experiment to compute the receiver operating characteristic (ROC) curve. We add 420 previously unseen relevant but negative mentions from the technology domain that do not refer to any entities in the training set, and calculate the false positive rate under different thresholds. Figure <ref> shows that our proposed model has a larger area under the curve, which demonstrates its superior performance over the baselines.
§.§ Qualitative Error Analysis
We examine the predictions from CoSiNES on ESAppMod and categorize the following error types. Table <ref> shows a few examples for each of these types.
Misspelling. When a mention has an error in the spelling, the tokens returned by PLMs could be very different, which leads to a mismatch. This is a challenge for PLMs, whereas a human could easily handle it, e.g., “Andriod" vs. “Android".
Acronym. Linking acronyms to full expressions seems to be a trivial task for humans; however, CoSiNES falls short of this capability. A remedy might be to design a task specialized for recognizing acronyms for PLMs.
Multi-match. This is the most common error, where multiple entities partially match the mention in the surface form. One way to address this issue is to enrich the training dataset with more varied mentions, which is not always within easy reach. Another potential approach is to integrate external knowledge about entities that the model can refer to.
No-match. When the entity and mention have no match at all in the surface form, it is unlikely for the model to retrieve the correct target, especially when no context can be leveraged. Therefore, external knowledge could be particularly useful in this case.
§ ADAPTATION TO BIOMEDICAL DOMAIN
We show how to adapt our framework to the biomedical domain with minimal changes.
§.§ Datasets
We consider four public datasets, ncbi, bc5cdr-disease, bc5cdr-chemical, and bc2gm, covering three types of entities: chemicals, diseases, and genes. Details and statistics regarding the datasets can be found in appendix <ref>.
§.§ Baselines
We compare our framework with three models.
TF-IDF Like the baseline for ESAppMod, we implement a straightforward TF-IDF model <cit.> based on the knowledge database for each dataset and apply nearest-neighbor search for testing.
BioBERT ranking
We use BioBERT <cit.> to encode concepts and mentions without fine-tuning. BioBERT is a large biomedical language representation model pretrained on PubMed abstracts and PMC full-text articles.
BioSyn BioSyn <cit.> is the state-of-the-art model for biomedical entity normalization with synonym marginalization and iterative candidate retrieval. The model leverages sparse embedding from TF-IDF and dense embedding from BioBERT.
§.§ Domain Adaptation
For domain adaptation, it would be ideal to make few or no changes to the model architecture and training process. Therefore, we follow all experimental settings, such as the learning rate, margin, contrastive group generation, and hybrid training scheme, from the experiments on our proposed dataset. The most significant change is that, to adapt to the new domain, we use dmis-lab/biobert-v1.1[https://huggingface.co/dmis-lab/biobert-v1.1] in place of the regular BERT as our backbone. We conduct all experiments on two NVIDIA A100 GPUs and adjust the batch size for each dataset based on the lengths of the mentions.
§.§ Results
The results are shown in Table <ref>. We reproduce the BioBERT experiment reported by <cit.> using the embedding of the [CLS] token as the representation. The results are almost identical. The minor differences might be due to different versions of the pretrained language model.
The performance of BioSyn reported by <cit.> is high. However, as pointed out by <cit.>, the original testing splits used by <cit.> have significant mention overlap with the knowledge base. They therefore removed all the duplicates and produced refined testing splits. We report the performance of BioSyn as given by them on these refined splits.
The results show that CoSiNES significantly outperforms the baselines of TF-IDF and BioBERT ranking in terms of top-k accuracy.
CoSiNES achieves competitive results with BioSyn on all the datasets. Given that we didn't change any hyperparameters or architectures of CoSiNES, and directly applied the framework to new domains, we demonstrate the cross-domain applicability of CoSiNES.
§ CONCLUSION
We propose a generic, scalable, and adaptable framework CoSiNES for the entity standardization task, which maps various mentions to standard entities in the knowledge base. We first construct a new dataset ESAppMod in the technology domain and demonstrate the superiority of our framework over four other models. We conduct comprehensive experiments regarding batch size, learning rate, margin, loss calculation and different sizes of BERT, with our designed contrastive group generation and hybrid triplet mining, and show that the framework is rather robust with respect to hyper-parameters. With the optimal setting on our dataset, we further show that our model can be easily adapted to new domains with minimal changes by achieving competitive performance on four benchmark datasets from the biomedical domain covering three different types of entities.
After examining the errors produced by the framework on our proposed dataset, we categorize four different types of errors and leave the following directions to future work: (1) integrating the framework with external knowledge. For multi-match errors, where multiple entities partially match the mention, it is ambiguous which target to retrieve, and for no-match errors, external knowledge could provide extra information; (2) adversarial training for misspellings. For technical terms, a misspelling could lead to a completely different tokenization of the mention; (3) constructing new training data or augmenting the existing dataset with acronym samples. Pretrained language models are not specialized in recognizing acronyms, so it would be worthwhile to endow PLMs with such a capability.
§ LIMITATIONS
We focus on resolving various mentions from different domains. Although we have tested our framework on multiple datasets, it relies on human-annotated data, and effort should be taken to investigate how the model performs in emerging domains without human-annotated data. Our model works with mentions that have already been extracted from raw text. It would be more practical if the model could work with raw text directly and interact with a mention-extraction module. The performance of the model is largely affected by the surface form of the mentions; although our framework is robust to variations in the surface form, it would be beneficial to further investigate how adversarial perturbations of the mentions could affect the behavior of the framework.
§ ETHICS STATEMENT
The domain and data we work with don't involve any personal information and are all publicly available. However, as the work could be potentially applied in the medical domain to resolve mentions of disease, discretion is advised when any medical decisions or diagnostics are made with the assistance of the model.
§ BIOMEDICAL DATASETS DESCRIPTIONS AND STATISTICS
Detailed descriptions of the datasets can also be found in <cit.> and <cit.>.
NCBI Disease Corpus
NCBI Disease Corpus <cit.> contains manually annotated disease mentions extracted from 793 PubMed abstracts and their corresponding concepts in the MEDIC dictionary <cit.>. The July 6, 2012 version of MEDIC has 11915 CUIs (concept ids) and 71923 synonyms (mentions).
BioCreative 5 CDR
BioCreative V CDR (BC5CDR) <cit.> is a challenge for extracting chemical-disease relations. There are manual annotations for both chemical and disease from 1500 PubMed abstracts. Like the NCBI disease corpus, disease mentions are mapped into the MEDIC dictionary. The chemical mentions are mapped into the Comparative Toxicogenomics DataBase (CTD) <cit.>. The Nov 4, 2019 version of CTD contains 171203 CUIs and 407247 synonyms.
BioCreative 2 GN
BioCreative 2 GN (BC2GN) <cit.> contains human gene and gene product mentions from PubMed abstracts. It has 61646 CUIs and 277944 synonyms <cit.>.
§ TRIPLET TYPES
As shown in Equation <ref>, triplet loss is calculated based on triplets {x, x^+, x^-}, which always consist of two samples from the same class and a third sample from a different class. We usually call x the anchor of the triplet, x^+ the positive sample, and x^- the negative sample. The intuition behind the loss function is that the distance d(x, x^-) between the anchor and negative should be larger than the distance d(x, x^+) between the anchor and positive by a margin. The margin is a hyperparameter that needs to be tuned.
ℒ = max(d(x, x^+)-d(x, x^-)+ margin, 0)
Based on the difference between d(x, x^-) and d(x, x^+), we can classify triplets into three categories.
* Easy triplets, which have a loss of zero based on Equation <ref>. Therefore, easy triplets provide no learning signal to the model.
d(x, x^-) - d(x, x^+) > margin
* Semihard triplets, which have a loss less than the margin.
0 < d(x, x^-) - d(x, x^+) < margin
* Hard triplets, which are most informative for the model.
d(x, x^-) - d(x, x^+) < 0
§ HYPERPARAMETER SEARCH
We performed the following hyperparameter grid search on ESAppMod.
|
http://arxiv.org/abs/2306.17693v1
|
20230630141944
|
Thompson sampling for improved exploration in GFlowNets
|
[
"Jarrid Rector-Brooks",
"Kanika Madan",
"Moksh Jain",
"Maksym Korablyov",
"Cheng-Hao Liu",
"Sarath Chandar",
"Nikolay Malkin",
"Yoshua Bengio"
] |
cs.LG
|
[
"cs.LG"
] |
Thompson Sampling for Improved Exploration in GFlowNets

Jarrid Rector-Brooks (Mila – Québec AI Institute, Université de Montréal, DreamFold)
Kanika Madan (Mila, Université de Montréal)
Moksh Jain (Mila, Université de Montréal)
Maksym Korablyov (Mila, DreamFold)
Cheng-Hao Liu (Mila, DreamFold, McGill University)
Sarath Chandar (Mila, Polytechnique Montréal)
Nikolay Malkin (Mila, Université de Montréal)
Yoshua Bengio (Mila, Université de Montréal, CIFAR Fellow)

Correspondence: Jarrid Rector-Brooks <[email protected]>

Keywords: Machine Learning, ICML
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy. Unlike other algorithms for hierarchical sampling that optimize a variational bound, GFlowNet algorithms can stably run off-policy, which can be advantageous for discovering modes of the target distribution. Despite this flexibility in the choice of behaviour policy, the optimal way of efficiently selecting trajectories for training has not yet been systematically explored. In this paper, we view the choice of trajectories for training as an active learning problem and approach it using Bayesian techniques inspired by methods for multi-armed bandits. The proposed algorithm, Thompson sampling GFlowNets (TS-GFN), maintains an approximate posterior distribution over policies and samples trajectories from this posterior for training. We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
§ INTRODUCTION
Generative flow networks <cit.> are generative models which sequentially construct objects from a space 𝒳 by taking a series of actions sampled from a learned policy P_F. A GFlowNet's policy P_F is trained such that, at convergence, the probability of obtaining some object x ∈𝒳 as the result of sampling a sequence of actions from P_F is proportional to a reward R(x) associated to x. Whereas traditional probabilistic modeling approaches (e.g., those based on Markov chain Monte Carlo (MCMC)) rely on local exploration in 𝒳 for good performance, the parametric policy learned by GFlowNets allows them to generalize across states and yield superior performance on a number of tasks <cit.>.
While GFlowNets solve the variational inference problem of approximating a target distribution on 𝒳 with the distribution induced by the sampling policy <cit.>, they are trained in a manner reminiscent of reinforcement learning (RL). GFlowNets are typically trained by either sampling trajectories on-policy from the learned sampling policy or off-policy from a mix of the learned policy and random noise. Each trajectory sampled concludes with some object x ∈𝒳 for which the GFlowNet receives reward R(x) and takes a gradient step on the parameters of the sampler with respect to the reward signal. Despite GFlowNets' prior successes, this mode of training leaves them vulnerable to issues seen in the training of reinforcement learning agents: namely, slow temporal credit assignment and optimally striking the balance between exploration and exploitation.
Although multiple works have tackled the credit assignment issue in GFlowNets <cit.>, considerably less attention has been paid to the exploration problem. Recently <cit.> proposed to augment GFlowNets with intermediate rewards so as to allow the addition of intrinsic rewards <cit.> and incorporate an exploration signal directly into training. However, while density-based exploration bonuses can provide much better performance on tasks where the reward R(x) is very sparse, there is no guarantee that the density-based incentives correlate with model uncertainty or task structure. In fact, they have been shown to yield arbitrarily poor performance in a number of reinforcement learning settings <cit.>. In this paper, we develop an exploration method for GFlowNets which provides improved convergence to the target distribution even when the reward R(x) is not sparse.
Thompson sampling <cit.> is a method which provably manages the exploration/exploitation problem in settings from multi-armed bandits to reinforcement learning <cit.> and has been employed to much success across a variety of deep reinforcement learning tasks <cit.>. The classical TS algorithm <cit.> maintains a posterior over the model of the environment and acts optimally according to a sample from this posterior over models. TS has been generalized to RL problems in the form of Posterior Sampling RL <cit.>. A variant of TS has been adapted in RL, where the agent maintains a posterior over policies and value functions <cit.> and acts optimally based on a random sample from this posterior. We consider this variant of TS in this paper.
Our main contribution in this paper is describing and evaluating an algorithm based on Thompson sampling for improved exploration in GFlowNets. Building upon prior results in <cit.>, we demonstrate how Thompson sampling allows for improved exploration and optimization efficiency in GFlowNets. We validate our method on a grid-world task and a sequence-generation task. In our experiments, TS-GFN substantially improves both sample efficiency and task performance. Our algorithm is computationally efficient and highly parallelizable, only taking ∼15% more computation time than prior approaches.
§ RELATED WORK
Exploration in RL
There exists a wide literature on uncertainty based RL exploration methods. Some methods rely on the Thompson sampling heuristic and non-parametric representations of the posterior to promote exploration <cit.>. Others employ uncertainty to enable exploration based on the upper confidence bound heuristic or information gain <cit.>.
Another set of exploration methods attempts to make agents “intrinsically” motivated to explore. This family of methods includes random network distillation (RND) and Never Give Up <cit.>. <cit.> propose to augment GFlowNets with RND-based intrinsic rewards to encourage better exploration.
MaxEnt RL RL has a rich literature on energy-based, or maximum entropy, methods <cit.>, which are close or equivalent to the GFlowNet framework in certain settings (in particular when the MDP has a tree structure <cit.>). Also related are methods that maximize entropy of the state visitation distribution or some proxy of it <cit.>, which achieve a similar objective to GFlowNets by flattening the state visitation distribution. We hypothesize that even basic exploration methods for GFlowNets (e.g., tempering or ϵ-noisy) could be sufficient exploration strategies on some tasks.
§ METHOD
§.§ Preliminaries
We begin by summarizing the preliminaries on GFlowNets, following the conventions of <cit.>.
Let G=(𝒮,𝒜) be a directed acyclic graph. The vertices s∈𝒮 are called states and the directed edges (u→ v)∈𝒜 are actions. If (u→ v) is an edge, we say v is a child of u and u is a parent of v. There is a unique initial state s_0∈𝒮 with no parents. States with no children are called terminal, and the set of terminal states is denoted by 𝒳.
A trajectory is a sequence of states τ=(s_m→ s_m+1→…→ s_n), where each (s_i→ s_i+1) is an action. The trajectory is complete if s_m=s_0 and s_n is terminal.
The set of complete trajectories is denoted by 𝒯.
A (forward) policy is a collection of distributions P_F(-|s) over the children of every nonterminal state s∈𝒮. A forward policy determines a distribution over 𝒯 by
P_F(τ=(s_0→…→ s_n))=∏_i=0^n-1P_F(s_i+1|s_i).
Similarly, a backward policy is a collection of distributions P_B(-|s) over the parents of every noninitial state.
Any distribution over complete trajectories that arises from a forward policy satisfies a Markov property: the marginal choice of action out of a state s is independent of how s was reached. Conversely, any Markovian distribution over 𝒯 arises from a forward policy <cit.>.
A forward policy can thus be used to sample terminal states x∈𝒳 by starting at s_0 and iteratively sampling actions from P_F, or, equivalently, taking the terminating state of a complete trajectory τ∼ P_F(τ). The marginal likelihood of sampling x∈𝒳 is the sum of likelihoods of all complete trajectories that terminate at x.
Suppose that a nontrivial (not identically 0) nonnegative reward function R:𝒳→ℝ_≥0 is given. The learning problem solved by GFlowNets is to estimate a policy P_F such that the likelihood of sampling x∈𝒳 is proportional to R(x). That is, there should exist a constant Z such that
R(x)=Z∑_τ∈𝒯: τ=(s_0→…→ s_n=x)P_F(τ) ∀ x∈𝒳.
If (<ref>) is satisfied, then Z=∑_x∈𝒳R(x).
The sum in (<ref>) may be intractable. Therefore, GFlowNet training algorithms require estimation of auxiliary quantities beyond the parameters of the policy P_F. The training objective we primarily consider, trajectory balance (TB), learns an estimate of the constant Z and of a backward policy, P_B(s| s'), representing the posterior over predecessor states of s' in trajectories that contain s'. The TB loss for a trajectory τ is:
ℒ_TB(τ; θ) = (logZ_θ∏_t=0^n-1 P_F(s_t+1|s_t; θ)/R(s_n) ∏_t=0^n-1 P_B(s_t|s_t+1; θ))^2
where θ are the parameters of the learned objects P_F, P_B, and Z. If ℒ_TB(τ; θ)=0 for all τ, then P_F samples objects x∈𝒳 with probability proportional to R(x), i.e., (<ref>) is satisfied. Algorithms minimize this loss for trajectories τ sampled from some training policy π_θ, which may be equal to P_F itself (on-policy training) but is usually taken to be a more exploratory distribution, as we discuss below.
Notably, any choice of a backward policy P_B yields a unique corresponding P_F and Z that make the expression on the right side of (<ref>) equal to zero for all τ∈𝒯 (see <cit.> for interpretations of this result in terms of variational methods).
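For a single trajectory, the trajectory balance objective is a short computation once the log-probabilities along the trajectory have been gathered; the following Python sketch assumes they are provided as vectors.

import torch

def trajectory_balance_loss(log_pf, log_pb, log_z, log_reward):
    # log_pf, log_pb: (n,) log P_F(s_{t+1}|s_t) and log P_B(s_t|s_{t+1}) along one trajectory
    # log_z: scalar parameter log Z_theta; log_reward: log R(s_n) of the terminal state
    return (log_z + log_pf.sum() - log_reward - log_pb.sum()) ** 2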
§.§ GFlowNet exploration strategies
Prior work on GFlowNets uses training policies based on dithering or intrinsic motivation, including:
On-policy The training policy is the current P_F: π_θ(s' | s) = P_F(s' | s;θ).
Tempering Let α_θ(s'|s): × be the logits of P_F, then the training policy is a Boltzmann distribution with temperature T ∈ as
π_θ(s'|s) ∝exp(α_θ(s'|s) / T).
ϵ-noisy For ϵ∈ [0,1], the training policy follows P_F with probability 1 - ϵ and takes a random action with probability ϵ as π_θ(s'|s) = (1-ϵ)P_F(s'|s;θ) + ϵ/#{s”:(s s”)∈}.
GAFN <cit.> The training policy is the current P_F, but P_F is learned by incorporating a pseudocount-based intrinsic reward for each state s ∈τ into the training objective ℒ(τ;P_F,P_B), so that
π_θ(s'|s) = P_F(s'|s;θ).
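The dithering variants above amount to simple transformations of the forward-policy logits. The sketch below illustrates on-policy, tempering, and ϵ-noisy sampling over the valid actions of a state; it is only an illustration of the definitions above, with argument names of our own choosing.

```python
import torch
import torch.nn.functional as F

def training_action_dist(logits, mode="on_policy", temperature=1.0, epsilon=0.0):
    """Categorical distribution over the valid children of a state.

    `logits` are the forward-policy logits alpha_theta(. | s), already restricted
    to the valid actions out of s.
    """
    if mode == "tempering":
        probs = F.softmax(logits / temperature, dim=-1)
    elif mode == "epsilon_noisy":
        pf = F.softmax(logits, dim=-1)
        uniform = torch.full_like(pf, 1.0 / pf.shape[-1])
        probs = (1.0 - epsilon) * pf + epsilon * uniform
    else:  # on-policy: follow P_F itself
        probs = F.softmax(logits, dim=-1)
    return torch.distributions.Categorical(probs=probs)
```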
§.§ Thompson sampling for GFlowNets
Learning GFlowNets over large spaces requires judicious exploration. It makes little sense to explore in regions the GFlowNet has already learned well – we would much rather prioritize exploring regions of the state space on which the GFlowNet has not accurately learned the reward distribution. Prior methods do not explicitly prioritize this. Both dithering approaches (tempering and ϵ-noisy)
and GAFNs encourage a form of uniform exploration, be it pure random noise as in dithering or a pseudocount in GAFNs. While it is impossible to a priori determine which regions a GFlowNet has learned poorly, we might expect that it performs poorly in the regions on which it is uncertain. An agent with an estimate of its own uncertainty could bias its action selection towards regions in which it is more uncertain.
With this intuition in mind, we develop an algorithm inspired by Thompson sampling and its applications in RL and bandits <cit.>. In particular, following <cit.>, we maintain an approximate posterior over forward policies P_F by viewing the last layer of our policy network itself as an ensemble. To maintain an ensemble of size K ∈ℤ^+, we extend the last layer of the policy network to have K ·ℓ heads, where ℓ is the maximum number of valid actions according to G for any state s ∈𝒮. To promote computational efficiency, all members of our ensemble share weights in all layers prior to the final one.
To improve our method's uncertainty estimates, we employ the statistical bootstrap to determine which trajectories τ are used to train each ensemble member P_F,k, and we also make use of randomized prior networks <cit.>. Prior networks are downsized versions of the main policy network whose weights are fixed at initialization and whose output is summed with that of the main network to produce the actual policy logits. They have been shown to significantly improve uncertainty estimates and agent performance in reinforcement learning tasks.
Crucially, while we parameterize an ensemble of K forward policies, we do not maintain an ensemble of backward policies, instead sharing one P_B across all ensemble members P_F,k. Recall from <ref> that each P_B uniquely determines a P_F for which ℒ_TB(τ) = 0 for all τ∈𝒯. Specifying a different P_B,k for each P_F,k would therefore set a different learning target for each member of the ensemble. By sharing a single P_B across all ensemble members we ensure that all P_F,k converge to the same optimal P_F^*. We show in Section <ref> that sharing P_B indeed yields significantly better performance than maintaining separate P_B,k.
With our policy network parameterization in hand, the rest of our algorithm is simple. First we sample an ensemble member P_F,k with k ∼Uniform{1,…,K} and then sample an entire trajectory from it τ∼ P_F,k. This trajectory is then used to train each ensemble member where we include the trajectory in the training batch for ensemble member P_F,k based on the statistical bootstrap with bootstrap probability p (p is a hyperparameter fixed at the beginning of training). The full algorithm is presented in Appendix <ref>.
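As a rough illustration of the parameterization and sampling loop described above (a shared trunk whose last layer holds K·ℓ heads, a frozen randomized prior network, and a uniformly drawn ensemble member rolled out for a whole trajectory), consider the sketch below. The environment interface (reset, step, featurize, valid-action mask), the layer sizes, and all names are placeholders of our own; the bootstrap assignment of trajectories to ensemble members is only indicated in a comment.

```python
import torch
import torch.nn as nn

class EnsembleForwardPolicy(nn.Module):
    """Shared-trunk policy whose last layer holds K * n_actions heads (one logit set
    per ensemble member), plus a frozen, randomly initialized prior network."""
    def __init__(self, state_dim, n_actions, K, hidden=256, prior_scale=1.0):
        super().__init__()
        self.K, self.n_actions, self.prior_scale = K, n_actions, prior_scale
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, K * n_actions))
        self.prior = nn.Sequential(nn.Linear(state_dim, hidden // 2), nn.ReLU(),
                                   nn.Linear(hidden // 2, K * n_actions))
        for p in self.prior.parameters():      # randomized prior: fixed at initialization
            p.requires_grad_(False)

    def member_logits(self, state_features, k):
        out = self.trunk(state_features) + self.prior_scale * self.prior(state_features)
        return out.view(self.K, self.n_actions)[k]     # logits of P_{F,k}(. | s)

def thompson_sample_trajectory(policy, env):
    """Draw one ensemble member uniformly, then roll out a complete trajectory with it."""
    k = torch.randint(policy.K, (1,)).item()
    s, trajectory = env.reset(), []
    while not env.is_terminal(s):
        logits = policy.member_logits(env.featurize(s), k)
        logits = logits.masked_fill(~env.valid_action_mask(s), float("-inf"))
        action = torch.distributions.Categorical(logits=logits).sample().item()
        trajectory.append((s, action))
        s = env.step(s, action)
    # each member P_{F,k'} then trains on this trajectory with bootstrap probability p
    return trajectory, k
```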
§ EXPERIMENTS
§.§ Grid
We study a modified version of the grid environment from <cit.>. The set of interior states is a 2-dimensional grid of size H × H. The initial state is (0, 0) and each action is a step that increments one of the 2 coordinates by 1 without leaving the grid. A special termination action is also allowed from each state.
Prior versions of this grid environment provide high reward whenever the agent exits at a corner of the grid. This sort of reward structure is very easy for an agent to generalize to and is a trivial exploration task when the reward is not highly sparse (such reward structures are not the focus of this paper). To compensate for this, we adopt a reward function based on a summation of truncated Fourier series, yielding a reward structure which is highly multimodal and more difficult to generalize to (see Figure <ref>). The reward function is given by
R(x) = ∑_k=1^n cos(2a_k,1π g(x_1)) + sin(2 a_k,2π g(x_1)) +
cos(2b_k,1π g(x_2)) + sin(2b_k,2π g(x_2))
where a_k,1,a_k,2,b_k,1,b_k,2∈ℝ are preset scaling constants for all k, n is a hyperparameter determining the number of elements in the summation, g: ℝ_≥0→ [c,d] is given by g(x) = x(d-c)/H + c, and c, d ∈ℝ are the values to which the first and last integer coordinates in the grid are mapped.
We investigate a 64 × 64 grid with this truncated Fourier series reward (see Appendix <ref> for full reward setup details). We train the GFlowNets to sample from this target reward function and plot the evolution of the L_1 distance between the target distribution and the empirical distribution of the last 2 · 10^5 states seen in training[This evaluation is possible in this environment because the exact target distribution can be tractably computed.].
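For concreteness, a small sketch of the reward above and of the L_1 evaluation metric is given below; the array shapes and argument names are our own, and the exponent β mentioned in Appendix <ref> is omitted here.

```python
import numpy as np

def grid_reward(x1, x2, a, b, H=64, c=-0.5, d=0.5):
    """Truncated-Fourier-series reward on the H x H grid.
    a, b: arrays of shape (n, 2) holding the constants a_{k,1}, a_{k,2} and b_{k,1}, b_{k,2}."""
    g = lambda x: x * (d - c) / H + c                 # rescale integer coordinates to [c, d]
    g1, g2 = g(x1), g(x2)
    return float(np.sum(np.cos(2 * np.pi * a[:, 0] * g1) + np.sin(2 * np.pi * a[:, 1] * g1)
                        + np.cos(2 * np.pi * b[:, 0] * g2) + np.sin(2 * np.pi * b[:, 1] * g2)))

def empirical_l1(visit_counts, target_probs):
    """L_1 distance between the empirical distribution of sampled terminal states and the target."""
    empirical = visit_counts / visit_counts.sum()
    return float(np.abs(empirical - target_probs).sum())
```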
The results (mean and standard error over five random seeds) are shown in Figure <ref> (left side). Models trained with trajectories sampled by TS-GFN converge to the true distribution faster than all other exploration strategies, and with very little variance over random seeds.
We also investigate the effect of sharing the backward policy P_B across ensemble members in Figure <ref> (right side). Maintaining a separate P_B,k for each P_F,k performs significantly worse than sharing a single P_B over all ensemble members: the GFlowNet learns much more slowly and converges to a worse empirical L_1 than when P_B is shared.
§.§ Bit sequences
We consider the synthetic sequence generation setting from <cit.>, where the goal is to generate sequences of bits of fixed length n=120, resulting in a search space of size 2^120. The reward is specified by a set of modes M ⊂𝒳={0,1}^n that is unknown to the learning agent. The reward of a generated sequence x is defined in terms of the Hamming distance d from the modes: R(x) = exp(1 - n^-1min_y ∈ M d(x,y)). The vocabulary for the GFlowNets is {0,1}. Most experiment settings are taken from <cit.> and <cit.>.
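A direct transcription of this reward might look as follows; representing sequences as Python strings over '0'/'1' is our own choice.

```python
import math

def bit_sequence_reward(x, modes, n=120):
    """R(x) = exp(1 - min_{y in M} d(x, y) / n), with d the Hamming distance.
    x and the elements of `modes` are length-n strings over {0, 1}."""
    d_min = min(sum(c1 != c2 for c1, c2 in zip(x, y)) for y in modes)
    return math.exp(1.0 - d_min / n)
```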
Models are evaluated by tracking the number of modes according to the procedure in <cit.> wherein we count a mode m as “discovered” if we sample some x such that d(x,m) ≤δ. The results are presented in Figure <ref> (mean and standard error are plotted over five random seeds). We find that models trained with TS-GFN find 60% more modes than on-policy, tempering, and ϵ-noisy. TS-GFN soundly outperforms GAFN, whose pseudocount based exploration incentive is misaligned with the task's reward structure and seems to perform exploration in unhelpful regions of the (very large) search space.
§ CONCLUSION
We have shown in this paper that using a Thompson sampling based exploration strategy for GFlowNets is a simple, computationally efficient, and performant alternative to prior GFlowNet exploration strategies. We demonstrated how to adapt uncertainty estimation methods used for Thompson sampling in deep reinforcement learning to the GFlowNet domain and proved their efficacy on a grid and long sequence generation task. Finally, we believe that future work should involve trying TS-GFN on a wider array of experimental settings and building a theoretical framework for investigating sample complexity of GFlowNets.
§ ACKNOWLEDGMENTS
The authors acknowledge financial support from CIFAR, Genentech, IBM, Samsung, Microsoft, and Google.
§ ADDITIONAL ALGORITHM DETAILS
§ EXPERIMENT DETAILS: GRID
For brevity, we recall the definition of the reward function from Section <ref> as
R(x) = ∑_k=1^n cos(2a_k,1π g(x_1)) + sin(2 a_k,2π g(x_1)) + cos(2b_k,1π g(x_2)) + sin(2b_k,2π g(x_2))
The reward function was computed using the following hyperparameters. The weights were set as a_k,1 = a_k,2 = b_k,1 = b_k,2 = 4k/1000 with n = 1000. The grid side boundary constants were c = -0.5, d = 0.5, and the side length of the overall environment was H = 64 (so that the overall state space was of size H × H = 64^2 = 4096). Finally, we raised the reward to the power β = 1.5, so that we trained the GFlowNets using the reward R'(x) = R(x)^β.
Besides the reward, architecture details are identical to those in <cit.>, <cit.>, and <cit.>. The architecture of the forward and backward policy models are MLPs of the same architecture as in <cit.>, taking a one-hot representation of the coordinates of s as input and sharing all layers except the last. The only difference comes from the TS-GFN implementation which has K · d heads for the output of the last layer where d is the number of heads in the architecture of the non-TS-GFNs.
All models are trained with the Adam optimizer, the trajectory balance loss, and a batch size of 64 for a total of 400,000 trajectories. Hyperparameters were tuned using the Optuna Bayesian optimization framework from project Ray <cit.>. Each method was allowed 100 hyperparameter samples from the Bayesian optimization procedure. We reported performance from the best hyperparameter setting found by the Bayesian optimization procedure averaged over five random seeds (0,1,2,3,4). We now detail the hyperparameters selected from the Bayesian optimization procedure for each exploration strategy.
For on-policy we found optimal hyperparameters of 0.00156 for the model learning rate and 0.00121 for the log Z learning rate. For tempering we found optimal hyperparameters of 0.00236 for the model learning rate, 0.0695 for the log Z learning rate, and 1.0458 for the sampling policy temperature. For ϵ-noisy we found optimal hyperparameters of 0.00112 for the model learning rate, 0.0634 for the log Z learning rate, and 0.00534 for ϵ. For GAFN we found optimal hyperparameters of 0.000166 for the model learning rate, 0.0955 for the log Z learning rate, and 0.144 for the intrinsic reward weight, and the architecture of the RND networks was a 2-layer MLP with a hidden layer dimension of 53 and an output embedding dimension of 96 (the hidden layer dimension and embedding dimension were also tuned by the Bayesian optimization procedure). Finally, for Thompson sampling we found a model learning rate of 0.00266, a log Z learning rate of 0.0976, an ensemble size of 100, a bootstrap probability δ of 0.274, and a prior weight of 12.03.
§ EXPERIMENT DETAILS: BIT SEQUENCES
The modes M as well as the test sequences are selected as described in <cit.>. The policy for all methods is parameterized by a Transformer <cit.> with 3 layers, dimension 64, and 8 attention heads. All methods are trained for 50,000 iterations with minibatch size of 16 using Adam optimizer and the trajectory balance loss.
All hyperparameters were tuned according to a grid search over the parameter values specified below. For on-policy we used a model learning rate of 0.0001 picked from the set {0.0001, 0.001, 0.01} and a log Z learning rate of 0.001 from the set {0.001, 0.01}. For tempering we used a model learning rate of 0.0001 picked from the set {0.0001, 0.001, 0.01}, a log Z learning rate of 0.001 from the set {0.001, 0.01}, and a sampling distribution temperature of 1.1 from the set {1.05, 1.1, 1.25, 1.5}. For ϵ-noisy we used a learning rate of 0.001 picked from the set {0.0001, 0.001, 0.01}, a log Z learning rate of 0.001 from the set {0.001, 0.01}, and an ϵ of 0.005 from the set {0.01, 0.005, 0.001, 0.0005}. For GAFN we used a learning rate of 0.001 picked from the set {0.0001, 0.0005, 0.001}, a log Z learning rate of 0.1 from the set {0.001, 0.01, 0.1}, and an intrinsic reward weight of 0.5 from the set {0.1, 0.5, 1.0, 5.0, 10.0}; the RND network was a 4-layer MLP with a hidden layer dimension of 64 and an output dimension of 64. For TS-GFN we used a model learning rate of 0.001 picked from the set {0.0001, 0.001, 0.01}, a log Z learning rate of 0.001 from the set {0.001, 0.01}, an ensemble size of 50 picked from the set {10, 50, 100}, a prior weight of 4.0 picked from the set {0.1, 1.0, 4.0}, and a bootstrap probability δ of 0.75.
|
http://arxiv.org/abs/2306.03891v2
|
20230606175236
|
Equivariant localization and holography
|
[
"Dario Martelli",
"Alberto Zaffaroni"
] |
hep-th
|
[
"hep-th",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2306.11502v1
|
20230620124627
|
The meaning of imaginary space
|
[
"Bruno Alexandre",
"Steffen Gielen",
"João Magueijo"
] |
hep-th
|
[
"hep-th",
"gr-qc"
] | |
http://arxiv.org/abs/2306.07543v1
|
20230613052457
|
How Secure is Your Website? A Comprehensive Investigation on CAPTCHA Providers and Solving Services
|
[
"Rui Jin",
"Lin Huang",
"Jikang Duan",
"Wei Zhao",
"Yong Liao",
"Pengyuan Zhou"
] |
cs.CR
|
[
"cs.CR",
"cs.CY"
] |
University of Science and Technology of China; Research Center For Data to Cyberspace; Anhui Engineering Research Center for Intelligent Applications and Security of Industrial Internet; Anhui University of Technology
How Secure is Your Website? A Comprehensive Investigation on CAPTCHA Providers and Solving Services
Rui Jin^{1,2}, Lin Huang^{1,2}, Jikang Duan^{1,2}, Wei Zhao^{3,4}, Yong Liao^{1,2}, Pengyuan Zhou^{1,2} (The first and second authors contributed equally to this work.)
====================================================================================================================================================
Completely Automated Public Turing Test To Tell Computers and Humans Apart (CAPTCHA) has been implemented on many websites to distinguish between harmful automated bots and legitimate users. However, the revenue generated by the bots has turned circumventing CAPTCHAs into a lucrative business. Although earlier studies provided information about text-based CAPTCHAs and the associated CAPTCHA-solving services, a lot has changed in the past decade regarding the content, suppliers, and solvers of CAPTCHAs. We have conducted a comprehensive investigation of the latest third-party CAPTCHA providers and CAPTCHA-solving services' attacks. We dug into the details of CAPTCHA-As-a-Service and the latest CAPTCHA-solving services and carried out adversarial experiments on CAPTCHAs and CAPTCHA solvers. The experiment results show a worrying fact: most of the latest CAPTCHAs are vulnerable to both human solvers and automated solvers. New CAPTCHAs based on hard AI problems and behavior analysis are needed to stop CAPTCHA solvers.
§ INTRODUCTION
Completely Automated Public Turing Test To Tell Computers and Humans Apart, or CAPTCHA, has been widely deployed on websites to fight against malicious automated activities. As the name suggests, CAPTCHAs are puzzles that are difficult for programs to solve but can be easily solved by humans. The primitive and most common CAPTCHAs are images that contain distorted and blurred letters; users need to correctly recognize these letters. The arms race between CAPTCHAs and CAPTCHA-solving algorithms has made them more and more complicated. Nowadays, many popular websites turn to more difficult CAPTCHAs, e.g., requiring users to select images that contain certain animals from several candidate images, or to listen to audio of several words with background noise and spell them.
Websites risk losing annoyed users yet insist on deploying CAPTCHAs, since they help prevent automated bots from carrying out malicious activities such as spamming, brute-force attacks, data scraping, etc. Requiring users to solve a CAPTCHA adds an extra layer of security by verifying that the user is human. This further helps prevent automated systems from manipulating or distorting data in, for example, voting systems, online polls, or user-generated content, so that the accuracy and reliability of data can be maintained. Last, CAPTCHAs help prevent fraudulent activities, such as account creation or online transactions carried out by automated bots. Adding a verification step reduces the risk of fraudulent activities and enhances the overall security of online platforms.
In turn, widely deployed CAPTCHAs have made solving CAPTCHAs a profitable business. There are two methods to solve CAPTCHAs at a low cost: advanced AI or outsourced cheap human labor. The former method is commonly used to solve relatively old CAPTCHAs that can no longer stump the latest AIs. Since some small or old websites have not updated their outdated text-based CAPTCHAs, this method is still offered by most CAPTCHA-solving service retailers. The latter method is used when no known AI in the community can achieve a high success rate against the target CAPTCHA. Retailers usually hire workers from low-income areas to reduce costs. Since CAPTCHAs are designed to be easy for humans, the entry barrier for this job is very low. The price of solving CAPTCHAs using manpower can be as low as several dollars per thousand CAPTCHAs.
The development of AI has increased the difficulty of developing CAPTCHAs. As CAPTCHA-As-a-Service becomes mainstream, great changes have taken place in CAPTCHA providers and CAPTCHA solvers. However, few studies have examined the current state of the war between CAPTCHA providers and underground CAPTCHA-solving services. In this paper, we present the mechanics behind third-party CAPTCHA providers and CAPTCHA-solving services. Furthermore, we test the CAPTCHA-solving services against popular CAPTCHA providers to unveil more details of underground CAPTCHA-solving services. Our contributions include:
- Investigating and summarizing CAPTCHA-As-a-Service.
- Unveiling the attack details and service quality of underground CAPTCHA-solving services.
The structure of this paper is as follows. In Section <ref>, the related background of CAPTCHAs will be introduced. The CAPTCHA-As-a-Service framework and popular CAPTCHA providers will be presented in Section <ref>. Section <ref> introduces the attack framework and some CAPTCHA-solving services retailers. The adversarial experiments between selected CAPTCHA providers and CAPTCHA-solving services will be shown in Section <ref>. Section <ref> is the conclusion.
§ BACKGROUND
Luis von Ahn et al. proposed the term CAPTCHA in 2003 <cit.>. In that paper, they defined CAPTCHA as a program that can generate and grade tests that most humans can easily pass but current computer programs cannot. They also proposed the core idea of CAPTCHA: challenging AI problems. The design of CAPTCHAs is similar to modern cryptography: cryptographic algorithms are based on hard mathematical problems that cannot be solved at a realistic cost, and CAPTCHAs are based on difficult AI problems for which the AI community has not found a method with a high success rate. For example, when they formalized text CAPTCHAs, they assumed that programs could not achieve high accuracy in recognizing transformed letters with the available technology. With modern cryptography, we have not heard of many leakage incidents caused by breaking the encryption algorithms themselves. However, even though many websites have armed themselves with SOTA CAPTCHAs, fake accounts and scalpers have not disappeared and have become even more rampant. On the one hand, the development of AI has made it possible to break some CAPTCHAs with a high success rate. On the other hand, underground CAPTCHA-solving services using cheap manpower provide a guarantee to solve those CAPTCHAs that no software can pass.
In 2010, Marti Motoyama conducted a detailed investigation of CAPTCHAs and CAPTCHA solvers <cit.>. Before 2010, the mainstream CAPTCHAs were text-based, and many websites deployed their own CAPTCHA. In this study, the author tested 8 CAPTCHA-solving services against 25 popular websites' CAPTCHAs. The results showed that the services could achieve a success rate of over 70% on most websites within 20 seconds. The author also argued that defenders were winning the war against cost-ineffective automated software solvers, which usually cost hundreds or even thousands of dollars yet could not achieve promising results.
Most text-based CAPTCHAs were broken using software by 2014 <cit.>. Twisted texts can no longer stop software solvers while remaining easy for humans to recognize. Strictly speaking, text-based CAPTCHAs are a subset of image-based CAPTCHAs: text-based CAPTCHAs ask users to recognize text from images, while the recognition target of image-based CAPTCHAs can be more varied. In 2012, using photographs from Google Street View, Google developed reCAPTCHA v2, which requires the user to identify crosswalks, bikes, buses, etc. Despite the rapid development of semantic computer vision, the attacker must train a model for each new recognition target, significantly increasing the cost of software CAPTCHA solvers. So far, image-based CAPTCHAs are still mainstream. In 2019, Weng et al. also tested CAPTCHA-solving services on common image-based CAPTCHAs <cit.>; 152 CAPTCHA-solving services distributed worldwide were confirmed. Most tested CAPTCHA-solving services' success rates against image-based CAPTCHAs ranged from 0.8 to 0.95.
Apart from the content of CAPTCHAs, CAPTCHA providers have also changed in the last decade. Popular websites no longer develop their own CAPTCHAs and instead turn to third-party CAPTCHA providers, e.g., Google reCAPTCHA, Arkose Labs, etc. Typically, third-party CAPTCHAs are implemented in an independent iframe using scripts developed by these providers. After solving the CAPTCHA, the user receives a token from the CAPTCHA provider and can pass the CAPTCHA by sending this token to the website. Since the difficulty of CAPTCHAs varies between providers, CAPTCHA solvers usually classify target CAPTCHAs by provider and set prices accordingly.
§ CAPTCHA PROVIDERS
The arms race between CAPTCHAs and CAPTCHA solvers has pushed CAPTCHAs to become more complicated. Since AIs can easily pass self-developed text-based CAPTCHAs, which have been thoroughly researched, many websites use CAPTCHAs from professional third-party CAPTCHA providers. Few studies have presented the mechanics of third-party CAPTCHAs. In this section, we present the framework of third-party CAPTCHAs and introduce four third-party CAPTCHA providers: Google reCAPTCHA, Arkose Labs, GeeTest, and hCaptcha, since most CAPTCHA-solving service retailers support them, and we will use them as testbeds.
§.§ Third-party CAPTCHAs framework
Before deploying third-party CAPTCHAs, the website needs to apply for a pair of keys from the provider: a public site key and a private secret key. On the client side, most websites place the CAPTCHA and the public site key in an iframe. As the first line of defense, users usually need to click a checkbox to request and load the content of the CAPTCHA. The public site key is used here to invoke the CAPTCHA. After solving the CAPTCHA, the client sends the collected data to the provider. If the provider believes the client is a human, a response token is generated and sent to the client. The client then needs to send the response token to the server. The server can complete the verification by consulting the CAPTCHA provider with the received response and the private secret key. This framework is presented in Fig. <ref>.
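On the server side, the verification step in Fig. <ref> typically reduces to a single HTTPS call to the provider. The sketch below uses Google reCAPTCHA's siteverify endpoint as an example; the function name, the placeholder secret key, and the way the token reaches the server are our own assumptions, and other providers expose analogous verification endpoints.

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"   # reCAPTCHA verification endpoint
SECRET_KEY = "private-secret-key-issued-to-the-website"          # placeholder

def token_is_valid(response_token, remote_ip=None):
    """Ask the CAPTCHA provider whether the token submitted by the client is genuine."""
    payload = {"secret": SECRET_KEY, "response": response_token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=10).json()
    return bool(result.get("success"))
```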
§.§ Popular providers
§.§.§ Google reCAPTCHA
There are three reCAPTCHA versions: reCAPTCHA v1, v2, and v3. reCAPTCHA v1 is text-based and has been shut down since 2018. There are two popular types of reCAPTCHA v2. The first type asks the user to click the “I'm not a robot” checkbox and select images with a certain object from nine candidate images (Fig. <ref>). The second type, reCAPTCHA v2 Invisible, does not need the user to click the checkbox but requires it to be bound to a button or invoked programmatically instead. It also analyzes the user's IP address, cookies, mouse movements, etc., to assess the risk and decide whether further challenges need to be sent to the user. The latest reCAPTCHA v3 is similar to reCAPTCHA v2 Invisible, except that it does not send any explicit challenge and scores the user instead of simply classifying the user as human or machine.
(Source: https://anti-captcha.com/apidoc/task-types/RecaptchaV2TaskProxyless)
§.§.§ GeeTest
As shown in Fig. <ref>, GeeTest's distinguishing feature is its various types of interaction. It might ask the user to slide a puzzle piece into place, click icons in a certain order, or swap items to line up identical items in a row. The reason they developed these challenges, they claim, is to collect the user's behavior data so that they can verify the user without accessing other client-side private information.
(Source: https://www.geetest.com/adaptive-captcha-demo)
§.§.§ FunCaptcha
FunCaptcha is developed by Arkose Labs. From the example on their website, we can tell that the environment information available to the CAPTCHA includes, but is not limited to, the operating system version, the browser version, the IP address, and device fingerprints. The challenge contents of FunCaptcha are presented in Fig. <ref>.
(Source: https://anti-captcha.com/apidoc/task-types/FunCaptchaTaskProxyless)
§.§.§ hCaptcha
hCaptcha (Fig. <ref>) is very similar to Google reCAPTCHA. It claims to offer all reCAPTCHA v2 and v3 features and advertises that websites can switch from reCAPTCHA to hCaptcha with minimal effort. Another feature is that the difficulty of hCaptcha, which determines whether an explicit challenge appears and how hard it is, can be set manually.
(Source: https://2captcha.com/demo/hcaptcha?difficulty=moderate)
In summary, we can see two different development paths for CAPTCHAs. The first path is to continue designing new tests using hard AI problems. For example, apart from recognizing objects, FunCaptcha's new challenges require logical thinking and spatial imagination and are thus more difficult for AI. Another path is to collect the user's behavior and environment data and analyze it to classify or score the user. Although this method deviates from the original definition and concept of CAPTCHA, Google reCAPTCHA and many other CAPTCHA providers are digging deeper into it, leaving the broken challenges as a way to collect behavior data rather than an effective defense line.
§ CAPTCHA SOLVERS
CAPTCHA-solving service retailers follow CAPTCHA providers closely to ensure that they support new CAPTCHAs in high demand. Thus, modern retailers differ vastly from older ones in attack framework and pricing method. In this section, we introduce some service retailers and describe how attackers utilize CAPTCHA-solving services to pass third-party CAPTCHAs.
§.§ Attack framework
The attack method differs for text-based CAPTCHAs and image-based CAPTCHAs, since only the recognized text is needed to solve a text-based CAPTCHA, while the action needed to solve an image-based CAPTCHA varies. Also, the information in a single image might not be sufficient for some CAPTCHAs; e.g., Google reCAPTCHA might replace a clicked image with a new one and ask the user to continue recognizing it. For text-based CAPTCHAs, the attacker only needs to send the image with the text, wait for the recognized text, and then put it into the input box. The attack framework for image-based CAPTCHAs is presented in Fig. <ref>. The user does not interact with the CAPTCHA. Instead, the user must extract from the page the information that the solving service requires to load the CAPTCHA. Then, the solving service pretends to be the user and obtains the response token. After the user receives it, the user only needs to complete the remaining communication steps that follow passing the challenge, which usually means filling the token into an element on the page and submitting the form, or dispatching an event with it.
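From the attacker's side, outsourcing a challenge usually reduces to submitting the page URL and public site key to the solving service and then polling for the response token. The endpoints, field names, and response schema below are hypothetical placeholders meant only to illustrate the flow of Fig. <ref>; they do not reproduce any particular retailer's API.

```python
import time
import requests

API_KEY = "customer-api-key"                              # placeholder
SUBMIT_URL = "https://solver.example.com/createTask"      # hypothetical endpoint
RESULT_URL = "https://solver.example.com/getTaskResult"   # hypothetical endpoint

def outsource_recaptcha(site_key, page_url, poll_every=5, timeout=180):
    """Submit a reCAPTCHA v2 task and poll until the service returns a response token."""
    task = requests.post(SUBMIT_URL, json={
        "clientKey": API_KEY,
        "task": {"type": "RecaptchaV2Task", "websiteKey": site_key, "websiteURL": page_url},
    }, timeout=30).json()
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.post(RESULT_URL, json={"clientKey": API_KEY, "taskId": task["taskId"]},
                               timeout=30).json()
        if result.get("status") == "ready":
            return result["solution"]["token"]    # token the attacker injects into the page
        time.sleep(poll_every)
    raise TimeoutError("solving service did not return a token in time")
```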
§.§ Service retailers
We selected five CAPTCHA-solving service retailers as our test targets: 2Captcha[https://2captcha.com], BestCaptchaSolver[https://bestcaptchasolver.com], AntiCaptcha[https://anti-captcha.com], DeathByCaptcha[https://www.deathbycaptcha.com], and CapSolver[https://www.capsolver.com/]. They have relatively high ranks in Google's search results, support more CAPTCHA providers than others, and reveal more statistical details on their websites. It is worth mentioning that 2Captcha, BestCaptchaSolver, AntiCaptcha, and DeathByCaptcha advertise 100% human recognition as a selling point (Fig. <ref>). 2Captcha and DeathByCaptcha even offer a specialized "100% accuracy" service, which works by sending the challenge to multiple workers and comparing the results. CapSolver, however, advertises that it solves CAPTCHAs using 100% AI and machine learning methods.
(Source: https://anti-captcha.com/)
Their prices for the selected four CAPTCHA providers are shown in Table <ref>. Since the two types of Google reCAPTCHA V2 differ in labor required, some retailers will set different prices for them. The reason for price fluctuation for reCAPTCHA V3 is the workers' scores given by reCAPTCHA V3: workers with higher scores are more expensive. Overall, more difficult CAPTCHAs or those that take more time to solve will cost more to be solved.
2Captcha is the only retailer that recruits workers on its website. According to their online statistics, the wages are $0.26 per thousand correctly recognized text-based CAPTCHAs and $1 per thousand correctly solved image-based CAPTCHAs.
§ EXPERIMENTS
To look into the details of CAPTCHA-solving services, we ordered them from the retailers mentioned above and used them to solve popular CAPTCHAs. Since third-party CAPTCHAs do not vary with the website they are deployed on, we selected one website for each CAPTCHA provider. hCaptcha, Google reCAPTCHA, and GeeTest were tested on 2Captcha's CAPTCHA demo pages[https://2captcha.com/demo]. FunCaptcha was tested on Outlook's registration page[https://outlook.live.com/owa/?nlp=1&signup=1]. We also tested a text-based CAPTCHA[https://www.mtcaptcha.com/#mtcaptcha-demo]. We tested each CAPTCHA provider-solver pair once every five minutes for one day and recorded the response time and result. Every attempt to test 2Captcha, DeathByCaptcha, and CapSolver against FunCaptcha returned "Unsolvable"; these results are therefore excluded from the analysis.
§.§ Response time
Response time reflects the solving speed of CAPTCHA-solving services. Fig. <ref> and Fig. <ref> show the mean response time and the response time variance. Although the mean response times of BestCaptchaSolver are abnormally high, we believe the reason is their high workload: their response time occasionally dropped to normal. From these experiments, we can conclude that the normal solving speed for image-based CAPTCHAs is 20 to 40 seconds. We also observed that CapSolver has the lowest mean response time and response time variance against most CAPTCHAs. This indicates that their "100% AI and machine learning solutions" advertisement is not false. AntiCaptcha, on the contrary, has a relatively high mean response time, response time variance, and success rate, supporting their claim of using 100% human solvers.
§.§ Success rate
The success rate is a key argument for judging the quality of solving services and whether a CAPTCHA is broken. Fig. <ref> shows the success rate of different CAPTCHA-solving services when solving different CAPTCHAs. Overall, the results are similar to the previous studies' results: the solving services' success rates are relatively high. The success rates can reach 90% in most cases. However, the situation is worse: the CAPTCHA providers are failing to stop automated solvers, let alone human solvers. The performance of CapSolver is worrying. Based on its mean response time, response time variance, and prices, we firmly believe it uses 100% automated solvers, yet they have achieved success rates of over 80% against most CAPTCHAs. This indicates that the latest hCaptcha, GeeTest, Google reCAPTCHA, and text-based CAPTCHA are already broken by automated solvers. Some success rates are lower, e.g., 2Captcha, DeathByCaptcha, and BestCaptchaSolver against text-based CAPTCHA. This does not indicate that text-based CAPTCHAs can effectively protect websites from CAPTCHA solvers since CapSolver's success rate against it is over 90%, and CapSolver is using automated software solvers. We believe this is because they applied similar solvers but failed to recognize this specific text-based CAPTCHA.
§.§ Difficulty
Google reCAPTCHA, hCaptcha, and FunCaptcha claim to provide CAPTCHAs of different difficulties. The difficulty setting affects both the probability that the user is required to solve an explicit challenge and the difficulty of that challenge. Since hCaptcha is the only provider that allows the user to set the difficulty manually, our comparative experiment is limited to hCaptcha. In Fig. <ref> and Fig. <ref>, we compare the mean response time and success rate of different CAPTCHA-solving services when attempting to solve hCaptcha challenges of different difficulties. The result, however, shows that the difficulty of hCaptcha has little impact on either the response time or the success rate of the solving services.
§.§ Workload
2Captcha, AntiCaptcha, and BestCaptchaSolver show their online statistics on their websites, mainly their real-time workload. When calculating the workloads, 2Captcha classifies the CAPTCHAs into normal CAPTCHAs and JS CAPTCHAs, BestCaptchaSolver only shows the normal CAPTCHA workload and the reCAPTCHA workload, and AntiCaptcha presents the number of busy and idle workers for each CAPTCHA it supports. Since normal CAPTCHA usually refers to text-based CAPTCHAs, and the third-party CAPTCHAs mentioned above are JS-based, we classify the workloads into normal/text-based CAPTCHA workload and reCAPTCHA/JS-based CAPTCHA workload. For data collected from AntiCaptcha, only the CAPTCHAs mentioned above are considered. We recorded these statistics once per minute for one day and plotted the mean workloads and AntiCaptcha worker numbers in two-hour intervals (Fig. <ref>). Although we did not observe overload in the experiments, there are obvious fluctuations and peaks in the workloads. The number of AntiCaptcha workers between 18:00 UTC and 00:00 UTC is significantly lower than in other periods, corresponding to the peak after 18:00 UTC. We believe this is because 18:00 UTC to 00:00 UTC is the common sleeping time in low-income countries in South-East Asia. We can also observe that the number of workers solving JS CAPTCHAs is far larger than the number solving normal CAPTCHAs, since most websites use third-party CAPTCHAs.
§ CONCLUSION
CAPTCHA, a test meant to protect the Internet from underground businesses, ironically gave birth to another underground business: CAPTCHA farm. Both CAPTCHAs and CAPTCHA solvers have evolved in the past decade. This study presents the details of the latest third-party CAPTCHA providers, corresponding solving services, and the adversarial experiments between them. Based on the experiment results, we draw the following conclusions:
CAPTCHA providers cannot protect websites from human CAPTCHA solvers. All selected popular third-party CAPTCHAs can be solved by at least two CAPTCHA-solving services with a high success rate.
CAPTCHA-solving services have relatively adequate human resources. No overload was observed in the online statistics reported by CAPTCHA-solving service retailers. If they were overloaded, we would expect the response time to be longer. Except for BestCaptchaSolver, the other retailers kept the response time reasonable. Few failures due to timeout were recorded.
CAPTCHA providers are failing to stop automated solvers. All selected popular third-party CAPTCHAs except FunCaptcha can be solved by CapSolver with a high success rate at a low price.
CAPTCHAs were initially designed to distinguish humans from computer programs. However, the profit behind solving CAPTCHAs has "transformed humans into computer programs." Although FunCaptcha cannot stop human solvers, its hard AI problems are challenging for automated solvers and usually cost more to solve, giving it an advantage over other CAPTCHAs. On the other hand, we believe that analyzing behavior rather than focusing on the CAPTCHA challenge itself is a good starting point for distinguishing benign users from malicious attackers, instead of distinguishing humans from computers. Still, CAPTCHA providers such as Google reCAPTCHA fail to stop the latest CAPTCHA solvers, and more research is needed.
|
http://arxiv.org/abs/2306.09103v1
|
20230615130037
|
One-loop Effective Action up to Dimension Eight: Integrating out Heavy Scalar(s)
|
[
"Upalaparna Banerjee",
"Joydeep Chakrabortty",
"Shakeel Ur Rahaman",
"Kaanapuli Ramkumar"
] |
hep-ph
|
[
"hep-ph",
"hep-th"
] |
We present the complete one-loop effective action up to dimension eight after integrating out degenerate scalars using the Heat-Kernel method. The result is provided without assuming any specific form of either UV or low energy theories, i.e., universal. In this paper, we consider the effects of only heavy scalar propagators in the loops. We also verify part of the results using the covariant diagram technique.
[email protected], [email protected], [email protected], [email protected]
Indian Institute of Technology Kanpur, Kalyanpur, Kanpur 208016, Uttar Pradesh, India
One-loop Effective Action up to Dimension Eight: Integrating out Heavy Scalar(s)
Upalaparna Banerjee, Joydeep Chakrabortty, Shakeel Ur Rahaman, Kaanapuli Ramkumar
July 31, 2023
=====================================================================================
§ INTRODUCTION
The precision era in particle physics began to dawn following the discovery of the Higgs boson. Since then, various experiments have indicated that the Standard Model (SM) and new physics are well separated in energy scale. In this situation, Effective Field Theory (EFT) <cit.> emerges as a practical and economical working framework. Instead of dealing with all of the numerous feasible options for beyond Standard Model (BSM) scenarios, we can put them on the same footing by treating the SM as a low-energy effective theory. The Standard Model effective field theory (SMEFT) <cit.> works as a bridge between the unknown territory of new physics and the SM.
EFT is primarily divided into two categories. The model-independent approach, also known as the bottom-up approach, involves augmenting the low-energy Lagrangian with higher mass dimensional operators while respecting the symmetry of the theory. Because of the redundancy caused by algebraic relations such as Fierz identities and integration by parts (IBP) satisfied by the operators, listing the higher dimensional operators becomes a challenging endeavour. The operators that are free from these redundancies are known to form a Green's set <cit.>. When considering the external states to be on-shell, the equation of motion or the freedom of field redefinition <cit.> must also be taken into account, imposing more constraints on the number of independent operators that form a basis. As we go to higher and higher dimensions, the number of operators proliferates, making it practically impossible to keep track of these redundancies. Several techniques and automated tools <cit.> have been developed in order to construct higher dimensional operators <cit.>. In this bottom-up approach, the deviation from the low-energy theory predictions is parametrised by the Wilson coefficients (WCs) of these effective operators, which are treated as free parameters <cit.>. If some anomaly in the data is well explained by the effective operators, we can find a UV completion using straightforward symmetry-based arguments, which aids in model selection <cit.>.
The alternative is the model-dependent, or top-down, approach. In this framework, we begin with an action S[Φ,ϕ] for a UV-complete model consisting of both heavy and light fields, which we denote as Φ and ϕ respectively. The corresponding low-energy effective action is obtained by expanding Φ around its classical minimum (Φ_c) and evaluating the path integral over the dynamical fluctuations (η), so the heavy field is written as Φ = Φ_c + η. The effective action is obtained by
e^i S_eff[ϕ] = ∫ [𝒟Φ]e^i S[Φ,ϕ]
= ∫ [𝒟η] exp[i (S[Φ_c,ϕ] + 1/2δ^2 S/δΦ^2|_Φ=Φ_c η^2 + 𝒪(η^3))],
S_eff ≈ S[Φ_c] + i/2Trlog(δ^2 S/δΦ^2|_Φ=Φ_c)
= S[Φ_c] + i/2Trlog(D^2+M^2+U),
where, D is the covariant derivative, M is the mass corresponding to the heavy field Φ and U is the field-dependent term, functional of the light fields ϕ.
The term S[Φ_c] in Eq. (<ref>) arises when the heavy fields are integrated out at the tree-level and can be simply obtained by substituting the heavy field with its classical solution in the action. The other term in the equation corresponds to the part of the effective action generated at the one-loop level. This procedure of integrating out the heavy field(s) is also known as the matching of the UV theory and the low energy effective theory at an energy scale M. In this approach, the WCs accompanying the effective operators are functions of the UV parameters. This framework provides the means to bring together a variety of UV models under the same umbrella and conduct a comparative analysis <cit.>.
As one can infer from Eq. (<ref>), the structure of the UV action is quite general and does not depend on the specific choice of the UV theory. So computing the effective operators in terms of D and U with proper factors of 1/M should provide a universal formula for integrating out <cit.>. This is indeed the case, and computation of the one-loop term in Eq. (<ref>) has resulted in a master formula known as the universal one-loop effective action (UOLEA) <cit.>. In addition, some automated tools have been devised to aid this process <cit.>. The UOLEA described in the aforementioned literature has been limited to computing operators up to mass dimension six (D6). Although it can in principle be calculated up to any mass dimension, the process becomes very complicated at higher dimensions.
There are multiple theoretical ways to integrate out the heavy fields to compute the low-energy effective Lagrangian: using functional methods <cit.>, using Feynman diagrams, and covariant diagram techniques <cit.>. Recently, in Ref. <cit.> the Heat-Kernel (HK) method is used to find covariant Feynman rules in the context of EFT. The HK method
<cit.> has been instrumental to compute effective actions in quantum field theory and quantum gravity <cit.>. Using its intrinsic properties and background field method, the higher-order corrections in different space-time dimensions are computed in <cit.>.
As we commence to make progressively precise measurements of low-energy observables, the dimension-six terms will no longer be enough to explain the data, and much emphasis has been directed to mass dimension eight (D8) operators to keep up with the experimental precision <cit.>. Additionally, at the cross-section level, the interference term between the dimension-eight and the renormalisable terms may become comparable to the dimension-six squared terms. Dimension eight also provides leading-order contributions to some physical processes, such as light-by-light scattering, which involves the neutral quartic gauge couplings. In the study of electroweak precision observables (EWPO), the U-parameter receives its first contribution at dimension eight <cit.>.
The primary goal of our paper is to extend the UOLEA to generate terms up to mass dimension eight. In this regard, we also wish to advocate the advantages of the HK method, whose application in this context is relatively new. We provide an extensive overview of the computation of the effective action and limit ourselves to the one-loop diagrams that involve either one heavy scalar or multiple degenerate heavy scalars. We also use the conventional covariant diagram approach to validate some of our findings. We want to emphasise that the HK prescription is not limited to heavy scalar loops and can be applied to mixed loops regardless of the spin or mass (heavy or light) of the particles involved. We are aware that, in order to complete the UOLEA, one also needs to take these contributions into consideration, so we have set this aside for future study. The Heat-Kernel method also has the capacity to extend beyond one loop, making it more applicable to the study of precision physics.
The paper is organised as follows. To start with we briefly review the relevant working principle of the Heat-Kernel method in Sec. <ref>. Then, in the following Sec. <ref>, we discuss the computation of HK coefficients with one detailed example. In Sec. <ref>, we define the connection between the one-loop effective action with the HK coefficients. In the following Sec. <ref>, we discuss all possible independent structures of operators having up to mass dimension eight, accompanied by suitable numerical factors, that appear after integrating out heavy scalars from one-loop diagrams. In the next Sec. <ref>, we employ a new method to compute the effective operators, starting from the same UV action, based on the covariant diagram technique. This allows us to independently cross-check some of our results computed using the HK method. In Sec. <ref>, we write down the universal one-loop effective Lagrangian after collecting all the contributions computed in the earlier section and adding them up suitably. We also discuss a toy model to highlight the failure of naive power counting and the importance of different terms in our effective action. Then, we conclude in Sec. <ref>.
§ HEAT KERNEL: A BRIEF REVIEW
The one-loop effective action in Eq. (<ref>) can be written as a spectral function of a second-order elliptic differential operator after Wick's rotation. This allows us to study the one-loop effective action using spectral analysis methods, one such example is the computation of the Heat-Kernel coefficients (HKC).
Any second-order elliptic partial differential operator can be written in terms of a covariant derivative and scalar functions as <cit.>,
Δ= D^2 +U+M^2,
where D_μ≡∂_μ-iA_μ is the covariant derivative with A_μ being the connection, U is a space dependent scalar function, and M is a space independent scalar function. Later, M will be mapped to the mass term of the heavy field(s) in the process of one-loop effective action computation using HKCs.
Let us consider λ_n as the eigenvalues of the operator Δ corresponding to eigenvectors ϕ_n. If Δ is a self-adjoint operator, the Heat-Kernel can be written as <cit.>
K(t,x,y,Δ)=⟨y|e^-tΔ|x⟩=∑_n e^-tλ_nϕ_n(x)ϕ^†_n(y).
This HK satisfies the heat equation <cit.>
(∂_t+Δ_x)K(t,x,y,Δ)=0,
along with the initial condition
K(0,x,y,Δ)=δ(x-y).
In passing we would like to mention that t>0 is a parameter, not to be confused with time, while defining the heat equation.
In the absence of any interaction, i.e., for the free operator, Δ_0=∂_μ∂^μ+M^2, the HK can be written as <cit.>
K_0(t,x,y)=(4π t)^-d/2 Exp[z^2/4t-t M^2],
where z_μ=(x-y)_μ and d is the dimension we are working in. From here on, we assume four-dimensional Euclidean space, i.e., d=4.
For any general Laplace type operator with a potential term within a gauge theory, see the operator in Eq. (<ref>), the HK can be written in terms of the free operator HK and an interaction part (H) as <cit.>
K(t,x,y,Δ)=K_0(t,x,y) H(t,x,y,Δ).
If the potential term is bounded from below, the operator Δ is positive, and H(t,x,y,Δ) converges as t→∞. This allows one to write the interaction part in terms of a power-law expansion in t as <cit.>
H(t,x,y,Δ)=∑_k (-t)^k/k !b_k(x,y),
where b_k are the Heat-Kernel coefficients [Also known as the Hadamard-Minakshisundaram-DeWitt-Seeley (HMDS) coefficients <cit.>.] (HKCs). These b_k are polynomials of the covariant derivative (D_μ) and the scalar operator U. This method places no restrictions on the dimensions of the covariant derivative and the scalar function in the elliptic operator and can be generalised to matrix-valued U and D_μ <cit.>.
§ COMPUTATION OF HEAT-KERNEL COEFFICIENTS: GENERAL METHOD
Substituting the ansatz of Eq. (<ref>) into the heat equation, Eq. (<ref>), the interaction term satisfies <cit.>
(∂_t+1/t z_μ D^μ +D^2 + U) H(t,x,y,Δ) =0.
Then, further invoking the power law expansion in t, see Eq. (<ref>), we obtain a recursion relation for the HKCs <cit.>
(k+z· D)b_k=k(U+D^2)b_k-1.
Here, b_ks for k<0 are considered to be 0. The combination of the free heat kernel part Eq. (<ref>), and the initial condition Eq. (<ref>) leads to b_0(x,x)=I, where I is the identity matrix <cit.>. This allows us to start the recursion relation with k ≥ 0.
In a later section, Sec. <ref>, we will see that the traces of the HKCs at the coincidence limit, i.e., [ b_k (x,x) ], are related to the operators of the one-loop effective action, where the scalar functions U and M^2 are mapped into functionals of the light fields and the mass term of the heavy scalar field. Thus, our primary focus is to compute the HKCs at the coincidence limit [b_k(x,x)]. To do so, we use the following relations <cit.>,
D_μ_1 D_μ_2...D_μ_m b_k|_z=0= 1/m+k{ k D_μ_1 D_μ_2...D_μ_m (U+D^2) b_k-1 - T_μ_1μ_2...μ_mb_k}|_z=0,
where, T_μ_1μ_2...μ_m = { D_μ_1 D_μ_2...D_μ_m (z· D)}|_z=0 -m D_μ_1 D_μ_2...D_μ_m,
⇒ T_μ_1μ_2...μ_m = D_μ_1 T_μ_2...μ_m + R_μ_2...μ_m,μ_1,
in the above equation, R_μ_2...μ_m,μ_1=[D_μ_2...D_μ_m,D_μ_1],
⇒ R_μ_2μ_3...μ_m,μ_1 = G_μ_2μ_1 D_μ_3...D_μ_m+ D_μ_2 D_μ_3...μ_m,μ_1,
where G_μν=[D_μ,D_ν] is the stress tensor.
While solving for the HKCs, the recursion relation Eq. (<ref>) leads to terms involving derivatives of the HKCs, e.g., D_μ D_ν...b_k(x,y)|_z=0[It is important to note that for the HKC computation at the coincident point, one must not set z=0 in Eq. (<ref>), as {D_μ (z· D) b_k(x,y)}|_z=0 = D_μ b_k(x,y)|_z=0≠ 0.]. To compute the one-loop effective Lagrangian systematically, we organise the operator classes of the form 𝒪(D^r U^s) and arrange them in a polynomial with different non-negative integer powers of the covariant derivative (D) and the field-dependent term (U).
Operators of the form 𝒪(D^r U^s) appear in the HKC b_n (x,y) where n=r/2+s. It is possible to extract the relevant part of the HKC b_n for the operators of a specific class using Eqs. (<ref>)-(<ref>) as
𝒪(D^rU^s) ≡ [b_r/2+s] U^s = ∑_k=0^n=r/2+sn! (n-1)!/k! (2n-k)!{k D^2(n-k){U b_k-1 U^s-1} - T_2(n-k) b_k U^s }_z=0.
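Since the counting rule n = r/2 + s fixes which operator classes must be extracted from which coefficient, a tiny enumeration can serve as a bookkeeping check. The sketch below only encodes this counting rule, nothing about the operators themselves.

```python
def classes_in_bn(n):
    """Operator classes O(D^r U^s) contained in the HKC b_n, using n = r/2 + s (r even, s >= 0)."""
    return [(r, n - r // 2) for r in range(0, 2 * n + 1, 2)]

for n in range(6):
    print(f"b_{n}:", ", ".join(f"O(D^{r} U^{s})" for r, s in classes_in_bn(n)))
# e.g. b_5 contains O(D^0 U^5), O(D^2 U^4), O(D^4 U^3), ..., O(D^10 U^0),
# which is why the class O(D^4 U^3) computed below sits in b_5.
```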
In the next sections, we will further use the following notations to express our results in a compact form
b_k U^s ≡terms of order U^s in the HKC b_k;
[b_k] = b_k(x,x); G_μρ;μ≡ J_ρ;
T_2(k)≡ T_μ_1μ_1...μ_kμ_k;
D_μ_1...μ_n≡ D_μ_1...D_μ_n;
D^2(k)≡ D_μ_1μ_1...μ_kμ_k;
D_μ_1...μ_n (U) ≡ U_;μ_n...μ_1,
D_μ_1...μ_n (G_ρσ) ≡ G_ρσ;μ_n...μ_1;
D_μ_1...μ_n (J_ρ) ≡ J_ρ;μ_n...μ_1.
Here, for the sake of the readers, we systematically chalk out the necessary instructions to calculate the operators of a particular class, 𝒪(D^r U^s) ⊂ HKC b_n (x,y).
* First, start with Eq. (<ref>) and note down all the equations for 𝒪(D^r U^s-i) in a recursive way till 𝒪(D^r U^0). Each of these equations contains operators of the form D^2(k) (U b_j), and T_2(k) with k ∈ [0, r/2].
* Then, expand the operator T_2(k), using Eqs. (<ref>)-(<ref>), recursively until one is left with only field tensor G_μν and the covariant derivatives (D).
* Similarly expand the operator D^2(k) (U b_j) such that the derivative acts either on U or b_j.
∗ Following these steps, one ends up with terms that contain either HKCs of lower order or derivatives of HKCs.
* It is advised to start with operators of 𝒪(D^r U^0). Then, employing the initial condition [b_0]=I, evaluate the necessary derivatives of b_0 using Eq. (<ref>) and substitute them in the relation obtained for 𝒪(D^r U^0).
* Next work out another related class of operator 𝒪(D^r U^i+1) and calculate the required derivatives of HKCs.
∗ ∗ Repeat these previously mentioned two steps until one reaches the desired class of operators 𝒪(D^r U^s). Then, use the trace properties to note down the HKCs on a minimal basis.
Here, we want to explicitly mention a few subtleties to be kept in mind while following the above calculational procedure:
* The order in which the coincidence limit is used is important especially when derivative operators are involved as
D_μ_1...μ_n b_k |_z=0≠ D_μ_1...μ_n([b_k]).
* Usage of the trace properties in intermediate steps of the recursive procedure is strictly forbidden.
* Since the HKCs (b_k) and their nth derivatives (D^n b_k) can be expressed in terms of the lower-order HKCs b_k-i and the lower-order derivatives D^(n-j)b_k and D^(n-j)b_k-i, one may avoid going back to b_0 every time while calculating an HKC or its derivatives; instead, it is advised to reuse results for the intermediate HKCs, if already available.
§.§ Computation of operators 𝒪(D^4 U^3): a detailed example
Based on the methodology discussed in the previous section, we perform, here, an explicit calculation of the operators class 𝒪(D^4 U^3) for the sake of detailed demonstration. To compute 𝒪(D^4 U^3), we obtain the following necessary relations using Eq. (<ref>).
[b_2] U^0 =𝒪(D^4 U^0) = -1/12 {T_(4)b_0}_z=0,
[b_3] U^1 =𝒪(D^4 U^1) = U [b_2] U^0 + {1/2 D^2 (U b_1 ) + 1/10 ( D^4 (U b_0 )- T_(4)b_1 )}_z=0U^1 ,
[b_4] U^2 = 𝒪(D^4 U^2) = U [b_3] U^1 + {3/5 D^2 (U b_2 ) + 1/10 ( 2 D^4 (U b_1 )- T_(4)b_2 )}_z=0 U^2 ,
[b_5] U^3 = 𝒪(D^4 U^3) = U [b_4] U^2 + {2/3 D^2 (U b_3 ) + 2/21 ( 3 D^4 (U b_2 )- T_(4)b_3)}_z=0 U^3 .
Here, we set
T = 0 , T_μ = 0 , T_(2)=T_μμ = 0,
that are quite evident from Eq. (<ref>). It is important to note that in the above equations, we have two different kinds of structures: (i) D^4 and D^2 acting on U b_k, and (ii) T_(4) acting on b_k. Thus, our first aim is to calculate the explicit form of these operators.
The actions of D^4 and D^2 operators on (U b_k) are defined from Eqs. (<ref>)-(<ref>), as follows
D^2(U b_k) = U_;μμ b_k + 2 U_;μ D_μ b_k + U D^2b_k,
D^4(U b_k) = U_;μμνν b_k + 2 U_;μνν D_μ b_k + 2 U_;ννμ D_μ b_k + 2 U_;μ D_μνν b_k
+ 2 U_;μ D_ννμ b_k + 4 U_;νμ D_μν b_k + 2 U_;μμ D^2 b_k + U D^4 b_k,
Action of T_(4) can be addressed in the following form derived from Eqs. (<ref>)-(<ref>)
T_(4)=T_μμνν = D_μ T_μνν + R_μνν,μ =D_μμ T_νν + D_μ R_νν,μ+ R_μνν,μ
=D_μ R_νν,μ+ D_μ R_νν,μ + G_μμD_νν =2 D_μν G_νμ+ 2 D_μ G_νμD_ν
= G_μν G_νμ - 2 G_μν;μ D_ν -2 G_μνD_μν = -2 (G_μν)^2 - 2 J_ν D_ν.
In this derivation, we use the anti-symmetric nature of G_μν and the following identity
X_;νμ = X_;μν + G_μνX - X G_μν,
where X is any arbitrary tensor. This leads to our finding
2 D_μν G_μν = (G_μν)^2.
Now, we are ready to demonstrate the explicit computation of operators 𝒪(D^4 U^3) belonging to the HKC b_5.
§.§.§ 𝒪(D^4 U^0)
The T_(4) operator contains derivatives acting on the HKCs. Hence, to calculate [b_2] U^0, we first need to calculate D_ν b_0 |_z=0. From Eq. (<ref>) we find
D_ν b_0|_z=0 = -T_ν b_0|_z=0 = 0.
This provides 𝒪(D^4U^0), directly from Eq. (<ref>), as
[b_2] U^0 =𝒪(D^4 U^0) = 1/6 {(G_μν)^2 b_0 + J_ν D_ν b_0}_z=0 = 1/6 (G_μν)^2.
§.§.§ 𝒪(D^4 U^1)
Next, to compute 𝒪 (D^4 U^1) we calculate the necessary derivatives of HKCs using Eqs. (<ref>)-(<ref>).
D^2 b_0|_z=0 = D_μμ b_0|_z=0 = -1/2 T_μμ b_0|_z=0 = 0.
D_μν b_0|_z=0 = -1/2 T_μν b_0|_z=0 = -1/2 {D_μ T_ν + R_ν,μ} b_0 |_z=0 = 1/2 G_μν.
D_μνν b_0|_z=0 = -1/3 T_μνν b_0|_z=0 = -1/3 {D_μ T_νν + R_νν,μ} b_0 |_z=0 = -1/3 {D_ν G_νμ + G_νμ D_ν} b_0 |_z=0,
= -1/3 {G_νμ;ν + 2 G_νμ D_ν} b_0 |_z=0 = -1/3 J_μ.
D_ννμ b_0|_z=0 = -1/3 T_ννμ b_0|_z=0 = -1/3 {D_ν T_νμ + R_νμ,ν} b_0 |_z=0 = -2/3 D_ν G_μν b_0 |_z=0,
= 2/3 {G_νμ;ν + G_νμ D_ν} b_0 |_z=0 = 2/3 J_μ.
D^4 b_0|_z=0 = D_μμνν b_0|_z=0 = -1/4 T_(4) b_0|_z=0 = 1/2 (G_μν)^2.
[b_1] = {U+D^2}b_0|_z=0 = U.
D_μ b_1|_z=0 = 1/2{D_μ(U+D^2)b_0-T_μ b_1}|_z=0 = 1/2{U_;μb_0 + U D_μ b_0 + D_μνν b_0}_z=0,
= 1/2 U_;μ - 1/6 J_μ.
D_μμ b_1|_z=0 U^0 = 1/3{D_μμ(U+D^2)b_0 - T_μμ b_1}|_z=0 U^0 = 1/3 D_μμννb_0|_z=0 U^0 = 1/6 (G_μν)^2.
Assembling all the contributions, computed here, in Eq. (<ref>) we find
[b_3] U^1 =𝒪(D^4 U^1) = 3/10 U (G_μν)^2 +1/5(G_μν)^2 U - 1/10 U_;μJ_μ + 1/10 J_μU_;μ
+ 1/10 U_;μμνν + 2/10 U_;νμG_μν.
§.§.§ 𝒪(D^4 U^2)
Following the similar path, we calculate the derivatives of HKCs required for the computation of operators 𝒪(D^4 U^2).
D_μμ b_1|_z=0 U^1 = 1/3{D^2(U+D^2)b_0 - T_μμ b_1}|_z=0 U^1 ,
= 1/3{U_;μμ b_0 + 2 U_;μ D_μ b_0 + U D^2 b_0}|_z=0 U^1 ,
= 1/3 U_;μμ.
D_μν b_1|_z=0 U^1 = 1/3{D_μν(U+D^2)b_0 - T_μν b_1}|_z=0 U^1 ,
= 1/3{U_;νμ b_0 + U_;μ D_ν b_0 + U_;ν D_μ b_0 + U D_μν b_0 - G_νμb_1}|_z=0 U^1 ,
= 1/3{ U_;νμ + 1/2U G_μν + G_μνU},
D_μνν b_1|_z=0 U^1 = 1/4{D_μνν(U+D^2)b_0 - T_μνν b_1}|_z=0 U^1 ,
= 1/4{D_μ(U_;νν b_0 + 2 U_;νD_ν b_0 + U D^2 b_0) - D_ν G_νμ b_1 - G_νμ D_ν b_1}|_z=0 U^1 ,
= 1/4{U_;ννμ b_0 + U_;νν D_μ b_0 + 2 U_;νμD_ν b_0 + 2 U_;νD_μν b_0 + U_;μ D^2 b_0
+ U D_μνν b_0 - G_νμ;ν b_1 - 2 G_νμD_ν b_1}|_z=0 U^1 ,
= 1/4{U_;ννμ + U_;νG_μν -1/3 U J_μ -J_μ U + G_μν U_;ν}.
D_ννμ b_1|_z=0 U^1 = 1/4{D_ννμ(U+D^2)b_0 - T_ννμ b_1}|_z=0 U^1 ,
= 1/4{D_νν(U_μb_0 + U D_μ b_0) - 2 D_ν G_μν b_1}|_z=0 U^1 ,
= 1/4{U_;μννb_0 + U_μ D^2 b_0 + 2 U_μν D_ν b_0 + U_νν D_μ b_0 + U D_ννμ b_0
+2 U_;ν D_νμ b_0 + 2 G_νμ;ν b_1 + 2 G_νμ D_ν b_1}|_z=0 U^1 ,
= 1/4{U_;μνν + 2/3U J_μ + U_;νG_νμ + 2 J_μ U + G_νμ U_;ν}.
D^4 b_1|_z=0 U^1 = 1/5{D_μμνν(U+D^2)b_0 - T_μμνν b_1}|_z=0 U^1 ,
= 1/5{U_;μμνν b_0 + 2 U_;μνν D_μ b_0 + 2 U_;ννμ D_μ b_0 + 2 U_;μ D_μνν b_0 + 2 U_;μ D_ννμ b_0
+ 4 U_;νμ D_μν b_0 + 2 U_;μμ D^2 b_0 + U D^4 b_0 + 2 J_μ D_μ b_1 + 2 (G_μν)^2 b_1}|_z=0 U^1 ,
= 1/5{U_;μμνν + 2/3 U_;μ J_μ + 2 U_;νμ G_μν + 1/2 U (G_μν)^2 + J_μ U_;μ + 2 (G_μν)^2 U}.
[b_2] U^1,U^2 = {U+D^2}b_1|_z=0 U^1,U^2 = U^2 + 1/3 U_;μμ.
D_μ b_2|_z=0 U^1,U^2 = 1/3{2 D_μ(U+D^2)b_1-T_μ b_2}_z=0 U^1,U^2 ,
= 2/3{U_;μb_1 + U D_μ b_1 + D_μνν b_1}_z=0 U^1,U^2 ,
= 2/3{U_;μ U + 1/2 U U_;μ -1/4 U J_μ + 1/4 U_;ννμ + 1/4U_;νG_μν - 1/4J_μ U + 1/4G_μν U_;ν}.
D_μμ b_2|_z=0 U^1 = 1/4{2 D_μμ(U+D^2)b_1-T_μμ b_2}_z=0 U^1 ,
= 1/2{U_;μμ b_1 + 2 U_;μ D_μ b_1 + U D^2 b_1 + D_μμνν b_1}_z=0 U^1 ,
= 1/2{ -1/5 U_;μ J_μ + 4/15U (G_μν)^2 + 1/5U_;μμνν + 2/5 U_;νμ G_μν+ 2/5 (G_μν)^2 U}.
Again, we collect all these contributions and, with the help of Eq. (<ref>), obtain the following result.
[b_4] U^2 =𝒪(D^4 U^2) = 1/5 U_;μμν U_;ν +3/10 U_;μ U_;ννμ + 1/5 U_;μνν U_;μ + 4/15 (U_;μν)^2 +1/3 (U_;μμ)^2
+1/5 U_;μμνν U +1/10 U_;μ U_;μνν +1/5 U U_;μμνν +2/15 J_μ U_;μ U
-2/15 U U_;μJ_μ +1/5U J_μ U_;μ - 1/6 U_;μU J_μ -1/10 U_;μJ_μ U
+ 1/15 J_μ U U_;μ +1/5 U^2 (G_μν)^2 +4/15 U(G_μν)^2U+2/15 (U G_μν)^2
+1/15 G_μν U^2 G_μν +2/15 (G_μν U)^2 + 1/5 U_;μ U_;ν G_μν + 1/5 U_;μ G_μν U_;ν
+1/5 (G_μν)^2 U^2.
§.§.§ 𝒪(D^4 U^3)
We repeat the same task one more time, focusing on the computation of the relevant derivatives of the HKCs needed to calculate the operators 𝒪(D^4 U^3).
D_μμ b_2|_z=0 U^2 = 1/4{2 D^2(U+D^2)b_1 - T_μμ b_2}|_z=0 U^2 ,
= 1/2{U_;μμ b_1 + 2 U_;μ D_μ b_1 + U D^2 b_1}|_z=0 U^2 ,
= 1/2{U_;μμ U + U_;μ U_;μ + 1/3U U_;μμ}.
D_μν b_2|_z=0 U^2 = 1/4{2 D_μν(U+D^2)b_1 - T_μν b_2}|_z=0 U^2 ,
= 1/4{2 U_;νμ b_1 + 2 U_;μ D_ν b_1 + 2 U_;ν D_μ b_1 +2 U D_μν b_1 - G_νμb_2}|_z=0 U^2 ,
= 1/4{2 U_;νμ U + U_;μ U_;ν + U_;ν U_;μ + 2/3 U U_;νμ + 1/3 U^2 G_μν+ 2/3 U G_μνU
+ G_μνU^2}.
D_μνν b_2|_z=0 U^2 = 1/5{2 D_μνν(U+D^2)b_1 - T_μνν b_2}|_z=0 U^2 ,
= 2/5{D_μ(U_;νν b_1 + 2 U_;νD_ν b_1 + U D^2 b_1) - 1/2D_ν G_νμ b_2
- 1/2 G_νμ D_ν b_2}|_z=0 U^2 ,
= 2/5{U_;ννμ b_1 + U_;νν D_μ b_1 + 2 U_;νμD_ν b_1 + 2 U_;νD_μν b_1 + U_;μ D^2 b_1
+ U D_μνν b_1 - 1/2 G_νμ;ν b_2 - G_νμD_ν b_2}|_z=0 U^2 ,
= 2/5{U_;ννμ U + 1/2 U_;νν U_;μ + U_;νμU_;ν + 2/3 U_;νU_;νμ + 1/3 U_;νU G_μν
+ 2/3 U_;νG_μνU + 1/3U_;μU_;νν + 1/4U U_;ννμ + 1/4U U_;νG_μν - 1/12U^2 J_μ
- 1/4U J_μU + 1/4U G_μνU_;ν -1/2 J_μ U^2 + 2/3 G_μνU_;νU + 1/3 G_μνU U_;ν}.
D_ννμ b_2|_z=0 U^2 = 1/5{2 D_ννμ(U+D^2)b_1 - T_ννμ b_2}|_z=0 U^2 ,
= 2/5{D_νν(U_μb_1 + U D_μ b_1) - D_ν G_μν b_2}|_z=0 U^2 ,
= 2/5{U_;μννb_1 + U_μ D^2 b_1 + 2 U_μν D_ν b_1 + U_νν D_μ b_1
+ U D_ννμ b_1 +2 U_;ν D_νμ b_1 + G_νμ;ν b_2 + G_νμ D_ν b_2}|_z=0 U^2 ,
= 2/5{U_;μνν U + 1/3U_μ U_;νν + U_μν U_;ν + 1/2U_νν U_;μ+ 1/4U U_;μνν + J_μ U^2
+1/4 U U_;νG_νμ + 1/6 U^2 J_μ + 1/2U J_μ U +1/4 U G_νμU_;ν +2/3 U_;ν U_;μν
+1/3 U_;νU G_νμ +2/3 U_;νG_νμU + 2/3 G_νμ U_;νU + 1/3 G_νμU U_;ν}.
D^4 b_2|_z=0 U^2 = 1/6{2 D_μμνν(U+D^2)b_1 - T_μμνν b_2}|_z=0 U^2 ,
= 1/3{U_;μμνν b_1 + 2 U_;μνν D_μ b_1 + 2 U_;ννμ D_μ b_1 + 2 U_;μ D_μνν b_1 + 2 U_;μ D_ννμ b_1
+ 4 U_;νμ D_μν b_1 + 2 U_;μμ D^2 b_1 + U D^4 b_1 + J_μ D_μ b_2 + (G_μν)^2 b_2}|_z=0 U^2 ,
= 1/3{ U_;μμνν U + 1/5 U U_;μμνν + 2/3 U_;μμU_;νν + U_;ννμ U_;μ + U_;μνν U_;μ
+ 2/5 U U_;μJ_μ -4/15U (G_μν)^2 U + (G_μν)^2 U^2 +1/5U J_μ U_;μ - 2/15(U G_μν)^2
- 1/10U^2 (G_μν)^2 + 1/2 U_;μ U_;ννμ +1/2 U_;μJ_μ U +1/6 U_;μU J_μ + 1/2U_;μU_;μνν
+4/3(U_;μν)^2 +1/3G_μνU^2 G_μν + 2/3(G_;μνU)^2 +2/3 J_μ U_;μ U +1/3 J_μ U U_;μ}.
[b_3] U^2,U^3 = {U+D^2}b_2|_z=0 U^2,U^3 ,
= U^3 + 1/2 U U_;μμ + 1/2 U_;μμ U + 1/2 (U_;μ)^2.
D_μ b_3|_z=0 U^2,U^3 = 1/4{3 D_μ(U+D^2)b_2-T_μ b_3}_z=0 U^2,U^3 ,
= 3/4{U_;μb_2 + U D_μ b_2 + D_μνν b_2}_z=0 U^2,U^3 ,
= 3/4{U_;μ U^2 + 2/3 U U_;μU + 1/3 U^2U_;μ -1/5 U^2J_μ - 4/15 UJ_μ U
+ 4/15 U U_;ννμ + 4/15U U_;νG_μν + 4/15 U G_μν U_;ν + 7/15 U_;μ U_;νν
+ 2/5 U_;ννμ U + 1/5 U_;νν U_;μ + 2/5 U_;νμU_;ν + 4/15 U_;νU_;νμ -1/5 J_μ U^2
+ 2/15 U_;νU G_μν + 4/15 U_;νG_μνU + 4/15 G_μνU_;νU + 2/15 G_μνU U_;ν}.
D_μμ b_3|_z=0 U^2 = 1/5{3 D_μμ(U+D^2)b_2 -T_μμ b_3}_z=0 U^2 ,
= 3/5{U_;μμ b_2 + 2 U_;μ D_μ b_2 + U D^2 b_2 + D_μμνν b_2}_z=0 U^2 ,
= 1/5{1/10 U U_;μ J_μ + 1/2U U_;μμνν + 1/6 (U G_μν)^2 +1/3G_μνU^2 G_μν
+ 1/2U J_μ U_;μ + 3/2U_;μU_;ννμ + U_;μG_μν U_;ν + U_;μ U_;ν G_μν +4/3(U_;μν)^2
+ U_;μμνν U + 2/3 U_;μμU_;νν + U_;ννμ U_;μ + U_;μνν U_;μ + 1/3 U(G_μν)^2 U
+ (G_μν)^2 U^2 -1/2 U_;μJ_μ U -5/6 U_;μU J_μ + 1/2U_;μU_;μνν +1/3 J_μ U U_μ
+ (U_;μμ)^2 + 2/3(G_;μνU)^2 +2/3 J_μ U_μ U }.
Finally, we combine all the above-computed expressions and put them in Eq. (<ref>) to find the operators of the class 𝒪(D^4 U^3). Note that, at this stage, not all the evaluated operator structures are independent. We employ the trace properties and a few identities[Under trace, as cyclic permutations are equivalent, a commutator is zero. Total derivatives can be written as a commutator, i.e., (D_μ U)=[D_μ,U] and hence are zero under a trace. Along with the identity given in Eq. (<ref>) we further use the Bianchi identity, G_ρσ;μ+G_σμ;ρ+G_μρ;σ = 0, to simplify terms.] that simplify the HKCs and allow us to write them in terms of independent operators. Thus, we find the independent operators of the form 𝒪(D^4 U^3) as
[b_5] U^3 = 𝒪(D^4 U^3) = U^3 (G_μν)^2 + 2/3 U^2 G_μν U G_μν + 1/3 U^2 J_μ U_;μ + 1/3 U G_μνU_;μU_;ν
+ 1/3 U U_;μU_;ν G_μν - 1/3 U^2 U_;μJ_μ + U U_;μμU_;νν + 2/3 U_;μμ (U_;ν)^2.
§.§ Relevant Coefficients at Coincidence point
The necessary and relevant HKCs ([b_k]) computed at the coincidence point can be written in a compact form as
[b_0]= I,
[b_1]= U,
[b_2]=[U^2+1/6 (G_μν)^2],
[b_3]=[U^3-1/2 (U_;μ)^2+1/2U G_μνG_μν -1/10(J_ν)^2+1/15 G_μν G_νρ G_ρμ],
[b_4]=[U^4+ U^2 U_;μμ + 4/5U^2 (G_μν)^2 + 1/5 (U G_μν)^2 + 1/5 (U_;μμ)^2 -2/5 U U_;ν J_ν
- 2/5 U(J_μ)^2 +2/15 U_;μμ (G_ρσ)^2 +4/15 U G_μνG_νρ G_ρμ +8/15 U_;νμ G_ρμ G_ρν
+1/35(J_μ;ν)^2 + 16/105G_μνJ_μJ_ν+ 1/420 (G_μνG_ρσ)^2 +17/210(G_μν)^2(G_ρσ)^2
+ 1/105 G_μνG_νρG_ρσG_σμ +2/35(G_μνG_νρ)^2+16/105 J_ν;μ G_νσG_σμ],
[b_5] U^5,U^4,U^3,U^2 =[ U^5 + 2 U^3 U_;μμ + U^2(U_;μ)^2 + U^3 (G_μν)^2 + 2/3 U^2 G_μν U G_μν
- 1/3 U^2 U_;μJ_μ + 1/3 U^2 J_μ U_;μ + 1/3 U G_μνU_;μU_;ν + 1/3 U U_;μU_;ν G_μν
+ U U_;μμU_;νν + 2/3 U_;μμ (U_;ν)^2 + 𝒪(D^6 U^2)],
[b_6] U^6,U^5,U^4 =[U^6 + 3 U^4U_;μμ+2 U^3(U_;μ)^2 + 12/7U^2 U_;νμU_;μν + 17/14 (U_;μU_;ν)^2
+ 9/7 U U_;νμU U_;μν +26/7 U_;νμ U_;μU_;νU + 18/7 U_;νμ U_;μU U_;ν
+26/7 U_;νμ U U_;μU_;ν + 9/7 (U_;μ)^2(U_;ν)^2 + 18/7 G_μνU U_;μU_;νU
+ 5/7 U^4(G_μν)^2+ 8/7 U^3G_μνU G_μν + 18/7 G_μνU_;μU^2U_;ν
+ 9/14 (U^2G_μν)^2 + 26/7 G_μνU_;μU U_;νU + 8/7 G_μνU U_;μU U_;ν
+ 24/7 G_μνU_;μU_;νU^2 - 2/7 G_μνU^2U_;μU_;ν],
[b_7] U^7,U^6 =[U^7 - 5 U^4 (U_;μ)^2-8 U^3U_;μU U_;μ -9/2 (U^2 U_;μ)^2 ],
[b_8] U^8 =[U^8].
§ ONE-LOOP EFFECTIVE LAGRANGIAN AND HEAT-KERNEL COEFFICIENTS
The one-loop effective Lagrangian obtained from Eq. (<ref>) in the Euclidean space is given by
ℒ_= c_s log (-P^2+U+M^2),
where P_μ=iD^E_μ, with D_μ^E being the derivative operator in the Euclidean signature[From now onwards we will drop the superscript E and will use D uniformly.], and c_s=+1/2 and +1 for ϕ being a real scalar and complex scalar background respectively. In the Minkowski signature, the d'Alembertian operator is a hyperbolic second-order partial differential operator for which the Heat-Kernel expansion (HKE) is not convergent. Hence, by performing Wick's rotation to Euclidean space, the second-order partial differential operator is transformed into an elliptical one for which a convergent HKE is well-defined.
The following identity, valid up to a λ-independent (divergent) constant,
lnλ = -∫_0^∞dt/t e^-tλ,
helps to recast the one-loop effective action in terms of the HK as
ℒ_=c_s ∫_0^∞dt/t K(t,x,x,Δ).
Employing the ansatz, noted in Eq. (<ref>), the ℒ_ can be expressed in terms of the coincident limit HKCs ([b_k]) as
ℒ_ =c_s ∫_0^∞dt/t (4π t)^-d/2 e^-t M^2∑_k (-t)^k/k ! [b_k]
=c_s/(4π)^d/2∑_k (-1)^k/k!∫_0^∞ dt t^k-1-d/2 e^-t M^2 [b_k].
After a suitable change of variable, t M^2 →τ^2, the above integral reads
ℒ_=c_s/(4π)^d/2∑_k M^d-2k(-1)^k/k! 2∫_0^∞ dτ τ^2(k-d/2)-1 e^-τ^2 [b_k].
Note that the integral over τ mimics the integral representation of gamma function Γ[z]
Γ[z]=2∫_0^∞ dτ τ^2z-1 e^-τ^2,
which allows us to write down a compact form of the one-loop effective Lagrangian as
ℒ_=c_s/(4π)^d/2∑_k=0^∞ M^d-2k(-1)^k/k! Γ[k-d/2] [b_k].
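As a quick cross-check of this master formula (a minimal sympy sketch, not part of the derivation; the auxiliary symbol a = k - d/2 is introduced here for illustration, while M is that of the text), the proper-time integral can be evaluated symbolically:

```python
import sympy as sp

t, M, a = sp.symbols('t M a', positive=True)

# Proper-time integral behind the master formula:
#   int_0^oo dt t^(a-1) exp(-t M^2) = Gamma(a) * M^(-2a),  with a = k - d/2.
generic = sp.integrate(t**(a - 1)*sp.exp(-t*M**2), (t, 0, sp.oo), conds='none')
print(sp.simplify(generic))                     # expected: M**(-2*a)*gamma(a)

# For k = 5, d = 4 this gives Gamma(3) = 2, which combines with (-1)^5/5! and
# M^(d-2k) to produce the 1/(60 M^6) prefactor appearing in the final result.
print(generic.subs(a, 5 - sp.Rational(4, 2)))   # expected: 2/M**6
```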
§.§ Effective Contributions to Renormalisable Lagrangian
It is evident from Eq. (<ref>) that for k≤ d/2 the Gamma function has simple poles. Thus, for such cases, we need to renormalise the theory, employing dimensional regularisation and the MS renormalisation scheme.
We are working with 4-dim Euclidean space. Assuming d=4-ϵ, we find
Γ[k-d/2]=(ϵ/2-3+k)!/(ϵ/2-1)! Γ[ϵ/2].
In case of 4-dim, d→4, i.e., ϵ→ 0, the Gamma function has simple poles as Γ[ϵ/2]=2/ϵ-γ_E+𝒪(ϵ). In that scenario, the divergent part of the one-loop effective Lagrangian can be written as
ℒ^(k)_div=c_s/(4π)^2-ϵ/2 M^d-2k(-1)^k/k! (ϵ/2-3+k)!/(ϵ/2-1)! Γ[ϵ/2] [b_k],
with k=0,1,2.
These three cases are explicitly demonstrated below.
§.§.§ k=0
ℒ^(0)_div =c_s/(4π)^2-ϵ/2 M^d (ϵ/2-3)!/(ϵ/2-1)! Γ[ϵ/2] [b_0]
=c_s/(4π)^2-ϵ/2 M^4-ϵ 1/(ϵ/2-1)(ϵ/2-2) Γ[ϵ/2] [b_0]
=c_s(M^2/4π)^2(4π/M^2)^ϵ/2 1/(ϵ/2-1)(ϵ/2-2) Γ[ϵ/2] [b_0].
Taylor expansion in limit ϵ→ 0 leads to
ℒ^(0)_div=c_s/(4π)^2 M^4 1/2 (2/ϵ-γ_E-ln[M^2/4π]+3/2) [b_0].
Employing MS scheme, we can write the finite part as
ℒ^(0)_=c_s/(4π)^2 M^4 [-1/2 (ln[M^2/μ^2]-3/2) [b_0]].
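The ε-expansion above can also be checked symbolically. The following sympy sketch (illustrative only) strips off the overall c_s M^4/(4π)^2 [b_0] factor and isolates the ε^0 piece, which equals (1/2)(3/2 - γ_E - ln[M^2/4π]); absorbing -γ_E + ln 4π into μ^2 then reproduces the quoted finite part.

```python
import sympy as sp

eps, M = sp.symbols('epsilon M', positive=True)

# k = 0 term before renormalisation, without the overall c_s M^4/(4 pi)^2 [b_0]:
expr = (4*sp.pi/M**2)**(eps/2)*sp.gamma(eps/2)/((eps/2 - 1)*(eps/2 - 2))

finite = sp.simplify(sp.series(expr, eps, 0, 1).removeO() - 1/eps)   # drop the 1/eps pole
target = sp.Rational(1, 2)*(sp.Rational(3, 2) - sp.EulerGamma - sp.log(M**2/(4*sp.pi)))
print(sp.simplify(sp.expand_log(finite - target, force=True)))       # expected: 0
```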
§.§.§ k=1
ℒ^(1)_div =-c_s/(4π)^2-ϵ/2 M^2-ϵ (ϵ/2-2)!/(ϵ/2-1)! Γ[ϵ/2] [b_1]
=-c_s (M/4π)^2(4π/M^2)^ϵ/2 1/(ϵ/2-1) Γ[ϵ/2] [b_1].
Taylor expansion in limit ϵ→ 0 leads to
ℒ^(1)_div=-c_s/(4π)^2 M^2 (-1) (2/ϵ-γ_E-ln[M^2/4π]+1) [b_1].
Employing MS scheme, we can write the finite part as
ℒ^(1)_=c_s/(4π)^2 M^2 [-(ln[M^2/μ^2]-1) [b_1]].
§.§.§ k=2
ℒ^(2)_div =c_s/(4π)^2-ϵ/2 M^-ϵ1/2 (ϵ/2-1)!/(ϵ/2-1)! Γ[ϵ/2] [b_2] =c_s/(4π)^2(4π/M^2)^ϵ/21/2 Γ[ϵ/2] [b_2].
Taylor expansion in limit ϵ→ 0 leads to
ℒ^(2)_div=c_s/(4π)^2 M^0 1/2(2/ϵ-γ_E-ln[M^2/4π]) [b_2].
Employing MS scheme, we can write the finite part as
ℒ^(2)_=c_s/(4π)^2 M^0 1/2[-(ln[M^2/μ^2]) [b_2]].
§.§ Renormalised one-loop effective Lagrangian
After renormalising the effective Lagrangian for the three cases k=0,1,2, we collect all the finite parts, and the one-loop effective contributions to the renormalisable part of the Lagrangian, containing operators up to mass dimension four, can be written as
ℒ_^ren=c_s/(4π)^2{ M^4 [-1/2 (ln[M^2/μ^2]-3/2) [b_0]] + M^2 [-(ln[M^2/μ^2]-1) [b_1]]
+ M^0 [-1/2(ln[M^2/μ^2]) [b_2]] }.
§ PURE HEAVY SCALAR LOOP UOLEA UP TO D8
In order to facilitate the readability of our result, we organise the effective operators according to the number of appearances of their constituents, i.e., covariant derivatives (P) and the light-field-dependent functional (U). If an operator is composed of i covariant derivatives and j factors of U, these quantities are represented by the integer superscripts (i,j). Lorentz invariance restricts i to even integers. The effective Lagrangian up to mass dimension eight can be written as
Ł_ = c_s/(4π)^2∑_i,jŁ^(i,j)_ = c_s/(4π)^2∑_i,j,k𝒞_k^(i,j) O_k(P^i U^j).
Here, 𝒞_k^(i,j) is the coefficient associated with the operator O_k(P^i U^j). The index k runs over the operators in each class, with i,j ∈ [0,8]. To keep in agreement with the literature, we use the notation P_μ = i D_μ. With the help of the HKCs computed at the coincidence point, see Subsec. <ref>, and using Eq. (<ref>), we now catalogue the operators associated with the one-loop effective Lagrangian.
§.§ O(P^8 U^j)
Ł^(8,0)_ = 1/M^41/24[ 17/210[P_μ,P_ν] [P_μ,P_ν] [P_ρ,P_σ][P_ρ,P_σ]
+ 2/35[P_μ,P_ρ][P_ρ,P_ν][P_μ,P_σ][P_σ,P_ν]
+ 1/105 [P_μ,P_ν][P_ν,P_ρ] [P_ρ,P_σ][P_σ,P_μ]
+ 1/420 [P_μ,P_ν][P_ρ,P_σ][P_μ,P_ν][P_ρ,P_σ]
+ 1/35[P_μ, [P_ρ,[P_ρ,P_ν]]] [P_μ, [P_σ,[P_σ,P_ν]]]
+ 16/105 [P_μ, [P_ρ,[P_ρ,P_ν]]] [P_ν,P_σ][P_σ,P_μ]
+ 16/105[P_μ,P_ν][P_σ,[P_σ,P_μ]][P_ρ,[P_ρ,P_ν]] ],
𝒞^(8,0)_1 = 1/M^417/5040, 𝒞^(8,0)_2 = 1/M^41/420, 𝒞^(8,0)_3 = 1/M^41/2520, 𝒞^(8,0)_4 = 1/M^41/10080,
𝒞^(8,0)_5 = 1/M^41/840, 𝒞^(8,0)_6 = 1/M^42/315, 𝒞^(8,0)_7 = 1/M^42/315.
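For completeness, the listed 𝒞^(8,0)_k are just the bracket coefficients above divided by the overall 24; a quick check (illustrative sketch only):

```python
from fractions import Fraction as F

# Bracket coefficients of L^(8,0) (overall 1/(24 M^4) stripped) vs the listed C^(8,0)_k
bracket = [F(17, 210), F(2, 35), F(1, 105), F(1, 420), F(1, 35), F(16, 105), F(16, 105)]
listed  = [F(17, 5040), F(1, 420), F(1, 2520), F(1, 10080), F(1, 840), F(2, 315), F(2, 315)]
print(all(b/24 == c for b, c in zip(bracket, listed)))   # expected: True
```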
§.§ O(P^6 U^j)
Ł^(6,0)_ = 1/M^21/6[ 1/15[P_μ,P_ν][P_ν,P_ρ] [P_ρ,P_μ] -1/10 [P_μ, [P_μ, P_ν]] [P_ρ,[P_ρ,P_ν]] ],
𝒞^(6,0)_1 = 1/M^21/90, 𝒞^(6,0)_2 = - 1/M^21/60.
Ł^(6,1)_ = 1/M^41/24[-4/15 U [P_μ,P_ν][P_ν,P_ρ] [P_ρ,P_μ] + 2/5 U[P_ρ,[P_ρ,P_μ]][P_σ,[P_σ,P_μ]]
-2/15 [P_μ, [P_μ, U]] [P_ρ,P_σ][P_ρ,P_σ] +8/15 [P_μ, [P_ν, U]] [P_ρ,P_μ] [P_ν,P_ρ] ],
𝒞^(6,1)_1 = -1/M^41/90,
𝒞^(6,1)_2 = +1/M^41/60,
𝒞^(6,1)_3 = -1/M^41/180,
𝒞^(6,1)_4 = +1/M^41/45.
Ł^(6,2)_ = [ 𝒞^(6,2)_1 U^2 [P_μ,P_ν] [P_ν,P_α][P_α,P_μ] + 𝒞^(6,2)_2 U [P_μ,P_ν] U [P_ν,P_α][P_α,P_μ]
+ 𝒞^(6,2)_3 U^2 [P_μ,[P_μ,P_ν]] [P_α,[P_α,P_ν]] + 𝒞^(6,2)_4 U [P_μ,[P_μ,P_ν]] U [P_α,[P_α,P_ν]]
+ 𝒞^(6,2)_5 U [P_μ,[P_μ,U]] [P_ν,P_α] [P_ν,P_α] + 𝒞^(6,2)_6 [P_μ,[P_μ,U]] U [P_ν,P_α] [P_ν,P_α]
+ 𝒞^(6,2)_7 U [P_μ,U] [P_ν[P_ν,P_α] [P_μ,P_α] + 𝒞^(6,2)_8 U [P_μ,P_α] [P_ν[P_ν,P_α] [P_μ,U]
+ 𝒞^(6,2)_9 U [P_μ,U] [P_μ,P_ν] [P_α[P_α,P_ν] + 𝒞^(6,2)_10 U [P_α[P_α,P_ν] [P_μ,P_ν] [P_μ,U]
+ 𝒞^(6,2)_11 U [P_μ,P_ν] [P_α,[P_α,U] [P_μ,P_ν] + 𝒞^(6,2)_12 [P_μ,U] [P_μ,U] [P_ν,P_α] [P_ν,P_α]
+ 𝒞^(6,2)_13 [P_μ,U] [P_ν,[P_ν,P_α]] U [P_μ,P_α] + 𝒞^(6,2)_14 [P_μ,P_α] U [P_ν,[P_ν,P_α]] [P_μ,U]
+ 𝒞^(6,2)_15 [P_μ,U] [P_ν,U] [P_μ,P_α] [P_α,P_ν] + 𝒞^(6,2)_16 [P_μ,U] [P_ν,U] [P_ν,P_α] [P_α,P_μ]
+ 𝒞^(6,2)_17 [P_μ,U] [P_μ,P_ν] [P_α,U] [P_α,P_ν] + 𝒞^(6,2)_18 [P_μ,U] [P_α,P_ν] [P_α,U] [P_μ,P_ν]
+ 𝒞^(6,2)_19 [P_μ,U] [P_ν,P_α] [P_μ,U] [P_ν,P_α]
+ 𝒞^(6,2)_20 [P_μ,[P_μ,U]] [P_ν,U] [P_α,[P_α,P_ν]]
+ 𝒞^(6,2)_21 [P_μ,[P_μ,U]] [P_α,[P_α,P_ν]] [P_ν,U] + 𝒞^(6,2)_22 [P_μ,U] [P_ν,U] [P_μ,[P_α,[P_α,P_ν]]]
+ 𝒞^(6,2)_23 [P_μ,U] [P_μ,[P_α,[P_α,P_ν]]] [P_ν,U]
+ 𝒞^(6,2)_24 [P_μ,[P_ν,[P_ν,U]]] [P_μ,[P_α,[P_α,U]]]
+ 𝒞^(6,2)_25[P_α,P_μ][P_μ,P_β]U [P_α,[P_β,U]] + 𝒞^(6,2)_26[P_α,P_μ][P_μ,P_β][P_α,[P_β,U]] U ],
§.§ O(P^4 U^j)
Ł^(4,0)_ = - M^0 1/12ln[M^2/μ^2] [ [P_μ,P_ν][P_μ,P_ν]], ⇒ 𝒞^(4,0)_1 =- M^0 1/12ln[M^2/μ^2].
Ł^(4,1)_ = - 1/M^21/12[U [P_μ,P_ν][P_μ,P_ν]], ⇒ 𝒞^(4,1)_1 =- 1/M^21/12.
Ł^(4,2)_ = 1/M^41/24[ 4/5U^2 [P_μ,P_ν][P_μ,P_ν] + 1/5 U [P_μ,P_ν] U [P_μ,P_ν]
+ 1/5 [P_μ,[P_μ,U]] [P_ν,[P_ν,U]] -2/5 U [P_ν,U] [P_ρ,[P_ρ,P_ν]] ],
𝒞^(4,2)_1 = 1/M^41/30,
𝒞^(4,2)_2 = 1/M^41/120,
𝒞^(4,2)_3 = 1/M^41/120,
𝒞^(4,2)_4 = -1/M^41/60.
Ł^(4,3)_ = 1/M^61/60[ - U^3 [P_μ,P_ν][P_μ,P_ν] - 2/3 U^2 [P_μ,P_ν] U [P_μ,P_ν]
+ 1/3 U^2 [P_μ,U][P_ρ,[P_ρ,P_μ]] - 1/3 U^2 [P_ρ,[P_ρ,P_ν]] [P_ν,U]
- 1/3 U [P_μ,P_ν][P_μ,U][P_ν,U] - 1/3 U [P_μ,U][P_ν,U][P_μ,P_ν]
- U [P_μ,[P_μ,U]] [P_ν,[P_ν,U]] - 2/3 [P_μ,[P_μ,U]] [P_ν,U][P_ν,U] ],
𝒞^(4,3)_1 = -1/M^61/60, 𝒞^(4,3)_2 = -1/M^61/90, 𝒞^(4,3)_3 = 1/M^61/180, 𝒞^(4,3)_4 = -1/M^61/180,
𝒞^(4,3)_5 = -1/M^61/180, 𝒞^(4,3)_6 = -1/M^61/180, 𝒞^(4,3)_7 = -1/M^61/60, 𝒞^(4,3)_8 = -1/M^61/90.
Ł^(4,4)_ = 1/M^81/120 [ 12/7U^2[P_μ,[P_ν,U]][P_ν,[P_μ,U]] + 9/7 U[P_μ,[P_ν,U]]U[P_ν,[P_μ,U]]
+26/7 [P_μ,[P_ν,U]] [P_μ,U][P_ν,U]U + 18/7 [P_μ,[P_ν,U]] [P_μ,U]U[P_ν,U]
+26/7 [P_μ,[P_ν,U]] U[P_μ,U][P_ν,U] + 17/14 [P_μ,U][P_ν,U] [P_μ,U][P_ν,U]
+ 9/7 [P_μ,U][P_μ,U] [P_ν,U][P_ν,U] + 5/7 U^4[P_μ,P_ν][P_μ,P_ν]
+ 8/7 U^3[P_μ,P_ν]U[P_μ,P_ν] + 9/14 U^2[P_μ,P_ν]U^2[P_μ,P_ν]
+ 18/7 [P_μ,P_ν][P_μ,U]U^2[P_ν,U] + 18/7 [P_μ,P_ν]U[P_μ,U][P_ν,U]U
+ 8/7 [P_μ,P_ν]U[P_μ,U]U[P_ν,U] + 26/7 [P_μ,P_ν][P_μ,U]U[P_ν,U]U
+ 24/7 [P_μ,P_ν][P_μ,U][P_ν,U]U^2 - 2/7 [P_μ,P_ν]U^2[P_μ,U][P_ν,U]],
𝒞^(4,4)_1 = 1/M^81/70, 𝒞^(4,4)_2 = 1/M^83/280, 𝒞^(4,4)_3 = 1/M^813/420, 𝒞^(4,4)_4 = 1/M^83/140,
𝒞^(4,4)_5 = 1/M^813/420, 𝒞^(4,4)_6 = 1/M^817/1680, 𝒞^(4,4)_7 = 1/M^83/280, 𝒞^(4,4)_8 = 1/M^81/168,
𝒞^(4,4)_9 = 1/M^81/105, 𝒞^(4,4)_10 = 1/M^83/560, 𝒞^(4,4)_11 = 1/M^83/140, 𝒞^(4,4)_12 = 1/M^83/140,
𝒞^(4,4)_13 = 1/M^81/105, 𝒞^(4,4)_14 = 1/M^813/420, 𝒞^(4,4)_15 = 1/M^81/35, 𝒞^(4,4)_16 = -1/M^81/420.
§.§ O(P^2 U^j)
Ł^(2,2)_ = 1/M^21/12[ -[P_μ,U] [P_μ,U] ], ⇒ 𝒞^(2,2)_1 = -1/M^21/12.
Ł^(2,3)_ = 1/M^41/24[ -U^2 [P_μ,[P_μ,U]] ], ⇒ 𝒞^(2,3)_1 = -1/M^41/24.
Ł^(2,4)_ = 1/M^61/60[ 2 U^3 [P_μ,[P_μ,U]] + U^2[P_μ,U][P_μ,U] ],
𝒞^(2,4)_1 = 1/M^61/30, 𝒞^(2,4)_2 = 1/M^61/60.
Ł^(2,5)_ = 1/M^81/120[-3 U^4[P_μ,[P_μ,U]]-2 U^3[P_μ,U][P_μ,U]],
𝒞^(2,5)_1 = -1/M^81/40, 𝒞^(2,5)_2 = -1/M^81/60.
Ł^(2,6)_ = 1/M^101/210[-5 U^4 [P_μ,U][P_μ,U] - 8 U^3 [P_μ,U] U [P_μ,U]
- 9/2 U^2 [P_μ,U]U^2 [P_μ,U] ],
𝒞^(2,6)_1 = -1/M^101/42, 𝒞^(2,6)_2 = -1/M^104/105, 𝒞^(2,6)_3 = -1/M^103/140.
§.§ O(P^0 U^j)
Ł^(0,0)_ = M^4 1/2 [3/2-ln[M^2/μ^2]], ⇒ 𝒞^(0,0)_1 = M^4 1/2 [3/2-ln[M^2/μ^2]].
Ł^(0,1)_ = M^2 (1-ln[M^2/μ^2]) [ U], ⇒ 𝒞^(0,1)_1 = M^2 (1-ln[M^2/μ^2]).
Ł^(0,2)_ = - M^0 ln[M^2/μ^2] [ U^2], ⇒ 𝒞^(0,2)_1 = - M^0 ln[M^2/μ^2].
Ł^(0,3)_ = -1/M^21/6 tr [U^3], ⇒ 𝒞^(0,3)_1 = -1/M^21/6.
Ł^(0,4)_ = 1/M^41/24 tr [ U^4], ⇒ 𝒞^(0,4)_1 = 1/M^41/24.
Ł^(0,5)_ = -1/M^61/60 tr [ U^5], ⇒ 𝒞^(0,5)_1 = -1/M^61/60.
Ł^(0,6)_ = 1/M^81/120 tr [ U^6], ⇒ 𝒞^(0,6)_1 = 1/M^81/120.
Ł^(0,7)_ = - 1/M^101/210 tr [ U^7], ⇒ 𝒞^(0,7)_1 = - 1/M^101/210.
Ł^(0,8)_ = 1/M^121/336 tr [ U^8] ⇒ 𝒞^(0,8)_1 = 1/M^121/336.
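The pure-potential coefficients listed above follow the pattern 𝒞^(0,k) = (-1)^k Γ[k-2]/k! M^(4-2k) = (-1)^k (k-3)!/k! M^(4-2k) for k ≥ 3 (the k=0,1,2 entries carry the logarithms from renormalisation). A short numerical check (illustrative sketch, assuming this closed form):

```python
from fractions import Fraction
from math import factorial

# C^(0,k) for k >= 3: (-1)^k * Gamma(k-2)/k!  ->  (-1)^k (k-3)!/k!   (times M^(4-2k))
for k in range(3, 9):
    print(k, Fraction((-1)**k*factorial(k - 3), factorial(k)))
# expected: -1/6, 1/24, -1/60, 1/120, -1/210, 1/336
```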
§ VALIDATION USING COVARIANT DIAGRAM
We present the one-loop effective action up to dimension eight after integrating out heavy degenerate scalars in Sec. <ref>. We agree with the results so far available in the existing literature <cit.>. To cross-check the new results computed in the earlier section, we employ a new method of computation based on covariant diagram techniques, see Refs. <cit.> for details. A brief review of this method has been given in Appendix <ref>. We focus on computing only relevant one-loop diagrams that can generate the results depicted in the previous section.
§.§ Method of Covariant diagram
Here, we illustrate some aspects of the covariant diagram method that are pertinent to our objectives. Eq. (<ref>), given in Appendix <ref>, reduces to Eq. (<ref>) up to an additive constant when integrating out a heavy scalar field or multiple degenerate heavy scalar fields <cit.>.
ℒ_eff [ϕ] = -ic_s tr ∑_n=1^∞ 1/n∫d^d q/(2π)^d [(q^2-M^2)^-1 (2q.P-P^2+U)]^n.
This method allows one to map each integral of order n into a number of covariant diagrams consisting of n heavy propagators 1/(q^2-M^2) together with all permissible combinations of 2q.P, -P^2, and U as vertex insertions. This automatically respects the covariant nature of the functional matching. The most generic form of the loop integrals at n^th order with 2n_c insertions of 2q.P vertices is given as
∫d^d q/(2π)^d q^μ_1⋯ q^μ_2n_c/(q^2-M^2)^n ≡ g^μ_1⋯μ_2n_c ℐ[q^2n_c]^n.
The completely symmetric tensor g^μ_1⋯μ_2n_c, in Eq. (<ref>), takes care of all possible contractions among the P_μ's. We present the explicit expressions for all the relevant and necessary master integrals “ ℐ " for our calculation in the Appendix <ref>.
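The explicit expressions are collected in the appendix table; for orientation, with the symmetric-tensor normalisation above (a sum over all pairings of metrics), the standard dimensionally regularised heavy-mass result reads |ℐ[q^2n_c]^n| = Γ(n-n_c-d/2)/(2^n_c Γ(n)) × 1/(16π^2 M^2(n-n_c)-d) for 2(n-n_c)>d. The following sketch uses this textbook formula rather than the appendix table itself (so the normalisation is an assumption), and reproduces the value quoted in the next subsection:

```python
from fractions import Fraction
from math import factorial

def master_magnitude(n, nc, d=4):
    """|I[q^(2nc)]^n| in units of 1/(16 pi^2 M^(2(n-nc)-d)), for 2(n-nc) > d.
    Standard heavy-mass result: Gamma(n - nc - d/2) / (2^nc * Gamma(n))."""
    assert 2*(n - nc) > d
    return Fraction(factorial(n - nc - d//2 - 1), 2**nc*factorial(n - 1))

print(master_magnitude(8, 1))   # expected: 1/420 = 12/7!, the I[q^2]^8 value used below
print(master_magnitude(8, 0))   # expected: 1/42, the scalar integral I[1]^8
```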
§.§ A detailed case analysis: O(P^2 U^6)
The covariant diagram method maintains a record of all CDE terms and maps the integral at each order to one-loop diagrams. To exemplify we focus on the O(P^2U^6) that appears at n=8. Adhering to the ideas discussed in Ref. <cit.>, we provide all possible diagrams for this operator class in Table <ref>.
In order to discuss how to compute these diagrams, we will concentrate on Fig. <ref>. Since in this category only two P_μ's are present, they must be contracted among themselves to form a Lorentz-invariant structure. Following the conventions of <cit.>, we denote the vertices containing P_μ's and U's with black and white blobs on the loop diagrams respectively and the contraction of the P_μ's is shown with a dotted line. The structure that is unique to the diagram then can be read off starting from one particular blob and going clockwise until we exhaust all the vertices present in the diagram. Following the rule, the structure corresponding to this diagram can then be written as tr (P_μUUUP_μUUU).
[Figure: Representative diagram at 𝒪(P^2U^6).]
Each of the black blobs corresponds to a 2q.P vertex factor, which leaves an additional factor of 2 along with P_μ's when the loop momentum q_μ's are taken inside the integral ℐ[q^2n_c]. When the diagram exhibits an N-fold rotational symmetry, we divide the total value of the loop-integral with a factor of N[This division is necessary to eliminate the overcounting when different operator structures under trace give rise to the same diagram.]. Thus, the contribution from this diagram reads as: -ic_s/2 2^2 ℐ[q^2]^8 = ic_s/2 2^2×(i/16π^21/M^1012/7!)[It should be noted that if a specific diagram and its mirror image cannot be superimposed onto each other even after rotation under trace (see e.g. the second diagram in Table <ref>), these diagrams are connected via Hermitian conjugation. The conjugate diagrams receive exactly similar contributions as their parent diagrams after the expansion of the covariant structures, so we avoid writing them separately.].
Along with this, two more independent diagrams can arise at this level. Table <ref> contains all the diagrams, their corresponding structures with open covariant derivatives, and individual contributions. The number of possible structures appearing at each class implies that the same number of independent covariant structures must be present where P_μ's only appear through commutators. To verify the results obtained in Eq. (<ref>), first, we start with three distinct forms of the effective operators given in that equation. Then we expand the commutators to encompass all the diagrams within this class as
C^(2,6)_1 tr(U^4 [P_μ,U] [P_μ,U]) + C^(2,6)_2 tr(U^3 [P_μ,U] U [P_μ,U])
+ C^(2,6)_3 tr(U^2 [P_μ,U] U^2 [P_μ,U])
= (2C^(2,6)_1-C^(2,6)_2) tr (P_μUP_μUUUUU)
+ (2C^(2,6)_2-2C^(2,6)_3-C^(2,6)_1) tr (P_μUUP_μUUUU)
+ (2C^(2,6)_3-C^(2,6)_2) tr (P_μUUUP_μUUU) + tr(⋯ P^2⋯)terms.
The contraction of two adjacent 2q.P vertices and the contribution from (-P^2) vertices can produce tr(⋯ P^2⋯) terms with the diagrams where the adjacent P_μ's are contracted. It is not necessary to consider these diagrams separately since their coefficients are functions of the same C_k^(i,j)'s which can be determined from other diagrams. The coefficients of the structures in the RHS of Eq. (<ref>), correspond to values of the diagrams given in the third column of Table <ref>,
2C^(2,6)_1-C^(2,6)_2 = -c_s/16π^21/M^1048/7!,
2C^(2,6)_2-C^(2,6)_1-2C^(2,6)_3 = -c_s/16π^21/M^1048/7!,
2C^(2,6)_3-C^(2,6)_2 = -c_s/16π^21/M^1024/7!.
Finally, we find the coefficients associated with the operators given in Eq. (<ref>) as
C^(2,6)_1 = -c_s/16π^21/M^101/42, C^(2,6)_2 = -c_s/16π^21/M^104/105, C^(2,6)_3 = -c_s/16π^21/M^103/140,
comparing with Eq. (<ref>), we can infer C^(2,6)_k = (c_s/(16π^2)) 𝒞^(2,6)_k, which validates our findings.
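The small linear system above can also be solved symbolically; a minimal sketch (with the common factor -c_s/(16π^2 M^10) stripped from both sides, so the solutions appear with positive sign):

```python
import sympy as sp

C1, C2, C3 = sp.symbols('C1 C2 C3')
f7 = sp.factorial(7)                      # 7! = 5040
eqs = [sp.Eq(2*C1 - C2, sp.Rational(48)/f7),
       sp.Eq(2*C2 - C1 - 2*C3, sp.Rational(48)/f7),
       sp.Eq(2*C3 - C2, sp.Rational(24)/f7)]
print(sp.solve(eqs, [C1, C2, C3]))
# expected: {C1: 1/42, C2: 4/105, C3: 3/140}, i.e. the magnitudes of C^(2,6)_{1,2,3}
```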
§.§ Coefficients obtained using Covariant diagrams
§.§.§ ∙ O(P^8)
Starting with the covariant structures derived in Eq. (<ref>), we expand them to find the contributions to each of the diagrams of this class. The diagrams and their corresponding values are given in Table <ref>,
C^(8,0)_1 tr([P_μ,P_ν] [P_μ,P_ν][P_α,P_β][P_α,P_β]) + C^(8,0)_2 tr([P_μ,P_ν] [P_ν,P_ρ][P_μ,P_σ][P_σ,P_ρ])
+ C^(8,0)_3 tr([P_μ,P_ν] [P_ν,P_ρ][P_ρ,P_σ][P_σ,P_μ])
+ C^(8,0)_4 tr([P_μ,P_ν] [P_ρ,P_σ][P_μ,P_ν][P_ρ,P_σ])
+ C^(8,0)_5 tr([P_μ,[P_α,[P_α,P_ν]]] [P_μ,[P_ρ,[P_ρ,P_ν]]])
+ C^(8,0)_6 tr([P_μ,[P_α,[P_α,P_ν]]] [P_ν,P_ρ][P_ρ,P_μ])
+ C^(8,0)_7 tr([P_α,[P_α,P_μ]] P_β,[P_β,P_ν]] [P_μ,P_ν])
⊃ (4C^(8,0)_1 + 2C^(8,0)_3 + 2C^(8,0)_6 - 4C^(8,0)_7 ) tr(P_μP_νP_μP_ρP_σP_νP_ρP_σ)
+ (2C^(8,0)_2 - 2C^(8,0)_6 + 8C^(8,0)_5 ) tr(P_μP_νP_ρP_νP_μP_σP_ρP_σ)
+ ( 2C^(8,0)_2 - 4C^(8,0)_3 - 4C^(8,0)_6 + 4C^(8,0)_7 ) tr(P_μP_νP_μP_ρP_νP_σP_ρP_σ)
- (4C^(8,0)_2 - 2C^(8,0)_6 ) tr(P_μP_νP_μP_ρP_σP_νP_ρP_σ)
+ (C^(8,0)_2 - 8C^(8,0)_4) tr(P_μP_νP_ρP_σP_μP_νP_σP_ρ)
+ (C^(8,0)_3 + 4C^(8,0)_4) tr (P_μP_νP_ρP_μP_σP_ρP_νP_σ)
+ 4C^(8,0)_4 tr (P_μP_νP_ρP_σP_μP_νP_ρP_σ).
Equating the coefficients of the structures on the RHS of the Eq. (<ref>) with the values in Table <ref>, we find,
C^(8,0)_1 = c_s/16π^21/M^417/5040,
C^(8,0)_2 = c_s/16π^21/M^41/420,
C^(8,0)_3 = c_s/16π^21/M^41/2520,
C^(8,0)_4 = c_s/16π^21/M^41/10080,
C^(8,0)_5 = c_s/16π^21/M^41/840,
C^(8,0)_6 = c_s/16π^21/M^42/315,
C^(8,0)_7 = c_s/16π^21/M^42/315.
From Eq. (<ref>), one can check C^(8,0)_k = (c_s/(16π^2)) 𝒞^(8,0)_k, which validates our findings.
§.§.§ ∙ O(P^6 U)
The structures derived in Eq. (<ref>) are expanded to map them back to the diagrams of this class given in Table. <ref>,
C^(6,1)_1 tr(U [P_μ,P_ν] [P_ν,P_ρ] [P_ρ,P_μ]) + C^(6,1)_2 tr(U [P_ν,[P_ν,P_μ]] [P_ρ,[P_ρ,P_μ]])
+ C^(6,1)_3 tr([P_α,[P_α,U] [P_μ,P_ν] [P_μ,P_ν]])
+ C^(6,1)_4 tr([P_β,[P_γ,U]] [P_γ,P_α] [P_α,P_β])
⊃ -( C^(6,1)_1 +4C^(6,1)_3 +C^(6,1)_4) tr(P_μP_νP_ρP_μP_ρP_νU)
+ (C^(6,1)_1 + C^(6,1)_4) tr(P_μP_νP_ρP_μP_ρP_νU)
+ (C^(6,1)_1+4C^(6,1)_2-2C^(6,1)_4) tr(P_μP_νP_μP_ρP_νP_ρU) - C^(6,1)_1 tr(P_μP_νP_ρP_μP_νP_ρU).
We equate the coefficients of the structures containing open covariant derivatives to their corresponding values and reproduce the results given in Eq. (<ref>),
C^(6,1)_1 = -c_s/16π^21/M^41/90,
C^(6,1)_2 = +c_s/16π^21/M^41/60,
C^(6,1)_3 = -c_s/16π^21/M^41/180,
C^(6,1)_4 = +c_s/16π^21/M^41/45.
With an appropriate scale factor, one can compare these coefficients to the previously obtained coefficients shown in Eq. (<ref>); the relation reads C^(6,1)_k = (c_s/(16π^2)) 𝒞^(6,1)_k.
§.§.§ ∙ O(P^4 U^3)
We start with the structures obtained in Eq. (<ref>). From Table <ref>, it can be seen that six independent diagrams arise in this category. Therefore, we assume that each covariant structure with a distinct Hermitian conjugate (h.c.) carries the same coefficient as its conjugate, so as to maintain the overall Hermiticity of the Lagrangian. In this way, we are left with exactly six independent variables that can be solved for from the six associated diagrams.
C^(4,3)_1 tr(U^3 [P_μ,P_ν] [P_μ,P_ν] ) + C^(4,3)_2 tr(U^2 [P_μ,P_ν] U [P_μ,P_ν])
+C^(4,3)_3 { tr(U^2 [P_μ,U] [P_ν,[P_ν,P_μ]])
-tr([P_μ,U] U^2 [P_ν,[P_ν,P_μ]])}
+ C^(4,3)_4 {tr([P_μ,U] [P_ν,U] U [P_μ,P_ν])+tr(U [P_μ,U] [P_ν,U] [P_μ,P_ν]) }
+ C^(4,3)_5 tr(U [P_μ[P_μ,U]] [P_ν[P_ν,U]])+ C^(4,3)_6 tr([P_μ[P_μ,U]] [P_ν,U][P_ν,U])
⊃ (2 C^(4,3)_1 +4C^(4,3)_3 ) tr(UUUP_νP_μP_νP_μ) + (2C^(4,3)_2 - 2C^(4,3)_4) tr(UUP_μP_νUP_μP_ν)
+ (-2C^(4,3)_2+2C^(4,3)_4+2C^(4,3)_6) tr(UUP_μP_νUP_νP_μ) - 2C^(4,3)_3 tr(UUP_μUP_νP_μP_ν)
+ 2C^(4,3)_4 tr(UP_μUP_νUP_μP_ν)-(2C^(4,3)_4-4C^(4,3)_5+4C^(4,3)_6) tr(UP_μUP_μP_νUP_ν).
It is evident from the covariant structures that an operator and its distinct h.c. yield identical contributions to the mirror-symmetric diagrams, while the conjugate operators produce the mirror images of those diagrams that are not mirror-symmetric.
C^(4,3)_1 = c_s/16π^2 𝒞^(4,3)_1= -c_s/16π^21/M^61/60,
C^(4,3)_2 = c_s/16π^2 𝒞^(4,3)_2= -c_s/16π^21/M^61/90,
C^(4,3)_3 = c_s/16π^2 𝒞^(4,3)_3 = -c_s/16π^2 𝒞^(4,3)_4 = c_s/16π^21/M^61/180,
C^(4,3)_4 = c_s/16π^2 𝒞^(4,3)_5 = c_s/16π^2 𝒞^(4,3)_6 = -c_s/16π^21/M^61/180
C^(4,3)_5 = c_s/16π^2 𝒞^(4,3)_7= -c_s/16π^21/M^61/60,
C^(4,3)_6 = c_s/16π^2 𝒞^(4,3)_8= - c_s/16π^21/M^61/90.
The coefficients match exactly with those obtained in Eq. (<ref>).
§.§.§ ∙ O(P^6 U^2)
To begin with, we consider the operator structures given in Eq. (<ref>), derived using the Heat-Kernel method. Noting the possible independent covariant diagrams allowed for this class (see Tables <ref> and <ref>), we anticipate at most seventeen independent covariant operators (excluding the h.c.'s). Regarding the h.c.'s, we follow the same prescription as in the case of O(P^4 U^3). We express the last two operators in Eq. (<ref>) (i.e., [P_α,P_μ][P_μ,P_β]U [P_α,[P_β,U]], and [P_α,P_μ][P_μ,P_β][P_α,[P_β,U]] U) in terms of the others.
C^(6,2)_1 tr(U^2 [P_μ,P_ν] [P_ν,P_α][P_α,P_μ]) + C^(6,2)_2 tr(U [P_μ,P_ν] U [P_ν,P_α][P_α,P_μ])
+ C^(6,2)_3 tr(U^2 [P_μ,[P_μ,P_ν]] [P_α,[P_α,P_ν]]) + C^(6,2)_4 tr(U [P_μ,[P_μ,P_ν]] U [P_α,[P_α,P_ν]])
+ C^(6,2)_5 {tr(U [P_μ,[P_μ,U]] [P_ν,P_α] [P_ν,P_α])+tr([P_μ,[P_μ,U]] U [P_ν,P_α] [P_ν,P_α])}
+ C^(6,2)_6 {tr(U [P_μ,U] [P_ν[P_ν,P_α] [P_μ,P_α])+tr(U [P_μ,P_α] [P_ν[P_ν,P_α] [P_μ,U])}
+ C^(6,2)_7 {tr(U [P_μ,U] [P_μ,P_ν] [P_α[P_α,P_ν])+tr(U [P_α[P_α,P_ν] [P_μ,P_ν] [P_μ,U])}
+ C^(6,2)_8 tr(U [P_μ,P_ν] [P_α,[P_α,U] [P_μ,P_ν]) + C^(6,2)_9 tr( [P_μ,U] [P_μ,U] [P_ν,P_α] [P_ν,P_α])
+ C^(6,2)_10 {tr( [P_μ,U] [P_ν,[P_ν,P_α]] U [P_μ,P_α])+tr( [P_μ,P_α] U [P_ν,[P_ν,P_α]] [P_μ,U])}
+ C^(6,2)_11 tr( [P_μ,U] [P_ν,U] [P_μ,P_α] [P_α,P_ν])+ C^(6,2)_12 tr( [P_μ,U] [P_ν,U] [P_ν,P_α] [P_α,P_μ])
+ C^(6,2)_13 {tr( [P_μ,U] [P_μ,P_ν] [P_α,U] [P_α,P_ν])+tr( [P_μ,U] [P_α,P_ν] [P_α,U] [P_μ,P_ν])}
+ C^(6,2)_14 tr( [P_μ,U] [P_ν,P_α] [P_μ,U] [P_ν,P_α])
+ C^(6,2)_15 {tr( [P_μ,[P_μ,U]] [P_ν,U] [P_α,[P_α,P_ν]])
- tr( [P_μ,[P_μ,U]] [P_α,[P_α,P_ν]] [P_ν,U])} + C^(6,2)_16 {tr( [P_μ,U] [P_ν,U] [P_μ,[P_α,[P_α,P_ν]]])
- tr( [P_μ,U] [P_μ,[P_α,[P_α,P_ν]]] [P_ν,U])}
+2 C^(6,2)_17 tr( [P_μ,[P_ν,[P_ν,U]]] [P_μ,[P_α,[P_α,U]]])
⊃ -(C_1+C_11) tr (P_μP_νP_ρUUP_μP_νP_ρ)+ (C_1 + 4C_3 - 4C_7 -C_12) tr (P_μP_νP_μUUP_ρP_νP_ρ)
- (C_1 + 2C_9 + 4C_6) tr (P_μP_νP_ρUUP_ρP_μP_ν)
+ (C_1+2C_6+C_11+2C_16) tr (P_μP_νP_ρUUP_νP_μP_ρ)
+ (4C_4-4C_10+2C_13) tr (UP_μP_νP_μUP_ρP_νP_ρ)+4C_14 tr (UP_μP_νP_ρUP_μP_νP_ρ)
- (2C_14-C_13) tr (UP_μP_νP_ρUP_νP_μP_ρ)-(4C_8+4C_13) tr (UP_μP_νP_ρUP_ρP_μP_ν)
+ (4C_8+2C_13+8C_17) tr (UP_μP_νP_ρUP_ρP_νP_μ)
- (1/2C_2+1/2C_11+2C_14) tr (UP_μP_νUP_ρP_μP_νP_ρ)
+ (C_2-C_12+4C_14) tr (UP_μP_νUP_ρP_νP_μP_ρ)
+ (C_2+C_11+2C_10+2C_16-2C_13) tr (UP_μP_νUP_μP_ρP_νP_ρ)
- (C_2+2C_10-C_12-2C_13-4C_15+2C_16) tr (UP_μP_νUP_νP_ρP_μP_ρ)
+ 2C_11 tr (P_μP_νP_ρUP_μUP_νP_ρ)
+ (2C_6+2C_9-C_12-4C_15+2C_16-4C_5-2C_7) tr (P_μUP_μUP_νP_ρP_νP_ρ)
- (2C_6+C_11-C_12-2C_7+2C_16) tr (P_μUP_νUP_ρP_μP_ρP_ν)
- (2C_11+4C_16) tr (P_μP_νP_ρP_νP_μUP_ρU)
The coefficients in the LHS of the Eq. (<ref>) can be obtained from the values of the loop diagrams,
C^(6,2)_1 = c_s/16π^21/M^61/210,
C^(6,2)_2 = c_s/16π^21/M^62/315,
C^(6,2)_3 = -c_s/16π^21/M^61/105,
C^(6,2)_4 = -c_s/16π^21/M^61/140,
C^(6,2)_5 = c_s/16π^21/M^61/105,
C^(6,2)_6 = -c_s/16π^21/M^61/210,
C^(6,2)_7 = -c_s/16π^21/M^61/105,
C^(6,2)_8 = c_s/16π^21/M^61/315,
C^(6,2)_9 = c_s/16π^21/M^611/1260,
C^(6,2)_10 = -c_s/16π^21/M^61/126,
C^(6,2)_11 = -c_s/16π^21/M^61/630,
C^(6,2)_12 = c_s/16π^21/M^61/126,
C^(6,2)_13 = -c_s/16π^21/M^61/420,
C^(6,2)_14 = -c_s/16π^21/M^61/2520,
C^(6,2)_15 = -c_s/16π^21/M^61/315,
C^(6,2)_16 = c_s/16π^21/M^61/630,
C^(6,2)_17 = -c_s/16π^21/M^61/840.
§ UNIVERSAL ONE-LOOP EFFECTIVE LAGRANGIAN UP TO D8
Relying on the computation based on the Heat-Kernel method and supported by the covariant diagram technique, we exhaustively compute all possible operator structures that can emerge after integrating out degenerate heavy scalars at one loop, considering only heavy propagators in the loop. We collect all such terms and provide the universal one-loop effective Lagrangian up to dimension eight. This effective action does not depend on the specifics of either the UV or the low-energy theory, and in that sense it is universal.
ℒ_^d ≤ 8 = ℒ_^ren + c_s(4π)^2[ Ł^(8,0)_ + Ł^(6,0)_ + Ł^(6,1)_ + Ł^(6,2)_ + Ł^(4,0)_ + Ł^(4,1)_ + Ł^(4,2)_ + Ł^(4,3)_ + Ł^(4,4)_
+ Ł^(2,0)_ + Ł^(2,1)_ + Ł^(2,2)_ + Ł^(2,3)_ + Ł^(2,4)_ + Ł^(2,5)_ + Ł^(2,6)_ + Ł^(0,0)_ + Ł^(0,1)_
+ Ł^(0,2)_ + Ł^(0,3)_ + Ł^(0,4)_ + Ł^(0,5)_ + Ł^(0,6)_ + Ł^(0,7)_ + Ł^(0,8)_]
= c_s(4π)^2 M^4 [-1/2 (ln[M^2/μ^2]-3/2) ] + c_s(4π)^2 { M^2 [-(ln[M^2/μ^2]-1) U]
+ M^0 1/2[- ln[M^2/μ^2] U^2 -1/6ln[M^2/μ^2] (G_μν)^2]
+ 1/M^21/6 [ -U^3 - 1/2 (P_μ U)^2-1/2U (G_μν)^2 - 1/10(J_ν)^2 + 1/15 G_μν G_νρ G_ρμ]
+ 1/M^41/24 [U^4 - U^2 (P^2 U) + 4/5U^2 (G_μν)^2 + 1/5 (U G_μν)^2 + 1/5 (P^2 U)^2
-2/5 U (P_μ U) J_μ + 2/5 U(J_μ)^2 - 2/15 (P^2 U) (G_ρσ)^2 +1/35(P_ν J_μ)^2
- 4/15 U G_μνG_νρ G_ρμ - 8/15 (P_μ P_ν U) G_ρμ G_ρν + 16/105G_μνJ_μJ_ν
+ 1/420 (G_μνG_ρσ)^2 +17/210(G_μν)^2(G_ρσ)^2 +2/35(G_μνG_νρ)^2
+ 1/105 G_μνG_νρG_ρσG_σμ +16/105 (P_μ J_ν) G_νσG_σμ]
+ 1/M^61/60 [ -U^5 + 2 U^3 (P^2 U) + U^2(P_μ U)^2 - 2/3 U^2 G_μν U G_μν - U^3 (G_μν)^2
+ 1/3 U^2 (P_μ U)J_μ - 1/3 U (P_μ U)(P_ν U) G_μν - 1/3 U^2 J_μ (P_μ U)
- 1/3 U G_μν(P_μ U)(P_ν U) - U (P^2 U)^2 - 2/3 (P^2 U) (P_ν U)^2 - 1/7 ((P_μ U)G_μα)^2
+2/7 U^2 G_μνG_ναG_αμ+8/21U G_μνU G_ναG_αμ-4/7U^2(J_μ)^2 -3/7 (U J_μ)^2
+4/7U (P^2U)(G_μν)^2 +4/7(P^2U)U(G_μν)^2 -2/7U (P_μ U)J_ν G_μν
-2/7(P_μ U)U G_μν J_ν -4/7U (P_μ U)G_μν J_ν -4/7(P_μ U)U J_ν G_μν
+4/21U G_μν(P^2U)G_μν +11/21(P_α U)^2(G_μν)^2 - 10/21(P_μ U)J_ν U G_μν
- 10/21(P_μ U) G_μν U J_ν - 2/21 (P_μ U)(P_ν U)G_μαG_αν + 10/21 (P_ν U)(P_μ U)G_μαG_αν
-1/7 (G_αμ(P_μ U))^2 - 1/42 ((P_α U)G_μν)^2 -1/14 (P_μ P^2 U)^2 -4/21 (P^2U) (P_μ U)J_μ
+4/21 (P_μ U)(P^2U)J_μ +2/21 (P_μ U) (P_ν U)(P_μ J_ν) - 2/21 (P_ν U) (P_μ U)(P_μ J_ν) ]
+ 1/M^81/120 [U^6 - 3 U^4 (P^2 U) - 2 U^3(P_ν U)^2 + 12/7U^2 (P_μ P_ν U)(P_ν P_μ U)
+26/7 (P_μ P_ν U) U (P_μ U)(P_ν U) +26/7 (P_μ P_ν U) (P_μ U)(P_ν U)U + 9/7 (P_μ U)^2(P_ν U)^2
+ 9/7 U (P_μ P_ν U)U (P_ν P_μ U) + 17/14 ((P_μ U)(P_ν U))^2 + 8/7 U^3G_μνU G_μν
+ 5/7 U^4(G_μν)^2 + 18/7 G_μν(P_μ U)U^2(P_ν U) + 9/14 (U^2G_μν)^2
+ 18/7 G_μνU (P_μ U)(P_ν U)U + 18/7 (P_μ P_ν U) (P_μ U)U (P_ν U)
+ ( 8/7 G_μνU (P_μ U)U (P_ν U) + 26/7 G_μν(P_μ U)U (P_ν U)U )
+ ( 24/7 G_μν(P_μ U)(P_ν U)U^2 - 2/7 G_μνU^2(P_μ U)(P_ν U))]
+ 1/M^101/210 [-U^7 - 5 U^4 (P_ν U)^2 - 8 U^3(P_μ U)U(P_μ U) -9/2 (U^2 (P_μ U))^2 ]
+ 1/M^121/336 [U^8] }.
Here, the tensors G_μν and J_μ are built from P: G_μν = [P_μ,P_ν], and J_μ = P_ν G_νμ = [P_ν,[P_ν,P_μ]]. Note that the Hermitian conjugates are already included in the above expression, so that the effective Lagrangian is self-Hermitian.
We agree with the effective action up to dimension six computed using the functional method <cit.> and covariant diagram <cit.>.
§.§ Dimension of U vs Dimension of Operator
We note that the dimension of the scalar functional U does not always reflect the dimension of the resulting operator built out of light fields (ϕ). Since U is defined as a double functional derivative of the action with respect to the heavy fields (Φ_i), its mass dimension (in four space-time dimensions) is always +2. Thus, U may contain a single light scalar field accompanied by a coupling of non-zero mass dimension. In that case, identifying the operator's mass dimension by naive power counting of U may be misleading. For example, the structure 𝒪(P^2mU^n) has mass dimension 2(m+n), with m,n ∈ℤ^+. But an operator of mass dimension 2(m+n) can also be generated from all the structures up to 𝒪(P^2mU^2n). We demonstrate this through a simple toy example where the scalar potential takes the following form
V_scalar⊃λ (ϕ^†ϕ)^2 + λ_1 (Φ_1^†Φ_1)^2 + λ_2 (Φ_2^†Φ_2)^2 + λ_3 (ϕ^†ϕ)(Φ_1^†Φ_1)
+ λ_4 (ϕ^†ϕ)(Φ_2^†Φ_2) + λ_5 (Φ_1^†Φ_1)(Φ_2^†Φ_2)+(κ ϕ^†(Φ_1^†Φ_2)+h.c.).
The explicit structure of U can be obtained through,
U[ϕ] =
[ δ^2 V_scalar/δΦ_1^†δΦ_1 δ^2 V_scalar/δΦ_1^†δΦ_2; δ^2 V_scalar/δΦ_2^†δΦ_1 δ^2 V_scalar/δΦ_2^†δΦ_2 ]|_Φ_1=Φ_1,c[ϕ], Φ_2=Φ_2,c[ϕ].
As the Lagrangian does not contain terms linear in each of the heavy fields, their classical solutions correspond to Φ_1,c=Φ_2,c=0. So the final form of U can be written as
U[ϕ] =
[ λ_3 (ϕ^†ϕ) κ ϕ^†; κ^* ϕ λ_4 (ϕ^†ϕ) ].
Now, we compute the total contributions to the dimension eight operator (ϕ^†ϕ)^4
ℒ_eff^(0,4) ⊃ 1/24 M^4 (λ_3^4+λ_4^4) (ϕ^†ϕ)^4 ∼𝒪(U^4),
ℒ_eff^(0,5) ⊃ -1/12 M^6 (|κ|^2λ_3^3+|κ|^2λ_3^2λ_4+|κ|^2λ_3λ_4^2+|κ|^2λ_4^3) (ϕ^†ϕ)^4 ∼𝒪(U^5),
ℒ_eff^(0,6) ⊃ 1/40 M^8 (3|κ|^4λ_3^2+4|κ|^4λ_3λ_4+3|κ|^4λ_4^2) (ϕ^†ϕ)^4 ∼𝒪(U^6),
ℒ_eff^(0,7) ⊃ -1/30 M^10 (|κ|^6λ_3+|κ|^6λ_4) (ϕ^†ϕ)^4 ∼𝒪(U^7),
ℒ_eff^(0,8) ⊃ 1/168 M^12 |κ|^8 (ϕ^†ϕ)^4 ∼𝒪(U^8).
As the mass dimension of U is +2, one might expect a solitary contribution to the dimension-eight operator (ϕ^†ϕ)^4 from U^4. That is certainly not the case: we find contributions to the same operator from all the structures U^5, U^6, U^7, and U^8 as well. As an example, a similar effect occurs when the SM is extended by a second Higgs doublet (2HDM) and a gauge-singlet scalar, and the low-energy theory is the SMEFT.
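These contributions are straightforward to reproduce symbolically. The following sympy sketch (illustrative only; the symbol x stands in for ϕ^†ϕ, and the off-diagonal entries are modelled by κ√x and κ̄√x, which is legitimate because every closed walk in tr U^n pairs them up) multiplies the coefficient of x^4 in tr U^n by the universal pure-potential prefactor (-1)^n (n-3)!/n! and recovers the expressions above:

```python
import sympy as sp

l3, l4, ka, kb, x = sp.symbols('lambda_3 lambda_4 kappa kappabar x', positive=True)

# Toy U[phi]: x stands for (phi^dagger phi); kappa*sqrt(x) and kappabar*sqrt(x) model the
# off-diagonal entries (they always appear in pairs along a trace, so powers of x stay integer).
U = sp.Matrix([[l3*x, ka*sp.sqrt(x)], [kb*sp.sqrt(x), l4*x]])

for n in range(4, 9):
    prefactor = sp.Integer(-1)**n*sp.factorial(n - 3)/sp.factorial(n)   # C^(0,n), M power dropped
    coeff = sp.expand((U**n).trace()).coeff(x, 4)
    print(n, sp.factor(prefactor*coeff))
# expected (times M^(4-2n)): (lambda_3^4+lambda_4^4)/24, -(kappa*kappabar)(lambda_3^3+...)/12, ...
```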
§ CONCLUSIONS
Effective Field Theory (EFT) has drawn much attention in recent times, and ever higher precision is being sought in EFT calculations. Higher-dimensional effective operators also carry their own signatures and can help break the degeneracy among different UV theories. The computation of dimension-eight effective operators will therefore have a large impact on ongoing analyses, and in this paper we have taken a step in that direction. We have employed the Heat-Kernel (HK) method to compute the dimension-eight one-loop effective Lagrangian after integrating out either a single heavy scalar or multiple degenerate heavy scalars. Our results do not depend on the form of either the UV or the low-energy theory. Here, we have computed the contributions from loops that consist of only heavy scalar propagators. We have also employed the covariant diagram method to cross-check part of our results; the two methods complement each other and validate our findings. We plan to extend this result by including heavy fermions and light–heavy propagator mixing in future work.
§ ACKNOWLEDGEMENTS
We acknowledge the useful discussions with Diptarka Das and Nilay Kundu. The authors would also like to acknowledge the initial discussions with Priyank Kaushik.
§ REVIEW OF COVARIANT DIAGRAM METHOD
In this section, we present a brief review of the development of the covariant diagram representation starting from the original gauge-covariant functional form of the effective action. The topic has been greatly discussed in Refs. <cit.>. As shown in Eq. (<ref>), the one-loop part of the effective action for a field can be given as,
Δ S_eff = ic_s Trlog(-P^2+M^2+U),
here, c_s = +1/2, or +1 depending on whether the heavy field is a real scalar or complex scalar. The trace “Tr" can then be evaluated by taking an integral over the momentum eigenstate basis,
∫ d^dx ℒ_eff[ϕ]
= ic_s ∫ d^d q/(2π)^d⟨q|trlog(-P^2+M^2+U)|q⟩
= ic_s∫ d^dx∫d^d q/(2π)^d⟨q|x⟩⟨x|trlog(-P^2+M^2+U)|q⟩
= ic_s∫ d^dx∫d^d q/(2π)^d e^i q.x trlog(-P^2+M^2+U) e^-i q.x.
By following a straightforward manipulation of introducing the completeness relation
for the basis of the spatial eigenstates, we find the following form of the effective one-loop Lagrangian,
ℒ_eff[ϕ] = ic_s∫d^d q/(2π)^d trlog(-P^2+M^2+U)_P→ P-q
= ic_s∫d^d q/(2π)^dtrlog(-P^2-q^2+2q.P+M^2+U)
= ic_s∫d^d q/(2π)^dtr{log(-q^2+M^2)
+log[1-(q^2-M^2)^-1(-P^2+2q.P+U)]}.
After performing the momentum integral, the first term in Eq. (<ref>) reduces to a constant, while the second term can be expanded in an infinite series as shown in Eq. (<ref>).
§.§ Covariant loop diagrams, their structures and values
In this subsection, we present all the diagrams that can contribute to dimension eight interactions at each order of P^2nU^m with possible contractions among P_μ's and note down the corresponding operator structures containing open covariant derivatives (P_μ's) and value for the loops.
§.§.§ 𝐎(𝐏^4 𝐔^3)
§.§.§ 𝐎(𝐏^8)
§.§.§ 𝐎(𝐏^6 𝐔^2)
§.§.§ 𝐎(𝐏^6 𝐔)
§.§ Master integrals for heavy loops
Each of the covariant diagrams mentioned in Sec. <ref> corresponds to a loop integral with n heavy propagators and 2n_c contractions which can be generalised in the following form
∫d^d q/(2π)^d q^μ_1⋯ q^μ_2n_c/(q^2-M^2)^n ≡ g^μ_1⋯μ_2n_c ℐ[q^2n_c]^n,
We have used Package-X <cit.> to compute the loop integrals. In Table <ref>, we have listed the results for the loop integrals discussed in Sec. <ref>.
http://arxiv.org/abs/2306.06867v6 | 20230612045724 | A Novel Generalization of the Liouville Function $λ(n)$ and a Convergence Result for the Associated Dirichlet Series | ["Sky Pelletier Waterpeace"] | math.NT | ["math.NT", "11M26, 11M06"]
Rowan University, Glassboro, NJ, 08028, USA
Southern New Hampshire University, Manchester, NH, 03106, USA
Statements and Declarations: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors and there are no interests to declare. Affiliation information is for contact purposes only and does not imply any support or affiliation with this work.
[email protected]@snhu.edu
We introduce a novel arithmetic function w(n), a generalization of the Liouville function λ(n), as the coefficients of a Dirichlet series, and as a special case of a parametrized family of functions w_m(n). We prove some useful special properties of these arithmetic functions and then focus on convergence of their Dirichlet series. In particular, we show that each function w_m(n) injectively maps ℕ into a dense subset of the unit circle in ℂ and that F_m(s) = ∑_n w_m(n)/n^s converges for all s with Re(s)∈(1/2,1). Finally, we show that the family of functions w_m(n) converges to λ(n) and that F_m(s) converges uniformly in m to ∑_n λ(n)/n^s, implying convergence of that series in the same region and thereby proving a particularly interesting property about a closely related function.
[MSC Classification]11M26, 11M06
A Novel Generalization of the Liouville Function λ(n) and a Convergence Result for the Associated Dirichlet Series
Sky Pelletier Waterpeace 0000-0002-2231-0160
July 31, 2023
===================================================================================================================
§ INTRODUCTION
Definition of F(s) and w(n)
We define an arithmetic function w(n) as the coefficients of the Dirichlet series given by
F(s) = ∑_n=1^∞w(n)/n^s :=
∏_p, prime( 1 - e^iψ(p)/p^s
+e^iψ(p^2)/p^2s
- e^iψ(p^3)/p^3s
+ ⋯)
where ψ(p^k) is given by
ψ(p^k) := π/p^2G(1 - (p-1/p)^k)
in which we are letting G=∑_p p^-2 be the sum of the reciprocals of the primes squared. We note that F(s) is a kind of Euler product defining each w(n) based on the prime factorization of n.
w(n) has the following properties:
* w(n) is multiplicative: w(ab) = w(a)w(b) for any coprime a,b∈ℕ
* w:ℕ→ C is injective, where C = {z∈ℂ | |z|=1}, the unit circle in ℂ.
* arg(w(n))∈[0,π) for λ(n)=1 and arg(w(n)) ∈(-π,0) for λ(n)=-1, where λ(n) is the Liouville function.
Properties <ref> and <ref> follow directly from the arithmetic of fractions with coprime denominators. We therefore begin by proving Property <ref>.
Observe that if the prime factorization of n is given by n=p_1^k_1 p_2^k_2⋯ p_J^k_J, then w(n)=e^iθ(n) where
θ(n) = arg(w(n)) = π/2(λ(n)-1) + ∑_j=1^J ψ(p_j^k_j).
From the definition of w(n) in (<ref>) it is clear how the sum in (<ref>) results from each prime factor p_j^k_j of n contributing a factor of exp(i ψ(p_j^k_j)) to w(n), and likewise the term π/2(λ(n)-1) results from the alternating signs in the series in (<ref>), whereby each prime factor of n contributes a factor of -1=e^iπ to w(n), thereby shifting θ(n) by π. Next, observe that ψ(p^k) is the k^th partial sum of the geometric series with a=π/p^3G and r=p-1/p. Therefore, lim_k→∞ψ(p^k) = π/p^2G, and so ∑_p (lim_k→∞ψ(p^k)) = π, since we defined G to be the sum of the reciprocals of the primes squared.
Imagine a hypothetical number n whose prime factorization contains all prime numbers in infinite multiplicity. Then for all p_j^k_j in its prime factorization, k_j→∞ for each j∈ℕ. So the value of ∑_j=1^∞ψ(p_j^k_j) in the argument of w(n) for such a theoretical n would be π, and for any actual n∈ℕ the sum would be less than π. Therefore, if n has an even number of prime factors, the argument of w(n) would be in [0,π)
(zero being the argument of w(1)); if instead n has an odd number of prime factors, we introduce an extra factor of -1, which has the effect of subtracting π from arg(w(n)), so that arg(w(n))∈(-π, 0). This proves Property <ref>.
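A short numerical illustration of this property (a sketch only, approximating G by a truncated sum over primes below 10^6):

```python
from math import pi
from sympy import factorint, primerange

G = sum(1.0/p**2 for p in primerange(2, 10**6))       # truncated approximation of G

def theta(n):
    """Principal argument of w(n): pi/2*(lambda(n)-1) + sum over prime powers of psi(p^k)."""
    f = factorint(n)
    lam = (-1)**sum(f.values())                       # Liouville function lambda(n)
    return pi/2*(lam - 1) + sum(pi/(p**2*G)*(1 - ((p - 1)/p)**k) for p, k in f.items())

for n in [2, 3, 4, 6, 12, 30, 360]:
    print(n, round(theta(n), 4))
# lambda(n) = +1 gives theta(n) in [0, pi); lambda(n) = -1 gives theta(n) in (-pi, 0)
```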
The set of coefficients {w(n)} for n∈ℕ is dense in C, the unit circle in ℂ.
We saw above that w(n) = e^iθ(n) where
θ(n) =π/2(λ(n)-1)+∑_p|nπ/p^2 G(1 - (p-1/p)^k_p),
with G=∑_p 1/p^2 and where k_p is the multiplicity of the prime factor p|n. Thus, to show {w(n)} dense in C, we will show {θ(n)} dense in (-π,π).
To begin, let x∈(0,π) and choose ϵ >0. We will show there exists an infinite sequence {n_k} such that θ(n_k) is within an ϵ-radius of x for all sufficiently large k. We will construct {n_k} by identifying prime factors to use to build θ(n_k) to be in the proper interval when k is sufficiently large. We will accomplish this essentially by identifying terms of π/G∑_p^∞1/p^2 which will yield a sum in the ϵ-neighborhood of x.
First, let x_0 = x and choose b_0 as large as possible such that
B_0 = π/G∑_j=b_0^∞1/p_j^2 > x_0-ϵ.
Next, if B_0 > x_0+ϵ, choose t_0 as large as possible such that
T_0 = π/G∑_j=t_0+1^∞1/p_j^2 > B_0 - (x_0+ϵ)
(If we have that B_0 ≤ x_0 + ϵ, instead choose t_0 sufficiently large so that T_0 is small enough for |x_0-(B_0-T_0)|<ϵ). Clearly t_0+1 ≥ b_0, since B_0 > B_0 - (x_0+ϵ) and t_0 is chosen to be the largest such that (<ref>) holds. If t_0+1>b_0 then let S_0 = B_0 - T_0 = π/G∑_j=b_0^t_01/p_j^2. Otherwise, if t_0+1 = b_0, then by the choice of t_0 it follows that
π/G∑_j=t_0+2^∞1/p_j^2≤ B_0 - (x_0 +ϵ) = ( π/G∑_j=b_0^∞1/p_j^2) - (x_0 +ϵ),
and so since t_0+1 = b_0 we have
x_0 + ϵ + π/G∑_j=t_0+2^∞1/p_j^2≤π/G∑_j=t_0+1^∞1/p_j^2,
and therefore x_0 + ϵ≤π/G1/p_j^2 for j=t_0+1=b_0. In this case, let p=p_j and choose k_0 the largest such that ψ(p^k_0) ≤ x_0, and let S_0 = ψ(p^k_0). Note that this choice of k_0 is always possible: if K is the set of all k∈ℕ such that ψ(p^k) ≤ x_0, then K is finite, since otherwise lim_k→∞ψ(p^k) = π/G1/p_j^2≤ x_0, but this is a contradiction since we had that x_0+ϵ≤π/G1/p_j^2. It remains to be seen that K is nonempty, so assume to get a contradiction that
x_0 < ψ(p^1) = π/p^2G(1 - p-1/p) = π/p^3G,
but since b_0 was chosen to be the largest such that π/G∑_j=b_0^∞1/p_j^2 > x_0 - ϵ, therefore
π/G∑_j=b_0+1^∞1/p_j^2≤ x_0 - ϵ,
and if we let q be the next prime after p, so q=p_b_0+1, then we have in particular that π/G1/q^2 < x_0 - ϵ. However, for any successive primes p, q, from Bertrand's Postulate we have that q<2p, or equivalently, 1/p < 2/q, and so it follows that
π/Gq/q^3 =
π/G1/q^2< x_0 - ϵ < x_0 < π/G1/p^3 < π/G8/q^3,
which is a contradiction unless q<8. However, if q<8 then p∈{2,3,5}, in which case we know numerically that
ψ(p^1) = π/p^3G < π/G∑_j=b_0+1^∞1/p_j^2,
and the sum on the right is less than or equal to x_0 -ϵ by our choice of b_0. This contradicts our assumption that x_0 < ψ(p^1). Hence that assumption is false; K is nonempty, and there exists some maximum k_0 such that ψ(p^k_0) ≤ x_0.
We have now that S_0 = B_0 - T_0 or S_0 = ψ(p^k_0), and in either case S_0 has been constructed such that S_0 < x + ϵ. Finally, let ϵ_0 = |x-S_0|, and if ϵ_0 ≥ϵ, it follows that S_0 < x-ϵ. In that case repeat the process above, this time letting x_1 = ϵ_0, choosing the maximum possible b_1 and t_1 to form B_1>x_1 - ϵ and T_1>B_1-(x_1+ϵ) as in (<ref>) and (<ref>), and letting S_1 = S_0 + (B_1 - T_1), or if B_1=T_1 letting S_1 = S_0 + ψ(p^k_1) for p=p_b_1 and k_1 maximum such that ψ(p^k_1)≤ x_1. Let ϵ_1 = |x - S_1|, and note that S_1 < x+ϵ and that ϵ_1 < ϵ_0.
Continue until we have ϵ_k < ϵ for some k. This is possible since given a sum S_m and ϵ_m = |x - S_m| with ϵ_m ≥ϵ, we can always find a tail of π/G∑_p 1/p^2 using only primes larger than were used to form S_m and which when added to S_m takes us greater than x-ϵ, and we have already shown a method for cutting off the tail in the event that it takes us greater than x+ϵ; this process reduces ϵ_m and the process terminates when we have ϵ_k < ϵ for some k, yielding S_k, a sum in the range (x-ϵ, x+ϵ).
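The construction can also be illustrated numerically. The sketch below is a simplified greedy variant in the spirit of this argument (it selects, for each successive prime, the largest admissible exponent rather than following the exact B/T tail bookkeeping above), and it lands within ε of a target x∈(0,π):

```python
from math import pi
from sympy import primerange

G = sum(1.0/p**2 for p in primerange(2, 10**6))       # truncated approximation of G

def psi(p, k):
    return pi/(p**2*G)*(1 - ((p - 1)/p)**k)

def greedy_sum(x, eps, max_prime=10**5):
    """Greedily choose exponents k_p so that sum_p psi(p^k_p) lands within eps of x.
    Simplified sketch in the spirit of the density proof, not the exact B/T construction."""
    total, chosen = 0.0, {}
    for p in primerange(2, max_prime):
        k = 0
        while psi(p, k + 1) <= x - total:             # psi(p, .) increases to pi/(G p^2)
            k += 1
        if k:
            chosen[p] = k
            total += psi(p, k)
        if abs(x - total) < eps:
            break
    return chosen, total

exponents, s = greedy_sum(1.0, 1e-6)
print(s, dict(list(exponents.items())[:5]))           # a sum within 1e-6 of the target 1.0
```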
Recall that we are in the process of identifying primes to use to form {n_k}, so at each iteration of this process we must necessarily identify primes which have not yet been used. To see that we can always find a sufficiently large tail of the series which does not use any primes already used, assume we have obtained a sum S_m and ϵ_m = x-S_m as described above, with ϵ_m ≥ϵ, and for notational convenience let n=m+1. Let b_n be maximum such that
B_n = π/G∑_j=b_n^∞1/p_j^2 > x_n-ϵ
(following the same method as before, having at this stage replaced x with x_n = ϵ_m, the distance from S_m to x). We must consider two cases: the first case in which the last term added to form S_m was (B_m - T_m), in which we need to show that b_n>t_m, and the second case in which the last term added was ψ(p^k_m) for p=p_b_m and appropriate choice of k_m
In the first case, note that if the term containing p_t_m+1 were included in B_m-T_m, then we would have B_m - T_m > x_m+ϵ, by choice of t_m as in (<ref>). Therefore, if we let b_n=t_m+1, then we would have already B_n > x_n -ϵ. Hence, since b_n is the maximum which allows B_n to fulfill this condition, we have that b_n>t_m.
Now in the second case, in which S_m = S_m-1 + ψ(p^k_m) where p=p_b_m, since k_m was chosen as the maximum such that ψ(p^k_m) ≤ x_m (or equivalently, such that S_m = S_m-1+ψ(p^k_m) ≤ x), and since we must have S_m ≤ x-ϵ or else we would not need to continue, then the difference ψ(p^k_m+1) - ψ(p^k_m) > x_n - ϵ (since the distance from our current S_m to x is exactly ϵ_m = x_n). Hence it suffices to show there exists a B_n greater than this difference, but note that
ψ(p^k_m+1) - ψ(p^k_m) is the difference between successive partial sums of a convergent geometric series, and so we have that ψ(p^k_m+1) - ψ(p^k_m) ≤ψ(p^2) - ψ(p) < ψ(p^2).
Finally, since
ψ(p^2) = π/p^2G(1-(p-1/p)^2) =π/p^2G(2/p - 1/p^2) < 2π/p^3G,
we see that if we can find a B_n larger than 2π/p^3G we are done. In fact, if we let q be the next prime after p, so q=p_b_m+1, then since q<2p implies 2q^2 < 8p^2, which implies
2π/8p^2G < π/q^2G,
we have that if p>8 it follows that
2π/p^3G <2π/8p^2G < π/q^2G,
and so ψ(p^2) < π/q^2G < B_n, letting
B_n = π/G∑_j=b_m+1^∞1/p_j^2.
On the other hand if p<8 it is easily verifiable numerically that ψ(p^2) < B_n with B_n as defined in (<ref>).
We have that, in either case, the algorithm producing a value S_m with ϵ_m ≥ϵ will always continue with a B_n > x_n - ϵ (with none of the terms of B_n using primes used to form S_m) with appropriate choice of T_n or ψ(p^k_n), providing S_n < x+ϵ, with S_n >S_m, until we have that ϵ_k = |x-S_k| < ϵ for some k.
Now, let I index the primes used in all the sums of the form π/G∑_j=b^t 1/p_j^2 and J index the primes and multiplicities used in the form ψ(p^k). Let n = ∏_j∈ I p_j and let m =∏_j∈ J p_j^k_j. (If I is empty, instead let I index a sufficiently large prime p so that π/p^2G+S_k is still within ϵ of x, and if J is empty, instead let m=1). Now if λ(m) = -1 (†) replace m by pm for some p indexed by I. Then for all k∈ℕ, let n_k =mn^2k, noting that λ(n_k)=1, and as k→∞ the contribution to θ(n_k) by the primes indexed by I will converge on π/G∑_j∈ I1/p_j^2, and so {θ(n_k)} will have infinitely many terms within the interval (x-ϵ,x+ϵ), as desired. Hence, {θ(n)} is dense in (0,π).
A similar argument, adjusting line (†) appropriately, shows there exists an infinite sequence of integers {n_k}, with each n_k having λ(n_k)=-1, which sequence {θ(n_k)} converges to any x∈(-π, 0). Hence we have finally that {w(n)}, n∈ℕ, is dense in the unit circle in ℂ, as desired.
§ CONVERGENCE
We will now demonstrate the following fact about our series F(s).
Define
F_N(s) := ∑_n=1^Nw(n)/n^s,
where w(n) is as defined in (<ref>). Then
lim_N→∞ F_N(s) = F(s) = ∑_n=1^∞w(n)/n^s
converges for all s with Re(s)∈(1/2,1).
We will take advantage of the following fact:
Proposition 1.7.7, p43 in <cit.>
Let a(n) be a sequence, and define A(x) = ∑_n≤ x a(n). If |A(x)| ≤ Mx^α for all x ≥ 1, where α≥ 0, then [the Dirichlet series] ∑_n=1^∞a(n)/n^s is convergent for all s=σ + it with σ>α.
We will show that there exists an N_0 and M such that for all N>N_0, |∑_n=1^N w(n)| < MN^α for α∈(1/2,1). With the proposition above, this will demonstrate the convergence desired.
For each N ∈ℕ, fix α∈(1/2,1) and define J=⌊ N^α⌋ and K=⌊ N^1-α⌋. Then N=JK+R_N. (If it happens that R_N ≥ J, we instead define K=⌈ N^1-α⌉, so that in either case R_N<J). Let Θ_N = {arg(w(n)) ∈ [0,2π), n≤ N} be the set of principal arguments of w(n) with n≤ N, and let Θ_N^* = Θ_N - Θ_R_N be the set after removing the arguments of w(n) for n≤ R_N. Order the JK elements of Θ_N^* such that
θ_1,1 < θ_1,2 < ⋯ < θ_1,K < θ_2,1 < ⋯ < θ_j,k < ⋯ < θ_J,K.
Letting N^* = |Θ_N^*|= JK, we have
∑_n=R_N+1^R_N + N^* w(n) = ∑^J∑^K e^iθ_j,k = ∑^K∑^J e^iθ_j,k.
Therefore,
|∑_n=R_N+1^R_N + N^* w(n)| ≤|∑^J e^iθ_j,1| + ⋯ + |∑^J e^iθ_j,K|
≤ K ·max_k |∑^J e^iθ_j,k|.
Define |∑^J e^iθ_j| = max_k |∑^J e^iθ_j,k|, and we have
|∑_n=R_N+1^R_N + N^* w(n)| ≤ K|∑^J e^iθ_j| = |∑^J e^iθ_jK|.
Therefore,
2π/J|∑_n=R_N+1^R_N + N^* w(n)| ≤2π/J|∑^J e^iθ_jK| = |∑^J e^iθ_jK2π/J|.
Consider the partition of [0, 2π] induced by the {θ_j} and extend this partition to cover the interval [0,K2π] by
replacing each of the θ_j with {θ_j, θ_j+2π, θ_j+4π, ⋯, θ_j + (K-1)2π}. Now choose θ_1^* from among the first K values, θ_2^* from among the next K, and so forth, in such a way that the set {e^iθ_j^*} = {e^iθ_j}. Then we have that
|∑^J e^iθ_jK2π/J| =
|∑^J e^iθ_j^*K2π/J|.
This last sum is similar to a Riemann sum approximating I = ∫_0^K2π e^iθdθ (if each θ_j^* were necessarily in its associated interval of width K2π/J we would already have a Riemann sum). We will show that the norm of the partition of [0,K2π] naturally induced by the set {θ_j^*} goes to zero, making the corresponding sum a Riemann sum which converges to the indicated integral, and that in the limit the above sum converges to that Riemann sum. Specifically, we note that
N^1-α/N^α = N/N^2α = 1/N^2α - 1,
and 2α - 1>0 since α > 1/2. Therefore, as N→∞, N^1-α/N^α→ 0. Furthermore, since K≤ N^1-α+1, K2π/N^α→ 0. Also, if Δθ_J is the norm of the partition naturally induced by {θ_j^*} for a given J = ⌊ N^α⌋, then we have from the Density Lemma that as N→∞, Θ^*_N becomes dense, and so Δθ_J→ 0 (observing that the θ_j^* were each selected from among K as ordered in (<ref>) above, so that as J→∞, the density of Θ_N^* ensures that the width of the interval containing each set of K points in (<ref>) goes to zero. Furthermore, the J points θ_j^* are distributed among K subintervals of width 2π, and as N→∞, we have that J/K→∞, so each subinterval ends up with infinitely many points θ_j^*). Also, 0≤K2π/J≤Δθ_J for all such J. So Δθ_J →K2π/J, or, equivalently, K2π/J→Δθ_J.
In fact, if we let Δθ_j be the distance between any two successive θ_j^*, θ_j+1^*, then for any Δθ_j, either 0≤Δθ_j ≤K2π/J or K2π/J≤Δθ_j ≤Δθ_J, and so every Δθ_j →K2π/J as N→∞, or, equivalently, K2π/J→Δθ_j.
We have finally that
lim_N→∞2π/N^α|∑_n=R_N+1^n=R_N + N^* w(n)|
≤lim_N→∞|∑^J e^iθ_j^*K2π/N^α|
≤lim_N→∞|∑^J e^iθ_j^*K2π/J|
=lim_N→∞|∑^J e^iθ_j^*Δθ_j|
= lim_K→∞|∫_0^K2π e^iθdθ|
= 0.
We have shown that if we remove the first R_N points from consideration, then the magnitude of the sum of the remaining N^* points is O(N^α) (We actually showed a stronger condition but it suffices that the sum is O(N^α)). We will next show that the magnitude of the sum of the first R_N points is also O(N^α), in which case, by the Proposition mentioned above, the Dirichlet series F(s) converges for Re(s) ∈ (1/2, 1).
Our claim is that there exist numbers N_0, M ∈ℕ such that for all N>N_0,
|∑^R_N w(n) 2π/N^α| ≤ M
for all R_N = N-JK.
To demonstrate this claim, we first choose some M_0 ∈ℕ. Then, since
lim_N→∞|∑^N w(n) 2π/N|=|∫_0^2π e^iθ dθ| = 0
(since the sum on the left is equal to a Riemann sum approximating the integral, by a similar argument to that above),
then there exists N_0∈ℕ such that for all N>N_0, |∑^N w(n) 2π/N| ≤ M_0. Now, for each N>N_0 and letting R=N-JK, either R>N_0 or R≤ N_0. If R>N_0, clearly |∑_r=1^R w(r) 2π/R| ≤ M_0. On the other hand, if R≤ N_0, then let M_1 ∈ℕ such that M_1 ≥max_R_0≤ N_0|∑_r=1^R_0 w(r) 2π/R_0| and then certainly |∑_r=1^R w(r) 2π/R| ≤ M_1. So let M=max{M_0, M_1}, and it follows that for all N>N_0, for all R=N-JK, |∑_r=1^R w(r) 2π/R| ≤ M.
However, we want actually that |∑_r=1^R w(r) 2π/N^α| ≤ M. Now if R≤ N^α, then N^α = γ R for some γ≥ 1. We therefore have
|∑_r=1^R w(r) 2π/R| = |∑_r=1^R w(r) 2π/N^α/γ| = γ|∑_r=1^R w(r) 2π/N^α| ≤ M,
and the result follows since γ≥ 1.
Now, suppose that N^α < R and so J<R. Let K' = K+1, R'=R-J, and we have N=JK'+R', and if R'≤ J, the result follows similarly to above. Suppose to get a contradiction that R'>J, then let K” = K'+1 = K+2 and R” = R'-J = R-2J. Then for all N > N_0, we have N=JK” + R”, with R”>0. So N=JK” + R” = J(K+2) + R” = JK + 2J + R”. Recall that J=⌊ N^α⌋ and K=⌊ N^1-α⌋, so let N^α = J+x and N^1-α = K+y and observe that x,y∈[0,1). Then
N = N^α N^1-α = (J+x)(K+y) = JK +yJ + xK +xy,
but we have above that N= JK + 2J + R”, so therefore yJ + xK +xy = 2J + R”, with R”>0. So R” = (y-2)J + xK + xy > 0. However, since y∈[0,1), we know that (y-2)∈[-2,-1), and so
0 < R” < -J +xK + xy ⟹ J<xK+xy ⟹ J-xK < xy.
Now, since α∈(1/2, 1), then N^α > N^1-α, so either J>K or J=K and x>y. If J>K then J-K≥1 and therefore J-xK>1, but xy<1, which is a contradiction. Suppose then that J=K, then we have
0<R” =(y-2)J + xK +xy
= (x+y-2)J +xy
= J(x-1 + y-1) +xy,
and it follows that
J(x-1 + y-1) = R” -xy >0 (since R”≥ 1 is a positive integer while xy<1),
but the left side in (<ref>) is negative, a contradiction. Thus our original assumption R'>J is false.
We have therefore shown there exists M such that for all N > N_0, we have that | ∑_r=1^R w(r) 2π/N^α| ≤ M, unless R>J, in which case we let N=JK'+R' and the result holds for R'. Recall that in the beginning of the proof to this theorem, in the event that R>J we set K= ⌈ N^1-α⌉ corresponding to the case for R' here. In either case we have that for α∈(1/2,1) there exist M,N_0 such that for all N>N_0
|∑_n=1^N w(n)| ≤ (M/2π) N^α.
Therefore, by Proposition 1.1.7 from <cit.>, referenced previously, we have shown the convergence of F_N(s) to F(s) for all s with (s)∈(1/2, 1). This completes the proof of Theorem <ref>.
§ GENERALIZATION OF F_N TO F_M,N
Let us now extend our definition of F to a parametrized family of Dirichlet series F_m,N(s) with coefficients w_m(n) as follows:
F_m,N(s) := ∑_n=1^Nw_m(n)/n^s,
where w_m(n) = e^iθ_m(n), and
θ_m(n) = π/2(λ(n)-1) +
∑_j=1^Jψ_m(p_j^k_j),
where we define ψ_m(p^k) to be
ψ_m(p^k) = (1/m)ψ(p^k) = π/mp^2G (1 - ((p-1)/p)^k),
again letting n=p_1^k_1p_2^k_2⋯ p_J^k_J be the prime factorization of n. (We note that F(s),w(n) defined earlier are thus the special cases F_1(s),w_1(n).)
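As an illustrative aside (not part of the argument), the coefficients w_m(n) and the partial sums F_m,N(s) can be computed numerically. The Python sketch below assumes the reading ψ_m(p^k) = π/(m p^2 G)·(1 - ((p-1)/p)^k), with the constant G fixed earlier in the paper passed in explicitly; sympy is used only for the prime factorization, and the function names are illustrative.

import cmath
import math
from sympy import factorint

def w_m(n, m, G):
    # Prime factorization n = p_1^k_1 * ... * p_J^k_J (empty for n = 1).
    factors = factorint(n)
    omega = sum(factors.values())            # Omega(n): prime factors counted with multiplicity
    liouville = (-1) ** omega                # lambda(n)
    theta = (math.pi / 2) * (liouville - 1)  # 0 if lambda(n) = 1, -pi (i.e. pi mod 2*pi) otherwise
    for p, k in factors.items():
        # Assumed reading of psi_m(p^k); G is the constant defined earlier in the paper.
        theta += math.pi / (m * p**2 * G) * (1.0 - ((p - 1) / p) ** k)
    return cmath.exp(1j * theta)

def F_mN(s, m, N, G):
    # Partial sum F_{m,N}(s) = sum_{n=1}^{N} w_m(n) / n^s.
    return sum(w_m(n, m, G) / n**s for n in range(1, N + 1))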
We state first an important corollary to Lemma <ref>, omitting the proof, which is substantially similar to the proof of Lemma <ref>.
Given m∈ℕ, the arguments A={θ | θ= arg(w_m(n)), n∈ℕ} for w_m(n) as defined above are dense in the set A_m=[0,π/m) ∪(π,π+π/m).
We now demonstrate a crucial fact about F_m,N(s).
For fixed s∈ℂ with (s)∈(1/2,1),
F_m(s):= lim_N→∞ F_m,N(s) = lim_N→∞∑_n=1^N w_m(n)/n^s = ∑_n=1^∞w_m(n)/n^s
converges uniformly in m∈ℕ.
The convergence of F_m,N(s) with m>1 follows a similar argument to Theorem <ref>. We note first that the difference between F_1,N and F_m,N is exactly that the coefficients w_1(n) are scaled to be w_m(n) with arguments in the interval [0, π/m) when λ(n)=1 or (π, π+π/m) when λ(n)=-1. Thus, proof of convergence is accomplished by replacing the integral ∫_0^K2π e^iθdθ in (<ref>) with ∫_0^K2πδ(θ) e^iθdθ, where δ(θ) is an indicator function defined to be 1 when θ∈ A_m and 0 otherwise, here letting
A_m = ⋃_a=0^∞[0+2π a, π/m+2π a) ∪(π+2π a, π + π/m+2π a),
so that the integrand is nonzero only on the intervals where the arguments of the w_m(n) may exist.
The convergence of each F_m,N(s) to F_m(s) for (s)∈(1/2, 1) then follows from the argument set forth in Theorem <ref>.
It remains to be seen that this convergence is uniform in m∈ℕ. Defining J and K as in the proof of Theorem <ref>, we had from (<ref>) that
lim_N→∞2π/N^α|∑_n∈Θ^*_N w_1(n)|≤lim_N→∞|∑_j=1^J e^iθ_j^*K2π/N^α|,
and we established that the sum on the right in (<ref>) converges to a Riemann sum which itself converges to zero. Hence, for any ϵ>0, there exists N_0(ϵ) such that for all N>N_0(ϵ),
|∑_j=1^J e^iθ_j^*K2π/N^α|
< ϵ.
Let N>N_0 and consider the set Θ_N^* of arguments of w_1(n) for R_N < n≤ N as before. (We note that the proof of the corresponding result for F_m,N regarding the first R_N points from the latter part of the proof of Theorem <ref> follows the same argument as in that proof, and so we will consider here only the convergence to zero of the magnitude of the sum of the JK-many points in Θ_N^*.) For any m∈ℕ, m>1, let Ψ_m,N = Θ_N^* ∩ A_m for A_m as defined in (<ref>). Then if we restrict the sum in (<ref>) to include only those θ_j^* ∈Ψ_m,N, we have now
|∑_θ_j^* ∈Ψ_m,N e^iθ_j^*K2π/N^α|
< ϵ,
since this last sum, by the same argument as previously, is also a Riemann sum, which in this case converges to
∫_0^K2πδ(θ) e^iθdθ = 0,
with δ(θ) as defined above. That the sum in (<ref>) will be less than ϵ for all N greater than the same N_0(ϵ) is due to the fact that the intervals of the integral zeroed out by the indicator function δ(θ), namely where θ∉ A_m, would otherwise exactly cancel each other in the integral. Removing the points θ^*_j ∉Ψ_m,N has the effect of being a perfectly convergent Riemann sum for these intervals, so that if the total error of a given Riemann sum were ϵ = ϵ_m + ϵ_0, with ϵ_0 being the error from the θ_j^* ∉Ψ_m,N, then the sum in (<ref>) would effectively have ϵ_0=0 and the result follows.
Now, let the norm of the partition of A_m ∩[0,(K-1)2π) induced by θ_j^* ∈Ψ_m,N in (<ref>) be Δ_m, and let Δ_m,N be the norm of the partition induced by the θ_j^* from F_m,N, noting that these are the arguments θ_1(n) of the J points w_1(n) from the proof of Theorem <ref>, scaled to be θ_m(n)∈ A_m. Call this set of points Θ_m,N, and observe that Δ_m,N < Δ_m. Therefore, the partition induced by Θ_m,N is finer than the partition induced by Ψ_m,N, and since both converge to ∫_0^K2πδ(θ) e^iθdθ = 0, we must have that
|∑_θ_j^* ∈Θ_m,N e^iθ_j^*K2π/N^α|
≤|∑_θ_j^* ∈Ψ_m,N e^iθ_j^*K2π/N^α|
< ϵ.
Note that our choice of m>1 was arbitrary and that (<ref>) holds for all N>N_0, where N_0(ϵ) depends only on ϵ and not on our choice of m. Hence,
lim_N→∞ F_m,N(s) = F_m(s)
converges uniformly in m, as desired.
It remains to be seen that F_m(s) converges as m→∞, for which we need the following result.
F_m(s) is Cauchy.
Fix s∈ℂ such that (s) ∈(1/2,1) and let ϵ>0. We will show that there exists M∈ℕ such that for all m,q>M, | F_m(s) - F_q(s) | < ϵ.
Since F_m,N(s) converges to F_m(s) uniformly in m, there exists N_0 such that for all N>N_0 we have that |F_m,N(s) - F_m(s)| < ϵ/4 for all m. In particular, for any such N and for any m,q∈ℕ, we have that
|F_m(s) - F_q(s)| = |F_m(s) - F_m,N(s) + F_m,N(s) - F_q,N(s) + F_q,N(s) - F_q(s)|
≤|F_m(s) - F_m,N(s)|
+ |F_m,N(s) - F_q,N(s)| + |F_q,N(s) - F_q(s)|
< ϵ/4 + |F_m,N(s) - F_q,N(s)|+ ϵ/4.
We thus seek M∈ℕ such that for all m,q>M, |F_m,N(s) - F_q,N(s)| < ϵ/2. Now, F_m,N(s) := ∑_n=1^N w_m(n)/n^s, and likewise for F_q,N(s), so
|F_m,N(s) - F_q,N(s)|
= |∑_n=1^N w_m(n)/n^s -
∑_n=1^N w_q(n)/n^s|
= |∑_n=1^N (w_m(n)-w_q(n)/n^s)|
≤∑_n=1^N |w_m(n)-w_q(n)/n^s|.
However, for each n∈{1,…, N},
|w_m(n)-w_q(n)/n^s| = |e^iθ_m(n)-e^iθ_q(n)/n^s|
≤|θ_m(n)-θ_q(n)|,
if we consider the principal arguments. Note that if λ(n)=1, θ_m(n) ∈[0,π/m), and if λ(n)=-1, θ_m(n) ∈(π,π+π/m), for all m, and likewise for θ_q(n) for all q. Hence, choose M sufficiently large such that π/M < ϵ/(2N). Then for all m,q>M and each n, |θ_m(n)-θ_q(n)| < π/M < ϵ/(2N), and therefore
∑_n=1^N |w_m(n)-w_q(n)/n^s| < ϵ/2,
as desired. Therefore, F_m(s) is Cauchy.
We are nearly complete, and for our final result we will use the following theorem.
If s(n,m) is a double sequence such that
* the iterated limit lim_m→∞(lim_n→∞s(n,m)) = a, and
* the limit lim_n→∞ s(n,m) exists uniformly in m∈ℕ
then the double limit lim_n,m→∞ s(n,m) = a.
It follows from Theorems <ref>, <ref>, and <ref> that F_m(s) converges to some function W(s). Since each w_m(n) clearly converges to λ(n), we have that
W(s) := lim_m→∞ F_m(s) =lim_m→∞∑_n=1^∞w_m(n)/n^s
= ∑_n=1^∞(lim_m→∞w_m(n)/n^s)
= ∑_n=1^∞λ(n)/n^s,
where passing the limit inside the infinite sum is justified by the uniform convergence in m of the sums, and it follows that F_m(s) converges uniformly in m to W(s) = ∑_n=1^∞λ(n)/n^s. Therefore, W(s) converges for all s with (s) ∈(1/2, 1), and we have our final result:
Define ζ(s) as the analytic continuation of the function given by
ζ(s) = ∑_n 1/n^s
for s∈ℂ with (s)>1. Then ζ(s)≠ 0 for all s with (s) ∈(0, 1/2) ∪(1/2, 1).
We have from above that W(s) = ∑_n=1^∞λ(n)/n^s, and therefore, this series being well-known, that W(s) = ζ(2s)/ζ(s). Furthermore, W(s) converges for all s with (s)∈(1/2, 1).
Since ζ(2s) is known to be absolutely convergent and nonzero in this region, we see that 1/ζ(s) converges everywhere in the same region, and therefore we have that ζ(s)≠ 0 when (s)∈(1/2, 1). The symmetry of ζ(s) about the line (s)=1/2 for s with (s)∈(0, 1) being well known, the result follows.
§ ACKNOWLEDGEMENTS
The author would like to thank Dr. Abdul Hassen and to especially thank Dr. Marcus Wright, both of Rowan University, for their invaluable guidance and help checking and proofreading this paper. The author thanks Drs. Barry Mazur and William Stein for their excellent expository book Prime Numbers and the Riemann Hypothesis, which inspired the present work. The author dedicates this work to his dear friend, the late Dr. Tom Osler.
|
http://arxiv.org/abs/2306.12306v2
|
20230621143603
|
Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift
|
[
"Florian Seligmann",
"Philipp Becker",
"Michael Volpp",
"Gerhard Neumann"
] |
cs.LG
|
[
"cs.LG"
] |
Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift
Florian Seligmann (correspondence to [email protected])
Karlsruhe Institute of Technology
Philipp Becker
Karlsruhe Institute of Technology
Michael Volpp
Bosch Center for Artificial Intelligence
Gerhard Neumann
Karlsruhe Institute of Technology
July 31, 2023
=================================================================================================================================================================================================================================================================================================================================================
Bayesian deep learning (BDL) is a promising approach to achieve well-calibrated predictions on distribution-shifted data.
Nevertheless, there exists no large-scale survey that evaluates recent SOTA methods on diverse, realistic, and challenging benchmark tasks in a systematic manner.
To provide a clear picture of the current state of BDL research, we evaluate modern BDL algorithms on real-world datasets from the WILDS collection containing challenging classification and regression tasks, with a focus on generalization capability and calibration under distribution shift.
We compare the algorithms on a wide range of large, convolutional and transformer-based neural network architectures.
In particular, we investigate a signed version of the expected calibration error that reveals whether the methods are over- or underconfident, providing further insight into the behavior of the methods.
Further, we provide the first systematic evaluation of BDL for fine-tuning large pre-trained models, where training from scratch is prohibitively expensive.
Finally, given the recent success of Deep Ensembles, we extend popular single-mode posterior approximations to multiple modes by the use of ensembles.
While we find that ensembling single-mode approximations generally improves the generalization capability and calibration of the models by a significant margin, we also identify a failure mode of ensembles when finetuning large transformer-based language models.
In this setting, variational inference based approaches such as last-layer Bayes By Backprop outperform other methods in terms of accuracy by a large margin, while modern approximate inference algorithms such as SWAG achieve the best calibration.
§ INTRODUCTION
Real-world applications of deep learning require accurate estimates of the model's predictive uncertainty <cit.>.
This is particularly relevant in safety-critical applications of deep learning, such as medical applications <cit.> and self-driving cars <cit.>.
Therefore, we want our models to be calibrated: A model should be confident about its prediction if and only if the prediction will likely be correct.
Only then it is sensible to rely on high-confidence predictions, and, e.g., to contact a human expert in the low-confidence regime <cit.>.
Calibration is particularly relevant when models are evaluated on out-of-distribution (o.o.d.) data, i.e. on inputs that are very different from the training data, and hence, the model cannot always make accurate predictions.
However, typical deep neural networks are highly overconfident on o.o.d. data <cit.>.
Bayesian deep learning (BDL) promises to fix this overconfidence problem by marginalizing over the posterior of the model's parameters.
This process takes all explanations that are compatible with the training data into account.
As desired, explanations will disagree on o.o.d. data, so that predictions will have low confidence in this regime.
While computing the exact parameter posterior in BDL is infeasible, many approximate inference procedures exist to tackle this problem, aiming at making BDL applicable to real-world problems.
Yet, recent BDL algorithms are typically only evaluated on the comparatively small and curated MNIST <cit.>, UCI <cit.>, and CIFAR <cit.> datasets with artificial o.o.d. splits.
Existing BDL surveys <cit.> concentrate on a few popular but relatively old algorithms such as Bayes By Backprop, Deep Ensembles, and Monte Carlo Dropout.
In the light of recent calls for more realistic benchmarks of state-of-the-art (SOTA) algorithms <cit.> – with some experts going as far as calling the current state of BDL a “replication crisis”[<https://nips.cc/Conferences/2021/Schedule?showEvent=21827>] – we aim to provide a large-scale evaluation of recent BDL algorithms on complex tasks with large, diverse neural networks.
Contributions.
i) We systematically evaluate a comprehensive selection of modern, scalable BDL algorithms on large image- and text-based classification and regression datasets from the WILDS collection <cit.> that originate from real-world, safety-critical applications of deep learning (<Ref>).
In the spirit of <cit.>, we focus on generalization capability and calibration on o.o.d. data, but consider more diverse and modern algorithms (<Ref>) on realistic datasets with distribution shift.
In particular, we include recent advances in variational inference such as natural gradient descent (iVON <cit.>) and low-rank posterior approximations (Rank-1 VI <cit.>).
Furthermore, we use modern neural network architectures such as various ResNets <cit.>, a DenseNet <cit.>, and a transformer architecture <cit.>.
ii) We present the first systematic evaluation of BDL for finetuning large pre-trained models and show that using BDL gives a significant performance boost compared to standard deterministic finetuning (<Ref>).
iii) Inspired by the success of Deep Ensembles <cit.>, we systematically evaluate the benefit of ensembling single-mode posterior approximations <cit.> (<Ref>).
iv) We use a signed extension of the expected calibration error (ECE) called the signed expected calibration error (sECE) that can differentiate between overconfidence and underconfidence, allowing us to better understand in which ways models are miscalibrated (<Ref>).
v) We compare the posterior approximation quality of the considered algorithms using the HMC samples from <cit.> (<Ref>) and show that modern single-mode BDL algorithms approximate the parameter posterior better than Deep Ensembles, with further gains being achieved by ensembling these algorithms.
Overall, our work is similar in spirit to <cit.>, but we compare the algorithms on more diverse datasets and focus on pure calibration metrics, thereby revealing failure modes of SOTA BDL algorithms that are not yet present in the literature.
We provide code for all implemented algorithms and all evaluations[<https://github.com/Feuermagier/Beyond_Deep_Ensembles>].
§ RELATED WORK
Several recent publications <cit.> review the SOTA in uncertainty quantification using Bayesian models without providing experimental results.
<cit.> compare a wide range of Markov Chain Monte Carlo <cit.> and approximate inference <cit.> methods on toy classification and regression datasets.
<cit.> perform a large-scale experimental evaluation of a small selection of popular BDL algorithms on o.o.d. data and conclude that Deep Ensembles <cit.> perform best while stochastic variational inference <cit.> performs worst.
<cit.> use a similar selection of algorithms but only evaluate on a single, large computer vision task not considering o.o.d. data.
<cit.> artificially create o.o.d. splits for UCI datasets <cit.> and again find that variational inference performs worse than the Laplace approximation <cit.>.
<cit.> and <cit.> compare Monte Carlo Dropout <cit.> and Deep Ensembles in the context of semantic segmentation and depth completion, but, again, do not consider o.o.d. data.
Competitions such as <cit.> and <cit.> also provide insights into the performance of different algorithms.
However, the employed algorithms are typically highly tuned and modified for the specific tasks and are therefore of limited use to assess the general quality of the underlying methods in more diverse settings.
Importantly, all winners of <cit.> use ensemble-based algorithms.
The work of <cit.> is the most similar to ours, as they evaluate several BDL algorithms, including ensembles of single-mode posterior approximations, on two large image-classification datasets and consider o.o.d. data.
Compared to <cit.>, we evaluate a different set of algorithms such as SWAG <cit.> and natural gradient descent variational inference <cit.> on a more diverse selection of datasets and network architectures, including transformer-based models and finetuning tasks, thereby revealing new failure modes of SOTA BDL methods.
§ BAYESIAN DEEP LEARNING ALGORITHMS
We assume a neural network with parameters θ that models the likelihood p(y|x,θ) of an output y given an input x.
By treating θ as a random variable and given a training dataset D = {(x_i,y_i) | i=1,..,N} of input-output pairs, the parameter posterior p(θ|D) is given by Bayes' theorem as
p(θ|D) = ∏_i p(y_i|x_i,θ) p(θ)/p(D),
where p(θ) is a prior over the parameters.
The posterior assigns higher probability to parameters that fit the training data well and conform to our prior beliefs.
Using p(θ|D), a prediction for an input x is defined as
p(y|x,D) = ∫ p(y|x,θ) p(θ|D) dθ = E_θ∼ p(θ|D)[p(y|x,θ)].
This so-called Bayesian model average (BMA) <cit.> encompasses the information of all explanations of the training data that are consistent with the parameter posterior.
The BMA is especially valuable when dealing with large neural networks that are typically underspecified by the training data, where marginalizing over parameters can mitigate overfitting and promises significant accuracy and calibration gains <cit.>.
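In practice, the BMA above is estimated by a Monte Carlo average over parameter samples drawn from an approximate posterior. The following PyTorch sketch is purely illustrative; in particular, load_sample is a hypothetical helper that writes one sampled parameter vector into the network.

import torch
import torch.nn.functional as F

@torch.no_grad()
def bayesian_model_average(model, posterior_samples, x, load_sample):
    # Monte Carlo estimate of p(y | x, D): average the predictive distributions
    # obtained from parameter samples drawn from (an approximation of) the posterior.
    probs = []
    for theta in posterior_samples:
        load_sample(model, theta)                  # write the sampled parameters into the network
        probs.append(F.softmax(model(x), dim=-1))  # per-sample class probabilities
    return torch.stack(probs).mean(dim=0)          # averaged predictive distribution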
§.§ Scalable Approximations for Bayesian Deep Learning
Figure: Posterior Approximation Types (panels: MAP, Single-Mode, Ensemble, MultiX). MAP approximates a single posterior mode with a point estimate, while probabilistic single-mode approximations additionally capture the shape of the mode. Deep Ensembles approximate multiple modes with a mixture of point estimates. Likewise, MultiX employs a mixture of single-mode approximations to capture the shape of multiple modes. Figure adapted from <cit.>.
As computing the marginalization integral defining the normalization constant of the parameter posterior (<Ref>) is intractable for neural networks, we have to resort to approximations.
Approximate inference algorithms approximate the posterior, either by sampling from it or by inferring approximating distributions.
Sampling-based Markov Chain Monte Carlo (MCMC) methods <cit.> such as Hamiltonian Monte Carlo (HMC) <cit.> sample directly from the true posterior and are therefore asymptotically exact.
However, they are computationally very expensive and hence typically intractable in the context of BDL.
Deterministic methods such as variational inference construct local approximations at a mode of the parameter posterior and are generally more computationally performant than MCMC <cit.>, as they transform the posterior inference problem into an optimization problem that can be efficiently solved with standard gradient-based optimization techniques <cit.>.
Therefore, we focus on these algorithms in this work.
This framework also encompasses standard deep learning, which is equivalent to a “Maximum A Posteriori” estimate, i.e., a point estimate at the posterior maximum.
In this section, we give a brief overview of the algorithms that we evaluate.
See <Ref> for more detailed explanations and <Ref> for implementation details.
Variational Inference.
Variational inference (VI) minimizes the Kullback-Leibler divergence <cit.> between the approximate posterior and the true posterior <cit.>.
Bayes By Backprop (BBB) <cit.> approximates the posterior with a diagonal Gaussian distribution and optimizes the mean and variance parameters with Stochastic Gradient Descent (SGD) <cit.>.
Rank-1 variational inference (Rank-1 VI) <cit.> in contrast uses a low-rank posterior approximation, which reduces the number of additional parameters and allows the use of multiple components in the low-rank subspace.
The improved Variational Online Newton (iVON) algorithm <cit.> still uses a diagonal Gaussian posterior but uses second-order information to better optimize the distribution parameters with natural gradients.
Stein Variational Gradient Descent (SVGD) <cit.> is a non-parametric VI algorithm that approximates the posterior with multiple point estimates.
SVGD is similar to a Deep Ensemble (see below) but adds repulsive forces between the particles to push them away from each other in parameter space.
Other Algorithms.
<cit.> introduce Deep Ensembles that approximate the posterior with a few, typically five to ten, independently trained MAP models.
As such, Deep Ensembles were originally considered a competing approach to Bayesian models <cit.> but can be viewed as Bayesian as they form a sum of delta distributions that approximate the posterior <cit.>.
We follow this interpretation.
The Laplace approximation <cit.> approximates the posterior with a second-order Taylor expansion around the parameters of a MAP model.
We only consider the last-layer Laplace approximation <cit.> with diagonal and Kronecker-factorized <cit.> posterior approximations, which <cit.> find to achieve the best tradeoff between performance and calibration.
Monte Carlo Dropout (MCD) <cit.> utilizes the probabilistic nature of dropout units that are part of many common network architectures to construct an approximation of a posterior mode.
Stochastic Weight Averaging-Gaussian (SWAG) <cit.> periodically stores the parameters during SGD training and uses them to build a low-rank Gaussian posterior approximation.
§.§ MultiX
While single-mode posterior approximations such as BBB and SWAG capture the shape of a single mode of the parameter posterior, Deep Ensembles cover multiple modes but approximate each with a single point estimate.
Hence, ensembling single-mode approximations promises even better posterior coverage and therefore improved uncertainty estimates (see <Ref>).
This concept is not new: <cit.> experiment with an ensemble of BBB models on small datasets.
<cit.> use an ensemble of Concrete Dropout <cit.> models and <cit.> use MCD models.
Both report accuracy improvements compared to a Deep Ensemble.
<cit.> introduce MultiSWAG, an ensemble of SWAG <cit.> models.
The winning teams of <cit.> also show that ensembling Bayesian neural networks yields good posterior approximations.
Similar to <cit.> and <cit.>, we ensemble all considered single-mode posterior approximations (<Ref>) to assess the performance gains on a per-algorithm basis.
We use the term “MultiX” to refer to an ensemble of models trained with algorithm “X”.
We make an exception for “MultiMAP”, which we keep referring to as Deep Ensemble for consistency with the existing literature.
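At prediction time, MultiX simply averages the (already posterior-averaged) predictive distributions of its members. The sketch below is illustrative only; predictive_fn stands for a member's own predictive routine, e.g. a Monte Carlo average over that member's posterior samples as sketched earlier.

import torch

@torch.no_grad()
def multix_predict(members, predictive_fn, x):
    # MultiX: average the predictive distributions of several independently
    # trained single-mode approximations. predictive_fn(member, x) returns the
    # class probabilities of one member, already averaged over that member's
    # own posterior samples.
    probs = torch.stack([predictive_fn(member, x) for member in members])
    return probs.mean(dim=0)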
§ CALIBRATION METRICS
A calibrated model is defined as a model that makes confident predictions if and only if they will likely be accurate.
While this definition directly implies a calibration metric for classification tasks <cit.>, it has to be adapted for regression tasks, as “being accurate” is not a binary property in the regression case.
§.§ Unsigned Calibration metrics
Calibrated Classification.
The calibration of a classification model can be measured with the expected calibration error (ECE) <cit.>.
By partitioning the interval [0,1] into M equally spaced bins and grouping the model's predictions into those bins based on their confidence values, we can calculate the average accuracy and confidence of each bin.
The expected calibration error is then given by ECE = ∑_m=1^M (|B_m|/|D'|) |acc(B_m) - conf(B_m)|,
where D' is the evaluation set, B_m is the set of predictions in the m-th bin, and acc(B_m) and conf(B_m) are the average accuracy and confidence of the predictions in B_m (see <Ref> for details).
An ECE of zero indicates perfect calibration.
Calibrated Regression.
The confidence intervals of the predictive distribution can be used to measure the calibration of a regression model.
Selecting M confidence levels ρ_m allows the computation of a calibration error based on the observed probability p_obs(ρ_m), calculated as the fraction of ground-truth outputs that fall into the ρ_m-confidence interval of their respective predictive distributions: QCE = 1/M∑_m=1^M |p_obs(ρ_m) - ρ_m|.
We refer to this as the quantile calibration error (QCE), which simply replaces the quantiles in the definition of the calibration error from <cit.> by confidence intervals.
Using the confidence intervals allows a simpler interpretation of the resulting reliability diagrams (see <Ref>).
§.§ Signed Calibration Metrics
Models can be miscalibrated in two distinct ways: Overconfident models make inaccurate predictions with high confidence, and underconfident models make accurate predictions with low confidence.
Arguably, overconfidence is worse in practice when practitioners want to rely on the model's confidence to assess whether they can trust a prediction, for example in safety-critical applications of deep learning.
However, none of the presented metrics can differentiate between overconfidence and underconfidence.
Until now, this information was only apparent in reliability diagrams <cit.>.
We propose two simple extensions of the ECE and the QCE that condense the information about overconfidence and underconfidence into a single scalar value by removing the absolute values: sECE and sQCE.
We define these signed calibration metrics as
sECE = ∑_m=1^M (|B_m|/|D'|) (acc(B_m) - conf(B_m)) and sQCE = 1/M∑_m=1^M (p_obs(ρ_m) - ρ_m).
A positive signed calibration error indicates that a model makes predominantly underconfident predictions and a negative signed calibration error indicates predominantly overconfident predictions.
Perfectly calibrated models have a sECE/sQCE of zero.
For models that are overconfident for some inputs but underconfident for others the signed calibration metrics may be zero, even though the model is not perfectly calibrated.
This is typically not an issue in practice, as our experiments in <Ref> show that the absolute value of the signed metrics is usually very close to the absolute value of the corresponding unsigned metric, as most models are either overconfident or underconfident for nearly all predictions.
Nevertheless, we always report the signed calibration metrics together with the unsigned calibration metrics to avoid any ambiguity.
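For reference, the following NumPy sketch computes the sECE together with the ECE from per-prediction confidences and 0/1 correctness indicators; the function and variable names are illustrative and do not refer to our released code.

import numpy as np

def signed_ece(confidences, correct, n_bins=10):
    # sECE keeps the sign of acc(B_m) - conf(B_m); the ECE takes its absolute value.
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    sece, ece = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf >= lo) & (conf < hi) if hi < 1.0 else (conf >= lo) & (conf <= hi)
        if not in_bin.any():
            continue
        gap = corr[in_bin].mean() - conf[in_bin].mean()  # acc(B_m) - conf(B_m)
        weight = in_bin.sum() / conf.size                # |B_m| / |D'|
        sece += weight * gap
        ece += weight * abs(gap)
    return sece, ece

# Example: predominantly overconfident predictions yield a negative sECE.
sece, ece = signed_ece(confidences=[0.9, 0.8, 0.95, 0.7], correct=[1, 0, 1, 0])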
§ EMPIRICAL EVALUATION
For our comparison of the BDL algorithms introduced in <Ref>, we focus on i) the ability of the models to generalize to realistic distribution-shifted data, ii) the calibration of the models under distribution shift, and iii) how well the models approximate the true parameter posterior.
To assess the generalization capability and calibration of the models under realistic distribution shift, we use a subset of the WILDS dataset collection <cit.>.
We assess the posterior approximation quality by comparing the model's predictive distributions to those of the HMC approximation provided by <cit.> for CIFAR-10 <cit.>.
We also report results for a subset of the smaller UCI <cit.> and UCI-Gap <cit.> tabular regression datasets in <Ref>.
<Ref> contains information about the used computational resources, and details regarding hyperparameters, training procedures, and additional results can be found in <Ref>.
The results on all datasets are reported with a 95% confidence interval.
§.§ The WILDS Datasets
WILDS consists of ten diverse datasets that originate from real-world applications of deep learning in which models need to perform well under distribution shift.
Standard o.o.d. datasets such as MNIST-C <cit.>, CIFAR-10-C <cit.> and UCI-Gap <cit.> create distribution-shifted data by selectively removing data from the training split or artificially adding data corruptions onto the data in the evaluation split.
WILDS represents real-world distribution shifts and is therefore more suitable for an application-oriented evaluation of BDL.
We systematically evaluate all considered algorithms (see <Ref>) on six of the ten datasets: The image-based regression task PovertyMap, the image classification tasks iWildCam, FMoW, and RxRx1, and the text classification tasks CivilComments and Amazon.
Aside from PovertyMap, all datasets are finetuning tasks, where we initialize the model's parameters from a model that has been pre-trained on a similar task.
We also evaluate most algorithms on the Camelyon17 image classification dataset but find that the performance degradation on the o.o.d. evaluation split is to a large part a consequence of the use of batch normalization rather than the o.o.d. data, making the dataset less interesting for a fair comparison on o.o.d. data (see <Ref> for details).
As we want to evaluate the posterior approximation, generalization, and calibration capability of all models given the true parameter posterior, none of our models use the metadata (e.g. location, time) associated with the input data, nor do we consider approaches that are specifically designed for o.o.d. generalization or augment the dataset, for example by re-weighting underrepresented classes, contrary to the algorithms evaluated by <cit.>.
Figure: PovertyMap-wilds. Worst urban/rural Pearson coefficient between the model's predictions and the ground truth (higher is better) plotted against the sQCE on the o.o.d. test split of the image-based regression task. All models achieve similar, but noisy <cit.>, Pearson coefficients, indicating similar generalization capabilities. Multi-mode approximations are consistently better calibrated than single-mode approximations (note that Rank-1 VI's components and SVGD's particles give them multi-mode approximation capabilities). Regarding calibration, the relative ordering of the single-mode models does not translate to the MultiX models: BBB is among the best-calibrated single-mode models, but MultiBBB is the worst-calibrated MultiX model. Laplace and SWAG are very similarly calibrated, so the data points of SWAG are hidden behind those of Laplace. iVON performs significantly worse than the other algorithms and is therefore excluded.
Large-Scale Regression.
PovertyMap-wilds <cit.> is an image-based regression task, where the goal is to better target humanitarian aid in Africa by estimating the asset wealth index of an area using satellite images.
As the task is significantly easier when buildings are visible in the images, the evaluation set is split into images containing urban and images containing rural areas.
The accuracy of the models is evaluated on both splits by the Pearson coefficient between their predictions and the ground truth, and the worst Pearson coefficient is used as the main evaluation metric.
All models are based on a ResNet-18 <cit.>.
See <Ref> for the Pearson coefficient and sQCE on the o.o.d. evaluation split and <Ref> for further details.
Finetuning of CNNs.
iWildCam-wilds <cit.> is an image classification task that consists of animal photos taken by camera traps across the world.
The model's task is to determine which of 182 animal species can be seen in the image.
As rare animal species, which are of special interest to researchers, are naturally underrepresented in the dataset, the macro F1 score is used to evaluate the predictive performance.
The o.o.d. evaluation split consists of images from new camera locations.
All models are based on a ResNet-50 <cit.>.
See <Ref> for the macro F1 score and sECE on the o.o.d. evaluation split and <Ref> for further details.
FMoW-wilds (Functional Map of the World) <cit.> is an image classification task, where the inputs are satellite images and the class is one of 62 building and land use categories.
The o.o.d. evaluation split consists of images from different years than the images in the training set.
Models are separately evaluated on five geographical regions of the world, with the lowest accuracy taken as the main evaluation metric.
All models are based on a DenseNet-121 <cit.>.
See <Ref> for the accuracy and sECE on the region of the o.o.d. evaluation split the models perform worst on and <Ref> for further details.
RxRx1-wilds <cit.> is an image classification task, where the inputs are three-channel images of cells, and the classes are 1139 applied genetic treatments.
The o.o.d. evaluation split is formed by images from different experimental batches than the training data.
Following <cit.>, we only use three of the six available input channels to limit the computational complexity of the models.
This makes the task considerably harder and leads to the low accuracy of the models, but makes our results comparable to those of <cit.>.
All models are based on a ResNet-50 <cit.>.
See <Ref> for the accuracy and sECE on the o.o.d. evaluation split and <Ref> for further details.
Finetuning of Transformers.
CivilComments-wilds <cit.> is a binary text classification dataset, where the model's task is to classify whether a given comment is toxic or not.
The comments are grouped based on whether they mention certain demographic groups, such as LGBTQ or Muslim identities.
Models are evaluated based on the group on which they achieve the lowest accuracy on the evaluation set.
All models are based on the DistilBERT architecture <cit.>.
See <Ref> for the accuracy and sECE on the group of the o.o.d. evaluation split the models perform worst on and <Ref> for further details.
Amazon-wilds <cit.> consists of textual product reviews, where the task is to predict the star rating from one to five.
The o.o.d. evaluation split consists of reviews from reviewers that are not part of the training split.
Models are evaluated based on the accuracy for the reviewer at the 10% quantile of per-reviewer accuracies.
All models are based on DistilBERT <cit.>.
See <Ref> for the accuracy and sECE on the o.o.d. evaluation split and <Ref> for further details.
§.§ The Corrupted CIFAR-10 Dataset
CIFAR-10-C <cit.> is a corrupted version of the evaluation split of the image classification dataset CIFAR-10 <cit.>, where images are corrupted with increasing levels of noise, blur, and weather and digital artifacts.
We compare the considered algorithms on the standard evaluation split of CIFAR-10 as well as the corruption levels 1, 3, and 5 of CIFAR-10-C.
Following <cit.>, all of our models on CIFAR-10-(C) are based on the ResNet-20 architecture.
See <Ref> for details.
§.§ Generalization to Realistic Distribution Shift
We measure the generalization capability of the models with the task-specific accuracy metrics proposed by <cit.> that are based on the real-world origin of the respective tasks.
The metrics typically emphasize the performance on groups or classes that are underrepresented in the training data, as avoiding bias against these groups is crucial in safety-critical applications of BDL.
Except for the text classification tasks, MultiX always generalizes better than single-mode posterior approximations.
Overall, the relative ordering of the MultiX models depends on the dataset and in many cases does not correlate with the relative ordering of the corresponding single-mode approximations.
Large-Scale Regression.
All models achieve similar Pearson coefficients, with MultiX being slightly more accurate.
The Deep Ensemble is competitive with the best performing algorithm of the WILDS leaderboard <cit.> with a Pearson coefficient of 0.52 compared to 0.53 of C-Mixup <cit.>.
However, due to the large standard errors resulting from the different difficulties of the folds, the results are not significant.
Note that <cit.> report similarly large standard errors.
Finetuning of CNNs.
Confirming the overall trend, MultiX models generalize better than single-mode models, with MultiSWAG and MultiMCD performing particularly well.
On iWildCam, the single-mode posterior approximations SWAG and MCD are competitive with the Deep Ensemble.
SVGD performs similarly to MAP, even though it is based on an ensemble, likely due to the repulsive forces pushing the particles away from the well-performing pre-trained model.
While the VI algorithms' accuracy is similar to the accuracy of MAP on iWildCam and FMoW, all VI algorithms except SVGD perform significantly worse than the non-VI algorithms on RxRx1.
Similarly, Laplace underfits on FMoW and RxRx1 (see <Ref> and <Ref>).
Finetuning of Transformers.
BBB and Rank-1 VI are the most accurate models on both tasks, with no benefit from the multiple components of Rank-1 VI.
Interestingly, iVON is significantly less accurate than BBB, even though it is also based on mean-field VI, indicating that the natural gradient-based training is disadvantageous on the transformer-based BERT architecture.
To see whether the better performance of BBB is due to less regularization compared to MAP, we also experiment with a smaller weight decay factor for MAP on CivilComments.
While we find that the accuracy increases, BBB is still more accurate (see <Ref>).
Finally, we also check whether the better performance of BBB is due to its last-layer nature.
We experiment with last-layer versions of MCD and SWAG on Amazon (see <Ref>), but find that both are still significantly less accurate than BBB.
Figure: Text classification with pre-trained transformers. Two panels: CivilComments-wilds (worst-group accuracy, higher is better, vs. worst-group sECE) and Amazon-wilds (10% accuracy, higher is better, vs. sECE). Except for MultiBBB on Amazon-wilds, MultiX performs nearly identically to the corresponding single-mode approximation. VI improves the accuracy of the models. MCD is the least accurate model on CivilComments. We experiment with different dropout rates in <Ref> but find that MCD never outperforms MAP.
MultiX is no more accurate than the corresponding single-mode approximation, contrary to the results on all other datasets.
We suspect that this effect is to a large part due to the finetuning nature of the tasks, where all ensemble members start close to each other in parameter space and therefore converge to the same posterior mode.
Note that the failure of ensembles is most likely due to the task and the network architecture and not due to the training procedure: While we train for fewer epochs than on the image classification tasks, the datasets are larger.
On iWildCam we perform 97k parameter updates, compared to 84k parameter updates on CivilComments.
§.§ Calibration under Realistic Distribution Shift
We measure calibration with the sECE for classification tasks and with the sQCE for regression tasks (see <Ref>).
We additionally report the unsigned ECE/QCE and the log-likelihood for the regression task in <Ref>.
MultiX is almost always less overconfident than single-mode approximations.
When all models are already comparatively well calibrated, MultiX tends to become underconfident.
Thus, we find that MultiX is typically only less confident, but not automatically better calibrated than single-mode approximations.
On the transformer-based text classification tasks, MultiX is almost never better calibrated than the respective single-mode approximation.
Large-Scale Regression.
MultiX, when based on a probabilistic single-mode approximation, is generally better calibrated than the Deep Ensemble.
SVGD is better calibrated than the Deep Ensemble, showing the benefit of the repulsive forces between the particles.
Finetuning of CNNs.
Again, we find that MultiX generally performs better than a Deep Ensemble.
However, MultiSWAG in particular is more overconfident than the Deep Ensemble, even though SWAG is better calibrated than MAP.
BBB is better calibrated than other single-mode approximations such as SWAG and MCD.
On iWildCam, Laplace is the best calibrated single-mode approximation, and correspondingly MultiLaplace is the most underconfident multi-mode approximation.
This result is unique to iWildCam, as Laplace underfits on the other image classification datasets.
Finetuning of Transformers.
Except for MultiBBB on Amazon, ensembles are calibrated similarly to the respective single-mode approximations.
SWAG is the least confident model on both tasks, which leads to underconfidence on Amazon.
MCD's calibration is inconclusive, as it is better calibrated than MAP on Amazon, but more overconfident on CivilComments.
BBB and Rank-1 VI are not better calibrated than MAP and on Amazon significantly more overconfident than MAP.
§.§ Posterior Approximation Quality
While approximate inference is commonplace in BDL, the large size of the neural networks typically makes it computationally intractable to measure how well a model approximates the true parameter posterior.
Following <cit.> and using the HMC samples provided by <cit.>, we measure the approximation quality of a model by the total variation (TV) between the model's predictions and HMC and the top-1 agreement with HMC on CIFAR-10-(C) <cit.>.
<Ref> displays the TV of the evaluated models under increasing levels of image corruption.
For further results regarding the accuracy, sECE, ECE, and top-1 agreement with HMC see <Ref>.
Overall, a good probabilistic single-mode approximation is the most important factor for a good posterior approximation.
MultiX, when based on probabilistic single-mode approximations, consistently approximates the parameter posterior better than single-mode-only approximations and the Deep Ensemble.
MultiiVON approximates the posterior best across all corruption levels as measured by the TV, with MultiSWAG being a close contender.
Even single-mode approximations such as MCD and SWAG achieve better TVs under data corruption than the Deep Ensemble.
As expected, MAP has the highest TV, with only a small improvement made by Laplace.
§ CONCLUSION
We presented a comprehensive evaluation of a wide range of modern, scalable BDL algorithms, using distribution-shifted data based on real-world applications of deep learning.
We focused on the generalization capability, calibration, and posterior approximation quality under distribution shift.
Overall, we demonstrated that BDL is in many cases competitive with algorithms that are specifically designed for o.o.d. generalization.
Our analysis has shown that ensembles are almost always required to obtain well-calibrated models that generalize well under realistic distribution shift, even when all members start from the same pre-trained model checkpoint.
Ensembling probabilistic single-mode approximations further improves the calibration and accuracy of the models, but the relative performance of the algorithms is heavily dependent on the task.
However, we also identified a failure mode of ensembles when finetuning large transformer-based language models.
On the other hand, we have shown that last-layer VI scales well to these models and generalizes better than SOTA BDL algorithms such as SWAG and MCD.
Limitations.
While we evaluate on a wide range of datasets from different domains and using different network architectures, the choice of tasks is still limited.
In particular, we do not consider LSTMs <cit.> as <cit.> do.
Given the limitations of WILDS <cit.>, we evaluate on a single large-scale regression dataset.
As both text classification experiments use DistilBERT <cit.>, it is conceivable that the failure of ensembles is limited to this particular architecture.
We do not include algorithms that are based on function-space priors <cit.>.
Broader Context.
Bayesian deep learning aims to provide reasonable uncertainty estimates in safety-critical applications.
Hence, we do not expect any societal harm from our work, as long as it is ensured by proper evaluation that accuracy and calibration requirements are met before deployment.
§ APPROXIMATE INFERENCE
This section provides further details on the algorithms introduced in <Ref>.
§.§ Variational Inference
Variational inference (VI) minimizes the KL divergence <cit.> between the true posterior p(θ|D) and an approximate posterior q(θ) <cit.>.
While the KL divergence cannot be computed by itself, as the true posterior is unknown, it can still be minimized by maximizing the evidence lower bound (ELBO) given the parameter prior p(θ):
ELBO = E_θ∼ q(θ)[log p(D|θ)] - KL(q(θ) ‖ p(θ))
Maximizing the ELBO means maximizing the likelihood of the training data, therefore fitting the data well, while staying close to the parameter prior <cit.>.
Bayes By Backprop (BBB).
BBB <cit.> is an application of VI to deep neural network.
BBB approximates the parameter posterior with a diagonal Gaussian distribution that cannot model covariances between parameters.
The per-parameter means and variances are learned with standard Stochastic Gradient Descent (SGD) <cit.> using the negative of the ELBO as the loss function.
The ELBO cannot be differentiated directly with respect to the variational parameters, as it depends on randomly sampled weights.
However, the reparameterization trick <cit.> applies to diagonal Gaussians and allows us to use the negative ELBO as the loss function.
Further runtime performance improvements are possible by using the local reparameterization trick <cit.> or Flipout <cit.>.
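To make the objective concrete, the following minimal PyTorch sketch trains a mean-field Gaussian linear layer by minimizing the negative ELBO under a unit Gaussian prior. It is illustrative only: it uses the plain weight reparameterization rather than the local variant, assumes a classification likelihood, and its class and function names are not those of our implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        # Reparameterization trick: sample weights as mu + sigma * eps.
        w = self.w_mu + F.softplus(self.w_rho) * torch.randn_like(self.w_rho)
        b = self.b_mu + F.softplus(self.b_rho) * torch.randn_like(self.b_rho)
        return F.linear(x, w, b)

    def kl(self):
        # KL(q || p) between the diagonal Gaussian posterior and a N(0, I) prior.
        def kl_gauss(mu, sigma):
            return (0.5 * (sigma**2 + mu**2 - 1.0) - torch.log(sigma)).sum()
        return kl_gauss(self.w_mu, F.softplus(self.w_rho)) + kl_gauss(self.b_mu, F.softplus(self.b_rho))

def neg_elbo(model, x, y, dataset_size):
    # Per-example negative ELBO: classification NLL plus the KL term scaled by 1/N.
    nll = F.cross_entropy(model(x), y, reduction="mean")
    kl = sum(m.kl() for m in model.modules() if isinstance(m, BayesLinear))
    return nll + kl / dataset_size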
While there have been reports of BBB performing well when used on neural networks <cit.>, the current consensus of the research community seems to be that BBB falls short when compared to e.g. ensembles <cit.>, even though it has been shown that the diagonal Gaussian posterior is not significantly less expressive than a posterior that models covariances <cit.>.
In recent years significant work has been done to improve the performance of VI in a deep learning setting.
To assess whether these improved algorithms can compete with SOTA Bayesian algorithms, we also evaluate promising improvements on posterior parameterizations (Rank-1 VI, SVGD) and optimization procedures (iVON).
Rank-1 Variational Inference (Rank-1 VI).
Rank-1 VI <cit.> enhances the posterior approximation of BBB by approximating a full-rank covariance matrix with a low-rank approximation.
Rank-1 VI learns a diagonal Gaussian distribution over two vectors per layer, whose outer product is then element-wise multiplied to a learned point estimate of the layer's weights.
The bias vector is kept as a point estimate.
The limited number of additional parameters allows Rank-1 VI to learn a multi-component Gaussian distribution for the two low-rank vectors, which gives Rank-1 VI ensemble-like properties.
Rank-1 VI is in one sense less expressive than BBB with the mean-field approximation, as it has fewer variational parameters, and in another sense more expressive, as it can model covariances between parameters within a layer and can express multi-modality in a limited way.
Improved Variational Online Newton (iVON).
The usage of SGD for the optimization of variational parameters is problematic, as these parameters form a complex, non-euclidean manifold <cit.>.
Natural gradient descent (NGD), recently formalized as the Bayesian learning rule <cit.>, exploits this structure to speed up training.
VOGN <cit.> applies NGD to neural networks but has scaling problems, as it requires per-example gradients in minibatch training.
iVON, based on the improved Bayesian learning rule <cit.>, no longer has this problem.
While iVON still uses the mean-field approximation of BBB, it is expected to converge faster, and, importantly, halves the number of trainable parameters by implicitly learning per-parameter variances.
Stein Variational Gradient Descent (SVGD).
SVGD <cit.> is a non-parametric VI algorithm that does not assume the posterior to be of a particular shape but approximates it with p particles (i.e. point estimates).
The particles can be viewed as members of a Deep Ensemble <cit.>, and the use of VI adds a repulsive component to the loss function based on the RBF kernel distance between the parameters of the particles.
While this repulsive component can prevent the particles from converging to the same posterior mode, it prohibits the independent training of the particles.
§.§ Other Algorithms
Deep Ensembles.
<cit.> introduce Deep Ensembles that combine the predictions of multiple independently trained neural networks to improve uncertainty estimates.
Originally, Deep Ensembles have been seen as a competing approach to Bayesian algorithms <cit.>.
However, ensembles can be considered to be a Bayesian algorithm that approximates the posterior with a sum of delta distributions <cit.>.
We consider all ensembles to be Bayesian: While they are missing the principled posterior approximation approach of VI, basically hoping that the members converge to different posterior modes, the approach results in a posterior approximation that is in many cases better than the approximation of for example BBB (<Ref>, <cit.>).
Ensembles are usually considered SOTA in uncertainty estimation <cit.>.
However, the training time scales linearly in the number of ensemble members.
This makes them highly expensive in cases where training a single member is already expensive, such as with large networks, and opens the space for new, cheaper posterior approximations.
Monte Carlo Dropout (MCD).
MCD <cit.> uses dropout <cit.> to form a Bernoulli distribution over network parameters.
The dropout rates are typically not learned, but the dropout units that are present in many network architectures are simply applied during the evaluation of the model.
This very cheap posterior approximation has been criticized for not being truly Bayesian <cit.>.
Despite this criticism, it is still widely used, including in practical applications <cit.>.
When the dropout rate is learned, MCD can be considered to implicitly perform VI <cit.>.
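A minimal, illustrative PyTorch sketch of MCD prediction: dropout modules are kept in training mode at evaluation time so that repeated forward passes draw different dropout masks, and the resulting predictive distributions are averaged. The helper names are ours.

import torch

def enable_mc_dropout(model):
    # Put the model in eval mode but keep dropout layers stochastic.
    model.eval()
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_predict(model, x, samples=30):
    enable_mc_dropout(model)
    probs = [torch.softmax(model(x), dim=-1) for _ in range(samples)]
    return torch.stack(probs).mean(dim=0)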
Stochastic Weight Averaging-Gaussian (SWAG).
SWAG <cit.> forms its posterior approximations from the parameter vectors that are traversed during the training of a standard neural network.
During the last epochs of SGD training, SWAG periodically stores the current parameters of the neural network to build a low-rank Gaussian distribution over model parameters.
While SWAG has only a very small performance overhead during training, storing the additional parameters requires a significant amount of additional memory, and sampling parameters from the low-rank Gaussian distribution incurs a performance overhead during evaluation.
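The core of SWAG's posterior construction can be sketched as a pair of running moments of the flattened parameter vector collected along the SGD trajectory. The diagonal-only sketch below is illustrative (the full algorithm additionally maintains a low-rank deviation matrix), and the class name is ours.

import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

class DiagSWAG:
    # Collects running first and second moments of the flattened parameters.
    def __init__(self, model):
        vec = parameters_to_vector(model.parameters()).detach()
        self.n = 0
        self.mean = torch.zeros_like(vec)
        self.sq_mean = torch.zeros_like(vec)

    def collect(self, model):
        theta = parameters_to_vector(model.parameters()).detach()
        self.mean = (self.n * self.mean + theta) / (self.n + 1)
        self.sq_mean = (self.n * self.sq_mean + theta ** 2) / (self.n + 1)
        self.n += 1

    def sample(self, model):
        # Draw one parameter vector from the diagonal Gaussian approximation
        # and write it into the model (used at evaluation time).
        var = (self.sq_mean - self.mean ** 2).clamp(min=1e-30)
        theta = self.mean + var.sqrt() * torch.randn_like(var)
        vector_to_parameters(theta, model.parameters())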
Laplace Approximation.
The Laplace approximation <cit.> builds a local posterior approximation from a second-order Taylor expansion around a MAP model.
We always use the last-layer Laplace approximation and switch between a full-rank posterior, diagonal posterior, and a Kronecker-factorized posterior <cit.> depending on the task.
In this configuration, the Laplace approximation is the only post-hoc algorithm that we consider: It can be fitted on top of an existing MAP model by performing a single pass on the training dataset.
§ UNSIGNED CALIBRATION METRICS
As mentioned in the main paper (<Ref>), a calibrated model makes confident predictions if and only if they will likely be accurate.
Based on this definition, we can directly derive a calibration metric for classification models: The expected calibration error (ECE) <cit.>.
In the regression case, neither “accuracy” nor “confidence” are well-defined properties of a prediction.
The notion of calibration must therefore be adapted for regression tasks.
In addition, the log marginal likelihood is commonly used to jointly evaluate the accuracy and the calibration in regression tasks.
See <Ref> for details.
Calibrated Classification.
In the classification case, each data point has an associated distribution Y over the possible labels.
Y represents the inherent aleatoric uncertainty of the label.
Given a prediction ŷ = argmax_y p(y|x,D) made with confidence p̂ = max_y p(y|x,D), the model is perfectly calibrated if and only if
ℙ(ŷ = Y | p̂=p) = p ∀ p ∈ [0,1]
holds for every data point <cit.>.
Informally speaking, this means that if the model makes 100 predictions with a confidence of 0.8, 80 of these predictions should be correct.
The expected difference between the left and the right side of <Ref> is called the expected calibration error (ECE) of the model:
ECE = E_p∼ U([0,1]) | ℙ(ŷ = Y | p̂=p) - p |
It implies two properties of a well-calibrated model: If the accuracy is low, the confidence should also be low.
This means that the model must not be overconfident in its predictions.
Conversely, if the accuracy is high, the confidence should also be high, meaning that the model must not be underconfident in its predictions.
In practice, a model does not make enough predictions of the same confidence to calculate the calibration error exactly.
Therefore, the model's predictions on an evaluation set 𝒟' are commonly grouped into M equally spaced bins B_m based on their confidence values, and the average accuracy and confidence of each bin are used to calculate the ECE <cit.>:
ECE ≈ ∑_m=1^M |B_m|/|𝒟'| |acc(B_m) - conf(B_m)|,
where B_m is the set of predictions in the m-th bin, and acc(B_m) and conf(B_m) are the average accuracy and confidence of the predictions in B_m:
acc(B_m) = 1/|B_m| ∑_(𝐱,y) ∈ B_m 1( y = argmax_y' p(y'|𝐱,𝒟) )
conf(B_m) = 1/|B_m| ∑_(𝐱,y) ∈ B_m max_y' p(y'|𝐱,𝒟)
An ECE of zero indicates perfect calibration.
We always use ten bins (M = 10).
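For reference, the binned ECE can be computed along the following lines in PyTorch; the function name and tensor layout are our own choices.

```python
import torch

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 10) -> float:
    """Binned ECE with equally spaced confidence bins.

    probs:  (n_points, n_classes) predicted class probabilities.
    labels: (n_points,) ground-truth class indices.
    """
    confidences, predictions = probs.max(dim=-1)
    accuracies = predictions.eq(labels).float()
    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
            ece = ece + in_bin.float().mean() * gap  # weight by |B_m| / |D'|
    return ece.item()
```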
A main problem of the ECE is that bins with few predictions in them may exhibit a high variance <cit.>.
Therefore, <cit.> proposed an extension of the ECE that uses bins of adaptive width.
Calibrated Regression.
The confidence intervals of the predictive distribution can be used to measure the calibration of a regression model <cit.>.
The probability of the ground-truth output lying inside the ρ-confidence interval of the model's predictive distribution for input 𝐱 should be exactly ρ.
Formally, we say a regression model is perfectly calibrated on an evaluation dataset 𝒟' if and only if
ℙ(Q_ρ'(𝐱) ≤ 𝐲 ≤ Q_1-ρ'(𝐱)) = ρ ∀ (𝐱,𝐲) ∈ 𝒟'
holds for every q-quantile Q_q(𝐱) of the predictive distribution for input 𝐱 with ρ' = (1 - ρ)/2.
Selectively evaluating <Ref> for M confidence values ρ_m allows the practical computation of a quantile calibration error (QCE) on an evaluation dataset 𝒟'
QCE = 1/M∑_m=1^M |(ρ_m - p_obs(ρ_m))|
with
p_obs(ρ_m) = 1/|𝒟'| ∑_(𝐱,𝐲)∈𝒟' 1(Q_ρ'(𝐱) ≤ 𝐲 ≤ Q_1-ρ'(𝐱)).
The QCE simply replaces the quantiles in the definition of the calibration error from <cit.> by confidence intervals.
Using the confidence intervals allows a simpler interpretation of the resulting reliability diagrams: With the calibration error proposed by <cit.>, the reliability diagram of a perfectly calibrated regression model is a horizontally mirrored version of the reliability diagram of a perfectly calibrated classification model, as there are too many ground-truth values below the lower quantiles of their predictive distributions, and too few above the higher quantiles (<Ref>).
Using confidence intervals for the reliability diagram results in a plot that can be interpreted in the same way as a reliability diagram of a classification model (<Ref>).
We always use ten equally-spaced confidence levels between 0 and 1 (M = 10).
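A possible sketch of this interval-based QCE is given below, under the assumption that the predictive distribution is represented by Monte Carlo samples; other representations would require different quantile computations.

```python
import torch

def quantile_calibration_error(pred_samples: torch.Tensor, targets: torch.Tensor, n_levels: int = 10) -> float:
    """QCE from posterior-predictive samples.

    pred_samples: (n_samples, n_points) draws from the predictive distribution.
    targets:      (n_points,) ground-truth outputs.
    """
    levels = torch.linspace(0.0, 1.0, n_levels + 2)[1:-1]  # confidence levels in (0, 1)
    qce = torch.zeros(())
    for rho in levels:
        rho_prime = (1.0 - rho.item()) / 2.0
        lower = torch.quantile(pred_samples, rho_prime, dim=0)
        upper = torch.quantile(pred_samples, 1.0 - rho_prime, dim=0)
        observed = ((targets >= lower) & (targets <= upper)).float().mean()
        qce = qce + (rho - observed).abs()
    return (qce / n_levels).item()
```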
§ SIGNED CALIBRATION METRICS
As described in the main paper, our signed calibration metrics (sECE and sQCE) may be zero even though the model is not perfectly calibrated.
However, we show that this is typically not an issue in practice, as for most models nearly all predictions are overconfident or nearly all predictions are underconfident.
The reliability diagrams in <Ref> confirm this for a representative selection of overconfident and underconfident models.
We always report the unsigned calibration metrics in <Ref> in addition to the signed calibration metrics mentioned in the main paper.
The unsigned metrics are in almost all cases very close to the absolute value of the signed metric, resulting in the same relative ordering of the algorithms.
On the other hand, the sECE provides valuable insights into the underconfidence of some algorithms such as MultiSWAG on CIFAR-10 and SWAG on Amazon-wilds.
§ IMPLEMENTATION DETAILS
Except for Laplace, we implement all algorithms ourselves as PyTorch <cit.> optimizers.
The implementation of the algorithms as well as code to reproduce all experiments is available at <https://github.com/Feuermagier/Beyond_Deep_Ensembles>, where we also provide a short tutorial on the usage of our implementation.
Bayes By Backprop.
We use the local reparameterization trick <cit.>.
As is standard today <cit.>, we do not use the scale mixture prior introduced by BBB's original authors <cit.>, but a unit Gaussian prior.
For the experiments on CIFAR-10, we make the parameters of the Filter Response Normalization layers variational.
Rank-1 VI.
Following <cit.>, we keep the bias of each layer as a point estimate.
We also keep the learned parameters of batch normalization and Filter Response Normalization layers as point estimates.
We use five components in most cases, which is close to the four components recommended by <cit.> and makes Rank-1 VI directly comparable to the other ensemble-based models that use five members.
iVON.
We adapt the data augmentation factor that <cit.> introduce for VOGN <cit.> to iVON.
We do not use the tempering parameter from VOGN.
Laplace.
We use the Laplace library from <cit.> due to the difficulty of implementing second-order optimization in PyTorch.
In all cases except for CivilComments-wilds, we use a Kronecker-factorized last-layer Laplace approximation.
On CivilComments-wilds, we use a diagonal last-layer Laplace approximation as the Kronecker-factorized approximation frequently leads to diverging parameters.
We do not use the GLM approximation as proposed by <cit.> but use Monte Carlo sampling to stay consistent with the other evaluated algorithms.
In all experiments we use the Laplace library's functions to tune the prior precision after fitting the Laplace approximation.
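For orientation, fitting and evaluating such a last-layer Laplace approximation with the Laplace library might look roughly like the sketch below; the keyword arguments are assumptions based on the library's documented interface and may differ between versions.

```python
from laplace import Laplace  # laplace-torch package

# model: a trained MAP network; train_loader: the training data loader.
la = Laplace(
    model,
    likelihood="classification",
    subset_of_weights="last_layer",
    hessian_structure="kron",   # or "diag" / "full", depending on the task
)
la.fit(train_loader)            # a single pass over the training data
la.optimize_prior_precision()   # post-hoc tuning of the prior precision

# Monte Carlo predictive distribution, consistent with the other algorithms.
probs = la(x_test, pred_type="nn", n_samples=10)
```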
SWAG.
While the authors of SWAG argue that SWAG benefits from a special learning rate schedule <cit.>, they do not use such a schedule in most of their experiments with SWAG and MultiSWAG <cit.>.
Correspondingly, we use the same schedule with SWAG as with any other algorithm.
We use 30 parameter samples for building the mean and the low-rank covariance matrix of SWAG.
On CivilComments-wilds, we only use 10 parameter samples due to the storage size of the samples.
§ BATCH NORMALIZATION, DISTRIBUTION SHIFT AND BAYESIAN DEEP LEARNING
<cit.> find that a significant part of the accuracy loss on o.o.d. data is due to changing batch statistics that cannot be adequately normalized by the running batch normalization statistics that are based on the training data.
The authors propose to re-initialize the running statistics on a subset of the evaluation dataset.
We are able to reproduce the issue with o.o.d. data on the Camelyon17-wilds dataset from the WILDS collection <cit.> (<Ref>).
The o.o.d. evaluation set of Camelyon17 has been generated by selecting the images that were most visually distinct from the other images.
In addition, the employed ResNet-20 <cit.> architecture includes batch normalization layers.
We find that using only batch statistics, thereby essentially using the batch normalization layers in training mode during evaluation, entirely alleviates the i.d. - o.o.d. performance gap on Camelyon17, as well as the large standard deviations on the o.o.d. dataset.
Coincidentally, the WILDS leaderboard <cit.> shows that models that do not include batch normalization, such as a model based on the vision transformer <cit.>, or that use extensive data augmentation, perform best.
The running statistics of batch normalization layers also pose problems with Bayesian neural networks that sample parameters, as the running statistics depend on the parameters of the neural network.
<cit.> therefore propose to recalculate the batch normalization statistics for each parameter sample.
This is not necessary in our case as we never use running statistics for normalization layers.
By doing so we also avoid the aforementioned distribution-shift problem without requiring additional o.o.d. data during evaluation, and do not add any computation overhead.
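One way to realize this in PyTorch is to switch the batch normalization modules back to training mode for evaluation, as sketched below; constructing them with track_running_stats=False achieves the same effect.

```python
import torch.nn as nn

_BATCHNORM_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def use_batch_statistics(model: nn.Module) -> None:
    """Normalize with the statistics of the current evaluation batch instead of running averages."""
    model.eval()  # evaluation mode for dropout and other stochastic layers
    for module in model.modules():
        if isinstance(module, _BATCHNORM_TYPES):
            module.train()  # training mode => use the batch statistics
```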
§ COMPUTATIONAL RESOURCES
We use single NVIDIA Tesla V100, A100, and H100 GPUs for all tasks from Wilds <cit.> and CIFAR-10-(C) <cit.>.
See <Ref> for the GPUs that we use on the individual datasets as well as the runtime of MAP.
<Ref> displays the relative runtime of the BDL algorithms.
In total, we estimate that the evaluation required about 1600 of GPU time, of which about 25% were consumed during implementation, testing and hyperparameter optimization.
Training and hyperparameter optimization of the UCI models was performed on a single CPU in about 20.
§ ADDITIONAL EXPERIMENTAL RESULTS
§.§ UCI Datasets
We report results for both the standard and the gap splits <cit.> on the housing and energy datasets from the UCI machine learning repository <cit.>.
On energy, we can reproduce the catastrophic failure of VI both with BBB and Rank-1 VI, but not with iVON, which still performs similarly to MultiSWAG.
Overall, we find that the benefit of ensembles is less clear than on the larger WILDS datasets, which emphasizes the importance of evaluating Bayesian algorithms on large datasets.
Hyperparameters.
All hyperparameters were optimized through a grid search on the validation set.
Note that for the gap splits the validation set is not part of the gap.
We considered 40, 100 and 200 epochs, learning rates of 0.01 and 0.001 and (where applicable) weight decay factors of 10^-4 and 10^-5.
For BBB, the prior standard deviations are 0.1, 1.0 and 10.0 and we scale the KL divergence in the ELBO by 0.2, 0.5, and 1.0, with colder temperatures generally leading to better results.
For iVON, we consider prior precisions of 10, 100, and 200, with 200 being selected in most cases.
BBB and iVON use five Monte Carlo samples during training.
For SWAG, we consider 60, 100, and 150 epochs, use 30 parameter samples and start sampling after 50%, 75%, or 90% of the training epochs were completed.
For Laplace, we always use a last-layer approximation with a full covariance matrix.
We use the Adam optimizer <cit.> to optimize the log-likelihood/ELBO and learn the output standard deviation jointly with the parameters.
We use 1000 parameter samples for each prediction.
§.§ CIFAR-10
Following <cit.>, we train a ResNet-20 <cit.> with Swish activations <cit.> and Filter Response Normalization <cit.>.
The use of Filter Response Normalization instead of batch normalization, which only uses batch statistics, eliminates the problems mentioned in <Ref>.
We train all models except iVON with SGD and a learning rate of 0.05 and Nesterov momentum of strength 0.9 for 300 epochs.
We use the learning rate schedule from <cit.>: The learning rate is kept at its initial value for the first 150 epochs, then linearly reduced to a learning rate of 0.005 at epoch 270 at which it is kept constant for the remaining 30 epochs.
For MCD, we use a dropout rate of 0.1 and insert dropout units after every linear and convolutional layer of the ResNet-20.
For BBB, we temper the KL divergence in the ELBO with a factor of 0.2.
Rank-1 VI uses an untempered posterior and four components.
BBB and iVON use two Monte Carlo samples during training.
The Laplace approximation is based on a diagonal last-layer approximation.
iVON is also trained for 300 epochs with a learning rate of 1 · 10^-4, a prior precision of 50, and a data augmentation factor of 10 (see <cit.> for details), but uses no learning rate schedule.
We found these changes to be necessary to ensure that iVON performs well, likely because iVON is much more similar to Adam <cit.> than to SGD and therefore needs a smaller learning rate.
We always use 50 parameter samples during evaluation.
<Ref> displays the accuracy, ECE, sECE, agreement with HMC, and TV compared to HMC.
MultiX models tend to become underconfident.
<Ref> shows detailed numerical results for all algorithms and corruption levels.
§.§ WILDS
We use the hyperparameters proposed by <cit.> where applicable, and set the other hyperparameters to standard values as suggested by the developers of the respective algorithms.
If the standard values lead to unexpectedly bad results, we tune the hyperparameters through a grid search.
In particular, we select the prior precision of iVON through a grid search over 1, 10, 100, and 500 per model architecture.
We find the prior precision of iVON to be hard to tune, as iVON frequently diverges for comparatively small prior precisions such as 1 and 10.
On the other hand, BBB always works well with the standard unit prior.
We also experiment with other priors but find no difference in performance except on RxRx1-wilds (see <Ref>).
BBB and iVON use two Monte Carlo samples during training.
See the sections below for the hyperparameters that were chosen on the individual datasets.
We use mixed precision training whenever possible.
The VI algorithms as well as the Laplace approximations are mostly trained without mixed precision, as this leads to unstable training.
We use 10 posterior samples per prediction during evaluation to constrain the computational overhead of the Bayesian algorithms, which is generally sufficient to capture the predictive distribution <cit.>.
Note that our results are not directly comparable to the results of <cit.>, as they build their Deep Ensembles and Laplace approximations from the pretrained models provided by <cit.>.
§.§.§ Camelyon17-WILDS
Following <cit.>, we train a DenseNet-121 <cit.> with SGD for 5 epochs with a learning rate of 0.001, weight decay 0.01 and momentum 0.9.
SWAG collects 30 parameter samples during the last epoch.
§.§.§ PovertyMAP-WILDS
We train a ResNet-18 <cit.> using the same hyperparameters as <cit.> where applicable: A learning rate of 10^-3 and no weight decay.
We only train for 100 epochs as all models were converged after that.
SWAG collects 30 parameter samples starting at epoch 50.
For BBB, we scale the KL divergence down with a factor of 0.2, as this significantly improves the MSE.
Rank-1 VI uses an unscaled KL divergence.
The ensembles, Rank-1 VI and SVGD use five members/components.
We optimize the log likelihood of the training data and use a fixed standard deviation of 0.1, as this is the value MAP converges to when jointly optimizing the standard deviation and the model's parameters.
For the final evaluations, we do not optimize the standard deviation, as this leads to unstable training with the VI algorithms.
Following <cit.>, we aggregate all results over the five folds of PovertyMap, with one seed per fold.
As mentioned in the main paper, iVON performs significantly worse than the other algorithms.
We conducted a grid search over prior precisions 1, 10, 100 and 500 with a single seed per value, and found that for 1 and 10 iVON diverges, for 100 iVON achieves an o.o.d. Pearson coefficient on the “A” split of 0.21 and for 500 it achieves a Pearson coefficient of 0.25.
Most likely due to their underfitting, the non-diverged models are comparatively well calibrated, with sECEs of -0.21 for a prior precision of 100 and -0.24 for a prior precision of 500.
Log Marginal Likelihood.
The log marginal likelihood is commonly used to jointly evaluate the accuracy and calibration of a regression model.
On an evaluation dataset 𝒟', the log marginal likelihood (LML) is given by
LML = log p(𝒟'|𝒟) = log ∫ p(𝒟'|θ) p(θ|𝒟) dθ ≈ log ∑_n p(𝒟'|θ_n),
where the θ_n are samples from the parameter posterior.
When only a few predictions are available because sampling parameters θ_n or evaluating the likelihood p(𝒟'|θ_n) is expensive, the LML may become very noisy.
We therefore also report the per-sample log marginal likelihood
psLML = ∑_(𝐱_i, 𝐲_i)∈𝒟' log p(𝐲_i|𝐱_i, 𝒟)
= ∑_(𝐱_i, 𝐲_i)∈𝒟' log ∫ p(𝐲_i|𝐱_i, θ) p(θ|𝒟) dθ
≈ ∑_(𝐱_i, 𝐲_i)∈𝒟' log ∑_n p(𝐲_i|𝐱_i, θ_n),
which has a lower variance than the LML.
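A numerically stable way to compute the psLML from a matrix of per-sample log-likelihoods is sketched below; the sketch includes the 1/S Monte Carlo normalization, a constant shift that does not affect comparisons between models.

```python
import math
import torch

def per_sample_log_marginal_likelihood(log_likelihoods: torch.Tensor) -> torch.Tensor:
    """psLML from per-parameter-sample log-likelihoods.

    log_likelihoods: (n_param_samples, n_points) tensor of log p(y_i | x_i, theta_n).
    """
    n_param_samples = log_likelihoods.shape[0]
    # log (1/S * sum_n p(y_i | x_i, theta_n)), computed per data point with logsumexp
    per_point = torch.logsumexp(log_likelihoods, dim=0) - math.log(n_param_samples)
    return per_point.sum()
```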
We present the results for the LML, the psLML, the urban/rural Pearson coefficient (see <Ref>), and the sQCE in <Ref> and <Ref>.
<Ref> shows detailed numerical results.
§.§.§ iWildCam-wilds
Following <cit.>, we finetune a ResNet-50 <cit.>, pretrained on ImageNet <cit.>, for 12 epochs with the Adam optimizer <cit.>.
For each model, we replace the linear classification layer of the ResNet-50 by a randomly initialized one of the appropriate output dimension.
We use the hyperparameters that <cit.> found to work best based on their grid search: A learning rate of 3· 10^-5 and no weight decay.
For MCD, we try dropout rates of 0.1 and 0.2 and select 0.1 due to a slightly better macro F1 score on the evaluation split.
iVON uses a prior precision of 100, as optimized by a grid search.
We use three seeds per model and build all ensembles by training six models independently and leaving out a different model for each of the three evaluation runs.
<Ref> shows the results on the o.o.d. evaluation split that are not presented in the main paper.
<Ref> displays detailed numerical results on the o.o.d. evaluation split and on the i.d. validation split.
§.§.§ FMoW-wilds
Following <cit.>, we finetune a DenseNet-121 <cit.>, pretrained on ImageNet <cit.>, for 50 epochs with the Adam optimizer <cit.> with a batch size of 64 and a learning rate of 10^-4 that decays by a factor of 0.96 per epoch.
For each model, we replace the linear classification layer of the DenseNet-121 by a randomly initialized one of the appropriate output dimension.
iVON uses a prior precision of 100.
We use five seeds per model and build all ensembles by training six models independently and leaving out a different model for each of the five evaluation runs.
We report in the main paper that the Laplace approximation underfits, with a worst-region accuracy of 0.217 ± 0.012 and sECE of -0.583 ± 0.015 on the o.o.d. test split.
Similarly, MultiLaplace only achieves a worst-region accuracy of 0.301 ± 0.004 and sECE of 0.123 ± 0.004 on the o.o.d. evaluation split.
Note that the better results of <cit.> are most likely due to their usage of models pretrained with ERM.
<Ref> shows additional results for the other models across all regions on the o.o.d. evaluation split, as well as the ECE on the worst region.
§.§.§ RxRx1-wilds
Following <cit.>, we finetune a ResNet-50 <cit.>, pretrained on ImageNet <cit.>, for 90 epochs with the Adam optimizer <cit.>.
For each model, we replace the linear classification layer of the ResNet-50 by a randomly initialized one of the appropriate output dimension.
Following <cit.>, we use a learning rate of 10^-4 and weight decay 10^-5.
For MCD, we try dropout rates of 0.1 and 0.2 and select 0.1 due to a slightly better accuracy on the evaluation split.
iVON uses a prior precision of 100 as optimized by a grid search.
We use five seeds per model and build all ensembles by training six models independently and leaving out a different model for each of the five evaluation runs.
§.§.§ CivilComments-wilds
We use the pretrained DistilBERT <cit.> model from HuggingFace transformers <cit.> with a classification head consisting of two linear layers with a ReLU nonlinearity and a Dropout unit with a drop rate of 0.2 between them.
Following <cit.>, we finetune the pretrained checkpoint with a learning rate of 1 · 10^-5 and, where applicable, a weight decay factor of 1 · 10^-2 for three epochs using the Adam optimizer <cit.>.
SWAG collects ten parameter samples during the last two epochs of training.
iVON uses a prior precision of 500, as optimized by a grid search.
We use five seeds for all non-ensembled models.
The ensembles are built from four of the five single-model versions, leaving out a different member per model to create five different ensembled models of four members each.
We note in the main paper that MCD results in less accurate and more overconfident models.
We investigate this further by experimenting with different dropout rates in <Ref>.
While a dropout rate of 0.1 had no impact, dropout rates of 0.05 and 0.01 led to progressively better accuracy and calibration, coming close to MAP.
However, there is still no accuracy or calibration benefit to be gained from using MCD.
§.§.§ Amazon-wilds
We use the pretrained DistilBERT <cit.> model from HuggingFace transformers <cit.> with a classification head consisting of two linear layers with a ReLU nonlinearity and a Dropout unit with a drop rate of 0.2 between them.
Following <cit.>, we finetune the pretrained checkpoint with a learning rate of 10^-5 and, where applicable, a weight decay factor of 10^-2 using the Adam optimizer <cit.>.
Contrary to <cit.>, we finetune for five epochs, as we find that the validation accuracy is still increasing after three epochs.
SWAG collects 30 parameter samples during the last two epochs of training.
We also experiment with last-layer versions of SWAG and MCD, but find both to perform very similar to MAP (see <Ref>).
iVON uses a prior precision of 500, as optimized by a grid search.
We use six seeds for all non-ensembled models.
The ensembles are built from five of the six single-model versions, leaving out a different member per model to create five different ensembled models of five members each.
http://arxiv.org/abs/2306.05732v1 | 20230609075242 | Computing Algorithm for an Equilibrium of the Generalized Stackelberg Game | Jaeyeon Jo, Jihwan Yu, Jinkyoo Park | cs.GT | cs.GT, math.OC
Computing Algorithm for an Equilibrium of the Generalized Stackelberg Game
Jaeyeon Jo, Jihwan Yu, Jinkyoo Park (corresponding author)
Department of Industrial and Systems Engineering, KAIST, Daejeon, Republic of Korea,
{robin512, jihwan14, jinkyoo.park}@kaist.ac.kr
The 1-N generalized Stackelberg game (single-leader multi-follower game) is intricately intertwined with the interaction between a leader and followers (hierarchical interaction) and the interaction among followers (simultaneous interaction). However, obtaining the optimal strategy of the leader is generally challenging due to the complex interactions among the leader and followers. Here, we propose a general methodology to find a generalized Stackelberg equilibrium of a 1-N generalized Stackelberg game. Specifically, we first provide the conditions where a generalized Stackelberg equilibrium always exists using the variational equilibrium concept. Next, to find an equilibrium in polynomial time, we transformed the 1-N generalized Stackelberg game into a 1-1 Stackelberg game whose Stackelberg equilibrium is identical to that of the original. Finally, we propose an effective computation procedure based on the projected implicit gradient descent algorithm to find a Stackelberg equilibrium of the transformed 1-1 Stackelberg game. We validate the proposed approaches using the two problems of deriving operating strategies for EV charging stations: (1) the first problem is optimizing the one-time charging price for EV users, in which a platform operator determines the price of electricity and EV users determine the optimal amount of charging for their satisfaction; and (2) the second problem is to determine the spatially varying charging price to optimally balance the demand and supply over every charging station.
Stackelberg game, single-leader multiple-follower game, Stackelberg equilibrium, bilevel optimization, implicit differentiation
§ INTRODUCTION
This paper addresses a (generalized) Stackelberg game (single-leader multi-follower problem) Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> which can be formulated as follows:
𝐲^* = _𝐲∈Ω_Lf_L(𝐲, (𝐱_i^*(𝐲))_i∈𝐅)
𝐱_i^*(𝐲) = _𝐱_i∈Ω_i(𝐲, 𝐱^*_-i(𝐲))f_i(𝐲, 𝐱_i, 𝐱^*_-i(𝐲)), ∀ i∈𝐅
where f_L be the objective function of the leader, f_i be the objective function of the follower i∈𝐅, 𝐲 be the leader's decision belonging to their strategy set Ω_L, 𝐱_i be the follower i's decision belonging to their strategy set Ω_i(𝐲, 𝐱_-i), 𝐱_-i=(𝐱_1, ⋯, 𝐱_i-1, 𝐱_i+1, ⋯, 𝐱_N) is the follower's joint decision except follower i, and 𝐅=[N]:={1, 2, ⋯, N} is a set of followers. In detail, the strategy set of the leader, and the follower i is as:
Ω_L = {𝐲∈ℝ^n_L|
h^j_L(𝐲) ≤ 0, ∀ j ∈[ p_L]
l^k_L(𝐲) = 0, ∀ k ∈[ q_L]
}
Ω_i(𝐲, 𝐱_-i) = {𝐱_i∈ℝ^n_i|
h^j_i(𝐲, 𝐱_i, 𝐱_-i) ≤ 0, ∀ j∈[ p_i ]
l^k_i(𝐲, 𝐱_i, 𝐱_-i) = 0, ∀ k∈[ q_i ]
}
where [n] ≜{1, ⋯, n}, p_L is the number of inequality constraints of the leader, q_L is the number of equality constraints of the leader, p_i is the number of inequality constraints of the follower i, and q_i is the number of equality constraints of the follower i. Then, the optimal solution (𝐲^*, (𝐱_i^*( 𝐲^*))_i∈𝐅) ∈Ω_L×∏_i∈𝐅Ω_i(𝐲^*,𝐱^*_-i(𝐲^*)) for equations (<ref>) and (<ref>) is said to be a (generalized) Stackelberg equilibrium <cit.>, at which neither the leader nor the followers have an incentive to deviate unilaterally.
The generalized Stackelberg game Γ is intricately intertwined with the interaction between a leader and followers (hierarchical interaction) and the interaction among followers (simultaneous interaction). Due to the complex interactions among the leader and followers, computing a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game is challenging. In this study, we propose a general methodology to find a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game (single-leader multi-follower game with followers' joint constraints). First, we provide the conditions where a generalized Stackelberg equilibrium always exists using the variational equilibrium concept. Next, to find an equilibrium in polynomial time, we transform the 1-N generalized Stackelberg game into the 1-1 Stackelberg game whose Stackelberg equilibrium is identical to that of the original. Finally, we propose an effective computation procedure based on the projected implicit gradient descent algorithm to find a Stackelberg equilibrium of the transformed 1-1 Stackelberg game. Figure <ref> illustrates this general procedure through a schematic diagram composed of theories and an algorithm.
To validate the effectiveness of the proposed modeling framework (i.e., generalized Stackelberg game) and its solution-finding algorithm, we consider two problems of deriving operating strategies for EV charging stations (sharing platform) under the assumption that every EV user will make their decision to minimize their cost. The first is optimizing the one-time charging price for EV users <cit.>, in which a platform operator determines the price of electricity and EV users determine the optimal amount of charging for their satisfaction. We show the convergence by comparing the results obtained by applying the proposed algorithm to the generalized Stackelberg equilibrium computed analytically. The second problem is to determine the spatially varying charging price to optimally balance the demand and supply over every charging station <cit.>. The second problem has a more complex relationship between the leader and the followers and doesn't have any known algorithm to obtain the generalized Stackelberg equilibrium. We compare the performance of the proposed algorithm with the proximal algorithm designed to find a stationary point without considering the hierarchical structure.
The novelty and importance of this study are summarized as follows:
* We propose a general methodology to find a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game. First, we provide the conditions where a generalized Stackelberg equilibrium always exists. Next, to find an equilibrium in polynomial time, we develop a method to convert the 1-N generalized Stackelberg game into the 1-1 Stackelberg game whose Stackelberg equilibrium corresponds to a generalized Stackelberg equilibrium of the original game. Finally, we propose a projected implicit gradient descent (PIGD) algorithm to find a Stackelberg equilibrium of the transformed 1-1 Stackelberg game in polynomial time.
* We validate the proposed algorithm through EV sharing platforms. We show that our algorithm can always compute a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game. Moreover, we experimentally verified the performance of a generalized Stackelberg equilibrium by comparing its equilibrium value to that of other solution concepts.
The organization of the paper is as follows. Section 2 reviews the background and related works. Section 3 discusses the subgame of the generalized Stackelberg game. Section 4 provides the conditions under which a generalized Stackelberg equilibrium always exists. Section 5 discusses the computational approach to find a generalized Stackelberg equilibrium. Section 6 introduces the problem description for EV sharing platforms, and Section 7 evaluates the performance of the proposed algorithm using simulation studies.
§ BACKGROUNDS AND RELATED WORKS
The 1-N generalized Stackelberg game is intricately intertwined with the hierarchical interaction between a leader and followers and the simultaneous interaction among followers, and these relationships are represented in the forms of bilevel optimization and generalized normal-form game, respectively. While this paper deals with both interactions simultaneously, previous studies have mostly addressed hierarchical interactions in bilevel optimization research and simultaneous interactions in generalized normal-form game research independently.
§.§ Hierarchical interaction between a leader and a follower
First, we discuss the hierarchical interaction between a leader and a follower. A problem with this hierarchical structure is generally formulated as a bilevel optimization problem (BOP) <cit.>. A BOP (single-leader single-follower problem) Γ=<f_L, f_F, Ω_L, Ω_F> can be formulated as follows:
𝐲^* = _𝐲∈Ω_Lf_L(𝐲, 𝐱^*(𝐲))
𝐱^*(𝐲) = _𝐱∈Ω_Ff_F(𝐲, 𝐱)
where f_L be the objective function of the leader, f_F be the objective function of a follower, 𝐲 be the leader's decision belonging to their strategy set Ω_L, and 𝐱 be the follower's decision belonging to their strategy set Ω_F. Then, the optimal solution of Γ is defined as (𝐲^*, 𝐱^*(𝐲^*))∈Ω_L×Ω_F for equation (<ref>). We classify BOP into two classes: (1) problems where a leader and a follower each have a single objective function; and (2) problems where a leader and a follower have a multi-objective function.
Some studies investigate a method to find the optimal of a BOP consisting of a leader and a follower with a single objective function <cit.>. <cit.> suggest first- and second-order sufficient optimality conditions for a solution of BOP. <cit.> propose a progressive approximation algorithm and apply it to find the exact solution of a class of interdiction BOP. <cit.> introduce the method to compute the gradient of the follower's decision with respect to the leader's decision and use it to run the implicit gradient descent algorithm.
Other studies propose an algorithm to compute the optimal solution of the bilevel multi-objective optimization problem (BMOP) consisting of a leader and a follower with more than one objective function
<cit.>. <cit.> suggest a new sufficient optimality condition for a solution of a differentiable BMOP. <cit.> transform a BMOP into an equivalent nonsmooth multiobjective one-level optimization problem using Karush-Kuhn-Tucker (KKT) conditions. <cit.> relax a BMOP using the Fritz John (FJ) and KKT conditions to find its optimal solution.
§.§ Simultaneous interaction among followers
The modeling of simultaneous interaction among followers is classified depending on whether there are joint constraints among followers, meaning whether a follower's strategy depends on the decisions of other followers. Simultaneous interaction without joint constraints is modeled as a normal-form game and has a Nash equilibrium as a solution <cit.>. A normal-form game G = <𝐅, (f_i)_i∈𝐅, (Ω_i)_i∈𝐅> which can be formulated as follows:
𝐱_i^* = _𝐱_i∈Ω_if_i(𝐱_i, 𝐱^*_-i), ∀ i∈𝐅
where f_i be the objective function of the follower i∈𝐅, 𝐱_i be the follower i's decision belonging to their strategy set Ω_i, 𝐱_-i=(𝐱_1, ⋯, 𝐱_i-1, 𝐱_i+1, ⋯, 𝐱_N) is the follower's joint decision except follower i, and 𝐅={1, 2, ⋯, N} is a set of followers. Then, the optimal solution (𝐱_i^*)_i∈𝐅∈∏_i∈𝐅Ω_i for equation (<ref>) is said to be a Nash equilibrium <cit.>.
Some studies suggest an algorithm to compute a Nash equilibrium of the normal-form game <cit.>. <cit.> propose an algorithm, based on the regularized Nikaido-Isoda (NI) function, for finding the first-order Nash equilibrium of a two-player zero-sum game. <cit.> introduce an semidefinite programming for finding ϵ-approximate Nash equilibrium in bimatrix games. <cit.> present neural pseudogradient ascent (NPGA) algorithm to compute Bayesian Nash equilibrium in auction games.
In contrast, simultaneous interaction with joint constraints is modeled as a generalized normal-form game and has a generalized Nash equilibrium as a solution <cit.>. A generalized normal-form game G = <𝐅, (f_i)_i∈𝐅, (Ω_i)_i∈𝐅> which can be formulated as follows:
𝐱_i^* = _𝐱_i∈Ω_i(𝐱^*_-i)f_i(𝐱_i, 𝐱^*_-i), ∀ i∈𝐅
where f_i be the objective function of the follower i∈𝐅, 𝐱_i be the follower i's decision belonging to their strategy set Ω_i(𝐱_-i), 𝐱_-i=(𝐱_1, ⋯, 𝐱_i-1, 𝐱_i+1, ⋯, 𝐱_N) is the follower's joint decision except follower i, and 𝐅={1, 2, ⋯, N} is a set of followers. Then, the optimal solution (𝐱_i^*)_i∈𝐅∈∏_i∈𝐅Ω_i(𝐱_-i) for equation (<ref>) is said to be a generalized Nash equilibrium <cit.>.
Some studies propose an algorithm that computes a generalized Nash equilibrium of the generalized normal-form game where a player's constraints depend on the other player's decisions <cit.>. <cit.> and <cit.> suggest an algorithm to compute a generalized Nash equilibrium based on the regularized NI function. <cit.> propose two projection-like algorithms for solving a generalized normal-form game.
§.§ 1-N Generalized Stackelberg Game
There have been studies that use the generalized Stackelberg game concept to model the strategic interactions between a leader and followers and suggest the computational method to compute a generalized Stackelberg equilibrium. Depending on the generality of the problem formulation, the versatility of the problem-solving algorithm is different. We categorize the related studies into: (1) problems that can be solved with the non-deterministic polynomial-time(NP) algorithm, and (2) problems that can be solved with the polynomial-time algorithm.
Some studies propose an NP algorithm for finding a Stackelberg equilibrium of the small-size problem <cit.>. <cit.> model the competition in the telecommunication industry into the Stackelberg game. They compute a stochastic multiple-leader Stackelberg-Nash-Cournot equilibrium by solving mixed-integer non-linear programming (MINLP). <cit.> propose a generalized Stackelberg game modeling the interaction between EVs and power grid operations. This study computed the Stackelberg equilibrium using an MINLP. Although these two studies compute Stackelberg equilibrium using an MINLP solver, they are not adaptable to the complex Stackelberg game since MINLP is an NP algorithm.
Other studies propose a polynomial-time algorithm to find a Stackelberg equilibrium of a problem with an analytically solvable subgame <cit.> or a special structure of the objective function <cit.>. <cit.> propose a Stackelberg game modeling the interaction between retailers and customers who use electric vehicles (EVs). <cit.> modeled the interactions between multiple companies and energy users as an M-N Stackelberg game. By utilizing the property that the subgame can be analytically computed, these studies transform the original Stackelberg game into a normal-form game and compute its equilibrium. However, this method can only be applied to a simple Stackelberg game where the solution of the subgame can be obtained analytically. Wang et al. (2018) modeled an RF-powered cognitive radio network system as the generalized Stackelberg potential game and proposed the directional ascent method to compute the generalized Stackelberg equilibrium. Because the utility function of the potential game is independent of the other players' decisions, the generalized Stackelberg equilibrium is computed in polynomial time.
The studies discussed above compute a generalized Stackelberg equilibrium only under strong restrictions on the game structure. The NP algorithms are applicable only to small-size problems due to their time complexity. The polynomial-time algorithms are applicable only to problems with restrictions on the subgame or the utility functions. In contrast, the current study computes the equilibrium of a Stackelberg game in polynomial time without sacrificing the generality of the game structure, such as the structures of the subgame and the followers' utility functions.
§ GENERALIZED NASH EQUILIBRIUM FOR THE SUBGAME OF A GENERALIZED STACKELBERG GAME
This section introduces the equilibrium analysis for a generalized normal-form game G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> where 𝐅=[N] is a set of followers, f_i be the objective function of the follower i∈𝐅, Ω_i is the strategy set of follower i∈𝐅. It models the non-cooperative behavior of followers with the joint constraints when the leader's decision is 𝐲. This followers' subgame G(𝐲) of the generalized Stackelberg game Γ is described in equations (<ref>) and (<ref>).
𝐱_i^*(𝐲) = _𝐱_i∈Ω_i(𝐲, 𝐱^*_-i(𝐲))f_i(𝐲, 𝐱_i, 𝐱^*_-i(𝐲)), ∀ i∈𝐅
where Ω_i(𝐲, 𝐱_-i) = {𝐱_i∈ℝ^n_i|
h^j_i(𝐲, 𝐱_i, 𝐱_-i) ≤ 0, ∀ j∈[ p_i ]
l^k_i(𝐲, 𝐱_i, 𝐱_-i) = 0, ∀ k∈[ q_i ]
}
where 𝐱_i is the decision of follower i, and 𝐱_-i=(𝐱_1, ⋯, 𝐱_i-1, 𝐱_i+1, ⋯, 𝐱_N) is the follower's joint decision except follower i.
Here, we provide the conditions where a generalized Nash equilibrium, that is the optimal solution of a generalized normal-form game G(𝐲), always exists by using the theorems regarding the existence and uniqueness of the variational equilibrium of the generalized normal-form game. This can be used to provide the equilibrium conditions between a leader and followers. Figure <ref> illustrates the procedure to conclude the existence of a generalized Nash equilibrium and its extension to a generalized Stackelberg game (this will be discussed in Section 4).
We first define a generalized Nash equilibrium <cit.>. A generalized Nash equilibrium (𝐱_i^*(𝐲))_i∈𝐅 is the joint decision where no user has an incentive to deviate from their decision unless other users change their decisions.
The joint decision 𝐱^*(𝐲)∈∏_i∈𝐅Ω_i(𝐲, 𝐱^*_-i(𝐲)) is a generalized Nash equilibrium of the subgame G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> if
f_i(𝐲, 𝐱^*(𝐲))≥ f_i(𝐲, 𝐱_i, 𝐱^*_-i(𝐲)), ∀𝐱_i∈Ω_i(𝐲, 𝐱_-i^*(𝐲)), ∀ i∈𝐅
where 𝐱^*(𝐲) = (𝐱^*_i(𝐲))_i∈𝐅, and 𝐱^*_-i(𝐲) = (𝐱^*_1(𝐲), ⋯, 𝐱^*_i-1(𝐲), 𝐱^*_i+1(𝐲), ⋯, 𝐱^*_N(𝐲)).
Next, we define a variational equilibrium <cit.> to prove the existence of a generalized Nash equilibrium. A variational equilibrium is defined as a solution of variational inequality.
The joint decision 𝐱^*(𝐲)∈∏_i∈𝐅Ω_i(𝐲, 𝐱^*_-i(𝐲)) is a variational equilibrium of the subgame G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> if
𝐃(𝐲, 𝐱^*(𝐲))^T(𝐱^*(𝐲)-𝐱)≥ 0, ∀𝐱∈∏_i∈𝐅Ω_i(𝐲, 𝐱_-i)
where 𝐃(𝐲, 𝐱^*(𝐲))=(∇_𝐱_if_i(𝐲, 𝐱^*(𝐲)))_i∈𝐅 is the gradient of followers' objective,
𝐱^*(𝐲) = (𝐱^*_i(𝐲))_i∈𝐅, 𝐱^*_-i(𝐲) = (𝐱^*_1(𝐲), ⋯, 𝐱^*_i-1(𝐲), 𝐱^*_i+1(𝐲), ⋯, 𝐱^*_N(𝐲)),
𝐱=(𝐱_i)_i∈𝐅,
and 𝐱_-i = (𝐱_1, ⋯, 𝐱_i-1, 𝐱_i+1, ⋯, 𝐱_N).
Computing a generalized Nash equilibrium directly is difficult in general because each follower's strategy set depends on the decisions of the other followers. However, a variational equilibrium is also a generalized Nash equilibrium under certain conditions. Therefore, we compute the variational equilibrium to find a generalized Nash equilibrium of a generalized normal-form game that satisfies the conditions described in Theorem <ref>.
Theorem <ref> provides the condition for the existence and the uniqueness of the variational equilibrium in the generalized normal-form game. Then, we provide the condition of the existence of a generalized Nash equilibrium using the relationship with the variational equilibrium in Theorem <ref>.
Let 𝐃(·)=(∇_𝐱_if_i(·))_i∈𝐅 is the gradient of followers' objective.
If the following three conditions are satisfied, then the subgame G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> has the unique variational equilibrium.
* C1. Ω_i is closed and convex for all i∈𝐅
* C2. f_i is continuous on Ω_i for all i∈𝐅
* C3. -𝐃(·) is strongly monotone on ∏_i∈𝐅Ω_i
Proof of Theorem <ref>
It is proven by Theorem 2.3.3 of <cit.>.
Next, we provide the conditions where the variational equilibrium becomes a generalized Nash equilibrium.
If the following three conditions are satisfied, then the variational equilibrium of G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> is also a generalized Nash equilibrium of G(𝐲).
* C1. Ω_i is closed and convex for all i∈𝐅
* C2. f_i is concave C^1-function on Ω_i for all i∈𝐅
* C3. ∏_i∈𝐅Ω_i is jointly convex
Proof of Theorem <ref>
It is proven by Theorem 5 of <cit.>.
Theorems <ref> and <ref> provide the conditions where a generalized Nash equilibrium exists. That is, if a given subgame G(𝐲) satisfies the conditions in Theorem <ref>, we can find a generalized Nash equilibrium by computing the unique variational equilibrium of the game G(𝐲). These theorems will be utilized in Section 4 to provide the conditions where the equilibrium of a generalized Stackelberg game exists.
§ GENERALIZED STACKELBERG EQUILIBRIUM FOR THE GENERALIZED STACKELBERG GAME
In this section, we introduce the generalized Stackelberg equilibrium (optimal solution) of a 1-N generalized Stackelberg game (single-leader multi-follower game) Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅>, which is formulated in equations (<ref>) - (<ref>), is as follows:
𝐲^* = _𝐲∈Ω_Lf_L(𝐲, (𝐱_i^*(𝐲))_i∈𝐅)
𝐱_i^*(𝐲) = _𝐱_i∈Ω_i(𝐲, 𝐱^*_-i(𝐲))f_i(𝐲, 𝐱_i, 𝐱^*_-i(𝐲)), ∀ i∈𝐅
where Ω_L = {𝐲∈ℝ^n_L|
h^j_L(𝐲) ≤ 0, ∀ j ∈[ p_L]
l^k_L(𝐲) = 0, ∀ k ∈[ q_L]
}
Ω_i(𝐲, 𝐱_-i) = {𝐱_i∈ℝ^n_i|
h^j_i(𝐲, 𝐱_i, 𝐱_-i) ≤ 0, ∀ j∈[ p_i ]
l^k_i(𝐲, 𝐱_i, 𝐱_-i) = 0, ∀ k∈[ q_i ]
}
where f_L be the objective function of the leader, f_i be the objective function of the follower i∈𝐅, 𝐲 be the leader's decision belonging to the strategy set Ω_L, 𝐱_i be the follower i's decision belonging to their strategy set Ω_i(𝐲, 𝐱_-i), and 𝐅=[N] is a set of followers. Here, we provide the existence conditions of a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game Γ.
We first define a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game Γ. A generalized Stackelberg equilibrium (𝐲^*, (𝐱_i^*(𝐲^*))_i∈𝐅) is the joint decision where a leader makes the optimal decision 𝐲^* for maximizing its utility function f_L, while followers are on the generalized Nash equilibrium of the subgame G(𝐲^*) in equation (<ref>).
The joint decision (𝐲^*, 𝐱^*(𝐲^*))∈Ω_L×∏_i∈𝐅Ω_i(𝐲^*, 𝐱^*_-i(𝐲^*)) is a generalized Stackelberg equilibrium of Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> if
sup_𝐱^*(𝐲^*)∈GNE(𝐲^*)f_L(𝐲^*, 𝐱^*(𝐲^*)) ≥ inf_𝐱^*(𝐲)∈GNE(𝐲)f_L(𝐲, 𝐱^*(𝐲)), ∀𝐲∈Ω_L
where GNE(𝐲)={𝐱^*∈∏_i∈𝐅Ω_i(𝐲,𝐱^*_-i) | f_i(𝐲, 𝐱^*) ≥ f_i(𝐲, 𝐱_i, 𝐱_-i^*), ∀𝐱_i ∈Ω_i(𝐲, 𝐱^*_-i), ∀ i ∈𝐅} is a set of the followers' generalized Nash equilibrium when the leader's decision is 𝐲,
𝐱^*(𝐲) = (𝐱^*_i(𝐲))_i∈𝐅,
𝐱^*_-i(𝐲) = (𝐱^*_1(𝐲), ⋯, 𝐱^*_i-1(𝐲), 𝐱^*_i+1(𝐲), ⋯, 𝐱^*_N(𝐲)), 𝐱^*=(𝐱_i^*)_i∈𝐅
, and 𝐱^*_-i = (𝐱^*_1, ⋯, 𝐱^*_i-1, 𝐱^*_i+1, ⋯, 𝐱^*_N).
Next, we define a variational Stackelberg equilibrium of the 1-N generalized Stackelberg game Γ to prove the existence of a generalized Stackelberg equilibrium. A variational Stackelberg equilibrium (𝐲^*, (𝐱_i^*(𝐲^*))_i∈𝐅) is the joint decision where a leader optimize its decision 𝐲^* while the followers' joint decision is on the variational equilibrium of the subgame G(𝐲^*) in equation (<ref>).
The joint decision (𝐲^*, 𝐱^*(𝐲^*))∈Ω_L×∏_i∈𝐅Ω_i(𝐲^*, 𝐱^*_-i(𝐲^*)) is a variational Stackelberg equilibrium of Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> if
sup_𝐱^*(𝐲^*)∈VE(𝐲^*)f_L(𝐲^*, 𝐱^*(𝐲^*)) ≥ inf_𝐱^*(𝐲)∈VE(𝐲)f_L(𝐲, 𝐱^*(𝐲)), ∀𝐲∈Ω_L
where VE(𝐲)={𝐱^*∈∏_i∈𝐅Ω_i(𝐲,𝐱^*_-i) | 𝐃(𝐲, 𝐱^*)^T(𝐱^*-𝐱)≥ 0, ∀𝐱∈∏_i∈𝐅Ω_i(𝐲, 𝐱_-i)} is a set of the follower's variational equilibrium when the leader's decision is 𝐲, 𝐃(𝐲, 𝐱^*)=(∇_𝐱_if_i(𝐲, 𝐱^*))_i∈𝐅 is the gradient of the followers' objective, 𝐱^*(𝐲) = (𝐱^*_i(𝐲))_i∈𝐅,
𝐱^*_-i(𝐲) = (𝐱^*_1(𝐲), ⋯, 𝐱^*_i-1(𝐲), 𝐱^*_i+1(𝐲), ⋯, 𝐱^*_N(𝐲)), 𝐱^*=(𝐱_i^*)_i∈𝐅,
𝐱^*_-i = (𝐱^*_1, ⋯, 𝐱^*_i-1, 𝐱^*_i+1, ⋯, 𝐱^*_N), 𝐱=(𝐱_i)_i∈𝐅,
and 𝐱_-i = (𝐱_1, ⋯, 𝐱_i-1, 𝐱_i+1, ⋯, 𝐱_N).
The generalized Stackelberg equilibrium may not exist depending on the structures of the utility function and the strategy set. However, we can define the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> that always has a generalized Stackelberg equilibrium by utilizing Theorems <ref> and <ref>.
Let 𝐃(·)=(∇_𝐱_if_i(·))_i∈𝐅 is the gradient of followers' objective.
If the following four conditions are satisfied, then the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> has a variational Stackelberg equilibrium, and it is also a generalized Stackelberg equilibrium of Γ.
* C1. Ω_i is closed and convex for all i ∈𝐅
* C2. f_i is concave C^1-function on Ω_i for all i ∈𝐅
* C3. ∏_i∈𝐅Ω_i is jointly convex
* C4. -𝐃(·) is strongly monotone on ∏_i∈𝐅Ω_i
The proof is provided in Appendix <ref>.
In addition, the uniqueness of the generalized Stackelberg equilibrium is guaranteed if some additional conditions are satisfied.
Let Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> be a 1-N generalized Stackelberg game that satisfies the conditions of Theorem <ref>. Then, Γ has the unique generalized Stackelberg equilibrium if the following three conditions are satisfied: (1) f_L(𝐲, 𝐱(𝐲)) is strictly concave on 𝐲∈Ω_L; (2) Ω_L is closed and convex, and (3) the set of a generalized Nash equilibrium of G(𝐲) is unique for all 𝐲∈Ω_L.
The proof is provided in Appendix <ref>.
Theorem <ref> states the conditions under which a 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> always has a generalized Stackelberg equilibrium. In the remaining sections, we assume that Γ satisfies the conditions of Theorem <ref>.
§ COMPUTING METHOD
To find an equilibrium in polynomial time, we develop a method to convert the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> to the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F> having the same solution (Section 5.1). We then propose the projected implicit gradient descent (PIGD) algorithm to find a Stackelberg equilibrium of the transformed 1-1 Stackelberg game Γ̂ (Section 5.2). Figure <ref> illustrates the procedure to compute a generalized Stackelberg equilibrium for the 1-N generalized Stackelberg game Γ. Finally, we prove that the proposed computing method finds equilibrium in polynomial time (Section 5.3).
§.§ Transforming Method from the 1-N Generalized Stackelberg Game Γ to the 1-1 Stackelberg Game Γ̂
There is a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game Γ while the four conditions of Theorem <ref> are satisfied. However, there have been no algorithms that can solve the generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game in polynomial time. Therefore, to compute a generalized Stackelberg equilibrium in polynomial time, we transform the 1-N generalized Stackelberg game into a solvable problem. Specifically, we transform the 1-N generalized Stackelberg game Γ into a 1-1 Stackelberg game Γ̂ whose Stackelberg equilibrium is identical to that of Γ.
First, we define the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F> whose Stackelberg equilibrium is identical to that of Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> which can be formulated as follows:
𝐲^* = _𝐲∈Ω_Lf_L(𝐲, 𝐱^*(𝐲) )
𝐰^*(𝐲) = _𝐰∈Ω_F(𝐲)f_F(𝐲, 𝐰)
where 𝐰 := (𝐱, 𝐳, λ, μ)∈ℝ^n_F, n_F:=∑_i∈𝐅(2n_i+p_i+q_i)
f_F(𝐲, 𝐰) = (∇_𝐱_i f_i(𝐲, 𝐱))_i∈𝐅^T(𝐱-𝐳)
Ω_L = {𝐲∈ℝ^n_L|
h^j_L(𝐲) ≤ 0, ∀ j ∈[ p_L]
l^k_L(𝐲) = 0, ∀ k ∈[ q_L]
}
Ω_F(𝐲) = {𝐰∈ℝ^n_F|
-(∇_𝐱_i f_i(𝐲, 𝐱))_i∈𝐅^T+∑_i∈𝐅∑_j∈[p_i]λ_i^j∇_𝐳h_i^j(𝐲, 𝐳)
+∑_i∈𝐅∑_k∈[q_i]μ_i^k∇_𝐳l_i^k(𝐲, 𝐳) = 0
h^j_i(𝐲, 𝐱) ≤ 0, ∀ j∈[ p_i ], ∀ i∈𝐅
l^k_i(𝐲, 𝐱) = 0, ∀ k∈[ q_i ], ∀ i∈𝐅
λ_i^j h_i^j(𝐲, 𝐳) = 0, ∀ j∈[p_i], ∀ i∈𝐅
h_i^j(𝐲, 𝐳) ≤ 0, ∀ j∈[p_i], ∀ i∈𝐅
l_i^k(𝐲, 𝐳) = 0, ∀ k∈[q_i], ∀ i∈𝐅
λ_i^j≥ 0, ∀ j∈[p_i], ∀ i∈𝐅}
where f_L be the objective function of the leader, f_F be the objective function of the follower, 𝐲 is the leader's decision belonging to the strategy set Ω_L, and 𝐰 is the follower's decision belonging to the strategy set Ω_F(𝐲). To be specific, the follower's decision is composed of 𝐱=(𝐱_i)_i∈𝐅∈ℝ^∑_i∈𝐅n_i,
𝐳=(𝐳_i)_i∈𝐅∈ℝ^∑_i∈𝐅n_i,
λ=(λ_i^j)_i,j∈ℝ^∑_i∈𝐅p_i, and
μ=(μ_i^k)_i,k∈ℝ^∑_i∈𝐅q_i. Now, we prove that the Stackelberg equilibrium of Γ̂ is equivalent to the generalized Stackelberg equilibrium of Γ.
The set of the variational Stackelberg equilibrium of the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> is equivalent to the set of the Stackelberg equilibrium of the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F>.
Proof of Theorem <ref>
Let G(𝐲)=<𝐅, (f_i)_i∈𝐅, (Ω_i)_i∈𝐅> be the N followers' subgame of Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅>. By equation (<ref>) of Definition <ref>, the set of variational equilibrium of G(𝐲) is defined as
{𝐱∈∏_i∈𝐅Ω_i(𝐲,𝐱_-i)| 𝐃(𝐲, 𝐱)^T(𝐱 - 𝐳)≥ 0, ∀𝐳∈∏_i∈𝐅Ω_i(𝐲, 𝐳_-i)}
where 𝐃(𝐲, 𝐱)=(∇_𝐱_if_i(𝐲, 𝐱))_i∈𝐅 is the gradient of followers' objective, 𝐱=(𝐱_i)_i∈𝐅, 𝐱_-i=(𝐱_1, ⋯, 𝐱_i-1, 𝐱_i+1, ⋯, 𝐱_N), 𝐳=(𝐳_i)_i∈𝐅, and 𝐳_-i=(𝐳_1, ⋯, 𝐳_i-1, 𝐳_i+1, ⋯, 𝐳_N). Then, the variational Stackelberg equilibrium of Γ is the solution of the following decision-making problem by the definition of the variational equilibrium:
max_𝐲∈Ω_Lf_L(𝐲, 𝐱)
s.t. 𝐃(𝐲, 𝐱)^T(𝐱-𝐳)≥ 0, ∀𝐳∈∏_i∈𝐅Ω_i(𝐲, 𝐳_-i)
𝐱∈∏_i∈𝐅Ω_i(𝐲,𝐱_-i)
Let 𝐳^*(𝐲, 𝐱)=_𝐳∈∏_i∈𝐅Ω_i(𝐲, 𝐳_-i)𝐃(𝐲, 𝐱)^T(𝐱-𝐳). Obviously, the solution set of the problem (<ref>) is equivalent to the solution set of the following decision-making problem:
max_𝐲∈Ω_Lf_L(𝐲, 𝐱)
s.t. 𝐃(𝐲, 𝐱)^T(𝐱-𝐳^*(𝐲, 𝐱))≥ 0, 𝐱∈∏_i∈𝐅Ω_i(𝐲,𝐱_-i)
There is the unique variational equilibrium of G(𝐲) by Theorem <ref> since three conditions are satisfied: (1) Ω_i is closed and convex for i∈𝐅 by the first condition of Theorem <ref>; (2) f_i is continuous on Ω_i for i∈𝐅 by the second condition of Theorem <ref>; and (3) (-∇_𝐱_if_i(𝐲, 𝐱))_i∈𝐅 is strongly monotone on ∏_i∈𝐅Ω_i by the fourth condition of Theorem <ref>. It means that the variational equilibrium 𝐱^*(𝐲) of G(𝐲) is unique and always exists for all the leader's decisions 𝐲∈Ω_L.
Let 𝐱̂(𝐲)=_𝐱∈∏_i∈𝐅Ω_i(𝐲, 𝐱_-i)𝐃(𝐲, 𝐱)^T(𝐱 - 𝐳^*(𝐲, 𝐱)). Then, the following equation holds by the definition of 𝐱̂(𝐲).
𝐃(𝐲, 𝐱̂(𝐲))^T(𝐱̂(𝐲) - 𝐳^*(𝐲, 𝐱̂(𝐲))) ≥𝐃(𝐲, 𝐱^*(𝐲))^T(𝐱^*(𝐲) - 𝐳^*(𝐲, 𝐱^*(𝐲)))
By the definition of 𝐱^*(𝐲), equation (<ref>) holds.
𝐃(𝐲, 𝐱^*(𝐲) )^T(𝐱^*(𝐲) - 𝐳^*(𝐲, 𝐱^*(𝐲))) ≥ 0
When we substitute equation (<ref>) into equation (<ref>), we obtain 𝐃(𝐲, 𝐱̂(𝐲))^T(𝐱̂(𝐲) - 𝐳^*(𝐲, 𝐱̂(𝐲))) ≥ 0, that is, 𝐱̂(𝐲) is also a variational equilibrium of G(𝐲). Since G(𝐲) has the unique variational equilibrium for all 𝐲∈Ω_L, 𝐱^*(𝐲) is equal to 𝐱̂(𝐲). Therefore, the solution set of the problem (<ref>) is equivalent to the solution set of the following optimization problem:
max_𝐲∈Ω_Lf_L(𝐲, 𝐱^*(𝐲))
s.t. 𝐱^*(𝐲)=_𝐱∈∏_i∈𝐅Ω_i(𝐲, 𝐱_-i)𝐃(𝐲, 𝐱)^T(𝐱-𝐳^*(𝐲, 𝐱))
s.t. 𝐳^*(𝐲, 𝐱) = _𝐳∈∏_i∈𝐅Ω_i(𝐲, 𝐳_-i)𝐃(𝐲, 𝐱)^T(𝐱-𝐳)
Finally, we transform the problem (<ref>) into the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F> by using the KKT conditions of 𝐳^*(𝐲, 𝐱) = _𝐳∈∏_i∈𝐅Ω_i(𝐲, 𝐳_-i)𝐃(𝐲, 𝐱)^T(𝐱-𝐳).
max_𝐲∈Ω_Lf_L(𝐲, 𝐱^*(𝐲))
s.t. (𝐱^*(𝐲), 𝐳^*(𝐲), λ(𝐲), μ(𝐲) )=_(𝐱, 𝐳, λ, μ) ∈∏_i∈𝐅Ω_i(𝐲, 𝐱_-i)∩𝐒𝐃(𝐲, 𝐱)^T(𝐱-𝐳)
where λ=(λ_i^j)_i, j, μ=(μ_i^k)_i, k, and 𝐒 is the set of the KKT conditions of 𝐳^*(𝐲, 𝐱) = _𝐳∈∏_i∈𝐅Ω_i(𝐲, 𝐳_-i)𝐃(𝐲, 𝐱)^T(𝐱-𝐳). More specifically, 𝐒 is defined as follows:
𝐒=
{𝐱, 𝐳, λ, μ|
-𝐃(𝐲, 𝐱)^T+∑_i∈𝐅∑_j∈[p_i]λ_i^j∇_𝐳h_i^j(𝐲, 𝐳)
+ ∑_i∈𝐅∑_k∈[q_i]μ_i^k∇_𝐳l_i^k(𝐲, 𝐳) = 0
λ_i^j h_i^j(𝐲, 𝐳) = 0, ∀ j∈[p_i], ∀ i∈𝐅
h_i^j(𝐲, 𝐳) ≤ 0, ∀ j∈[p_i], l_i^k(𝐲, 𝐳) = 0, ∀ k∈[q_i], ∀ i∈𝐅
λ_i^j≥ 0, ∀ j∈[p_i], ∀ i∈𝐅}
Because f_F=𝐃(𝐲, 𝐱)^T(𝐱-𝐳) and Ω_F is equivalent to ∏_i∈𝐅Ω_i(𝐲, 𝐱_-i)∩𝐒, the solution set of the problem (<ref>) is equivalent to the Stackelberg equilibrium of the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F>. It means that the set of the Stackelberg equilibrium of Γ̂ is equivalent to the set of the variational Stackelberg equilibrium of Γ.
Thus, we can compute a variational Stackelberg equilibrium of the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> by computing a Stackelberg equilibrium of the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F>. Then, the computed variational Stackelberg equilibrium is a generalized Stackelberg equilibrium by Theorem <ref>. Next, we discuss how to compute the Stackelberg equilibrium of the 1-1 Stackelberg game Γ̂.
§.§ Computing Method for the 1-1 Stackelberg Game Γ̂
To formally model the decision-making procedure of the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F>, let f_L be the objective function of the leader, and 𝐲 is the leader's decision belonging to the strategy set Ω_L. A leader solves the following decision-making problem:
max_𝐲∈Ω_Lf_L(𝐲, 𝐱^*(𝐲))
Once a leader makes a decision, a follower optimizes its objective function f_F as follows:
max_𝐰∈Ω_F(𝐲)f_F(𝐲, 𝐰)
where 𝐰 = (𝐱, 𝐳, λ, μ) is the follower's decision variable belonging to the strategy set Ω_F(𝐲).
We apply the projected implicit gradient descent (PIGD) algorithm to solve the 1-1 Stackelberg game Γ̂. The leader iteratively updates its decision to increase the objective until reaching the stationary point during which the follower (the fictitious follower that actually represents the N followers in the original 1-N generalized Stackelberg game) also iteratively updates its decision to maximize its objective. In solving this problem, it is essential to track how the leader's decision affects the solution of the lower-level problem and tracks back its influence for optimizing the leader's decision. Algorithm <ref> summarizes the overall process of computing the Stackelberg equilibrium of the 1-1 Stackelberg game using the PIGD algorithm.
Algorithm <ref> is an iterative algorithm composed of the updating procedure of the leader's decision and the solution computing procedure of the follower. To apply Algorithm <ref>, the solution 𝐰^(t) of the lower-level problem and the gradient of the leader's objective with respect to the leader's decision d/d 𝐲 f_L(𝐲^(t),𝐰^(t)) should be computed per every iteration.
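As an illustration of this structure, a minimal Python sketch of the PIGD loop is given below; the callables for the lower-level solver, the partial gradients, the implicit gradient, and the projection are placeholders for the procedures described in the following subsections.

```python
import torch

def pigd(y0, solve_follower, grad_fL_y, grad_fL_w, implicit_grad_w_y, project_leader,
         step_size=1e-2, max_iters=500, tol=1e-6):
    """Sketch of projected implicit gradient descent (ascent on the leader's objective f_L).

    solve_follower(y)        -> w*(y), the lower-level (variational-equilibrium) solution.
    grad_fL_y(y, w)          -> partial derivative of f_L w.r.t. y, shape (n_L,).
    grad_fL_w(y, w)          -> partial derivative of f_L w.r.t. w, shape (n_F,).
    implicit_grad_w_y(y, w)  -> dw/dy from implicit differentiation, shape (n_F, n_L).
    project_leader(y)        -> Euclidean projection onto the leader's strategy set Omega_L.
    """
    y = y0.clone()
    for _ in range(max_iters):
        w = solve_follower(y)
        # chain rule: df_L/dy = partial_y f_L + (dw/dy)^T partial_w f_L
        total_grad = grad_fL_y(y, w) + implicit_grad_w_y(y, w).T @ grad_fL_w(y, w)
        y_next = project_leader(y + step_size * total_grad)  # ascent step, then projection
        if torch.norm(y_next - y) < tol:
            y = y_next
            break
        y = y_next
    return y, solve_follower(y)
```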
§.§.§ Computing 𝐰^(t) from given 𝐲^(t).
Given the leader's decision 𝐲^(t), the lower-level solution 𝐰^(t)=(𝐱^(t),𝐳^(t),λ^(t),μ^(t)) is computed by applying the variational equilibrium concept. The solution can be simply induced from the variational equilibrium of the followers' subgame G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> of the original 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> because Theorem <ref> proves that 𝐱^(t), which is a component of the solution 𝐰^(t), is equivalent to the variational equilibrium of G(𝐲).
To compute the variational equilibrium, various algorithms such as projected gradient descent <cit.> and NI-function type method <cit.> can be employed. In this study, we compute the variational equilibrium of G(𝐲) by applying the projected gradient descent
𝐱^(k+1)= proj_𝐂^(k)(𝐱^(k)+ρ𝐃(𝐲^(t), 𝐱^(k)))
where 𝐂^(k)=∏_i∈𝐅Ω_i(𝐲^(t), 𝐱^(k)_-i) is the feasible region, and ρ is a positive step size. 𝐳^(t) is identical to 𝐱^(t) because 𝐳^(t)=_𝐳∈∏_i∈𝐅Ω_i(𝐲^(t), 𝐳_-i)𝐃(𝐲^(t), 𝐱^(k))^T(𝐱^(t)-𝐳) and the objective is non-negative by the definition of the variational equilibrium. After computing 𝐱^(t) and 𝐳^(t), we can compute the remaining solutions, λ^(t) and μ^(t), since they are feasible in the set 𝐒 described in equation (<ref>).
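A corresponding sketch of this inner projected-gradient iteration is shown below; the pseudo-gradient and the projection onto the joint feasible set are problem-specific and therefore passed in as placeholders.

```python
import torch

def variational_equilibrium(x0, pseudo_gradient, project_joint, rho=1e-2,
                            max_iters=2000, tol=1e-8):
    """Projected-gradient sketch for the followers' variational equilibrium at a fixed leader decision.

    pseudo_gradient(x) -> D(y, x), the stacked gradients (grad_{x_i} f_i(y, x))_i.
    project_joint(x)   -> projection onto the joint feasible set prod_i Omega_i(y, x_-i).
    """
    x = x0.clone()
    for _ in range(max_iters):
        x_next = project_joint(x + rho * pseudo_gradient(x))  # ascent step on the concave objectives
        if torch.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```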
§.§.§ Computing the gradient of the leader's objective.
The gradient of the leader's objective f_L(𝐲^(t),𝐰^(t)) with respect to its decision 𝐲 given the lower-level solution 𝐰^(t) can be computed using the chain rule, as shown in equation (<ref>). In the chain rule, ∂/∂𝐲 f_L(𝐲^(t),𝐰^(t)) and ∂/∂𝐰f_L(𝐲^(t),𝐰^(t)) can be computed explicitly from the given leader's objective function f_L. The challenging part is how to compute d/d 𝐲𝐰^(t)(𝐲^(t)), that is the gradient of the follower's decision 𝐰^(t)(𝐲) with respect to the leader's decision 𝐲. In most cases, we cannot differentiate the follower's decision 𝐰^(t)(𝐲) with respect to the leader's decision 𝐲 since the explicit formula of the 𝐰^(t)(𝐲) in terms of 𝐲 is unknown. Thus, we apply the implicit differentiation techniques to compute d/d 𝐲𝐰^(t)(𝐲^(t)) <cit.>.
When computing the gradient, handling inequality constraints in the follower's optimization problem is often complicated and intractable. To resolve this difficulty, <cit.> propose a simplified method for computing the implicit gradient while considering only the active inequality constraints (i.e., the inequality constraints whose value becomes zero given the current solution). Note that this choice can be justified in that a small step size ρ results in small changes of the leader's and followers' decisions. That is, the inactive inequality constraints of the follower's strategy set Ω_F will remain inactive after one iteration of updating the leader's decision with the gradient descent. In this sense, the inactive inequality constraints can be neglected while computing the gradient of the followers' decision with respect to the leader's decision in the 1-1 Stackelberg game Γ̂. The activity of the inequality constraints of Ω_F(𝐲) is directly determined by λ^(t).
Thus, we rewrite the follower's strategy set of Γ̂ at step t, Ω_F^(t)(𝐲^(t)), consisting of the active inequality constraints and the equality constraints of the strategy set Ω_F(𝐲) in the problem (<ref>):
Ω_F^(t)(𝐲^(t))={𝐰∈ℝ^n_w| l_k^(t)(𝐲^(t), 𝐰)=0, ∀ k ∈[q_F^(t)] }
where l_k^(t) represents the k-th equality constraint at step t. Then, we can express the Lagrangian of the lower-level problem as ℒ(𝐲^(t),𝐰^(t),λ^(t)) = f_F(𝐲^(t), 𝐰^(t)) + ∑_k ∈[q_F^(t)]λ_k^(t) l_k^(t)(𝐲^(t), 𝐰^(t)) where λ^(t) = (λ_k^(t))_k ∈[q_F^(t)].
Assume that M_F^(t) = (∇_𝐰l_k^(t)(𝐲^(t), 𝐰^(t)))_k ∈[q_F^(t)] has full rank. Then, the lower-level solution 𝐰^(t) has a corresponding Lagrange multiplier λ^(t) such that (𝐰^(t), λ^(t)) is a stationary point of the Lagrangian. In other words, at the stationary point, the gradient of the Lagrangian with respect to the follower's decision variable becomes a zero vector
∇_𝐰ℒ(𝐲^(t),𝐰^(t),λ^(t)) = ∇_𝐰f_F(𝐲^(t),𝐰^(t)) + λ^(t)^T M_F^(t) = 0
∇_λℒ(𝐲^(t),𝐰^(t),λ^(t)) = (l_k^(t)(𝐲^(t), 𝐰^(t)))_k∈[q_F^(t)]^T=0
We can compute the gradient of the follower's decision with respect to the leader's decision by differentiating the gradient of the Lagrangian in equation (<ref>) with respect to the leader's decision variable as:
d 𝐰^(t)(𝐲^(t))/d 𝐲 = M_FF^(t)^-1M^(t)_F^T(M^(t)_FM^(t)_FF^-1M^(t)_F^T)^-1(M^(t)_FM^(t)_FF^-1M^(t)_LF - M^(t)_L)-M^(t)_FF^-1M^(t)_LF
where
M_F^(t) = (∇_𝐰l_k^(t)(𝐲^(t), 𝐰^(t)))_k ∈[q_F^(t)]
M_LF^(t) = -∇^2_𝐲𝐰f_F(𝐲^(t),𝐰^(t))-∑_k∈[q_F^(t)]λ_k^(t)∇^2_𝐲𝐰l_k^(t)(𝐲^(t), 𝐰^(t))
M_L^(t) = (∇_𝐲l_k^(t)(𝐲^(t), 𝐰^(t)))_k ∈[q_F^(t)]
M_FF^(t) = -∇^2_𝐰𝐰f_F(𝐲^(t), 𝐰^(t)) - ∑_k∈[q_F^(t)]λ_k^(t)∇^2_𝐰𝐰l_k^(t)(𝐲^(t), 𝐰^(t))
and λ^(t)=-(M^(t)_FM^(t)_F^T)^-1M^(t)_F(∇_𝐰f_F(𝐲^(t),𝐰^(t)))^T by Proposition 4.6 of <cit.>.
To compute the gradient of the follower's decision with respect to the leader's decision using equation (<ref>), M_FF^(t) must be non-singular. Furthermore, the existence and uniqueness of the Lagrange multipliers are guaranteed only when M_F^(t) is a full-rank matrix. However, these conditions do not necessarily hold in the general framework. Thus, we use the pseudo-inverse to approximate the gradient of the follower's decision with respect to the leader's decision. Once it is computed, the gradient of the leader's objective with respect to the leader's decision d f_L(𝐲,𝐰)/d 𝐲 is computed by equation (<ref>).
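A minimal NumPy sketch of this step is given below. It assumes the matrices M_F^(t), M_FF^(t), M_LF^(t), and M_L^(t) have already been assembled from the active constraints and the Lagrangian Hessians, adopts one consistent shape convention (an assumption), and applies the pseudo-inverse precisely because the invertibility and full-rank conditions discussed above may fail.

import numpy as np

def implicit_gradient(M_F, M_FF, M_LF, M_L):
    """Approximate d w / d y from the implicit-function formula above.

    Assumed shapes (illustrative convention):
    M_F  : (q_F, n_w) Jacobian of active constraints w.r.t. the follower decision w
    M_FF : (n_w, n_w) Lagrangian Hessian block w.r.t. w
    M_LF : (n_w, n_L) mixed Lagrangian Hessian block w.r.t. (y, w)
    M_L  : (q_F, n_L) Jacobian of active constraints w.r.t. the leader decision y
    Returns d w / d y of shape (n_w, n_L).
    """
    M_FF_inv = np.linalg.pinv(M_FF)                # pseudo-inverse: M_FF may be singular
    S = np.linalg.pinv(M_F @ M_FF_inv @ M_F.T)     # Schur-type block, also pseudo-inverted
    return (M_FF_inv @ M_F.T @ S @ (M_F @ M_FF_inv @ M_LF - M_L)
            - M_FF_inv @ M_LF)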
§.§.§ Updating the leader's decision.
Once d f_L(𝐲^(t),𝐰^(t))/d 𝐲 is computed, we can then update the leader's decision using equation (<ref>).
In summary, we provide a PIGD algorithm that can be applied to the transformed 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F>. First, we find the lower-level solution of Γ̂ in each iteration by computing the variational equilibrium of the N followers' subgame G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> of the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> using the well-known projected gradient descent algorithm <cit.>. After obtaining the lower-level solution, we approximate the gradient of the leader's objective with respect to the leader's decision using the implicit differentiation techniques <cit.>.
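Putting the pieces together, the outer loop of the PIGD algorithm could be sketched as follows. The callables solve_lower_level, implicit_grad, and proj_leader stand for the procedures described above, the step size eta and the iteration budget are assumptions, and the sign of the update should be flipped for a maximizing leader.

import numpy as np

def pigd(y0, solve_lower_level, grad_fL_y, grad_fL_w, implicit_grad, proj_leader,
         eta=0.05, t_max=200):
    """Projected implicit gradient descent on the leader's decision y.

    solve_lower_level : y -> w (variational equilibrium of the followers' subgame)
    grad_fL_y, grad_fL_w : partial gradients of the leader objective f_L(y, w)
    implicit_grad : (y, w) -> d w / d y (implicit differentiation step)
    proj_leader   : projection onto the leader's feasible set Omega_L
    """
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(t_max):
        w = solve_lower_level(y)                               # Step 1: lower-level solution w^(t)
        dw_dy = implicit_grad(y, w)                            # Step 2: d w / d y
        total_grad = grad_fL_y(y, w) + dw_dy.T @ grad_fL_w(y, w)   # chain rule
        y = proj_leader(y - eta * total_grad)                  # Step 3: projected step (minimizing leader)
    return y, solve_lower_level(y)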
§.§ Time Complexity of Computing Method
In the previous subsections, we developed a method to transform the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> into a 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F> (Section 5.1) and proposed a PIGD algorithm to find a Stackelberg equilibrium of Γ̂ (Section 5.2). Now, we prove that our PIGD algorithm finds the Stackelberg equilibrium of Γ̂ in polynomial time.
Let Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> be a 1-N generalized Stackelberg game where the strategy set is defined as
Ω_L = {𝐲∈ℝ^n_L|
h^j_L(𝐲) ≤ 0, ∀ j ∈[ p_L]
l^k_L(𝐲) = 0, ∀ k ∈[ q_L]
}
Ω_i(𝐲, 𝐱_-i) = {𝐱_i∈ℝ^n_i|
h^j_i(𝐲, 𝐱) ≤ 0, ∀ j∈[ p_i ]
l^k_i(𝐲, 𝐱) = 0, ∀ k∈[ q_i ]
}
Suppose the following four conditions are satisfied:
(1) h_L^j(𝐲)'s are convex on 𝐲;
(2) l_L^k(𝐲)'s are linear on 𝐲;
(3) h_i^j(𝐲, 𝐱)'s are convex on 𝐱; and
(4) l_i^k(𝐲, 𝐱)'s are linear on 𝐱. Then, the Stackelberg equilibrium of Γ can be computed in polynomial time with respect to the number of followers. Specifically, the time complexity of computing the Stackelberg equilibrium of Γ using Algorithm 1 is O(N^3.5).
Proof of Theorem <ref>
We can convert the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> to the 1-1 Stackelberg game Γ̂=<{1}, f_L, f_F, Ω_L, Ω_F> by Theorem <ref>. A single update of the leader decision 𝐲 in Γ̂ is conducted by the following three steps:
* Step1. Computing the lower-level solution 𝐰^(t) from given 𝐲^(t): Let n:=∑_i∈𝐅n_i, p:=∑_i∈𝐅p_i, and q:=∑_i∈𝐅q_i. Since the gradient of the objective function of follower i is strongly monotone, the iteration complexity of the projected gradient descent is O(log(1/ρ_1)) where ρ_1 is a residual error of search direction 𝐃(𝐲^(t), 𝐱^(k)), that is, ρ_1 ≥ ||𝐃( 𝐲^(t), 𝐱^(k))|| <cit.>. Next, the gradient update cost per iteration is O(n). Since h_i^j(𝐲, 𝐱)'s are convex and l_i^k(𝐲, 𝐱)'s are linear on 𝐱, the runtime to find ρ_2-approximate projection is O(n(p+2q)^2.5log^2(1/ρ_2) + (p+2q)^3.5log(1/ρ_2)) <cit.>. So, the overall runtime T_1 to compute the lower-level solution 𝐰^(t) from given 𝐲^(t) is O(log(1/ρ_1)×(n (p+2q)^2.5log^2(1/ρ_2) + (p+2q)^3.5log(1/ρ_2))).
* Step2. Computing the gradient of the leader's objective: Let 𝐒 be the set of the KKT conditions, that is defined as equation (<ref>). Since the number of inequality constraints of 𝐒 is 2p and the number of equality constraints of 𝐒 is n+p+q, the number of inequality constraints of Ω_F is 3p and the number of equality constraints of Ω_F is n+p+2q. So, the time complexity to compute the gradient of the follower's decision with respect to the leader's decision
d 𝐰^(t)(𝐲^(t))/d 𝐲 = M_FF^(t)^-1M^(t)_F^T(M^(t)_FM^(t)_FF^-1M^(t)_F^T)^-1(M^(t)_FM^(t)_FF^-1M^(t)_LF - M^(t)_L)-M^(t)_FF^-1M^(t)_LF is O((n_L+n+4p+2q)^3) when we use the pseudo-inverse. Then, the runtime T_2 to compute the gradient of the leader's objective, expressed in equation (<ref>), is O((n_L+n+4p+2q)^3 + n_L^2 + n_L^2 n)=O((n_L+n+4p+2q)^3).
* Step3. Updating the leader's decision 𝐲^(t+1): The time complexity to add the leader's gradient is O(n_L). The runtime to find ρ_3-approximate projection is O( n_L (p_L+2q_L)^2.5log^2(1/ρ_3) + (p_L+2q_L)^3.5log(1/ρ_3)) <cit.>. So, the overall runtime T_3 to update the leader's decision 𝐲^(t+1) is O(n_L (p_L+2q_L)^2.5log^2(1/ρ_3) + (p_L+2q_L)^3.5log(1/ρ_3)).
As a result, the time complexity of computing the Stackelberg equilibrium of Γ is T=t_max(T_1+T_2+T_3) where the maximum number of iteration of the PIGD algorithm is t_max. Since t_max, ρ_1, ρ_2, and ρ_3 are constant, and p, q, and n are linear to |𝐅|=N, the total runtime T to compute the Stackelberg equilibrium of Γ is O(N^3.5 + (N+n_L)^3 + n_L(p_L+2q_L)^2.5 + (p_L+2q_L)^3.5) = O ( N^3.5), that is polynomial to the number of followers.
In the following sections, we formulate real-world problems in the form of the 1-N Stackelberg games Γ and analyze the results of applying our algorithms.
§ SHARING PLATFORM PROBLEM
We employ the generalized Stackelberg game concept and its solution finding algorithm to model the operation of a sharing platform and compute its generalized Stackelberg equilibrium. Particularly, we consider the two problems of deriving operating strategies for EV charging stations:
* The first problem is optimizing the one-time charging price for EV users <cit.> in which a platform operator determines the price of electricity, and EV users determine the optimal amount of charging for their satisfaction. This problem is considered to verify that the proposed algorithm can produce a solution converging to the true equilibrium solution computed analytically.
* The second problem is the EV dispatching problem, in which the operator optimizes spatially varying charging prices to optimally balance demand and supply across the charging stations <cit.>: the platform operator determines the price of electricity and the EV users determine which charging station to go to. This problem is considered to validate that the proposed modeling and computing method can reliably improve the leader's objective in a more complex problem where no analytical solution is known. We compare the performance of the proposed PIGD algorithm with the proximal algorithm, which is designed to find a stationary point without considering the hierarchical structure.
§.§ One-time EV Charging Problem
§.§.§ Problem description.
We consider the EV charging problem with one operator and N EVs formulated in <cit.>. After the operator decides the electricity price p∈ℝ to maximize its profit, each EV i requests a charging amount x_i∈ℝ to maximize its level of satisfaction. In this problem, the followers must satisfy the joint constraint that the total energy requirement does not exceed the available energy capacity. The 1-N generalized Stackelberg game of the one-time EV charging problem is formulated as follows:
Leader's problem : max_p p∑_i=1^Nx_i
Follower i's problem : max_x_i b_ix_i-1/2s_ix_i^2-px_i
s.t. ∑_j=1^Nx_j≤ C
where the parameters b_i, s_i, and C represent battery capacity, satisfaction parameter, and joint charging limit of EVs, respectively.
§.§.§ Transformation to 1-1 Stackelberg game.
We transform the one-time EV charging problem into a 1-1 Stackelberg game having the same solution by applying the proposed converting scheme. By following the transforming procedure described in Appendix <ref>, the one-time EV charging problem is given as follows:
Leader's problem : max_p p∑_i=1^Nx_i
Followers' joint problem : max_𝐱,𝐳,μ ∑_i=1^N(-b_i+s_ix_i+p)(z_i-x_i)
s.t. ∑_i=1^Nx_i≤ C, ∑_i=1^Nz_i≤ C
-b_i+s_ix_i+p+μ=0, ∀ i∈[N]
μ(∑_i=1^Nz_i-C)=0, μ≥ 0
where the last two constraints are obtained from the KKT condition of 𝐳^*=_𝐳∈∏_i∈𝐅Ω_i(p, 𝐳_-i)𝐃(p, 𝐱)^T(𝐱-𝐳).
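As a concrete instance of the projected gradient iteration of the previous section applied to this followers' subgame, the sketch below computes the variational equilibrium for a given price p. The pseudo-gradient of follower i is b_i - s_i x_i - p, and the projection onto the half-space {x : ∑_i x_i ≤ C} has a closed form; the step size and tolerance are assumptions.

import numpy as np

def followers_ve(p, b, s, C, rho=0.05, tol=1e-8, max_iter=50_000):
    """Variational equilibrium of the one-time EV charging followers' subgame at price p."""
    b, s = np.asarray(b, dtype=float), np.asarray(s, dtype=float)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        grad = b - s * x - p                  # ascent direction of follower i's utility
        x_new = x + rho * grad
        excess = x_new.sum() - C
        if excess > 0:                        # project onto the half-space sum(x) <= C
            x_new -= excess / len(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x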
§.§ EV Dispatching Problem
§.§.§ Problem description.
We reformulate the problem proposed by <cit.>, which is a Stackelberg game, into a generalized Stackelberg game by adding joint constraints that the EVs must satisfy.
In this EV dispatching problem, the operator determines the price of electricity for M charging stations distributed in a city, and N EVs distributed over the city choose which charging station to use. The operator wants to regulate the expected number of EVs charging at station m to be close to a given target value V^m. To this end, the operator sets the electricity price vector 𝐩={p^m}_m∈[M] for the charging stations, which is bounded by a minimum price p_min and a maximum price p_max. Station m has an energy limit L^m that can be supplied and a maximum number of EVs U^m that can be accommodated.
After the operator sets the electricity price, EV i decides which charging station to use for charging a fixed amount of electricity E_i. The decision of the EV i, denoted by 𝐱_i={x_i^m}_m∈[M], is represented as a probability distribution that EV i chooses the station m. Then the expected number of EVs that will charge at the station m is given by v^m=∑_i=1^Nx_i^m. EV i determines its destination to minimize the cost induced by the distance 𝐝_i={d_i^m}_m∈[M], electricity price 𝐩, and congestion level 𝐯={v^m}_m∈[M] of the charging stations. In addition, since the importance of each cost term varies from person to person, each EV has its own coefficients representing the priority of distance, price, and congestion level, denoted by α_i^d, α_i^ p, and α_i^v for each EV i, respectively.
The 1-N generalized Stackelberg game of the EV dispatching problem is formulated as follows:
Leader's problem : min_𝐩 ∑_m=1^M(v^m-V^m)^2
s.t. p_min≤ p^m≤ p_max,∀m∈[M]
Follower i's problem : min_𝐱_i ∑_m=1^M(α_i^dd_i^mx_i^m+α_i^pE_ip^mx_i^m+α_i^vx_i^mv^m)
s.t. 0≤ x_i^m≤ 1, ∀m∈[M]
∑_m=1^Mx_i^m=1
∑_j=1^Nx_j^mE_j≤ L^m, v^m≤ U^m, ∀m∈[M]
The last two inequalities represent the constraints due to the maximum amount of electricity that can be offered and the maximum number of EVs that can be accommodated at the station m.
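To make the ingredients of this game concrete, the following sketch evaluates the leader's objective and follower i's cost for given prices and allocation probabilities; the array shapes (N EVs by M stations) and the variable names are illustrative assumptions.

import numpy as np

def leader_objective(x, V):
    """x: (N, M) allocation probabilities, V: (M,) target EV numbers per station."""
    v = x.sum(axis=0)                         # expected number of EVs per station
    return np.sum((v - V) ** 2)

def follower_cost(i, x, p, d, E, alpha_d, alpha_p, alpha_v):
    """Cost of EV i given all EVs' allocations x, prices p, and distances d (N, M)."""
    v = x.sum(axis=0)                         # congestion level of every station
    return np.sum(alpha_d[i] * d[i] * x[i]
                  + alpha_p[i] * E[i] * p * x[i]
                  + alpha_v[i] * x[i] * v)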
§.§.§ Transformation to 1-1 Stackelberg game.
By following the transforming procedure described in Appendix <ref>, the EV dispatching problem is given as follows:
Leader's problem : min_𝐩 ∑_m=1^M(v^m-V^m)^2
s.t. p_min≤ p^m≤ p_max,∀m∈[M]
Followers' joint problem : max_𝐱,𝐳,μ, λ ∑_i=1^N∑_m=1^M(A_i^m+α_i^v(x_i^m+v^m))(z_i^m-x_i^m)
s.t. 0≤ x_i^m≤ 1, ∀i∈[N], ∀m∈[M]
∑_m=1^Mx_i^m=1, ∀i∈[N]
∑_i=1^Nx_i^mE_i≤ L^m, v^m≤ U^m, ∀m∈[M]
0≤ z_i^m≤ 1, ∀i∈[N], ∀m∈[M]
∑_m=1^Mz_i^m=1, ∀i∈[N]
∑_i=1^Nz_i^mE_i≤ L^m, ∑_i=1^Nz_i^m≤ U^m, ∀m∈[M]
A_i^m+α_i^v(x_i^m+v^m)-μ_i,m^1+μ_i,m^2
+E_iμ_m^3+μ_m^4+λ_i=0, ∀ i∈[N], ∀ m∈[M]
μ_i,m^1z_i^m=0, μ_i,m^2(z_i^m-1)=0, ∀ i∈[N], ∀ m∈[M]
μ_m^3(∑_i=1^N(z_i^mE_i)-L^m)=0, ∀ m∈[M]
μ_m^4(∑_i=1^Nz_i^m-U^m)=0, ∀ m∈[M]
μ_i,m^1,μ_i,m^2,μ_m^3,μ_m^4≥ 0, ∀i∈[N], ∀m∈[M]
where A_i^m=α_i^dd_i^m+α_i^pE_ip^m. Here, μ={μ_i,m^1,μ_i,m^2,μ_m^3,μ_m^4}_i∈[N], m∈[M] and λ={λ_i}_i∈[N] are the Lagrange multipliers of the inequality constraints and the equality constraints, respectively.
Like most generalized Stackelberg games, the EV dispatching problem has no analytic solution or known algorithm for computing one, which hinders the validation of the proposed computing method. Therefore, we consider a baseline pricing strategy induced by the proximal algorithm to compare with the result of the PIGD algorithm. The proximal algorithm iteratively updates the leader's and the N followers' decisions until reaching a stationary point. The leader and the N followers update their decisions by best responding to the others' decisions, optimizing a regularized objective function whose penalty term discourages large changes from the previous step. The proximal algorithm does not consider the hierarchical structure of the problem but guarantees convergence to a generalized Nash equilibrium. The detailed algorithm is provided in Appendix <ref>.
The proximal algorithm is also used in the one-time EV charging problem as a baseline to show that the result from the proximal algorithm usually does not converge to the generalized Stackelberg equilibrium. In the next section, we discuss the performance of the proposed algorithm with the numerical results.
§ SIMULATION RESULTS AND ANALYSIS
We apply the PIGD algorithm to derive the generalized Stackelberg equilibrium of the proposed generalized Stackelberg games and compare it with the analytically derived equilibrium and the baseline result. First, we assess the convergence of the PIGD algorithm on the one-time EV charging problem by comparing the derived equilibrium strategy with the true generalized Stackelberg equilibrium computed in Appendix <ref>. Then, on the EV dispatching problem, we compare the objective values of both the leader and the followers obtained by the PIGD algorithm and the proximal algorithm to show that our algorithm performs better than the proximal algorithm from the leader's perspective.
§.§ Convergence of the Algorithm Using One-time EV Charging Problem
We first verify that the PIGD algorithm guarantees the convergence of the solution to the generalized Stackelberg equilibrium by solving the one-time EV charging problem. Figure <ref> (a) shows how the PIGD algorithm (red line) and the proximal algorithm (blue line) update the leader's decision for each iteration. Figure <ref> (b) shows the corresponding followers' decision change when the PIGD algorithm and the proximal algorithm are applied, respectively. As shown in the figures, the decision of the leader and the followers converge to the generalized Stackelberg equilibrium computed analytically. The process of computing the analytical solution is provided in Appendix <ref>. These results imply that the PIGD algorithm can find the generalized Stackelberg equilibrium, while the proximal algorithm fails to produce the generalized Stackelberg equilibrium.
Figure <ref> shows how the leader's and followers' objective values change as the iterations of the PIGD algorithm and the proximal algorithm proceed. The gradient-based algorithm continuously increases the leader's profit until reaching the generalized Stackelberg equilibrium (dashed line). However, the proximal algorithm fails to converge to the generalized Stackelberg equilibrium; it initially approaches the equilibrium but then diverges from it. This result shows that the proximal algorithm is not suitable for solving the Stackelberg game because it ignores the hierarchy in which the leader acts first.
§.§ Efficiency of the Algorithm Using EV Dispatching Problem
To validate the effectiveness of the proposed algorithm, we solve the EV dispatching problems. In this subsection, we show how the PIGD algorithm and the proximal algorithm solve a specific EV charging problem with parameters N=25, M=5, p_max=20, p_min=1. The EVs and charging stations are sampled uniformly from a rectangular 2-dimensional space. The required loads E_i of the EVs are sampled from a uniform distribution on [0.2, 1], and the priority coefficients α_i^d, α_i^p, α_i^v are sampled from a uniform distribution on [0.2, 0.4]. We set the initial electricity price identically for all charging stations. Each algorithm iteratively updates the electricity price to indirectly control the EVs' destinations so that the desired capacity is satisfied.
Figure <ref> shows the problem setting and experimental results of a single problem instance. Each station shows the target EV number, and the partially colored bar represents the relative electricity price. The PIGD algorithm sets different charging prices across stations to influence the EVs' charging station allocation. It assigns high electricity prices to stations 1 and 5, whose target numbers are small, and a low electricity price to station 3, whose target number is high. As a result, more EVs tend to go to station 3 and fewer EVs to stations 1 and 5. Please note that the EV allocation is determined not solely by the charging price but also by the distance to the charging stations and the expected waiting time.
On the other hand, the proximal algorithm induces identical charging prices for all stations because it cannot model the hierarchical influence of the operator's decision on the EV users. As a result, the EVs choose their charging stations considering only the distance to the charging stations and the expected waiting time. In conclusion, the proposed algorithm can derive an effective operating strategy for the charging station platform while properly considering the hierarchical interactions between the operator and the EV users.
In Figure <ref> (b), each dashed line represents the objective value curve of a single EV. We align the ending points of the curves to zero to show the convergence trend clearly. This shows that all the followers' decisions converge under both algorithms. Figure <ref> (c) shows how the average objective value of the EVs varies with the iterations of the two algorithms. The figure shows that both algorithms continuously improve the average objective value (lower is better). It is worth noting that the proximal algorithm induces a lower average objective value of the EVs, but this does not indicate inefficiency of the proposed PIGD algorithm. In this Stackelberg game, the leader acts first; thus, the leader has the advantage of achieving a better objective value, which can result in worse objective values for the followers. Therefore, the followers' objective value under the PIGD algorithm need not be better than under the proximal algorithm.
We investigate the general performance of the proposed PIGD algorithm by solving EV dispatching problems with different numbers N of EVs and M of charging stations. Specifically, we investigate how the objective value of the platform operator varies for the proposed method as the problem size (i.e., N and M) increases. We sampled a total of 30 problem instances for every combination of N and M.
Table <ref> compares the leader's objective values achieved by the PIGD algorithm and the proximal algorithm. Since the optimal objective value varies depending on the sampled problem instances (i.e., the locations of the EVs and stations), only the average objective value of the leader is used as a performance measure. As the number N of EVs increases, the objective value of the leader (i.e., the deviation between the target EV assignments and the actual assignments) increases because the platform must manage more EVs. In contrast, as the number M of charging stations increases, the objective value of the leader tends to decrease because the operator can deploy more varied pricing strategies. The results show that the PIGD algorithm induces a significantly lower leader's objective value regardless of the problem size, indicating that the PIGD algorithm can effectively derive the control strategy of an EV charging station operator.
§ CONCLUSION
In this study, we propose a general method to find a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game (single-leader multi-follower game). First, we define the 1-N generalized Stackelberg game and provide conditions under which a generalized Stackelberg equilibrium always exists. Then, we convert the 1-N generalized Stackelberg game into a 1-1 Stackelberg game and apply the PIGD algorithm to compute a generalized Stackelberg equilibrium in polynomial time. The numerical results demonstrate the convergence and effectiveness of our algorithm. The proposed methodology has no restrictions on the problem structure; thus, the proposed modeling and computing methods can be applied to derive efficient operating strategies for various sharing platforms, such as ride-sharing, car-sharing, and space-sharing.
§ PROOF
§.§ Proof of Theorem <ref>
Let G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅> be the N followers' generalized normal-form game when the leader's decision is 𝐲. Since the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> satisfies conditions 1, 2, and 4 of Theorem <ref>, there is a unique variational equilibrium of G(𝐲) for all 𝐲∈Ω_L by Theorem <ref>. Then, Γ has a variational Stackelberg equilibrium (𝐲^*, 𝐱^*(𝐲^*)) by Definition <ref>.
Because the 1-N generalized Stackelberg game Γ satisfies conditions 1, 2, and 3 of Theorem <ref>, the unique variational equilibrium 𝐱^*(𝐲) of G(𝐲) is also a generalized Nash equilibrium of G(𝐲) for all 𝐲∈Ω_L by Theorem <ref>. Then, the following equation holds:
sup_𝐱(𝐲^*)∈GNE(𝐲^*)f_L(𝐲^*, 𝐱(𝐲^*)) ≥ sup_𝐱(𝐲^*)∈VE(𝐲^*)f_L(𝐲^*, 𝐱(𝐲^*))
= f_L(𝐲^*,𝐱^*(𝐲^*))
By equation (<ref>) of Definition <ref>,
f_L(𝐲^*,𝐱^*(𝐲^*)) ≥ f_L(𝐲,𝐱^*(𝐲))
≥ inf_𝐱(𝐲)∈GNE(𝐲)f_L(𝐲, 𝐱(𝐲)), ∀𝐲∈Ω_L
Thus, equation (<ref>) of Definition <ref> holds for the variational Stackelberg equilibrium (𝐲^*, 𝐱^*(𝐲^*)) by equations (<ref>) and (<ref>). Therefore, the variational Stackelberg equilibrium of Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> is also a generalized Stackelberg equilibrium of Γ.
§.§ Proof of Proposition <ref>
Let 𝐱^*(𝐲) be a generalized Nash equilibrium of the followers' generalized normal-form game G(𝐲) = <𝐅, ( f_i)_i ∈𝐅, (Ω_i)_i∈𝐅>, and let (𝐲^*, 𝐱^*(𝐲^*)) be a generalized Stackelberg equilibrium of the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅>. Since the generalized Nash equilibrium of G(𝐲) is unique for all 𝐲∈Ω_L, a generalized Stackelberg equilibrium (𝐲^*, 𝐱^*(𝐲^*)) satisfies the following equation:
f_L(𝐲^*, 𝐱^*(𝐲^*)) ≥ f_L(𝐲, 𝐱^*(𝐲)), ∀𝐲∈Ω_L
Assume that Γ has more than one generalized Stackelberg equilibrium. That is, there are two generalized Stackelberg equilibria (𝐲^1, 𝐱^*(𝐲^1)) and (𝐲^2, 𝐱^*(𝐲^2)) which satisfy equation (<ref>). Since Ω_L is closed and convex, 𝐲^3:=α𝐲^1+(1-α)𝐲^2∈Ω_L for all α∈[0, 1]. Moreover, f_L(𝐲^3, 𝐱^*(𝐲^3)) ≥α f_L(𝐲^1, 𝐱^*(𝐲^1)) + (1-α) f_L(𝐲^2, 𝐱^*(𝐲^2)) because f_L(𝐲, 𝐱(𝐲)) is strictly concave on 𝐲∈Ω_L, that is, f_L(𝐲^3, 𝐱^*(𝐲^3)) ≥ f_L(𝐲^1, 𝐱^*(𝐲^1)) or f_L(𝐲^3, 𝐱^*(𝐲^3)) ≥ f_L(𝐲^2, 𝐱^*(𝐲^2)). This contradicts the assumption that (𝐲^1, 𝐱^*(𝐲^1)) and (𝐲^2, 𝐱^*(𝐲^2)) satisfy equation (<ref>). Thus, the 1-N generalized Stackelberg game Γ=<𝐅, f_L, (f_i)_i∈𝐅, Ω_L, (Ω_i)_i∈𝐅> has a unique generalized Stackelberg equilibrium.
§ FORMULA DERIVATION
§.§ Transformation of 1-N generalized Stackelberg game to 1-1 Stackelberg game
§.§.§ One-time EV charging problem
The one-time EV charging problem is formulated as follows:
Leader's problem : max_p p∑_i=1^Nx_i
Follower i's problem : max_x_i b_ix_i-1/2s_ix_i^2-px_i
s.t. ∑_j=1^Nx_j≤ C
Given the leader's decision p, the variational inequality problem of the followers' subgame is to find 𝐱∈∏_i=1^NΩ_i(p,𝐱_-i)={𝐱 | ∑_i=1^Nx_i≤ C} which satisfies the following equation:
∑_i=1^N(-b_i+s_ix_i+p)(z_i-x_i) ≥ 0, ∀𝐳∈∏_i=1^NΩ_i(p,𝐳_-i)={𝐳 | ∑_i=1^Nz_i≤ C}
where 𝐱 and 𝐳 represent the vector of the followers' action.
Instead of solving the variational inequality problem directly, we reformulate it as a bilevel problem with 𝐱 and 𝐳 as the variables of the two levels. Then the 1-N generalized Stackelberg game is converted to a three-level optimization problem as follows:
Leader's problem : max_p p∑_i=1^Nx_i
Followers' upper-level problem : max_𝐱 ∑_i=1^N(-b_i+s_ix_i+p)(z_i-x_i)
s.t. ∑_i=1^Nx_i≤ C
Followers' lower-level problem : min_𝐳 ∑_i=1^N(-b_i+s_ix_i+p)(z_i-x_i)
s.t. ∑_i=1^Nz_i≤ C
In equation (<ref>), the optimality condition of the lower-level problem can be replaced by its KKT conditions, which are added as constraints to the upper-level problem. By adding a new Lagrange variable μ, the three-level optimization problem is converted to a 1-1 Stackelberg game as follows:
Leader's problem : max_p p∑_i=1^Nx_i
Followers' joint problem : max_𝐱,𝐳,μ ∑_i=1^N(-b_i+s_ix_i+p)(z_i-x_i)
s.t. ∑_i=1^Nx_i≤ C, ∑_i=1^Nz_i≤ C
-b_i+s_ix_i+p+μ=0, i∈[N]
μ(∑_i=1^Nz_i-C)=0, μ≥ 0
§.§.§ EV dispatching problem
The EV dispatching problem is formulated as follows:
Leader's problem : min_𝐩 ∑_m=1^M(v^m-V^m)^2
s.t. p_min≤ p^m≤ p_max,∀m∈[M]
Follower i's problem : min_𝐱_i ∑_m=1^M(α_i^dd_i^mx_i^m+α_i^pE_ip^mx_i^m+α_i^vx_i^mv^m)
s.t. 0≤ x_i^m≤ 1, ∀m∈[M]
∑_m=1^Mx_i^m=1
∑_j=1^Nx_j^mE_j≤ L^m, v^m≤ U^m, ∀m∈[M]
When the leader's decision 𝐩 is given, the variational inequality problem of the N followers' subgame is to find the followers' decision 𝐱∈∏_i=1^NΩ_i(𝐩,𝐱_-i) which satisfies the following equation:
∑_i=1^N∑_m=1^M(A_i^m+α_i^v(x_i^m+v^m))(z_i^m-x_i^m) ≥ 0, ∀𝐳∈∏_i=1^NΩ_i(𝐩,𝐳_-i)
where A_i^m:=α_i^dd_i^m+α_i^pE_ip^m and 𝐱, 𝐳 represent the vector of the followers' action.
Instead of solving the variational inequality problem directly, we reformulate it as a bilevel problem with 𝐱 and 𝐳 as the variables of the two levels. Then the 1-N generalized Stackelberg game is converted to a three-level optimization problem as follows:
Leader's problem : max_𝐩 ∑_m=1^M(v^m-V^m)^2
s.t. p_min≤ p^m≤ p_max,∀m∈[M]
Followers' upper-level problem : max_𝐱 ∑_i=1^N∑_m=1^M(A_i^m+α_i^v(x_i^m+v^m))(z_i^m-x_i^m)
s.t. 𝐱∈∏_i=1^NΩ_i(𝐩,𝐱_-i)
Followers' lower-level problem : min_𝐳 ∑_i=1^N∑_m=1^M(A_i^m+α_i^v(x_i^m+v^m))(z_i^m-x_i^m)
s.t. 𝐳∈∏_i=1^NΩ_i(𝐩,𝐳_-i)
In equation (<ref>), the optimality condition of the lower-level problem can be replaced by its KKT conditions, which are added as constraints to the upper-level problem. By adding new Lagrange multipliers μ={μ_i,m^1,μ_i,m^2,μ_m^3,μ_m^4}_i∈[N], m∈[M] for the inequality constraints and λ={λ_i}_i∈[N] for the equality constraints, the three-level optimization problem is converted to a 1-1 Stackelberg game as follows:
Leader's problem : min_𝐩 ∑_m=1^M(v^m-V^m)^2
s.t. p_min≤ p^m≤ p_max,∀m∈[M]
Followers' joint problem : max_𝐱,𝐳,μ, λ ∑_i=1^N∑_m=1^M(A_i^m+α_i^v(x_i^m+v^m))(z_i^m-x_i^m)
s.t. 0≤ x_i^m≤ 1, ∀i∈[N], ∀m∈[M]
∑_m=1^Mx_i^m=1, ∀i∈[N]
∑_i=1^Nx_i^mE_i≤ L^m, v^m≤ U^m, ∀m∈[M]
0≤ z_i^m≤ 1, ∀i∈[N], ∀m∈[M]
∑_m=1^Mz_i^m=1, ∀i∈[N]
∑_i=1^Nz_i^mE_i≤ L^m, ∑_i=1^Nz_i^m≤ U^m, ∀m∈[M]
A_i^m+α_i^v(x_i^m+v^m)-μ_i,m^1+μ_i,m^2
+E_iμ_m^3+μ_m^4+λ_i=0, ∀ i∈[N], ∀ m∈[M]
μ_i,m^1z_i^m=0, μ_i,m^2(z_i^m-1)=0, ∀ i∈[N], ∀ m∈[M]
μ_m^3(∑_i=1^N(z_i^mE_i)-L^m)=0, ∀ m∈[M]
μ_m^4(∑_i=1^Nz_i^m-U^m)=0, ∀ m∈[M]
μ_i,m^1,μ_i,m^2,μ_m^3,μ_m^4≥ 0, ∀i∈[N], ∀m∈[M]
§.§ Generalized Stackelberg Equilibrium of the One-time EV Charging Problem
To compute the generalized Stackelberg equilibrium, we first solve the followers' subgame with a fixed leader's decision p. We utilize the KKT conditions to compute the variational equilibrium of the followers' subgame. The Lagrange multiplier of the followers' joint constraint is identical for all followers at the variational equilibrium, which is proven in Theorem 9 of <cit.>. We denote this Lagrange multiplier of the joint constraint of the followers' subgame in the one-time EV charging problem by μ. Then the KKT conditions of the followers' subgame are derived as follows:
b_i-s_ix_i-p-μ=0, ∀i∈[N]
∑_i=1^Nx_i≤ C, μ(∑_i=1^Nx_i-C)=0, μ≥0
We can compute the solution of equation (<ref>) by considering separately the case μ=0 and the case ∑_i=1^Nx_i-C=0. Then the variational equilibrium of the followers with the given leader decision p is given as follows:
x_i = (b_i-p)/s_i if B-pS ≤ C, and x_i = (1/s_i)(b_i-(B-C)/S) if B-pS > C,
where B=∑_i=1^N(b_i/s_i) and S=∑_i=1^N(1/s_i).
Here B and S are constants since they only depend on the EVs' battery capacities and satisfaction parameters. Assume that B≤ 2C. When B-pS ≤ C, x_i=(b_i-p)/s_i is the variational equilibrium of the followers. Substituting this into the leader's objective p∑_i=1^Nx_i, we obtain a concave quadratic expression in p whose optimal solution is p^*=B/(2S). It also satisfies the condition B-p^*S=B/2≤ C by the assumption. The optimal solution of follower i is x_i^*=(b_i-p^*)/s_i.
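This closed-form equilibrium is straightforward to evaluate numerically, for example as in the following sketch (valid under the assumption B ≤ 2C used above):

import numpy as np

def analytic_gse(b, s, C):
    """Closed-form generalized Stackelberg equilibrium of the one-time charging problem,
    valid under the assumption B <= 2C stated above."""
    b, s = np.asarray(b, dtype=float), np.asarray(s, dtype=float)
    B = np.sum(b / s)
    S = np.sum(1.0 / s)
    p_star = B / (2.0 * S)                    # optimal price, maximizes p * sum_i x_i
    x_star = (b - p_star) / s                 # each EV's optimal charging amount
    assert B - p_star * S <= C + 1e-9         # joint capacity constraint is satisfied
    return p_star, x_star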
§ BASELINE ALGORITHMS
|
http://arxiv.org/abs/2306.17798v1
|
20230616155321
|
Masked Contrastive Graph Representation Learning for Age Estimation
|
[
"Yuntao Shou",
"Xiangyong Cao",
"Deyu Meng"
] |
cs.CV
|
[
"cs.CV"
] |
Xi'an Jiaotong University
Xi'an
China
[email protected]
Corresponding Author
Xi'an Jiaotong University
Xi'an
China
[email protected]
Xi'an Jiaotong University
Xi'an
China
[email protected]
Age estimation of face images is a crucial task with various practical applications in areas such as video surveillance and Internet access control. While deep learning-based age estimation frameworks, e.g., convolutional neural network (CNN), multi-layer perceptrons (MLP), and transformers have shown remarkable performance, they have limitations when modelling complex or irregular objects in an image that contains a large amount of redundant information. To address this issue, this paper utilizes the robustness property of graph representation learning in dealing with image redundancy information and proposes a novel Masked Contrastive Graph Representation Learning (MCGRL) method for age estimation. Specifically, our approach first leverages CNN to extract semantic features of the image, which are then partitioned into patches that serve as nodes in the graph. Then, we use a masked graph convolutional network (GCN) to derive image-based node representations that capture rich structural information. Finally, we incorporate multiple losses to explore the complementary relationship between structural information and semantic features, which improves the feature representation capability of GCN. Experimental results on real-world face image datasets demonstrate the superiority of our proposed method over other state-of-the-art age estimation approaches.
Masked Contrastive Graph Representation Learning for Age Estimation
Deyu Meng
July 31, 2023
===================================================================
§ INTRODUCTION
Age estimation of face images has garnered significant interest from researchers in recent years, owing to the growing prevalence of deep learning techniques in the field of computer vision. The task of age estimation involves predicting a person's age based on the semantic features present in their facial image, and it is widely used in video surveillance and minors' anti-addiction systems. For example, in the anti-addiction system for minors, it is possible to prevent minors from indulging in games by estimating the age of players.
In the field of image processing, CNN and Transformer have become common network architectures, and they play a decisive role in feature extraction. Different network architectures model images in different ways. As shown in Fig. 1(a), CNNs usually process image data in a regular grid manner. As shown in Fig. 1(b), the Transformer treats image data as a sequence of pixels or patches. Both CNN and Transformer process images in Euclidean space and cannot be applied to data in non-Euclidean space. Such modelling is inflexible for complex and irregular objects. As shown in Fig. 1(c), the graph-based modelling approach can flexibly model complex and irregular objects. For example, a human face is composed of hair, eyes, nose, mouth, and ears, and these parts naturally form a graph structure. By extracting the information in the graph structure, we can predict the age of the person. Furthermore, regular grid structures and sequence structures can be viewed as special cases of graph structures. However, CNN- and Transformer-based modelling easily introduces redundant information when dealing with irregular data, which adds noise to the model's age predictions. As shown in Figure 2, when we crop out the regions of the image that are not related to the face and then input the cropped images into a CNN or a Vision Transformer (ViT) for age estimation, their MAE decreases and their cumulative score increases. In contrast, when the graph structure is used for age estimation, the performance of the model remains almost the same regardless of whether the face-unrelated regions are cropped out. In summary, the graph structure-based modelling method is more robust when dealing with images that contain a large amount of redundant information.
Therefore, this paper attempts to utilize the graph representation learning technique to deal with the image redundancy information issue for the age estimation task. Specifically, we propose a novel method called Masked Contrastive Graph Representation Learning for Age Estimation (MCGRL). This method is capable of modelling complex and irregular objects while more efficiently fusing complementary semantic information between structural information and semantic features. Firstly, we extract semantic features in images using CNN as anchor embeddings and divide images into patches as nodes in the graph. Secondly, to improve node feature representation, we utilize a masked graph convolutional neural network (GCN) to aggregate node information after masking some nodes in the graph. Thirdly, we generate positive embedding samples using masked GCN and neighbour sampling, and negative embedding samples using random row shuffling. Finally, we employ multiple loss functions to minimize the spatial distance between the anchor and positive embeddings, while maximizing the distance between positive and negative samples. Therefore, this fusion of complementary semantic information between structural information and semantic features can be achieved by our proposed MCGRL method.
In summary, the contributions of our method are as follows:
* This paper proposes a novel Masked Contrastive Graph Representation Learning (MCGRL) method to estimate the age of face images. The MCGRL method can alleviate the inflexibility issue of the existing age estimation methods in modeling irregular objects.
* The MCGRL method provides a general framework to model irregular objects in the image by using a graph structure. The representation ability of node semantic information can be improved by a graph convolutional network with a mask. Furthermore, the complementary semantic information between structural information and semantic features is captured by contrastive learning methodology.
* Extensive experiments demonstrate the superiority of the proposed MCGRL method compared with other state-of-the-art age estimation methods.
§ RELATED WORK
§.§ Age Estimation Methods
Age estimation from face images is widely used in many real-world areas and has great application value. Age estimation methods can be roughly divided into two categories, i.e., machine learning methods and deep learning methods.
Machine learning methods: The main idea of machine learning methods is to use traditional regression techniques to learn from hand-crafted features and obtain a regression curve with good generalization ability. Typical methods include Moving Window Regression (MWR) <cit.>, Consistent Ordinal Regression (CORAL) <cit.>, Quantifying Facial Age by Posterior <cit.>, Ranking SVM (RS) <cit.>, etc <cit.>.
Deep learning methods: The main idea of deep learning methods is to extract deep image features in an end-to-end manner from the raw face image and then combine this feature information to perform the age estimation task. Currently, deep learning-based age estimation methods include CNNs <cit.>, attention networks <cit.>, and hybrid neural networks <cit.>. CNN architectures mainly extract deep features of local regions in the image through convolution operations and combine these local features through fully connected layers for age prediction, e.g., Cascade Context-Based Age Estimation (C3AE) <cit.>, Agenet <cit.>, etc <cit.>. Attention mechanisms mainly use global modelling capabilities to attend to the key areas in the image and combine the key information to predict age, e.g., Attention-Based Dynamic Patch Fusion (ADPF) <cit.>, Hierarchical Attention-Based Age Estimation <cit.>, etc <cit.>, <cit.>. Hybrid neural networks predict age by combining the feature extraction ability of deep learning with the fitting ability of traditional machine learning, e.g., <cit.>, <cit.>.
§.§ Graph Neural Network
The most commonly used network architectures in the field of computer vision are Convolutional Neural Networks (CNN) <cit.> and Vision Transformers (ViT) <cit.>. CNN and ViT can extract deep features for regular images, but have limited modelling capabilities for complex topological structures, and cannot be applied to non-Euclidean spaces. On the contrary, graph neural networks (GNN) can well extract relational information in topological architecture and obtain high-level feature representation of images. In recent years, the modelling method based on graph structure has been applied in the 3D point cloud computing <cit.>, action recognition <cit.> and other fields. GNNs can solve image processing tasks that can be naturally constructed as graphs.
§.§ Graph Contrastive Learning
Graph contrastive learning methods aim to learn discriminative feature representations by enlarging the distance between positive and negative embedding samples. For example, Deep Graph Infomax (DGI) <cit.> enhances the feature representation ability of positively embedded nodes by maximizing the mutual information of global and local node representations. Graph Contrastive Adaptive Augmentation (GCA) <cit.> strengthens the underlying semantic information in the graph by adding prior knowledge. Contrastive Multi-view Representation Learning (CMRL) <cit.> obtains more discriminative feature representations by comparing feature representations of first-order neighbour nodes with node representations of graph diffusion.
Nonetheless, the aforementioned graph contrastive learning approaches suffer from two problems. On the one hand, almost all methods need to construct multiple graph contrastive views to generate positive and negative embedding samples, which is very computationally intensive. On the other hand, existing graph contrastive learning methods cannot maintain a safe margin between intra-class and inter-class distances. To sum up, existing graph contrastive learning methods suffer from high computational complexity and low discriminative power of feature representations.
§ PROPOSED METHOD
In this section, we propose a novel Masked Contrastive Graph Representation Learning (MCGRL) method for age estimation.
§.§ Graph Construction from Image
First, we segment a face image of size H × W× 3 into N patches. Each patch is transformed into a feature representation ξ_i ∈ℝ^d, where d represents the dimensionality of each feature representation and i={1,2,…,N}. Then, the feature representations of these patches are regarded as unordered nodes which are denoted as 𝒱={v_1,v_2,…,v_N}. For each central node in the graph, we find its K nearest neighbour nodes 𝒩(v_i) for edge building. Finally, we obtain a graph 𝒢=(𝒱,ℰ,ℛ,𝒲), where ℰ is the set of all edges, the directed edge r_ij (r_ij∈ℛ) indicates that there is a connection relationship between node v_i and node v_j. ω_ij (ω_ij∈ W, 0≤ω_ij≤ 1) represents the weight of edge r_ij, and r∈ℛ is the relation type of the edge.
Mask generator: We apply mask operations to some nodes in the graph to improve the feature representation ability of the GCN <cit.>. Specifically, we first set a mask rate p. Then we generate an all-ones matrix of the same size as the node features 𝒱 and randomly set some of its elements to 0 according to the masking rate p. The mask generator is defined as follows:
𝒱^[M]=M^p ⊙𝒱,
where 𝒱^[M] represents the masked graph, M^p is a mask matrix, and ⊙ represents the element-wise (Hadamard) product. When p is equal to 0, M^p is an all-ones matrix.
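A minimal PyTorch-style sketch of the patch-graph construction and the mask generator is given below; the patch size, the embedding dimension, the k-nearest-neighbour rule in feature space, and the creation of the embedding layer inside the function are assumptions made only for illustration.

import torch

def image_to_patch_nodes(img, patch=16, d=192):
    """Split an image (3, H, W) into N patches and embed each patch into R^d."""
    c, h, w = img.shape
    patches = img.unfold(1, patch, patch).unfold(2, patch, patch)   # (3, H/p, W/p, p, p)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)
    embed = torch.nn.Linear(c * patch * patch, d)                   # illustrative (trainable in practice)
    return embed(patches)                                           # (N, d) node features

def knn_edges(nodes, k=9):
    """Connect each node to its K nearest neighbours in feature space."""
    dist = torch.cdist(nodes, nodes)                                # (N, N) pairwise distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]            # drop the self-loop
    src = torch.arange(nodes.size(0)).repeat_interleave(k)
    return torch.stack([src, knn.reshape(-1)])                      # (2, N*k) edge index

def mask_nodes(nodes, p=0.6):
    """Randomly zero out node features with masking rate p, as in the equation above."""
    m = (torch.rand(nodes.size(0), 1) >= p).float()                 # 1 = keep, 0 = masked
    return nodes * m, m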
§.§ Anchor and Negative Embedding Generation
Previous work <cit.> typically conducts contrastive learning by constructing multiple graph contrastive views and uses the node representations aggregated by a GCN as anchor embeddings. However, the computational complexity of a GCN is very high, which leads to very long training times. To speed up the computation, we apply a CNN followed by an MLP to the input face image to generate anchor embedding samples containing semantic features. The anchor embedding generation is defined as follows:
ξ^(l+1)=Dropout(LeakyReLU(conv(ξ^(l)) W^(l))),
H =ξ^(l+1) W^(l+1).
where ξ^0=ξ, W^(l) is the weight of layer l in MLP.
For the generation of negative embedding samples, we directly row-shuffle anchor embedding samples to obtain negative embedding samples. The formula is defined as follows:
H^-=Shuffle([ξ_1, ξ_2 …, ξ_N])
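The anchor and negative embedding generation can be sketched as follows; cnn_backbone and mlp are assumed placeholder modules, not components specified by the paper.

import torch

def anchor_and_negative(patch_images, cnn_backbone, mlp):
    """Anchor embeddings from CNN+MLP; negatives by row-shuffling the anchors."""
    feats = cnn_backbone(patch_images)           # (N, d) semantic features
    h = mlp(feats)                               # anchor embeddings H
    perm = torch.randperm(h.size(0))             # random row shuffle
    h_neg = h[perm]                              # negative embeddings H^-
    return h, h_neg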
§.§ Positive Embedding Generation
§.§.§ Structural Information
GCN is used to obtain graph structure information of face images. To capture the key semantic information in nodes, we also introduce an attention mechanism to assign different weights to each edge. First, we utilize MLP to compute the correlation between node i and node j. The formula is defined as follows:
δ_i j^(l+1)=Dropout(LeakyReLU(W^(l)[ξ_i^(l)⊕ξ_j^(l)])),
where ⊕ indicates the concatenation operation.
Second, we use the softmax function to normalize the correlation coefficient and obtain the attention score of each edge. The formula is defined as follows:
ω_i j^(l+1)=softmax(δ_i j^(l+1))=exp(δ_i j^(l+1))/∑_η∈𝒩_iexp(δ_i η^(l+1)),
where 𝒩_i represents the neighbor nodes of node i.
Finally, node representations are updated by GCN with the LeakyReLU activation function. We take the updated structural information as positive embedding samples. The formula is defined as follows:
H_i^+^(l+1) =LeakyReLU(∑_r ∈ℛ∑_j ∈𝒩_i^r1/|𝒩_i^r|(ω_i j^(l) W_θ_1^(l)H_j^+^(l) + ω_i i^(l) W_θ_2^(l)H_i^+^(l))),
§.§.§ Neighbor Information
To obtain the neighbour information contained in the structure information, we sample the neighbour nodes of the central node and calculate their average value. In this way, we obtain the neighbour information of nodes as positive embedding samples:
h̃_i^+=1/n∑_j=1^n{h_j | v_j ∈𝒩_i},
where n represents the number of samples.
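A simplified sketch of the attention-weighted aggregation and the neighbour sampling is shown below. It uses a dense adjacency matrix with self-loops, ignores the relation types r, and folds the 1/|𝒩_i^r| normalization into the softmax; these simplifications are assumptions made for brevity.

import torch
import torch.nn.functional as F

def attention_gcn_layer(h, adj, W_edge, W1, W2):
    """One attention-weighted GCN layer over masked node features h of shape (N, d)."""
    n = h.size(0)
    pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                      h.unsqueeze(0).expand(n, n, -1)], dim=-1)     # [xi_i || xi_j]
    scores = F.leaky_relu(pair @ W_edge).squeeze(-1)                # edge correlations delta_ij
    scores = scores.masked_fill(adj == 0, float('-inf'))            # keep only existing edges
    omega = torch.softmax(scores, dim=-1)                           # attention weights omega_ij
    agg = omega @ (h @ W1)                                          # neighbour messages
    self_term = torch.diagonal(omega).unsqueeze(-1) * (h @ W2)      # omega_ii-weighted self term
    return F.leaky_relu(agg + self_term)                            # positive embeddings H^+

def neighbour_positive(h_pos, adj, n_samples=5):
    """Mean of sampled neighbour embeddings as the second positive sample."""
    out = torch.zeros_like(h_pos)
    for i in range(h_pos.size(0)):
        nbrs = adj[i].nonzero(as_tuple=True)[0]                     # assumes every node has neighbours
        idx = nbrs[torch.randint(len(nbrs), (min(n_samples, len(nbrs)),))]
        out[i] = h_pos[idx].mean(dim=0)
    return out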
§.§ Multiple Loss for Contrastive Learning
Different from previous work <cit.> that constructs multiple graph contrastive views, we use a triplet loss to compute the distances between the anchor, positive, and negative embeddings, forcing the anchor and positive embeddings to be closer while pushing the negative embeddings farther from the anchor. In this way, the common characteristics among similar instances can be quickly learned. In addition, shrinking the intra-class variation and expanding the inter-class variation has been proven to be an effective way to reduce the generalization error <cit.>. Specifically, the triplet loss is defined as follows:
ℒ_t r i=1/m∑_i=1^m{d(h, h^+)^2-d(h, h_i^-)^2+α}_+,
where m represents the number of negative samples, d(·) computes the distance between samples, α controls the minimum margin between positive and negative samples, and {·}_+=max{·,0}.
Since Eq. (8) only ensures that the distance between positive and negative embeddings is enlarged without considering the inter-class difference, we construct two additional triplet losses to ensure that the inter-class differences are enlarged. They are defined as follows:
ℒ_N=1/m∑_i=1^m{d(h, h^+)^2-d(h, h_i^-)^2+α}_+,
ℒ_M=1/m∑_i=1^m{d(h, h̃^+)^2-d(h, h_i^-)^2+α}_+.
Since the structural information h^+ and the neighbour information h̃^+ are different, the two losses are not redundant: when d(h,h^+)^2≤ d(h,h̃^+)^2, ℒ_N may equal 0 while ℒ_M is still positive, so ℒ_M remains effective for the optimization of the model when ℒ_N is not; the symmetric argument holds when d(h,h^+)^2≥ d(h,h̃^+)^2. Therefore, by combining the two cases in Eq. (9), the model can learn complementary semantic information between the structural information h^+ and the neighbour information h̃^+ and thus enlarge the inter-class differences.
The optimization goal of Eq. (8) is to make d(h, h^-)^2-d(h, h^+)^2 at least α. In this case, the distance between the anchor embedding and the negative embedding is enlarged. However, there are cases where the distance between the anchor embedding and the positive embedding is also enlarged. For these cases, we introduce an upper bound β to ensure that the distances between anchor and negative embeddings, and also between anchor and positive embeddings, remain within a controllable range:
α+d(h, h^+)<d(h, h^-)<d(h, h^+)+α+β,
According to the analysis of Eq. (10), we can expand Eq. (8) as follows:
ℒ_V=-1/m∑_i=1^m{d(h, h^+)^2-d(h, h_i^-)^2+α+β}_-,
Finally, we combine the three loss functions of Eq. (9) and Eq. (11) as our multiple loss. The formula for multiple loss is defined as follows:
ℒ=ω_1 ℒ_N+ω_2 ℒ_M+ω_3 ℒ_V,
where ω_1, ω_2, ω_3 are the hyper-parameters. In our experiments, they are empirically set as 1, 0.5, and 0.5, respectively.
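The combined loss could be implemented along the following lines; the margins α and β and the weights follow the values reported in the implementation details, and the use of squared Euclidean distances is an assumption consistent with the notation d(·)^2 above.

import torch

def sq_dist(a, b):
    return ((a - b) ** 2).sum(dim=-1)

def multiple_loss(h, h_pos, h_nbr, h_negs, alpha=0.8, beta=0.2, w=(1.0, 0.5, 0.5)):
    """Combined loss L = w1*L_N + w2*L_M + w3*L_V (Eq. 12).

    h      : anchor embeddings                 (N, d)
    h_pos  : structural positives H^+          (N, d)
    h_nbr  : neighbour positives (tilde H^+)   (N, d)
    h_negs : m negative samples                (m, N, d)
    """
    d_pos = sq_dist(h, h_pos)                       # d(h, h^+)^2
    d_nbr = sq_dist(h, h_nbr)                       # d(h, h~^+)^2
    d_neg = sq_dist(h.unsqueeze(0), h_negs)         # d(h, h_i^-)^2, shape (m, N)

    L_N = torch.clamp(d_pos - d_neg + alpha, min=0).mean()          # Eq. (9), structural positive
    L_M = torch.clamp(d_nbr - d_neg + alpha, min=0).mean()          # Eq. (9), neighbour positive
    L_V = -torch.clamp(d_pos - d_neg + alpha + beta, max=0).mean()  # Eq. (11), upper bound term
    return w[0] * L_N + w[1] * L_M + w[2] * L_V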
§.§ Implementation Details
We implement the proposed MCGRL model on a Linux server with an A100 graphics card. The experimental environment is Python 3.7.0 and PyTorch 1.9.1. For the hyperparameter settings, we specify a maximum of 50 training iterations, a batch size of 196, a dropout rate of 0.5, a weight decay coefficient of 0.0001, and a learning rate of 1e-4. We set the minimum margin α and the upper bound β to 0.8 and 0.2, respectively.
§ EXPERIMENTS
§.§ Dataset and Evaluation Metrics
The MORPH-II[http://www.faceaginggroup.com/morph/], FG-Net[http://yanweifu.github.io/FG_NET_data/FGNET.zip], and CACD[http://bcsiriuschen.github.io/CARC/] datasets are widely used for age estimation. Therefore, this paper selects these three benchmark datasets to verify the effectiveness of our MCGRL method. The MORPH-II dataset contains 55,134 male and female face images, and each person has multiple images. The FGNET dataset consists of 1,002 face images ranging in age from 0 to 69, and each person has an average of 12 face images. Face images in the FGNET dataset are captured under different lighting and poses, so age estimation on this dataset is a challenging task. The CACD dataset consists of over 150,000 images of 2,000 celebrities collected from the internet. The dataset consists of a training set, a validation set, and a testing set, with 1,800 people used for training, 80 people used for validation, and 120 people used for testing. To verify the generalization performance of the model, the FACES [http://faces.mpib-berlin.mpg.de], SC-FACE [https://www.scface.org/], and BAG datasets [http://multipie.org] are selected to evaluate the robustness of our MCGRL in cross-dataset evaluation. Two evaluation metrics, i.e., Mean Absolute Error (MAE) and Cumulative Score (CS), are chosen to evaluate the performance of each method. CS represents the proportion of model prediction errors on the test set that are within L years.
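For reference, the two metrics can be computed as in the following sketch; whether the CS threshold is strict or inclusive is a convention detail, and the sketch uses an inclusive threshold as an assumption.

import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error over the test set."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def cumulative_score(y_true, y_pred, L=5):
    """Proportion of test samples whose absolute age error is within L years."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)) <= L)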
§.§ Performance Verification Experiment
To verify the effectiveness of our proposed MCGRL method, we test it on three benchmark face image datasets. The experimental results are shown in Table 1, Table 2 and Table 3. All three variants of our MCGRL are on par with or ahead of the state-of-the-art methods, especially on the CACD dataset. We conjecture that the improved performance is due to the flexible modelling of irregular face images by the graph structure. In contrast, existing comparison methods can only use regular grid structures or sequence structures to model face images, which contain redundant information. In addition, we also compare the cumulative scores under different error thresholds, as shown in Figure <ref>. MCGRL maintains the best cumulative score at every threshold.
§.§ Visualization of Prediction Results
We randomly select 8 images from the test set in the three datasets MORPH, FGNET and CACD for age prediction respectively. As shown in Figure <ref>, the prediction error of MCGRL on most face images is less than 1 year old, while the error between the prediction results and the real results on a small number of complex images is relatively large. Overall, the prediction results of MCGRL on the test sets are reliable.
§.§ Comparison of CNN and ViT methods
To illustrate the superiority of our MCGRL method more intuitively, we compare it with traditional grid-structure-based CNN methods and sequence-structure-based Vision Transformer methods (i.e., ResNet18, ResNet18, ResNet18, ViT-Base, ViT-Large, and ViT-Large). As shown in Figure <ref>, the MAE of the MCGRL method early in training is much higher than that of the ResNet and ViT methods, but as training proceeds, the convergence speed of MCGRL far exceeds that of the other methods. The improvement in convergence speed may be attributed to the design of the masked contrastive learning mechanism, which can quickly learn similar features of the same kind of data from a large amount of data and encode them into high-level representations. The other methods outperform MCGRL at the beginning of training, which may be attributed to the powerful feature representation ability of their pre-trained models. Nevertheless, MCGRL eventually outperforms CNN and ViT; we believe this is because MCGRL models face images more flexibly and can remove redundant information from the images. Therefore, applying graph neural networks to age estimation from face images has great potential.
§.§ Generalization Performance Verification
Since the evaluation of model performance within the dataset cannot reflect the generalization performance of the model, we use a cross-dataset evaluation method to verify the generalization performance of our model. The experimental results are shown in Tables 4, 5, and 6. Our method performs significantly better than other algorithms on different data sets. The improvement in generalization performance may be attributed to the fact that the graph structure can flexibly model any complex image, while other methods contain redundant information when extracting features from irregular objects.
§.§ Ablation Study
We perform an ablation study to verify the effectiveness of each module of our proposed method on MORPH, FGNET and CACD datasets and use MCGRL-B as our network architecture.
§.§.§ Type of Graph Convolution
We test the feature representation capabilities of different variants of graph convolutional neural networks, i.e., Max-Relative GraphConv, EdgeConv, GraphSAGE, and GIN. As shown in Table 7, Max-Relative GraphConv achieves the best MAE on all three datasets. Therefore, we use Max-Relative GraphConv by default in our experiments.
§.§.§ Necessity of Triplet Loss
We perform ablation experiments to analyze the effect of the triplet losses (ℒ_N and ℒ_M) and the upper bound loss (ℒ_V) on model performance. As shown in Table 8, the model achieves the best results when all losses are included. When none of the losses is used (i.e., only the GCN is used for age prediction), the prediction results are the worst. If only ℒ_V is used, a contrastive loss cannot be formed. If only ℒ_N or ℒ_M is used, the prediction results are relatively poor. In contrast, any combination of two losses improves the prediction accuracy of the model.
§.§.§ Hyper-parameter Analysis
We investigate the effect of hyperparameters on MCGRL, i.e., the masking rate p. As shown in Figure <ref>, we set the mask rate p from 0.1 to 0.9, and MCGRL-B can achieve the best experimental results when the mask rate is 0.6. If the masking rate p is too large, the graph structure loses a large amount of semantic information, so that the features of the nodes cannot be restored.
§ CONCLUSION
In this paper, a novel Masked Contrastive Graph Representation Learning for Age Estimation architecture, named MCGRL, is proposed. Unlike previous architectures using CNN for feature extraction, MCGRL adopts a more flexible GNN to capture irregular and complex objects in images. In order to improve the fusion representation ability of node features and structures in graphs, we design a self-supervised masked graph autoencoder (SMGAE) to perform mask reconstruction on nodes. The feature vectors encoded and decoded by SMGAE have stronger semantic representation ability. Furthermore, to widen the difference between different classes and narrow the gap between the same classes, we introduce a contrastive learning mechanism to improve the generalization performance of the model. On four benchmark datasets for age estimation, MCGRL outperforms existing comparison algorithms.
|
http://arxiv.org/abs/2306.03010v1
|
20230605162533
|
Interval Load Forecasting for Individual Households in the Presence of Electric Vehicle Charging
|
[
"Raiden Skala",
"Mohamed Ahmed T. A. Elgalhud",
"Katarina Grolinger",
"Syed Mir"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY"
] |
§ INTRODUCTION
Affordable and reliable sources of electricity enable the sustainable growth of strong economies and can improve the average person's quality of life <cit.> by providing reliable access to appliances, medical equipment, communication, entertainment, and other devices. The dependence on power grids to provide electricity is increasing due to the continuous integration of novel electronic devices into every aspect of modern life <cit.>, as these devices rely on a reliable source of electricity. The mainstream adoption of electric vehicles (EVs) away from traditional internal combustion engine (ICE) vehicles for consumer use is set to further entrench reliance on access to electricity due to the increased electricity demand for charging. While this transition can be a positive step in reducing carbon emissions, embracing EVs will shift transportation energy requirements from petroleum-based products to electric grids. Of specific interest to this paper is the demand created by charging EVs in residential households, which is frequently utilized due to its relative affordability and convenience.
As countries, such as Canada, plan to ban the sale of new ICE vehicles by 2035 <cit.>, preparations are required to ensure the success of this shift. This includes actions such as installing charging stations, increasing electricity generation capacity, investing in battery technologies, and improving infrastructure throughout the grid to handle the higher loads required by EV charging. The capability of electricity distribution companies to accurately forecast the hourly electricity consumption of residential households that own EVs is instrumental in the transition to EVs, as it helps the utility companies anticipate and manage increased energy demand, plan sufficient capacity to meet expected demand, and ensure grid stability. Failure in predictive ability poses a risk to the balance between electricity supply and demand, which can pose serious threats to grid stability, human life, and overall economic interest. A loss of balance between supply and demand can lead to power outages, brownouts, and other disruptions. In turn, these electricity disruptions can severely disrupt critical infrastructures including transportation, communications, and financial systems, as well as essential services such as emergency response and healthcare.
There have been extensive efforts to create predictive load forecasting models using machine learning (ML) with historical energy consumption data collected by smart meters or similar technologies, often combined with meteorological information. In recent years, deep learning techniques, especially those based on Recurrent Neural Networks (RNNs), have been outperforming other techniques. While these studies had great successes in terms of accuracy for a variety of use cases, they do not specifically address short-term forecasting for individual households in the presence of EV charging <cit.>, which introduces challenges due to variations in charging patterns.
Moreover, most predictive models for load forecasting generate point predictions instead of an interval for their expected electricity demand <cit.>, which limits the usefulness of the forecast in decision-making. By providing only a single value for expected electricity demand, these models fail to convey the range of potential outcomes and the degree of uncertainty associated with each prediction. Furthermore, providing only a point forecast, without the range values, may not offer sufficient information for effective risk management. In contrast, interval forecasting approaches provide decision-makers with a more nuanced understanding of the possible outcomes, enabling them to make more informed and effective decisions. For example, considering the full range of possible outcomes, instead of a single value, allows the stakeholders to plan for different scenarios and better mitigate risks.
To address these drawbacks, this paper proposes a probabilistic interval forecasting approach for predicting the hourly electricity demand in households with EV charging. By using probabilistic methods, our approach generates a range of likely outcomes rather than a single-point estimate which provides a more comprehensive understanding of the potential effects of EV charging on household electricity demand, gives information about the uncertainty associated with the predicted value due to dynamic charging behaviors, and offers decision-makers a more complete picture of the forecasted demand. The interval predictions are generated with Long Short-Term Memory Bayesian Neural Networks (LSTM-BNNs). LSTM was chosen as it is well-suited for capturing temporal dependencies in data while BNN was added to estimate the probability distribution of expected values for interval predictions. LSTM-BNN was trained using historical household electricity consumption data and local temperature data. To assess the effectiveness of the proposed LSTM-BNN model, its performance, measured using four metrics, is compared to the performance of the standard point prediction LSTM model. Additionally, due to the impact of the COVID-19 pandemic on electricity consumption patterns, the point and interval models have been examined on two datasets: one with the lockdown period and one without. The results show that the accuracy greatly varies among households, but for each household, the proposed LSTM-BNN achieves similar accuracy to point forecasts while providing the advantage of prediction intervals.
The remainder of the paper is organized as follows: Section 2 provides background information on LSTM and BNN techniques and introduces the four common performance measurements used for gauging the effectiveness of regression models while Section 3 reviews related work on load forecasting and interval forecasting. The proposed LSTM-BNN interval forecasting approach is described in Section 4 followed by the evaluation presented in Section 5. Finally, Section 6 concludes the paper.
§ BACKGROUND
This section begins by introducing Long Short-Term Memory (LSTM) networks and Bayesian Neural Networks (BNNs), followed by a discussion of the four performance measures commonly used for assessing regression models.
§.§ Long Short-Term Memory Neural Network
Neural networks are a type of machine learning model inspired by the human brain: they use interconnected artificial neurons to learn and process information by mimicking the way biological neurons signal to one another <cit.>. A recurrent neural network (RNN) is a type of neural network designed to process sequential data by using internal memory and recurrent connections, allowing it to capture temporal dependencies and patterns in the data. A Long Short-Term Memory (LSTM) neural network model is similar to RNN models in that it can capture temporal relationships by using an internal memory mechanism to keep track of past inputs and selectively remember or forget certain information. The main difference between an LSTM and RNN model is that LSTM models have additional structures, such as gating mechanisms, that provide better control over the flow of gradients and help prevent the vanishing and exploding gradient problems that can occur in standard RNNs, making them more effective for modeling longer sequences of data. LSTM computation at time t is given as follows:
f_t = σ(W_fx x_t + W_fh h_t-1 + b_f)
i_t = σ(W_ix x_t + W_ih h_t-1 + b_i)
o_t = σ(W_ox x_t + W_oh h_t-1 + b_o)
C̅_t = φ(W_cx x_t + W_ch h_t-1 + b_c)
C_t = f_t ⊙ C_t-1 + i_t ⊙ C̅_t
h_t = o_t ⊙ φ(C_t)
Equations (1)–(3) depict the computation at the forget f_t, input i_t, and output o_t gates respectively, while Equations (4)–(6) determine the cell state C_t and hidden state h_t. The sigmoid (σ) and tanh (φ) functions contribute to controlling exploding gradients by keeping values between zero and one, and between negative one and one, respectively. The current cell input x_t and the previous cell hidden state h_t-1 are the inputs received by the LSTM cell. The gate biases b_f, b_i, b_o, and b_c, the current cell weight matrices W_fx, W_ix, W_ox, and W_cx, and the hidden state weight matrices W_fh, W_ih, W_oh, and W_ch of each LSTM cell are adjusted throughout the training process using backpropagation through time with the goal of minimizing the loss between the predicted and true values. The symbol ⊙ indicates the elementwise Hadamard product of two matrices.
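To make the gate computations concrete, the following NumPy sketch implements a single LSTM step mirroring Equations (1)–(6); the weight and bias containers and their keys are illustrative placeholders rather than code from this work.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # W maps gate names to input/hidden weight matrices, b to bias vectors
    f_t = sigmoid(W["fx"] @ x_t + W["fh"] @ h_prev + b["f"])    # forget gate, Eq. (1)
    i_t = sigmoid(W["ix"] @ x_t + W["ih"] @ h_prev + b["i"])    # input gate, Eq. (2)
    o_t = sigmoid(W["ox"] @ x_t + W["oh"] @ h_prev + b["o"])    # output gate, Eq. (3)
    c_bar = np.tanh(W["cx"] @ x_t + W["ch"] @ h_prev + b["c"])  # candidate state, Eq. (4)
    c_t = f_t * c_prev + i_t * c_bar                            # cell state, Eq. (5)
    h_t = o_t * np.tanh(c_t)                                    # hidden state, Eq. (6)
    return h_t, c_t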
Due to its ability to capture temporal dependencies over long periods of time, the LSTM model has been very successful in many domains including load forecasting. For the same reason, we use LSTM cells in the proposed LSTM-BNN interval forecasting approach.
§.§ Bayesian Neural Network
The Bayesian Neural Network (BNN) model <cit.> relies on Bayesian inference to determine the posterior predictive distribution with the ultimate goal of quantifying the uncertainty introduced by the models so as to explain the trustworthiness of the prediction. This is achieved by incorporating previous inputs X and outputs Y as well as model parameters ω in Bayes' theorem as follows:
P(ω|X, Y) = P(Y|X, ω)· P(ω)/P(Y|X)
Here, P() indicates the probabilities and P(·|·) are conditional probabilities.
By computing the integral of the full posterior distribution, given in (<ref>), multiple times using different samples from the model parameters, a distribution can be generated for a predicted value y_new using new inputs x_new:
P(y_new|x_new, X, Y) = ∫ P(y_new|x_new, ω)· P(ω|X,Y)dω
However, because computing the full posterior probability is computationally demanding for deep neural networks, alternative approaches are required to make Bayesian inference feasible in practice. Zhang and Mahadevan <cit.> demonstrated that keeping Monte Carlo dropout active while a network generates predictions is sufficient for approximating the posterior predictive distribution, as it minimizes the relative entropy between the approximate and true posterior distributions while remaining computationally feasible. Consequently, our approach takes advantage of BNN and the dropout technique to generate interval load forecasts for households with EVs.
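As a minimal illustration of this approximation, the following PyTorch sketch keeps the dropout layers active at inference time so that repeated forward passes yield samples from the approximate posterior predictive distribution; the model and input names are placeholders, not the implementation used in this work.

import torch

def mc_dropout_predict(model, x, n_samples=100):
    # Draw n_samples stochastic predictions with dropout kept active.
    model.eval()                           # disable other training-time behavior
    for m in model.modules():              # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples                         # shape: (n_samples, batch, output_dim)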
§.§ Performance Metrics
The four prominent performance metrics that are used for evaluating the margin of error between a prediction made by a machine learning model and the true value are: Mean Absolute Percent Error (MAPE), Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). While there are other metrics specifically dealing with evaluating probabilistic forecasts, we primarily use the mentioned metrics as they allow us to compare point and interval forecasts. These metrics are calculated according to the following equations:
MAPE = 100%/N ∑_i=1^N |y_i - ŷ_i|/y_i
MSE = 1/N ∑_i=1^N (y_i - ŷ_i)^2
RMSE = √(1/N ∑_i=1^N (y_i - ŷ_i)^2)
MAE = 1/N ∑_i=1^N |y_i - ŷ_i|
where y_i is the true value of the i-th sample, ŷ_i is the predicted value for the i-th sample, and N is the total number of samples.
MAPE has an advantage over the other three metrics as it is a scale-independent metric representing the error as a percentage of the actual value and therefore suitable for comparing models on datasets of different value scales. MSE and RMSE metrics are both based on the Euclidean distance to determine the level of error between predicted and true values. The difference between the MSE and RMSE metrics is that MSE provides more severe punishment for predictions that are very different from the true value. MAE is used to measure the mean absolute difference between predictions and true values and is less severe at penalizing large differences between predicted and true values than MSE or RMSE. To obtain a different view of forecasting accuracy, our study employs all four metrics.
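For reference, the four metrics can be computed directly from the equations above; the following NumPy sketch uses illustrative array names.

import numpy as np

def regression_metrics(y_true, y_pred):
    err = y_true - y_pred
    mape = 100.0 * np.mean(np.abs(err) / y_true)   # assumes y_true contains no zeros
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    return {"MAPE": mape, "MSE": mse, "RMSE": rmse, "MAE": mae}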
§ RELATED WORK
This section first reviews recent load forecasting studies focusing on those based on machine learning and then discusses techniques for interval predictions in different domains.
§.§ Electricity Load Forecasting
This subsection first reviews recent load forecasting studies for a diversity of consumers, including residential households and buildings. This provides insights into state-of-the-art models and indicates directions for forecasting in the presence of EV charging. Next, related work on predicting EV charging in various settings is examined.
An LSTM-based model for short-term load forecasting on the individual household level was proposed by Kong et al. <cit.>. They <cit.> found that a significant hurdle to creating forecasts at the household level is the large degree of diversity and volatility in energy consumption between households when compared to making forecasts at the substation level. This difficulty in residential forecasting due to load variability and concept drift also aligns with the findings of Fekri et al. <cit.>.
Residential load forecasting was also investigated by Zhang et al. <cit.>: while Kong et al. <cit.> used an LSTM-based approach, Zhang et al. <cit.> employed Support Vector Regression (SVR). In their study, Zhang et al. <cit.> investigated predicting daily and hourly electricity consumption for 15 households with data obtained from smart meters. The accuracy of the load predictions varied significantly across households, depending on the variability of energy-related behaviors among occupants. Daily load estimates were generally more accurate, as they mitigated the randomness in hourly changes.
L’Heureux et al. <cit.> presented a transformer-based architecture for electrical load forecasting. They adapted the transformer model from the Natural Language Processing (NLP) domain for load forecasting by modifying the NLP transformer workflow, adding N-space transformation, and designing a novel technique for handling contextual features. They examined the proposed transformer-based architecture on 19 different data streams and with four different forecasting horizons. For most data streams and forecasting horizons, the transformer accuracy was better than that of the Seq2Seq network; however, for the 12-h forecast, Seq2Seq was slightly better.
Multi-node load forecasting was investigated by Tan et al.: they proposed multi-task learning combined with a multi-modal feature module based on an inception-gated temporal convolutional network for node load prediction. The feature extraction module captures the coupling information from the historical data of the node, while the multi-task learning utilizes a soft sharing mechanism to leverage the shared information across nodes to improve the forecast accuracy. Experimental results demonstrate the effectiveness of the proposed method in accurately forecasting load demand across multiple nodes.
Ribeiro et al. investigated short- and very short-term load forecasting for warehouses and compared several machine learning and deep learning models, including linear regression, decision trees, artificial neural networks, and LSTM models. In their experiments, RNN, LSTM, and GRU cells achieved comparable results.
Jian et al. also worked on very short-term load forecasting: they proposed a framework based on an autoformer, which combines decomposition transformers with an auto-correlation mechanism. Multi-layer perceptron layers are added to the autoformer for improved deep information extraction. In their experiments, the proposed deep-autoformer framework outperformed several deep-learning techniques on the task of very short-term residential load forecasting.
An encoder-decoder RNN architecture with a dual attention mechanism was proposed by Ozcan et al. <cit.> to improve the performance of the RNN model. The attention mechanism in the encoder helps identify important features whereas the attention in the decoder assists the context vector and provides longer memory. In their experiments, the encoder-decoder RNN architecture achieved improved accuracy in comparison to LSTM; however, the computation complexity was increased.
Short-term load forecasting has been investigated by Sun et al. <cit.>: they proposed a framework based on LSTM and an enhanced sine cosine algorithm (SCA). The authors enhanced the performance of the SCA, a meta-heuristic method for optimization problems, by incorporating a chaos operator and multilevel modulation factors. In experiments, they compared the modified SCA with several other population intelligence algorithms including particle swarm optimization and the whale optimization algorithm and showed that SCA improves performance.
There are very few studies concerned with load forecasting for EV charging demand and they mostly consider scenarios such as parking lots, fleets, and regional demand. For example, Amini et al. <cit.> investigated forecasting of EV charging demand for parking lots. Their approach used an Autoregressive Integrated Moving Average (ARIMA) model with driving patterns and distances driven as inputs to determine the day-ahead demand of the conventional electrical load and charging demand of EV parking lots. Two simulated test systems, 6-bus and IEEE 24-bus systems, were used to examine the effectiveness of the proposed approach.
Yi et al. highlighted the importance of accurate demand forecasting for planning and management of electric vehicle charging infrastructure. They presented a deep learning-based method for forecasting the charging demand of commercial EV charging stations by utilizing LSTM as a base for the Seq2Seq model and combining it with a clustering technique. The evaluation on over 1200 charging sites from the State of Utah and the City of Los Angeles showed that the proposed method outperforms other forecasting models such as ARIMA, Prophet, and XGBoost.
For forecasting EV charging demand at charging stations in Colorado, Koohfar et al. proposed a transformer-based deep learning approach. The proposed approach was compared to time-series and machine learning models including ARIMA, SARIMA, LSTM, and RNN. While for longer time horizons the transformer outperformed other techniques, for short-term forecasting (7 days ahead), LSTM and transformer achieved comparable results.
A multi-feature data fusion technique combined with LSTM was proposed by Aduama et al. to improve the EV charging station load forecasting. They generate three sets of inputs for LSTM consisting of load and weather data pertaining to different historical periods. These three sets of data are then passed to the LSTM models which generate three predictions, and, finally, the LSTM outputs are combined using a data fusion technique. In their experiments, the proposed fusion-based approach achieved better accuracy than traditional LSTM in predicting EV charging station demand.
Zheng et al. <cit.> were interested in predicting the overall load from EVs in the city of Shenzhen, China. They recognize the diversity of charging patterns and therefore break down the fleet into four groups: private EVs, taxis, buses, and official EVs. Their approach provides a mid- and long-term EV charging load model based on the current utilization of EVs in Shenzhen, using probabilistic models for EV charging profiles and forecasting EV market growth in the city with the Bass model. As they are concerned with the regional EV demand, some of the randomness of individual EV charging is remedied through aggregation. Similarly, Arias and Bae <cit.> considered forecasting load for groups of EVs. Specifically, they take advantage of historical traffic data and weather data to formulate the forecasting model. First, traffic patterns are classified, then factors influencing traffic patterns are identified, and finally, a decision tree formulates the forecasting model.
Strategies for handling growing EV charging demand were investigated by Al-Ogaili et al. <cit.>. They classify EV control strategies into scheduling, clustering, and forecasting strategies, recognizing that precise estimates of charging are critical for fault prevention and network stability. They note that the stochastic nature of EV charging demand requires advanced forecasting techniques, which commonly need extensive data including historical charging data, weather, and travel patterns that may not be readily available. The forecasting studies they examined include predictions for groups of EVs or geographical regions, charging stations, and specific types of EVs (e.g., buses).
The reviewed studies <cit.> on generic load forecasting represent the state-of-the-art in energy forecasting, but their behavior in the presence of EV charging has not been examined. Nevertheless, they represent a solid foundation for forecasting EV charging load. On the other hand, the EV-related studies <cit.> do consider EV charging, but they do so for groups of EVs, parking lots, charging stations, or regions, and do not consider forecasting load for individual households in the presence of EVs. In contrast, we focus on predicting power consumption for individual households in the presence of EV charging. Moreover, in contrast to the point predictions provided in the aforementioned studies, our study offers interval predictions.
§.§ Interval Predictions
This subsection reviews approaches that have been taken by authors across different domains to create regression models that provide an interval for predictions. In contrast to point predictions, interval predictions quantify uncertainties and provide additional information for decision-making.
Interval predictions were generated for electricity spot pricing by Maciejowska et al. <cit.> for the British power market using factor quantile regression averaging. First, point predictions are obtained with a collection of models including autoregressive models, threshold autoregressive models, semiparametric autoregressive models, neural networks, and others. Next, point predictions generated by the mentioned models are combined using quantile regression averaging to provide final interval forecasts. The proposed approach performed better than the benchmark autoregressive model.
Shi et al. <cit.> considered interval predictions for forecasting wind power generation to quantify uncertainties in renewable energy generation. They train an RNN model with two outputs, one for the upper and one for the lower bound of a regression interval of predictions using the Lower and Upper Bound Estimation (LUBE) method. A new cost function incorporating prediction interval was designed and the dragonfly algorithm was introduced to tune the parameters of the RNN prediction model. One of the major challenges associated with training neural networks using the LUBE method is the difficulty in achieving convergence and occasionally the model may not converge <cit.>. Consequently, Kabir et al. <cit.> developed a customizable cost function to improve the convergence of LUBE models and assist in constructing prediction intervals with neural networks.
Zhang and Mahadevan <cit.> proposed interval forecasting for flight trajectory prediction and safety assessment by combining deep learning with uncertainty characterized by a Bayesian approach. Two types of Bayesian neural networks (BNNs), based on feedforward and LSTM networks, are trained from different perspectives and then blended to create final predictions. In both BNNs, the dropout strategy quantifies model prediction uncertainty. The BNN approach was also successful in the work of Niu and Liang <cit.>, where they improve nuclear mass and single-neutron separation energy prediction accuracy for determining nuclear effective reactions. In their experiments, Niu and Liang <cit.> demonstrate that a Bayesian approach can be combined with various forecasting techniques to improve nuclear mass predictions.
The reviewed studies <cit.> created interval predictions with various machine learning and statistical methods in various domains; however, none of them considered forecasting household electricity load in the presence of EV charging.
Like our study, the works of Zhang and Mahadevan <cit.> and Niu and Liang <cit.> also employed BNN techniques to create interval prediction but they used it for very different use cases than load forecasting (flight trajectory <cit.> and nuclear mass predictions <cit.>).
§ INTERVAL LOAD FORECASTING IN PRESENCE OF EV CHARGING
This section presents the problem formulation and methodology of the proposed interval forecasting for household load prediction in the presence of EV charging. The approach uses only historical energy consumption data obtained from smart meters and weather data which makes it practical and scalable for real-world applications as there is no need to collect data regarding EV charging habits or EV specifications.
Problem Statement:
Consider a time series of historical data for a household with EV charging represented as a sequence of input-output pairs (x_t, y_t) where x_t is a vector of features describing the state of the electricity consumption at time t including contextual factors such as temperature, time of day, day of the week, and day of the year, and y_t is a vector of real-valued electricity consumption values for this household at time t. The goal is to learn a probabilistic model p(ŷ_t+1|x_t+1,D), where D represents historical observations and ŷ_t+1 represents the predicted value, that can predict the output for a new input with uncertainty quantification represented as an interval I.
This interval is created by generating multiple predictions through different network configurations to obtain the Bayesian approximation of the predicted value (the interval center), as shown in the first equation below. The minimum and maximum of the interval are computed as shown in the two subsequent equations:
E[ŷ_t+1]=1/N∑_i=1^Nŷ_t+1^i
I_min=E[ŷ_t+1]-σ_ŷ_t+1
I_max=E[ŷ_t+1]+σ_ŷ_t+1
where N is the number of predictions generated for the time step (t+1) and σ_ŷ_t+1 is the standard deviation of the predictive distribution computed as follows:
σ_ŷ_t+1=√(∑_i=1^N(ŷ_t+1^i-E[ŷ_t+1])^2/N)
The overall interval forecasting process is shown in Figure <ref>, while details of each component are described in the following subsections.
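A NumPy sketch of the interval construction defined by the equations above, assuming an array holding N stochastic forecasts for each time step (variable names are illustrative):

import numpy as np

def prediction_interval(samples):
    # samples: array of shape (N, T) with N stochastic forecasts for T time steps
    center = samples.mean(axis=0)    # E[y_hat], the interval center
    sigma = samples.std(axis=0)      # standard deviation of the predictive distribution
    return center - sigma, center, center + sigma   # I_min, center, I_max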
§.§ Dataset Preparation
The two types of datasets being used are weather station data and historical household electricity consumption data. Each dataset undergoes preparation individually before they are merged.
The weather station data consist of multiple datasets from multiple weather stations in the approximate geographical areas surrounding the EV household. The features used from the weather station data are the hourly timestamps and the temperature recordings, as temperature is often considered the most influential weather factor in load forecasting <cit.>. The Weather Data Preparation conducted on the individual weather station datasets, shown in Figure <ref>, includes filling in missing temperature readings and combining all the weather station data. Missing temperature readings are filled using weighted averaging of the nearest complete temperatures. Since no geographical details beyond the city are given for any of the EV households, the temperature data from all stations are combined by averaging the temperatures from several weather stations to create a single average temperature dataset. In order to match the timestamps in the EV Household datasets, the timestamps in the average temperature dataset are adjusted to adhere to daylight savings time (DST).
The household data here refers to hourly data recorded by the smart meter or a similar device. The two initial features in this data are the consumption period and the electricity consumed within that period. The consumption period for the household data initially contains both the start date and time, and the end date and time of the current electricity consumption period. These data, as indicated in Figure <ref>, undergo Household Data Preparation which involves isolating and only keeping the start date-time of the consumption period so that it can be merged with the weather station data. The electricity consumption feature from the initial dataset remains unchanged.
As part of the processing for both the weather station and household data, an additional time feature is generated. This feature is necessary because at the end of DST each year, the time is set backward one hour, resulting in two instances of the same date-time. This creates a conflict in merging the weather station and household data using only the date-time feature, as there are duplicate non-unique date-times that have no distinguishing differences. The additional time feature is added to both the household and weather station data to indicate if the specific date-time falls in DST or not. This removes the merging ambiguity for the duplicate date-times, as only the first occurrence of the date-time will occur during DST.
After the initial preparations are completed, each of the individual household datasets is merged with the average temperature dataset using the date-time and the additional time feature. In preparation for machine learning, the merged dataset proceeds to the preprocessing step.
§.§ Preprocessing
After the weather and household datasets are merged, the dataset undergoes the following preprocessing steps: feature engineering, splitting the data into train and test sets, and normalizing the train and test sets. The feature engineering step takes advantage of the recorded interval start date-time from the original household data to generate nine features, as shown in Table <ref>. The purpose of creating additional features is to provide context information to the machine learning model, enabling it to generate better predictions.
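For illustration, the following pandas sketch derives typical calendar features from the interval start date-time; the exact nine features used in this work are listed in Table <ref>, so the columns below are indicative rather than the actual set.

import pandas as pd

def add_calendar_features(df, ts_col="start_datetime"):
    ts = pd.to_datetime(df[ts_col])
    df["hour"] = ts.dt.hour
    df["day_of_week"] = ts.dt.dayofweek
    df["day_of_month"] = ts.dt.day
    df["day_of_year"] = ts.dt.dayofyear
    df["month"] = ts.dt.month
    df["is_weekend"] = (ts.dt.dayofweek >= 5).astype(int)
    return df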
Following feature engineering, the dataset was split into training and testing sets: the last 10% of readings are assigned to the test set and the remaining to the training set.
Then, a portion of the training set was separated to use as the validation set for model selection. As a result of the validation set creation, the distribution of the data becomes 80% for training, 10% for validation, and 10% for testing.
Next, z-score normalization was used to reduce the dominance of large features and improve convergence. This technique was chosen over other normalization techniques as it handles well the outliers present during peak electricity consumption events. The z-score normalization transforms the feature to a mean of 0 and a standard deviation of 1 as follows:
z_ij = (x_ij - μ_j)/σ_j
where x_ij and z_ij are the initial unscaled and scaled values of the i-th sample of the j-th feature respectively, and μ and σ are the mean and standard deviation of all the samples of the j-th feature respectively. Note that the mean and standard deviations are calculated only on the training set to avoid data leakage.
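A small NumPy sketch of this normalization, with the statistics computed on the training set only to avoid data leakage (array names are illustrative):

import numpy as np

def zscore_fit_transform(train, test):
    mu = train.mean(axis=0)
    std = train.std(axis=0)
    std = np.where(std == 0, 1.0, std)            # guard against constant features
    return (train - mu) / std, (test - mu) / std  # test uses training statistics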
Next, the sliding window technique is employed to prepare data for the machine learning model and to provide the model with a fixed number of previous electricity consumption, time-date, and temperature features as the inputs for predicting the next time step. This is accomplished by creating an input window that contains all features including the time-step, temperature, and consumption data within the window size <cit.>. For instance, for a window of size w, electricity consumption together with all other features for the past w time steps are used as the input for predicting the next energy consumption values. The window slides for s steps to create the next sample. The advantage of the windowing technique for electricity forecasting is in allowing the model to consider the demand for recent time steps when making predictions. The exact window size w is determined within the optimization process.
The sliding window technique is applied to each of the training, validation, and test sets. After this step, the samples have the dimension of w × f, where f is the number of features.
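A minimal NumPy sketch of the windowing step, assuming the merged data are stored as an array of shape (T, f); the parameters mirror the window size w and stride s described above.

import numpy as np

def sliding_windows(data, target_col, w, s=1):
    X, y = [], []
    for start in range(0, len(data) - w, s):
        X.append(data[start:start + w])           # w past time steps, all f features
        y.append(data[start + w, target_col])     # consumption at the next time step
    return np.array(X), np.array(y)               # X: (n, w, f), y: (n,)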
§.§ Training and Tuning
As shown in Figure <ref>, the Training and Tuning stage follows the Preprocessing step. The deep learning technique LSTM was selected as the machine learning model because in recent years it has demonstrated great successes in load forecasting and outperformed other forecasting techniques <cit.>. The hyperparameter search was carried out with Bayesian optimization as unlike grid or random search, this method performs a more directed exploration of a defined tuning space by selecting hyperparameters that lead to a local, ideally global, minimum loss <cit.>. The Bayesian optimization achieves a minimum loss by using the posterior distribution of the Mean Squared Error (MSE) loss function determined by previous models to guide the selection of new hyperparameter combinations. This directed selection process minimizes the time and computational resources needed for the exploration of the defined hyperparameter space <cit.>.
In this work, the search space explored included window size, batch size, the number of LSTM layers, the number of neurons in each LSTM layer, the learning rate for the Adam optimizer, and the dropout probability. The window size determines the number of previous time steps to be used as the input to the network while the batch size specifies the number of training windows a single batch contains. The number of LSTM layers and the number of LSTM neurons are adjusted to find a balance between increasing model complexity and variance to fit the training set while maximizing the model’s ability to generalize and make accurate predictions when given novel data points. The learning rate for the Adam optimizer is a critical parameter for training each model as it determines the rate at which updates are made to the weight and bias parameters of the model. Using a learning rate that is suitable for finding minima in the loss function enables the model to converge efficiently.
Finally, the dropout probability hyperparameter is used to prevent overfitting: it determines the probability that neurons in a layer will randomly be given zero values. Using a dropout probability that is too high can have a detrimental effect on overall performance as it could result in too many inactive neurons and prevent the model from learning. As the dropout technique is typically used only to reduce overfitting to training data, it is typically disabled when the model is making predictions (inference time). However, within the proposed LSTM-BNN model, dropout is also active while making predictions as it is the key component to creating the probabilistic interval predictions as described in the following subsection.
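The PyTorch sketch below shows one possible way to structure such an LSTM network with a dropout layer that can later be kept active for Monte Carlo sampling; the layer sizes and dropout probability are illustrative defaults, not the tuned values from this work.

import torch
from torch import nn

class LSTMBNN(nn.Module):
    def __init__(self, n_features, hidden_size=64, n_layers=2, p_dropout=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=n_layers,
                            batch_first=True, dropout=p_dropout)
        self.dropout = nn.Dropout(p_dropout)   # kept active at inference for MC sampling
        self.head = nn.Linear(hidden_size, 1)  # one-hour-ahead consumption

    def forward(self, x):                      # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(self.dropout(out[:, -1, :]))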
The LSTM model is trained and tuned using Bayesian optimization for each individual household independently. Once training and tuning are completed, the model is ready to proceed to the Interval Forecasting step.
§.§ Interval Forecasting
This subsection describes how the trained and tuned LSTM model is used to create the prediction intervals. The approach is inspired by the works of Zhang and Mahadevan <cit.> and Niu and Liang <cit.>, and, like those works, it employs the BNN technique to generate intervals. However, they used the BNN technique for different applications and with different networks.
With the active dropout, the trained model makes a sufficiently large number of predictions for each sample of the dataset. Due to dropout being active, the model has a high likelihood to produce a different point prediction each time it makes a prediction even though all inputs are the same. This variation in predictions is because while the dropout is active there is a probability that any component, excluding input and output neurons, can be removed from the prediction calculation. The varying point predictions due to the use of different components in prediction calculations allow for the construction of an interval prediction as a variational approximation of Bayesian inference for the model uncertainty <cit.>.
After multiple predictions are made for the same input sample, the mean and standard deviation for each sample are determined using the collection of point predictions the model created. Finally, the interval prediction is given as one standard deviation above and below the mean value of the point predictions for each sample.
The four steps taken in creating the interval prediction are summarized as follows:
* Make multiple predictions for a given input.
* Compute the mean and standard deviation of the predictions for each input sample.
* Center the interval at the mean value.
* Define the upper bound of the interval as one standard deviation above the mean, and the lower bound as one standard deviation below the mean.
All the models generated by Bayesian optimization are evaluated on the training and validation sets while only the best-performing model for each household selected on the validation set is evaluated on the test set. In other words, the model selection is carried out on the validation set.
§.§ Statistical Tests
Household energy consumption is dependent on the behaviors of its occupants, and as such changes when those behaviors change. We are interested in examining the effect the COVID-19 pandemic lockdowns had on households with EV charging. As a normality test showed that the datasets do not follow a normal distribution, the Mann-Whitney U test is used to compare electricity consumption with and without lockdown in order to determine if lockdowns changed household electricity consumption habits to the extent that the data are statistically different and could impact the predictive capacity of the model.
To carry out this analysis, two datasets are considered: the first dataset, referred to here as the lockdown dataset, contains the entire original EV household dataset, including data from the lockdown period as well as before the lockdowns. The second dataset, the non-lockdown dataset, is a subset of the EV household dataset containing only data collected outside of the lockdown period. Both the lockdown and non-lockdown datasets go through the same preparation, preprocessing, and prediction processes for the creation of the prediction model.
As seen in Figure <ref>, the evaluation using the Mann-Whitney test is carried out before the normalization is applied. The test is performed after the dataset is split into training, validation, and testing components. A comparison of the differences in the results helps us determine if there is a greater similarity between training and test sets for the lockdown dataset or for the non-lockdown dataset, which is a subset of the lockdown dataset.
There are three possible outcomes for the comparison of the results for the lockdown and non-lockdown datasets. The first is that there is no significant difference between training and test sets for either the lockdown or non-lockdown datasets. The second possible outcome is that there is a greater difference between the lockdown dataset training and test sets than for the non-lockdown dataset. And the third possibility is that there is a greater difference between the non-lockdown dataset training and test sets than for the lockdown dataset.
The results are compared with the final performance of the models that are trained on the lockdown and non-lockdown datasets to observe whether there is a correlation between differences in the datasets and model predictive performance. The analysis of the results and the predictive performance of the model will improve our understanding of the conditions under which the generated models are reliable. Understanding when a model is reliable is critical for mitigating the risks of a blackout because it ensures that decisions are made based on reliable forecasting information.
§ EVALUATION
This research was carried out in collaboration with London Hydro, a local electrical distribution utility for the city of London, Ontario, Canada. The real-world dataset provided by London Hydro was shared through Green Button Connect My Data (CDM), a platform for the secure sharing of energy data with the consumer’s consent. Through work like this, London Hydro is preparing for the increased proliferation of EVs and the corresponding increase in electricity demand. London Hydro needs home EV charging data to identify non-wire solutions such as scheduling charging during off-peak hours when there is solar generation.
In this evaluation, we consider four households with EVs and refer to them as EV1, EV2, EV3, and EV4. The time period ranges for all four households' recordings are very similar, as given in Table <ref>. Note that the time period ranges for the non-lockdown dataset are the same for all four households. The Weather Station Data was obtained from Environment and Climate Change Canada and consists of two datasets from two weather observation stations in the London area that were merged by averaging their temperature readings, as discussed in Section <ref>.
For the comparison of lockdown to non-lockdown data, four additional subsets of the EV household electricity consumption datasets are created by removing the data recorded after the start of lockdowns on 1 March 2020. For each of the four households,
two individual LSTM-BNN predictor models were trained and tuned. The first model for each household is trained and evaluated on the entire dataset which contains lockdown and non-lockdown electricity consumption data, and the second model for each household is trained and evaluated using only the non-lockdown electricity consumption data.
All experiments were coded in Python with the use of the PyTorch machine learning framework and the Ray Tune library for model training and tuning. The remainder of this section consists of three subsections: first, the results of the Mann-Whitney analysis are discussed; next, the hyperparameter search space is defined and the training behavior is summarized for model optimization; and finally, the predictive performance is analyzed.
§.§ Statistical Test Results
Two trials are completed for each household, one for the full dataset with lockdown data and the other for the subset without lockdown data. After initial preparations are completed according to the described methodology, the dataset for non-lockdown was split into train and test sets, similar in proportions to those used for the complete dataset. The shift in behavior due to lockdowns was analyzed to determine if there was a statistically significant difference in the distribution of the electricity consumption between train and test sets for lockdown and non-lockdown conditions.
In order to interpret the results of the Mann-Whitney test, the null hypothesis was established. The null hypothesis in this scenario is that there is no statistically significant difference between the training and test set for any of the datasets. For the significance level of 5%, the null hypothesis was rejected for cases where the P-value of the test is less than 0.05 (5.00 × 10^-2).
The p-value results of the analysis comparing training and test datasets shown in Table <ref> confirm that the null hypothesis can be rejected for all datasets, as they fall significantly below the threshold value of 5.00 × 10^-2. Therefore, the results indicate that there is a statistically significant difference between the training and test datasets regardless of lockdowns for all households.
§.§ Model Training and Tuning
For each of the datasets outlined in Table <ref>, 80 models were considered using Bayesian optimization within a defined hyperparameter search space. The hyperparameters tuned for the model were batch size, window size, the number of hidden layers, the number of neurons in the hidden layers, the Adam optimizer learning rate, and the dropout probability. The defined search space for each of the hyperparameters is summarized in Table <ref>. Every model that was trained had its performance evaluated using the performance metrics outlined in Section <ref>.
The input and output layers were each set to a fixed size. The number of neurons in the input layer was set by the number of features in the input dataset, and the output layer has a single neuron for the regression prediction output. Different sliding window sizes w were used to provide the model with varying numbers of previous time steps to use as inputs. The window size is an important consideration because using a different number of previous time steps may help the models capture distinct patterns in each household’s electricity consumption. Each model predicts the energy consumption one hour ahead.
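The sketch below illustrates how such a Bayesian hyperparameter search could be set up. It uses scikit-optimize's gp_minimize as a stand-in for the Ray Tune setup used in the experiments; the ranges shown are illustrative rather than the exact values from Table <ref>, and train_and_validate is a hypothetical routine that trains a model with the given hyperparameters and returns its validation MSE.

from skopt import gp_minimize
from skopt.space import Categorical, Integer, Real

space = [
    Categorical([24, 48, 72], name="window_size"),
    Categorical([32, 64, 128], name="batch_size"),
    Integer(1, 3, name="n_layers"),
    Integer(32, 256, name="hidden_size"),
    Real(1e-4, 1e-2, prior="log-uniform", name="lr"),
    Real(0.0, 0.5, name="p_dropout"),
]

def objective(params):
    # params follows the order of the search space defined above
    return train_and_validate(*params)   # placeholder training/validation routine

result = gp_minimize(objective, space, n_calls=80)   # 80 candidate models, as above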
The options for dropout probability tuning were based on the tuning range used by Zhang and Mahadevan <cit.> for creating BNN models. A dropout of zero was also included to act as a benchmark for how a non-BNN neural network would perform for each of the households. The hidden layer space and an epoch of 150 were determined by referring to the hyperparameters used for LSTM models for electricity load forecasting used by Kong et al. <cit.>. A sample of the training behavior of the optimal LSTM-BNN for EV3 and the non-lockdown dataset model is shown in Figure <ref>. From this figure, it can be seen that the early stopping could be beneficial in the reduction of computational resources, as a very minor improvement of the validation performance can be observed beyond approximately 80 epochs.
In the experiments, the proposed LSTM-BNN interval forecasting is compared to point forecasting. Both use LSTM as their base model and both undergo exactly the same dataset preparation, preprocessing, and training and tuning steps described in the preceding sections. The difference is that in point forecasting, the dropout is not active at inference time, and, therefore, point forecasting results in a single prediction for each time step. In contrast, the proposed LSTM-BNN generates multiple predictions and forms an interval with the BNN technique.
§.§ Predictive Performance Analysis
The performance results are explored as follows: first, the overall performance of the point and interval prediction models is examined, followed by the analysis of the performance among households. Next, interval forecasts are compared to point forecasts, and lockdown is compared to non-lockdown. Finally, the correlation between the Mann-Whitney results and model performance is examined.
§.§.§ Overall Performance
Table <ref> shows the average MAPE values for the four households for each of the two datasets, lockdown and non-lockdown, and for each of the two approaches, point and interval prediction. For interval forecasts, the four performance metrics were calculated using the mean of the generated interval, i.e., the center of the set of forecasts generated with the Bayesian technique as described in Section <ref>.
For each household, dataset, and point/interval approach, the model was tuned, and the results from the tuned models were averaged and reported in this table. All MAPE values are significantly higher than those reported in the literature <cit.>, but that is to be expected as EV charging behavior adds considerable variability and randomness to the power consumption pattern compared with office buildings or households without EVs. In general, excluding electricity consumption lockdown data from March 2020 or later did not improve the predictive performance of the models. While an increase in the error between actual and predicted values is expected between training and testing, there is a much greater increase for the non-lockdown dataset than for the lockdown dataset.
The average interval prediction performance shows that model performance was better overall on the full dataset that included lockdown data. Point predictions for the lockdown were also better than for the non-lockdown for the test dataset. This result is somewhat surprising considering that it would be expected that electricity consumption would be more difficult to predict in a lockdown environment than in a non-lockdown environment when the historical data used for creating models are based mainly on non-lockdown behavior. A possible reason for this is that with the full dataset, the model has more data to learn from.
Figure <ref> shows an example of interval forecast results; specifically actual energy consumption and predicted interval forecasts for EV3 and the lockdown dataset. It can be observed that the prediction interval varies throughout time indicating uncertainties in the forecasted values.
While the focus of this work is the comparison of interval and point predictions, here we further examine the interval forecast for the example of EV3. Prediction Interval Coverage Probability (PICP) measures the fraction of actual values that lie within the prediction interval. In the case of the proposed LSTM-BNN, this varies depending on how many standard deviations form the interval. For the interval of one standard deviation around the mean, PICP is only 0.28, while progressively widening the interval increases PICP to 0.51, 0.68, and 0.82. Note that, similar to the MAPE values from Table <ref>, these values reflect the high errors caused by the randomness of EV charging.
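A short NumPy sketch of the PICP computation used here (array names are illustrative):

import numpy as np

def picp(y_true, lower, upper):
    # fraction of actual values falling inside the prediction interval
    return np.mean((y_true >= lower) & (y_true <= upper))

# widening the interval to k standard deviations around the predicted mean:
# coverage = picp(y_true, center - k * sigma, center + k * sigma)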
§.§.§ Performance Among Households
The results for the best-performing model for each individual household, for point and interval forecasts, are shown in Tables <ref> and <ref> for the lockdown and non-lockdown datasets respectively. Note that models are created for each household individually, and not for a group of homes. The results include all four metrics, MAPE, MSE, RMSE, and MAE, for the train, validation, and test datasets. The best-performing point and interval prediction models in terms of test set MAPE were for households EV3 and EV4 respectively on the lockdown dataset. Although there are some cases where the models generate better predictions, such as a test MAPE of about 31% for EV3 with the lockdown dataset and point predictions, most others exhibit higher error. Despite the success of LSTM models in forecasting electricity consumption for offices and households, the results show that in the presence of EV charging their accuracy is greatly reduced. As noted by other studies, the variability in household electricity consumption makes creating accurate predictions at this granularity challenging, which seems to be exacerbated further with the additional consideration of EV charging. This variability results in much higher MAPE, irrespective of the dataset or the household, than the values typically reported in the literature. However, the literature commonly considers offices, schools, or groups of buildings that have much more predictable energy consumption patterns. Moreover, MAPE can produce very high values when the actual values are close to zero. Note that, while MSE, RMSE, and MAE are included in Tables <ref> and <ref>, we are not comparing among households based on those metrics as they are dependent on the scale of the actual values.
To analyze household differences, Figure <ref> shows the MAPE values for point and interval predictions for each household for the lockdown data, while Figure <ref> does the same for the non-lockdown scenario. It can be observed that some households were easier for the predictive models to capture. The EV3 and EV4 household datasets produced the best-performing models, but the models trained on the EV1 and EV2 households were much less successful. The predictive performance of the model trained on the EV2 lockdown dataset, which produces the highest p-value, highlights that there is no direct relationship between the Mann-Whitney results and model performance. The Mann-Whitney results from Table <ref> indicate that in all scenarios there is a significant difference between the training and test datasets, which could be one of the reasons for the high MAPE results.
To analyze these data from a different perspective, Figure <ref> shows the MAE values for the four EVs for point and interval prediction in the lockdown and non-lockdown periods. Since MAE is scale dependent and consumption scales vary among households, the different approaches should be compared for each household individually and not among households. For four scenarios, point forecasting achieves a lower error than interval forecasting; however, interval forecasting has the advantage of providing uncertainty information.
§.§.§ Interval Forecasts Compared to Point Forecasts
Figure <ref> compares interval forecasts to point forecasts in terms of MAPE for the lockdown test dataset. In addition to point and interval MAPE for each household, it includes the averages for the four households. It can be observed that the average MAPE for the point forecast is about 5% lower than the average MAPE for interval forecasts. Also, this figure highlights that EV2 was the hardest household to predict for both point and interval approaches. At the same time, EV3 was the most straightforward prediction for point forecasting while EV4 achieved the lowest prediction error for interval forecasting. Figure <ref> shows the same comparison but for non-lockdown test data. Again, MAPE for interval forecasting is higher than for point forecasting but the difference between average MAPE for interval and point forecasts is much larger for the non-lockdown dataset than for the lockdown dataset. Interval forecasting for EV1 performed especially poorly which raised the average MAPE for interval forecasts.
§.§.§ Lockdown Compared to Non-lockdown
Figure <ref> and Figure <ref> compare forecasting with lockdown and non-lockdown for point and interval forecasting respectively. Figure <ref> shows that average point MAPE for non-lockdown data is about 8% higher than for lockdown. All lockdown MAPE values except EV2 were much lower than non-lockdown predictions. Similarly, Figure <ref> indicates that interval prediction for lockdown achieves better results than for non-lockdown except EV2. Overall, accuracy varies greatly among households.
§.§.§ Analysis of Relation between Predictive Performance and Mann-Whitney Results
Figure <ref> and Figure <ref> relate model performance on the test set (MAPE) and P-value between training and test data. These figures further demonstrate that there is no conclusive relationship between the P-value and model performance on the test set. This is due to comparing the datasets statistically while the prediction performance depends on many other factors such as the quality of features and randomness of data. Still, the Mann-Whitney test demonstrated that there is a significant difference between the train and test datasets, regardless of whether lockdown data were included or not, which is a contributing factor to the higher error values observed.
§ CONCLUSIONS
Accurately predicting electricity consumption is an important factor in providing an adequate and reliable energy supply. The electricity demand will continue to grow as society transitions away from using ICE vehicles to EVs that can be charged in residential homes. However, predicting energy consumption at the individual household level is more challenging than forecasting for office buildings, schools, or regions due to the high variability in electricity consumption patterns <cit.>. This challenge is further amplified by the need to accommodate EV charging.
This paper proposes LSTM-BNN interval load forecasting for individual households in the presence of EV charging based on the LSTM deep learning model and Bayesian inference. The LSTM model incorporates a dropout layer which is active during inference time and responsible for generating a set of point predictions for a single input sample. Then, the Bayesian technique is employed to create interval forecasts from this set of predictions. The achieved accuracy varies greatly among households due to the variability and randomness of their energy consumption patterns. Examining the performance of point and interval prediction models shows that the LSTM-BNN interval prediction model performs similarly to a standard LSTM point prediction model with the benefit of providing an interval for the prediction. Although the proposed LSTM-BNN is more complex and involves longer training time than traditional LSTM point forecasting models, LSTM-BNN predictions quantify uncertainty and offer additional information for decision-making. This paper also examined the impact of the COVID-19 lockdown on the load forecasting for these households: the results show that the proposed LSTM-BNN achieves similar results for the lockdown and non-lockdown periods. We stipulate that the randomness of the EV charging patterns outweighs the impact of change due to the lockdowns.
As demonstrated in our study, EV charging is highly variable, and predicting household energy consumption in the presence of EV charging is difficult. For use cases such as infrastructure planning, forecasting energy consumption for a neighborhood block may be sufficient. For such scenarios, aggregating energy consumption at the block level would remove some of the randomness and improve forecasting accuracy.
Future work will examine the results in terms of the size of the prediction interval to better relate different interval forecasts. Moreover, alternative methods to the Mann-Whitney test will be considered to acquire better insight into the potential changes in consumption habits during different periods of time. As energy consumption patterns, including EV charging patterns, change over time, resulting in what is known as concept drift, techniques such as online learning could be integrated with the proposed approach to better capture changes over time.
Conceptualization, R.S., S.M.; methodology, R.S., K.G.; software, R.S.; validation, R.S.; formal analysis, R.S., M.E., K.G.; investigation, R.S., M.E., K.G.; resources, K.G. and S.M.; writing—original draft preparation, R.S.; writing—review and editing, K.G., M.E., S.M.; visualization, R.S., M.E.; supervision, K.G.; project administration, K.G.; funding acquisition, K.G., S.M. All authors have read and agreed to the published version of the manuscript.
This research has been supported by Ontario Centre of Innovation under grant OCI #34674 and by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant ALLRP 570760-21. Computation was enabled in part by the Digital Research Alliance of Canada.
Not applicable.
Not applicable.
Data analyzed in this study are obtained from London Hydro and are protected under a signed non-disclosure agreement. Approvals are needed for sharing this data.
The authors would like to thank London Hydro for supplying industry knowledge and data used in this study.
The authors declare no conflict of interest.
References
[Zhao and Guo(2015)]1su7054783
Zhao, H.; Guo, S.
External Benefit Evaluation of Renewable Energy Power in China for
Sustainability.
Sustainability 2015, 7, 4783–4805.
[2en()]2energy.gov
Grid Modernization and the Smart Grid.
Available online:
<https://www.energy.gov/oe/grid-modernization-and-smart-grid> (accessed
on 13 August 2022).
[3en()]3energy.gov
Alternative Fuels Data Center: Emissions from Electric Vehicles.
Available online:
<https://afdc.energy.gov/vehicles/electric_emissions.html> (accessed on
13 August 2022).
[Ghunem(2022)]4ghunem_2022
Ghunem, R.
Smarter, Faster and Smaller Power Grids: A Step towards a Green
Economy. 2022.
Available online:
<https://nrc.canada.ca/en/stories/smarter-faster-smaller-power-grids-step-towards-green-economy>
(accessed on 13 August 2022).
[Yamashita et al.(2008)Yamashita, Joo, Li, Zhang, and
Liu]5yamashita2008analysis
Yamashita, K.; Joo, S.K.; Li, J.; Zhang, P.; Liu, C.C.
Analysis, control, and economic impact assessment of major blackout
events.
Eur. Trans. Electr. Power 2008, 18, 854–871.
[Ozcan et al.(2021)Ozcan, Catal, and Kasif]24s21217115
Ozcan, A.; Catal, C.; Kasif, A.
Energy Load Forecasting Using a Dual-Stage Attention-Based Recurrent
Neural Network.
Sensors 2021, 21, 7115.
[Sehovac and Grolinger(2020)]sehovac2020deep
Sehovac, L.; Grolinger, K.
Deep learning for load forecasting: Sequence to sequence recurrent
neural networks with attention.
IEEE Access 2020, 8, 36411–36426.
[Sun et al.(2022)Sun, Qin, Przystupa, Majka, and
Kochan]23sun2022individualized
Sun, L.; Qin, H.; Przystupa, K.; Majka, M.; Kochan, O.
Individualized Short-Term Electric Load Forecasting Using Data-Driven
Meta-Heuristic Method Based on LSTM Network.
Sensors 2022, 22, 7900.
[Jung et al.(2021)Jung, Moon, Park, and Hwang]jung2021attention
Jung, S.; Moon, J.; Park, S.; Hwang, E.
An attention-based multilayer GRU model for multistep-ahead
short-term load forecasting.
Sensors 2021, 21, 1639.
[Al-Ogaili et al.(2019)Al-Ogaili, Hashim, Rahmat, Ramasamy,
Marsadek, Faisal, and Hannan]6al2019review
Al-Ogaili, A.S.; Hashim, T.J.T.; Rahmat, N.A.; Ramasamy, A.K.; Marsadek, M.B.;
Faisal, M.; Hannan, M.A.
Review on scheduling, clustering, and forecasting strategies for
controlling electric vehicle charging: Challenges and recommendations.
IEEE Access 2019, 7, 128353–128371.
[Yu et al.(2019)Yu, Si, Hu, and Zhang]yu2019review
Yu, Y.; Si, X.; Hu, C.; Zhang, J.
A review of recurrent neural networks: LSTM cells and network
architectures.
Neural Comput. 2019, 31, 1235–1270.
[Fekri et al.(2022)Fekri, Grolinger, and
Mir]8fekri2022distributed
Fekri, M.N.; Grolinger, K.; Mir, S.
Distributed load forecasting using smart meter data: Federated
learning with Recurrent Neural Networks.
Int. J. Electr. Power Energy Syst.
2022, 137, 107669.
[Jagait et al.(2021)Jagait, Fekri, Grolinger, and
Mir]jagait2021load
Jagait, R.K.; Fekri, M.N.; Grolinger, K.; Mir, S.
Load forecasting under concept drift: Online ensemble learning with
recurrent neural network and ARIMA.
IEEE Access 2021, 9, 98992–99008.
[Hastie et al.(2009)Hastie, Tibshirani, Friedman, and
Friedman]25hastie2009elements
Hastie, T.; Tibshirani, R.; Friedman, J.H.; Friedman, J.H.
The Elements of Statistical Learning: Data Mining, Inference,
and Prediction; Springer: Berlin/Heidelberg, Germany,
2009.
[Zhang and Mahadevan(2020)]10zhang2020bayesian
Zhang, X.; Mahadevan, S.
Bayesian neural networks for flight trajectory prediction and safety
assessment.
Decis. Support Syst. 2020, 131, 113246.
[Wan et al.(2013)Wan, Xu, Pinson, Dong, and
Wong]wan2013probabilistic
Wan, C.; Xu, Z.; Pinson, P.; Dong, Z.Y.; Wong, K.P.
Probabilistic forecasting of wind power generation using extreme
learning machine.
IEEE Trans. Power Syst. 2013, 29, 1033–1044.
[Kong et al.(2017)Kong, Dong, Jia, Hill, Xu, and
Zhang]9kong2017short
Kong, W.; Dong, Z.Y.; Jia, Y.; Hill, D.J.; Xu, Y.; Zhang, Y.
Short-term residential load forecasting based on LSTM recurrent
neural network.
IEEE Trans. Smart Grid 2017, 10, 841–851.
[Fekri et al.(2021)Fekri, Patel, Grolinger, and
Sharma]12fekri2021deep
Fekri, M.N.; Patel, H.; Grolinger, K.; Sharma, V.
Deep learning for load forecasting with smart meter data: Online
Adaptive Recurrent Neural Network.
Appl. Energy 2021, 282, 116177.
[Zhang et al.(2018)Zhang, Grolinger, Capretz, and
Seewald]13zhang2018forecasting
Zhang, X.M.; Grolinger, K.; Capretz, M.A.; Seewald, L.
Forecasting residential energy consumption: Single household
perspective.
In Proceedings of the 2018 17th IEEE International Conference on
Machine Learning and Applications, Orlando, FL, USA, 17–20 December
2018; pp. 110–117.
[L’Heureux et al.(2022)L’Heureux, Grolinger, and
Capretz]22l2022transformer
L’Heureux, A.; Grolinger, K.; Capretz, M.A.
Transformer-Based Model for Electrical Load Forecasting.
Energies 2022, 15, 4993.
[Tan et al.(2022)Tan, Hu, Chen, Wang, and Li]tan2022multi
Tan, M.; Hu, C.; Chen, J.; Wang, L.; Li, Z.
Multi-node load forecasting based on multi-task learning with modal
feature extraction.
Eng. Appl. Artif. Intell. 2022,
112, 104856.
[Ribeiro et al.(2022)Ribeiro, do Carmo, Endo, Rosati, and
Lynn]ribeiro2022short
Ribeiro, A.M.N.; do Carmo, P.R.X.; Endo, P.T.; Rosati, P.; Lynn, T.
Short-and very short-term firm-level load forecasting for warehouses:
a comparison of machine learning and deep learning models.
Energies 2022, 15, 750.
[Jiang et al.(2022)Jiang, Gao, Dai, Si, Hao, Zhang, and
Gao]jiang2022very
Jiang, Y.; Gao, T.; Dai, Y.; Si, R.; Hao, J.; Zhang, J.; Gao, D.W.
Very short-term residential load forecasting based on
deep-autoformer.
Appl. Energy 2022, 328, 120120.
[Amini et al.(2016)Amini, Kargarian, and
Karabasoglu]11amini2016arima
Amini, M.H.; Kargarian, A.; Karabasoglu, O.
ARIMA-based decoupled time series forecasting of electric vehicle
charging demand for stochastic power system operation.
Electr. Power Syst. Res. 2016, 140, 378–390.
[Yi et al.(2022)Yi, Liu, Wei, Chen, and Dai]yi2022electric
Yi, Z.; Liu, X.C.; Wei, R.; Chen, X.; Dai, J.
Electric vehicle charging demand forecasting using deep learning
model.
J. Intell. Transp. Syst. 2022, 26, 690–703.
[Koohfar et al.(2023)Koohfar, Woldemariam, and
Kumar]koohfar2023prediction
Koohfar, S.; Woldemariam, W.; Kumar, A.
Prediction of Electric Vehicles Charging Demand: A Transformer-Based
Deep Learning Approach.
Sustainability 2023, 15, 2105.
[Aduama et al.(2023)Aduama, Zhang, and
Al-Sumaiti]aduama2023multi
Aduama, P.; Zhang, Z.; Al-Sumaiti, A.S.
Multi-Feature Data Fusion-Based Load Forecasting of Electric Vehicle
Charging Stations Using a Deep Learning Model.
Energies 2023, 16, 1309.
[Zheng et al.(2020)Zheng, Shao, Zhang, and
Jian]14zheng2020systematic
Zheng, Y.; Shao, Z.; Zhang, Y.; Jian, L.
A systematic methodology for mid-and-long term electric vehicle
charging load forecasting: The case study of Shenzhen, China.
Sustain. Cities Soc. 2020, 56, 102084.
[Arias and Bae(2016)]15ARIAS2016327
Arias, M.B.; Bae, S.
Electric vehicle charging demand forecasting model based on big data
technologies.
Appl. Energy 2016, 183, 327–339.
[Maciejowska et al.(2016)Maciejowska, Nowotarski, and
Weron]16maciejowska2016probabilistic
Maciejowska, K.; Nowotarski, J.; Weron, R.
Probabilistic forecasting of electricity spot prices using Factor
Quantile Regression Averaging.
Int. J. Forecast. 2016, 32, 957–965.
[Shi et al.(2017)Shi, Liang, and Dinavahi]17shi2017direct
Shi, Z.; Liang, H.; Dinavahi, V.
Direct interval forecast of uncertain wind power based on recurrent
neural networks.
IEEE Trans. Sustain. Energy 2017, 9, 1177–1187.
[Kabir et al.(2021)Kabir, Khosravi, Kavousi-Fard, Nahavandi, and
Srinivasan]18KABIR2021106878
Kabir, H.M.D.; Khosravi, A.; Kavousi-Fard, A.; Nahavandi, S.; Srinivasan, D.
Optimal uncertainty-guided neural network training.
Appl. Soft Comput. 2021, 99, 106878.
[Niu and Liang(2018)]19niu2018nuclear
Niu, Z.; Liang, H.
Nuclear mass predictions based on Bayesian neural network approach
with pairing and shell effects.
Phys. Lett. B 2018, 778, 48–53.
[Mirasgedis et al.(2006)Mirasgedis, Sarafidis, Georgopoulou,
Lalas, Moschovits, Karagiannis, and Papakonstantinou]mirasgedis2006models
Mirasgedis, S.; Sarafidis, Y.; Georgopoulou, E.; Lalas, D.; Moschovits, M.;
Karagiannis, F.; Papakonstantinou, D.
Models for mid-term electricity demand forecasting incorporating
weather influences.
Energy 2006, 31, 208–227.
[Falkner et al.(2018)Falkner, Klein, and
Hutter]20pmlr-v80-falkner18a
Falkner, S.; Klein, A.; Hutter, F.
BOHB: Robust and Efficient Hyperparameter Optimization at Scale.
In Proceedings of the 35th International
Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 1437–1446.
[Wang et al.(2019)Wang, Fang, Zhang, Liu, Wei, and Shi]218781349
Wang, X.; Fang, F.; Zhang, X.; Liu, Y.; Wei, L.; Shi, Y.
LSTM-based Short-term Load Forecasting for Building Electricity
Consumption.
In Proceedings of the IEEE 28th International Symposium on Industrial
Electronics, Vancouver, BC, Canada, 12–14 June 2019; pp. 1418–1423.
[Grolinger et al.(2016)Grolinger, L’Heureux, Capretz, and
Seewald]grolinger2016energy
Grolinger, K.; L’Heureux, A.; Capretz, M.A.; Seewald, L.
Energy forecasting for event venues: Big data and prediction
accuracy.
Energy Build. 2016, 112, 222–233.
|
http://arxiv.org/abs/2306.05598v1
|
20230609000225
|
Enclosed Loops: How open source communities become datasets
|
[
"Madiha Zahrah Choksi",
"Ilan Mandel",
"David Goedicke",
"Yan Shvartzshnaider"
] |
cs.CY
|
[
"cs.CY"
] |
Both authors contributed equally to this research.
[email protected]
Cornell Tech
New York
New York
USA
10044
[1]
[email protected]
Cornell Tech
New York
New York
USA
10044
[email protected]
Cornell Tech
New York
New York
USA
10044
[email protected]
York University
Toronto
Ontario
Canada
M3J 1P3
Centralization in code hosting and package management in the 2010s created fundamental shifts in the social arrangements of open source ecosystems. In a regime of centralized open source, platform effects can both empower and detract from communities depending on underlying technical implementations and governance mechanisms. In this paper we examine Dependabot, Crater and Copilot as three nascent tools whose existence is predicated on centralized software at scale. Open source ecosystems are maintained by positive feedback loops between community members and their outputs. This mechanism is guided by community standards that foreground notions of accountability and transparency. On one hand, software at scale supports positive feedback loops of exchange among ecosystem stakeholders: community members (developers), users, and projects. On the other, software at scale becomes a commodity to be leveraged and expropriated.
We perform a comparative analysis of attributes across the three tools and evaluate their goals, values, and norms. We investigate these feedback loops and their sociotechnical effects on open source communities. We demonstrate how the values embedded in each case study may diverge from the foundational ethos of open communities as they are motivated by, and respond to, platform effects, corporate capture, and the centralization of open source infrastructure. Our analysis finds that these tools embed values that are reflective of different modes of development: some are transparent and accountable, and others are not. In doing so, certain tools may have feedback mechanisms that extend communities; others threaten and damage communities' ability to reproduce themselves.
Enclosed Loops: How open source communities become datasets
Yan Shvartzshnaider
July 31, 2023
===========================================================
§ INTRODUCTION
Modern coding and development are often characterized by the assembly of complex webs of public and open-source packages. Developers typically resolve coding problems by looking at public source code or querying a search engine, technical documentation, discussions, and forums available online. From the origins of the Free Software movement to early open source to the age of the “software supply chain” <cit.>, open source ecosystems have continuously re-invented themselves. At each stage, these changes were enabled by technical, legal, and sociological innovations codified within the open-source movement. The open-source movement produced large amounts of public source code as well as community norms around sharing, collaboration, and education. Core aspects of an open community, such as information governance, depend on established policies and commonly accepted behavior. Many of the rules are derived from established contextual norms and societal expectations. As open projects scale, both the resource and the community grow, and positive feedback loops of exchange thrive <cit.>.
Starting in the early 2010s, open-source ecosystems saw increasing centralization in package managers and code hosting platforms. GitHub played a central role as the home for a diverse set of open-source communities <cit.>. Platform effects changed how developers write code and collaborate. Simultaneously, the centralization of nearly all open-source code produced new tools for interacting with software at scale. In this paper, we document three cases in Section <ref> (Dependabot, Rust/Crater, Copilot) that demonstrate those interactions. Further, we examine how changes in the broader open-source community are reflected in these tools.
We outline the attributes cultivated in past open projects that maintain community norms and values, such as collaboration, transparent knowledge generation, and copyright. Past projects have carefully constructed and maintained technical and normative inroads for communities. We demonstrate that long-standing technical and normative values maintain positive feedback loops for community participation and show how modern projects can diverge and disrupt those processes.
Further, we compare three new large-scale uses of open-source code that operate on or within open-source projects. We have selected a set of key factors around community, knowledge production, and governing rules that we use to categorize the three case studies. We conclude with an outlook on how the change in interpretation of the aforementioned key factors might have an impact on open source, code-sharing behavior, and community growth.
§ BACKGROUND AND RELATED WORK
The development and growth of open source ecosystems relied on innovations in licensing that have created complex edge cases around what constitutes free use in both the legal and ethical sense <cit.>. In their work, <cit.> argue for greater scrutiny of the ethical deployment of Open Source AI projects and the resulting “downstream harms” they might cause, which are insufficiently mitigated under current norms or legal practices <cit.>.
In this section, we situate accountability and transparency in open-source movements by examining changes in the canonical values and norms operating in the broader ecosystem. Building on previous historical analysis of accountability <cit.>, we categorize a new mode of software at scale that raises novel questions for understanding contemporary open source systems.
This paper frames open source itself as a data commons. The mechanisms and historical provenance of datasets are a recurring concern in discussions of fairness, accountability, and transparency <cit.>. Drawing on Ostrom's Principles of Data Commons Governance <cit.> to study Open Data Ecosystems (ODEs), this work extends the literature towards tools that use public source code as a material resource and commons <cit.>. The production of substantial amounts of open-source code from which these datasets are curated was predicated on legal innovations in licensing regimes. Current attempts to rein in harms from AI systems <cit.> are part of a broader historical context in the effects of changes in licensing regimes on practitioners.
In the following sections, we examine the ideological, legal, and technical origins of open source. We then discuss how those factors changed in the presence of centralized platforms. This section concludes with a brief introduction to the cases we term "software at scale" examined in the rest of this paper.
§.§ Ideological Origins of Open Source
The rise of the Free Software and Open Source movements in the 1980s and 90s instantiated novel norms for hosting, distributing, and accessing publicly available source code <cit.>. The Free Software Foundation (FSF), and the Open Source Initiative (OSI) worked to foster and guide these movements into popular existence <cit.>.
The enabling context for this mode of collaboration was the legal innovation of copyleft licensing, published by Richard Stallman and the FSF, known as the General Public License (GPL). The GPL used “intellectual property rules to create a commons in cyberspace” <cit.>. It enabled “a commons, to which anyone may add but from which no one may subtract” <cit.>. Broadly construed, copyleft licensing agreements enable copies of copyrighted work (namely programs and software) to be shared, used, and modified freely. The FSF operates as a moral crusade against proprietary software. Comparatively, the OSI focuses on the practical benefits of open-source software, such as enabling developers to modify and distribute code <cit.>. Both movements have played a significant role in shaping the modern open-source community and its values. The modern open-source ecosystem is a product of innovations that are technical (code), legal (licensing), and sociological (norms of sharing). These factors are not static; rather, they are informed by and shaped by each other.
§.§ Linux Made Open Source Big
Linux began as a side project in 1991 by Linus Torvalds to build a free operating system kernel. Its rapid popularity and its organizational mythology, as captured in Raymond's The Cathedral and the Bazaar, helped to propel the growth of open source <cit.>. Its characteristics would come to define open-source development. Features include a large group of strangers arranged largely non-hierarchically while voluntarily collaborating on software over the internet. Working collaboratively in public became the basis of open source code as a “knowledge commons” <cit.>.
The key innovation of Linux:
was not technical but sociological. Until the Linux development, everyone believed that any software as complex as an operating system had to be developed in a carefully coordinated way by a relatively small, tightly-knit group of people....
Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users
When Linux switched to the GPLv2 license, it was adopted “from the FSF, but [we believed] in it as an engineering choice and as a way to allow people to improve and share rather than as a moral imperative.” (Linus Torvalds, 2016 <cit.>, emphasis theirs). Unlike GNU and BSD, which had a core team of developers who were physically proximate and could act as the “inside group” guiding development, Linux was truly made by strangers online. Linux needed the GPL, and the licensing regime informed the technical and sociological modes of the community's development.
It is difficult to measure the totality of Linux's success in the intervening decades. Its integration in Android makes it the most popular OS in the world <cit.>. Further, Linux is used in virtually every supercomputer <cit.> and nearly all cloud providers and web servers <cit.>. As early as 1993, it was understood that Linux was more than a project but a methodology <cit.>. That methodology grew rapidly, setting the stage for a developer culture dominated by open-source software.
§.§ GitHub Made Open Source Centralized
Originally, compressed tar files with source code were shared via FTP and mailing lists, and would typically be compiled on users' machines. Around 1993, some distributions of Linux developed package formats that simplified installing pre-built binaries using package managers built into the OS <cit.>. These tools allowed users to install, upgrade and remove isolated packages. Debian released apt-get in 1998, making it possible to download a package and all of its dependencies <cit.>. This change meant code could be developed by composing smaller, more modular components <cit.>. Language-specific package managers had a similar effect, changing “the idea of what might constitute an ‘open-source project’ [which] became smaller, too, not unlike the shift from blog posts to tweets” <cit.>.
As the number of open source projects and communities expanded, there were pressures to develop organizational infrastructures on top of not just code but community as well. Git was released by Linus Torvalds in 2005 to manage the kind of decentralized collaboration that the Linux project had pioneered. In 2008 GitHub was founded as a “Social Coding” platform with community features layered on top of Git. GitHub emerged in the same era as YouTube, Facebook, and Twitter, as the rest of Web 2.0 was taking off. The platform integrated pull requests on top of Git, allowing developers to submit, review and merge changes to open-source code on the platform.
By 2018 GitHub announced 100 million repositories<cit.>. In January of 2023 they had 100 million registered users and an explicit vision to make GitHub “the home for all developers” <cit.>. Centralization has the effect of making a commons accessible as a resource <cit.>.
Working in Public documents the changing nature of open source development in the late 2010s as GitHub became the de facto home for open source communities <cit.>. <cit.> describes “Federations” and “Stadiums” as two possible structures for open source communities. The former is defined by high contributor growth, high user growth, and complex governance structures. Linux is a prototypical federation. Stadiums are characterized by low contributor growth and high user growth, and are typically powered by one or a few developers. Stadiums may have large numbers of casual contributors <cit.> whose connection to a project more closely resembles that of users, or the parasocial relationship between creators and viewers on platforms such as Twitch and YouTube. Stadiums also typify many of the smaller, modular libraries that are legally reused. The organization of open communities within “stadiums”, therefore, demonstrates a shift in community norms where developers identify as users of projects, instead of participatory and contributing members of open-source communities <cit.>.
§.§ Case Studies
The selected tools are made possible through large amounts of accessible code shared on centralized platforms and tightly integrated package managers. The cases reveal the divergent strategies of the modern open-source ecosystem as it conforms to extant platform effects. The case studies capture a novel mode of interacting with public software that is enabled by centralization while contrasting their embedded values and visions of open software development.
§.§.§ Dependabot
Dependabot is a tool that maintains source code dependencies and mitigates security vulnerabilities in the software supply chain <cit.>. Founded in 2017, Dependabot was acquired by GitHub in 2019, and has since been fully integrated into GitHub's platform <cit.>. Further, Dependabot alerts are on by default for public repositories on GitHub <cit.>. Security Advisories are synchronized from the National Vulnerability Database<cit.>, and repository owners can also raise security vulnerabilities in their code. If an advisory falls within one of GitHub's supported ecosystems, the advisory gets verified by the GitHub Security Team. While other dependency management bots exist, Dependabot's integration on GitHub makes it the most widely used. In 2019, 67% of bot-created pull requests came from the original and GitHub native versions of Dependabot <cit.>.
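The core check such a bot performs can be sketched as comparing declared dependency versions against a list of advisories with affected version ranges. The sketch below is illustrative only and does not reflect Dependabot's actual implementation or GitHub's advisory schema; the advisory entries and package names are made up, and version handling relies on the Python packaging library.

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    # Hypothetical advisories: package -> (vulnerable range, patched version).
    ADVISORIES = {
        "requests": (SpecifierSet("<2.31.0"), "2.31.0"),
        "flask":    (SpecifierSet("<2.2.5"),  "2.2.5"),
    }

    def check_dependencies(pinned: dict) -> list:
        # pinned: package name -> pinned version, e.g. parsed from a lock file.
        alerts = []
        for name, version in pinned.items():
            if name in ADVISORIES:
                vulnerable, patched = ADVISORIES[name]
                if Version(version) in vulnerable:
                    alerts.append(f"{name} {version} is vulnerable; bump to {patched}")
        return alerts

    print(check_dependencies({"requests": "2.25.1", "flask": "2.2.5"}))
    # -> ['requests 2.25.1 is vulnerable; bump to 2.31.0']

A bot built around such a check would then open a pull request that bumps the affected dependency to the patched version.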
§.§.§ Crater
The Rust programming language makes an aggressive commitment to stability across updates to the language and compiler <cit.>. Crater, originally called taskcluster-crater, was introduced in 2015 as a tool to help guarantee stability by “compiling and running tests for every crate on crates.io (and a few on GitHub)” <cit.>. Currently, there are 103,318 crates on <crates.io> (2023-01-29). Additionally, every single public repository on GitHub with a Cargo.lock file <cit.> is tested. Brian Anderson developed the initial version of Crater as “a tool to run experiments across parts of the Rust ecosystem.” <cit.>. Crater runs weekly or more <cit.>. When new errors cause compilation to fail, Rust's compiler team may revert changes to the compiler.
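The essence of such an experiment is to build every crate with a baseline toolchain and with a candidate toolchain, and to flag crates that go from building to failing. The sketch below assumes a rustup-managed environment where cargo +<toolchain> selects the compiler and local checkouts of the crates to test; it is a toy version of the idea, not Crater's actual code.

    import subprocess
    from pathlib import Path

    def builds(crate_dir: Path, toolchain: str) -> bool:
        # Returns True if `cargo build` succeeds with the given toolchain.
        result = subprocess.run(
            ["cargo", f"+{toolchain}", "build"],
            cwd=crate_dir, capture_output=True, text=True,
        )
        return result.returncode == 0

    def find_regressions(crate_dirs, baseline="stable", candidate="beta"):
        # A crate regresses if it builds on the baseline toolchain
        # but fails on the candidate one.
        regressions = []
        for crate in crate_dirs:
            if builds(crate, baseline) and not builds(crate, candidate):
                regressions.append(crate)
        return regressions

    if __name__ == "__main__":
        crates = [p for p in Path("checkouts").iterdir() if p.is_dir()]
        for crate in find_regressions(crates):
            print(f"regression: {crate.name}")

The "checkouts" directory of crate sources is an assumption for illustration; the real tool downloads crates and repositories itself and also compares test results, not only build outcomes.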
§.§.§ Copilot
Copilot is a cloud-based “artificial intelligence” assistant for writing code. Copilot is installed as a plugin compatible with a number of code editors, where it acts as a form of advanced auto-complete. It is currently based on OpenAI's Codex model <cit.>. Codex is a fully trained GPT-3 model fine-tuned on 54 million public software repositories hosted on GitHub, filtered down to 159 GB of Python source code <cit.>. It is unclear what data the deployed model was fine-tuned on; it is likely much more. More than 1.2 million developers joined Copilot's free technical preview in 2021. Within one month of moving to a $10/month subscription model, 400,000 users signed up <cit.>.
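The editor-plugin workflow can be approximated with any publicly available code-generation model. The sketch below uses the Hugging Face transformers library with an openly released code model as a stand-in; the model name and generation parameters are illustrative assumptions, and this is not how Copilot or Codex are actually served.

    from transformers import pipeline

    # A small, openly available code model as a stand-in for a Codex-like service.
    generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

    prompt = "def moving_average(values, window):\n    "
    completion = generator(
        prompt,
        max_new_tokens=48,
        do_sample=False,  # greedy decoding for a deterministic suggestion
        pad_token_id=generator.tokenizer.eos_token_id,
    )[0]["generated_text"]

    print(completion)  # prompt plus the model's suggested continuation

An editor plugin wraps exactly this loop: it sends the code around the cursor as the prompt and surfaces the continuation as an inline suggestion.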
§ COMPARATIVE ANALYSIS
The following subsections compare the aforementioned case studies across a number of attributes that create incentives for the community and maintain positive feedback loops. The goal of this analysis is to investigate how each tool is produced by platform effects in open source.
§.§ Project Goals and Centralization
Each tool explicitly lays out its own functional goals in documentation or marketing. In addition to these technical or operational objectives, each tool has implicit goals that define how it interacts with the broader open source ecosystem.
§.§.§ Dependabot
Within open source ecosystems, common wisdom suggests that by keeping individual packages up to date and security holes patched, the ecosystem as a whole is made safer <cit.>. Dependabot attempts to keep users' code secure by helping keep everyone's code secure and up to date.
§.§.§ Crater
In describing all the ways the Rust language is tested to maintain stability and safety, one maintainer described how the existence and centralization of GitHub and Crates.io “allow us to treat the entire world of open source Rust code as our test suite” <cit.>. Those tests assist Rust language developers in maintaining the state of the compiler while keeping updates to the language timely. By allocating time and compute toward such a goal, they are demonstrating their commitment to stability <cit.> for all users of the language, within open source and outside of it.
§.§.§ Copilot
Copilot's stated goal is to help developers focus on “bigger” problems, leaving the tedious parts of coding to an automated system <cit.>. Copilot's unstated goal is also to be a subscription-based product from which GitHub derives a profit <cit.>. Like nearly all social platforms from the 2010s, the free hosting of content is a means of financial gain; this is no different for GitHub. Furthermore, Copilot collects telemetry data, as discussed in Section <ref>. This serves the other two goals by acting as a data flywheel <cit.> to improve future versions of Copilot, making the tool more attractive as a product.
§.§ Platforms and Technical Implementations
All three case studies' input data would not exist without the changes in the open source ecosystem described in Section <ref>. We examine how each tool leverages centralization and platform effects to provision data as a resource. Further, we examine how the technological material itself reflects embedded values <cit.>.
§.§.§ Dependabot
GitHub scans user code and crosschecks it against a shared database of CVEs <cit.>. In addition, GitHub is a numbering authority capable of adding vulnerabilities to that shared database.
Dependabot is primarily an interface to that database. The code for the bot is open-sourced but individuals cannot add to the database. They can only make suggestions that are reviewed by the GitHub security team.
§.§.§ Crater
The Rust language centralizes its own package management, creating a data source for Crater to run on. Additional packages can be sourced from GitHub because of its dominance over code hosting. Crater currently runs on an AWS c5.2xlarge instance with 2 terabytes of storage <cit.>. This costs $0.34/hour, with runs taking a few days. The servers used by Crater are restricted to certain Rust team members; however, the costs are nominal and individuals could presumably implement their own system.
§.§.§ Copilot
Without the centralization of GitHub, collecting the massive datasets necessary for training a Codex-like model would likely be much more difficult. GitHub already hosts the code; it is technologically trivial for them to use it as they wish. Additionally, if dataset size is to be the bottleneck in improvements of transformer-like models <cit.>, it is necessary to ingest maximal amounts of source code regardless of licensing or authors' preferences. Training large models is financially costly; while those numbers are not released, based on <cit.> it could be $40,000 to $60,000 if compute is priced similarly to AWS EC2 P3 instances. There would be significant costs to developing and running the inference infrastructure as well. The tool's existence and underlying technology cannot be cleaved from the social dimensions of its development <cit.>. Copilot and similar models are fundamentally products of large, well-funded institutions.
§.§ Transparency and Relationship to Open Source Licences
§.§.§ Dependabot
Released under the Prosperity Public License, Dependabot's source code is available online, as is the database the alerts are based on. The tool is self-activating and on by default for public repositories. The license creates strong limitations for commercial use by offering a restricted trial for such uses. Dependabot does not create copyright infringement or risks, as its operational function is to maintain the sustainability of a given repository. Dependabot's technical processes would be classified as transformational use in that they enhance the copyrighted code towards the goal of protecting packages from security risks.
§.§.§ Crater
Crater is licensed under both the MIT and Apache 2.0 licenses. The programmers who publish Rust code to crates.io and GitHub do not have expressive interests that could be violated by its reuse. Because these are public-facing repositories, regression testing does not violate terms of use as defined by the licenses. Further, this use does not produce copyright issues from a fair-use perspective, as it does not reduce the market value of the code that is posted. Instead, the code is made more valuable by ensuring that it continues to work. The Rust maintainers make noncommercial use of packages available online, and the functional processes Crater carries out are transformative. From a community perspective, there are no compelling normative reasons why a programmer would object to their code being scraped. Testing and stability do not invade an interest in personhood, nor do they create losses to the developer. They do not create unearned benefits to other parties. Rather,
testing using code at scale benefits the community of which the programmer is part. It does not interfere with their authorship interest in the expression in the code; it does not interfere with their incentives to create it.
§.§.§ Copilot
Whereas licenses enable others to obtain rights to copy and use software, terms of service agreements are a legal contract that outlines a service relationship between a customer and a corporation. The Copilot case presents a copyright conundrum: a code recommendation system that generates copyright-infringing works. However, there can be no secondary copyright liability without primary infringement, which means that Copilot is not itself liable unless it is used to infringe. Under U.S. copyright law, there are no copyrightable interests in infringing works. Therefore, Copilot, as a recommendation tool that copies code, produces a significant source of risk to its users. Further, Copilot is trained on code from repositories on GitHub released under numerous open source licences but provides no attribution information, arguably contradicting its own normative standards for sharing code. GitHub itself admits that the text of the GPL was contained in the training data 700,000 times <cit.>. GitHub executives also argue that the tool satisfies fair use because the output belongs to the operator.
Ongoing litigation will determine the legality of laundering open source to sell as a subscription. Regardless of legality, a number of projects have left GitHub for at least violating the spirit of their license and their trust. One complaint asks why the models treat open source code as a free commodity but were not also trained on the proprietary code of GitHub and its owner Microsoft <cit.> if there are no copyright concerns with the system.
§.§ Interactions and Feedback Loops
In this section, we discuss how users interact with the tool and how much autonomy users have to not participate in the production of resources that enable these tools. Further, we explore how feedback loops are or are not created between users interactions, software at scale, communities, and the broader open source ecosystem.
§.§.§ Dependabot
Users interact with Dependabot when alerts are raised in their repository. Those alerts are only visible to maintainers; otherwise, they would risk outing vulnerable projects. While alerts are on by default for public repositories, users can opt out and turn alerts off. Users can also further engage with Dependabot by raising vulnerabilities, in their own packages or in others. These features are well documented on GitHub.
The notion that the safer each individual package is, the safer the broader open source ecosystem becomes, describes a feedback loop between users' individual security concerns and the security of all packages.
§.§.§ Crater
There is relatively little user interaction with Crater by normal members of the community. It is unclear how widely known its existence is to casual users of the programming language. While all code on crates.io is subject to Crater experiments, users could avoid publishing their code to the package manager. Similarly, only projects on GitHub with a Cargo.lock file are scraped and included in Crater tests. If those files are never pushed to a public repository, they will be opted out of Crater runs.
Though users may not interact with Crater directly, there exists a positive feedback loop between the open source ecosystem and the tool. Stability is a core commitment by the maintainers to the Rust community <cit.> and a major draw to users <cit.>. The better the Rust Foundation can keep that commitment, the stronger the draw of the language and the more the ecosystem produces, providing more data for Crater to function. Even beyond the broad feedback loop, one Rust maintainer described cases where code tested by Crater was itself unsound: “[they] often inform the crate [package] maintainer and sometimes even help them fix it” <cit.>.
§.§.§ Copilot
Of the tools listed, Copilot has the tightest interaction loop with users and the least transparency in the feedback loop between the community and the tool. By implementing it as a plugin for a wide variety of code editors, it adds functionality to the context where developers are comfortable working <cit.>. Copilot leverages textual norms of the medium of code to emphasize the “pair programming” modality <cit.>. Users report spending less time searching online for answers at the cost of understanding their own code less <cit.>. GitHub does collect telemetry data (though users can opt out) on which code suggestions have been accepted, edited, or rejected.
While these data may be used to update the underlying model, feedback loops between the model and the open source platform are limited temporally to model updates. The authors of the Codex paper note that “the model may make suggestions for deprecated methods. This could increase open-source developers' incentive to maintain backward compatibility, which could pose challenges given that open-source projects are often under-resourced”. One of the key innovations of working in public was the rapid iteration cycle that allows productive feedback loops to develop. Copilot is unlikely to be updated fast enough to produce those loops.
A significant number of users pushing Codex-like generated code to public repositories reinforces the use of popular languages and open source packages. The long term effect entrenches the disparities in package usage rates. The authors of Codex note how differential import rates might:
increase the dominance of an already influential set of individuals and organizations in the software supply chain.
...
Where a user may have done an Internet search before for 'which machine learning package to use' or 'pros and cons of PyTorch vs. Tensorflow' they might now just type '# import machine learning package' and trust Codex to do the rest. Users might be more inclined to accept the Codex answer under the assumption that the package it suggests is the one with which Codex will be more helpful. As a result, certain players might become more entrenched in the package market and Codex might not be aware of new packages developed after the training data was originally gathered.
At a sufficient scale of users, this may hinder newer projects and libraries from gaining a foothold and developing their own community: an algorithmic and computational monoculture.
§ IMPLICATIONS AND DISCUSSION
§.§ Incentivizing Community
Successful open source projects, software, and communities are those that have valuable code, an active community of users, and a broader network that stimulates collaboration, dialogue, and interaction <cit.>. However, these success-defining factors are not weighted equally. The quality of the resource is a culmination of the time and effort developers dedicate to making the code better <cit.>. This is sometimes achieved by organizing tasks into parts and incentivizing developers to contribute in small but highly meaningful ways. And while the end product of high-quality code and technical achievements is important to code being correct, maintainable, and modular, the process is more significant than the outcome <cit.>.
The key to long-term success is a thriving community of developers who are motivated and incentivized to support the project solely for the social benefit that maintaining the project brings to others, regardless of their membership or activity within the community. Along these lines, our analysis investigates three attributes integral to positive feedback loops within an open source community: a project's goals, values, and norms as they relate to the process.
§.§ Values in Design
The three case studies examined in this paper fit an emerging typology for applications of large source-code datasets. They are fundamentally new technological objects that communities will increasingly interact with, by choice or not. The values embedded in these tools and the norms they encourage will likely continue to shape open source communities for the foreseeable future. In open source ecosystems, accountability and transparency in technical systems are intimately tied to the historical arc these communities have traversed.
It is important to acknowledge that these paths are not predetermined. Understanding whether implementers of these systems perceive the open source ecosystem as a collaborator, a commons, or a resource to be extracted is predictive of downstream concerns. Of the three cases, Copilot would seem to most challenge the norms and ethos of canonical open source development. This is not predicated on technological determinism; rather, it was dependent on choices made by the developers of the system. Amazon's Copilot competitor makes an effort to specify when code completions are similar to specific samples in the dataset and will provide licence and attribution information <cit.>. The Software Freedom Conservancy has convened a working group to determine what a truly Free and Open Source (FOSS) Copilot alternative would look like <cit.>. The BigCode project has worked to produce a large language model for code using only permissively licensed code <cit.>. They are working towards that goal with a robust opt-out process <cit.>.
§.§ Long Term Viability
The reliance on open source code or data raises the question of sustainability: for how long can new models and products rely on the open-source community to provision these goods? As the need for data increases, so does the concern over the provision of quality data <cit.>. Platforms will need to encourage participation to satisfy computational requirements <cit.>. Without supporting governance structures that lead to the production and maintenance of a digital commons, systems break due to unaddressed vulnerabilities, broken dependencies <cit.> and invalid datasets <cit.>.
Regardless of the social formulations that develop these tools, they push open source ecosystems towards a model that contains more “stadiums.” This model is comprised of a large number of open source packages that are co-dependent on one another, and a large user base whose members are not necessarily part of those communities. For Dependabot and Crater, these qualities are implicit in the functionality of the tools, which are predicated on user bases that create many small packages and libraries rather than monolithic code bases with a persistent community. Copilot explicitly encourages the “stadium” model. It promotes the most popular packages in its training dataset and trades understanding <cit.> for developer efficiency. Copilot functionally isolates developers from communities by being the first, and maybe only, resource that they interact with. Popular discourse has expressed concerns about Copilot and large language models replacing developers; however, the real risk is the replacement of communities.
§ CONCLUSION
In this paper, we present an investigation into the socio-technical effects of feedback loops in open source communities. We first trace the historical and ideological origins of open source through to the modern era. We examine how forces centralizing those ecosystems produce distinct social formulations of open source communities. Similarly, centralization creates a resource of open source code as a dataset.
We classify a distinct form of ecosystem-wide tools. Our comparative analysis of three open-source tools (Crater, Dependabot and Copilot) reveals how feedback loops, coupled with divergence from open-source community goals, values, and norms, could hinder community formation and sustainability.
|
http://arxiv.org/abs/2306.07399v1
|
20230612195927
|
4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in varying outfits exhibiting large displacements
|
[
"Matthieu Armando",
"Laurence Boissieux",
"Edmond Boyer",
"Jean-Sebastien Franco",
"Martin Humenberger",
"Christophe Legras",
"Vincent Leroy",
"Mathieu Marsot",
"Julien Pansiot",
"Sergi Pujades",
"Rim Rekik",
"Gregory Rogez",
"Anilkumar Swamy",
"Stefanie Wuhrer"
] |
cs.CV
|
[
"cs.CV"
] |
Figure: Identity and outfit axes of the 4DHumanOutfit dataset. 20 actors were captured in 7 outfits each while performing 11 motions per outfit. The figure shows the identities and the outfits of the subset that we release, with male actors on the left and female actors on the right. The first row shows minimal clothing, which we leverage to obtain body shape parameters by fitting a parametric body model.
[1]NAVER LABS Europe
[2]Inria centre at the University Grenoble Alpes
[3]Authors ordered alphabetically.
This work presents 4DHumanOutfit, a new dataset of densely sampled spatio-temporal 4D human motion data of different actors, outfits and motions. The dataset is designed to contain different actors wearing different outfits while performing different motions in each outfit. In this way, the dataset can be seen as a cube of data containing 4D motion sequences along 3 axes with identity, outfit and motion. This rich dataset has numerous potential applications for the processing and creation of digital humans, augmented reality, avatar creation and virtual try on. 4DHumanOutfit is released for research purposes at <https://kinovis.inria.fr/4dhumanoutfit/>. In addition to image data and 4D reconstructions, the dataset includes reference solutions for each axis. We present independent baselines along each axis that demonstrate the value of these reference solutions for evaluation tasks.
§ INTRODUCTION
4DHumanOutfit is a new dataset of 4D human motion sequences, sampled densely in space and time, with 20 actors, dressed in 7 outfits each, and performing 11 motions exhibiting large displacements in each outfit. We designed 4DHumanOutfit to enable the combined analysis of shape, outfit and motions with humans. This results in a dataset shaped as a cube of data containing 4D motion sequences with three different factors that vary along the axes identity, outfit, and motion. Fig. <ref> illustrates the morphology and clothing axes of this cube.
Analyzing and modeling the dynamics of human garments during motion and across actors is a well-studied problem in computer vision and computer graphics, with the goals of understanding human motion from partial data and generating realistic digital human animations. This has applications in video understanding, including action and fashion recognition; telepresence, including virtual change rooms and fashion transfers; and entertainment, including animation content generation. Many existing works study this problem from a data-driven perspective, where the goal is to learn motion dynamics from example data. To facilitate these studies, three main types of datasets of humans in motion have been introduced. The first type of dataset contains 2D videos of dressed humans in motion <cit.>, which allows to capture the appearance of rich clothing dynamics observed in real garments. More recently, large-scale 4D datasets of minimally clothed 3D human bodies in motion have been published, either captured using acquisition platforms <cit.> or computed by fitting models to sparse motion capture data <cit.>,
which allow learning detailed 3D body shape deformations over time. To enhance such datasets with garments, recent works use physical simulation to drape clothing on this 4D data <cit.>, therefore enabling the modeling of clothing dynamics for synthetically generated garments. 4DHumanOutfit contributes 4D data, sampled densely in space and time, of human bodies captured in different outfits and different motions. This combines the advantages of existing 2D datasets, which capture the dynamic behaviour of layered clothing, including complex dynamics caused by seams and friction, with the advantages of existing 4D datasets, which contain 3D shape information, including fine-scale geometric details.
The data we present has been captured in a multi-view acquisition platform that acquires 68 synchronized RGB streams at 50 frames per second, which are subsequently used to reconstruct densely sampled 3D geometric models with texture information per frame. For this dataset, we provide the RGB videos with masked background, and the reconstructed motion sequences in 4D in different spatial resolutions.
To demonstrate the potential of our dataset, we perform a baseline evaluation along each of the three axes independently. To this end, we introduce three tasks together with evaluation protocols. For the identity axis, we aim to predict the body shape of an identity given a 4D motion sequence of an actor wearing an arbitrary outfit. As a reference solution for evaluating this task, we provide sequences captured in minimal clothing, with body shapes resulting from fitting a standard parametric human body model <cit.> to the data. For the outfit axis, we aim to retrieve the outfit in a standardized pose from a given 4D motion sequence of an actor wearing an arbitrary outfit. As a reference solution to evaluate this task, we provide static scans of the outfit acquired on a mannequin. For the motion axis, we aim to retarget motion between actors, from a 4D motion sequence showing the source motion to a static 3D scan of the target actor. When applied to data within the datacube, reference solutions are available, as every actor was acquired performing each of the motions in each of the outfits. These tasks demonstrate that each axis of the datacube provides unique information that can be exploited in a large variety of practical applications.
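All three tasks ultimately compare a predicted surface against a reference one, so a simple geometric error such as a symmetric Chamfer distance between sampled vertex sets is a natural starting point for evaluation. The sketch below is an illustrative metric only; the evaluation protocols released with the dataset may use different measures.

    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer_distance(pred_vertices: np.ndarray, ref_vertices: np.ndarray) -> float:
        # pred_vertices, ref_vertices: (N, 3) and (M, 3) arrays of 3D points.
        # Average nearest-neighbour distance in both directions
        # (in metres if the meshes are metric).
        pred_to_ref = cKDTree(ref_vertices).query(pred_vertices)[0].mean()
        ref_to_pred = cKDTree(pred_vertices).query(ref_vertices)[0].mean()
        return 0.5 * (pred_to_ref + ref_to_pred)

Such a metric applies directly to the body shape and outfit retrieval tasks, and can be accumulated per frame for motion retargeting.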
The main contributions of this work are:
* The introduction of 4DHumanOutfit, a datacube of dynamic 4D human motion of 20 actors in 7 outfits each, performing 11 motions in each outfit, i.e. 1540 sequences in total. A large subset of 18 actors in 6 outfits and 10 motions is released for research purposes.
* The proposition of associated evaluation protocols and reference solutions for three tasks, along the identity, outfit, and motion axes of the datacube.
§ RELATED WORK
Capturing humans in clothing has attracted many efforts in computer vision and graphics. Existing works can be mainly clustered into datasets containing
(i) 2D images of clothed persons, (ii) synthetic 3D models, and (iii) 3D scans of humans in motion, different identities and various outfits.
We review them briefly to highlight how 4DHumanOutfit goes beyond the state of the art.
§.§ 2D fashion datasets
Many works focus on the creation of datasets of 2D images, as these are relatively easy to acquire. This includes early works, such as the Fashionista dataset <cit.> or works targeting to describe clothing with semantic attributes <cit.>,
as well as more recent datasets, such as FashionPedia <cit.>, among many others made available for research purposes.
A recent survey <cit.> provides an exhaustive list of published datasets until 2020, with a comprehensive classification of the tasks that can be addressed with 2D data. These include landmark detection, clothing parsing, retrieval, and attribute recognition. In addition, these datasets have made it possible to tackle the task of virtual try-on, where one can create high-quality, compelling images <cit.> or videos <cit.> of how a person would look wearing a given garment in a target pose. While the generated images reach impressively realistic quality, their applicability is still limited, as they cannot be used for actual metric fit assessment.
§.§ Synthetic datasets
Capturing data with cameras or 3D scanners usually requires tremendous human and material effort. In order to circumvent this issue, synthetic datasets of clothed humans can be generated instead. In this category, works have proposed different datasets, with one or multiple characters in different settings, by leveraging 3D editing tools, such as MakeHuman <cit.> and Mixamo <cit.>, or human body models, such as SMPL <cit.>.
For example, several datasets such as SURREAL <cit.>, MHOF <cit.> or LTSH<cit.>, place 3D models of humans on background images.
These datasets have been designed to address the task of 2D or 3D pose estimation from a single image.
Other datasets, such as AGORA <cit.>, have increased the challenge by
including images of multiple dressed persons with plausible interactions with the environment.
As all these images are static and lack 3D realism, they do not capture and model the complexity of real clothing dynamics.
To include dynamics in the data, most works leverage physics simulators, which can account for the type of clothing through physical parameters.
Since the early work of Guan et al. <cit.>, several methods have considered different clothing parameters, which allow creating plausible variations in the wrinkle patterns present on cloths.
Synthetic datasets, with modest sizes such as BCNet <cit.>, or larger scale datasets, such as 3DPeople <cit.> and Cloth3D <cit.> provide 3D models of clothed humans. The last two explicitly explore the three dimensions of identity, motion, and clothing. While the explored range of subjects, poses, and cloth variations is impressive, the realism of these data are limited by the accuracy of the simulator used to create the data. The proposed 4DHumanOutfit dataset takes an alternative approach by capturing reality in a multi-view studio.
Interesting recent works have even modeled synthetic clothing at the sewing pattern level, allowing to automatically adjust the garment size to a personalized shape <cit.>. In our 4DHumanOutfit dataset we also release scans of the clothes on mannequins, which could allow to work on the clothes at the sewing pattern level.
All these works providing synthetic datasets have contributed greatly to advancing algorithmic approaches in the community.
With our work, we argue that the acquisition of actual humans performing dynamic motions in varied clothing is necessary to validate the applicability of existing approaches to real data.
§.§ Scanned 3D humans datasets
With 4DHumanOutfit we explore the three axes of motion, identity, and clothing. We briefly review existing datasets that consider similar axes.
Motion.
Human pose plays a crucial role in many application fields, such as medicine, sports or graphics; it has thus attracted many research efforts.
Marker based motion capture.
A classical approach to capture human motion is to use motion capture (MoCap) systems with reflective markers.
Following the pioneering dataset HumanEva <cit.>, many other datasets have been acquired: Human3.6M <cit.>, Total Capture <cit.>, AMASS <cit.> or HUMAN4D <cit.>.
The reflective markers make it possible to extract a good approximation of the pose, which is considered ground truth.
Other modalities, such as video, depth sensors or inertial sensors, are simultaneously acquired.
From this paired data, researchers have studied how to infer a pose from these other modalities.
In addition, massive datasets, such as AMASS <cit.>, have made it possible to learn human pose priors that are widely used in the literature. While yielding precise pose information from the marker locations, MoCap systems provide only sparse information on motion and imply complex setups, with markers to be placed on the subjects.
Markerless approaches. Another strategy to capture the human pose and motion is to use markerless systems, relying for that purpose on monocular settings <cit.>; on passive multi-view video systems, like for instance HUMBI <cit.> and
Hi4D <cit.>, the PanopticStudio <cit.>; or on active systems such as 3DMD, used to acquire for example DynamicFaust <cit.>, Flame <cit.> or Mano <cit.>. Depth camera setups have also been used to capture 4D motion sequences of multiple subjects <cit.>.
For our work we use the Kinovis acquisition platform <cit.>, a passive multi-camera system with a wide acquisition volume that enables dynamic displacements of the subjects and rich clothing dynamics.
Identity.
In the identity axis, the seminal CAESAR dataset <cit.>, created to study body morphology and for clothing sizing purposes, contains scans of over 4500 individuals in 3 static poses each. Further datasets have scanned different persons in static <cit.> or dynamic <cit.> situations.
Hasler et al. <cit.> provide a total of 500 static poses of 114 individuals, while the FAUST dataset <cit.> contains 10 individuals in 10 static poses each.
DYNA <cit.> contains 10 individuals in a total of 129 sequences, which have been accurately registered for benchmarking in the DynamicFAUST dataset <cit.>.
All these datasets have allowed the study of identity and static or dynamic pose, but have not considered the clothing axis.
Outfit.
Early efforts have focused on capturing and analyzing sequences of a few individuals captured in a single outfit <cit.>.
More recently, different tasks related to clothing have motivated the creation of additional 3D datasets of real clothed humans.
For example, to explore how clothes of different sizes drape on the same human, the SIZER dataset <cit.> captured around 2000 static scans from 100 subjects wearing clothes of different sizes. As all subjects strike the same A-pose, this dataset does not allow the study of cloth dynamics.
To tackle the problem of estimating the shape under clothing, Yang et al. <cit.> and Zhang et al. <cit.> acquired scans of different subjects, with and without clothing, performing several motions.
These datasets consider 6 subjects, 3 motions, and 3 outfits for Yang et al. <cit.>, and 5 subjects, 3 motions, and 2 outfits for Zhang et al. <cit.>, whereas 4DHumanOutfit considers 20 subjects in 7 outfits and 11 motions.
Other datasets have been acquired with consumer RGBD sensors <cit.>, providing lower quality than 4DHumanOutfit.
To learn a generative model of clothing, the CAPE dataset <cit.> was released. It also includes scans and SMPL mesh registrations from the ClothCap work <cit.> and contains 15 subjects in 4 different outfits performing different motions. The 4DHumanOutfit dataset is larger, with 20 subjects and 7 clothing styles, and exhibits richer dynamics.
In addition, the systematic acquisition of all subjects performing very similar motions in all outfits provides an unprecedented opportunity to study how dynamic cloth deformations behave depending on identity, motion, and clothing.
§ 4DHUMANOUTFIT DATASET
In the following we detail the acquisition setup, the constitution of a cohort of subjects, the selected outfits, motions, and their captures.
§.§ Data acquisition
Motion sequences were captured by 68 calibrated RGB cameras (4 megapixels, 50 frames per second, focal lengths between 8 and 16 mm) positioned roughly on a half-ellipsoid with radii 4m and 5m and height 5m looking towards the stage centre, for an average image resolution of 2.5mm per pixel at the scene centre. The total capture area covers a length of 5.5m and a width of 3.5m. Fig. <ref> shows the multi-camera platform <cit.>.
The capture and reconstruction pipelines, shown in Fig. <ref> and <ref>, consist of several steps. First, synchronized video streams are acquired and silhouettes are segmented with the software <cit.> of the multi-camera platform <cit.>. Second, the resulting images and silhouettes are undistorted using the calibration information and masked with projections of inflated visual hulls. This significantly reduces the size of the data.
3D reconstructions are computed independently per frame, which results in a densely sampled 3D mesh per time instant. These meshes are obtained by performing multi-view reconstruction <cit.> on the undistorted and masked images.
We decimate the resulting reconstruction into lower resolutions of 250k, 65k, 30k, and 15k vertices. The 3 lower resolution meshes are texture mapped using the capture platform software <cit.>, which is not designed to handle higher resolutions.
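For illustration only, the decimation step can be reproduced with standard geometry-processing tooling; the sketch below uses Open3D's quadric decimation as a stand-in for the capture platform software, and the file names are hypothetical. Quadric decimation takes a triangle budget, which we approximate as twice the desired vertex count.

import open3d as o3d

def decimate_to(mesh, target_vertices):
    # a closed triangle mesh has roughly twice as many triangles as vertices,
    # hence the factor of 2 when converting the vertex budget to a triangle budget
    return mesh.simplify_quadric_decimation(target_number_of_triangles=2 * target_vertices)

# mesh = o3d.io.read_triangle_mesh("frame_0001_full.ply")   # hypothetical input frame
# for n in (250_000, 65_000, 30_000, 15_000):
#     o3d.io.write_triangle_mesh(f"frame_0001_{n // 1000}k.ply", decimate_to(mesh, n))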
§.§ Identities
10 females and 10 males were recorded.
The participants were empirically chosen to cover the main variations of body shape according to eigenvectors computed on the CAESAR dataset <cit.>. The body shapes of all actors are shown in Fig. <ref>.
We release data of 9 female and 9 male actors, and keep the remaining data hidden to allow for future evaluations on unseen data.
§.§ Outfits
Motion recordings
Each actor was recorded wearing their own arbitrary clothes and 6 additional outfits, chosen to cover a wide range of typical casual European clothing. The outfits differ in terms of their fit, from tight to wide, and are made of various materials, which results in rich dynamic behaviour during motion. Outfits are different for males and females.
The following outfits, shown in Fig. <ref>, were used for women:
* own the actor's own clothes, unique to each actor, with the purpose to increase variability;
* tig socks, dotted white leggings, dotted salmon tank top, pink swimming cap (minimal clothing);
* sho white and pink sneakers, yellow shorts, purple T-shirt;
* jea yellow ballerinas, jeans, green and pink flowery shirt;
* cos yellow ballerinas, jeans, flowery purple dress;
* opt pink flip-flops or cream high heels, short grey dress or long loose blue dress or long tight red dress; as apparel for females offers more diversity than for males, we chose 3 optional outfits, using different materials and shapes to increase variability.
* hidden we recorded one additional outfit which will not be released to allow for future evaluations on unobserved data.
For men, the following outfits, shown in Fig. <ref>, were used:
* own the actor's own clothes;
* tig socks, beige shorts, grey tank top, blue swimming cap (minimal clothing);
* sho blue and white sneakers, beige shorts, orange T-shirt with picture;
* jea black moccasins, jeans, grey and white striped shirt;
* cos black moccasins, dark costume trousers, grey and white striped shirt, dark costume jacket;
* opt black moccasins, dark costume trousers, grey and white striped shirt, beige trench coat;
* hidden we recorded one additional outfit which will not be released to allow for future evaluations on unobserved data.
Each actor was recorded in 7 outfits, including all non-optional ones and one optional outfit.
Reference scans
In addition to clothed human motion data, we acquired scans of each outfit. They can serve as a reference solution for an outfit retrieval task. These models were acquired using two different systems as static scans of each outfit worn by a standard mannequin. The first scanning system is our multi-camera platform; it was used to scan the mannequins without clothing and to record 8 scans of each outfit to allow for some natural variability in the clothing folds. The second scanning system is an Artec Eva structured-light scanner, with a scan resolution of about 1500k vertices.
Fig. <ref> shows the female and male standard mannequins 65k reconstructions without clothing. Fig. <ref> shows the same mannequins with all outfits.
Fig. <ref> shows the reconstructed male mannequin mesh at resolutions 250k and 1500k.
§.§ Motions
Actors were asked to perform 11 motions involving significant displacements within the scene. We focus on motions with large displacements as these are still rare in existing captured 4D datasets. For instance, DYNA <cit.> captures soft-tissue dynamics during motions but without large displacements. We further choose the 11 motions to contain variations in upper and lower body motions, while covering common motions including different variants of walking. The motions, illustrated in Fig. <ref>, are:
* walk a simple walk across the studio;
* avoid a walk with last-second obstacle avoidance;
* back a walk with a U-turn;
* torso a walk with a torso rotation to look backwards;
* run a jog / run across the studio;
* jump jump on the spot;
* dance a dance with wide leg and arm motions;
* hop hopscotch;
* 2 free motions chosen by the actor to increase the variability of the dataset; these included mostly martial arts, dance, and other sports motions;
* hidden we recorded one additional motion which will not be released to allow for future evaluations on unobserved data.
The duration of the recorded sequences ranges from 0.8 seconds (for a free motion) to 17.2 seconds (for the dance motion).
§.§ Summary
A total of 1617 sequences were recorded, involving the processing of 459 080 frames. The computations were handled on 2 clusters (17 16-core servers equipped with Nvidia Quadro 4000 cards and 20 16-core Intel Xeon CPU servers) resulting in the generation of 540TB of total data during the project. Fig. <ref> provides an overview of the storage space required by the data during the generation of 4DHumanOutfit. The final 4D dataset consists of meshes in different resolutions (250k, 65k, 30k, 15k vertices), and undistorted and masked images and silhouettes. The total volume is 22TB, 20.5TB of which are occupied by the undistorted and masked images and silhouettes.
In terms of timing, the project stretched over slightly more than 6 months, for about 5200 hours, divided as shown in Fig. <ref>. A significant amount of time was dedicated to packing and compressing the data.
§ EVALUATION
The 4DHumanOutfit datacube makes it possible to learn correlations between identity, outfit, and motion. To demonstrate its potential, we perform a simple evaluation along each of the three factors independently, showing that each axis of the datacube provides unique information that can be exploited in practical applications.
§.§ Identity
The first evaluation aims to estimate the body shape of the identity performing a motion from an arbitrary sequence of the datacube. That is, given as input a 4D motion sequence showing a person (of arbitrary identity and outfit) in motion, the aim is to estimate the undressed body shape in T-pose. This problem has been studied previously in <cit.>, and is of interest for virtual change room applications.
Evaluation protocol
To evaluate the accuracy of the retrieved identity in T-pose, we compute for each identity a reference solution of the naked body shape. This is achieved by fitting a parametric human body model to each frame of the person captured in minimal clothing, and by combining the resulting information. That is, we use the minimal clothing regime, which is very close to the actor's skin, as proxy for true body shape.
We use a parametric human body model with two sets of parameters, one representing body shape and one representing static pose. Changing the static pose parameters to the ones representing a T-pose makes it possible to re-pose the body shape into the standard pose. In practice, we use the SMPL model <cit.>.
To reliably fit the body model to the sequences captured in minimal clothing, we use an existing framework <cit.> based on SMPLify <cit.>. For a given timestep, we compute 2D keypoints in all images with alphapose <cit.>, and optimize a SMPL body model with respect to said keypoints, with an additional loss to force the reprojection of the SMPL mesh to fit inside the silhouette on all images, and the pose and shape priors used in SMPLify.
This is done for all sequences captured in minimal clothing. We then select a fit that minimizes the Chamfer distance to the corresponding 3D reconstruction. The resulting body shape parameters are used to reconstruct a model in T-pose, which is used as reference body shape. For each identity, its reference body shape is released along with the dataset.
To quantitatively evaluate the quality of a body shape estimate computed from a dressed 4D motion sequence of identity i, we compute the Chamfer distance between the result and the reference body shape of i, in a standard T-pose.
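For illustration, a minimal Python sketch of this comparison is given below (not part of the released toolchain; variable names are ours, and the symmetric sum of the two nearest-neighbour averages is one common Chamfer convention, which may differ from the exact normalization used here).

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    # symmetric Chamfer distance between two (N, 3) and (M, 3) vertex sets
    dist_a_to_b, _ = cKDTree(points_b).query(points_a)  # nearest neighbour in B for each point of A
    dist_b_to_a, _ = cKDTree(points_a).query(points_b)  # nearest neighbour in A for each point of B
    return dist_a_to_b.mean() + dist_b_to_a.mean()

# estimated_shape and reference_shape would be (N, 3) vertex arrays of the SMPL fit
# re-posed to T-pose and of the released reference body shape, respectively:
# error = chamfer_distance(estimated_shape, reference_shape)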
Baseline
As a baseline method for estimating body-shape parameters, we use the same optimization method described above, based on keypoints and silhouettes, but applied directly to dressed sequences. The setting is more complex, as estimated keypoints are generally less precise on frames with loose clothing, and silhouettes are farther from the silhouette of the body shape. This baseline is computed per-frame.
Results
We give here numerical and qualitative results of the baseline. The optimization is sensitive to the input keypoints, so results tend to be noisy. When computing the reference shape, this problem is addressed by using the 3D reconstruction to select the best fit. However, the results of the baseline shape estimation are affected by this problem. It sometimes leads to overly small or elongated shapes, when the optimization does not converge to a good solution. In cases of convergence, estimated body shapes are often too big, as the loose clothing is larger than the body shape.
Fig. <ref> shows color-coded results for female opt and male cos outfits. In these examples, the body shape estimates computed by our baseline are close to the reference shape in terms of height and overall body shape. However, the volume of the body shape is overestimated in areas that are occluded by clothing. The reason is that the baseline fits the largest body model that can fit in the silhouettes, and does so on a per-frame basis.
Limitations
For privacy reasons, we use sequences in tight clothing to compute reference shapes. While this introduces small errors due to clothing folds and the thickness of the cloth, the resulting error is small compared to the magnitude of typical human motions.
We chose a simple baseline to illustrate the challenges of this task. It could be improved by taking information from the full sequence into account.
§.§ Outfit
The second evaluation aims to retrieve information related to outfit. Let J_ℳ^3d denote the pose of the mannequin shown in Fig. <ref>. Given as input a 4D motion sequence showing a dressed person in motion, the aim is to retrieve the outfit in pose J_ℳ^3d. This way of evaluating the outfit independently of identity and dynamics is novel to our knowledge. The related problem of inverse cloth design, where the goal is to retrieve a rest pose unaffected by physical forces for use in a physical simulator given the shape of a garment, has been studied <cit.>, but differs from our scenario as we are interested in retrieving the shape of the outfit in standard pose (as affected by physical forces).
Generating 3D garment deformations using physics-based simulators is challenging, because the generation of realistic detailed 3D garment models is typically done by trained artists and costly, and because physical simulators require input parameters that need to be tuned. 4DHumanOutfit contains accurate 3D garment deformations for a number of outfits, and has the potential to be used for data-driven garment synthesis without relying on physics-based simulators.
Evaluation protocol
To evaluate the accuracy of the retrieved outfit in pose J_ℳ^3d, we use the reference scans acquired for each outfit draped on the mannequin as pseudo ground truth. The retrieved outfit is compared to its corresponding reference scan by measuring the Chamfer distance between the two shapes. In particular, we report the Chamfer distance in mm between the retrieved garment mesh 𝒱_retr and the reference scan 𝒱_ℳ.
Baseline
We propose a simple baseline to solve this problem by framing it as a retrieval task. We first compute the 3D joints of each frame in a 4D motion sequence by fitting SMPL to the models as described in the previous section, and then retrieve the frame whose pose J_t^3d is closest to J_ℳ^3d. The pose distance is computed by considering a Procrustes-aligned distance as
argmin_{J_t^3d}(∑_i D_P( J_t^3d, J_ℳ^3d)),
where D_P is the distance in joint angle space. The 3D joints on the mannequin are obtained by manually annotating 3D points on the surface of the mannequin scan.
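A minimal Python sketch of this retrieval baseline follows. It is illustrative only: it Procrustes-aligns 3D joint positions with scipy rather than working in joint-angle space, and all names are ours.

import numpy as np
from scipy.spatial import procrustes

def retrieve_closest_frame(sequence_joints, mannequin_joints):
    # sequence_joints: (T, J, 3) per-frame 3D joints from the SMPL fits
    # mannequin_joints: (J, 3) joints annotated on the mannequin scan
    disparities = []
    for frame_joints in sequence_joints:
        # procrustes() removes translation, scale and rotation before comparing
        _, _, disparity = procrustes(mannequin_joints, frame_joints)
        disparities.append(disparity)
    return int(np.argmin(disparities))  # index of the retrieved frame

# The mesh of the retrieved frame is then compared to the reference outfit scan
# with the Chamfer distance, as in the identity evaluation above.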
Results
Results are analyzed for three different subjects wearing three different outfits. Fig. <ref> shows three viewpoints of the retrieved garment and the reference scan for each of the three subjects. Note that the simple baseline already retrieves poses with garments that have visually similar wrinkle patterns. Tab. <ref> reports the Procrustes-aligned distance from Eq. <ref>. The error ranges from 26mm to 64mm across different subjects. This error has high variance because the retrieved outfit is one frame of the input sequence. If the input 4D motion does not contain frames in a pose similar to J_ℳ^3d, the error is high. Furthermore, the reference scan does not contain head surface, and the mannequin's body shape may not be close to the body shape of the input sequences.
Limitations
We propose a novel outfit retrieval task that has potential applications in online garment retrieval. The task, along with the reference solutions that allow for quantitative evaluation, opens the way for further research in garment retrieval.
A major limitation of our protocol is that dynamic effects and body shape changes are currently not considered. That is, we frame the problem as a static one even though dynamic motion is present in the 4D motion sequence, and we ignore the influence of the wearer's body shape on the outfit geometry. The baseline we propose is simple and leaves significant room for improvement. However, it already demonstrates that outfit configurations visually similar to the reference scans can be found.
§.§ Motion
The third evaluation aims to examine the motion axis. Our evaluation considers the task of motion retargeting where the objective is to generate a 4D motion sequence of a given identity that performs the same motion as another given 4D motion sequence. In particular, given as input the 4D motion sequence showing identity i_1 in outfit o performing motion m along with a 3D model of identity i_2 wearing minimal clothing, the goal is to compute the 4D sequence showing identity i_2 while performing motion m.
A challenge when evaluating motion retargeting is the lack of realistic ground truth data. On the one hand, realistic 4D ground truth is lacking due to the sparse nature of existing large 4D human datasets <cit.>, where not all actors are seen performing all motions. This lack of data leads the state of the art to evaluate on synthetically generated 3D motions. These synthetic motions are often generated with skinning methods and lack realistic local dynamic details. On the other hand, smaller 4D datasets <cit.> with dynamic details do not account for clothing.
In the following, we show that 4DHumanOutfit can be leveraged to evaluate motion retargeting methods by providing captured reference solutions with accurate geometric details.
Evaluation protocol
To evaluate the accuracy of the retargeted motion, we compare the 4D motion resulting from the retargeting to the target motion of identity i_2 in minimal clothing performing motion m captured in 4DHumanOutfit.
In this scenario, the retargeted motion M_1 = {m_1,j}_j=1^n and the target motion M_2 = {m_2,j}_j=1^m are characterized by sequences of 3D point clouds. The point clouds are not in correspondence, so we use the Chamfer distance to compare them. To compare M_1 and M_2, the Chamfer distance relies on a nearest neighbor search per point, which is heavily influenced by small variations of the global trajectory and temporal unfolding of M_1 and M_2. We remove this influence by spatio-temporally aligning M_1 and M_2, as this is common when evaluating retargeting approaches <cit.>.
To align the global trajectories, we center the point clouds using their centroid c. To align the temporal unfolding of the motions, we use Dynamic Time Warping (DTW) <cit.>.
Given two sequences of point clouds, DTW computes the optimal monotonic path p^* between aligned frames as
p^* = argmin_p(∑_j D_Ch( m_1,j-c_1,j, m_2,p[j]-c_2,p[j])),
where D_Ch is the Chamfer distance. The proposed metric is then evaluated as the median error along this path as
med_j( D_Ch( m_1,j-c_1,j, m_2,p^*[j]-c_2,p^*[j]) ).
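One possible implementation of this protocol is sketched below in Python (illustrative only; the exact Chamfer normalization and DTW implementation used here may differ). It assumes the two sequences are given as lists of already centred (N_i, 3) point clouds.

import numpy as np
from scipy.spatial import cKDTree

def chamfer(a, b):
    # symmetric Chamfer distance between two centred point clouds
    da, _ = cKDTree(b).query(a)
    db, _ = cKDTree(a).query(b)
    return da.mean() + db.mean()

def dtw_median_error(seq1, seq2):
    n, m = len(seq1), len(seq2)
    cost = np.array([[chamfer(a, b) for b in seq2] for a in seq1])
    acc = np.full((n + 1, m + 1), np.inf)   # accumulated DTW cost
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    path, i, j = [], n, m                   # backtrack the optimal monotonic path
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(np.median([cost[i, j] for i, j in path]))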
Baseline
Motion retargeting has been approached from different angles, either using deformation models that directly operate on the body surface <cit.>, using structured latent representations of 4D motion <cit.> or using skeletal representations which are linked to the body surface using an animation model <cit.>.
Some of these works require correspondences between the target and source bodies or temporal correspondences in the source motion. Our source motion is unregistered and we can leverage the template fitting from Sec. <ref> to have access to a target identity in minimal clothing in T-pose. From the applicable methods <cit.>, we choose <cit.> as our baseline because it generalizes well under various motion and shape preservation metrics and was already tested on the raw data of a multi-view acquisition setup.
The baseline operates in three steps. First, the source skeleton is extracted using a PointFormer network. Second, this skeletal motion is retargeted to the target at the skeletal level using a recurrent network. Third, the dense geometry of the target shape is recovered using a learnt skinning prior.
Results
Tab. <ref> reports the metric of Eq. <ref> for 4 retargetings considering 2 identities, a female and a male, with 2 source motions and 2 source outfits. While the per-sequence median error is
more informative when comparing different methods, we also visualize in Fig. <ref> the spatio-temporal distribution of the error for two retargetings, to differentiate the error due to natural variability from the error introduced by the retargeting method.
Fig. <ref> shows the retargeting from male to female on a jumping motion and from female to male on a walking motion. The color coding shows that the method generates plausible poses overall with large error (yellow color) due to natural variability in the arm pose between the source and target jumping motions. It also highlights that the method could be improved in terms of head and arm pose transfer (red color) with the head facing down in the jumping retargeting and incorrect arm poses in some frames of the walking retargeting.
Limitations
It is known that the same performer executes a given type of motion slightly differently across trials <cit.>. This variability is not modeled by our baseline, which instead outputs a deterministic retargeting solution. To mitigate this limitation, our evaluation protocol normalizes the global trajectory and the temporal unfolding.
Second, existing retargeting baselines that operate on raw scan data are limited to outfits that are close to the body surface. Hence, we cannot leverage the more ample outfits present in 4DHumanOutfit.
§ CONCLUSION
We presented 4DHumanOutfit, a large-scale dataset of 20 actors wearing 7 outfits each and performing 11 motions per outfit. This data captures detailed spatio-temporal dynamics of varying outfits and their interaction with different morphologies and motions. We demonstrated with simple evaluations that each axis of the resulting datacube contains unique information. This data has the potential to serve many applications involving digital humans, including augmented or virtual reality (e.g. virtual change rooms) and entertainment (animation content generation).
§ ACKNOWLEDGMENTS
This work was supported by French government funding managed by the National Research Agency under grants ANR-21-ESRE-0030 (CONTINUUM), ANR-19-CE23-0013 (3DMOVE), and ANR-19-CE23-0020 (Human4D).
|
http://arxiv.org/abs/2306.04628v1
|
20230606132655
|
Systematic Analysis of Music Representations from BERT
|
[
"Sangjun Han",
"Hyeongrae Ihm",
"Woohyung Lim"
] |
cs.SD
|
[
"cs.SD",
"cs.MM",
"eess.AS"
] |
Systematic Analysis of Music Representations from BERT
Sangjun Han, Hyeongrae Ihm, Woohyung Lim
========================================================================================================================================================
There have been numerous attempts to represent raw data as numerical vectors that effectively capture semantic and contextual information.
However, in the field of symbolic music, previous works have attempted to validate their music embeddings by observing the performance improvement of various fine-tuning tasks.
In this work, we directly analyze embeddings from BERT and BERT with contrastive learning trained on bar-level MIDI, inspecting their musical information that can be obtained from MIDI events.
We observe that the embeddings exhibit distinct characteristics of information depending on the contrastive objectives and the choice of layers.
Our code is available at https://github.com/sjhan91/MusicBERT.
§ INTRODUCTION
Music consists of many repetitive components from motifs to phrases, and they have been conceptualized as forms of musical knowledge or atmosphere that humans are capable of understanding.
For instance, at the note level, playing multiple successive notes can convey harmonies and rhythmic dynamics over a short time.
At the bar level, a performance can be expressed as chords, with chords arranged in relationships across bars.
At the song level, several features can serve as an overview of the composition, including played instruments, tempo, and genre.
Our focus is to understand bar-level symbolic music since it provides versatile capabilities for music analysis such as estimating musical similarity, extracting chords, and comprehending the whole structure of music.
Triggered by the field of natural language processing, there have been numerous attempts to represent raw data as numerical vectors that effectively capture semantic and contextual information.
Based on the Transformer blocks <cit.>, text embeddings can be extracted from encoder-only designs that incorporate bidirectional context <cit.>, decoder-only designs that facilitate text sequence generation <cit.>, and encoder-decoder designs that combine both functionalities <cit.>.
In speech, the wav2vec series (wav2vec 2.0 <cit.>, HuBERT <cit.>, and vq-wav2vec <cit.>) has applied BERT <cit.> to speech representations with embedding discretization.
Also in computer vision, the Vision Transformer (ViT) <cit.> transforms input images into multiple grid patches and introduces the use of class token embeddings for the classification problem.
Since it was suggested in <cit.>, event-based representation of MIDI for machine learning has become widespread.
In this approach, each event token serves as an indicator of a specific musical action such as event changes of pitch, time shift, or velocity.
As in the other research domains above, tokenizing MIDI enables us to utilize Transformer models to capture contextual information effectively.
MIDIBERT-Piano <cit.> has adopted a super-token-level masking strategy for BERT pre-training and demonstrated promising fine-tuning performance across four tasks.
Similarly, MusicBERT <cit.> has proposed an efficient concept of super-token, employing a bar-level masking strategy for BERT.
MuseBERT <cit.> has also adopted the BERT model, but factorizes MIDI representations into attribute sets and an event relation matrix.
The aforementioned previous works have attempted to validate their music embeddings by observing the performance improvement of various fine-tuning tasks.
However, when dealing with music tasks at the bar-level (e.g. music information retrieval), it is more desirable to evaluate the embeddings based on their association with musical properties and semantics.
In this work, we directly analyze BERT embeddings trained on bar-level MIDI by inspecting their musical information that can be obtained from MIDI events.
Additionally, we compare BERT models that employ different contrastive objectives.
These models are trained with BERT loss and contrastive loss at the same time, enabling them to take into account longer contexts among bars.
This ensures that the models can be effectively adjusted to the user's intention or downstream tasks <cit.>.
Our results demonstrate that BERT embeddings can capture important musical features and semantic information in the bar-level MIDI.
Furthermore, we observe that the embeddings exhibit distinct characteristics of information depending on the contrastive objectives and the choice of layers.
§ METHOD
In this section, we introduce the process of data preparation, several design concepts for music embedding models, and the evaluation protocol.
Data Preparation
Among symbolic music datasets, Lakh MIDI Dataset (LMD) <cit.> is widely used since it comprises 176,581 MIDI files spanning diverse genres and tracks.
We convert each MIDI from LMD into REMI+ representation <cit.>, an extended version of REMI <cit.> that enables the expression of multiple tracks.
Our vocabulary contains 556 events across 8 categories: 1 <bar>, 32 <tempo>, 129 <instrument>, 128 <pitch>, 128 <pitch drum>, 48 <position>, 58 <duration>, and 32 <velocity>.
We adopt the same configuration for REMI+ as described in <cit.>.
In the end, we collect a total of 9,971,616 bars from the LMD.
Model Descriptions
Our embedding models follow the BERT-base configuration (number of layers = 12, hidden size = 768, number of self-attention heads = 12).
During the training process, we utilize masked language modeling loss (MLM loss) which involves masking a portion of input tokens and predicting those tokens.
At each iteration, 15% of input tokens are selected, among which 80% of the tokens are masked, 10% of the tokens are randomly replaced, and the remaining 10% of tokens remain unchanged.
We remove the next sentence prediction task from the original BERT.
We introduce three variant models derived from BERT; BERT-aug, BERT-neighbor, and BERT-dropout.
These models are trained simultaneously with the MLM loss and the contrastive loss (NT-Xent loss) formulated in SimCLR <cit.>.
For a minibatch N, the NT-Xent loss can be defined as
L_NT-Xent(z^', z^'') = -log ( exp(sim(z^',z^'') / τ) / ∑_{k=1}^{2N} 1_[z_k≠ z^'] exp(sim(z^',z_k) / τ) )
where z^' and z^'' represent a positive pair that is semantically identical, while z_k is sampled from a negative set.
Given a batch of N samples, we generate N new positive views using predefined functions (e.g. augmentation), resulting in 2N samples in a batch.
When considering a single positive pair, the remaining 2(N-1) samples are regarded as the negative set.
In this context, sim(·,·) denotes the cosine similarity, 1 the indicator function, and τ the temperature parameter.
Then, our total loss can be described as
L= L_MLM + α· L_NT-Xent
where α controls the degree of NT-Xent loss.
We set τ and α to 0.1 for all experiments.
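For concreteness, a minimal PyTorch sketch of the NT-Xent term is given below. It is our own illustration, not the authors' implementation, and it assumes z1 and z2 are the (N, D) bar embeddings of the two views (e.g. the masked input and its augmented, neighbour, or dropout view).

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, tau=0.1):
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / tau                                # cosine similarities scaled by temperature
    n = z1.size(0)
    # exclude self-similarity from the denominator, as the indicator in the equation does
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    # the i-th sample of z1 is positive with the i-th sample of z2 and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# total objective as above: loss = mlm_loss + alpha * nt_xent_loss(z_mask, z_view), with alpha = tau = 0.1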
Unlike previous studies <cit.>, since we train the two objectives (MLM loss and NT-Xent loss) concurrently, masked inputs (x^mask) are inevitably involved in the contrastive learning process, as shown in Figure 1.
In other words, x^mask can be regarded as one of the augmented views from the predefined functions.
It is analogous to Mask Contrast <cit.> for image representations in that both the masked view and augmented view are participating in the contrastive loss.
Below, we provide a comprehensive explanation of the design principles behind our variant models.
* BERT-aug: To generate the positive view of samples, we apply data augmentation to the original sample x by randomly shifting all pitches by {-6, -5, ... , 5, 6} and all velocities by {-3, -2, ... , 2, 3} in a sequence, resulting in x^aug.
This maintains the melodic contour without undermining musical semantics.
Similar strategies for data augmentation can be found in <cit.>.
* BERT-neighbor: This is motivated by NNCLR <cit.>, a contrastive model that regards the nearest neighbors of the augmented view as positives.
In our setting, a sample x^neigh is considered to be a neighbor of x if they belong to the same music.
* BERT-dropout: This is motivated by SimCSE <cit.> which adopts Dropout <cit.> as a stochastic augmentation.
The model receives the same x^mask twice in the forward pass and generates two different embeddings in a positive relation.
We place the Dropout mask on attention maps and feed-forward networks in Transformer blocks and set the masking rate to 0.1.
Table 1 compares the MLM accuracy and NT-Xent accuracy of each model for the training and validation set.
An exception is BERT-neighbor, which exhibits significant disparities between training and validation accuracy even after adjusting the values of τ and α.
We speculate that neighbors within the same piece of music do not extensively share musical information.
Evaluation Methods
We evaluate the bar-level BERT embeddings on their alignment with human-interpretable domain knowledge.
Following <cit.>, the metrics are: chords, groove patterns, instruments, tempo, mean velocity, mean duration, and song clustering.
The evaluation entails assessing the performance of linear probing tasks, including multi-class classification with a Ridge classifier, multi-label classification with a Ridge classifier, regression with a Ridge regressor, and clustering with K-means.
We provide a detailed explanation of each of these metrics and evaluation methods.
* Chords (C): As following <cit.>, we extract chords using an adapted version of the Viterbi algorithm.
They consist of 12 root notes and 7 qualities, resulting in a total of 84 possible chords.
Since multiple chords can be placed on a bar, we evaluate the performance of multi-label classification for the chords.
* Groove Patterns (GP): We label a position as 1 in a bar if any note is played and as 0 if no note is present.
We evaluate the performance of multi-label classification for the groove patterns.
* Instruments (I): We label an instrument as 1 if the instrument appears.
We evaluate the performance of multi-label classification for the instruments.
* Tempo (T): Tempos are quantized into 32 bins.
We evaluate the performance of multi-class classification for the tempos.
* Mean Velocity (MV): We compute the average of all velocity values within a bar.
We evaluate the performance of regression for the mean velocity.
* Mean Duration (MD): We compute the average of all duration values within a bar.
We evaluate the performance of regression for the mean duration.
* Song Clustering (SC): Using K-means, we compute the average entropy of the cluster assignments of the bars belonging to the same piece of music.
Low entropy means that bars from the same piece of music tend to be clustered into the same class.
This metric verifies how well the shared semantics permeating a piece of music can be extracted.
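As a concrete illustration of this probing protocol, the following scikit-learn sketch is one possible instantiation (our own; the placeholder random data, the one-vs-rest wrapper for multi-label probing, and the number of K-means clusters are assumptions, not the authors' choices).

import numpy as np
from sklearn.linear_model import Ridge, RidgeClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))                 # placeholder bar embeddings from a chosen layer
y_tempo = rng.integers(0, 32, size=1000)         # quantized tempo classes     -> multi-class probe
y_chords = rng.integers(0, 2, size=(1000, 84))   # binary chord indicators     -> multi-label probe
y_velocity = rng.normal(size=1000)               # mean velocity per bar       -> regression probe

tempo_probe = RidgeClassifier().fit(X, y_tempo)
chord_probe = OneVsRestClassifier(RidgeClassifier()).fit(X, y_chords)
velocity_probe = Ridge().fit(X, y_velocity)

# song clustering: average entropy of K-means cluster assignments within each song
labels = KMeans(n_clusters=50, random_state=0).fit_predict(X)

def song_entropy(cluster_labels):
    _, counts = np.unique(cluster_labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# e.g. score = np.mean([song_entropy(labels[bar_ids]) for bar_ids in bars_per_song])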
§ EXPERIMENTS
We empirically demonstrate quantitative evaluations of musical information from the BERT-variants models.
First, we inspect the information from the last layer of Transformer blocks and the changes in the amount of information across different layers.
The inspection of BERT embeddings from the last layer
Table 2 reports linear probing tasks for the embeddings of the last layer in terms of four models and seven metrics.
Remarkably, the original BERT exhibits the highest performance for chord classification.
It can be inferred that, during augmentation (or neighbor sampling), the other models alter pitch events within positive pairs, so they learn features that are invariant to pitch and thus less informative for chords.
Similarly, BERT-aug shows the lowest performance for the mean velocity since it specifically modifies the velocity values during the augmentation.
BERT-dropout controls for all factors, yet it shows decreased performance on all metrics compared to the best performances.
The advantage of contrastive learning lies in its ability to extract shared information in a positive view.
For BERT-aug, the mean duration is not a variable factor between bars in a positive relationship, which may improve its performance.
For BERT-neighbor, it tends to successfully classify bars into their respective songs in terms of song clustering.
The results of groove pattern, instrument, and tempo classification are inconsistent with subsequent results that analyze the performance across all BERT layers.
We supplement the explanation in the following section.
The inspection of BERT embeddings across all layers
Table 3 reports the best performance and its layer number for the linear probing tasks.
Except for BERT-neighbor, the models whose last-layer embeddings show a significantly lower association with tempo improve in performance depending on the choice of layer.
The information on groove patterns and instruments from all models is distributed across all layers.
Figure 2 illustrates the layer-wise performance of the BERT-variants model.
§ DISCUSSION
Several studies have analyzed BERT embeddings for various NLP tasks <cit.>.
They indicate that BERT does not follow the classical NLP pipeline (from simple to complex), or that it exhibits information distributed across its layers.
Nevertheless, our research provides consistent evidence for several factors and demonstrates the effectiveness of contrastive learning.
In particular, BERT-neighbor can be utilized to extract a musical theme effectively.
As mentioned in <cit.>, the integration of information from various layers will be important for enhancing the quality of information.
In this paper, we perform a systematic analysis of bar-level music embeddings from BERT and from BERT with contrastive learning models.
For seven metrics, our linear probing tasks can assess the amount of musical information, revealing the effectiveness of specific models for certain metrics.
The bar-level embedding models will contribute to various musical tasks, including chord extraction, music similarity analysis, and music structure understanding.
|
http://arxiv.org/abs/2306.04334v1
|
20230607110139
|
Echoes from Alexandria: A Large Resource for Multilingual Book Summarization
|
[
"Alessandro Scirè",
"Simone Conia",
"Simone Ciciliano",
"Roberto Navigli"
] |
cs.CL
|
[
"cs.CL"
] |
Echoes from Alexandria: A Large Resource for Multilingual Book Summarization
Alessandro Scirè, Simone Conia, Simone Ciciliano, Roberto Navigli
========================================================================================================================================================
In recent years, research in text summarization has mainly focused on the news domain, where texts are typically short and have strong layout features.
The task of full-book summarization presents additional challenges which are hard to tackle with current resources, due to their limited size and availability in English only.
To overcome these limitations, we present Echoes from Alexandria, or in shortened form, "Echoes", a large resource for multilingual book summarization.
Echoes features three novel datasets: i) Echo-Wiki, for multilingual book summarization, ii) Echo-XSum, for extremely-compressive multilingual book summarization, and iii) Echo-FairySum, for extractive book summarization.
To the best of our knowledge, Echoes – with its thousands of books and summaries – is the largest resource,
and the first to be multilingual, featuring 5 languages and 25 language pairs.
In addition to Echoes, we also introduce a new extractive-then-abstractive baseline, and, supported by our experimental results and manual analysis of the generated summaries, we argue that this baseline is more suitable for book summarization than purely-abstractive approaches.
We release our resource and software at <https://github.com/Babelscape/echoes-from-alexandria> in the hope of fostering innovative research in multilingual book summarization.
§ INTRODUCTION
Recent research in Automatic Text Summarization – the task of shortening a text while preserving its meaning – has mainly focused on news stories.
News texts are usually short documents; for example, 99.3% and 98.6% of the articles in XSum <cit.> and CNN/DailyMail <cit.>, respectively, are shorter than 2048 tokens.
Additionally, news stories are characterized by strong layout features, such as the “lead bias”, in which the first sentences usually contain the most relevant information for a summary.
Accordingly, the Lead-3 baseline, which uses the first three sentences of a news item as its summary, performs competitively on news summarization benchmarks <cit.>.
Although recent approaches have achieved high performance, it is still unclear how they behave on longer documents and whether they can generalize across domains and genres.
For this reason, the research community has been shifting toward more challenging settings, which include interviews <cit.> and scientific articles <cit.>.
One setting that has been attracting growing attention is full-book summarization <cit.>, i.e., the task of producing the plot of a book from its full text.
Summarizing a book is hard not only because of its average text length – currently not processable in a single forward pass even by architectures for long-form text processing <cit.> – but also due to other critical aspects, such as the presence of dialogues, rich discourse structures, parallel and non-linear lines of plot, and long-distance dependencies between entities, among others.
Therefore, we deem book summarization a complex testbed to challenge current approaches and investigate their capabilities and limitations.
Although the first small-scale datasets for the task were introduced several years ago <cit.>, the area has recently regained traction thanks to larger-scale resources, such as BookSum <cit.> and NarrativeQA <cit.>.
However, despite this recent progress, current resources for book summarization are still, i) limited in size, making them difficult to use for proper training and evaluation, and ii) monolingual (usually English-only).
To overcome these issues, we introduce Echoes from Alexandria (Echoes), the largest resource to date for book summarization and the first one providing books and summaries in multiple languages.
We use Echoes to investigate how current summarization approaches perform on a large-scale multilingual summarization dataset, concluding that current purely-abstractive approaches still struggle in our setting.
We additionally devise a new baseline, showing that the extractive-then-abstractive paradigm represents a promising direction for future research.
The main contributions of our work are the following:
* We introduce Echoes, the first multilingual resource for book summarization, with thousands of texts and plots in 5 languages, for a total of 25 language pairs. Echoes is also the largest resource among current English datasets for full-book summarization.
* We release the three datasets of Echoes: i) Echo-Wiki, for multilingual abstractive summarization, ii) Echo-XSum, for extremely-compressive multilingual book summarization, and iii) Echo-FairySum, an English dataset for evaluating extractive book summarization.
* We leverage BookSum and Echoes to evaluate state-of-the-art systems, both in zero-shot and fine-tuning settings, bringing to light their inadequate generalization capabilities in book summarization.
* Our experiments demonstrate that an extractive-then-abstractive baseline outperforms the purely-abstractive counterpart on our datasets while achieving state-of-the-art results on BookSum.
* We provide a comprehensive manual evaluation of the automatically generated summaries and release the dataset with our human judgments.
We hope our work will foster research in multilingual long document understanding and summarization.
We release Echoes and our software for research purposes at <https://github.com/Babelscape/echoes-from-alexandria>.
§ RELATED WORK
Resources for summarization.
Research efforts to create summarization resources have steadily increased in numbers over recent years.
For the news domain, XSum <cit.> and CNN/DailyMail <cit.> are the de-facto standard datasets for training and evaluating summarization systems.
XSum comprises 226k news articles accompanied by a one-sentence abstractive summary.
In CNN/DailyMail, the authors retrieved 93k articles from CNN[https://www.edition.cnn.com/] and 220k articles from DailyMail[https://www.dailymail.co.uk/] newspapers.
Both publishers supplement their articles with a list of bullet points containing the main information of the news text.
More recently, summarization resources have been shifting towards more challenging scenarios, i.e., where the documents of interest are longer and belong to different domains.
Notably, <cit.> released two large-scale datasets of long and structured scientific papers obtained from arXiv[https://arxiv.org/] and PubMed[https://pubmed.ncbi.nlm.nih.gov/].
In these datasets, paper abstracts are used as ground truth summaries.
Another relevant example is MediaSum <cit.>, a collection of interview transcriptions from National Public Radio (NPR)[https://www.npr.org/] and CNN, where overview and topic descriptions are employed as summaries.
In long-form text summarization research, a task that is attracting growing attention is book summarization.
Although this task was originally introduced several years ago by <cit.>, who released the first small-scale evaluation resource, book summarization regained traction thanks to a few notable endeavors.
The most important example is BookSum <cit.>, which provides a collection of resources for book summarization at three levels of granularity: paragraph, chapter, and full book.
Book texts are collected from Project Gutenberg, while summaries are obtained from the Web Archive.[https://web.archive.org/]
BookSum features 222 unique book titles with a total of 6,987 book chapters and 142,753 paragraphs.
Relatedly, NarrativeQA <cit.> is a collection of 1572 stories retrieved from Project Gutenberg (783 books and 789 movie scripts) associated with summaries from Wikipedia. The annotators were required to generate questions and answers based on the summaries.
Even if NarrativeQA is primarily intended for Question Answering, it can also be used for book summarization.
Due to their limited size, however, BookSum (in the full-book setting) and NarrativeQA can be more useful for evaluating models on the task rather than for training purposes. It is also worth noting that these resources are monolingual, i.e., English-only, limiting their usefulness for researchers seeking to evaluate multilingual summarization models.
Despite the great work carried out so far, we argue that there is still ample room to improve book summarization resources.
Approaches to book summarization.
<cit.> conducted experiments on full-book summarization using a generate&rank strategy.
This approach involves training a system to generate paragraph-level summaries, which are then sorted by perplexity and concatenated to form a full-book summary.
More recently, <cit.> proposed an approach where passages are recursively summarized and concatenated to form a full summary.
However, generated summaries are affected by the errors accumulated from previous stages <cit.>.
Recursively generating a summary is a paradigm that has also been used by other works for long-document summarization <cit.>.
Another family of approaches is that of extractive-then-abstractive approaches.
This family of approaches first extracts key sentences from the input document and then uses such sentences as input to an abstractive model, which is tasked with generating a summary that captures the main ideas and themes of the source.
While it was successfully employed in previous works for short <cit.> and long-form summarization <cit.>, this paradigm has never been explored for summarizing books.
In this paper, we aim to fill this gap by presenting a new, simple extractive-then-abstractive model and showing its effectiveness for book summarization.
§ ECHOES FROM ALEXANDRIA
Echoes is the first collection of resources for book summarization in 5 languages: English, French, German, Italian, and Spanish.
With Echoes, we introduce the following three novel datasets:
* Echo-Wiki, in which we pair book texts with plots retrieved from a hand-curated list of Wikipedia page sections.
* Echo-XSum, in which we pair book texts with extremely-compressive summaries, manually created starting from the lead section of Wikipedia pages.
* Echo-FairySum, an evaluation dataset for extractive summarization of short stories and fairy tales, composed of English manually-annotated extractive summaries.
We provide an overview of the main differences between Echoes and existing resources in Table <ref>.
§.§ Text collection
We collect the book texts that comprise Echoes from two main sources: Project Gutenberg and Wikisource.
Project Gutenberg is a digital library that provides free access to public-domain books and features over 60k texts.
We collect all the available books from Project Gutenberg by following their robot-access policies.[https://www.gutenberg.org/help/mirroring.html]
While often considered one of the most reliable sources of copyright-free books, Project Gutenberg provides only very limited coverage of non-English books and non-English translations of English books.
This is one of the reasons why we also rely on Wikisource.
Part of the Wikimedia Foundation, Wikisource contains a huge number of texts from a wide range of domains, e.g., books, and legal and historical documents, in various languages.
Therefore, for Echoes, we rely on Wikisource in English, French, German, Spanish, and Italian to retrieve other book texts and expand the coverage of books already available from Project Gutenberg.[Wikisource dumps are freely available to download at https://dumps.wikimedia.org/<l>wikisource/ where <l> ∈ { EN, FR, DE, ES, IT}. Last accessed: July 1, 2022.]
We call this set of full-text books B.
We note that Wikisource can also be used to expand Echoes to other languages.
Given the limited amount of work in multilingual summarization, we focus on the five high-resource languages above.
We defer the expansion of Echoes to future work.
While Project Gutenberg has already been used as a source of books in previous resources, such as BookSum and NarrativeQA, the use of Wikisource is what enables Echoes to become the largest resource for book summarization in English and the first resource for multilingual book summarization.
§.§ Pairing books with Wikipedia summaries
Book summaries from Wikipedia follow a standard set of guidelines[<https://en.wikipedia.org/wiki/Wikipedia:How_to_write_a_plot_summary>] and are often of remarkable quality, as they are continuously refined over time by the Wikipedia community.
Therefore, once we have collected our set of full-book texts (see Section <ref>), we iterate over the Wikipedia dumps[Wikipedia dumps are freely available to download at https://dumps.wikimedia.org/<l>wiki/ where <l> ∈ { EN, FR, DE, ES, IT}. Last accessed: July 1, 2022.] in English, French, German, Italian, and Spanish.
Given our set B of full-book texts, and W, the set of Wikipedia pages, our objective is to uniquely associate a book b ∈ B to a page w ∈ W, such that w is the Wikipedia page of book b.
We obtain a set of potential matches by finding Wikipedia pages whose contents contain a hyperlink to a book in B.
To improve the accuracy of our mapping, we first apply a string distance metric[We used the Edit distance to retain only those pairs whose titles were highly similar, by setting a stringent threshold (0.2).] to compare the titles of the books and their associated Wikipedia pages. We then check if the lead section of the Wikipedia page in question mentions the surname of the author of the associated book. This additional step helps us further refine and ensure the validity of our associations.
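A hedged Python sketch of this filtering step is given below; the exact normalization and thresholding used by the authors may differ, and all names are ours.

def levenshtein(a, b):
    # classic dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def is_valid_match(book_title, page_title, author_surname, lead_section, threshold=0.2):
    # keep a (book, Wikipedia page) pair only if the normalized edit distance between
    # the titles is below the threshold and the author's surname appears in the lead section
    a, b = book_title.lower().strip(), page_title.lower().strip()
    norm_dist = levenshtein(a, b) / max(len(a), len(b), 1)
    return norm_dist <= threshold and author_surname.lower() in lead_section.lower()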
After our matching process, we manually inspect the cases in which books are associated with multiple Wikipedia pages. We discover that the pages in excess refer to adaptations of the book in other mediums, such as movies and theatrical plays. To resolve this ambiguity, we utilize the mapping between Wikipedia pages and Wikidata nodes to obtain metadata about the medium, e.g., book, movie, play, and retain only the Wikipedia page that corresponds to the book.
At this point, given the Wikipedia page content, our goal is to extract only the book summary and discard other information, such as the biography of the author, historical background, prizes and accolades, and critical reception, among others.
To achieve this, we employ native speakers to manually identify a list of section names that, in the different languages, only contain plot information, aiming for high precision rather than coverage. We use the content of these identified sections as summaries and provide our list of section names in Appendix <ref> for reference.
We name the resulting set of (Wikipedia summary, full-text book) pairs Echo-Wiki.
We note that the Wikipedia pages we select for the Echo-Wiki dataset have a large average number of unique editors (220.6) and revisions (421.4), and an average creation year of 2008: this indicates that their book summaries have been curated over time and suggests that they are of high quality. Table <ref> shows how Echo-Wiki compares against BookSum, to the best of our knowledge the previous largest existing dataset for book summarization.
Besides being multilingual, Echo-Wiki is about 12 times larger than BookSum (5,001 vs. 405 books) while still featuring a similar compression ratio (103.7 vs. 126.2).
§.§ Enabling extreme summarization of books
Inspired by the work of <cit.> on the news domain with XSum, which showcases the capabilities of highly-abstractive summarization, we introduce Echo-XSum, a new dataset for training and evaluating systems for extreme summarization of books.
In Echo-XSum, we pair full-text books with very short summaries.
These summaries contain the minimum number of sentences required to provide an overview of the main contents of a book, typically one to three sentences.
The main challenge posed by Echo-XSum is dealing with the great disparity between the size of the input and the size of the output.
Indeed, as we can observe in Table <ref>, the compression ratio of Echo-XSum (1624.0) is unprecedented in the field of summarization, being an order of magnitude greater than those of Echo-Wiki (103.7) and BookSum (126.2).
The extreme summaries in Echo-XSum are the result of a manual annotation process, which involved an expert linguist who is a fluent speaker of all 5 languages of Echoes. The annotator was explicitly contracted for this task. Given a book and its previously-identified Wikipedia page (see Section <ref>), the annotator was tasked with extracting portions of text from the introduction that describe the essential plot of the book.
An excerpt of a book text with the corresponding multilingual summaries from Echo-XSum can be found in Appendix <ref>.
Notice that the portions of text extracted by the annotator are not necessarily contiguous, as long as the extracted text can be read independently of its original text.
As a rule of thumb for the annotation process, the linguist followed the definitions of Consistency, Relevance, Fluency, and Coherence of a summary <cit.>. The annotator spent an average of 5 minutes per sample.
We provide an example of the annotations produced in Appendix <ref>.
At the end of the manual creation of our extreme summaries, the resulting Echo-XSum is still about 8 times larger than BookSum (3,383 vs. 405 books).[Echo-XSum includes fewer book/summary pairs than Echo-Wiki because the annotator was not able to find an extreme summary in the Wikipedia pages of some books.]
§.§ Classifying books into genres
Unlike existing resources, such as BookSum, which is limited by its relatively small size, the thousands of books in Echoes give us the opportunity to investigate book summarization more in depth.
Indeed, the books in Echoes cover a wide range of genres, including novels, theatrical plays, and poems, among others.
We argue that developing a strategy to automatically identify book genres provides valuable insights into the dataset and enables a fine-grained evaluation of current and future summarization approaches.
An analysis by genre can help us determine which genres are the most challenging to summarize.
Similarly to what was described in Section <ref>, we rely on a graph-based heuristic on the knowledge graph of Wikidata to identify genres.
More specifically, given a Wikipedia article of a book, we retrieve its corresponding Wikidata node, and analyze its relations (e.g., genre and form_of_creative_work) with its neighboring nodes.
This process is able to distinguish between 7 main genres: novels, plays, poems, epic poems, short stories, fairy tales, and essays.
Note that our heuristic may assign more than one genre to a single book.
Figure <ref> illustrates the distribution of genres in the English partition of Echoes, showing that novels are the most represented genre, followed by short stories and plays.
§.§ Digging up extractive summarization
Over the past few years, the attention of the research community has gradually shifted from extractive to abstractive summarization, especially thanks to the advent of flexible sequence-to-sequence models, which have proven effective for summarizing short documents.
Thanks to genre classification (see Section <ref>), we are able to perform a small-scale investigation of extractive book summarization on two genres in Echoes.
More specifically, we construct Echo-FairySum, the first evaluation dataset for extractive summarization of fairy tales and short stories.
To create the extractive summaries of Echo-FairySum, we set up the following manual annotation process: given the text of a book and its abstractive summary from Wikipedia (Section <ref>), annotators are required to extract relevant sentences from the book text.
A sentence is relevant if it provides a piece of information that is also contained in the abstractive summary.
The annotators were asked to adhere as closely as possible to the concepts of Consistency, Relevance, and Coherence defined by <cit.>.
The annotators were drawn from a pool of fifty-eight Master-level students from the `Narrative Understanding and Storytelling' minicourse held at the Sapienza University of Rome by the last co-author, as part of the AI and Robotics degree.
The selected students carried out the task as part of their course assignments.
On average, each student annotated 3 texts, resulting in multiple annotations for each text. The annotation agreement was measured using Cohen's Kappa coefficient, which indicated substantial agreement (0.71).
A subset of annotations was further validated by our contracted annotator to ensure that the students were adhering to the guidelines.
Overall, provides extractive summaries for 197 documents,
about 4 times the size of the test set of BookSum.
§.§ Aggregating books across versions and languages
A book can be published in various editions after its original publication.
Perhaps most importantly, the same version of a book can also be translated into multiple languages.
Given the potentially large variety of versions and translations of a book, we argue that it is important to aggregate those versions.
Indeed, aggregating books across versions and translations can allow to also be employed for machine translation, cross-lingual sentence alignment, and cross-lingual summarization.
To achieve this objective, we leverage two characteristics of Wikipedia.
First, we aggregate all those book texts aligned to the same Wikipedia page (see Section <ref>).
We increase the accuracy of this step by taking into account the information found on some Wikisource pages, which list the editions available for some books.
Second, we navigate the Wikipedia interlanguage links, which connect pages that refer to the same concept/entity in different languages, to aggregate different translations and summaries (in different languages) of the same book.
Figure <ref> presents the number of book-summary and the version-summary pairs for all the language pairs in obtained after our aggregation process.
§ EXPERIMENTS AND RESULTS
In recent years, two promising paradigms have emerged from previous work on long-document summarization: recursive-abstractive and extractive-then-abstractive.
In this section, we evaluate and analyze their effectiveness on .
§.§ Recursive-abstractive approaches
Recursive-abstractive approaches consist in dividing the source document into smaller segments, referred to as chunks, and then using an abstractive summarization model to summarize each segment.
If the concatenated output summaries are still larger than a single chunk, the recursive-abstractive approach repeats the process by treating the concatenation as a new source document and summarizing it in the same way. The recursive process continues until the concatenated output summaries are short enough to be considered as the final summary, i.e., until their size is shorter than the maximum size of a single chunk.
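For concreteness, the sketch below implements this recursive loop with a HuggingFace summarization pipeline as the chunk-level summarizer; the checkpoint name is a stand-in for the XSum/MediaSum-trained models described below, and the character-based chunking is a simplification of token-limit-aware splitting.

```python
from transformers import pipeline

# Stand-in chunk-level summarizer; any seq2seq summarization model can be plugged in.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

CHUNK_CHARS = 3000  # rough character proxy for the model's input limit


def split_into_chunks(text: str, size: int = CHUNK_CHARS) -> list:
    return [text[i:i + size] for i in range(0, len(text), size)]


def recursive_summarize(text: str) -> str:
    """Summarize each chunk, concatenate, and recurse until the result fits in one chunk."""
    while len(text) > CHUNK_CHARS:
        chunks = split_into_chunks(text)
        partials = [summarizer(c, truncation=True)[0]["summary_text"] for c in chunks]
        text = " ".join(partials)  # the concatenation becomes the new source document
    return summarizer(text, truncation=True)[0]["summary_text"]
```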
Experimental setting.
In its simplest form, a recursive-abstractive approach requires a model trained on a standard summarization dataset; this model is then employed recursively, as described above.
For our experiments, we consider three sequence-to-sequence Transformer-based models – BART-large <cit.>, LED-base <cit.>, and LongT5-base <cit.> – and train them on XSum (short documents, news) and MediaSum (long documents, interviews).
Then, we evaluate our trained models on the test set of ,[We split and into train/dev/test sets using the standard 80/10/10 split.] whose summaries feature an average length similar to that of the summaries in XSum and MediaSum but belong to a different genre (books).
For the evaluation, we adopt standard summarization metrics, such as ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore <cit.>.
Results.
Table <ref> (top) provides an overview of the results obtained by our recursive-abstractive baseline using different language models and trained on different summarization datasets.
Overall, we can observe that, independently of the language model and training dataset employed, the baseline does not achieve good results on .
Indeed, the best configuration (LED_XSum) obtains only 14.83 points in ROUGE-L on .
By comparison, the same configuration achieves 30.24 points on XSum.
Therefore, i) is empirically more challenging than XSum, ii) a simple recursive-abstractive approach is not sufficient to obtain acceptable results on , and, iii) different pretrained language models and different summarization datasets (from different genres/domains) do not significantly affect the results of a recursive-abstractive approach on our book summarization dataset.
§.§ Extractive-then-abstractive approaches
Since recursive-abstractive approaches yield unsatisfying results on (see Table <ref>), we propose a simple, novel baseline based on the extractive-then-abstractive paradigm.
Our model is composed of two submodules: the extractor extracts key sentences from the input text, while the abstractor uses the concatenation of these key sentences to generate an abstractive plot of the book.
Given an input text T = (s_1, s_2, …, s_|T|) where each s_i is a sentence, the extractor produces a score in [0.0, 1.0] for each s_i, quantifying its degree of importance for the target summary.
More formally:
𝐞_i^s = SentenceEncoder(s_i)
Score(s_i) = σ(W𝐞^s_i + 𝐛)
where 𝐞^s_i is the sentence representation of s_i from a SentenceEncoder.[We adopt a SentenceTransformer based on Distil-RoBERTa from https://www.sbert.net/https://www.sbert.net/.]
Then, the abstractor takes the subset T^* composed of the k sentences with higher scores according to the extractor, and uses T^* to generate the final summary.
To make the abstractor aware of the relative importance of each sentence, we multiply the embedding of each token by the score of its sentence, as follows:
𝐞^t_i,j = Score(s_i) · Embedding(t_i,j)
where 𝐞^t_i,j is the encoding of the j-th token of the i-th sentence, for each sentence in T^*.
The model is trained in an end-to-end fashion, i.e., the extractor and abstractor are trained jointly, by minimizing the cross-entropy loss between the reference summary and the generated summary.
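A minimal PyTorch sketch of the extractor head and the score-weighted token embeddings is shown below. The sentence encoder, the top-k selection, and the abstractor (and therefore the end-to-end cross-entropy training) are omitted, and all names are illustrative rather than taken from our implementation.

```python
import torch
import torch.nn as nn

class Extractor(nn.Module):
    """Scores each sentence embedding in [0, 1], mirroring Score(s_i) = sigma(W e^s_i + b)."""
    def __init__(self, enc_dim: int):
        super().__init__()
        self.proj = nn.Linear(enc_dim, 1)

    def forward(self, sent_embs: torch.Tensor) -> torch.Tensor:   # (num_sents, enc_dim)
        return torch.sigmoid(self.proj(sent_embs)).squeeze(-1)    # (num_sents,)


def weight_token_embeddings(token_embs: torch.Tensor, scores: torch.Tensor,
                            sent_ids: torch.Tensor) -> torch.Tensor:
    """e^t_{i,j} = Score(s_i) * Embedding(t_{i,j}): every token embedding is scaled by
    the score of the sentence it belongs to."""
    # token_embs: (num_tokens, d); sent_ids: (num_tokens,) sentence index of each token
    return token_embs * scores[sent_ids].unsqueeze(-1)
```

The k highest-scoring sentences (the subset T^* above) can be selected with torch.topk(scores, k).indices before their reweighted token embeddings are passed to a sequence-to-sequence abstractor such as BART.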
Experimental setting.
We follow the experimental setting we used for our recursive-abstractive approach.
We train and evaluate 3 models – BART-large, LED-base, and LongT5-base – on .
Since pretraining on XSum results in slightly improved performance for the recursive-abstractive approach, we also evaluate how pretraining on XSum affects the performance of our extractive-then-abstractive approach.
Finally, we also train and evaluate our approach on and on BookSum (the latter to directly compare performance with the current state of the art).
Results.
Table <ref> (bottom) provides an overview of the results obtained by our extractive-then-abstractive approach on .
We can immediately notice that each configuration significantly outperforms the recursive-abstractive baselines by a large margin.
For example, the best extractive-then-abstractive model (BART_XSum) improves over the best recursive-abstractive model (LED_XSum) by 11.90 points in ROUGE-L (26.73 vs. 14.83), and this is true for all the metrics we consider (ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore).
It is interesting to note that, while there is little difference in the results on of different model configurations, there is a significant difference between BART, LED, and LongT5 when evaluated on , as shown in Table <ref>.
We hypothesize that such a variance in performance is due to several factors, but the inadequacy of current non-semantic metrics plays a large role, as supported by our human evaluation (see Section <ref>).
Finally, we further assess the effectiveness of our extractive-then-abstractive approach on the standard test set of BookSum (Table <ref>).
In particular, our approach outperforms the system of <cit.> using 33% of its parameters, and is competitive with the system of <cit.> using only 0.1% of its parameters.
§ ANALYSIS AND DISCUSSION
Human evaluation.
Following common practice in the field of summarization, we set up a human evaluation process to assess the quality of the system-generated summaries.
The annotation task, performed by an expert English speaker, consists of reading the source text and rating the summaries using a Likert scale for Consistency, Relevance, Fluency, and Coherence, as outlined in <cit.>.
To make this experiment feasible in terms of time and resources, we focus our evaluation on fairy tales and short stories, which can be read by a human in a short time.
Interestingly, but not surprisingly <cit.>, the results of our human evaluation experiment tell a story that is different from ROUGE, as shown in Tables <ref> and <ref>.
However, the evaluation still highlights the effectiveness of our extractive-then-abstractive model compared to the recursive-abstractive baseline.
It is clear that future work should focus in particular on improving the Consistency and Relevance of the generated summaries.
Challenges.
opens the door to several other analyses and experiments that were not possible with previous datasets.
For example, we can leverage to perform an analysis of the behavior of the extractor submodule of our extractive-then-abstractive approach, as we show in Appendix <ref>.
In Section <ref>, we examined the different book genres in ; LongT5 model performances are detailed for each genre in
Figure <ref>. We notice that epic poems are the hardest to summarize in this setting, while our model performs reasonably well on fairy tales.
Cross-lingual book summarization.
Additionally, can be employed as a multilingual and cross-lingual summarization benchmark, thanks to its coverage of 5 languages and 25 language pairs.
In particular, we argue that cross-lingual book summarization is a very interesting challenge, as it requires a model to compress vast amounts of information while transferring knowledge across languages.
Moreover, enabling cross-lingual book summarization is fundamental for all those cases in which we do not have the source text available in the language of interest, i.e., its translation may still be under copyright or may not exist at all.
To move the first step in this direction, we propose a summarize-then-translate approach, a simple baseline for cross-lingual book summarization on Echo-XSum.
As the name implies, our approach works by employing a monolingual model to produce a summary in the same language as the source text, and then it translates the summary from the source language to the desired target language.
We report the results of this baseline in Table <ref>.
While this is a strong baseline, it is still affected by two main issues: i) it requires two systems, a summarizer and a translator; and ii) machine translation usually fails on language-specific items, e.g., character names may not be translated consistently.
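A minimal sketch of the summarize-then-translate baseline using off-the-shelf HuggingFace pipelines is given below; the specific summarization and translation checkpoints are stand-ins, not the models used in our experiments.

```python
from transformers import pipeline

# Stand-in checkpoints: any monolingual summarizer and any MT model can be plugged in.
summarize = pipeline("summarization", model="facebook/bart-large-cnn")
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")

def summarize_then_translate(book_text: str) -> str:
    # 1) summarize in the source language, 2) translate only the (short) summary.
    summary = summarize(book_text, truncation=True)[0]["summary_text"]
    return translate(summary)[0]["translation_text"]
```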
§ CONCLUSION
In this paper, we introduced , the first multilingual resource for book summarization and the largest among the English datasets.
features three novel datasets, namely, , , and , which address several limitations of existing book summarization resources, such as BookSum.
Indeed, previous datasets for full-text book summarization are, i) limited in size, and, ii) monolingual, i.e., usually covering English only.
In addition, we leveraged to bring to light the unsatisfying capabilities of current approaches to generalize to book summarization.
Finally, to mitigate this issue, we proposed a new extractive-then-abstractive baseline for book summarization, which outperforms its purely-abstractive counterpart on and , achieving results on the standard BookSum test set that are comparable with the current state of the art while using a number of parameters that is only 0.1% compared to the best-performing method.
We believe that will foster future work on long-document summarization, especially in the multilingual and cross-lingual setting.
§ LIMITATIONS
Despite the multilinguality of our resource, there is still a strong bias towards the English language, as the majority of books are in English and many translations are from English. As a result, the values reflected are largely those of English literature, and these may differ from those of other cultures; summaries of literature from other cultures and regions may therefore be less accurate, as every region has had its own historical development.
The language models used in our experiments can inherit biases from their training data and from the tools used, such as those employed for preprocessing, and they have limitations that have not been fully evaluated and that could impact the results of this study.
This study includes the use of Web data, which – while marked as public domain – may be subject to copyright laws. The data used in this study was collected for research purposes and was not intended for any other use.
Additionally, it is worth noting that the majority of books used in our resource are copyright-free, and therefore, old. While this allowed us to include a large number of texts in our dataset, it also means that our resource may not fully capture contemporary literature and may not be representative of current linguistic trends and cultural values.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research.
The last author gratefully acknowledges the support of the PNRR MUR project PE0000013-FAIR. This work was carried out while Alessandro Scirè was enrolled in the Italian National Doctorate on Artificial Intelligence run by Sapienza University of Rome.
We would like to express our gratitude to Luigi Procopio and Edoardo Barba for their valuable insights on extractive-then-abstractive architectures, as well as to Fabrizio Brignone (Babelscape) for his exceptional support with the adaptation and use of Babelscape's keyword and phrase annotation interface.
§ WIKIPEDIA SUMMARY SECTIONS
In Table <ref> we provide the list of Wikipedia section titles whose contents are used as summaries in .
§ EXAMPLE
In Figure <ref> we report an excerpt of the book text of the English version of "The Metamorphosis" by Franz Kafka, along with the multilingual extreme summaries from .
§ ANNOTATION TASK
In Figure <ref> we provide an example of a manually-annotated summary in . The annotator was tasked to highlight portions of text containing information related to the plot from the Wikipedia introduction.
§ EXTRACTOR ANALYSIS
We analyze the positions of the sentences selected by the extractor. This analysis allows us to investigate the presence of any positional bias, e.g., the lead bias, which is known to affect systems trained on news stories. Figure <ref> depicts the distribution of the relative positions of the extracted sentences on texts from , i.e., fairy tales and short stories. We observe no evidence of such a positional bias in the extractions.
Thanks to extractive annotations, we are also able to evaluate the performance of the extractor component of the extractive-then-abstractive approaches.
We aggregate multiple extractive annotations in by retaining the intersecting sentences; we refer to these sentences as the gold sentences. We measure the Extractor performance by computing the overlap between the sentences extracted by the model and the gold ones. We compute the Precision@K by comparing the topK-ranked sentences with the references. We report the Extractor performance in Table <ref>. We observe relatively low scores, meaning that the extractor is only partially able to discriminate relevant sentences from irrelevant ones. This aspect confirms that there is still large room for improving the Extractor and, consequently, the relevance of the summaries.
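For reference, a minimal sketch of the two steps just described (gold-sentence aggregation by intersection and Precision@K) could look as follows; sentence identifiers are assumed to be comparable indices into the book text.

```python
def gold_sentences(annotations):
    """Aggregate multiple extractive annotations by keeping the intersecting sentences."""
    sets = [set(a) for a in annotations]
    return set.intersection(*sets) if sets else set()


def precision_at_k(ranked_sentence_ids, gold_ids, k):
    """Fraction of the top-k extractor sentences that are also gold sentences."""
    top_k = ranked_sentence_ids[:k]
    return sum(1 for s in top_k if s in set(gold_ids)) / k
```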
|
http://arxiv.org/abs/2306.07983v1
|
20230602145232
|
A Hybrid Approach for Smart Alert Generation
|
[
"Yao Zhao",
"Sophine Zhang",
"Zhiyuan Yao"
] |
cs.NI
|
[
"cs.NI",
"cs.AI"
] |
A Hybrid Approach for Smart Alert Generation
Yao Zhao
Cisco Meraki
San Francisco, USA
[email protected]
Sophine Zhang
Cisco Meraki
San Francisco, USA
[email protected]
Zhiyuan Yao
Cisco Meraki
Paris, France
[email protected]
July 31, 2023
=======================================================================================================================================================================================================
Proc. of the International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME 2023)
19-20 July 2023, Tenerife, Canary Islands, Spain
Anomaly detection is an important task in network management.
However, deploying intelligent alert systems in real-world large-scale networking systems is challenging when we take into account (i) scalability, (ii) data heterogeneity, and (iii) generalizability and maintainability.
In this paper, we propose a hybrid model for an alert system that combines statistical models with a whitelist mechanism to tackle these challenges and reduce false positive alerts.
The statistical models take advantage of a large database to detect anomalies in time-series data, while the whitelist filters out persistently alerted nodes to further reduce false positives. Our model is validated using qualitative data from customer support cases. Future work includes more feature engineering and input data, as well as including human feedback in the model development process.
Anomaly Detection, Statistical Model, Alert System
§ INTRODUCTION
Network alert and anomaly detection systems are essential for predicting and preventing potential issues in networking systems <cit.>.
To ensure seamless operations, large organizations have developed their own anomaly detection services to monitor their products and services, which aim to detect anomalies and raise alerts for timely decision-making related to incidents <cit.>. For example, Yahoo has developed EGADS <cit.>, which automatically monitors and generates alerts for millions of time-series data related to different Yahoo properties. Microsoft also utilizes an anomaly detection service to monitor millions of metrics from Bing, Office, and Azure, which has enabled engineers to quickly address live site issues <cit.>.
Providing too many alerts can be overwhelming for customers and negatively impact the quality of service, while missing critical events can result in delayed reaction to incidents <cit.>.
Yet, it is challenging to develop an intelligent and effective alert system that can accurately distinguish exceptional events with the potential to lead to networking issues, especially in large-scale systems with a high volume of events.
Challenge 1: Scalability. With the huge amount of data generated by networking systems, it is critical to use efficient algorithms to process this data.
In addition, the system must be able to handle the large number of devices and customers that it serves.
Deep-learning based approaches demonstrate promising results, yet they incur significant overhead when deployed for large scale systems <cit.>.
Our proposed approach addresses this challenge by using a hybrid model that combines both a statistical model and a rule-based whitelist mechanism.
Challenge 2: Data Heterogeneity.
While efficient mechanisms have been proposed to extract observations from the data plane and help analyze system states <cit.>, it is intrinsically hard to determine if a networking issue has actually occurred, as there may be multiple factors that contribute to an event <cit.>.
To address this challenge, we incorporate qualitative data, i.e. customer support cases, to provide a more comprehensive understanding of the data.
This allows us to better identify exceptional events and improve the accuracy of the system.
Challenge 3: Generalizability and Maintainability.
As the system is deployed to millions of networking devices and millions of customers for a variety of alerts, it is essential to take into account multiple objectives, such as alert accuracy, reliability, and most importantly, generalizability and maintainability.
Both the statistical model and the additional whitelist mechanism in our proposed approach can be generalized and applied for a wide range of alert generations.
Our approach also allows incorporating human input to improve the reliability of the data-driven system and to enable iterative model development.
§.§ Related Works
Various anomaly detection algorithms have been proposed in the literature, including supervised learning, unsupervised learning, and statistical approaches.
To improve the accuracy of anomaly detection, supervised models have been investigated. For instance, EGADS <cit.> used a collection of anomaly detection and forecasting models along with an anomaly filtering layer to enable scalable anomaly detection on time-series data. Opprentice <cit.> achieved superior performance compared to traditional detectors by utilizing statistical detectors as feature extractors and detecting outliers with a Random Forest classifier.
However, supervised approaches are insufficient in online applications since continuous labels cannot be obtained in industrial environments. The networking domain poses additional challenges due to the absence of ground-truth labels and the variability of the data: supervised learning methods require labeled data, which is difficult to obtain proactively until customers report connectivity issues.
To tackle these problems in industrial applications, unsupervised approaches have been studied. DONUT <cit.> is an unsupervised anomaly detection method based on Variational Auto-Encoder (VAE), which models the reconstruction probabilities of normal time-series. Abnormal data points are reported if the reconstruction error was larger than a threshold.
Luminol <cit.> computes anomaly scores by segmenting time-series into chunks and evaluating the frequency of similar chunks.
However, unsupervised learning algorithms may lack interpretability, making it difficult for engineers to understand the results and take appropriate actions.
As a result, our model is based on statistical algorithms that are simple and efficient, especially for large-scale network databases.
A variety of statistical models have been investigated, such as hypothesis testing <cit.>, wavelet <cit.>, singular value decomposition (SVD) <cit.>, auto-regressive integrated moving average (ARIMA) <cit.>, and Fast Fourier Transform (FFT) <cit.>. For instance, FFT helps identify time-series segments with high frequency changes, which can be verified with the Z-value test.
These algorithms provide interpretable results and insights into the system characteristics, which is crucial for early-stage development when engineers need to understand the context.
§.§ Statement of Purpose
In this paper, we propose a hybrid approach for building a smart alert system that balances multiple objectives, including alert accuracy, alert ratio, alert reliability, and model maintainability. Our approach takes multiple telemetries into account and incorporates both quantitative and qualitative data feedback from humans. We present a novel statistical model that integrates multiple data sources to generate alerts for exceptional events. We also introduce a framework for developing and deploying such systems that takes into account trade-offs among multiple objectives. The proposed framework enables us to operate the model at scale and upgrade it with more user feedback.
The main contribution of this paper is the proposed hybrid approach that leverages both quantitative and qualitative data for building a smart alert system that accurately identifies exceptional events in networking systems. Our approach not only addresses the challenge of building an accurate alert system, but also provides a framework for balancing multiple objectives when developing and deploying such systems.
We believe that our proposed approach can have significant implications for the networking industry, and we present our experimental results to demonstrate the effectiveness of our approach.
§ OVERVIEW
Network anomaly detection is critical for maintaining the health and stability of networking systems. One common use case in the networking domain is MAC-flap detection: a MAC flap occurs when a Media Access Control (MAC) address is learned on multiple ports within a short period of time. MAC-flapping events can be persistent, and it is hard to define a baseline pattern for their on/off behavior. The Meraki switch has enabled MAC-flap detection as a default feature to monitor the MAC forwarding table and report flapping events to the dashboard, as depicted in Figure <ref>.
Our model takes data sources generated on an hourly basis (as depicted in Figure <ref>) and looks back at 5 weeks' worth of data to compute the statistical baseline for each node. Based on a threshold, the model checks if new data points are outliers. The overall process of generating the threshold is depicted in Figure <ref>. An interim threshold is generated from the hourly aggregated time series data by the statistical model (see Section <ref>). Since MAC-flap events are sometimes normal, we also incorporate a whitelist created from alerts back-tested over k weeks. This whitelist includes nodes that have received excessive alerts over multiple weeks but for which no issues (i.e., support cases) were reported by customers. By incorporating both statistical thresholds and a whitelist, our model improves the accuracy of MAC-flap detection and reduces false positives.
§ METHODOLOGY
Our proposed model for anomaly detection consists of two components - a statistical model and a whitelist mechanism.
The statistical model is responsible for identifying anomalies in the time-series data, while the whitelist acts as an additional filter to eliminate persistently alerted nodes that are not true anomalies.
By incorporating this additional mechanism, we aim to improve the accuracy and control of the alert system, particularly in enterprise-level products where false positive alerts can cause significant disruptions to the user experience.
In this paper, we provide a detailed description of our model and demonstrate its effectiveness in real-world applications.
§.§ Statistical Model
Statistical methods rely on the assumption that anomalies are rare events that can be detected by deviations from the normal statistical distribution.
We chose statistical profiling for our use case because it is a simple and effective method of detecting anomalies in network organizations with millions of devices.
Statistical profiling uses statistical analysis to create a profile of normal behavior.
In our use case, for each network, we calculated an upper boundary for MAC-flap event frequency using statistical analysis of past data.
We collected raw data on hourly MAC-flap event counts and calculated the week-over-week percent change on the hourly count.
The whole process is shown in algorithm <ref>.
We used Laplace smoothing (add-one smoothing) <cit.> to avoid division-by-zero errors.
We used the week-over-week change in the MAC-flap event count instead of the count itself because it helps to account for the natural variability in the data over time.
By calculating the week-over-week change, we are able to capture any significant increases or decreases in MAC-flap events over a short period of time.
This is important because the occurrence of MAC-flap events can vary significantly depending on the time-of-day, day-of-the-week, and other factors.
In addition, using the week-over-week change allows us to compare the current level of MAC-flap events to a more recent baseline, rather than comparing it to a long-term average.
This helps to capture any changes in MAC-flap events that may have occurred recently, which may be missed if we were only looking at long-term averages.
By setting the upper boundary for MAC-flap events based on three standard deviations from the mean week-over-week change, we are able to account for natural variability in the data while still capturing any significant changes in MAC-flap events that may indicate an anomaly.
This approach helps us to detect anomalies in MAC-flap events with a high degree of accuracy, while minimizing false positives.
Using the past 5 weeks of data, we set the upper boundary for MAC-flap events based on three standard deviations (3-sigma) from the mean.
This upper boundary represents the expected range of MAC-flap event frequency for each network organization.
We then monitored the MAC-flap events for each organization and flagged any event that exceeded the upper boundary as an anomaly.
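A condensed sketch of this profiling step is shown below (Algorithm <ref> remains the authoritative description); the pandas formulation, the assumption of a regular hourly index, and the exact placement of the smoothing term are our own simplifications.

```python
import pandas as pd

def wow_upper_boundary(counts: pd.Series, weeks: int = 5, sigma: float = 3.0):
    """counts: hourly MAC-flap event counts for one node, on a regular hourly index.
    Returns the 3-sigma upper boundary on the week-over-week change and a per-hour anomaly flag."""
    hours_per_week = 24 * 7
    smoothed = counts + 1                       # add-one (Laplace) smoothing avoids division by zero
    prev_week = smoothed.shift(hours_per_week)  # value of the same hour one week earlier
    wow_change = (smoothed - prev_week) / prev_week
    history = wow_change.iloc[-weeks * hours_per_week:]
    upper = history.mean() + sigma * history.std()
    return upper, wow_change > upper
```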
§.§ Whitelist Creation
Enterprise-level products require not only accurate anomaly detection but also consideration of quality of service (QoS) and user experience.
It is important to minimize false positive alerts to prevent the alert system from generating unnecessary noise for customers.
In addition to the statistical model, a whitelist mechanism is included in our framework to filter out nodes that are persistently and stably alerted with a similar number of alerts week-over-week.
This additional mechanism provides greater control over the alert system and helps to reduce false positives.
In the specific use case of MAC-flap alerts, there are instances where MAC-flap events are expected and should not trigger an alert.
For example, when a wireless client is roaming from one access point (AP) to another (e.g., when someone is talking on the phone over Wi-Fi and walking between two APs in a cafeteria).
Therefore, it is essential to include direct control in the framework to filter out these expected events and prevent them from generating false alarms.
An example of the whole threshold generation workflow is depicted in Figure <ref>.
The statistical model described above in Section <ref> generates the interim thresholds for week 6 (W6) based on previous 5 weeks' time series data (W1 to W5).
At the same time, the previous 5 weeks' time series data is back-tested with the corresponding interim thresholds to derive an alerting profile for each node, including, e.g., the number of weeks alerted and the week-over-week percent change in the number of alerts or hourly event counts[More constraints and thresholds can be added when creating the whitelist.].
These profiling results are then joined with support cases to determine a whitelist of nodes that will be exempted from alerting: nodes that persistently received alerts in over 50% of the backtested weeks, yet for which no support case indicates a related connectivity issue.
Taking into account both the interim threshold and the whitelist allows us to effectively reduce false positive alerts.
The final threshold for week 6 is then deployed in production to stream alerts by comparing it against real-time hourly event counts.
Overall, the combination of the statistical model and the whitelist mechanism provides a more robust and effective approach for anomaly detection in enterprise-level products. This approach not only ensures accurate detection but also takes into account the QoS and user experience, providing customers with a more reliable and satisfactory service.
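The back-testing rule can be sketched as follows; the 50% alerted-weeks fraction and the week-over-week stability threshold mirror the example values discussed later in the evaluation, and the tabular representation is an assumption made for illustration.

```python
import pandas as pd

def build_whitelist(weekly_alerts: pd.DataFrame, support_nodes: set,
                    min_alerted_frac: float = 0.5, max_wow_change: float = 10.0) -> set:
    """weekly_alerts: one row per node, one column per backtested week, values = alert counts.
    support_nodes: nodes referenced by MAC-flap-related support cases (never whitelisted)."""
    alerted_frac = (weekly_alerts > 0).mean(axis=1)           # fraction of weeks with alerts
    smoothed = weekly_alerts + 1                              # add-one smoothing, as in the statistical model
    wow_pct = (smoothed.diff(axis=1).abs() / smoothed.shift(1, axis=1) * 100).mean(axis=1)
    candidates = (alerted_frac > min_alerted_frac) & (wow_pct < max_wow_change)
    return set(weekly_alerts.index[candidates]) - support_nodes
```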
§ EVALUATION
This section describes the model validation conducted on 20% of switches (more than 117k nodes) across 12 weeks real-world data.
§.§ Model Performance
In order to understand the distribution of MAC-flap events occurrence on network switches, we investigate the histogram of MAC-flap events per hour, as depicted in Figure <ref>.
84.18% of nodes have no MAC-flap events and 11.35% of nodes have fewer than 10 MAC-flap events per hour.
These two buckets account for more than 95% of the whole population.
As depicted in Figure <ref>, the alert ratio of nodes based on the statistical model varies across the buckets and does not have linear dependency on the number of events. Network switches with 10 to 100 hourly MAC-flap events receive the highest ratio of alerts over time. However, nodes with more than 100 hourly MAC-flap events are alerted less frequently. While intuitively networking devices with excessive amounts of MAC-flap events should be alerted, it is normal for some nodes to consistently observe flapping MAC addresses (e.g. roaming among APs as mentioned in Section <ref>). Our model is able to differentiate nodes with persistently high amounts of hourly MAC-flap events.
Figure <ref> segments the generated alerts by organization (each organization may have thousands of nodes) and depicts them on two axes: the ratio of alerted hours and the ratio of alerted nodes. While the ratio of alerted organizations (alerted at least once) is higher than 80%, the ratios of alerted hours and nodes per organization are low. Most organizations are located at the bottom left, with a low alert ratio on both dimensions. In the figure, the top-left region represents organizations that have a group of nodes alerted during a brief period of time, while the bottom-right region represents organizations that have a small subset of nodes persistently alerted over time.
As the thresholds are generated on a weekly basis, we compared the alert ratios across the 12 weeks. Figure <ref> demonstrates that the ratios of alerts generated by the statistical model are stable over weeks.
§.§ Model Validation w/ Support Cases
The alerts are cross-validated with customer support cases related to MAC-flap events, in which customers signal connectivity issues they encountered. MAC-flap events that happened within 30 days before or after a customer support case was opened are considered ground-truth labeled data[The delay is derived from an estimation of the time for the customer to discover networking issues and to be assured that the issues are entirely resolved.].
As depicted in Figure <ref>, 41.37% of hourly MAC-flap event counts fall within the 30-day (720-hour) time range before and after the case opening. The distribution of the temporal distance between MAC-flap events and support cases indicates that these events are potentially subject to connectivity issues.
Across 12 weeks, the statistical model performance is depicted in Figure <ref>, which also shows stability over weeks. High accuracy is achieved by predicting most true negatives (TN) (99.91% TN rate).
The output of the statistical model successfully alerts more than 60% of support cases, yet at the cost of alerting more than half of organizations and persistently alerting a subset of nodes with no relation to the support cases (the line of retention in the figure), which leads to low precision and recall.
These findings justified the need for a whitelist to reduce false positives (FP) and decrease the ratio of alerted organizations, thereby making the alert system less noisy.
Based on the previous observations, a whitelist can be created based on the back-testing procedure described in Section <ref>. It aims to exclude network switches that are persistently alerted over weeks with a stable number of alerts each week, i.e., the bubbles located in the bottom-right region of Figure <ref>. For instance, configuring the thresholds to exclude nodes that are alerted in more than 6 of the 12 weeks and whose average absolute week-over-week percent change in the number of alerts is lower than 10 prevents 32.3% of nodes from being alerted.
As depicted in Figure <ref>, conducting a grid search over these 2 thresholds makes it possible to decrease the ratio of alerted organizations from more than 60% to less than 20% while still alerting more than 30% of MAC-flap-related support cases.
§.§ Discussion
This study presents an initial attempt towards developing a hybrid model for a more effective alert system by integrating statistical models, backtesting, and qualitative data validation. The hybrid model aims to enhance the performance of the existing alert system by reducing false positive alerts through a two-step approach, i.e., employing statistical models and utilizing backtesting over multiple weeks to filter out persistently alerted nodes. Furthermore, the model's effectiveness is validated through the analysis of qualitative data gathered from customer support cases.
Subsequent studies can focus on improving the proposed model's performance by expanding the input data set and conducting further feature engineering. The study suggests exploring correlations between events such as MAC-flap and layer-2 loop detection to identify underlying patterns that can aid in the development of more accurate alert systems.
Additionally, future research may consider including more human feedback than the support cases as a part of the model development process to improve the user experience. For instance, the study proposes A/B testing to gather customer feedback and ensure the usefulness of the new alert system in accurately predicting network connectivity issues.
§ CONCLUSIONS
In this paper, we proposed a hybrid model for an alert system that combines statistical models with a whitelist mechanism to reduce false positive alerts in network management. Our approach leverages the large database to detect anomalies and incorporates a whitelist mechanism to filter out persistently alerted nodes, resulting in a more accurate and efficient alert system. We have validated our model using qualitative data from customer support cases and identified opportunities for future work, including more feature engineering and input data, as well as incorporating more human feedback in the model development process. Our proposed approach offers a promising solution to the challenge of reducing false positive alerts in network management and has the potential to improve the user experience in enterprise-level products.
§ ACKNOWLEDGEMENT
We would like to express our sincere gratitude to our data engineers – Rajat Mehrotra, Sinduja Seethapathy, Vinay Kumar Abburi, and Nate Fung, for obtaining the data for model validation, maintaining the data infrastructure, and checking the data quality.
We greatly appreciate their invaluable support and dedication to this project.
|
http://arxiv.org/abs/2306.08723v1
|
20230614195945
|
Hippocampus Substructure Segmentation Using Morphological Vision Transformer Learning
|
[
"Yang Lei",
"Yifu Ding",
"Richard L. J. Qiu",
"Tonghe Wang",
"Justin Roper",
"Yabo Fu",
"Hui-Kuo Shu",
"Hui Mao",
"Xiaofeng Yang"
] |
physics.med-ph
|
[
"physics.med-ph"
] |
Hippocampus Substructure Segmentation Using Morphological Vision Transformer Learning
Yang Lei^1, Yifu Ding^1, Richard L.J. Qiu^1, Tonghe Wang^2, Justin Roper^1, Yabo Fu^2, Hui-Kuo Shu^1,
Hui Mao^3 and Xiaofeng Yang^1*
1Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
2Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065
3Department of Radiology and Imaging Sciences and Winship Cancer Institute, Atlanta, GA 30308
*Corresponding author:
Xiaofeng Yang, PhD
Department of Radiation Oncology
Emory University School of Medicine
1365 Clifton Road NE
Atlanta, GA 30322
E-mail: [email protected]
Abstract
Background: The hippocampus plays a crucial role in memory and cognition. Because of the associated toxicity from whole brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on an accurate segmentation of the small and complexly shaped hippocampus.
Purpose: To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1 weighted (T1w) MRI images, we developed a novel model, Hippo-Net, which uses a mutually enhanced strategy.
Methods: The proposed model consists of two major parts: 1) a localization model is used to detect the volume-of-interest (VOI) of hippocampus. 2) An end-to-end morphological vision transformer network is used to perform substructures segmentation within the hippocampus VOI. The substructures include the anterior and posterior regions of the hippocampus, which are defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MRI images, which are further improved by learning-based morphological operators. The integration of these morphological operators into the vision transformer increases the accuracy and ability to separate hippocampus structure into its two distinct substructures.
A total of 260 T1w MRI datasets from Medical Segmentation Decathlon dataset were used in this study. We conducted a five-fold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 T1w MR images with the model trained on the first 200 images. The segmentations were evaluated with two indicators, 1) multiple metrics including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), volume difference (VD) and center-of-mass distance (COMD); 2) Volumetric Pearson correlation analysis.
Results: In five-fold cross-validation, the DSCs were 0.900±0.029 and 0.886±0.031for the hippocampus proper and parts of the subiculum, respectively. The MSD were 0.426±0.115mm and 0.401±0.100 mm for the hippocampus proper and parts of the subiculum, respectively.
Conclusions: The proposed method showed great promise in automatically delineating hippocampus substructures on T1w MRI images. It may facilitate the current clinical workflow and reduce the physicians’ effort.
Keywords: hippocampus substructure, segmentation, deep learning
§ INTRODUCTION
The hippocampus is a pair of medial and subcortical brain structures located in proximity to the temporal horn of the lateral ventricles, which is an active research area due to its implication in memory and neuropsychiatric disorders.<cit.> In radiation therapy, hippocampal avoidance whole brain radiation using volumetric modulated arc therapy (VMAT) plus the medication memantine has been shown to preserve cognitive function without compromising progression-free survival or overall survival when compared to classic whole brain radiation therapy plus memantine.<cit.> In Alzheimer’s Disease (AD), the progression of AD occurs from the trans-entorhinal cortex to the hippocampus, and finally to the neocortex.<cit.> These progression steps depend on the severity of the neurofibrillary tangles found in neuropathological studies. However, similar patterns can also be observed in the progress of brain atrophy found on MRI imaging studies. The atrophy of hippocampus measured from MRIs can be used as an early sign of AD progression.<cit.> Additionally, evidence of hippocampal atrophy as measured from MRIs can occur before the onset of clinical symptoms.<cit.> Therefore, accurate segmentation of the hippocampus from MRIs is a meaningful task in medical image analysis across multiple disciplines.<cit.>
To determine whether the hippocampus is atrophic, clinicians often need to segment the bilateral hippocampus on MRI scans and analyze their shape and volume.<cit.> This task is difficult, however, due to several factors. Firstly, the hippocampus has low contrast with the surrounding tissues on MRI scans,<cit.> since it is a gray matter structure. Secondly, the hippocampus has an irregular shape leading to a blurred boundary in cross-sectional slices.<cit.> Thirdly, the hippocampus is a small structure with limited volume as compared to other structures that are routinely delineated as organs-at-risk (OARs) in radiation therapy.<cit.> Finally, there are large variations in the size and shape of the hippocampus across patients.<cit.> Therefore, accurate and automatic segmentation of hippocampus is a challenging task. Until now, manual segmentation of hippocampus is still the standard in clinical practice.<cit.> However, manual segmentation is a tedious and error-prone process, which limits its application in big data and clinical practice. Thus, many efforts have been devoted to developing computer-aided diagnostic systems for automated segmentation of the hippocampus.
The existing automatic hippocampal segmentation methods can be categorized into two main types: atlas-based methods and machine learning-based methods. Atlas-based methods can be further divided based on the number of atlases used in the segmentation process into single-atlas-based, average-shape atlas-based, and multi-atlas-based approaches. For instance, Haller et al. first proposed to use the single-atlas-based approach for hippocampal segmentation.<cit.> However, single-atlas-based approaches are limited by inter-patient variations. To address this, average shape-based mapping approaches were proposed to overcome such limitations, but the segmentation results depend on the alignment quality of the target and average maps. Thus, a priori knowledge of medical mapping was incorporated into the multi-atlas-based segmentation approach. For example, Wang et al. proposed a robust discriminative multi-atlas label fusion approach to segment hippocampus by building the conditional random field (CRF) model that combines distance metric learning and graph cuts.<cit.> Wang’s approach is a patch embedding multi-atlas label fusion method that utilizes only the relationship between the target block and the atlas block, and ignores the possibility that unrelated atlas blocks may dominate the voting process. Existing atlas-based methods do not consider the anatomical differences in hippocampus among patients, and do not consider the correlation between atlases.
Machine learning-based methods can be further classified into traditional machine learning-based approaches and deep learning-based approaches. Traditional machine learning-based approaches mainly include support vector machine (SVM), Markov random field (MRF), principal component analysis (PCA), et al.<cit.> For instance, Hao et al. proposed a local label learning strategy to estimate segmentation labels of target images by using SVM with image intensity and texture features.<cit.> However, these traditional approaches to machine learning rely heavily on the quality of handcrafted features, and further suffer from slow segmentation, susceptibility to noise interference, and insufficient generalization performance.<cit.>
Because convolutional neural network (CNN) models can automatically extract the pixel feature information from images, they have been widely used in multiple medical image analysis tasks.<cit.> For example, CNN-based models can be used to segment the hippocampus from MRIs.<cit.> Qiu et al. proposed a multitask 3D U-net framework for hippocampus segmentation by minimizing the difference between the targeted binary mask and the model prediction, and optimizing an auxiliary edge-prediction task.<cit.> Cao et al. developed a two-stage segmentation method to perform the task of 3D hippocampus segmentation by localizing multi-size candidate regions and fusing the multi-size candidate regions.<cit.> These methods show promising results, demonstrating the potential of CNN-based models to improve the efficiency and accuracy of hippocampus segmentation.
However, most existing deep learning-based methods ignore the spatial information of the hippocampus relative to the entirety of the human brain. As a result, they cannot effectively fuse the shape features and the semantic features, which leads to lower segmentation accuracy. Hippocampal tracing began from anterior where the head is visible as an enclosed gray matter structure inferior to the amygdala, and continued posteriorly using surrounding white matter or CSF as boundaries. Subiculum (posterior parts of hippocampus) was included in the hippocampus. Delineation stopped when the wall of the ventricle was visibly contiguous with the fimbria. The subiculum occupies a portion of the para-hippocampal gyrus in the mesial temporal lobe and is a component of the medial temporal memory system. Therefore, in this work, we aim to develop a novel deep network framework to segment the hippocampus by introducing a spatial attention mechanism to capture the spatial location information of the hippocampus relative to the brain. We also designed a cross-layer dual encoding shared decoding network to extract the semantic characteristics of the hippocampus. By combining the spatial location information and semantic characteristics of the hippocampus, we enhanced the segmentation accuracy of the hippocampus. In this study, we trained a novel morphological visual transformer learning-based hippocampus substructure segmentation for accurate segmentation of the anterior and posterior regions of the hippocampus from T1 weighted (T1w) MR images.
§ METHODS AND MATERIALS
§.§ Overview
Figure 1 outlines the schematic flow chart of this hippocampus multi-substructure segmentation process. The proposed network follows the same feedforward path for both training and inference. A collection of hippocampus images and multi-substructure contours was used for model training. The proposed model, named as morphological visual transformer-based network, takes the hippocampus image as input and generates the auto-contour of two substructures, which are the hippocampus proper and parts of the subiculum. The manual contours of these two substructures were used as ground truth to supervise the proposed network.
The proposed model consists of two deep learning-based subnetworks, i.e., a localization model and a segmentation model. The localization model is a hippocampus detection network that is used to detect the volume-of-interest (VOI) for both the hippocampus proper and parts of the subiculum<cit.> from the T1w MR image. The MR image is then cropped within the VOI before transfer to the segmentation subnetwork to ease the computational task. The segmentation model is implemented via an end-to-end morphological vision transformer network, which is used to perform substructure segmentation within the hippocampus VOI. The vision transformer incorporates the dominant features extracted from MR images. The integration of the morphological operators into the vision transformer increases the ability to separate the hippocampus into two substructures.
During inference, the trained localization model takes a hippocampus T1w MR image as input and detects the VOI of hippocampus as the first step. Then, the cropped image within the VOI is sent to the segmentation model, i.e., morphological visual transformer, to segment the substructures. Finally, based on the detected coordinates derived by the localization model, the segmented contour is converted back to its original coordinates to obtain the final segmentation.
§.§ Localization model
The aim of the localization model is to crop the image to a VOI that only covers the hippocampus, in order to ease the computational task of substructure segmentation. To preserve the spatial information of the substructures, the coordinate of the detected VOI is recorded during testing. The localization of the ground-truth hippocampus is used to supervise the localization model; to derive it, the manual contour is needed. Consider a set of MR images I_Img∈ R^(w× h× d), where w and h denote the width and height of I_Img and d represents its depth, and the corresponding physician-delineated hippocampus I_Seg=I_seg^p ∪ I_seg^s, where I_seg^p denotes the hippocampus proper and I_seg^s denotes the parts of the subiculum. Based on I_Seg, the bounding box that covers only the hippocampus can be derived. This bounding box is defined as the ground-truth volume-of-interest (VOI). The coordinate of the VOI is represented by C=[x_c,y_c,z_c,w_c,h_c,d_c ]∈ R^6, where x_c, y_c and z_c denote the center of the hippocampus VOI, and w_c, h_c and d_c denote the width, height and depth of the VOI along the three dimensions.
The localization model design is inspired by a recently developed focal modulation network, which is used in object detection.<cit.> The localization model includes a hierarchical contextualization, which is used for feature extraction at different hierarchical levels, a modulator, which combines the features from different levels, and a neural network layer for position estimation. The details of the localization model are explained as follows.
Given an input MRI I_Img∈ R^(w× h× d), a first convolution layer initializes the feature map F_0, and a multi-scale hierarchy of feature maps is then collected iteratively via the following step:
F_k=GeLu(Conv(F_k-1 )),
where F_k-1 denotes the feature map from the previous iteration, and F_k is derived by applying a convolution followed by the Gaussian error linear unit (GeLU) activation function.<cit.> After several iterations of Eq. (1), the multi-hierarchical features are collected; we then resize these feature maps to the same size via interpolation and sum them together:
F_m=Σ_k BicubicInterpolate(F_k).
Then, using a neural network layer, we derive an estimate of C, labeled Ĉ=[x̂_c,ŷ_c,ẑ_c,ŵ_c,ĥ_c,d̂_c], from F_m. To achieve this, we use the loss function shown in Eq. (3) during the training of the localization module.
L_loc=d((x_c,y_c,z_c ),(x̂_c,ŷ_c,ẑ_c))+λ(√(w_c^2-ŵ_c^2)/w+√(h_c^2-ĥ_c^2)/h+√(d_c^2-d̂_c^2)/d),
where d((x_c,y_c,z_c ),(x̂_c,ŷ_c,ẑ_c)) denotes the Euclidean distance between the two centers (x_c,y_c,z_c ) and (x̂_c,ŷ_c,ẑ_c).
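A compact PyTorch sketch of this localization backbone is given below. The channel width, the number of levels, the global pooling before the regression layer, and the use of trilinear (rather than bicubic) interpolation for 3D feature maps are all assumptions; the loss in Eq. (3) would supervise the 6-dimensional output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalizationBackbone(nn.Module):
    """Sketch of Eqs. (1) and (2): iterated Conv+GeLU feature maps, resized and summed,
    followed by a layer that regresses the VOI coordinates C = [x, y, z, w, h, d]."""
    def __init__(self, channels: int = 16, levels: int = 4):
        super().__init__()
        self.stem = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList(
            [nn.Conv3d(channels, channels, kernel_size=3, stride=2, padding=1)
             for _ in range(levels)])
        self.head = nn.Linear(channels, 6)

    def forward(self, mri: torch.Tensor) -> torch.Tensor:   # mri: (B, 1, D, H, W)
        feat = F.gelu(self.stem(mri))
        target_size = feat.shape[2:]
        fused = feat
        for block in self.blocks:
            feat = F.gelu(block(feat))                                             # Eq. (1)
            fused = fused + F.interpolate(feat, size=target_size,
                                          mode="trilinear", align_corners=False)   # Eq. (2)
        pooled = fused.mean(dim=(2, 3, 4))      # (B, channels)
        return self.head(pooled)                # predicted VOI coordinates
```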
§.§ Morphological visual transformer
For the next step, the MRI I_Img is cropped within a VOI box whose center is defined by Ĉ. This process removes regions unrelated to hippocampus segmentation and thus improves the efficiency of the model. To ensure the cropped image is uniformly sized for the following subnetwork, zero-padding is used. The processed image is then input into the morphological visual transformer (MVT). The MVT is built in an end-to-end fashion, meaning that the input and output share the same size. After several convolutional layers with a stride size of 2, the MVT uses two auto-learned morphological operators, dilation and erosion, to process the hidden feature maps. As compared to a convolutional kernel with a stride size of 2 or a max-pooling layer, which can be regarded as a dilation with a flat square structuring element followed by a pooling, the learned morphological operator can be tuned to aggregate the most important information. This further reduces the redundant and meaningless information passed to the next operator, the visual transformer, and therefore improves its performance. The output of the two morphological operators is then concatenated and fed into a projection convolutional layer and a linear projection operator to fit it to the input of the visual transformer. A widely used visual transformer architecture is adopted.<cit.> Afterwards, several deconvolutional layers are applied until the output of the MVT model is equal in size to the input.
After the MVT step, consolidation can be used to transform the segmentation back to the original coordinate system (I_img), since the location information has been obtained from the localization model.
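The learned morphological operators can be sketched as follows for the 2D case; the 3D extension, the stride-2 downsampling, and the exact structuring-element parameterization used in the MVT are omitted or assumed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableMorphology2d(nn.Module):
    """Grayscale dilation/erosion with a learned (non-flat) structuring element:
    dilation takes the max of (x + w) over the window, erosion the min of (x - w)."""
    def __init__(self, channels: int, kernel_size: int = 3, mode: str = "dilation"):
        super().__init__()
        self.k, self.mode = kernel_size, mode
        # Initialized flat (zeros), so the layer starts as a plain max/min filter.
        self.weight = nn.Parameter(torch.zeros(channels, kernel_size * kernel_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (B, C, H, W)
        b, c, h, w = x.shape
        pad = self.k // 2
        patches = F.unfold(x, self.k, padding=pad)             # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        se = self.weight.view(1, c, self.k * self.k, 1)
        if self.mode == "dilation":
            out = (patches + se).amax(dim=2)
        else:  # erosion
            out = (patches - se).amin(dim=2)
        return out.view(b, c, h, w)
```

In the MVT, the dilation and erosion branches would be applied to the same hidden feature map and their outputs concatenated before the projection layers feeding the transformer.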
To supervise the MVT, a combination of two loss functions is used, which are generalized cross entropy loss L_GCE and generalized Dice loss L_GD. The L_GCE is used to evaluate the difference between the predicted label and the ground truth label at each voxel, which is defined as:
L_GCE = -Σ_i l_i log l̂_i
where l_i denotes the ground truth label at voxel i, l̂_i denotes the predicted label at voxel i.
The L_GD is used to address the issues about the voxel quantity imbalance of the segmented voxels (often a small portion of the whole image) and background (large portion), which is defined as:
L_GD = 1 - (2Σ_i l_i×l̂_i + ϵ)/(Σ_i l_i^2 + Σ_i l̂_i^2 + ϵ)
where ϵ is a small value. The weighted sum of these two loss terms is then used to train the MVT model.
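A compact sketch of this weighted loss is given below; the loss weights, the value of ϵ, and the reduction over the batch are assumptions, since they are not specified above.

```python
import torch

def segmentation_loss(pred: torch.Tensor, target: torch.Tensor,
                      w_ce: float = 1.0, w_dice: float = 1.0, eps: float = 1e-6):
    """pred: per-voxel class probabilities (after softmax), shape (B, C, D, H, W);
    target: one-hot ground-truth labels of the same shape."""
    gce = -(target * torch.log(pred.clamp_min(eps))).sum(dim=1).mean()        # L_GCE
    inter = (target * pred).sum(dim=(2, 3, 4))
    denom = (target ** 2).sum(dim=(2, 3, 4)) + (pred ** 2).sum(dim=(2, 3, 4))
    gdice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()                  # L_GD
    return w_ce * gce + w_dice * gdice
```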
§.§ Dataset
In total, 260 T1w MR images from the Medical Segmentation Decathlon were used in this study.<cit.> The Medical Segmentation Decathlon dataset consists of T1-weighted magnetization-prepared rapid gradient echo (MPRAGE) MRIs of both healthy adults (ninety subjects) and adults with a non-affective psychotic disorder. The corresponding target regions of interest (ROIs) were the anterior and posterior of the hippocampus, defined as the hippocampus proper and parts of the subiculum. This dataset was selected due to the precision needed to segment such a small object in the presence of a complex surrounding environment.
We conducted a five-fold cross-validation study on the first 200 T1w MR images. Then, a hold-out test was performed on the remaining 60 images using a model trained on the first 200 images. The segmentation was evaluated with multiple quantitative metrics including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), volume difference (VD) and center-of-mass distance (COMD). A Bland-Altman analysis and volumetric Pearson correlation analysis were also performed.
§.§ Implementation and evaluation
The investigated deep learning networks were designed using Python 3.6 and TensorFlow and implemented on a GeForce RTX 2080 GPU with 12GB of memory. Optimization was performed using the Adam optimizer with a learning rate of 2×10^-4. With a batch size of 20 during training, GPU memory utilization was 96%. Once trained, the network takes only 1.5 minutes to segment the hippocampus. To demonstrate the utility of the morphological operators, an ablation study was conducted in which we tested the performance of the proposed method with and without the morphological operators. To further demonstrate the significance of the proposed work, we compared the proposed method with other popular segmentation models, the cascaded U-Net (CasU)<cit.> and the visual transformer network (VIT).<cit.> Comparisons were performed using the same training and testing datasets and the same computational environment.
§ RESULTS
§.§ Comparing with state-of-the-art
A visual comparison between the proposed method and the competing methods is shown in Fig. 2. As can be seen from the first row, the proposed method shows good agreement with the ground truth, whereas the competing methods do not. In the second row, misclassification of the posterior part is observed for the cascaded U-Net. To better demonstrate the segmentation accuracy, we computed the absolute difference between the segmentation results of the proposed and competing methods and the binary masks of the manual contours. The difference images are shown in the fourth to sixth rows. As can be seen from the fifth and sixth rows, the difference images of the two competing methods show greater error at the junction between the hippocampus proper and the parts of the subiculum.
The linear correlation coefficient, computed between the target volumes of the ground truth and the segmentation, is shown in Fig. 3. The linear correlation coefficient obtained using the proposed method was 0.999 and 0.993 on the five-fold cross-validation and the hold-out test, respectively. These values indicate good agreement between the ground truth and the proposed results, as compared to 0.989/0.983 and 0.991/0.979 obtained by the cascaded U-Net and VIT, respectively, on the five-fold cross-validation/hold-out test. On the hold-out test, the VIT consistently underestimated the region, and this underestimation became more pronounced for larger volumes.
The quantitative metrics of the proposed method and the alternate methods from the 200-case cross-validation and the 60-case hold-out test are listed in Tables 1 and 2 and Tables 3 and 4, respectively. In the cross-validation experiment, the proposed model significantly outperformed the cascaded U-Net and VIT in all metrics. In five-fold cross-validation, the DSC, HD95, MSD and COMD were 0.900±0.029 and 0.886±0.031, 1.156±0.277 and 1.133±0.264, 0.426±0.115 and 0.401±0.100, and 0.491±0.300 and 0.738±0.452 for the hippocampus proper and the parts of the subiculum, respectively.
In the hold-out test using the external dataset, the proposed model is significantly superior to the alternate approaches, as shown in Tables 3 and 4 in comparison with the cascaded U-Net and VIT. In the hold-out test, the DSC, HD95, MSD and COMD were 0.881±0.033 and 0.863±0.034, 1.328±0.404 and 1.272±0.388, 0.494±0.113 and 0.466±0.112, and 0.608±0.313 and 0.834±0.478 for the hippocampus proper and the parts of the subiculum, respectively. Compared to the five-fold cross-validation, the hold-out test performed slightly worse with slightly higher standard deviations, which may be because the training data distribution does not fully cover the range of cases in the hold-out test.
§ DISCUSSION
A novel hippocampus segmentation method (called MVT) is proposed by introducing a localization mechanism to aid segmentation and designing a morphological visual transformer network for substructure segmentation. The localization model detects the VOI of the hippocampus. The end-to-end morphological visual transformer network then performs substructure segmentation within the hippocampus VOI. The substructures are the anterior and posterior regions of the hippocampus, defined as the hippocampus proper and parts of the subiculum. The visual transformer incorporates the dominant features extracted from the MRI images and is improved by learning-based morphological operators. The morphological operators integrated into the visual transformer enhance the ability to separate the hippocampus into the two substructures.
Due to limited computational resources, our method focused on domain-incremental learning with a cropped region for analysis. We plan to test the performance of our method in a class-incremental setup. As the visual transformer contains orders of magnitude more parameters than traditional CNNs because of the self-attention process, it is essential to investigate effective optimization methods that reduce GPU memory allocation and simplify the overall ViT U-Net architecture.
Our MVT is a supervised method, which means it still requires accurate manual contours as training labels. Semi-supervised learning methods, in contrast, can learn features from unlabeled data. In a future study, we will extend the proposed method with an ensemble approach that integrates supervised and semi-supervised learning, using the limited labeled data together with large-scale unlabeled MRI data to improve generalization.
The auto-segmentation of hippocampal substructures has significant clinical relevance. For example, in hippocampal avoidance whole-brain radiation therapy (HA-WBRT),<cit.> current intensity-modulated radiation treatment (IMRT) and arc-based VMAT techniques can reduce dose to the hippocampus without sacrificing target coverage and homogeneity.<cit.> Further improvements in patient outcomes may be possible by considering the substructures separately for optimal dose sparing; however, accurate segmentation is critical. With more accurate contouring of the hippocampal substructures, different dose constraints can be assigned to these substructures in HA-WBRT,<cit.> allowing better sparing of the critical part of the hippocampus.
§ CONCLUSION
We have developed a novel deep learning-based method to accurately segment the anterior and posterior parts of the hippocampus. Our results showed good performance in terms of DSC and VD between the segmentation results and the ground truth.
ACKNOWLEDGEMENT
This research is supported in part by the National Institutes of Health under Award Number R01CA215718, R56EB033332, R01EB032680, and P30CA008748.
Disclosures
The authors declare no conflicts of interest.
|
http://arxiv.org/abs/2306.03161v1
|
20230605181603
|
On the Role of Entanglement and Statistics in Learning
|
[
"Srinivasan Arunachalam",
"Vojtech Havlicek",
"Louis Schatzki"
] |
quant-ph
|
[
"quant-ph",
"cs.CC",
"cs.LG"
] |
On the Role of Entanglement and Statistics in Learning
Srinivasan Arunachalam (IBM Quantum, Almaden Research Center), Vojtěch Havlíček (IBM Quantum, T.J. Watson Research Center),
Louis Schatzki (Electrical and Computer Engineering, University of Illinois Urbana-Champaign)
July 31, 2023
We make progress in understanding the relationship between learning models with access to entangled, separable and statistical measurements in the quantum statistical query (QSQ) model. We show the following results.
Entangled versus separable measurements. The goal is to learn an unknown f from the concept class 𝒞⊆{f:{0,1}^n→ [k]} given copies of 1/√(2^n)∑_x |x,f(x)⟩. We show that, if T copies suffice to learn f using entangled measurements, then O(nT^2) copies suffice to learn f using only separable measurements.
Entangled versus statistical measurements. The goal is to learn a function f ∈ 𝒞 given access to separable measurements or statistical measurements. We exhibit a concept class based on degree-2 functions with an exponential separation between QSQ learning and quantum learning with entangled measurements (even in the presence of noise). This proves the “quantum analogue" of the seminal result of Blum et al. <cit.> that separates classical SQ learning from classical PAC learning with classification noise.
QSQ lower bounds for learning states. We introduce a quantum statistical query dimension (QSD), and use it to give lower bounds on the complexity of QSQ learning. We prove superpolynomial QSQ lower bounds for testing purity of quantum states, shadow tomography, learning coset states for the Abelian hidden subgroup problem, degree-2 functions, planted bi-clique states and learning output states of Clifford circuits of polylog(n) depth. We also show that an extension of QSD characterizes the QSQ complexity of general search problems.
Further applications. We give an unconditional separation between weak and strong error mitigation and prove lower bounds for learning distributions in the QSQ model. Prior works by Quek et al. <cit.>, Hinsche et al. <cit.> and Nietner et al. <cit.> proved analogous results assuming diagonal measurements, and our work removes this assumption.
§ INTRODUCTION
Machine learning (ML) has emerged as one of the most successful parts of artificial intelligence, with wide-ranging applications in computer vision, image recognition and natural language processing. More recently, ML has been used in popular applications such as AlphaGo and AlphaZero (to play the games of Go and chess), ChatGPT (to mimic a human conversation) and AlphaFold (for solving some hard instances of protein folding). Simultaneously, understanding the power of quantum physics for ML has received much attention in the quantum computing community. Many quantum algorithms have been proposed for practically relevant tasks such as clustering, recommendation systems, linear algebra, convex optimization, support-vector machines, kernel-based methods and topological data analysis <cit.>. There are several surveys dedicated to understanding the power of quantum methods for ML <cit.>.
Quantum learning theory provides a theoretical framework to understand quantum advantages in ML. Here, there is a concept class 𝒞, which is a collection of n-qubit quantum states; a learner is provided with several copies of ρ∈𝒞, performs an arbitrary entangled operation on ρ^⊗ T, and the goal is to learn ρ well enough. This framework encompasses several results in quantum learning such as tomography, shadow tomography, learning interesting classes of states, and learning an unknown distribution or function encoded as a quantum state <cit.>.
A natural concern for the near-term implementation of such quantum learning algorithms is that it is infeasible to prepare several copies of ρ and, furthermore, to perform arbitrary entangled measurements on ρ. Motivated by near-term implementations, <cit.> introduced the model of quantum statistical query (QSQ) learning to understand the power of measurement statistics for learning, with variations of it finding applications in <cit.>. In the QSQ model, if a learning algorithm wants to learn an unknown n-qubit quantum state ρ, it can perform poly(n)-many efficiently-implementable two-outcome measurements {M_i,𝕀-M_i} with noise, and the goal is to learn the unknown ρ well enough. Clearly this model is weaker than the model with access to ρ^⊗ T, since the learner is only allowed access to expectation values over a single copy of ρ.
In this work, we primarily consider concept classes constructed from Boolean functions. In Valiant's probably approximately correct (PAC) learning framework, a concept class 𝒞⊆{c:{0,1}^n→{0,1}} is a collection of Boolean functions. In the PAC model,[For simplicity, we discuss PAC learning under the uniform distribution, i.e., x is drawn uniformly from {0,1}^n.] a learning algorithm is given many uniformly random examples (x^i,c^⋆(x^i)), where c^⋆∈𝒞 is unknown, and it uses these to learn c^⋆ approximately well. Bshouty and Jackson introduced the quantum PAC (QPAC) model <cit.> wherein a quantum learner is given quantum examples |ψ_c^⋆⟩^⊗ T, i.e., coherent superpositions
|ψ_c^⋆⟩=1/√(2^n)∑_x|x,c^⋆(x)⟩,
and it needs to learn the unknown c^⋆∈𝒞 well enough. The complexity measure here is the sample complexity, i.e., the number of classical or quantum examples used by the algorithm. Several works have studied this model and proven positive and negative results for learning function classes (see <cit.> for a survey). Surprisingly, <cit.> observed that many positive results using quantum examples can be transformed into algorithms in the weaker QSQ framework. This motivates the following two open questions:
1. Are entanglement measurements needed for learning function classes?
2. Do measurement statistics suffice for learning function classes?
§.§ Main results
In this work, we resolve both of these questions.
We show that (i) for learning Boolean function classes the sample complexity of learning with entangled measurements and separable measurements are polynomially related and (ii) there is an exponential separation between learning with separable measurements (even in the presence of classification noise) and learning with just measurement statistics. We now discuss these results in more detail.
Entangled versus Separable measurements. Understanding the role of entangled measurements in quantum information has received attention recently. Bubeck et al. <cit.> gave a property testing task for which entangled measurements are necessary for obtaining the optimal bounds. More recently, for learning classes of arbitrary quantum states (i.e., not necessarily states constructed from function classes), there were two recent works by <cit.> which showed exponential separation for learning properties of quantum states when given access entangled measurements in comparison to separable measurements. Here, we study if similar separations exist when considering function classes, a small subset of all quantum states.
Our first result shows that in order to exactly learn a function class, every learning algorithm using entangled measurements can be transformed into a learning algorithm using just separable measurements with a polynomial overhead in sample complexity. In contrast, if the goal is to learn a property about the unknown function, then entangled measurements can reduce the sample complexity exponentially compared to separable measurements.
For a concept class 𝒞⊆{c:{0,1}^n→{0,1}}, if T copies of |ψ_c⟩ suffice to learn c using entangled measurements, then O(nT^2) copies suffice to learn c using only separable measurements.
QSQ versus noisy-QPAC learning. In <cit.> they ask if there is a natural class of Boolean functions for which quantum PAC learning can be separated from QSQ learning. Classically, it is well known that the class of parities separates PAC learning from SQ learning. In <cit.>, it was observed that the classes of parities, juntas and DNF formulas are learnable in the QSQ framework, and a candidate class separating QSQ from quantum PAC was unclear. Furthermore, Kearns posed the question of whether SQ learning is equal to PAC learning with classification noise. The seminal result of Blum et al. <cit.> resolves this question by showing that the class of parity functions on the first O((log n)·loglog n) bits separates these two models of learning (under constant noise rate). This motivates the following questions:
* In the noisy quantum PAC model <cit.>, a learning algorithm is given copies of
|ψ^n_c^⋆⟩=1/√(2^n)∑_x∈{0,1}^n|x⟩(√(1-η)|c^⋆(x)⟩+√(η)|1⊕ c^⋆(x)⟩).
and the goal is to learn c^⋆. Is there a class that separates noisy quantum PAC learning from QSQ learning?
* Admittedly, the class constructed by Blum et al. <cit.> is “unnatural", can we obtain the separation in (a) for a natural concept class?
* Does such a separation hold for non-constant error rate η?
Here, we describe a natural problem that witnesses this separation and resolves the questions above.
There is a concept class 𝒞 of n-bit Boolean functions, based on degree-2 functions, that can be learned from quantum examples, even in the presence of η-classification noise, in time poly(n,1/(1-2η)), whereas every QSQ algorithm requires 2^Ω(n) queries to learn 𝒞.
§.§ Further applications
Using our lower bounds for learning quadratic functions, we present two applications. First, we give an exponential separation between weak and strong error mitigation, resolving an open question of Quek et al. <cit.>, who proved the same separation assuming the observables are diagonal. Second, we show super-polynomial lower bounds for learning the output distributions (in the computational basis) of n-qubit Clifford circuits of depth ω(log n) and Haar-random circuits of depth O(n). This extends the work of <cit.>, who proved these lower bounds for algorithms wherein the observables are diagonal.
Error mitigation. Error mitigation (EM) was introduced as an algorithmic technique to reduce the noise induced in near-term quantum devices, hopefully with a small overhead, in comparison to building a full-scale fault-tolerant quantum computer <cit.>. In recent times, EM has received a lot of attention, with several works investigating how to obtain near-term quantum speedups as a surrogate to performing error correction. More formally, an EM algorithm takes as input a quantum circuit C, a noise channel 𝒩 and copies of |ψ'⟩=𝒩(C)|0^n⟩. In a strong EM protocol, the algorithm needs to produce samples from a distribution D that satisfies d_TV(D,{⟨ x| C|0^n⟩^2}_x)≤ε, and in the weak setting, given observables M_1,…,M_k, the goal is to approximate ⟨ψ|M_i|ψ⟩ up to ε-error. In <cit.>, they asked the question: how large should k be in order to simulate strong EM using weak EM? They show that when the M_i are diagonal, then k=2^Ω(n), i.e., they gave an exponential separation between weak and strong EM. In this work, our main contribution is to use Result <ref> to remove this assumption and show an unconditional exponential separation between weak and strong EM.
Learning distributions. Recently, the works of Hinsche et al. <cit.> and Nietner et al. <cit.> initiated the study of learning output distributions of quantum circuits. In particular, they considered the following general question: let |ψ_U⟩=U|0^n⟩ where U∈𝒰 and 𝒰 is a family of interesting unitaries, and let P_U(x)=|⟨ x|U|0^n⟩|^2. How many queries does one need to learn P_U to total variation distance at most ε? To this end, the works of <cit.> looked at diagonal M, i.e., M=∑_x ϕ(x) |x⟩⟨x| for ϕ:{0,1}^n→ [-1,1], and showed the hardness of approximately learning P_U for 𝒰 being ω(log n)-depth Clifford circuits and Haar random circuits of depth d∈{ω(log n), O(n)} and d→∞.[They in fact prove that learning even a (1-exp(-n))-fraction of the circuits in these circuit families is hard in the QSQ model when restricted to diagonal observables.] In this work, we improve upon their lower bounds by removing the assumption that M is diagonal, and prove general lower bounds for the circuit families considered in their work. We also observe that learning the output states of constant-depth circuits can be done in polynomial time using QSQ queries.
§.§ Proof overview
In this section we give a brief overview of the results we described above.
Relating entangled and separable learning. Our starting point towards proving this result is a result of Sen <cit.>: given copies of |ψ_c^⋆⟩, one can apply random measurements on single copies of this state and produce an h that is approximately close to c^⋆ using at most T=O((log |𝒞|)/ε) copies of |ψ_c^⋆⟩.[This idea was used in an earlier work of Chung and Lin <cit.> as well, but they were not concerned with entangled versus separable measurements.] So for separable learning, by picking ε=η_m, the minimum distance between concepts in 𝒞, one can exactly learn 𝒞 using T quantum examples. Proving a lower bound on entangled learning is fairly straightforward as well: first observe that Ω((log |𝒞|)/n) is a lower bound on entangled learning (since each quantum example gives n bits of information and for exact learning one needs Ω(log |𝒞|) bits of information), and also observe that 1/η_m is a lower bound, since to distinguish just between c,c'∈𝒞 that satisfy Pr_x[c(x)=c'(x)]=1-η_m, one needs Ω(1/η_m) copies of the unknown state. Putting this separable upper bound and entangled lower bound together gives T_sep(𝒞)≤ n·T_ent(𝒞)^2 for all 𝒞, where T_ent(𝒞) and T_sep(𝒞) denote the sample complexities of exactly learning 𝒞 with entangled and with separable measurements, respectively. We further improve the entangled lower bound as follows: let η_a=𝔼_{c,c'∈𝒞} Pr_x [c(x)≠ c'(x)]; then, using an information-theoretic argument (inspired by a prior work <cit.>), one can show that the entangled sample complexity of exact learning is at least max{1/η_m, (log |𝒞|)/(nη_a)}. Putting this entangled lower bound with the separable upper bound, we get that
T_sep(𝒞) ≤ O(n·T_ent(𝒞)·min{η_a/η_m , T_ent(𝒞)}).
It is not hard to see that this bound is optimal as well for
the class of degree-2 functions, i.e.,
𝒞={f_A(x)=x^⊤ A x mod 2: A∈𝔽_2^n× n}.
For this class η_a=η_m=Θ(1), and it was recently shown <cit.> that T_sep(𝒞)=Θ(n^2) and T_ent(𝒞)=Θ(n).
A combinatorial parameter to lower bound QSQ complexity. A fundamental issue in proving our result is: what techniques can one use to prove these lower bounds? Prior to our work, <cit.> introduced two techniques based on differential privacy and communication complexity that give lower bounds on QSQ complexity. However, both these lower bounds are exponentially weak! In particular, the lower bounds that they could prove were linear in n for learning an n-bit concept class. Classically, there has been a sequence of works <cit.> with the goal of proving SQ lower bounds; eventually the notion of statistical query dimension was used to obtain close-to-optimal bounds for learning certain concept classes, and the breakthrough works of <cit.> used it to settle the SQ complexity of learning the planted k-biclique distribution.
In this work, our technical contribution is a combinatorial parameter that lower bounds QSQ complexity, akin to the classical statistical query dimension. To this end, we follow a three-step approach.
* We show that an algorithm that learns a concept class 𝒞 to error below ε in trace distance using QSQ queries of tolerance τ can also be used to solve the following decision problem: for a fixed σ such that min_{ρ∈𝒞} d_tr(ρ,σ) > 2(τ+ε), decide if an unknown state is some ρ∈𝒞 or equals σ. Calling the QSQ complexity of such a decision problem QDC(𝒞,σ), we show that:[A similar argument appeared in <cit.> for diagonal-QSQ complexity. We want to thank the authors for discussing their work with us during the completion of our work.]
QSQ(𝒞) ≥ max_σ{QDC(𝒞, σ) - 1: min_{ρ∈𝒞} d_tr(ρ,σ) > 2(τ+ε)}.
* Next, we define the notion of quantum statistical dimension QSD: for τ >0, a class of states 𝒞 and a σ∉𝒞,
QSD_τ(𝒞, σ) is the smallest integer d such that there exists a distribution ν over queries M satisfying Pr_{M ∼ν}[|Tr(M(ρ - σ))| > τ] ≥ 1/d for all ρ∈𝒞.
From an operational perspective QSD is natural, as it can be viewed as the smallest expected number of observables that can distinguish all states in 𝒞 from σ. We then show that if the decision algorithm succeeds with probability at least 1-δ, we have that:
QDC(𝒞, σ) ≥ (1-2δ)·QSD_τ(𝒞, σ).
* Even with this lower bound, proving bounds on QSD(𝒞, σ) is non-trivial. To this end, we further give two lower bounding techniques for QSD(𝒞, σ): one based on the variance of queries across 𝒞 (inspired by the work of Kearns <cit.>) and one based on average correlation (inspired by the work of Feldman <cit.>). We define two corresponding combinatorial quantities that can be associated with every class 𝒞 and use them to lower bound QSD(𝒞, σ).
Putting the three points together, the QSQ complexity of learning can be lower bounded by the variance bound and the average correlation bound, as summarized in the figure below.
We remark that although, our quantum combinatorial parameters are inspired by the classical works of Feldman et al. <cit.>, proving that they lower bound complexity and also giving lower bounds for the corresponding concept class using these parameters is non-trivial and is a key technical contribution of our work. Below, we apply these lower bounds to obtain our learning results.
QSQ versus noisy QPAC. We now sketch the proof of Result <ref>. In quantum learning theory, there are a few well-known function classes that are learnable using quantum examples: parities, juntas, DNF formulas, the coupon collector problem and codeword states. It was observed in <cit.> that the first three classes are already learnable in QSQ, primarily because a version of Fourier sampling is implementable in QSQ. In this work we first observe that the coupon collector problem and learning codeword states are also learnable in the QSQ framework. Simultaneously, a few works have shown exponential lower bounds for learning using separable measurements <cit.>, but all these lower bounds correspond to learning classes of mixed quantum states. Prior to our work, it was open whether there is a very simple structured function class whose quantum examples are hard for QSQ (in fact, given our polynomial relation between entangled and separable learning, it is conceivable that for the small class of function states, QSQ and quantum PAC are polynomially related as well). In this work, we look at the degree-2 concept class defined in Eq. (<ref>).
Recently it was observed <cit.> that this class is learnable using O(n) quantum examples with entangled measurements and O(n^2) quantum examples with separable measurements. Our main contribution is showing that the QSQ complexity of learning this class with tolerance τ is Ω(2^n·τ^2); in particular, a tolerance τ=1/poly(n) implies an exponential Ω(2^n) lower bound. The proof of this lower bound uses the variance technique to lower bound QSD (and in turn the QSQ complexity). The essential idea is as follows: let |ψ_A⟩=1/√(2^n)∑_x |x,x^⊤ A x⟩; then we can show that for every M with ‖M‖≤ 1, the variance
Var_A[Tr(Mψ_A)] := 𝔼_A [Tr(Mψ_A)^2]-(𝔼_A [Tr(Mψ_A)])^2
is at most O(2^{-n/2}). Proving this upper bound is fairly combinatorial, and it crucially involves understanding the properties of the ensemble {|ψ_A⟩}_A and its moments for A picked uniformly at random. Finally, we observe that the concept class can be learned from noisy quantum examples as in Eq. (<ref>) using poly(n,1/(1-2η)) examples. This gives us the claimed separation between QSQ and noisy QPAC learning, the “quantum analogue" of the seminal result of Blum et al. <cit.>, for a natural class and with non-constant error rate close to 1/2.
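As an empirical sanity check of this variance bound (our own illustration, not part of the paper's proof), the following numpy sketch estimates Var_A[Tr(Mψ_A)] for one randomly chosen observable with ‖M‖≤ 1 at a small n; the bound stated in the text of course holds for every such M.

import numpy as np

rng = np.random.default_rng(2)
n = 5
dim = 2 ** (n + 1)

def psi_A(A):
    """Example state |psi_A> = 2^{-n/2} sum_x |x, x^T A x mod 2> as a dense vector."""
    v = np.zeros(dim)
    for i in range(2 ** n):
        x = np.array([(i >> k) & 1 for k in range(n)])
        v[2 * i + (int(x @ A @ x) % 2)] = 1 / np.sqrt(2 ** n)
    return v

# a fixed observable with spectral norm at most 1: random Hermitian, rescaled
G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
M = (G + G.conj().T) / 2
M = M / np.linalg.norm(M, 2)

vals = []
for _ in range(300):
    A = rng.integers(0, 2, size=(n, n))
    v = psi_A(A)
    vals.append(float(np.real(v @ M @ v)))
print("empirical variance:", np.var(vals), "   2^{-n/2} =", 2 ** (-n / 2))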
New lower bounds.
Using our lower bounding technique, we consider fundamental problems in quantum computing and prove lower bounds for these tasks.
Approximate designs An application of our variance-based lower bounds shows that learning ensembles of states forming approximate Haar 2-designs requires Ω(τ^2· 2^n) queries. This includes interesting ensembles such as stabilizer states, which are known to be efficiently learnable with even separable measurements, and (n)-depth random circuits <cit.>.
Hidden subgroup problem. Coset states appear often in the hidden subgroup problem () <cit.>, a fundamental problem in quantum computing. It is well-known that coset states of the Abelian can be learned exactly from separable measurements in polynomial sample complexity and for non-Abelian groups, it was well-known that separable measurements <cit.> require exponential many copies to learn a coset state. A natural question is, what is the complexity of learning coset states? Given that the standard approach for is based of Fourier sampling and <cit.> showed that a version of Fourier sampling is easy in , it is natural to expect that is implementable in . Surprisingly, in this work, we show that, even for Abelian groups, the sample complexity of learning the unknown coset state is exponentially large. In particular, we show a lower bound of Ω(τ^2· 2^n) on the complexity of learning using (τ) queries and the proof of this is done using the average correlation method. Thus, the abelian hidden subgroup problem cannot be solved in given access to only coset states.
Shadow tomography. The past few years have seen a lot of works understanding shadow tomography <cit.>. The goal here is, given copies of an unknown quantum state ρ, the learner has to predict the expectation value [O_i ρ] of a collection of known observables {O_i}_i∈ [k] up to error ε. It is well-known to be solvable using (n,log k) copies of ρ. In <cit.> the authors show Θ(2^n) copies of ρ are necessary and sufficient for shadow tomography using separable measurements. To prove the lower bounds the authors construct a many-vs-one decision task where σ = 𝕀/2^n and
= {ρ_i = 𝕀+3ε O_i/2^n}.
Assuming that [O_i] =0 and [O_i^2]=2^n for all O_i, then an algorithm which solves the shadow tomography problem also solves the decision problem. Thus, a lower bound on the latter is also a lower bound on the sample complexity of shadow tomography. Here we give a quadratically stronger lower bound of Ω(4^n) when given access to only measurements, which we prove using the average correlation method. Our result shows that even separable measurements and not just statistics play a non-trivial role in shadow tomography.
Does tolerance matter? A natural question when discussing learning is, is there a natural distribution learning task that can be solved with tolerance τ_Q≥τ_C such that classical (τ_C) queries cannot solve the task but (τ_Q) can solve the task? Here we consider the class of bi-clique states introduced in the seminal work of Feldman et al. <cit.>. In their work they showed that for detecting a planted bipartite k-clique distributions when the planted k-clique has size n^1/2-ε (for constant ε>0), it is necessary and sufficient to make superpolynomial in n many (k/n) queries. Here we show that one can achieve the same query complexity quantumly but with (√(k/n)), i.e., with quadratically larger tolerance we can detect a k-biclique. A classical algorithm cannot solve this task with τ_C = √(k/n) queries.
A doubly exponential lower bound? So far all our lower bounds for learning n-qubit quantum states are exponential in n. A natural question is, can one prove a doubly exponential lower bound for some task? In this work, we show that the natural problem of testing purity, i.e., given a quantum state ρ return an estimate of [ρ^2], requires exp(2^nτ^2) many queries to solve. Previous work of <cit.> showed that it is necessary and sufficient to use Θ(2^n) many copies of ρ to test purity if we were allowed separable measurements, but our work considers the weaker model and proves a doubly-exponential lower bound. The proof of this uses Levy's lemma and the ensemble of Haar random states to lower bound the quantum statistical dimension in a manner similar to that of the variance based technique.
General search problems. While it is beyond the scope of this paper's main goals in showing separations in learning complexity, in Appendix <ref> we give a combinatorial parameter, based on , characterizing the complexity of general search problems. We remark that the problems we considered above can all be cast as a search problem. We show that this combinatorial parameter both upper and lower bounds the number of queries needed to solve a search problem.
§.§ Open questions
There are a few natural questions that our work opens up: (i) Can we show that for every concept class , we have that ≤ O(n·)?,[ We remark that for distribution-independent approximate learning, this inequality is true. This uses the result of <cit.> (Proposition <ref> below). They showed how to learn every concept class using O((log |C| )/ε) many quantum examples. By Sauer's lemma, we know that log ||≤ n·(), so the upper bound is O(n ()/ε). In <cit.> it was shown that the quantum entangled sample complexity of ε-PAC learning is Ω(()/ε) for every . Putting together both these bounds proves the inequality.]
(ii) Following <cit.> what is complexity of learning the output distribution of constant-depth circuits assuming we only use diagonal operators? (iii) Theoretically our work separates weak and strong error mitigation, but in practice there are often assumptions in the mitigation protocols, can we show theoretical separations even after making these assumptions? (iv) Classically it is well-known that several algorithms can be cast into the framework, is the same true quantumly? If so, that would suggest that as a unifying framework for designing new learning algorithms. (v) What is the complexity of the Hidden subgroup problem when given access to function states, instead of coset states (which is the case only in the standard approach).
Acknowledgements. We thank Ryan Sweke for useful discussions and sharing a preprint of their work <cit.>. We also thank the Quantum algorithms group at IBM, Eric Chitambar, and Felix Leditzky for discussions. SA and LS were partially supported by IBM through the IBM-Illinois Discovery Accelerator Institute.
Organization. In Section <ref> we introduce a few useful theorems and all the learning models we will be concerned with in this paper, in Section <ref> we prove our polynomial relation between entangled and separable measurements, in Section <ref> we give a few concept class that can be learned in and prove our main theorem which gives a lower bounding technique for learning, in Section <ref> we prove our exponential separation between noisy- and , in Section <ref> we give further examples of states for which one can show an exponential lower bound for learning and finally in Section <ref> we discuss our two applications in error mitigation and learning distributions.
§ PRELIMINARIES
§.§ Quantum Information Theory
Qubits are unit vectors in ^2 with a canonical basis given as |0⟩ = [ 1; 0 ] and |1⟩ = [ 0; 1 ]. Pure quantum states composed of n qubits are unit vectors in ^2^n. Following Dirac notation we indicate a state by |ψ⟩ and its conjugate transpose, an element of (^2^n)^*, by ⟨ψ|. Additionally, note that all quantum states are defined up to an absolute phase, i.e., a state |ψ⟩ is an equivalence class e^i ϕ|ψ⟩ for an arbitrary ϕ∈ℝ. Outer products are indicated by the notation |ψ⟩⟨ϕ|. Mixed states ρ are positive semi-definite linear operators on ^2^n such that [ρ] = 1. Any mixed state can be decomposed into a probability distribution over projectors onto pure states ρ = ∑_i λ_i |u_i⟩⟨u_i| where λ_i ≥ 0 and ∑_i λ_i = 1. Pure states correspond to rank 1 mixed states. We will often label the mixed state corresponding to a pure state |ψ⟩ by ψ (instead of |ψ⟩⟨ψ|). Positive operator valued measures (POVMs) capture the most general notion of quantum measurements. These are given by ensembles of positive semi-definite operators {E_i}_i such that ∑_i E_i = 𝕀. The probability of measurement outcome i is given by [E_i ρ]. Observables M are bounded Hermitian operators on ^2^n, representing a measurement with values assigned to the outcomes. The expectation value of an observable is given by [Mρ]. Quantum computers operate by applying gates (unitary matrices) to (ideally) pure states which evolve like |ψ'⟩ = U|ψ⟩ where U is some unitary. One important gate we will see is the Hadamard gate, given by H = 1/√(2)[ 1 1; 1 -1 ].
§.§ Notation
Throughout, for n≥ 1, we let [n]={1,…,n}. For quantum states |ψ⟩,|ϕ⟩, we denote by d_tr(|ϕ⟩,|ψ⟩) the trace distance between the states |ϕ⟩ and |ψ⟩, defined as
d_tr(|ϕ⟩,|ψ⟩)=√(1-|⟨ϕ|ψ⟩|^2),
and for mixed states ρ,σ, define d_tr(ρ,σ)=(1/2)‖ρ-σ‖_1 (where ‖·‖_1 is the Schatten-1 norm of the matrix). For mixed states ρ,σ, the operational interpretation of the trace distance is given by
d_tr(ρ,σ)=(1/2) max_{M, ‖M‖ ≤ 1} |Tr(M (ρ - σ))|.
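The following numpy sketch (our own illustration) checks this variational characterization numerically: for two random mixed states it compares (1/2)‖ρ-σ‖_1 with the value attained by the observable M = 2P_+ - 𝕀, where P_+ projects onto the positive part of ρ-σ.

import numpy as np

rng = np.random.default_rng(0)

def rand_state(d, rng):
    """A random mixed state: normalized positive semi-definite matrix."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.real(np.trace(rho))

d = 4
rho, sigma = rand_state(d, rng), rand_state(d, rng)
evals, evecs = np.linalg.eigh(rho - sigma)
trace_dist = 0.5 * np.sum(np.abs(evals))                    # (1/2) * Schatten-1 norm of rho - sigma
P = evecs[:, evals > 0] @ evecs[:, evals > 0].conj().T      # projector onto the positive part
M = 2 * P - np.eye(d)                                       # optimal observable, ||M|| = 1
print(trace_dist, 0.5 * abs(np.trace(M @ (rho - sigma))))   # the two values agree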
For distributions P,Q:{0,1}^n→ [0,1], we write x∼ P to mean that x is sampled from P. We indicate sampling x uniformly from a set by x∼ that set; for example, x∼{0,1}^n and ρ∼𝒞 respectively denote sampling a bit string or a state from an ensemble uniformly at random. We write d_TV(P,Q) for the total variation distance between P and Q, defined as d_TV(P,Q)=(1/2)∑_x |P(x)-Q(x)|. Similarly, define the Hellinger distance between P and Q as
d_H(P,Q)^2=1-∑_x √(P(x) Q(x)).
§.§ Useful theorems
Let f:S^d-1→ be a function on the d-dimensional unit sphere S^d-1. Let k be such that for every |ϕ⟩,|ψ⟩∈ S^d-1, we have that
|f(|ψ⟩)-f(|ϕ⟩)|≤ k·ϕ-ψ_2,
then there exists a constant C>1 such that
[|f(ψ)-[f(ψ)]|≥ε]≤ 2exp(-Cd ε^2/k^2),
where the probability and expectations are over the Haar measure of quantum states.
Let the binary random variable 𝐛∈{0,1} be uniformly distributed. Suppose an algorithm is given |ψ_𝐛⟩ (for unknown 𝐛) and is required to guess whether 𝐛=0 or 𝐛=1. It will guess correctly with probability at most 1/2+1/2√(1-|⟨ψ_0|ψ_1⟩|^2).
Note that if we can distinguish |ψ_0⟩,|ψ_1⟩ with probability ≥ 1-δ, then |⟨ψ_0,ψ_1⟩|≤ 2√(δ(1-δ)).
The class of degree-2 phase states {1/√(2^n)∑_x (-1)^x^⊤ A x|x⟩:A∈𝔽_2^n× n} can be learned using O(n) entangled measurements in time O(n^3).
The learning algorithm uses the Bell-sampling procedure: given two copies of |ϕ_A⟩=1/√(2^n)∑_x (-1)^x^⊤ A x|x⟩, perform n CNOTs between the first copy and second copy and measure the second register to obtain a uniformly random y∈𝔽_2^n. The resulting quantum state is
1/√(2^n)∑_x (-1)^x^⊤ A x+(x+y)^⊤ A(x+y)|x⟩=(-1)^y^⊤ Ay/√(2^n)∑_x (-1)^x^⊤(A+A^⊤)· y|x⟩.
The learning algorithm then applies the n-qubit Hadamard transform and measures to obtain bit string (A+A^⊤)· y. Repeating this process O(n) many times, one can learn n linearly independent constraints about A. Using Gaussian elimination, this procedure allows the learner to learn the off-diagonal elements of A. In order to learn the diagonal elements of A a learning algorithm applies the operation |x⟩→ (-1)^x_i· x_j|x⟩ if A_ij=1 for every i≠ j. The resulting quantum state is ∑_x (-1)^∑_i x_i A_ii|x⟩ and the learner can apply the n-qubit Hadamard transform to learn the diagonal elements of A.
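To illustrate the Bell-sampling step, the numpy sketch below (our own, not code from the paper) builds the post-measurement state of the first register analytically for a uniformly random y, applies the n-qubit Hadamard transform, and checks that the measurement outcome equals (A+A^⊤)y. It simulates a single round only; the Gaussian elimination and the recovery of the diagonal of A are omitted.

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.integers(0, 2, size=(n, n))          # unknown matrix over F_2

def bits(i, n):
    """Big-endian bit vector of the integer i on n bits."""
    return np.array([(i >> k) & 1 for k in range(n)][::-1], dtype=int)

def quad(x, A):
    return int(x @ A @ x) % 2

def bell_sample(A, rng):
    n = A.shape[0]
    y = rng.integers(0, 2, size=n)           # outcome of measuring the second register
    # first register after the CNOTs and the measurement, built analytically:
    # amplitudes proportional to (-1)^{x^T A x + (x+y)^T A (x+y)}
    amps = np.array([(-1) ** ((quad(bits(i, n), A) + quad((bits(i, n) + y) % 2, A)) % 2)
                     for i in range(2 ** n)], dtype=float) / np.sqrt(2 ** n)
    # n-qubit Hadamard transform, then a computational-basis measurement
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    probs = (Hn @ amps) ** 2
    return y, bits(int(np.argmax(probs)), n)

y, z = bell_sample(A, rng)
print("y =", y, " measured z =", z, " expected (A+A^T)y =", ((A + A.T) @ y) % 2)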
For distributions p,q:𝒳→ [0,1], define |ψ_p⟩=∑_x∈𝒳√(p(x))|x⟩ and |ψ_q⟩ similarly. Then
(|ψ_p⟩,|ψ_q⟩)^2_≤ 2(p,q).
In order to see the fact, first we have that
(|ψ_p⟩,|ψ_q⟩)^2_=1-⟨ψ_p|ψ_q⟩^2=1-(∑_x √(p(x)q(x)))^2.
By the definition of the Hellinger distance, we have that d_H(p,q)^2=1-∑_z √(p(x)q(x)), so we have
(|ψ_p⟩,|ψ_q⟩)^2_=1-(1-∑_x √(p(x)q(x)))^2=2d_H(p,q)^2-d_H(p,q)^4≤ 2d_H(p,q)^2≤ 2(p,q),
where the final inequality used <cit.>.
For a distribution p:1^n→ [0,1], let |ψ_p⟩=∑_x √(p(x))|x⟩. Suppose there exists an algorithm that makes t queries and learns p up to total variation distance ε^2, then there exists an algorithm that makes t queries and learns |ψ_p⟩ up to trace distance √(2)ε.
By Fact <ref>, first observe that
(|ψ_p⟩,|ψ_q⟩)^2_≤ 2(p,q).
Now the lemma statement follows by immediately: suppose there exists an algorithm that makes queries to |ψ_p⟩ and outputs a q such that (p,q)≤ε^2, then that implies that (|ψ_p⟩, |ψ_q⟩) ≤√(2)ε.
[Discriminating coherent encodings of distributions]
For distributions D, D_0 over some domain X, let |ψ⟩ = ∑_x ∈ X√(D(x))|x⟩ and |ψ_0⟩ = ∑_x ∈ D_0√(D_0(x))|x⟩. We have that
max_ϕ: X → [-1,1] |∑_x (D(x) - D_0(x))ϕ(x)| = 2(D, D_0).
Choose ϕ = I(D(x) > D_0(x)) - I(D_0(x) ≤ D(x)), where I(·) is an indicator function from Boolean clauses to { 1, 0 } which evaluates to 1 if its argument evaluates to true and evaluates to 0 if its argument is false.
|∑_x ∈ X (D(x) - D_0(x)) ϕ(x) | = ∑_x ∈ X; D(x) > D_0(x) (D(x) - D_0(x)) + ∑_x ∈ X; D(x) ≤ D_0(x) (D_0(x) - D(x))
= ∑_x ∈ X |D(x) - D_0(x)| = 2 (D, D_0),
hence proving the fact.
For distinct A,B∈𝔽_2^n× n, we have that
Pr_{x∼{0,1}^n}[x^⊤ Ax ≠ x^⊤ Bx] ≥ 1/4.
Pr_{x∼{0,1}^n}[x^⊤ Ax ≠ x^⊤ Bx] = Pr_{x∼{0,1}^n}[x^⊤ Ax ⊕ x^⊤ Bx ≠ 0] ≥ 1/4, where the inequality follows from the Schwartz-Zippel lemma for Boolean functions <cit.>.
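A quick Monte Carlo illustration of this fact (our own, with an arbitrary choice of A and B differing in a single entry):

import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.integers(0, 2, size=(n, n))
B = A.copy()
B[0, 1] ^= 1                                            # a distinct matrix over F_2
xs = rng.integers(0, 2, size=(20000, n))
disagree = np.mean([(x @ A @ x) % 2 != (x @ B @ x) % 2 for x in xs])
print("empirical Pr[x^T A x != x^T B x] =", disagree)   # consistent with the 1/4 lower bound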
§.§ Learning models
In this section we first describe the learning models we will be concerned with in this paper.
Classical learning. Valiant <cit.> introduced the classical Probably Approximately Correct (PAC) learning model. In this model, a concept class 𝒞⊆{c:{0,1}^n→{0,1}} is a collection of Boolean functions. The learning algorithm obtains labelled examples (x,c(x)) where x∈{0,1}^n is uniformly random and c∈𝒞 is the unknown target function.[More generally in PAC learning, there is an unknown distribution D:{0,1}^n→ [0,1] from which x is drawn. Throughout this paper we will be concerned with uniform-distribution learning, i.e., D is the uniform distribution, so we describe the learning model for the uniform distribution for simplicity.] The goal of an (ε,δ)-learning algorithm is the following: for every c∈𝒞, given labelled examples {(x^i,c(x^i))}_i, with probability ≥ 1-δ (over the randomness of the labelled examples and the internal randomness of the algorithm), output a hypothesis h:{0,1}^n→{0,1} such that Pr_x [c(x)=h(x)]≥ 1-ε. The (ε,δ)-sample complexity of a learning algorithm is the maximal number of labelled examples used, maximized over all c∈𝒞. The (ε,δ)-sample complexity of learning 𝒞 is the minimal sample complexity over all (ε,δ)-learners for 𝒞. Similarly, the (ε,δ)-time complexity of learning 𝒞 is the total number of time steps used by an optimal (ε,δ)-learner for 𝒞.
Quantum learning. The quantum was introduced by Bshouty and Jackson <cit.> wherein, they allowed the learner access to quantum examples of the form
|ψ_c⟩=1/√(2^n)∑_x∈{0,1}^n|x,c(x)⟩.
Note that measuring |ψ_c⟩ in the computational basis produces a classical labelled example, so quantum examples are at least as strong as classical examples. Understanding their strength and weakness has been looked at by several works (we refer an interested reader to the survey <cit.>). Like the classical complexities, one can similarly define the (ε,δ)-sample and time complexity for learning as the quantum sample complexity (i.e., number of quantum examples |ψ_c⟩) used and quantum time complexity (i.e., number of quantum gates used in the algorithm) of an optimal (ε,δ)-learner for .
Quantum learning with classification noise. Classically, the η-classification noise model is defined as follows: for an unknown c∈𝒞, a learning algorithm is given a uniformly random x∈{0,1}^n and b∈{0,1}, where b=c(x) with probability 1-η and b=1⊕ c(x) with probability η. In the same work, Bshouty and Jackson <cit.> defined quantum learning with classification noise, wherein a learning algorithm is given access to
|ψ^n_c⟩=1/√(2^n)∑_x∈{0,1}^n|x⟩⊗ (√(1-η)|c(x)⟩+√(η)|1⊕ c(x)⟩).
Such quantum examples have been investigated in prior works <cit.>.
Learning with entangled and separable measurements. Observe that in the usual definition of QPAC learning above, a learning algorithm is given access to |ψ_c⟩^⊗ T and needs to learn the unknown c∈𝒞. In this paper we distinguish between the case where the learner uses entangled measurements, i.e., performs an arbitrary operation on all copies of |ψ_c⟩, and the setting where the learner uses separable measurements, i.e., performs a single-copy measurement on every copy of |ψ_c⟩ in the learning algorithm. When discussing learning with entangled and separable measurements, we will be concerned with exact learning, i.e., with probability ≥ 2/3 the learner needs to identify c. We denote by T_ent(𝒞) the sample complexity of exactly learning 𝒞 with entangled measurements and by T_sep(𝒞) the sample complexity with separable measurements.
Quantum statistical query learning.
We now discuss the QSQ model, following the definitions given in <cit.>. We first discuss the classical statistical query (SQ) model for learning an unknown concept c^⋆∈𝒞. Classically, the learner has access to a statistical query oracle that, on input a function ϕ:{0,1}^{n+1}→ [0,1] and a tolerance τ, returns a number α satisfying
|α - 𝔼_{x∼{0,1}^n}[ϕ(x,c^*(x))]| ≤ τ.
A classical SQ algorithm can adaptively make a sequence of queries {(ϕ_i,τ_i)}_i and, based on the responses {α_i}, with probability ≥ 1-δ it outputs a hypothesis h:{0,1}^n→{0,1}. The goal of the classical SQ algorithm is to output an h such that Pr_x [h(x)=c(x)]≥ 1-ε. The query complexity of a classical SQ algorithm is the number of queries the algorithm makes, and the time complexity is the total number of gates used by the algorithm and in the description of the hypothesis.
A natural way to extend the learning model is to allow the algorithm quantum statistical queries. In the classical case, one can think of the input ϕ to the oracle as a specification of a statistic about the distribution of examples (x,c^*(x)), and the output of the oracle is an estimation of ϕ: one can imagine that the oracle receives i.i.d. labeled examples (x,c^*(x)) and empirically computes an estimate of ϕ, which is then forwarded to the learning algorithm. In the quantum setting, one can imagine the analogous situation where the oracle receives copies of the quantum example state |ψ_c^*⟩, and performs a measurement indicated by the observable M on each copy and outputs an estimate of ⟨ψ_c^*| M | ψ_c^*⟩.
Relaxing the assumption of example states, in order to learn an unknown (mixed) quantum state ρ in the QSQ model, the learner makes Qstat queries: such a query takes as input an operator M∈ℂ^{2^{n+1}× 2^{n+1}} and a tolerance τ and outputs a τ-approximation of Tr(Mρ), i.e.,
Qstat: (M,τ) ↦ α ∈ [Tr(Mρ)-τ, Tr(Mρ)+τ].
In order to learn a concept class 𝒞 using quantum examples, we define ρ=|ψ_c⟩⟨ψ_c|, so the action of the oracle is defined as
Qstat: (M,τ) ↦ α ∈ [⟨ψ_c|M|ψ_c⟩-τ, ⟨ψ_c|M|ψ_c⟩+τ].
In this case, the goal of the learner is to output a hypothesis quantum state σ that satisfies d_tr(ρ,σ)≤ε. If ρ = ψ_f and σ = ψ_h, this translates to Pr_{x∼{0,1}^n}[f(x) = h(x)] ≥ 1 - √(ε). Thus, without loss of generality we will often talk about learning states with respect to trace distance, even for learning example states. Our results generally do not depend on assuming that the learner outputs an example state even when the concept class is composed of example states. Clearly, if the learning problem is hard without such a restriction, it is no easier with such a restriction.
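As a toy illustration of a Qstat query (a sketch under our own naming and setup, not an interface from the paper), the following numpy snippet returns a τ-accurate estimate of Tr(Mρ) for the example state of the single-bit concept c(x)=x, using the Pauli-Z observable on the label register.

import numpy as np

rng = np.random.default_rng(1)

def qstat(rho, M, tau, rng):
    """Return some alpha with |alpha - Tr(M rho)| <= tau (here: the exact value plus bounded noise)."""
    exact = float(np.real(np.trace(M @ rho)))
    return exact + rng.uniform(-tau, tau)

# example state |psi_c> = 2^{-n/2} sum_x |x, c(x)> for n = 1 and c(x) = x
n, c = 1, (lambda x: x)
dim = 2 ** (n + 1)
psi = np.zeros(dim)
for x in range(2 ** n):
    psi[2 * x + c(x)] = 1.0 / np.sqrt(2 ** n)
rho = np.outer(psi, psi)
M = np.kron(np.eye(2 ** n), np.diag([1.0, -1.0]))   # Z on the label qubit, ||M|| = 1
print(qstat(rho, M, tau=0.05, rng=rng))             # within 0.05 of Tr(M rho) = 0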
We emphasize that the learning algorithm is still a classical randomized algorithm and only receives statistical estimates of measurements on quantum examples. The quantum query complexity of the algorithm is the number of queries the algorithm makes and the quantum time complexity is the total number of gates used by the algorithm and in the description of the hypothesis. There are three ways to motivate the model
* Clearly any binary measurement {M, 𝕀-M} can be simulated with a query to M or 𝕀-M. In the opposite direction, any observable M such that ‖ M ‖≤ 1 can be converted into the POVM {(𝕀+M)/2, (𝕀-M)/2}. Thus, a Qstat query is essentially the same as approximately sampling from a binary POVM up to total variation distance Θ(τ). One can think of QSQ as a noisy variant of binary measurements. From a theoretical perspective, noisy two-outcome separable measurements are weaker (and easier to implement) than arbitrary separable measurements, which are in turn weaker (and easier to implement) than entangled measurements. So it is useful to understand the power of such noisy measurements in quantum learning theory, and QSQ captures this question in a theoretical framework.
* One could envision a situation where quantum states ρ are prepared in the “cloud" and the classical learning algorithm needs to only interact with the cloud classically. An efficient model allows a quantum advantage in learning in this framework.
* The model naturally extends recent works <cit.> wherein they consider the limitations of classical algorithms for learning a quantum state ψ_U=U|0^n⟩, i.e. they consider the model where M is diagonal specifiable as M=∑_x ϕ(x)|x⟩⟨x|, then
⟨ψ_U|M|ψ_U⟩=∑_x ϕ(x)⟨ x|U|0^n⟩^2=∑_x ϕ(x)P_U(x)=_x∼ P_U[ϕ(x)],
which is precisely α_ϕ they assume access to, in order to learn the unknown U.
Throughout this paper, for notational convenience we use the following conventions: (i) for an n-bit problem, when we do not specify a tolerance for the oracle, we implicitly assume that the tolerance is τ=1/poly(n); (ii) we always make queries with an operator M that satisfies ‖M‖≤ 1, so we do not explicitly state this when discussing queries; (iii) we say an n-bit concept class 𝒞 is QSQ learnable if 𝒞 can be learned using poly(n) many QSQ queries, each with tolerance τ=1/poly(n) and an observable M implementable using poly(n) many gates.
§ RELATING SEPARABLE AND ENTANGLED MEASUREMENTS
Before proving our main theorem, we will use the following proposition, which was proven earlier in <cit.> in the context of learning quantum channels. We restate their proposition in the context of learning pure states using parameters that suit our application.
Let 𝒞⊆{c:{0,1}^n→{0,1}} and ε>0. Given
T=O((log|𝒞| + log(1/δ))/ε)
copies of |ψ_c⟩ = 1/√(2^n)∑_x |x, c(x)⟩ for an unknown c∈𝒞, there exists an algorithm that uses only separable measurements and, with probability ≥ 1-δ, outputs a c'∈𝒞 such that Pr_x [c(x)=c'(x)]≥ 1-ε.
Note that their proposition deals with general states and T = O((log|𝒞| + log(1/δ))/ε^2), where ε is now the error with respect to trace distance. For our purposes, the factor of 1/ε^2 is improved to 1/ε by the fact that Pr_x[f(x)≠ h(x)] = ε implies d_tr(ψ_f,ψ_h) = Θ(√(ε)).
Let 𝒞⊆{c:{0,1}^n→{0,1}} be a concept class and
η_m=min_{c≠ c'∈𝒞} Pr_x[c(x)≠ c'(x)], η_a=𝔼_{c,c'∈𝒞} Pr_x[c(x)≠ c'(x)].
Then we have that
T_sep(𝒞) ≤ O(n·T_ent(𝒞)·min{η_a/η_m , T_ent(𝒞)}).
Furthermore, there exists a 𝒞 for which this inequality is tight.
First observe that
T_sep(𝒞) ≤ (2/η_m)·log |𝒞|.
This is easy to see: fix ε=η_m/2 in Proposition <ref> and consider the resulting separable approximate learning algorithm that, given copies of |ψ_c⟩, outputs c' such that Pr_x [c(x)≠ c'(x)]≤ε; then c=c' by definition of η_m, hence this algorithm is a separable exact learner.
Next, we prove two lower bounds on T_ent(𝒞):
T_ent(𝒞) ≥ max{1/η_m, (log|𝒞|)/(nη_a)}.
To see the first lower bound in Eq. (<ref>), observe the following: consider the c,c'∈ for which _x[c(x)≠ c'(x)]=η_m, then every exact learning algorithm needs to distinguish between c,c'. Since ⟨ψ_c |ψ_c'⟩=1-η_m, by Fact <ref>, this implies a lower bound of T=Ω(1/η_m) many quantum examples to distinguish between c,c' with bias Ω(1).
To see the second lower bound in Eq. (<ref>), first note that 1-η_a=_c,c'∈_x[c(x)= c'(x)]. Next, observe that
≥log||/nη_a.
The proof of this is similar to the information-theoretic proof in <cit.>. We prove the lower bound for using a three-step information-theoretic technique. Let 𝐀 be a random variable that is uniformly distributed over . Suppose 𝐀=c_V, and let 𝐁=𝐁_1…𝐁_T be T copies of the quantum example
|ψ_c⟩=1/√(2^n)∑_x∈1^n|x,c(x)⟩
for c∈. The random variable 𝐁 is a function of the random variable 𝐀.
The following upper and lower bounds on I(𝐀:𝐁) are similar to <cit.> and we omit the details of the first two steps here.
* I(𝐀:𝐁)≥Ω(log||) because 𝐁 allows one to recover 𝐀 with high probability.
* I(𝐀:𝐁)≤ T· I(𝐀:𝐁_1) using a chain rule for mutual information.
* I(𝐀:𝐁_1)≤ O(n·η_a).
Proof (of 3). Since 𝐀𝐁 is a classical-quantum state, we have
I(𝐀:𝐁_1)= S(𝐀)+S(𝐁_1)-S(𝐀𝐁_1)=S(𝐁_1),
where the first equality is by definition and the second equality uses S(𝐀)=log || since 𝐀 is uniformly distributed over , and S(𝐀𝐁_1)=log || since the matrix
σ=1/||∑_c∈|c⟩⟨c|⊗|ψ_c⟩⟨ψ_c|
is block-diagonal with || rank-1 blocks on the diagonal. It thus suffices to bound the entropy of the (vector of singular values of the) reduced state of 𝐁_1, which is
ρ=1/||∑_c∈|ψ_c⟩⟨ψ_c|.
Let σ_0≥σ_1≥⋯≥σ_2^n+1-1≥ 0 be the singular values of ρ. Since ρ is a density matrix, these form a probability distribution. Now observe that σ_0≥ 1-η_a: consider the vector u=1/||∑_c'∈|ψ_c'⟩ and observe that
u^⊤ρ u =1/||^3∑_c,c',c”∈⟨ψ_c|ψ_c'⟩⟨ψ_c|ψ_c”⟩
=_c[_c'[⟨ψ_c|ψ_c'⟩]]·[_c”[⟨ψ_c|ψ_c”⟩]]
≥(_c,c'[⟨ψ_c|ψ_c'⟩])·(_c,c”[⟨ψ_c|ψ_c”⟩])=(_c,c'∈_x[c(x)=c'(x)])^2≥ 1-2η_a,
where the first inequality is by Chebyshev's sum inequality (since all the inner products are non-negative) and the second inequality follows from the definition of η_a. Hence we have that σ_0=max_u{u^⊤ρ u / u^⊤ u}≥ 1-2η_a (where we used that u_2≤ 1).
Let 𝐍∈{0,1,…,2^n+1-1} be a random variable with probabilities σ_0,σ_1,…,σ_2^n+1-1, and 𝐙 an indicator for the event “𝐍≠ 0.” Note that 𝐙=0 with probability σ_0≥ 1-2η_a, and H(𝐍|𝐙=0)=0. By a similar argument as in <cit.>, we have
S(ρ) =H(𝐍)=H(𝐍,𝐙)=H(𝐙)+H(𝐍|𝐙)
=H(σ_0)+σ_0· H(𝐍|𝐙=0) + (1-σ_0)· H(𝐍|𝐙=1)
≤ H(η_a) + η_a(n+1)
≤ O(η_a(n+log (1/η_a))
using H(α)≤ O(αlog (1/α)).
Combining these three steps implies T=Ω(log |𝒞| / (nη_a)). Now, putting the relations between T_sep(𝒞) and T_ent(𝒞) together, we get
T_sep(𝒞) ≤ n·(η_a/η_m)·T_ent(𝒞) ≤ n·T_ent(𝒞)^2,
hence we have the desired upper bound as in the theorem statement[We state the theorem as below, since it is a priori unclear why 1/η_m is a lower bound on T_ent(𝒞).]
T_sep(𝒞) ≤ O(n·T_ent(𝒞)·min{η_a/η_m, T_ent(𝒞)}).
To show that this inequality is optimal, observe the following: if 𝒞 is the class of degree-2 phase states, i.e., 𝒞={f_A(x)=x^⊤ A x: A∈𝔽_2^n× n}, then η_m, η_a = Θ(1) by Fact <ref>. We saw in Fact <ref> that this class can be learned using Θ(n) entangled measurements, so T_ent(𝒞)=Θ(n), and the above upper bound implies T_sep(𝒞)=O(n^2), which was shown to be optimal in <cit.>.
§ LOWER BOUNDS FOR QUANTUM STATISTICAL QUERY LEARNING
Here we prove our main theorem, which provides combinatorial quantities one can use to lower bound the QSQ complexity of various tasks. The techniques and parameters used in this section are inspired by several seminal works on the classical SQ model <cit.>. We first define statistical decision problems, then define the quantum statistical dimension (QSD), which lower bounds the decision problem complexity, and finally discuss the variance and average correlation lower bounds on QSD. For convenience, we fix some notation for this section: throughout, 𝒞 is a collection of n-qubit quantum states. We let QSQ_τ^{ε,δ}(𝒞) be the complexity of learning 𝒞 to accuracy ε in trace distance using QSQ queries of tolerance τ while succeeding with probability at least 1-δ. Next, QDC is the complexity of the associated decision problem and QSD is the quantum statistical dimension; we also use an average correlation bound on QSD.
§.§ Learning is as hard as deciding
Let τ∈ [0,1] and let σ∉𝒞. A quantum statistical decision problem for (𝒞, σ) is defined as: for an unknown state ρ, given Qstat(τ) access to ρ, decide if ρ∈𝒞 or ρ=σ. Let QDC_τ^δ(𝒞, σ) be the number of Qstat(τ) queries made by the best algorithm for the decision problem that succeeds with probability at least 1-δ.
We now prove our first lemma that is actually a lower bound on learning the concept class . We remark that a similar lemma appears for classical in <cit.>, we want to thank the authors for discussing and sharing their manuscript during the completion of our work.
Let ε≥τ > 0 and σ∉𝒞 be such that min_{ρ∈𝒞} d_tr(ρ, σ) > 2(τ + ε). Let QSQ_τ^{ε,δ}(𝒞) be the number of Qstat(τ) queries made by a QSQ algorithm that, on input ρ, outputs π such that d_tr(π, ρ) ≤ε with probability ≥ 1-δ. Then
QSQ_τ^{ε, δ}(𝒞) ≥ QDC_τ^δ(𝒞,σ) - 1.
We show this by solving the statistical quantum decision problem by querying a _τ^ε, δ() learning algorithm . For ρ∈𝒞, outputs, with probability ≥ 1-δ, a classical description of quantum state π such that (ρ, π) ≤ε. Note that for σ∉𝒞, the output of is not well-defined and we assume that can output anything.
Let the output of be π. If π is not a valid quantum state, return “ρ = σ”. We then check if min_ρ∈(π, ρ) > ε. If yes, return “ρ = σ”. At this point, we know the classical description of both π and σ and also know that there exists some ν∈, such that (π, ν) ≤ε. We can find such ν that is closest to π, as well as an operator Π_+ which is a projector onto the positive part of the spectrum of the hermitian operator ν-σ. Finding this may be computationally difficult, but does not require additional (τ) queries. We then query (τ) with Π_+ to obtain a response R. If | R - (Π_+ σ) | ≤τ, return “ρ = σ”. Return “ρ∈” otherwise.
The algorithm outputs “ρ = σ" on all inputs ρ = σ with certainty. On input ρ∈, the algorithm returns, with probability at least (1-δ) a description of a state π that is ε close to the input. Our algorithm then uses this information to find a state ν∈, such that d_(π, ν) ≤ε. We have from reverse triangle inequality that:
|(Π_+ (ρ - σ) )| ≥ ||(Π_+ (σ - ν) )|_(σ, ν) - |(Π_+ (ν-ρ))|_< 2ε | ≥(ν, σ) - 2ε > 2 τ,
where we used that |(Π_+ (ν-ρ))| ≤(ν, ρ) ≤(ν, π) + (ρ, π) < 2ε.[Note that this inequality is maximized if ν≠ρ. This can happen if the input state ρ∈ is less than 2ε far from another state ρ' ∈ and the learning algorithm outputs π that is closer to ρ' ∈. ] It follows that:
|R- (Π_+ σ)| ≥ |(Π_+(ρ-σ))|-|R-(Π_+ ρ)|_≤τ > τ,
The algorithm outputs “ρ∈” with probability at least (1-δ), as expected.
For completeness, we also include a proof of a lower bound on the learning complexity by a decision problem hidden completely (that is, σ∈) inside of .
Let ⊂ and let σ∈, σ∉, (, σ) > ε and ε≥τ > 0. Then:
^ε, δ_τ() ≥_τ^δ(, σ).
Let be a statistical ε, δ learning algorithm for that uses (τ) queries.
On input ρ∈, the algorithm outputs (with probability at least 1-δ) a state ν, such that (ν, ρ) ≤ε and uses _τ^ε, δ() many queries. Output “ρ = σ” if (ν, σ) < ε, otherwise output “ρ∈”. The algorithm clearly succeeds with probability at least 1-δ.
§.§ Quantum statistical dimension to bound the decision problem
With this lemma, in order to lower bound learning it suffices to lower bound , which we do by the quantum statistical dimension that we define now.
Let τ∈ [0,1], let μ be a distribution over a set 𝒞 of n-qubit quantum states, and let σ∉𝒞 be an n-qubit state. Define the maximum covered fraction:
κ_τ(μ, σ) = max_{M: ‖M‖ ≤ 1} {Pr_{ρ∼μ}[|Tr(M(ρ - σ))| > τ]}.
The quantum statistical dimension QSD is:
QSD_τ(𝒞, σ) = sup_μ [κ_τ(μ, σ)]^{-1},
where the supremum is over distributions μ over 𝒞.
This definition is essentially the same as Feldman's definition of randomized statistical dimension in <cit.>, but uses the difference between the expectation values of quantum observables. Sections 4, 5 and 6 of our work show that this has several interesting consquences. The following lemma, following similarly from Feldman's work <cit.>, will be convenient later:
Let τ > 0, let 𝒞 be a set of quantum states and let σ∉𝒞 be another quantum state. Let d be the smallest integer such that there exists a distribution ν over queries M satisfying
∀ρ∈𝒞: Pr_{M ∼ν}[|Tr(M(ρ - σ))| > τ] ≥ 1/d;
then d=QSD_τ(𝒞,σ).
See also <cit.>, which we generalize here. Suppose that the tolerance is fixed to τ and let ℳ be the set of all valid queries. Define G: ×ℳ→{ 0, 1 } as G(ρ, M) = δ[|(M(ρ-σ))| > τ], where δ[·] is the indicator function. Let μ be a distribution over and let ν be a distribution over ℳ. Consider the bilinear function of μ, ν:
F(μ, ν) = ∫_ℳ d ν(M) ∫_ d μ(ρ) G(M, ρ) = Pr_ρ∼μPr_M ∼ν[|(M(ρ-σ))| > τ].
Note that ℳ forms a compact subset of ^d × d. We further assume that is closed and thus also forms a compact subset of ^d × d. Then, the spaces of probability distributions on ℳ and form compact, convex spaces with respect to the weak-* topology. It follows by Sion's minimax theorem that:
min_μmax_ν F(μ, ν) = max_νmin_μ F(μ, ν) =: 1/d,
where the optimization is over possible distributions μ over and distributions ν over ℳ.
For a distribution μ over , there exists an optimal distinguishing measurement M ∈ℳ, from which:
min_μmax_ν F(μ, ν) = min_μmax_M ∈ℳPr_ρ∼μ [|(M(ρ-σ))| > τ].
Observe that:
d = sup_μ (max_M ∈ℳPr_ρ∼μ [|(M(ρ-σ))| > τ])^-1,
which is the definition of _τ by Eq. <ref>. Similarly, we have that:
d = ( max_νmin_μ F(μ, ν))^-1 = inf_ν (min_ρ∈Pr_ρ∼μ [|(M(ρ-σ))| > τ])^-1.
For a given distribution ν over ℳ, it then holds for all ρ∈ that _M ∼ν[|(M(ρ-σ))| > τ] ≥ 1/d. This is the definition in Lemma <ref>.
We now show that the complexity is lower bounded by .
For every σ∉ and τ∈ [0,1], we have that
_τ^δ(,σ)≥ (1-2δ) _τ(,σ).
Let be the best algorithm that solves (, σ) with probability at least 1-δ using q (τ) queries M_1, … M_q chosen according to the internal randomness of . Suppose, for contradiction, that the response to every such query was (M_i σ). Let p_ρ = _[ ∃ i ∈ [q] | |(M_i (ρ-σ)| > τ] be the probability that ρ∈ can be distinguished from σ by at least one of queries. If p_ρ≤ 1-2δ, then with probability 2δ, the responses (M_i σ) are valid (τ) responses. By correctness, on input ρ = σ, the algorithm can output “ρ∈” with probability at most δ. This however means that on input ρ∈, the algorithm can output “ρ∈” with probability at most δ (since the responses did not change), which contradicts the algorithm correctness. It follows that p_ρ≥ 1-2δ for every ρ∈ and with probability 1-2δ, there exists a M_i that distinguishes ρ and σ. Running and picking one of its queries uniformly randomly then gives _M[ |(M(ρ-σ)| > τ] ≥1-2δ/q. Lemma <ref> then implies that q ≥ (1-2δ)_τ(,σ).
§.§ Variance and correlation lower bound on quantum statistical dimension
We now present our main lower bound theorem, wherein we show that there are two combinatorial parameters that can be used to lower bound , which in turn lower bounds sample complexity in the model. Throughout the paper we will use these two two parameters to prove our lower bounds.
Let τ > 0, be the class of n-qubit states. Then
* Variance bound: Let μ be a distribution over , such that 𝔼_ρ∼μ[ρ] ∉.
Then:
_τ(, _ρ∼μ[ρ]) ≥τ^2·min_M, M≤ 1(_ρ∼μ [[ρ M]])^-1,
where
_ρ∼μ[(ρ M)]=_ρ∼μ[(ρ M)^2]-(_ρ∼μ[(ρ M)])^2.
* Average correlation: For a full-rank quantum state σ∉, define ρ̂ := (ρσ^-1 - 𝕀) and:
γ(, σ) = 1/||^2∑_ρ_1, ρ_2 ∈ |(ρ̂_1 ρ̂_2 σ)|, κ^γ_τ(_0, σ) := max_' ⊆_0{|'|/|_0| : γ(', σ) > τ}.
Let _τ(, σ) = sup__0 ⊆ (κ^γ_τ(_0, σ))^-1. Then,
_τ(, σ) ≥_τ^2(, σ).
1. We first prove the Let μ be a distribution over and M be a hermitian operator such that M ≤ 1. By Chebyshev's inequality, we have that:
_ρ∼μ[ | (ρ M) - _ρ∼μ[(ρ M)]| ≥τ] ≤_ρ∼μ[(ρ M)]·τ^-2,
where
_ρ∼μ[(ρ M)]=_ρ∼μ[(ρ M)^2]-(_ρ∼μ[(ρ M)])^2.
Let ν be a distribution over the queries M that are made by a randomized algorithm for the many-one distinguishing problem (, 𝔼_ρ∼[ρ]). Lemma <ref> implies that: for d = _τ(, 𝔼_ρ∼[ρ]),[Note that if 𝔼_ρ∼[ρ] ∈, then d →∞, since the average is not distinguishable from each state in by any measurement. This edge case happens for example for || =1.]
1/d≤_M ∼ν_ρ∼μ[ | (ρ M) - _ρ∼μ[(ρ M)]| ≥τ] ≤_M ∼ν_ρ∼μ[(ρ M)]/τ^2.
Since this inequality holds for any such distribution ν, it holds for every query M. Hence,
_τ(, σ) ≥min_M, M≤ 1τ^2/_ρ∼μ [[ρ M]].
2. We now prove the average correlation bound. Let '⊆. Let ρ̂ := (ρσ^-1 - 𝕀) and define:
γ(', σ) = 1/|'|^2∑_ρ_i, ρ_j ∈' |[ρ̂_̂îρ̂_̂ĵσ]|
We will first show that for any such ' and any observable M, M≤ 1, we have that:
( ∑_ρ∈' |(M(ρ-σ)| )^2 ≤ |'|^2 γ(', σ).
To that end, observe that:
( ∑_ρ∈' | (M(ρ-σ)| )^2 = ( ∑_ρ∈' | (Mρ̂σ)| )^2 = [ (√(σ) M ∑_ρ∈' ((Mρ̂σ) ρ̂√(σ))]^2
≤(σ M^2) ([ ∑_ρ∈' ((Mρ̂σ)ρ̂]^2 σ),
where the above follows from Cauchy-Schwartz inequality.
Since M ≤ 1, we have that (σ M^2) ≤ 1 and also that:
([∑_ρ∈' ((Mρ̂σ)ρ̂]^2 σ) = ∑_ρ_1, ρ_2 ∈' ((Mρ̂_̂1̂σ)) ((Mρ̂_̂2̂σ)) [ρ̂_1 ρ̂_2 σ]
≤ |'|^2 γ(', σ).
We show the claim by upper-bounding the κ_τ(μ, σ) for μ uniform over some subset _0 ⊆ by κ_τ^γ(_0, σ). Recall that for a distribution μ over quantum states, we have that:
κ_τ(μ, σ) = max_M, M≤ 1{_ρ∼μ[ |(M(ρ-σ)| > τ]}
For a uniform distribution μ__0 over _0 ⊆, this gives:
κ_τ(μ__0, σ) = max_M, M≤ 11/|_0|∑_ρ∈_0δ[ |(M(ρ-σ)| > τ],
where δ[x] = 1 if the clause x is true, and 0 otherwise. From here onwards, fix M to be the operator that maximizes the above expression. Let ' ⊆_0 to be the largest subset of _0, such that |(M(ρ - σ))| > τ for all ρ∈'. Then ∑_ρ∈' |(M(ρ - σ)| > |'|τ. Along with ∑_ρ∈'δ[|(M(ρ - σ)| > τ] = |'| this implies that:
∑_ρ∈_0δ[ |(M(ρ-σ)| > τ] ≤max_' ⊆_0[ |'| δ(∑_ρ∈' |(M ρ - σ)| > |'| τ)].
Combining this with Eq. (<ref>), this gives:
κ_τ(μ__0, σ) ≤max_C' ⊆ C_0{|'|/|_0||∑_ρ∈' | (M(ρ-σ)| > |'| τ}.
Using (∑_ρ∈' | (M(ρ-σ)|)^2 ≤ |'|^2 γ(', σ) implies
κ_τ-(μ__0, σ) ≤min_' ⊆κ_τ^2^γ-(', σ).
Hence we have that
_τ(, σ) = sup_μ(κ_τ(μ, σ)^-1) ≥ (κ_τ(μ__0, σ)^-1) ≥max_' ⊆(κ_τ^2^γ-(', σ)^-1) = _τ^2(, σ).
This proves the lower bounds in the theorem statement.
In many of the bounds to be proved in the following sections we consider converting a learning problem to a decision problem versus σ where min_ρ∈(ρ,σ) ≥ζ, where ζ is some constant. For large enough τ Lemma <ref> may no longer hold. Fixing an approximation error , Lemma <ref> then holds if ζ - 2 > 2τ. Note that the left hand side is some constant and we implicitly assume this upper bound on τ in the following proofs. This is without loss of generality: the existence of a algorithm with tolerance τ greater than or equal to ζ - 2 then further implies that one exists for all tolerances of smaller value. That is, smaller tolerance cannot increase the query/time complexity. As many of the results are asymptotic, requiring that τ be at most some constant does not change the results even when τ appears in the lower bound.
§ SEPARATIONS BETWEEN STATISTICAL AND ENTANGLED MEASUREMENTS
In this section we prove our main theorem separating noisy entangled learning and learning, and next show that for a “small" circuit one can witness such an exponential separation.
§.§ Separation between and with classification noise
In this section we prove our main theorem. Consider the class of function states
={|ψ_A⟩=1/√(2^n)∑_x∈1^n|x,x^⊤ Ax (mod 2 )⟩:A∈𝔽_2^n× n}.
The sample complexity of learning this class in the following models is given as follows
* Entangled measurements: Θ(n)
* Separable measurements: Θ(n^2)
* Statistical query learning: Ω(τ^2 · 2^n/2) making (τ) queries.
* Entangled η-random classification noise: O(n/(1-2η)^2). Algorithm runs in time O(n^3/(1-2η)^2).
Points (1),(2) above were proved in <cit.> and we do not prove it here. In the following two theorems we prove points (3),(4) above. We remark that this result partially resolves an open question in <cit.>, who asked if separable η-random classification noise and complexity can be separated in sample complexity.
The concept class
={|ψ_A⟩=1/√(2^n)∑_x∈1^n|x,x^⊤ Ax (mod 2 )⟩:A∈𝔽_2^n× n}
requires 2^Ω(n) many queries of tolerance τ=1/(n) to learn below trace distance 0.05 with high probability.
We prove the hardness for algorithms using queries using the variance lower bounding technique in Theorem <ref>. In particular, we show an exponentially small upper bound for the variance for any observable: for every n+1 qubit operator M such that ‖ M ‖≤ 1 we have that
_A([Mψ_A]) = 2^-Ω(n),
where we let ψ_A=|ψ_A⟩⟨ψ_A| for notational simplicity. To apply our results linking learning and decision problems we note that
(ψ_A, _B[ρ_B]) ≥ 1-√(_B[|⟨ψ_A|ψ_B⟩]|^2)≥ 1-√((2^n(n+1)/2-1)· 9/16 + 1/2^n(n+1)/2)≥ 1 - √(17/32) ,
where the first inequality follows from the lower bound on trace distance by fidelity <cit.> and the second by Fact <ref> and that ⟨ψ_A|ψ_B⟩ = _x[f_A(x) = f_B(x)]. Fix = 0.05. Then Lemma <ref> holds if τ < 0.085, which we assume without loss of generality as previously discussed[The choice of = 0.05 is arbitrarily and done for readability. A similar result holds for any < 1/2(1-√(17/32)) by the same argument.] Along with Theorem <ref>, we obtain our lower bound on the complexity of learning .
It remains to establish Eq. (<ref>). To this end, we need to understand
Var_A([Mψ_A]) = _A[[Mψ_A]^2] - (_A[[Mψ_A]])^2
To do so, we decompose ψ_A as follows. For every f_A: {0,1}^n →{0,1} given by f_A(x) = x^⊤ A x let |ψ_A⟩ = 1/√(2^n)∑_x |x,f_A(x)⟩ and |ϕ_A⟩ = ∑_x (-1)^f_A(x)|x⟩. For convenience we let |u⟩ = 1/√(2^n)∑_x |x⟩. Then we see that
(𝕀⊗ H)ψ_A (𝕀⊗ H) = 1/2∑_x,y,a,b (-1)^a· f_A(x)+b· f_A(y)|x,a⟩⟨y,b|
= 1/2(|ϕ_A⟩⟨ϕ_A|⊗|1⟩⟨1|-1/2|ϕ_A⟩⟨u|⊗|1⟩⟨0|-1/2|u⟩⟨ϕ_A|⊗|0⟩⟨1|+1/2|u⟩⟨u|⊗|0⟩⟨0|)
hence we have that
|ψ_A⟩⟨ψ_A|=1/2(|ϕ_A⟩⟨ϕ_A|⊗|-⟩⟨-|_ρ^A_1-|ϕ_A⟩⟨u|⊗|-⟩⟨+|_ρ^A_2-|u⟩⟨ϕ_A|⊗|+⟩⟨-|_ρ^A_3+|u⟩⟨u|⊗|+⟩⟨+|_ρ^A_4).
Any n+1 qubit observable M can be decomposed as M = ∑_a,b M_a,b⊗|a⟩⟨b| where a,b ∈{+,-}. Since ‖ M ‖≤ 1 we also have that ‖ M_a,b‖≤ 1, however the off-diagonal blocks now no longer need be Hermitian. In an abuse of notation we now discard the last qubit of ρ^f_i and denote the resulting state also as ρ^f_i. For convenience we further introduce the notation M_1 = M_-,-, M_2 = M_-,+, M_3 = M_+,-, and M_4 = M_+,+ Thus, we see that [Mψ_A] = 1/2∑_i[M_iρ^A_i] and further the variance can be written as
Var_A([Mψ_A]) = 1/4(_A[∑_i,j[M_iρ^A_i][M_jρ^A_j]] - (_A[∑_i[M_iρ^A_i]])^2)
= 1/4∑_i,j(_A[[M_i ρ^A_i]·[M_j ρ^A_j]]- [ M_i_A[ρ^A_i]]·[M_j _A[ρ^A_j]])
= 1/4∑_i,j( [ (M_i⊗ M_j)_A[ ρ_i^A ⊗ρ_j^A]] -[ M_i_A[ρ^A_i]]·[M_j _A[ρ^A_j]]).
Below we drop the factor of 1/4 and bound the magnitude of each term for i,j ∈ [4]. We show that each term is exponentially small. To do so, we use the following fact which will be proven later.
We have the following
_A[|ϕ_A⟩]=|0^n⟩/√(2^n), _A[|ϕ_A⟩^⊗ 2]=|Φ^+⟩/√(2^n), _A[|ϕ_A⟩⟨ϕ_A|]=𝕀/2^n,
_A[|ϕ_A⟩⊗⟨ϕ_A|] = 1/2^n∑_x |x⟩⊗⟨x|, _A[|ϕ_A⟩⊗|ϕ_A⟩⟨ϕ_A|] = 1/2^3n/2|0⟩⊗|0⟩⟨0|,
_A[|ϕ_A⟩⟨ϕ_A|⊗|ϕ_A⟩] = 1/2^3n/2(∑_x |x⟩⟨x|⊗|0⟩ + |x⟩⟨0|⊗|x⟩ + |0⟩⟨x|⊗|x⟩ - 2|0⟩⟨0|⊗|0⟩),
_A [|ϕ_A⟩⟨ϕ_A|^⊗ 2] =
1/4^n (𝕀+) + 1/2^n|Φ^+⟩⟨Φ^+| - 2/4^n∑_x |x,x⟩⟨x,x|,
where swaps two n-qubit registers, i.e., |ψ⟩⊗|ϕ⟩ = |ϕ⟩⊗|ψ⟩ and
|Φ^+⟩ = 2^-n/2∑_x |x,x⟩.
is the EPR state of 2n qubits.
First note that ρ^A_4 does not depend on A. Thus, for all i ∈ [4] we have that
_A[[(M_i⊗ M_4)(ρ_i^A ⊗ρ_4^A)]] = [M_i _A[ρ_i^A]]·[M_4_A[ρ_4^A]] .
The contribution to the variance from these cases equals 0. We are left with analyzing i,j∈ [3] and do so separately below.
Case i=j=1. Using Fact <ref> above, we get that
[ M_1 _A[ρ^f_1]]=[M_1 ·_A[|ϕ_A⟩⟨ϕ_A|]]=[M_1]/2^n.
Next, observe that
_A [(M_1ρ^A_1)^2] =[M_1⊗ M_1·_A[ρ^A_1⊗ρ^A_1]]
=[M_1⊗ M_1·(1/4^n (𝕀+) + 1/2^n|Φ^+⟩⟨Φ^+| - 2/4^n∑_x |x,x⟩⟨x,x|)]
=1/4^n( [M_1])^2 +1/4^n[M_1^2] + 1/2^n⟨Φ^+| M_1 |Φ^+⟩ - 2/4^n∑_x M_1(x,x)^2
≤1/4^n( [M_1])^2 +1/2^n + 1/2^n,
where the third equality used that [M_1⊗ M_1·]=[M_1^2], the fourth equality used that [M_1^2]≤ 2^n and ‖ M_1^⊗ 2‖≤ 1. Implicitly we have used that M_1 is Hermitian and ‖ M_1 ‖≤ 1. Hence we have that variance term contribution is
_A [[M_1 ρ^A_1]·[M_1 ρ^A_1]]- [ M_1_A[ρ^A_1]]·[ M_1 _A[ρ^A_1]]
≤1/4^n( [M_1])^2 +2/2^n-([M_1]/2^n)^2= 2/2^n.
Since this term must be non-negative, the norm is bounded by 2/2^n as well.
Case i=j=2. Using Fact <ref> above, we get that
[ M_2 _A[ρ^A_2] ]=[ M_2 _A[|ϕ_A⟩⟨u|]]=⟨ 0|M_2| u⟩/√(2^n).
Next note that
_A[[M_2ρ_2^A]^2] = [(M_2 ⊗ M_2) (_A[ρ_2^f ⊗ρ_2^A])]
= [(M_2 ⊗ M_2) (_A[|ϕ_A⟩^⊗ 2]⟨u|^⊗ 2)]
= 1/√(2^n)⟨u,u |M_2 ⊗ M_2|Φ^+⟩ = 1/2^n∑_x ⟨u| M_2 |x⟩^2 = 1/4^n∑_x (∑_y M_2(y,x))^2
We now bound the norm of each of these terms individual as then, by triangle inequality, the norm of the contribution from the case is exponentially small as well.
| (⟨ 0|M_2| u⟩/√(2^n)) | ≤1/√(2^n)√(‖|0⟩‖^2 ‖ M_2 |u⟩‖^2)≤1/√(2^n) ,
and thus |[M_2 _A[ρ^A_2]]^2 |≤1/2^n. For the other term we use that
|∑_x (∑_y M_2(y,x))^2 |≤∑_x |∑_y M_2(y,x) |^2.
Then we can rewrite this as 1/2^n‖ M_2 |u⟩‖_2^2 ≤1/2^n‖ M_2 ‖≤1/2^n. Thus, the norm of the contribution from this case is upper bounded by 2/2^n.
Case i=j=3. Since M_3 = M_2^† and ρ_3^A = (ρ_2^A)^†, this is the same as the case i=j=2, thus the norm of this case is upper bounded by 2/2^n as well.
Case i=2 and j=3 Using Fact <ref> note that
[(M_2 ⊗ M_3) _f[ρ_2^A ⊗ρ_3^A]] = [(M_2⊗ M_3) (𝕀⊗|u⟩)_A[|ϕ_A⟩⊗⟨ϕ_A|] (⟨u|⊗𝕀)]
= 1/2^n∑_x [M_2 ⊗ M_3 (|x⟩⟨u|⊗|u⟩⟨x|)
Now we use that M_3 = M_2^† to rewrite this as 1/2^n∑_x |⟨u|M_2|x⟩|^2
1/4^n∑_x |⟨u|M_2|x⟩|^2 = 1/2^n‖ M_2 |u⟩‖_2^2 ≤1/2^n
We have already shown the subtracted terms to be exponentially small in magnitude and thus the magnitude of this case must be upper bounded by 2/2^n as well.
Case i=1 and j=2,3. Here we work out j=2 as the result then holds similarly for j=3.
|[(M_1 ⊗ M_2) _A[ρ_1^A ⊗ρ_2^A]] | = |[(M_1 ⊗ M_2) _A[|ϕ_A⟩⟨ϕ_A|⊗|ϕ_A⟩](𝕀⊗⟨u|)]|
= |1/2^3n/2[(M_1 ⊗ M_2) (∑_x |x⟩⟨x|⊗|0⟩⟨u|+ |x⟩⟨0|⊗|x⟩⟨u|+ |0⟩⟨x|⊗|x⟩⟨u| - 2|0⟩⟨0|⊗|0⟩⟨u|)] |
We deal with each term in the trace above separately. First:
1/2^3n/2|[M_1⊗ M_2 (∑_x |x⟩⟨x|⊗|0⟩⟨u|)]| = 1/2^3n/2|[M_1]·⟨ u | M_2 | 0 ⟩|≤1/2^n/2 .
The next two terms are similar and we bound the first here (which implies the same upper bound for the second using nearly identical steps).
1/2^3n/2|[M_1⊗ M_2 (∑_x |x⟩⟨0|⊗|x⟩⟨u|)] = 1/2^n|⟨ 0,u | M_1 ⊗ M_2 |Φ^+ ⟩|≤1/2^n .
The last remaining term is bounded as follows:
1/2^3n/2|[M_1 ⊗ M_2 |0⟩⟨0|⊗|0⟩⟨u|]| = 1/2^3n/2|[M_1 |0⟩⟨0|]|·|[M_2 |0⟩⟨u|] |≤1/2^3n/2 .
Thus, the contribution from the second moments is of magnitude at most O(2^-n/2). While |[M_1 _A[ρ_1^A]] | may be large (up to 1), we also have that |[M_2 _A[ρ_2^A]] |≤1/√(2^n) and thus all terms in the case are exponentially small in norm as well. We have thus shown that the norms of the contributions for each case are all exponentially small. Thus, the variance must be exponentially small as well. It remains to prove Fact <ref> which we do now.
Most of the desired expectation values stem from the following observation : for the uniform distribution over upper triangular matrices A, we have that
_A[(-1)^x^⊤ A x + y^⊤ A y + z^⊤ A z] = _A[(-1)^⟨ X + Y + Z, A]
= δ_X+Y+Z,0 = δ_x,yδ_z,0 + δ_x,zδ_y,0 + δ_y,zδ_x,0 - 2δ_x,0δ_y,0δ_z,0
where X, Y, and Z are defined as xx^T, yy^T, and zz^T respectively and the second equality follows from E_z[(-1)^⟨ x,z ⟩]=δ_x,0. Now the first three equalities are now easy to see: set y=z=0 and
_A[|ϕ_A⟩] = _A[1/√(2^n)∑_x (-1)^x^⊤ A x|x⟩]=1/√(2^n)∑_x _A[(-1)^x^⊤ A x]|x⟩ =|0^n⟩/√(2^n) .
Similarly, setting z=0 yields
_A[|ϕ_A⟩^⊗ 2]=_A[1/2^n∑_x,y (-1)^x^⊤ A x+y^⊤ Ay|x,y⟩]=1/2^n∑_x,y_A[(-1)^x^⊤ A x+y^⊤ Ay]|x,y⟩
=1/2^n∑_x|x,x⟩ .
Similar reasoning implies _A[|ϕ_A⟩⟨ϕ_A|]=𝕀/2^n, _A[|ϕ_A⟩⊗⟨ϕ_A|] = 1/2^n∑_x |x⟩⊗⟨x|, and
_A[|ϕ_A⟩⟨ϕ_A|⊗|ϕ_A⟩] = 1/2^3n/2(∑_x |x⟩⟨x|⊗|0⟩ + |x⟩⟨0|⊗|x⟩ + |0⟩⟨x|⊗|x⟩ - 2|0⟩⟨0|⊗|0⟩) .
The final decomposition of _A[|ϕ_A⟩⟨ϕ_A|^⊗ 2] follows from <cit.>.
The proof of the fact concludes the proof of the theorem.
The concept class
={|ψ_A⟩=1/√(2^n)∑_x∈1^n|x,x^⊤ Ax (mod 2 )⟩:A∈𝔽_2^n× n}
can be learned in the η-random classification model, using O(n/(1-2η)^2) copies of the noisy state and time O(n^3/(1-2η)^2).
Below, let |ψ_f⟩=1/√(2^n)∑_x (-1)^f(x)|x⟩. In the classification noise model, we are given copies of
|ψ_n⟩=1/√(2^n)∑_x |x⟩⊗(√(1-η)|f(x)⟩+√(η)|f(x)⟩.
We first show that using two copies of |ψ_n⟩, with probability ≥⋯, we can obtain |ψ_f⟩^⊗ 2. In order to do so, observe the following
(𝕀⊗ H)|ψ_n⟩ =1/√(2^n+1)∑_x,b|x⟩( (-1)^b· f(x)|b⟩+√(η)(-1)^b·f(x)|b⟩)
=1/√(2^n+1)∑_x,b(√(1-η) (-1)^b· f(x)+√(η)(-1)^b·f(x)) |x,b⟩.
Now, measuring the last qubit, the probability of seeing b=1 is given by
1/√(2^n+1)∑_x(√(1-η) (-1)^f(x)-√(η)(-1)^f(x)) |x⟩_2^2
=1/2^n+1∑_x (√(1-η) (-1)^f(x)-√(η)(-1)^f(x))^2=1/2 (√(1-η)-√(η))^2:=p,
and the post-measurement state is given by |ψ_f⟩. Hence with probability exactly p=1/2(1-2√(η(1-η)))≤ (1-2η)^2 (which holds for every η≤ 1/2), given two copies of |ψ_n⟩, we can produce two copies of |ψ_f⟩.
Now, if we focus on the concept class {f_A(x)=x^⊤ A x}_A. The learning algorithm first takes O(1/(1-2η)^2) copies of |ψ_A⟩ to produce two copies of |ϕ_A⟩. Note that the algorithm knows when it succeeded, i.e., when the measurement of the last qubit is 1, the algorithm knows that the above procedure performed the transformation |ψ_n⟩^⊗ 2→|ψ_A⟩^⊗ 2. Now using Fact <ref> we can learn f_A given O(n) copies of |ψ_A⟩ and O(n^3) time. Overall, the sample complexity and time complexity of the procedure is O(n/(1-2η)^2) and O(n^3/(1-2η)^2) respectively.
§.§ Smallest circuit class witnessing separation
In the previous section we saw that the concept class of quadratic functions separated from . Observe that states in this concept class can be prepared by circuits of size O(n^2) and depth O(n) consisting of {,,} gates. A natural question is, can states prepared by smaller circuits also witness such a separation between and ? Below we answer this in the positive, by using a simple padding argument inspired by a prior work of Hinshe et al. <cit.>.
Let α∈ (0,1) there exists a family of n qubit Clifford circuits of depth d=(log n)^1/α and size d^2 that requires 2^Ω(d) queries to learn the state to error ≤ 0.05 in trace distance.
The idea is to “pad" a family of circuits with auxilliary qubits. In the previous section, from Theorem <ref> we saw that the set of example states {|ψ_A⟩=1/√(2^n)∑_x |x,x^⊤ A x⟩}_A, is hard to learn to trace distance √(7)/4≤ 1/2. Instead of the example state |ψ_A⟩ now instead consider the “padded state" |ψ_A⟩⊗|0⟩^k(n). Say a algorithm learns these padded states with the set of queries given by {M_i}_i (which are random variables). Let us decompose each M_i as M_i = ∑_x,y∈1^k(n)M_i^x,y⊗|x⟩⟨y|, where ‖ M_i^x,y‖≤ 1 and M_i^x,x is Hermitian. Since the auxiliary qubits are fixed, it is clear that
[M ·|ϕ_A⟩⟨ϕ_A|⊗|0⟩⟨0|^⊗ k(n)] = [M_i^0,0ϕ_A].
Furthermore, we can assume without loss of generality that the algorithm always outputs a state of the form |φ⟩⊗|0⟩^⊗ k(n) (as otherwise we could improve by requiring it to do so). Thus, a algorithm for the padded states implies a algorithm with queries {M_i^0,0}_i for learning ={|ψ_A⟩}_A. Say that this algorithm uses at most t queries. Then Theorem <ref> implies that t ≥ 2^Ω(n). The state is now composed of m = k(n)+n qubits. Pick k=2^n^α for some α<1, so m=Θ(k(n)) and n = (log k)^1/α. Then we have that t ≥ 2^Ω(log m)^1/α). To conclude the theorem, note that |ψ_A⟩ has a circuit of size O(n^2) and depth O(n). Thus, the padded states can be prepared with circuits of size O(log m)^2/α and depth O(log m)^1/α.
§ NEW UPPER AND LOWER BOUNDS ON LEARNING STATES
In this section we first give a couple of classes of states which can be learned in the framework before discussing lower bounds for other class of states.
§.§ New upper bounds
We first prove that the class of functions that are k-Fourier-sparse Boolean functions on n bits, i.e.,
_1={f:1^n→1: |(f)|=k}
can be learned in time (n,k) in the model. This generalizes the results in <cit.>, which showed that showed parities and O(log n)-juntas (which are a subset of Fourier-sparse functions) are (n)-time learnable.[We remark that the same proof also shows that k-term DNF formulas are learnable: for every g∈_1, there exists S s.t. |g(S)|≥ 1/k and the proof of Theorem <ref> can identify such an S using queries and then one can use the algorithm of Feldman <cit.> for learning the unknown DNF formulas.] We observe that the quantum coupon collector problem, i.e., learnability of
_2={S⊆ [n]: |S|=k}
considered in <cit.> can be implemented in . Finally, we also observe that one can learn codeword states defined in <cit.>: consider an [n,k,d]_2 linear code {Mx: x∈1^k} where G ∈𝔽_2^n× k is a rank-k generator matrix of the code, k=Ω(n), and distinct codewords have Hamming distance at least d, then define the concept class
_3={f_x(i)=(Gx)_i:x∈1^k},
where G is known the learning algorithm. Below we show we can learn _3 in the model. Prior learning protocols <cit.> showed that these concept classes are learnable with quantum examples (a stronger model than ) whereas here we show they are learnable in the weaker framework. Before we prove this, we will use the following lemmas.
<cit.>
Let k≥ 2. The Fourier coefficients of a k-Fourier-sparse Boolean function f:1^n→ are integer multiples of 2^1-⌊log k⌋.
<cit.>
Let f:n→, τ∈ (0,1]. There exists a (n,1/τ,ℓ)-time quantum statistical learning algorithm that with high probability outputs U={T_1,…,T_ℓ}⊆ [n] such that: (i) if |f(T)|≥τ, then T∈ U; and (ii) if T∈ U, then |f(T)|≥τ/2.
The concept classes _1,_2,_3 defined above can be learned in the model.
We first give a learning algorithm for _1. For every f∈_1, observe that it's Fourier coefficients satisfy |f(S)|≥ 1/k by Lemma <ref>. We can now use Lemma <ref> to collect all the non-zero Fourier coefficients in time (n,1/τ,k) in the model. Call these non-zero coefficients S_1,…,S_k. Next, we learn all these Fourier coefficients up to error ε/k using queries: for i∈ [k], let ϕ(x,b)= b· (-1)^S_i· x for all x∈1^n,b∈1, hence _x [ϕ(x,f(x))]=_x [f(x)· (-1)^S_i· x]=f(S_i). Overall this takes time O(k). Once we obtain all these approximations {α_i}_i∈ [k], we output the function g(x)=(∑_i∈ [k]α_i ·χ_S_i(x)) for every x∈1^n.
Using the same reasoning as in <cit.> it is not hard to see that g is ε-close to f (i.e., _x [g(x)=f(x)]≥ 1-ε).
We next give a learning algorithm for _2. Let S ⊆ [n] of size k. Given copies of 1/√(k)∑_i ∈ S|i⟩, learn S. We now show how to learn S in using k log n queries. Let M_1=∑_i=1^n/2|i⟩⟨i|. This satisfies M_1≤ 1 and M_1 can be implemented using a (n)-sized circuit. Observe that
⟨ψ|M_1|ψ⟩=1/k∑_q,q'∈ S∑_i∈ [n/2][q=i=q']=|[n/2]∩ S|/k
which is at least 1/k if and only if there is an i ∈ [n/2]∩ S. So if we do a query with M_1 and tolerance 1/(2k), the learning algorithm learns if there is an i ∈ [n/2] such that i ∈ S. Repeat this using a binary search and we will eventually find one element in S using O(log n) queries. Repeat this to find all the elements in S, so the overall complexity is O(k log n).
We next give a learning algorithm for _3.
Consider the queries M_j=|e_j⟩⟨e_j|⊗|0⟩⟨0| and τ=1/(2n). Then observe that
⟨ψ_x| M_j| ψ_x⟩ = [e_j^⊤ M x=0]/n,
which equals 1/n if e_j^⊤ Mx=0 and 0 otherwise, so with tolerance 1/(2n), we can learn which is the case. Since G is the generator matrix of a good code, i.e., G has rank k, there are k linearly independent rows in G∈𝔽_2^n× k (say they are G^i_1,…,G^i_k). The learning algorithm can perform these measurements for all M_i_1,…,M_i_k in order to learn G^i_1 x,…,G^i_k x. Since G^is are linearly independent, these k linearily independent constraints on x suffice to learn x.
We next observe that the set of trivial states, i.e., states |ψ⟩=C|0^n⟩ where C is a constant-depth n-qubit circuits, can be learned in polynomial time in the model. An open question of this work, and also the works of <cit.>, is if we can learn the distribution P_C={⟨ x|ψ⟩^2}_x using classical queries. The theorem below shows that if we had direct access to |ψ⟩, one can learn the state and the corresponding distribution P_C, using queries. In the next section we show that once the depth d=ω(log n), these states are hard for queries as well.
The class of n-qubit trivial states can be learned up to trace distance ≤ε using (n,1/ε) queries with tolerance (ε/n).
Say that the circuit depth is d. Using <cit.> it is sufficient to reconstruct all D:=2^d-body reduced density matrices up to precision ε^2/4n with respect to trace distance. Thus, it is sufficient to show that such a tomography can be accomplished with queries. To do so, simply query all 4^D-1 non-identity Pauli strings acting on a party s of size D and reconstruct the state as ρ̂_s = 1/2^D(𝕀 + ∑_x α_x P_x), where P_x is a non-identity Pauli string and α_x is the response upon querying P_x. The schatten 2-norm of the difference between the resulting state and the true reduced density matrix ρ_s must satisfy
‖ρ_s - ρ̂_s ‖_2^2 = [(ρ_s - ρ̂_s)^2]
= 1/4^D[(∑_x ([P_xρ_s]-α_x)P_x)^2]
= 1/2^D∑_x ([P_xρ_s]-α_x)^2
< τ^2 · 2^D,
where we have used that Pauli strings (including identity) satisfy [P_xP_y]=δ_x,y2^D. In general, (ρ_s,ρ̂_s) ≤ 2^D/2-1‖ρ_s - ρ̂_s‖_2. Thus, (ρ_s,ρ̂_s) < τ· 2^D-1. Taking τ≤^2/n· 2^D+1 = O(^2/n) yields a tomography with the desired precision. There are nD = O(n^D) such reduced density matrices. For each one, we require a constant number of queries, each requiring O(D)=O(1) gates. Thus, the overall complexity is O(n^D) ∈(n) for both query and time complexity.
§.§ Hardness of testing purity
Let be an algorithm that upon a input of quantum state ρ, with high probability, estimates the purity of ρ with error < 1/4 using (τ) queries. Then must make at least 2^Ω(τ^22^n) such queries.
Suppose we have such an algorithm . Then we could solve the decision problem of {U|0⟩⟨0|U^† | U∈𝒰(2^n)} (all pure states) versus 1/2^n𝕀. We prove that this decision problem is hard using a concentration of measure argument similar to the variance method. Draw pure states from the Haar measure on 𝒰(2^n) yields [U|0⟩⟨0|U^†] = 1/2^n𝕀. Upon querying an observable M, consider the adversarial response of 1/2^n[M]. By Levy's Lemma <ref>, most Haar random states cannot deviate much from this average. For our purposes, we are concerned with functions of the form f(|ψ⟩) = [M|ψ⟩⟨ψ|] where ‖ M ‖≤ 1. We immediately observe that such f's have Lipschitz constant 2 <cit.>. By Levy's Lemma <ref> we have that
_U[|[MU|0⟩⟨0|U^†] - 1/2^n[M] | > τ] ≤ 2exp(-2^n+1τ^2/36π^3)
To conclude, we use Levy's lemma to lower bound _τ (in a manner similar to that of the variance lower bound). Recall that _τ(, σ) is the smallest integer d such that there exists a distribution η over queries M such that ∀ρ∈: _M ∼η[|[M(ρ - σ)]| > τ] ≥ 1/d. From this definition we have that
1/d≤_M∼η_ρ∼μ[[M(ρ - σ)]| > τ] ≤ 2exp(-2^n+1τ^2/36π^3) .
Thus, _τ({U|0⟩⟨0|U^†}, 1/2^n𝕀) ≥ 2^Ω(τ^2 2^n) and must make at least 2^Ω(τ^2 2^n) queries.
§.§ Hardness of the Abelian hidden subgroup problem
One of the great successes of quantum computing is solving the hidden subgroup problem for Abelian groups, of which Shor's famous factoring algorithm is a consequence. In this problem, we are given query access to a function f on a group G such that there is some subgroup H ≤ G satisfying f is constant every left coset of H and is distinct for different left cosets of H. How many queries to f suffice to learn H? When G is a finite Abelian group, H can be efficiently determined by separable quantum algorithms. One approach which is often used to analyze the general Hidden subgroup problem is the standard approach, which we describe now <cit.>:
* Prepare the superposition 1/√(| G |)∑_g∈ G|g⟩⊗|0⟩ by a Fourier transform over the group G.
* Use a single query to prepare the superposition state 1/√(| G |)∑_g∈ G|g⟩⊗|f(g)⟩.
* Measure the second register and obtain a superposition over elements in some coset with representative g'. That is, the algorithm can be viewed as having the state ρ_H=∑_g'|ψ_g'H⟩⟨ψ_g'H| where |ψ_g'H⟩=1/√(| H |)∑_g∈ g'H|g⟩.
* Again apply a quantum Fourier transform and measure the state to obtain an element g∈ H^⊥, where H^⊥ = {g∈ G |χ_g(H) =1}.
Repeating the above procedure Õ(log| G |) times yields a generating set for H^⊥ with high probability, allowing one to reconstruct H as well. In fact observe that the above algorithm works even if one just makes separable measurements. The state ρ_H in step (3) of the algorithm above is called a coset state. Here, we show that solving the Hidden subgroup problem for even Abelian groups is hard when the learning algorithm has access only to queries.
Consider the additive group G = ℤ_2^n. In Simon's problem, a version of the hidden subgroup problem on ℤ_2^n, the hidden subgroups are of the form H = {0,s}. While solving Simon's problem is easy using separable quantum measurements, it cannot be readily replicated using queries. Intuitively, every y in the orthogonal complement of s is equally likely to be observed upon a computational basis measurement. To see this, note that after discarding the register containing the function value, the resulting mixed states are ρ_s = 1/2^n-1∑_x|x⟩⟨x|, where x is a coset representative and |x⟩⟨x| is the projector onto the corresponding coset. Thus, accurately simulating this measurement with queries requires exponentially small tolerance τ. The following theorem formalizes this notion.
Solving the hidden subgroup problem for the Abelian group ℤ_2^n with queries of the form M = M'⊗ requires Ω(τ^2 · 2^n) many such queries to succeed with high probability.
We prove the theorem by a bound on _τ(, σ) where σ = 1/2^n𝕀. Say that is an algorithm which solves the hidden subgroup problem with high probability using queries of the form M=M'⊗. Then, the queries {M_i}_i used by imply the existence of queries {M_i'}_i where M_i'∈^2^n×^2^n which suffice to identify the coset states ρ_H = 1/| H |∑_x|x⟩⟨x|, where x denotes a coset and |x⟩⟨x| the projector onto this coset.
Consider the subset _0 ⊂ of coset states of subgroups of the form H_s = {0,s}. For such a subgroup H_s the corresponding coset state is ρ_s = 1/2^n-1∑_x|x⟩⟨x|, where {x} are a set of 2^n-1 coset representatives and |x⟩ = 1/√(2)(|x⟩ + |x ⊕ s⟩). If f is a constant function, then ρ_H = 1/2^n. Thus, the correctness of implies the existence of a algorithm that can solve the decision problem of {ρ_H={0,s}}_H versus σ = 1/2^n𝕀.
For such a decision problem, ρ̂_̂ŝ = 2^nρ_s - 𝕀 and [ρ̂_sρ̂_s'σ] = 2^n[ρ̂_sρ̂_s']-1. Let s = s'. Then [ρ_s^2]=2^-2(n-1) and [ρ̂_s^2σ] = 1. Now instead consider when s≠ s'. For every coset x of H_s there exist two cosets y_1 and y_2 of H_s' with a non-empty intersection (of exactly one element) with x. Thus, we have that [|x⟩⟨x||y_1⟩⟨y_1|] = [|x⟩⟨x||y_2⟩⟨y_2|]= 1/4 and
[ρ̂_s ρ̂_s'σ] = 2^n[ρ_s ρ_s']-1 = 2^n/2^2(n-1)∑_x,y|⟨x|y⟩-1 = 2^n/2^2(n-1)∑_x1/2-1 = 0 .
For any subset '⊆_0 we thus have that γ(',σ) = 1/|' |. If |'| < 1/τ then γ(',σ) > τ. Note that |_0 | = 2^n-1 and thus κ_τ^γ-(_0,σ) = Θ(1/τ 2^n) and _τ(,σ) = Ω(τ· 2^n). By Theorem <ref> we have that _τ^δ(,σ) ≥ (1-2δ)_τ^2(,σ) = Ω(τ^2· 2^n).
Thus, any algorithm for solving the hidden subgroup problem on ℤ_2^n must depend non-trivially on the register holding the function value. This is in contrast to the standard Fourier sampling method which has no dependence on the function register.
The average correlation argument above also implies that learning coset state below trace distance 1/2 with high probability requires Ω(τ^2 · 2^n) queries of tolerance τ.
Note that the trace distance between ρ_H for H={0,s} and 1/2^n𝕀 is 1/2. Using Lemma <ref>, we have that QSQ^1/2, δ_τ() ≥_τ^δ(, 1/2^n𝕀), where = {ρ_H | H = {0,s}}. From section 3 we know that _τ^δ(, 1/2^n𝕀) ≥ (1-2δ)_τ^2(, 1/2^n𝕀). The average correlation argument above yields that _τ^2(, 1/2^n𝕀) = Ω(τ^2 · 2^n), thus proving the claim.
§.§ Hardness of shadow tomography
In <cit.> the authors derive lower bounds on the sample complexity of shadow tomography using separable measurements. Recall that in shadow tomography, given copies of ρ, the goal of a learner is to predict the expectation value [O_i ρ] of a collection of known observables {O_i}_i up to error ε. To prove these lower bounds the authors construct a many-vs-one decision task where σ = 𝕀/2^n and
= {ρ_i = 𝕀+3ε O_i/2^n} .
Assuming that [O_i] =0 and [O_i^2]=2^n for all O_i, then an algorithm which solves the shadow tomography problem with high probability also solves the decision problem. Thus, a lower bound on the latter is also a lower bound on the sample complexity of shadow tomography.
Any algorithm that uses (τ) queries and predicts [Pρ] up to error ε for all non-identity Pauli strings P with high probability requires Ω(τ· 2^2n/ε^2) queries.
We prove the theorem using a bound on _τ(, σ) where σ = 1/2^n𝕀. For convience we label the states from the many-vs-one decision task as ρ_i where i∈ [4^n-1]. For such a σ we further have that [ρ̂_iρ̂_jσ] = 2^n[ρρ']-1. By the orthogonality of Pauli strings, [ρ̂_iρ̂_jσ] = 9ε^2δ_i,j. For any subset '⊆ we thus have that γ(', σ) = 9ε^2/|' |. If |' | < 9ε^2/τ^2 then γ(', σ) > τ^2. Thus, κ_τ^γ-(,σ) = Θ(ε^2·τ^-2· 2^2n) and _τ(, σ) = Ω(τ^2· 2^2n/^2). Using Theorem <ref> we know that _τ^δ (,σ) ≥ (1-2δ)_τ^2(,σ) = Ω(τ^4· 2^2n/^2).
In <cit.> the authors derive a lower bound of Ω(2^n/^2) for the same task. They further show an upper bound of O(n2^n/^2) as well. Our result essentially says that shadow tomography benefits from more than just estimating expectation values. With only queries, the nearly optimal algorithm is to simply query every Pauli string.
§.§ Learning quantum biclique states
An influential work of Feldman et al. <cit.> considers the planted biclique problem. The goal here is to learn the class of distributions each indexed by subsets S ⊆{ 1,2 … ,n }. For every S, the distribution D_S is defined as follows
D_S(x)=k/n/2^n-k+1-k/n/2^n x∈ 1_S×1^n-k
1-k/n/2^n x∉ 1_S×1^n-k,
where above 1_S×1^n-k is the set {x∈1^n: x_S=1_S}.
A natural way of generalizing problems over distributions to quantum statistical queries is to consider coherent encodings of distributions, i.e., for a given distribution D over X, we define a quantum state |ψ⟩ = ∑_x √(D(x))|x⟩. Classical queries then correspond to with diagonal observables and a natural question is, how much can coherent examples help? In what follows, we first show that for the task of distinguishing two coherent encodings, there can be at most a quadratic gap between the precision that is tolerated by and queries. We use this to show that, for some choice of parameters, there are large gaps between the classical and quantum statistical query complexity of the k-biclique problem.
We demonstrate below that measurements can help significantly in certain regimes of tolerance.
For large enough n and k≥ 2log n, the k-planted biclique problem with coherent encodings can be solved with statistical quantum algorithm that makes at most nk (√(k/n)) queries, but cannot be solved by any algorithm that makes (√(k/n)) queries.
First observe that (|ψ_S⟩, |+^n⟩) = √(1-|⟨ψ_S|+^n⟩|^2), and
⟨+^n|ψ_S⟩ = (√(k/n + 1-k/n/2^k) - √(1-k/n/2^k)) 1/√(2^k) + √(1 - k/n).
Define |ϕ⟩ = (|+⟩ + |ψ_S⟩)/√(2+2⟨ψ_S|+⟩) and |ϕ^⊺⟩ (|+⟩ - |ψ_S⟩)/√(2-2⟨ψ_S|+⟩).
The optimal distinguishing query between |+^n⟩ and |ψ_S⟩ is the difference between projectors on the state |P_S⟩ = |ϕ⟩ + |ϕ^⊺⟩/√(2) and its orthogonal complement in the span of |+^n⟩ and |ψ_S⟩ (see for example <cit.>). Call this measurement M_S and notice that it is implementable by a k-qubit controlled rotation. A (possibly inefficient) quantum algorithm for detecting the planted clique would query (τ) oracle with M_S for every subset S ⊆ [n] of cardinality |S| = k.
From Lemma <ref> and optimality of the measurement, we know that |(M_S (ψ_S - ψ_0))| = 2(ψ_S, ψ_0). It follows that
as long as
τ≤(|Ψ_S⟩, |+^n⟩), such algorithm succeeds. We now bound (|Ψ_S⟩, |+^n⟩). To that end, observe that:
(√(k/n + 1-k/n/2^k) - √(1-k/n/2^k)) 1/√(2^k)≤1/√(2^k+1)√(1- k/n).
from which we have that:
( 1 + 2^-(k+1)/2) √(1-k/n)≥⟨+^n|ψ_S⟩≥√(1-k/n),
and[Using
√(1+3 × 2^-(k+1)/2)≥ 1 + 2^-(k+1)/2 for all k ≥ 1.]
√(k/n)≥(ψ_0, ψ_S) ≥√(k/n - 4/2^k/2). For k ≥ 2 log n, n ≥ 5 and τ≤√(2log(n/4)/n), the planted biclique can be detected by at most n k (τ) queries.
On the other hand, the k-planted biclique problem has (D, D_i) = k/n( 1 - 2^-k) for all D_i ∈𝒟_D, from which
(D, D_0) = k/n(1-2^-k) < k/n.
It follows that:
max_ϕ, |ϕ| ≤ 1_D ∼𝒟[ |D[ϕ] - D_0[ϕ]| ≥ 2τ] ≤_D ∼𝒟max_ϕ, |ϕ| ≤ 1[ |D[ϕ] - D_0[ϕ]| ≥ 2τ] = _D ∼𝒟[ (D,D_0) ≥τ].
For τ =k/n, we have _D ∼𝒟[ (D,D_0) ≥k/n] = 0,
which means that the clique state is undetectable by any (τ) query (an adversarial oracle can output an outcome consistent with uniform distribution and succeed at all times). For k ≥ 2 log n and large enough n, the statistical queries have better tolerance than the quantum queries. It follows that for k ≥ 2 log n, n ≥ 72 and τ = √(2log(n/4)/n), the k-planted biclique problem cannot be solved by a (τ) algorithm, but can be solved with an algorithm that can makes (τ) queries.
§.§ Hardness of Learning Approximate Designs
In this class we show that the class of quantum states that forms an approximate 2-designs are hard to learn in the model.
Let be an ensemble of states forming a η-approximate 2-design where η = O(2^-n). Learning states from with error ≤ 1/3 in trace distance requires Ω(τ^2 · 2^n) (τ) queries.
We prove the theorem by showing that the variance of {[Mρ]}_ρ∈ for any such design must be exponentially small. By the definition of an approximate design, we have that
d_(_ρ∼[ρ], 1/2^n𝕀) ≤η, d_(_ρ∼[ρ^⊗ 2], 1/4^n + 2^n(𝕀 + )) ≤η ,
where 1/2^n𝕀 and 1/4^n+2^n(𝕀 + ) are respectively the first and second moments of the unitary Haar measure. For any observable M, by the definition of trace distance we have that
|[M(1/2^n𝕀 - _ρ∼[ρ])] |≤ 2η, |[M(1/4^n+2^n(𝕀+) - _ρ∼[ρ^⊗ 2])] |≤ 2η .
Thus, _ρ∼([M ρ]) ≤_ρ∼𝒰(2^n)([M ρ]) + O(2^-n). We now show that _ρ∼𝒰(2^n)([M ρ]) = O(2^-n) for any ‖ M ‖≤ 1.
_ρ∼𝒰(2^n)([M ρ]) = 1/4^n+2^n[M^⊗ 2(𝕀 + )] - 1/4^n[M]^2
= 1/4^n+2^n[M^2] - 1/2^n(4^n+2^n)[M] ,
where we have used that fact that [M^⊗ 2] = [M]^2 and [M^⊗ 2] = [M^2]. As ‖ M ‖≤ 1 we have that [M^2] ≤ 2^n. Thus, taking M to have 2^n-1 eigenvalues equal to +1 and 2^n-1 equal to -1 maximizes the variance yielding
_ρ∼([M ρ]) ≤_ρ∼𝒰(2^n)([M ρ]) + O(2^-n) ≤2^n/4^n+2^n +O(2^-n) = O(2^-n) .
To invoke Theorem <ref> and Lemma <ref> we first note that all ρ' ∈ are far from _ρ∼[ρ] in trace distance. This follows from triangle inequality:
d_(ρ' , _ρ∼[ρ]) ≥ d_(ρ', 1/2^n𝕀)-d_(1/2^n𝕀, _ρ∼[ρ])≥2^n-1/2^n-O(2^-n).
Fixing ε = 1/3 and τ = 1/(n) there is an n_0 such that for all n≥ n_0 we have that d_(ρ' , _ρ∼[ρ]) > 2(τ + ε). Using Lemma <ref> we thus have that learning states from requires Ω(τ^2 · 2^n) queries.
§ FURTHER APPLICATIONS
§.§ Error mitigation
In this section, we show how to use our lower bound to
resolve an open question posed by Quek et al. <cit.>. Therein the authors consider two forms of quantum error mitigation, which they call strong and weak error mitigation. We first describe these two models before stating our result.
An (ε, δ) weak error mitigation algorithm 𝒜 takes an input a series of observables {O_1,…, O_m} satisfying ‖ O_i ‖≤ 1 and outputs a set of values {α_1,…, α_m} such that with probability at least 1-δ we have that
|[O_i ρ] - α_i |≤ε .
An (ε, δ)-strong error mitigation algorithm 𝒜 outputs a bitstring z sampled from a distribution P such that, with probability at least 1-δ, (P,P_ρ)≤ε. Here P_ρ is the distribution on the computational basis induced by the state ρ, i.e. P_ρ(x) = [|x⟩⟨x|ρ].
In both cases, we assume that the algorithm is given classical descriptions of both the preparation and noise channels resulting in ρ. Further, can make measurements on multiple copies of ρ at once.
For some forms of error mitigation it may be interesting to consider not just allowing the algorithm to query the circuit U_𝒞 but also modified circuits U_𝒞'. However this can be subsumed into the framework of weak error mitigation as given. To return an estimate of O for U_𝒞' the algorithm returns an estimate of U_𝒞U_𝒞'^† O U_𝒞'U_𝒞^† from the original circuit.
In <cit.> the authors show that strong error mitigation implies weak error mitigation for local observables. They then prove a partial converse and show that for a restricted family of observables weak error mitigation cannot recover strong error mitigation (for polynomial-sized inputs). The question of an unconditional separation is left open. Here we will show that Theorem <ref> closes this open question and implies that weak error mitigation with polynomial numbers of observables does not suffice to recover strong error mitigation. First, note that by definition weak error mitigation outputs queries with tolerance τ = ε. To match our notation, we will continue by using τ instead of ε. This is the equivalent of <cit.>. Next we have the following theorem from their work.
<cit.>
For a class of distributions 𝒬 = {q_1,…, q_k} and ε, δ > 0 there is an algorithm which takes O(log|𝒬|/ε^2) samples from a target distribution p (not necessarily in 𝒬) and outputs a q^* ∈𝒬 such that
(p,q^*) ≤ 3min_i∈ [k](p,q_i) + ε .
With these tools we can now prove a separation between strong and weak error mitigation.
Let be an algorithm that takes as inputs the estimates for weak error mitigation with τ = 1/(n) and outputs O(n^2) samples from some distribution P such that (P,P_ρ)< 1/20 with high probability. Then, requires estimates of Ω(τ^2· 2^n/2) distinct observables.
We show that such samples would give one the ability to exact learn quadratic polynomial states with polynomial queries, contradicting Theorem <ref>. Let P_A denote the distribution on the computational basis induced by |ψ_A⟩ = 1/√(2^n)∑_x |x⟩⊗|x^⊤ A x⟩. For this concept class we can directly identify an example state with the distribution it induces on the computational basis and vice versa.
Let's assume that such an algorithm does exist. Using Theorem <ref>, and noting that log|| = Θ(n^2),
we can obtain a P_B such that
(P,P_B) ≤ 3min_B'∈𝒞(P,P_B') + 1/20.
By the assumption upon P, we have that (P, P_A) < 1/20, where A is the true concept. Thus, (P,P_B) < 1/5. For A≠ B we have that
(P_A, P_B) = 1/2∑_x,y| P_A((x,y)) - P_B((x,y))|
= 1/2^n+1∑_x,y|δ[x^⊤ A x = y] - δ[x^⊤ B x =y]| = _x∼1^n[x^⊤ A x ≠ x^⊤ B x] ≥ 1/4,
where the last inequality follows from Fact <ref>. For A ≠ B, by the triangle inequality, we have that
1/4 ≤(P_A, P_B) ≤(P,P_A) + (P,P_B) < 1/4 .
Thus we must have that A^* = B and the true distribution (and function/state) can be recovered. This implies that the inputs to could have been used as queries to solve the approximate state learning problem of Theorem <ref>. By the hardness of this problem requires Ω(τ^2· 2^n/2) distinct observables as inputs.
§.§ Learning distributions
In this section, we consider the following setup of statistical query learning that was considered in the work of <cit.>. Let U be a unitary and consider the induced distribution P_U on the computational basis, i.e.,
P_U(x)=⟨ x| U|0^n⟩^2.
In <cit.> they considered learning algorithms that were given access to the following: for ϕ:1^n→ [-1,1] and τ∈ [0,1],
: (ϕ,τ)→α_ϕ∈[_x∼ P_U[ϕ(x)]+τ,_x∼ P_U[ϕ(x)]-τ].
The goal of the learning algorithm is to learn P_U upto total variational distance ≤ε by making (n) many queries each with tolerance τ=1/(n). Hinsche et al. <cit.> showed the hardness of learning the distribution P_U when U is a Clifford circuit of depth ω(log n) and recently Nietner et al. <cit.> showed that if U is a depth-Ω(n) circuit where each gate is picked from U(4), then P_U is not learnable using just queries.
In this section we consider a stronger question. One can also just directly look at the quantum state |ψ_U⟩=U|0^n⟩ and ask how many queries of the form
: (M,τ)→α_M∈[⟨ψ_U|M|ψ_U⟩+τ,⟨ψ_U|M|ψ_U⟩-τ].
suffice to learn P_U upto small trace distance? Note that the learning model in <cit.> is a strict restriction of this model, cause one could just consider M=∑_x ϕ(x)|x⟩⟨x|, then
⟨ψ_U|M|ψ_U⟩=∑_x ϕ(x)⟨ x|U|0^n⟩^2=∑_x ϕ(x)P_U(x)=_x∼ P_U[ϕ(x)],
which is precisely α_ϕ. To this end, we first generalize <cit.> in the following theorem.
For constant α∈ (0,1), there is a family of n-qubit circuits consisting of {,,} gates of depth d=(log n)^1/α and size d^2 that requires 2^Ω(d) queries to learn the output distribution in the computational basis to error ≤ 0.00125 in total variational distance.
Consider the padded states |ψ_A⟩⊗|0⟩^⊗ k(n) we considered in Theorem <ref> where {|ψ_A⟩=1/√(2^n)∑_x |x,x^⊤ A x⟩}_A.
Using Fact <ref> learning the output distributions of these states below total variational distance 0.00125 implies the existence of an algorithm learning the states up to trace distance 0.05. However, we know that doing so requires 2^Ω(d) queries using Theorem <ref>. Thus, learning the output distributions requires at least 2^Ω(d) queries as well.
We next prove a generalization of <cit.>. Before that, we need the following result.
<cit.>
There exists a d = O(n) such that for any circuit depth d' ≥ d and any distribution Q over {0,1}^n, we have that
_U ∼μ_d'[(P_U, Q) ≥1/225] ≥ 1-O(2^-n),
where μ_d' indicates the uniform distribution over circuits of depth d'.
Let be an algorithm that makes (τ) queries and with high probability learns the output distributions of O(n)-depth random circuits, to error ≤ 1/225 in total variational distance, then must make Ω(τ^2 · 2^n) such many queries.
This is a generalization of <cit.> and follows by a similar analysis to their lower bound. For d ≥ 3.2(2+ln 2)n+ln n we have that the uniform distribution over depth d random circuits is a 2^-n approximate 2-design <cit.>. Like we saw in the proof of Theorem <ref>, for every observable ‖ M ‖≤ 1, we have that _ρ∼([Mρ]) = O(2^-n). We proceed with the same adversarial lower bound. Upon making a query with observable M, the adversary responses with _ρ∼[[M ρ]]. Using Chebyshev's inequality,
_ρ∼[|[Mρ] - [M_ρ∼[ρ]] | > τ] = τ^-2· 2^-n .
While the proof could continue using Theorem <ref> (by reducing to the many-vs-one decision problem), it is more direct to note that the above inequality implies that every deterministic algorithm cannot identify the correct ρ∈ for a large fraction of states in .
In particular, say is a deterministic algorithm that outputs an estimate of a distribution Q such that (P_ψ, Q) < 1/225 and uses at most t (τ) queries. Using Eq. (<ref>), there is a fraction of ' of measure at least 1-O(t·τ^-2· 2^-n) that are consistent with [M_i _ρ∼[ρ]] for the (τ) queries {M_1,…,M_t} made by . Since is deterministic, it must output the same distribution Q for all ρ∈'. We now use Theorem <ref> to claim that there is a large set of states that are both consistent with [M_i _ρ∼[ρ]] and far from Q in total variational distance.
_ρ∼[ρ∈' (P_ρ, Q) ≥ 1/225] = 1-_ρ∼[ρ∉' (P_ρ, Q) < 1/225] ≥ 1 - O(t·τ^-2· 2^-n) ,
where the inequality follows from the union bound, Theorem <ref>, and the concentration of measure shown above. Thus, there is a set ” of measure at least 1-O(t·τ^-2· 2^-n) that is both consistent with [M_i _ρ∼[ρ]] for all queries M_i and also (P_ρ, Q) ≥ 1/225 for all ρ∈”. Upon the input of a ρ∈”, fails to provide a distribution Q such that (P_ρ, Q) < 1/225. For any constant success probability 1-δ there is then some n_δ such that for n ≥ n_δ must fail on a set of measure strictly greater than δ. Using Yao's Principle, thus any randomized algorithm using t∈(n) queries must fail with probability strictly greater than δ/2 for n sufficiently large. Thus, must use t= Ω(τ^2· 2^n) queries.
alpha
§ UPPER AND LOWER BOUNDS ON SEARCH PROBLEMS
Here we use to extend to statistical dimension to search problems. First, a definition of a general search problem over quantum states:
Let be a closed set of quantum states and ℱ some set. Then a decision problem is a mapping : → 2^ℱ[We implicitly assume that maps concepts to measurable subsets of ℱ with respect to some σ-algebra on ℱ.]. An algorithm is said to solve the search problem if upon the input of a state ρ∈ it returns f ∈(ρ).
Learning quantum states can be cast in this framework using the mapping (ρ) = {σ | (ρ,σ) ≤}. In fact, all of the lower bounds for learning in the main text could be recast in this framework. However, this was over kill for our goals in the main text. Lastly, much more than learning can be cast in this framework. For example, solving the hidden subgroup problem is a search problem.
We now define a quantity we dub the quantum search statistical dimension, or . This is a natural quantization of that given in <cit.>, using which we just developed.
Let be a search problem over a closed set of states and a solution set ℱ. And by 𝒮^ℱ we denote the space of probability distributions over ℱ. Then, for success probability α the quantum search statistical dimension is:
_τ^α() = sup_σinf_μ∈𝒮^ℱ(\_α(μ),σ) ,
where _α(μ) := {ρ∈ | μ((ρ) ≥α}.
Think of μ as representing a distribution of solutions an algorithm outputs. Thus, represents the hardness of distinguishing the concept class from a reference state given that the algorithm responds with a solution f drawn from μ upon receiving queries consistent with σ. This is formalized in the following theorem, whose proof quantizes <cit.>.
[Searching is as hard as deciding]
For any quantum search problem , τ >0, and success probabilities 1-δ > α > 0, we have that
_τ^δ() ≥(1 - δ/1-α) ·_τ^α() .
We show that the existence of an algorithm solving the search problem also implies the existence of a cover. Say that solves with probability at least 1-δ using at most q queries and let d = _τ^α(). From the definition of _τ^α there is a σ such that (\_α(μ),σ) is arbitrarily close to d for any distribution μ on the solution space. Further, assume that receives [Mσ] upon querying M whenever this is a valid response. Let f be a random variable (with distribution μ) corresponding to the output of in such a scenario upon receiving responses {[M_i σ]}_i and let
p_ρ = _[∃ i∈ [q], |[M_i(ρ-σ)] | > τ],
which is the probability over the randomness of that it can distinguish ρ from σ. Say that ρ∉_α(μ). If [M_iσ] was a valid answer for all queries, then draws a solution according to μ. However, by construction this solution is in (ρ) with probability strictly smaller than α. However, we assume that fails with probability at most δ. Thus, we have that (1-p_ρ)(1-α) ≤δ, implying that p_ρ≥ 1 - δ/1-α. Let M̂ be a random variable with a pdf constructed by taking a uniform average of the distributions of the queries M_i made by the algorithm (which are random variables) assuming that all queries are given responses [Mσ][Thus, the distribution over queries is not dependent on the input state ρ.]. Then we have that
∀ρ∈, [|[M̂(ρ-σ) | > τ] ≥p_ρ/q≥1-δ/1-α/q .
By Lemma <ref> this implies that _τ(\_α(μ),σ) ≤q/1-δ/1-α proving the theorem statement.
Remarkably, also upper bounds query complexity, which we prove below. Our proof quantizes the proof by Feldman <cit.>.
[Searching is not much harder than deciding]
For any quantum search problem , τ > 0, 0< δ < 1, and 0 < β, α < 1 such that 1-δ= α-β, we have that:
_τ / 3^δ() = O(_τ^α() ·n/τ^2·log(n/τ^2·β)) .
Let ρ∈ be the true state. The online learning algorithm of <cit.> states that one can update a reference state σ_t such that the total regret is upper bounded by 2L√((2 ln 2) Tn), where L is the Lipschitz constant of the loss function and T is the total number of updates. Let E_t be the POVM element the online learning algorithm receives at step t. Then choosing the loss function ℓ_t([E_t σ_t]) = |[E_t (σ_t - ρ)]| results in L=2 and the total regret being exactly ∑_t=1^T |[E_t (σ_t - ρ)]|. We now show that the definition of allows to find a series of POVM elements {E_t}_t=1^T such that σ_t is updated in a manner sufficient to solve the search problem.
Let σ_0 = 1/2^n𝕀 and d = _τ^β. By definition of , there exists a measure μ_0 on ℱ such that (\_δ(μ_0), σ_0) ≤ d. By definition of , there then exists a distribution η over observables such that
_M ∼η[|[M(ρ' - σ_0)]| > τ] ≥1/d
for all ρ' ∈\_δ(μ_0). Assume now that ρ∈\_δ(μ_0). Draw d·ln(T/δ) samples from η, denoted by {M_i}_i. Query each M_i with tolerance τ / 3 and let v_i be the response. Let i be such that |[M_i(ρ-v_i)]| > 2τ/3. This will hold if [M_i(ρ-σ_0) > τ], which fails to occurs with probability at most (1-1/d)^d·ln(T/β)≤β/T. Define E_0 = M_i + 𝕀/2. Then we have
|[E_0 (ρ-σ_0)]| = 1/2|[M_i (ρ -σ_0)]| = 1/2| ([M_iρ]-v_i) +(v_i- [M_iσ_0)] > τ/6 .
Now update σ_0 using the online learning algorithm and POVM element E_0. Continue this process by using the definition of to find some distribution over observables and solutions, sample that distribution, and update σ_t with E_t. The process stops when an M_i distinguishing ρ from σ_t is not drawn, which occurs with some small probability of error or if ρ∈_δ(μ_t).
In each step the online learning algorithm incurs a loss of at least τ / 6, and thus a total regret of at least T·τ / 6. However, since the total regret is upper bounded by 4√((2log 2) Tn), this implies that T ≤ (1152 ln 2)n/τ^2. When outputs a solution, if ρ∈_α(μ_T) then succeeds with probability at least α. If ρ∉_α(μ_T) then the algorithm must have missed an update at some step. This occurs with probability at most β. Thus, the algorithm succeeds with probability at least α-β = 1-δ.
It is worth noting that this result holds in an information-theoretic sense. The operators used to achieved such a query complexity may not efficiently constructed. Further, the theorem is not constructive. Nevertheless, this shows that characterizes the complexity of search problems.
§ ALTERNATIVE PROOFS USING VARIANCE AND YAO'S PRINCIPLE
Our main separation between and involved using the variance method to lower bound , which in turn lower bounds the learning problem. However, again by upper bounding the variance we can directly bound the learning complexity rather than going through an intermediate step involving decision problems. While this does not improve the lower bound on query complexity, it improves the constant below which learning is hard. Further, in some sense it is a simpler/more direct proof. In particular, we avoid having to make the assumption that 2τ < min_f∈(ψ_f, _f[ψ_f]) - 2. We give a proof here for reference and then discuss how this technique can be used in general.
The concept class
={|ψ_A⟩=1/√(2^n)∑_x∈1^n|x,x^⊤ Ax (mod 2 )⟩:A∈𝔽_2^n× n}
requires 2^Ω(n) many (1/(n)) queries to learn below error √(7)/8 in trace distance.
In the main text we saw that for any observable M such that ‖ M ‖≤ 1 we have that _A([Mψ_A]) = O(2^-n/2). Say that a deterministic algorithm makes t queries {M_i}_i=1^t of tolerance τ. Then by Chebyshev's inequality and a union bound we have that
_A∼[∃ i: |[M_i (ψ_A - 1/2^n𝕀)] | > τ] = O(t/τ^2 · 2^n/2) .
Since the distribution above is uniform over , this implies that there is a subset of concepts _0 ⊆ of measure at least 1-O(t/τ^2 · 2^n/2) all consistent with the answer 1/2^n[M_i] to each query M_i. Then upon input of any ψ_A ∈_0, outputs the same solution π. We assume that succeeds to output some π' such that (ψ_A, π') ≤ on concepts not A ∉_0. However, as we will now show, must fail to do so for almost all A ∈_0. Recall that for pure states we have that (ψ_A, ψ_B) = √(1-|⟨ψ_A|ψ_B⟩|^2) and that for function states ⟨ψ_f|ψ_h⟩ = _x∼1^n[f(x) = h(x)]. Using Fact <ref> we have that _x∼1^n[f_A(x) = f_B(x)] ≤3/4 and thus that (ψ_A, ψ_B) ≥√(7)/4 if A ≠ B. Now assume that there is some A ∈_0 such that (ψ_A, π) < √(7)/8. Then for all B ∈_0 such that A ≠ B by triangle inequality we have that
(ψ_B,π) ≥(ψ_A,ψ_B) - (ψ_A,π) > √(7)/8 .
Thus, fails on all other concepts in _0. Since the measure of _0 is at least 1-O(t/τ^2 · 2^n/2), if t = o(τ^2 · 2^n/2) there is some n_δ such that for all n > n_0 we have that the measure of _0 is at least δ for any constant 0 < δ < 1. That is, for any 0 < δ < 1 we must have that fails on at least a δ-fraction of the concepts for n large enough. By Yao's Principle, any randomized algorithm that succeeds with probability at least 1-δ/2 must then use Ω(τ^2 · 2^n/2) queries of tolerance τ.
Note that this then in turn improves theorem <ref> and theorem <ref> to hold for learning the states below trace distance √(7)/8 and total variational distance 7/128 respectively.
It is not hard to see how this technique generalizes to other concept classes. Say that is some concept class such that _ρ∼([Mρ]) ≤ v. Then for t = o(τ^2 · v) queries there are adversarial responses such that any deterministic algorithm must output the same answer for a set of measure δ for any constant 0 < δ < 1. Denote this set by _0. Assume further that for any fixed state π we have that _ρ∼_0[(ψ_f, π) > ] ≥δ'. Then we have that fails to output a state below trace distance for a set of measure at least δ'·δ. By Yao's principle, then any randomized algorithm must use Ω(τ^2 · v) queries of tolerance τ to output a π such that (ψ_f, π) < with probability at least 1-δ' ·δ/2. When it is easy to prove that any output must fail on any large subset of concepts this method readily gives direct lower bounds on (). When this property is hard to verify, going through may be easier in practice.
|
http://arxiv.org/abs/2306.02064v1
|
20230603093616
|
Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training
|
[
"Pucheng Dang",
"Xing Hu",
"Kaidi Xu",
"Jinhao Duan",
"Di Huang",
"Husheng Han",
"Rui Zhang",
"Zidong Du",
"Qi Guo",
"Yunji Chen"
] |
cs.CV
|
[
"cs.CV"
] |
TOP QUARK PROPERTIES AT ATLAS AND CMS
Jan van der Linden
July 31, 2023
=====================================
Unlearning techniques are proposed to prevent third parties from exploiting unauthorized data; they generate unlearnable samples by adding imperceptible perturbations to data before public release. These unlearnable samples effectively misguide model training into learning perturbation features while ignoring image semantic features.
We conduct an in-depth analysis and observe that models can learn both image features and perturbation features of unlearnable samples at an early stage, but then rapidly move into an overfitting stage because the shallow layers tend to overfit on the perturbation features and quickly drag the whole model into overfitting. Based on these observations, we propose Progressive Staged Training to effectively prevent models from overfitting on perturbation features. We evaluate our method on multiple model architectures over diverse datasets, e.g., CIFAR-10, CIFAR-100, and ImageNet-mini. Our method circumvents the unlearnability of all state-of-the-art methods in the literature and provides a reliable baseline for further evaluation of unlearnable techniques. The code is available at <https://github.com/CherryBlueberry/ST>.
§ INTRODUCTION
Deep neural networks (DNNs) significantly boost computer vision techniques in the past decade and achieve even better capabilities surpassing human beings. One of the most important factors contributing to the great success comes from the representative and valuable data collected for training. Not only influential public datasets such as CIFAR <cit.> and ImageNet <cit.> are proposed for benchmark evaluation, but also huge volumes of scenario-sensitive data are used for real model training and deployment <cit.>. Data has become one type of important asset and it is crucial to ensure authorized access to sensitive data.
Previous studies propose “unlearnable sample” techniques aiming to address the issue of unauthorized access to sensitive data <cit.>. The unlearnable samples are produced by injecting carefully-crafted imperceptible perturbations into the training data, which induce models to learn meaningless perturbation features instead of the valuable semantic features of natural images. Error-minimizing (EM) <cit.> perturbations are produced by an alternating bi-level min-min optimization over both model parameters and perturbations. However, EM perturbations offer no defense against adversarial training, which adds perturbations to the input during training to improve the robustness of models. To solve this problem, robust error-minimizing (REM) <cit.> perturbations incorporate adversarial training while generating unlearnable perturbations, which grants the perturbations the ability to resist adversarial training. <cit.> found that perturbations act as shortcuts that mislead models into a bad state, and proposed synthetic perturbations (SP), which generate unlearnable perturbations without the error-minimization optimization. To improve the transferability of unlearnable perturbations, <cit.> proposed transferable unlearnable samples, which protect data privacy from unsupervised learning. One-pixel shortcut (OPS) <cit.> uses a perceptible pixel as the perturbation to protect data. The state-of-the-art study <cit.> shows strong protection, lowering the test accuracy of ResNet-18 on CIFAR-10 from 92.41% to 21.71%.
Figure: The training and test accuracy of a ResNet-18 trained on clean data (M_c) and unlearnable data (M_u).
In this work, we differentiate the features learned from unlearnable samples and natural samples and propose powerful techniques to effectively break the protection of superior unlearnable perturbation generators. The analysis sheds light on data property protection and urges new protection mechanisms. There are two kinds of features in unlearnable samples: perturbation features and image semantic features. The former leads to serious overfitting, while the latter generalizes validly to test data (Fig. <ref>).
Specifically, we make the following observations: 1) Model training at the early stage learns both the perturbation features and the image semantic features. Then the model quickly overfits, learning only the perturbation features in the subsequent training.
2) Shallow layers tend to overfit on perturbation features and make the models fall into overfitting quickly. When shallow layers are overfitting, undesirable activations pass through them and corrupt the deep layers toward overfitting. The intrinsic reason behind this phenomenon is that unlearnable perturbations set up “fake” and simple features, and the model prefers these features over image semantic features because the former make it easier to optimize the learning objective.
Based on these observations, we show that models have the ability to learn valid semantic features from unlearnable samples at the early training stage, and that stopping the overfitting of shallow layers greatly contributes to stopping the overfitting of the whole model. In this case, we propose a training framework, progressive staged training (ST), to effectively prevent models from overfitting on perturbation features. ST estimates whether the model is in an overfitting state and trains it with a progressive staged learning rate adjustment strategy. To quantitatively evaluate whether the model is in an overfitting state, we propose the Activation Cluster Measurement (ACM), based on the insight that perturbation features have a larger inter-class distance and a smaller intra-class distance <cit.> in a low-dimensional space. To demonstrate the efficacy of ST, we conduct comprehensive experiments with multiple model architectures on CIFAR-10, CIFAR-100, and ImageNet-mini. Experimental results show that ST significantly defeats existing unlearnable techniques and provides a reliable baseline for further evaluation of unlearnable samples.
We summarize our contributions as follows:
* We show insights that models are capable of learning valid image semantic features from unlearnable samples at the early stage of training. Preventing overfitting in shallow layers significantly contributes to mitigating overfitting in the entire model.
* We propose a learning framework, ST, to prevent models from being overfitted to the features forged by unlearnable perturbations, with the help of ACM metric for quantitatively evaluating whether the model is in the overfitting state.
* We perform comprehensive experiments to show that our ST framework breaks all state-of-the-art unlearnable samples in the literature, on multiple model architectures across various datasets.
§ RELATED WORK
Data Poisoning Attack.
Data poisoning involves altering the training data in order to negatively impact the performance of models. Typically, the modified, poisoned examples only make up a portion of the entire dataset, and are easily identifiable as being notably modified <cit.>.
Unlearnable Sample.
As a special case of poison attack, unlearnable sample <cit.> shares similar mechanisms of data poisoning attacks. However, unlearnable sample is a kind of data protection method that adds imperceptible perturbations to the entire dataset. Suppose that {(𝐱_i,𝐲_i)}_i=1^n is the training dataset with the input data 𝐱_i∈𝐗 and the associated label 𝐲_i∈𝐘={1,2,⋯,k}. We assume that the unauthorized parties will use the published training dataset to train a classifier f_θ:𝐗→𝐘 with parameter θ. Error-minimizing (EM) <cit.> perturbations are produced by an alternating bi-level min-min optimization on both model parameters and perturbations:
min_θ𝔼_(𝐱_i,𝐲_i)[min_δ_iℒ(f_θ(𝐱_i+δ_i),𝐲_i)]
where ℒ is the cross entropy loss and ‖δ_i‖_∞<ϵ. Being induced to trust that the perturbations can minimize the loss better than the original image features, the model pays more attention to the perturbations. However, EM perturbations offer no defense against adversarial training, which adds random perturbations to the input during training to improve the robustness of models. To solve this problem, robust error-minimizing (REM) <cit.> perturbations incorporate adversarial training while generating unlearnable perturbations, which grants the perturbations the ability to resist adversarial training:
min_θ𝔼_(𝐱_i,𝐲_i)[min_δ_imax_σ_iℒ(f_θ(𝐱_i+δ_i+σ_i),𝐲_i)]
where ‖σ_i‖_∞<ϵ_a and ‖δ_i‖_∞<ϵ_u. Yu et al. found that perturbations act as shortcuts that mislead models into a bad state, and proposed synthetic perturbations (SP) <cit.>, which generate unlearnable perturbations without the error-minimization optimization. In order to improve the transferability of unlearnable perturbations, Ren et al. proposed transferable unlearnable samples <cit.>, which protect data privacy from unsupervised learning. One-pixel shortcut (OPS) <cit.> uses a perceptible pixel as the perturbation to protect data.
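As a rough illustration of the alternating min-min optimization behind EM, the following PyTorch-style sketch shows one alternating round (a schematic reading of the objective above, not the authors' implementation; the data loader is assumed to also yield sample indices so that the per-sample perturbations δ_i can be stored):

import torch
import torch.nn.functional as F

def em_min_min_round(model, loader, deltas, epsilon=8/255, pgd_steps=10,
                     pgd_lr=1/255, model_lr=0.1, device="cuda"):
    """One alternating round of the error-minimizing (min-min) optimization:
    (1) update the per-sample perturbations delta_i to *minimize* the loss,
    (2) update the model parameters theta on the perturbed data."""
    opt = torch.optim.SGD(model.parameters(), lr=model_lr, momentum=0.9)

    # (1) inner minimization over the perturbations (sign-based gradient descent)
    model.eval()
    for x, y, idx in loader:                      # idx indexes the per-sample deltas
        x, y = x.to(device), y.to(device)
        delta = deltas[idx].to(device).requires_grad_(True)
        for _ in range(pgd_steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta -= pgd_lr * grad.sign()     # descend: minimize the loss
                delta.clamp_(-epsilon, epsilon)   # keep the L-infinity constraint
        deltas[idx] = delta.detach().cpu()

    # (2) outer minimization over the model parameters
    model.train()
    for x, y, idx in loader:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(model(x + deltas[idx].to(device)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return deltas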
§ METHOD
We first expose the observations on the training process over unlearnable samples and propose two insights on the learning process of models trained on unlearnable samples (Sec. <ref>). Then, based on these insights, we propose progressive staged training (ST), the first effective training framework to make unlearnable samples learnable. We further propose Activation Cluster Measurement (ACM), an indicator to identify whether or not the entire model is overfitting. We also investigate a special color space transformation (color-jitter and gray-scale) that promotes the performance of ST. The overall pipeline of ST to defeat unlearnable samples is presented in Sec. <ref>.
§.§ Insights
Insight 1: The model learns both image semantic features and perturbation features at the early stage before being trapped in the overfitting state, which provides a "golden window" for image feature learning.
To demonstrate this insight, we show the learning process of a ResNet-18 on unlearnable CIFAR-10 in Fig. <ref> (more results for other models and datasets can be found in Appendix Sec. <ref>). Both the training accuracy and the test accuracy increase during the first several epochs, indicating that the model learns valid image semantic features. However, after epoch 3 the training accuracy increases sharply while the test accuracy significantly decreases, indicating that the model is overfitting and trapped in learning the unlearnable perturbation features. In summary, a model is capable of learning valid semantic features from unlearnable examples at the early stage of training. As training goes on, the model learns more perturbation features than valid image semantic features and is eventually trapped by the unlearnable samples.
Although there are several methods to prevent models from overfitting in natural training <cit.>, these methods do not account for such strong unlearnable perturbations and perform poorly at defeating unlearnable samples (Sec. <ref>). In this work, we take the gap between learning image semantic features and learning perturbation features into consideration and train the model under the monitoring of an overfitting indicator to prevent it from being trapped.
Insight 2: Perturbation features tend to overfit shallow layers rather than deep layers, because the former can trap the models easily.
To show this, we first trained a ResNet-18 <cit.> model on clean CIFAR-10 <cit.>, denoted by M_c, which achieves 92.41% accuracy on the CIFAR-10 test data. We randomly initialized a ResNet-18 and replaced its first two residual blocks with the first two residual blocks of M_c.
Then we froze the parameters of these replaced blocks and denote this ResNet-18 by M^S. Similarly, we randomly initialized another ResNet-18, replaced its last two residual blocks with the last two residual blocks of M_c, froze the parameters of these replaced blocks, and denote this ResNet-18 by M^D (details in Fig. <ref> (a); more results for other models and datasets can be found in Appendix Sec. <ref>). Finally, we trained M^S and M^D individually on CIFAR-10 unlearnable data. Training results are presented in Fig. <ref> (b). M^S performs better on the CIFAR-10 test data, which shows that shallow layers are capable of learning the correct features at the beginning of training (like M^S). However, when shallow layers are overfitting, even if the deep layers are in the right state (like M^D), the shallow layers pass wrong activations and corrupt the model toward overfitting. Therefore, preventing shallow layers from overfitting is crucial to break the protective ability of unlearnable samples. This phenomenon inspired us to propose a progressive staged training strategy that adjusts the learning rate to gradually slow down the learning of shallow layers and avoid overfitting.
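The construction of M^S and M^D can be sketched as follows in PyTorch (a sketch only; torchvision's ResNet-18 groups the residual blocks into the stages layer1–layer4, so the split into "first two" and "last two" blocks is approximated by these stage names):

import torch
from torchvision.models import resnet18

def copy_and_freeze(clean_model, stage_names, num_classes=10):
    """Randomly initialized ResNet-18 whose listed stages are copied from the
    clean model M_c and frozen; the remaining stages stay trainable."""
    m = resnet18(num_classes=num_classes)
    state = clean_model.state_dict()
    # copy only the weights belonging to the chosen stages
    m.load_state_dict({k: v for k, v in state.items()
                       if k.split(".")[0] in stage_names}, strict=False)
    for name, p in m.named_parameters():
        if name.split(".")[0] in stage_names:
            p.requires_grad_(False)          # freeze the copied stages
    return m

# M^S: shallow stages from M_c are frozen; M^D: deep stages from M_c are frozen.
# M_S = copy_and_freeze(M_c, ["conv1", "bn1", "layer1", "layer2"])
# M_D = copy_and_freeze(M_c, ["layer3", "layer4"])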
§.§ ST Training Framework
With the insights above, we propose progressive staged training (ST), a staged training strategy that gradually modifies the learning rates of different layers, from shallow layers to deep layers, when the model shows a tendency to overfit. To decide when to modify the learning rates, we propose the Activation Cluster Measurement (ACM) as an indicator that detects the tendency to overfit caused by unlearnable perturbations during training. Meanwhile, a novel learning rate scheduling algorithm is proposed to gradually slow down the learning of shallow layers to resist overfitting.
Overfitting Indicator.
We propose a metric to quantitatively estimate whether the model is in a perturbation-overfitting state, based on the observation that unlearnable perturbations construct simple features, and models are known to be “lazy” and prefer to learn simple unlearnable perturbation features rather than difficult image semantic features <cit.>.
To make the above analysis clear and visual, we present the t-SNE <cit.> clustering results of the penultimate-layer activations at different training epochs. As shown in Fig. <ref>, when a model is trained on unlearnable samples, the activations cluster poorly at early epochs, i.e., the intra-class distance is large and the inter-class distance is small. As training goes on, the activations begin to cluster well (a small intra-class distance and a large inter-class distance), which is a symptom of overfitting (other t-SNE results can be found in Appendix Sec. <ref>). To quantify this phenomenon, we propose a clustering measure named Activation Cluster Measurement (ACM) to describe the disorder of the activations at different training epochs. Suppose that D={D_i}_i=1^k is a dataset with k labels, and D_i is the set of samples in class i. We define the set A_i^M as the penultimate-layer activations of model M when it is fed the samples in D_i.
The cluster center of class i is defined as C(i)=1/|A_i^M|∑_𝐱∈A_i^M𝐱. Then we have the intra-class distance σ for class i as:
σ(i)=1/|A_i^M|∑_𝐱∈A_i^M‖𝐱-C(i)‖_2
And the inter-class distance L between class i and j is as follows:
L(i,j)=min{‖𝐱-𝐲‖_2}_𝐱∈ A_i^M,𝐲∈ A_j^M
Then we define ACM of model M on dataset D as:
ACM(M,D)=1/(k(k-1)) ∑_{i,j=1, i≠ j}^{k} L(i,j)/(R(i)σ(i)+R(j)σ(j))
where k is the number of classes and R(i) is the radius of class i: R(i)=max{‖𝐱 - C(i)‖_2}_𝐱∈A_i^M. The ACM of the model trained on clean data (M_c) and of the model trained on unlearnable data (M_u) at different training epochs is shown in Fig. <ref> (c). We use D^s, a subset of the training dataset, as the validation set to calculate the ACM metric during staged training (details in <ref>). The ACM of M_u is low at the beginning and then rises to a high level, which is consistent with our previous observation in Fig. <ref> that the inter-class distance becomes larger and the intra-class distance becomes smaller. Meanwhile, the gradient mean value (GMV) of the model trained on unlearnable data falls to a low level, which illustrates that the shallow layers are trapped in an overfitting state and hardly learn new knowledge at late epochs, compared to the model trained on clean data.
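For concreteness, the ACM definition above can be transcribed directly into NumPy (a sketch only; acts is assumed to be a dictionary mapping each class label to the array of penultimate-layer activations A_i^M):

import numpy as np

def acm(acts):
    """Activation Cluster Measurement.
    acts: dict {class i: array of shape (n_i, d)} of penultimate-layer activations."""
    labels = sorted(acts)
    k = len(labels)
    centers = {i: acts[i].mean(axis=0) for i in labels}                  # C(i)
    sigma = {i: np.linalg.norm(acts[i] - centers[i], axis=1).mean()      # intra-class distance sigma(i)
             for i in labels}
    radius = {i: np.linalg.norm(acts[i] - centers[i], axis=1).max()      # radius R(i)
              for i in labels}

    def inter(i, j):
        # L(i, j): minimum pairwise distance between the two classes
        # (quadratic in the number of samples; fine for a small validation subset)
        diffs = acts[i][:, None, :] - acts[j][None, :, :]
        return np.linalg.norm(diffs, axis=-1).min()

    total = sum(inter(i, j) / (radius[i] * sigma[i] + radius[j] * sigma[j])
                for i in labels for j in labels if i != j)
    return total / (k * (k - 1))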
Figure: Different τ^e values for α^e_i in Eq. <ref> with β=1/5, l=20.
Progressive Staged Training.
Based on the observations above, we propose progressive staged training (ST), a novel staged training framework to defeat unlearnable samples. For a given model M with l layers and a dataset D, natural training (NT) traditionally goes through three steps in each epoch: learning rate attenuation, forward propagation, and back propagation. In contrast, ST goes through five steps in each epoch: learning rate attenuation, learning rate adjustment, forward propagation, back propagation, and ACM calculation. Once ACM indicates overfitting at the end of an epoch, the model rolls back to the checkpoint of the previous epoch. Then, after learning rate attenuation, the learning rate is further modified by an adjustment algorithm that gradually slows down the learning of shallow layers to resist overfitting. Suppose the initial learning rate of the i-th layer of model M is η̂^0_i and Atten is a learning rate attenuation function such as momentum <cit.> or cosine <cit.> decay. In every epoch, we first compute an attenuated learning rate η̂^e_i=Atten(η̂^e-1_i), where e denotes the e-th epoch. Then we obtain a new learning rate η^e_i by adjusting η̂^e_i with α^e_i (i denotes the i-th layer) as follows.
η^e_i=α^e_i·η̂^e_i
α^e_i = 1 - 1/(1+e^{i/(βτ^e)-l})
where τ^e counts how many times ACM has exceeded the threshold γ from the beginning of training up to epoch e, and β is a hyperparameter. We use a vector 𝐇^e=[α^e_1,α^e_2,…,α^e_l] to describe how the learning rate adjustment algorithm defeats overfitting, as shown in Fig. <ref>. At the beginning of training we set 𝐇^0=[1,1,…,1], which is equivalent to natural training, and all layers have the same learning rate. As training goes on, the model tends to overfit and the counter τ^e increases. Meanwhile, following the components of 𝐇^e, the learning rate of each layer gradually decreases to zero, which means the optimization gradually slows down from shallow layers to deep layers to resist overfitting. The process of ST is described in Alg. <ref>.
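A minimal sketch of one ST epoch built around this adjustment is given below (PyTorch-flavoured pseudocode; the helpers train_one_epoch and acm_val_fn, the grouping of parameters by layer index, and the rollback handling are simplified assumptions rather than the exact Alg. <ref>):

import copy
import math
import torch

def alpha(i, tau, beta=0.25, n_layers=20):
    """Per-layer multiplier alpha_i = 1 - 1/(1 + exp(i/(beta*tau) - l)).
    With tau = 0 no adjustment is applied (alpha = 1)."""
    if tau == 0:
        return 1.0
    z = i / (beta * tau) - n_layers
    return 1.0 if z > 50 else 1.0 - 1.0 / (1.0 + math.exp(z))

def st_epoch(model, layer_params, loader, base_lrs, tau, gamma=2.6e-4,
             acm_val_fn=None):
    """layer_params: list of parameter lists, ordered from shallow (index 1) to deep.
    base_lrs: already-attenuated learning rates (e.g. from a cosine schedule)."""
    n_layers = len(layer_params)
    snapshot = copy.deepcopy(model.state_dict())          # checkpoint for rollback

    # learning rate adjustment, layer by layer
    groups = [{"params": p,
               "lr": base_lrs[i] * alpha(i + 1, tau, n_layers=n_layers)}
              for i, p in enumerate(layer_params)]
    opt = torch.optim.SGD(groups, lr=0.1, momentum=0.9, weight_decay=1e-4)

    train_one_epoch(model, loader, opt)                   # forward + backward passes

    if acm_val_fn(model) > gamma:                         # overfitting detected
        model.load_state_dict(snapshot)                   # roll back this epoch
        tau += 1                                          # shallow layers slow down further
    return tau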
ST Training Pipeline.
We also find that some data augmentations, e.g., color-jitter and gray-scale (CG) <cit.>, can promote the performance of ST. Color-jitter randomly changes the brightness, contrast, saturation, and hue of an image. Gray-scale randomly converts an image to grayscale. Our analysis shows that CG makes unlearnable perturbations harder for models to learn and weaker at data protection, even if these augmentations are included in the EoT process of some unlearnable example generators <cit.> (details in Appendix Sec. <ref>). Based on this, we propose the complete ST training pipeline. In step 1, we train a model with progressive staged training and augmentation CG. In step 2, we fine-tune the model from step 1 with augmentation CG. With this ST training pipeline, we achieve state-of-the-art results in defeating unlearnable samples.
§ EXPERIMENTS
§.§ Experimental Setups
Table: Notions of different training methods, where NT denotes "natural training" which is our baseline, ST denotes our "staged training".
                          NT    NT-CG    ST    ST-CG    ST-Full
Learning rate adjustment   -      -      ✓      ✓        ✓
Augmentation CG            ×      ✓      ×      ✓        ✓
Fine-tuning with CG        -      ×      ×      ×        ✓
Datasets and model architectures.
Three benchmarks, CIFAR-10, CIFAR-100 <cit.>, and ImageNet-mini, a subset of the first 100 classes in ImageNet <cit.>, are used in our experiments, which is consistent with previous works of unlearnable sample <cit.>. We demonstrate the effectiveness of ST with various model architectures including ResNet-18, ResNet-50 <cit.>, WideResNet-28-10 (WRN-28), WideResNet-34-10 (WRN-34) <cit.>, VGG-16-BN <cit.>, and DenseNet-121 <cit.>.
Unlearnable samples. Four state-of-the-art unlearnable sample generation methods are used: Error-Minimizing perturbation (EM) <cit.>, Robust-Error-Minimizing perturbation (REM) <cit.>, Synthetic Perturbation (SP) <cit.>, and One-Pixel Shortcut (OPS) <cit.>. We set the ℓ_∞ bound ‖δ‖_∞<ϵ=8/255 for EM, REM, and SP, which perturbs images as much as possible without being perceptible to humans.
For other hyper-parameters, we follow the default settings according to their official codebases.
Training methods.
To show our effectiveness, we compare the natural training method with 3 different settings of ST (as summarized in Table <ref>). We also adopt adversarial training (AT) with a bound of ℓ_∞≤4/255 as a baseline, since REM uses 4/255 as its bound. Specifically, we implement ST with β set to 1/4 (25%) and γ set to 2.6×10^-4. We use a subset of the training dataset, D^s, containing 1000 samples for CIFAR-10 (100 samples per class) and 5000 samples for CIFAR-100 and ImageNet-mini (50 samples per class), as the validation set to calculate the ACM metric during staged training. More training details can be found in Appendix Sec. <ref>.
Learning Effectiveness.
Table <ref> and Table <ref> show the clean test accuracy of models on CIFAR-10 and CIFAR-100 after being trained with different methods on the perturbed training data.
As pioneering methods, EM-Sample and EM-Class perturbations have limited data protection capability, while improved methods such as REM and OPS show that natural training cannot work well on perturbed data: most accuracies are below 33% on CIFAR-10 and 20% on CIFAR-100. Adversarial training (AT) works well against EM-Sample, EM-Class, and SP perturbations. However, AT still falls far short of our ST training pipeline. In particular, an ST-Full trained ResNet-50 reaches 93.80% on EM-Sample perturbed CIFAR-10, and an ST-Full trained VGG-16-BN reaches 71.30% on EM-Class perturbed CIFAR-100. Meanwhile, the ST training pipeline is as effective on clean data as normal training.
Furthermore, to confirm the effectiveness on high-resolution images, we apply the ST training pipeline to ImageNet-mini. As shown in Table <ref>, ST works well on all four kinds of perturbed data (in particular, ST improves ResNet-18 from 11.10% to 63.34% on SP and DenseNet-121 from 6.58% to 48.70% on EM-Sample). CG augmentation complements ST, and ST-CG achieves a clear boost over ST. Meanwhile, ST-Full works very well on REM and SP perturbations. However, it shows decreased accuracy on EM-Sample and EM-Class with ResNet-18. This is because the protection of these perturbations on high-resolution images is much stronger (for example, a naturally trained ResNet-18 reaches 32.38% on EM-Class perturbed CIFAR-10 in Table <ref> but only 3.95% on EM-Class perturbed ImageNet-mini in Table <ref>).
§.§ Analysis
ST helps to avoid overfitting.
We further illustrate how the ST framework helps to avoid overfitting. We first train a model on REM perturbed data. When this model shows a tendency to overfit (ACM>γ), we either continue with ST training (orange lines in Fig. <ref>) or switch to natural training (blue, green, red lines in Fig. <ref>).
The results show that ST performs well in defeating unlearnable samples. Without the learning rate adjustment algorithm, the model falls back into overfitting, similar to natural training, which shows the necessity of the learning rate adjustment. We plot the channel-wise activation magnitudes of different layers (details in Appendix Sec. <ref>), which verifies that ST successfully resists overfitting. We also draw the loss landscape <cit.> of models naturally trained and ST trained on unlearnable samples, which shows that unlearnable samples trap the naturally trained model in an overfitting state (details in Appendix Sec. <ref>).
Figure: The training and test accuracy of a ResNet-50 ST trained on REM perturbed data (orange), and the accuracy without ST (blue, green, red).
Hyperparameter analysis.
We analyzed the sensitivity of hyperparameter γ and β (details in Appendix Sec. <ref>). 2.6e^-4 is a suitable value for γ, and 1/4 is a suitable value for β.
Different protection percentages.
There is a more realistic learning scenario where only part of the data is protected by the defensive noise while the rest is clean. We use CIFAR-10 and CIFAR-100 perturbed by various unlearnable methods with different mixing ratios. The protection percentage (ratio) represents the proportion of unlearnable samples among all samples (details and results can be found in Appendix Sec. <ref>). ST-Full has exceptional performance at all protection percentages (especially at a protection percentage of 1.0), which reflects the reliability of our method on mixed samples.
Comparison with general counter-overfitting methods.
We also verify the effectiveness of different counter-overfitting methods when naturally training a ResNet-18 on REM perturbed CIFAR-10 in Table <ref> (details in Appendix Sec. <ref>). The results show that ST beats these counter-overfitting methods and defeats unlearnable samples.
§ CONCLUSION
In this paper, we study the mechanism of the open-source data protection method of unlearnable samples, which provides “shortcuts” for models by injecting imperceptible perturbations into the training data, causing them to ignore the correct semantic features and learn incorrect perturbation features instead. We observe that unlearnable samples mislead models into a trap of overfitting to protect the privacy of data. We propose the Activation Cluster Measurement (ACM) to quantify the overfitting degree of a model. Based on that, we propose progressive staged training (ST), a novel staged training framework that gradually slows down the learning process from shallow layers to deep layers and, for the first time, defeats unlearnable samples. Our ST training pipeline achieves extraordinary performance and provides a baseline for further studies on unlearnable samples.
Limitations and Broad Impacts
In this work, beyond the proposed ST, we also find that some data augmentations, e.g., color-jitter and gray-scale (CG) <cit.>, can promote the performance of ST. However, why these augmentations work and break the protective capability of unlearnable perturbations is not yet clear. We will focus on this problem in future work. Meanwhile, techniques that break unlearnable samples can potentially have negative impacts on sensitive data protection. However, by analyzing how models learn image semantic features and perturbation features, our work contributes to the understanding of this research field and provides a reliable baseline for further evaluation of unlearnable samples.
§ EXPERIMENTAL DETAILS
§.§ Data Augmentations
Crop and flip.
Following previous work on unlearnable samples (EM, REM and SP), we perform random flipping on the image in all experiments. Then, we randomly crop the image to 32×32 size for CIFAR-10 and CIFAR-100 and 224×224 size for ImageNet-mini.
Color-jitter and gray-scale
Color-jitter randomly changes the brightness, contrast, saturation, and hue of an image.
Gray-scale randomly converts an image to a grayscale one.
Following traditional data augmentation methods in deep learning, we perform color-jitter on images with 80% probability and gray-scale with 20% probability.
Cutout.
Cutout is an augmentation to randomly mask some areas of images, and we set the mask size to 16.
Gaussian-filter.
Gaussian-filter smooths and blurs an image. We implement it with σ set to 1.5 and kernel-size set to 5.
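Put together, the crop/flip and CG augmentations described above correspond to the following torchvision pipeline (a sketch; the jitter strengths are assumed values, since only the 80% and 20% application probabilities are specified):

from torchvision import transforms

cg_train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),          # 32x32 crop for CIFAR
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply(                        # color-jitter with 80% probability
        [transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),             # gray-scale with 20% probability
    transforms.ToTensor(),
])

# The Gaussian-filter baseline augmentation can be expressed similarly:
gaussian_filter = transforms.GaussianBlur(kernel_size=5, sigma=1.5)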
§.§ Training
For NT, we use the SGD optimizer accompanied by the cosine learning rate decay, with momentum set to 0.9, weight decay set to 10^-4, and learning rate set to 0.1.
For AT, we use the SGD optimizer accompanied by the cosine learning rate decay, with momentum set to 0.9, weight decay set to 10^-4, and learning rate set to 0.1.
For ST series, we use the SGD optimizer accompanied by the cosine learning rate decay, with momentum set to 0.9, weight decay set to 10^-4, and learning rate set to 0.1 for ST and 0.2 for ST-CG.
For fine-tuning with CG, we use the SGD optimizer accompanied by the cosine learning rate decay, with momentum set to 0.9, weight decay set to 10^-4, and learning rate set to 0.3.
All experiments are conducted on 8 GPUs (NVIDIA Tesla A100). We set other hyperparameters to the default values in the open-sourced codes of EM [<https://github.com/HanxunH/Unlearnable-Examples>], REM [<https://github.com/fshp971/robust-unlearnable-examples>], SP [<https://github.com/dayu11/Availability-Attacks-Create-Shortcuts>], OPS [<https://openreview.net/forum?id=p7G8t5FVn2h>].
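For reference, the optimizer settings above correspond to the standard PyTorch construction (a sketch; model and num_epochs are placeholders):

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)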
§ OBSERVATIONS
§.§ Observations for Insight 1
We also show the learning process of a ResNet-50 and a WRN34-10 separately on unlearnable CIFAR-10 and CIFAR-100 in Fig. <ref>. Both the training accuracy and the test accuracy increase during the first several epochs, indicating that the model learns the correct image features. However, as training goes on, the training accuracy increases sharply while the test accuracy significantly decreases, indicating that the model is overfitting and trapped in learning the perturbation features. These observations provide direct evidence for our Insight 1.
§.§ Observations for Insight 2
We also show the learning process of a ResNet-50 and a WRN34-10 separately on unlearnable CIFAR-10 and CIFAR-100. As shown in Fig. <ref>, M^S performs better on test data, which shows that shallow layers are capable of learning the correct features at the beginning of training (like M^S). However, when shallow layers are overfitting, even if the deep layers are in the right state (like M^D), the shallow layers pass wrong activations and corrupt the deep layers toward overfitting. These observations provide direct evidence for our Insight 2.
§ ABLATION STUDY
§.§ Data Augmentation Influences
We propose composite data augmentation strategies to further improve the learning effectiveness on unlearnable samples. In this section, we analyze why CG-transformed image data lowers the chance of overfitting. Such composite data augmentation works effectively even if the augmentation strategy is white-box during the generation of unlearnable examples, because it is hard to build effective unlearnable examples with a more complex perturbation feature space. We implement augmentation CG while generating REM perturbations, and denote these perturbations by REM-T, which are generated as follows:
min_θ𝔼_(𝐱_𝐢,𝐲_𝐢)[min_δ_imax_σ_iℒ(f_θ(t(𝐱_𝐢+δ_i+σ_i)),𝐲_𝐢)]
where ‖σ_i‖_∞<ϵ_a, ‖δ_i‖_∞<ϵ_u and t is the augmentation CG, ℒ is the loss function.
CG works well even if the augmentation strategy is white-box during the generation of unlearnable examples. The reason why REM-T hardly generates effective unlearnable examples is that REM-T produces a much more complex perturbation feature space, hence models have less chance to be trapped in learning REM-T features. To validate this statement, we conduct the following experiment.
We constructed a mixed-perturbation training dataset by adding vanilla REM perturbations on top of the newly generated REM-T perturbations. Then we trained a ResNet-18 model on this mixed-perturbation dataset and separately tested it on vanilla REM perturbations (REM) and REM-T perturbations (REM-T). The training dataset contains 45000 mixed-perturbation samples, and each test dataset contains 5000 perturbation samples. To prevent data leakage, the training dataset and the two test datasets have no perturbation sample in common. As shown in Table <ref>, the accuracy on REM is 85.58% and the accuracy on REM-T is 12.36%, which reflects that REM-T features are harder to learn than vanilla REM features and weaker at data protection. In this case, the model is "lazy": it prefers REM features over REM-T features.
The above analysis provides strong evidence that augmentation CG weakens the data protection of unlearnable samples, even when CG is included in the generation process of the unlearnable perturbations.
§.§ Channel-wise Activation Analysis
To verify the effectiveness of ST, we analyze the channel-wise activations of different layers. We naturally trained a ResNet-18 model separately on clean data (M_c) and on REM perturbed data (M_u). Then we ST trained a ResNet-18 on REM perturbed data (M_s). We analyze the channel-wise mean values of the output activations, from the shallow layers to the deep layers, of models M_c, M_u and M_s when they are fed clean data. As shown in Fig. <ref>, the activation distribution of M_s is similar to that of M_c, while the activation distribution of M_u differs largely from both M_c and M_s. Hence ST indeed resists overfitting and leads the model in the correct direction.
§.§ Hyperparameter Analysis
We analyzed the sensitivity of hyperparameter γ and β. As shown in Fig. <ref>, 2.6 × 10^-4 is a suitable value for γ, and 1/4 is a suitable value for β.
§.§ Different Protection Percentages
There is a more realistic learning scenario, where only a part of the data is protected by the defensive noise, while the others are clean. We used perturbed CIFAR-10 and CIFAR-100 with different mixing ratios. The protection percentage (ratio in Table <ref> and Table <ref>) represents the proportion of unlearnable samples to all samples. As shown in Table <ref> and Table <ref>, ST-Full has exceptional performance on all protection percentages (especially on protection percentage 1.0), which reflects the reliability of our method on mixed samples.
§.§ Settings of general counter-overfitting methods
The data augmentations, including cutout, mixup, cutmix, auto-augment, and gaussian-filter (some implementation details in Appendix Sec. <ref>), are ineffective against unlearnable samples. Then, we implement drop-out (0.5 probability) and weight-decay (WD) from 10^-4 to 10^-2, both of which cannot defeat unlearnable samples. Finally, we use adversarial training (AT) with ℓ_∞≤4/255.
§.§ Activation t-SNE Analysis
We naturally trained a ResNet-18 on clean data (M_c) and on unlearnable data (M_u), and compare them with a ResNet-18 ST trained on unlearnable data (M_s). Then we drew the t-SNE cluster results of the output activations of the fourth residual block (Fig. <ref>) and the sixth residual block (Fig. <ref>) of ResNet-18. The results show that when a raw model is trained on unlearnable data, the activations cluster poorly at early epochs. As training goes by, the activations begin to cluster well, which is a symptom of overfitting. The t-SNE visualization thus also reflects the overfitting during training.
§.§ Loss Landscape
We drew the loss landscape of a naturally trained ResNet-18 and an ST trained ResNet-18 on REM perturbed unlearnable data. As shown in Fig. <ref>, the naturally trained ResNet-18 falls into a deep (dark blue) and smooth global minimum that is hard to escape, while the ST trained ResNet-18 arrives at a shallower (light blue) and sharper local minimum.
|
http://arxiv.org/abs/2306.08378v2
|
20230614090853
|
Investigation of transport properties of graphene Dirac fluid by holographic two-current axion coupling model
|
[
"C. E. Liu",
"S. G. Zhang"
] |
hep-th
|
[
"hep-th",
"cond-mat.mes-hall",
"cond-mat.str-el"
] |
|
http://arxiv.org/abs/2306.10261v1
|
20230617053625
|
Continuity of inner-outer factorization and cross sections from invariant subspaces to inner functions
|
[
"Bingzhe Hou",
"Yue Xin"
] |
math.CV
|
[
"math.CV",
"math.FA",
"Primary 30J05, 30J10, Secondary 15A60, 15B05"
] |
Continuity of inner-outer factorization and cross sections from invariant subspaces to inner functions
Bingzhe Hou, School of Mathematics, Jilin University, 130012, Changchun, P. R. China
[email protected]
Yue Xin, School of Mathematics, Jilin University, 130012, Changchun, P. R. China
[email protected]
2010 Mathematics Subject Classification. Primary 30J05, 30J10; Secondary 15A60, 15B05.
Let H^∞ be the Banach algebra of bounded analytic functions on the unit open disc 𝔻 equipped with the supremum norm. As is well known, inner functions play an important role in the study of bounded analytic functions. In this paper, we are interested in the study of inner functions. Following the canonical inner-outer factorization, define Q_inn and Q_out to be the maps from H^∞ to ℑ, the set of inner functions, and to 𝔉, the set of outer functions, respectively. In this paper, we study the H^2-norm continuity and the H^∞-norm discontinuity of Q_inn and Q_out on some subsets of H^∞. On the other hand, the Beurling theorem connects invariant subspaces of the multiplication operator M_z and inner functions. We show the nonexistence of a continuous cross section from certain invariant subspaces to inner functions in the supremum norm. The continuity problem of Q_inn and Q_out on Hol(𝔻), the set of all analytic functions in the closed unit disk, is also considered.
§ INTRODUCTION
Let 𝔻 be the unit open disc and 𝕋 be the unit circle. Let H^∞ be the Banach algebra of bounded analytic functions on 𝔻 equipped with the supremum norm ‖f‖_∞=sup{|f(z)|; z∈𝔻}. Moreover, denote by (H^∞)^-1 the set of invertible functions in H^∞. Let L^∞(𝕋) (or L^∞ in brief) be the collection of all essentially bounded measurable functions on the unit circle 𝕋 with respect to the normalized Lebesgue measure on 𝕋. L^∞ is also a Banach algebra, equipped with the essential supremum norm ‖g‖_L^∞= ess sup_ξ∈𝕋 |g(ξ)|. Then there is an inclusion i:H^∞→ L^∞ defined by
f→f(e^𝐢θ)=lim_r→ 1^- f(re^𝐢θ).
Notice that this inclusion is an isometry, i.e., for any f∈ H^∞,
f_∞=f_L^∞.
Moreover, denoted by (L^∞)^-1 the set of invertible functions in L^∞.
A bounded analytic function u on 𝔻 is called an inner function if it has unimodular radial limits almost everywhere on 𝕋, and denote by ℑ the set of all inner functions. As is well known, inner functions play an important role in the study of bounded analytic functions; for instance, the Beurling theorem tells us that for each invariant subspace M of the classical Hardy space H^2 under the multiplication operator M_z there exists an inner function u such that M=uH^2.
In addition, a bounded analytic function F on 𝔻 is called an outer function if F is a cyclic vector for multiplication operator M_z, i.e.,
⋁{z^nF(z); n∈ℕ}=H^2,
and denote by 𝔉 the set of all outer functions.
For any f∈ H^∞, there is a canonical inner-outer factorization f=uF, where F is an outer function and u is an inner function which is unique up to a scalar of modulus 1. Throughout the present paper, we assume that u is an inner function with u^(n_0)(0)>0, where n_0 is the smallest nonnegative integer such that u^(n_0)(0) is non-vanishing; this fixes the choice of the inner function in the inner-outer factorization, and such u is called a normalized inner function. Furthermore, let Q_inn be the mapping
Q_inn:H^∞→ℑ, Q_inn(f)=u,
and let Q_out be the mapping
Q_out:H^∞→𝔉, Q_out(f)=F.
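For instance, for f(z)=z(1-z) the factor 1-z has no zeros in 𝔻 and is outer, so Q_inn(f)=z and Q_out(f)=1-z; here n_0=1 and u'(0)=1>0, so the inner factor z is normalized.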
A natural question is whether the mapping Q_inn or Q_out is continuous in some given norm. Unfortunately, V. Kabaila <cit.> showed that neither Q_inn nor Q_out is continuous in the H^p-norm, 1≤ p<∞.
In <cit.>, R. Douglas and C. Pearcy made a study on the topology of the invariant subspaces of certain bounded linear operators, where the distance of two invariant subspaces is the norm of the difference of the corresponding orthogonal projections.
If p is a nontrivial projection such that p(H^2) is invariant under M_z multiplication by the coordinate function, then the Beurling theorem gives an inner function
φ in H^∞ such that p(H^2)=φ H^2. This φ is also unique up to a scalar of modulus 1, and p=T_φT^*_φ, where T_φ is the Toeplitz operator induced by φ.
Let p_t, t ∈ [0, 1], be an (operator) norm continuous family of nontrivial projections such that p_t(H^2) is invariant under M_z for each t. In this paper, we denote by φ_t an inner function, for each t, such that p_t(H^2)=φ_t H^2, and call φ_t a cross section of p_t in ℑ. In addition, we always denote by u_t the inner function chosen for each t such that p_t(H^2)=u_t H^2 and u_t^(n_t)(0) > 0, where n_t is the smallest nonnegative integer such that u_t^(n_t)(0) is non-vanishing, and call u_t the normalized cross section of p_t in ℑ.
The components of the set of inner functions have been considered by Herrero in <cit.> and <cit.>, and by Nestoridis in <cit.> and <cit.>.
Let ℑ^* (CN^*) be the open set in H^∞ of functions of the form f=uh, where u is an inner function (Carleson-Newman Blaschke product) and h∈ (H^∞)^-1. Notice that a function f belongs to ℑ^* if and only if f∈ H^∞ and ess inf_ξ∈𝕋|f(ξ)|>0, that is, f∈ H^∞ and f∈(L^∞)^-1. A result of Laroco <cit.> asserts that the set ℑ^* is dense in H^∞. A. Nicolau and D. Suárez <cit.> studied the connected components of ℑ^* and CN^*.
In this paper, we will consider the subsets of ℑ^* with the same multiplicity of the zero point at 0.
For any f∈ H^∞, denote by Mul_0(f) the multiplicity of the zero of f at 0, more precisely,
Mul_0(f)=inf{n; f^(n)(0)≠ 0}.
Furthermore, we denote
ℑ^*_n={f∈ℑ^*; Mul_0(f)=n}, for any n=0,1,2,….
Notice that
ℑ^*_n={z^nf; f∈ℑ^*_0}.
In the present paper, we study the H^2-norm continuity of Q_inn and Q_out on ℑ^*_n and ℑ^* in Section 2. In Section 3, we study the H^∞-norm discontinuity of Q_inn and Q_out on ℑ^*_n, and show the nonexistence of a continuous cross section from invariant subspaces to inner functions under the essential supremum norm. In the final Section 4, we also consider the continuity problem of Q_inn and Q_out on Hol(𝔻), the set of all analytic functions in the closed unit disk.
§ CONTINUITY OF INNER-OUTER FACTORIZATION
Following the proof of Theorem 7.1 in <cit.>, one can see that if p1≠ 0, where 1 is the constant function 1 in H^2, then u=p1/‖p1‖ is the normalized inner function of an orthogonal projection p onto an invariant subspace of M_z. Consequently, u is H^2-norm continuous with respect to p whenever p1(0)≠ 0.
Suppose that p_t1(0)≠ 0 for all t. The normalized cross section u_t of p_t in ℑ is continuous in H^2-norm.
Let 𝔐 and 𝔑 be two nontrivial subspaces in a Hilbert space ℋ, p_𝔐 and p_𝔑 be the orthogonal projections on 𝔐 and 𝔑, respectively.
The gap (aperture) between the subspaces 𝔐 and 𝔑, defined (see, e.g., <cit.> and <cit.>) as
gap(𝔐 ,𝔑)=‖p_𝔐-p_𝔑‖=max{‖p_𝔐 p_𝔑^⊥‖, ‖p_𝔑 p_𝔐^⊥‖},
is used to measure the distance between subspaces.
The maximal angle θ_ max(𝔐 ,𝔑) between 𝔐 and 𝔑 was introduced in <cit.> and is defined as the angle in
[0, π/2] given by
sinθ_max(𝔐 ,𝔑)= sup_{x∈𝔐, ‖x‖=1} dist(x, 𝔑)=sup_{x∈𝔐, ‖x‖=1}√(1-‖p_𝔑 x‖^2).
‖p_𝔐-p_𝔑‖=max{sinθ_max(𝔐 ,𝔑), sinθ_max(𝔑 ,𝔐)}.
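In a finite-dimensional toy setting the identity above can be checked numerically (a NumPy sketch; this is only an illustration in ℂ^6, not the H^2 situation, and the subspaces are spanned by random columns):

import numpy as np

def projection(basis):
    """Orthogonal projection onto the column span of `basis`."""
    q, _ = np.linalg.qr(basis)          # orthonormalize the columns
    return q @ q.conj().T

def gap(P, Q):
    """||P_M - P_N|| versus max{||P_M P_N^perp||, ||P_N P_M^perp||} (operator norm)."""
    I = np.eye(P.shape[0])
    lhs = np.linalg.norm(P - Q, 2)
    rhs = max(np.linalg.norm(P @ (I - Q), 2), np.linalg.norm(Q @ (I - P), 2))
    return lhs, rhs

rng = np.random.default_rng(0)
P = projection(rng.standard_normal((6, 2)))
Q = projection(rng.standard_normal((6, 2)))
print(gap(P, Q))   # the two values agree up to floating point error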
For any nonnegative integer n, the maps Q_inn:(ℑ^*_n, ·_∞)→ (ℑ, ·_2) and Q_out:(ℑ^*_n, ·_∞)→ (𝔉, ·_2) are continuous in the H^2-norm.
Notice that for any nonnegative integer n,
ℑ^*_n={z^nf; f∈ℑ^*_0}.
It suffices to consider the case of n=0.
Given any element f in ℑ^*_0. There is a canonical decomposition f=uF, where u=Q_inn(f) is a normalized inner function and F=Q_out(f) is an outer function. Since F is an invertible element in H^∞, there exist two positive constants c_1 and C_1, such that
0<c_1≤F_∞≤ C_1<+∞.
Similarly, for an element g in ℑ^*_0, we may write g=vG, where v=Q_inn(g) and G=Q_out(g). Then, there are two positive constants c_2 and C_2, such that
0<c_2≤G _∞≤ C_2<+∞.
Now denote
c=min{c_1,c_2} and C=max{C_1,C_2}.
We will prove the continuity of Q_inn:(ℑ^*_0, ·_∞)→ (ℑ, ·_2) and Q_out:(ℑ^*_0, ·_∞)→ (𝔉, ·_2), respectively.
Part (1) The H^2-norm continuity of Q_inn on ℑ^*_0.
Denote 𝔐 the nontrivial invariant subspace u(H^2) of multiplication operator M_z and denote v(H^2) by 𝔑. Let p_𝔐 and p_𝔑 be the corresponding nontrivial orthogonal projections from H^2 onto 𝔐 and 𝔑, respectively. Since F and G are invertible in H^∞, we have
p_𝔐 (H^2)=uH^2=fH^2 and p_𝔑 (H^2)=vH^2=gH^2.
For any element φ∈𝔐 with φ_2=1, there is an element h∈ H^2 such that φ=uh. Moreover, h_2=1 and φ=uF·h/F=f·h/F. Since g·h/F is an element in the subspace 𝔑, we have
dist(φ,𝔑)≜ inf{φ-w_2; w∈𝔑}
≤ φ-g·h/F_2
= f·h/F-g·h/F_2
≤ f-g_∞1/F_∞h_2
≤ 1/c·f-g_∞.
By the arbitrariness of φ, we have
sup_φ∈𝔐, φ_2=1dist(φ,𝔑)≤1/c·f-g_∞.
Similarly, we have
sup_ψ∈𝔑, ψ_2=1dist(ψ,𝔐)≤1/c·f-g_∞.
By Lemma <ref>, one can see that
p_𝔐-p_𝔑=max{sinθ_ max(𝔐, 𝔑), sinθ_ max(𝔑,𝔐)}≤1/c·f-g_∞.
Given any ϵ>0. It follows from Lemma <ref> that there is a positive number ϵ' such that when p_𝔐-p_𝔑≤ϵ', we have u-v_2≤ϵ.
Let δ=cϵ'. Then, when f-g_∞≤δ, we have
p_𝔐-p_𝔑≤δ/c= cϵ'/c=ϵ'
and consequently
u-v_2≤ϵ.
Therefore, the map Q_inn:(ℑ^*_0, ·_∞)→ (ℑ, ·_2) is continuous in H^2-norm.
Part (2) The H^2-norm continuity of Q_out on ℑ^*_0.
Since
f-g= uF-vG
= uF-vF+vF-vG
= F(u-v)+v(F-G),
we have
F-G_2 =v(F-G)_2
≤f-g_2 +F(u-v)_2
≤f-g_∞ +F_∞u-v_2
≤f-g_∞ +Cu-v_2.
Given any η>0. It follows from Part (1) that there is a positive number δ<η/2 such that when f-g_∞≤δ, we have u-v_2≤η/2C. Then, we have
F-G_2≤δ+C·η/2C≤η/2+η/2=η.
Therefore, the map Q_out:(ℑ^*_0, ·_∞)→ (𝔉, ·_2) is continuous in H^2-norm.
In the above theorem, the setting of ℑ^*_n is necessary. Neither Q_inn:(H^p, ·_p)→ (ℑ, ·_p) nor Q_out:(H^p, ·_p)→ (𝔉, ·_p) is continuous, 1≤ p≤∞. Kabaila <cit.> proved the discontinuity for the case of 1≤ p<∞. The discontinuity for the case of p=∞ could be found in Corollary 3 in <cit.> by Nakazi. Now, we will give a simple example to show that neither Q_inn:(ℑ^*, ·_∞)→ (ℑ, ·_p) nor Q_out:(ℑ^*, ·_∞)→ (𝔉, ·_p) is continuous for any 1≤ p≤∞, which implies that ℑ^*_n could not be extended to ℑ^* in the Theorem <ref>.
Let f_t(z)=t-z/1-tz for t∈[-1,1]. Obviously, f_t(z) is a continuous path in ℑ^*. However,
Q_inn(f_t)={[ -t-z/1-tz for t∈[-1,0],; t-z/1-tz for t∈(0,1]. ].
and
Q_out(f_t)={[ -1 for t∈[-1,0],; 1 for t∈(0,1]. ].
Then, neither Q_inn(f_t) nor Q_out(f_t) is continuous at t=0 in H^p-norm, for any 1≤ p≤∞.
§ NONEXISTENCE OF CONTINUOUS CROSS SECTIONS FROM INVARIANT SUBSPACES TO INNER FUNCTIONS IN THE SUPREMUM NORM
In this section, we will show the discontinuity of Q_inn:(ℑ^*_n, ·_∞)→ (ℑ, ·_∞) and Q_out:(ℑ^*_n, ·_∞)→ (𝔉, ·_∞). Moreover, we will illustrate the nonexistence of continuous cross section from invariant subspaces to inner functions in the supremum norm. To prove those, we need some lemmas as preliminaries.
Firstly, Nicolau and Suárez have obtained a result with regard to path in ℑ^* in the supremum norm.
Let f, g∈ℑ^*. Then there is a normalized inner function b (in fact, it is a CNBP) such that bf and bg can be joined by a polygonal path contained in ℑ^* in the supremum norm. Moreover, if f, g∈ CN^*, then b can be chosen such that bf and bg can be joined by a polygonal path contained in CN^*.
For a polygonal path, we have an intuitive observation as follows.
Let γ(t):[0,1]→ℂ be a polygonal path. Then for any e^𝐢η which is not parallel to any segment of the polygonal path γ, there exists ϵ_0>0 such that for any 0<ϵ≤ϵ_0, the path γ(t)+ϵe^𝐢η does not pass through 0.
The proof is simple. Since a polygonal path is composed of finite segments, one can see the conclusion from the following figure.
Following from the above simple lemma, we could strengthen the conclusion of Nicolau and Suárez (Lemma <ref>) from ℑ^* to ℑ^*_0.
Let h_0, h_1∈ (H^∞)^-1. Then there exists a normalized inner function φ such that φ h_0 and φ h_1 can be joined by a path contained in ℑ^*_0 in the supremum norm.
By Lemma <ref>, there is a normalized inner function b such that bh_0 and bh_1 can be joined by a polygonal path f_t(z) in ℑ^*, t∈[0,1]. Write f_t=u_th_t, where u_t is the normalized inner function part of f_t and h_t is the outer function part of f_t.
Whenever b(0)=0 or b(0)>0, for a fixed number r∈(0,1), we have b(0)+r>0. Let φ(z)=b(z)+r/1+rb(z). Then, φ(0)>0 and
g_t(z)=b(z)+(1-t)r/1+(1-t)rb(z), t∈ [0,1],
is a continuous path from φ to b in ℑ in the supremum norm, and
g_t(z)=b(z)+tr/1+trb(z), t∈ [0,1],
is the inverse path of g_t. Furthermore, let
f_t(z)={[ g_4t(z)h_0(z) for t∈[0,1/4],; f_4t-1(z) for t∈[1/4,1/2],; g_2t-1(z)h_1(z) for t∈[1/2,1]. ].
Then, f_t(z) is a path from φ h_0 to φ h_1 in ℑ^*. Since the unit closed interval is compact, there is a positive number ϵ_0 such that
0<ϵ_0<inf_ξ∈𝕋essf_t(ξ) for all t∈[0,1].
Moreover, let γ(t)=f_t(0), t∈[0,1]. By Lemma <ref> and the construction of f_t(z), one can see that γ(t) is a polygonal path in ℂ. By Lemma <ref>, there exist a number η and a positive number ϵ with 0<ϵ<min{ϵ_0, |φ(0)h_0(0)|, |φ(0)h_1(0)|} such that the path γ(t)+ϵe^𝐢η does not pass through 0. Consequently,
f_t(z)+ϵe^𝐢η
is a path from φ h_0+ϵe^𝐢η to φ h_1+ϵe^𝐢η in ℑ^*_0. Notice that
φ h_0+tϵe^𝐢η, t∈[0,1], is a path from φ h_0 to φ h_0+ϵe^𝐢η in ℑ^*_0, and φ h_1+(1-t)ϵe^𝐢η, t∈[0,1], is a path from φ h_1+ϵe^𝐢η to φ h_1 in ℑ^*_0.
Then, we could obtain a path from φ h_0 to φ h_1 in ℑ^*_0.
Neither Q_inn:(ℑ^*_n, ·_∞)→ (ℑ, ·_∞) nor Q_out:(ℑ^*_n, ·_∞)→ (𝔉, ·_∞) are continuous in the supremum norm.
It suffices to consider the case of n=0.
As well known, there exist h_0, h_1∈ (H^∞)^-1 such that they can not be joined by a path in (H^∞)^-1 (see <cit.> or <cit.> for example). More precisely, one can choose
h_0=1 and h_1(z)=e^2𝐢/πlog1+z/1-z
as required. By Lemma <ref>, there exists a normalized inner function φ such that φ h_0 and φ h_1 can be joined by a path f_t(z), t∈[0,1], contained in ℑ^*_0 in the supremum norm.
Notice that Q_out(f_t) belongs to (H^∞)^-1 for each t∈[0,1]. If Q_out:(ℑ^*_n, ·_∞)→ (𝔉, ·_∞) is continuous, Q_out(f_t) is a path from h_0 to h_1 in (H^∞)^-1. That is a contradiction.
Now suppose that Q_inn:(ℑ^*_n, ·_∞)→ (ℑ, ·_∞) is continuous. Then, Q_inn(f_t) is a path in ℑ. Let
C≜sup_t∈[0,1]f_t_∞=sup_t∈[0,1]Q_out(f_t)_∞.
For any s,t∈[0,1], since
f_s-f_t= u_sF_s-u_tF_t
= u_sF_s-u_tF_s+u_tF_s-u_tF_t
= F_s(u_s-u_t)+u_t(F_s-F_t),
we have
F_s-F_t_∞ =u_t(F_s-F_t)_∞
≤f_s-f_t_∞ +F_s_∞u_s-u_t_∞
≤f_s-f_t_∞ +Cu_s-u_t_∞.
Then, the continuity of f_t and u_t=Q_inn(f_t) implies the continuity of F_t=Q_out(f_t). That means F_t=Q_out(f_t) is a path from h_0 to h_1 in (H^∞)^-1, which is a contradiction.
There exists a norm continuous family of nontrivial projections p_t such that p_t(H^2) is invariant for each t, t ∈ [0, 1], such that there is no continuous cross section of p_t in ℑ in the supremum norm.
Similar to the proof of the previous theorem, let
h_0=1 and h_1(z)=e^2𝐢/πlog1+z/1-z,
and let f_t(z), t∈[0,1], be a path from φ h_0 to φ h_1 in ℑ^*_0 for some certain normalized inner function φ. Furthermore, let 𝔐_t be the subspace f_tℍ^2, and let p_t be the orthogonal projection onto 𝔐_t. Following from the inequality (<ref>), the continuity of f_t in the supremum norm implies the continuity of p_t in the operator norm.
Suppose that φ_t(z) is a supremum norm continuous cross section of p_t in ℑ, which is not necessary to be normalized. Write f_t(z)=φ_t(z)G_t(z) for t∈[0,1]. As the same as inequalities (<ref>)-(<ref>), one can see for any s,t∈[0,1],
G_s-G_t_∞≤f_s-f_t_∞ +Cφ_s-φ_t_∞,
where C=sup_t∈[0,1]f_t_∞ is a positive constant. Then, G_t is a path from h_0 to h_1 in (H^∞)^-1, which is a contradiction. Therefore, there is no continuous cross section of such p_t in ℑ in the supremum norm.
The above Theorem <ref> tells us that the supremum norm is quite different from ℍ^2-norm. Replace ℍ^2-norm by the supremum norm in Lemma <ref>, the conclusion will no longer hold true, even if the normalized restriction is removed.
§ ON ANALYTIC FUNCTIONS IN THE CLOSED DISK
In this section, we will consider Hol(𝔻) instead of ℑ^*. For any nonnegative integer n, denote
Hol_n(𝔻)={f∈Hol(𝔻); Mul_0(f)=n}.
Given any nonnegative integer n. The map Q_out:(Hol_n(𝔻), ·_∞)→ (𝔉, ·_∞) is continuous in the supremum norm, but not the map Q_inn:(Hol_n(𝔻), ·_∞)→ (ℑ, ·_∞). Both Q_inn:(Hol_n(𝔻), ·_∞)→ (ℑ, ·_2) and Q_out:(Hol_n(𝔻), ·_∞)→ (𝔉, ·_2) are continuous in the H^2-norm.
Notice that for any nonnegative integer n,
Hol_n(𝔻)={z^nf; f∈Hol_0(𝔻)}.
It suffices to consider the case of n=0.
(1) The continuity of Q_out:(Hol_0(𝔻), ·_∞)→ (𝔉, ·_∞)
Given any element f in Hol_0(𝔻). Then f has finite zero points in 𝔻∖{0}. More precisely, f has zero points {α_k}_k=1^n in 𝔻 and {z_s}_s=1^m in ∂𝔻. Consequently, f has a canonical decomposition,
f=B_f F_f=∏_k=1^n|α_k|/α_kα_k-z/1-α_kz· F_0 ·∏_s=1^m(z_s-z),
in which Q_inn(f)=B_f=∏_k=1^n|α_k|/α_kα_k-z/1-α_kz is the normalized inner function part of f and Q_out(f)=F_f=F_0 ∏_s=1^m(z_s-z) is the outer function part of f. Denote F_1=∏_s=1^m(z_s-z) and then F_f=F_0 F_1. Since F_0 is an invertible element in H^∞, there exist two positive numbers C_1 and C_2>1 such that
0<C_1≤F_0_∞≤ C_2<+∞.
By Rouché's theorem, if g∈Hol_0(𝔻) is a small perturbation of f, then g and f have the same number of zeros in a small neighborhood of 𝔻. In fact, each zero point of g is close to a zero point of f.
Furthermore, we may assume that g has zero points {α_k}_k=1^n in 𝔻, {z_s}_s=1^m_1 in 𝔻 and {z_s}_s=m_1 +1^m out of 𝔻, which are close to {α_k}_k=1^n, {z_s}_s=1^m_1 and {z_s}_s=m_1+1^m, respectively. Then, denote
B_f=∏_k=1^n|α_k|/α_kα_k-z/1-α_kz,
G^inn_F_1=∏_s=1^m_1(z_s-z),
and
G^out_F_1=∏_s=m_1 +1^m(z_s-z).
Moreover, denote
G^inn_F_1=∏_s=1^m_1|z_s|/z_s(1-z_sz)
and
B_F=G^inn_F_1/G^inn_F_1=∏_s=1^m_1z_s/|z_s|(z_s-z)/(1-z_sz).
Then, Q_inn(g)=B_g=B_fB_F is the normalized inner function part of g and Q_out(g)=F_g=G^inn_F_1G^out_F_1G_0 is the outer function part of g, in which G_0 is an invertible element in H^∞. So one could write g as follows
g =B_gF_g
=B_fB_FG^inn_F_1G^out_F_1G_0
=∏_k=1^n|α_k|/α_kα_k-z/1-α_kz∏_s=1^m_1z_s/|z_s|(z_s-z)/(1-z_sz)∏_s=1^m_1|z_s|/z_s(1-z_sz)∏_s=m_1 +1^m(z_s-z)G_0.
Given any ϵ>0. Let
M=ϵ+max{∏_s=1^m_1(z_s-z)_∞, ∏_s=m_1+1^m(z_s-z)_∞}.
Since the zero points of g tend to the zero points of f when g tends to f in the supremum norm, there exists a positive number δ<ϵ/2 such that if g∈Hol_0(𝔻) with f-g≤δ,
then each α_k is sufficiently close to α_k such that
B_f-B_f_∞≤ϵ/2C_2(M^2+4M),
and each z_s is sufficiently close to z_s such that
G^inn_F_1-∏_s=1^m_1(z_s-z)_∞≤ϵ/2C_2(M^2+4M),
G^inn_F_1-∏_s=1^m_1|z_s|/z_s(1-z_sz)_∞≤ϵ/2C_2(M^2+4M),
G^out_F_1-∏_s=m_1 +1^m(z_s-z)_∞≤ϵ/2C_2(M^2+4M).
Notice that z_s∈∂𝔻, for s=1, …, m. Then, it is easy to see that
|z_s|/z_s(1-z_sz)=1/z_s(z_sz_s-z_sz)=z_s-z,
and consequently we could rewrite the inequality (<ref>) as
G^inn_F_1-∏_s=1^m_1(z_s-z)_∞≤ϵ/2C_2(M^2+4M).
Following from the inequalities (<ref>), (<ref>) and (<ref>), one can see that
G^inn_F_1≤∏_s=1^m_1(z_s-z)_∞+ϵ/2C_2(M^2+4M)≤ M,
G^inn_F_1_∞≤ M and G^out_F_1_∞≤ M.
Furthermore, together with inequality (<ref>), we have
F_1-G^inn_F_1G^out_F_1_∞
= ∏_s=1^m_1(z_s-z)∏_s=m_1+1^m(z_s-z)-G^inn_F_1G^out_F_1_∞
≤ ∏_s=1^m_1(z_s-z)∏_s=m_1+1^m(z_s-z)-∏_s=1^m_1(z_s-z)G^out_F_1_∞+∏_s=1^m_1(z_s-z)G^out_F_1-G^inn_F_1G^out_F_1_∞
≤ ∏_s=1^m_1(z_s-z)_∞∏_s=m_1+1^m(z_s-z)-G^out_F_1_∞+G^out_F_1(z)_∞∏_s=1^m_1(z_s-z)-G^inn_F_1_∞
≤ 2Mϵ/2C_2(M^2+4M)
and similarly,
F_1-G^inn_F_1G^out_F_1_∞
≤ ∏_s=1^m_1(z_s-z)_∞∏_s=m_1+1^m(z_s-z)-G^out_F_1_∞+G^out_F_1(z)_∞∏_s=1^m_1(z_s-z)-G^inn_F_1_∞
≤ 2Mϵ/2C_2(M^2+4M).
Consequently,
B_f F_1-B_f G^inn_F_1 G^out_F_1_∞
≤ B_f F_1-B_f G^inn_F_1 G^out_F_1_∞+B_f G^inn_F_1 G^out_F_1-B_f G^inn_F_1 G^out_F_1_∞
≤ B_f_∞F_1-G^inn_F_1 G^out_F_1_∞+G^inn_F_1 G^out_F_1_∞B_f-B_f_∞
= F_1-G^inn_F_1 G^out_F_1_∞+G^inn_F_1_∞G^out_F_1_∞B_f-B_f_∞
≤ (M^2+2M)ϵ/2C_2(M^2+4M).
In addition, we have
f-g =B_f F_1 F_0-B_g G^inn_F_1G^out_F_1G_0
=(B_f F_1 F_0-B_g G^inn_F_1G^out_F_1F_0)+(B_g G^inn_F_1G^out_F_1F_0-B_g G^inn_F_1G^out_F_1G_0)
then
G^inn_F_1G^out_F_1F_0- G^inn_F_1G^out_F_1G_0_∞
= B_g(G^inn_F_1G^out_F_1F_0- G^inn_F_1G^out_F_1G_0)_∞
= (f-g)-(B_f F_1 F_0-B_g G^inn_F_1G^out_F_1F_0)_∞
≤ f-g_∞+F_0_∞B_f F_1-B_g G^inn_F_1G^out_F_1_∞
≤ δ+C_2·(M^2+2M)ϵ/2C_2(M^2+4M)
≤ ϵ/2+(M^2+2M)ϵ/2(M^2+4M).
Therefore,
F_f-F_g_∞
= F_1 F_0-G^inn_F_1G^out_F_1G_0_∞
= F_1 F_0-G^inn_F_1G^out_F_1F_0+G^inn_F_1G^out_F_1F_0-G^inn_F_1G^out_F_1G_0_∞
≤ F_0_∞F_1 -G^inn_F_1G^out_F_1_∞+G^inn_F_1G^out_F_1F_0-G^inn_F_1G^out_F_1G_0_∞
≤ C_2·2Mϵ/2C_2(M^2+4M)+ϵ/2+(M^2+2M)ϵ/2(M^2+4M)
= ϵ.
So f→ g in the supremum norm implies F_f→ F_g in the supremum norm, which means the map Q_out:(Hol_n(𝔻), ·_∞)→ (𝔉, ·_∞) is continuous.
(2) The discontinuity of Q_inn:(Hol_0(𝔻), ·_∞)→ (ℑ, ·_∞)
However, the map Q_inn:(Hol_0(𝔻), ·_∞)→ (ℑ, ·_∞) is not continuous in the supremum norm. Here we give an example to show the discontinuity. Let f_t(z)=t-z for t∈[1/2,2]. Then f_t is a continuous path in Hol(𝔻). The inner part of f_1 is Q_inn(f_1)=1 and the inner part of f_t, for t∈[1/2,1), is Q_inn(f_t)=t-z/1-tz. It is easy to see that as t→ 1,
f_t-f_1_∞=|1-t|→ 0
but
Q_inn(f_t)-Q_inn(f_1)_∞=t-z/1-tz-1_∞=2↛ 0.
(3) The continuity of Q_inn:(Hol_n(𝔻), ·_∞)→ (ℑ, ·_2)
Next, we want to prove that Q_inn:(Hol_n(𝔻), ·_∞)→ (ℑ, ·_2) is continuous in the H^2-norm, which is different from the case in the supremum norm.
Recall that Q_inn(g)=B_g=B_fB_F and Q_inn(f)=B_f. Notice that if g tends to f in the supremum norm, then each α_k is sufficiently close to α_k. More precisely, there exists a positive number δ'<ϵ/2 such that if g∈Hol_0(𝔻) with f-g≤δ', we have
B_f-B_f_2≤ϵ/2
and by z_s∈𝕋 for s=1,2,…,m_1,
|∏_s=1^m_1|z_s|-1|=|∏_s=1^m_1|z_s|-∏_s=1^m_1|z_s||≤ϵ^2/8.
Write B_F as follows
B_F(z)=∑_k=0^∞c_k z^k,
where c_k is the k-th Taylor coefficient of B_F.
Since B_F(z)=∏_s=1^m_1z_s/|z_s|(z_s-z)/(1-z_sz) is an inner function, it is not difficult to see that
B_F-1_2 ^2 =∑_k=0^∞c_k z^k-1_2 ^2
=c_0 -1+∑_k=1^∞c_k z^k_2 ^2
≤ |c_0 -1|^2+|∑_k=1^∞c_k|^2
=|c_0 -1|^2+∑_k=0^∞|c_k| ^2-|c_0|^2
=|∏_s=1^m_1|z_s|-1|^2+1-(∏_s=1^m_1|z_s|)^2
≤ϵ^4/64+1-(1-ϵ^2/8)^2
=ϵ^2/4,
that is
B_F-1_2 <ϵ/2.
Consequently, it follows from inequalities (<ref>) and (<ref>) that
‖B_f-B_g‖_2 =‖B_f-B̃_f B_F‖_2
=‖B_f-B_f B_F+B_f B_F-B̃_f B_F‖_2
≤‖B_f-B_f B_F‖_2+‖B_f B_F-B̃_f B_F‖_2
=‖B_F-1‖_2+‖B_f-B̃_f‖_2
<ϵ/2+ϵ/2
=ϵ,
where we used that multiplication by the inner functions B_f and B_F is an isometry on H^2.
So f→ g in the supremum norm implies B_f→ B_g in the H^2-norm, which means the map Q_inn:(Hol_n(𝔻), ‖·‖_∞)→ (ℑ, ‖·‖_2) is continuous.
(4) The continuity of Q_out:(Hol_n(𝔻), ‖·‖_∞)→ (𝔉, ‖·‖_2)
Finally, since Q_out:(Hol_n(𝔻), ‖·‖_∞)→ (𝔉, ‖·‖_∞) is continuous in the supremum norm and ‖h‖_2≤‖h‖_∞ for every h∈ H^∞, it is clear that Q_out:(Hol_n(𝔻), ‖·‖_∞)→ (𝔉, ‖·‖_2) is also continuous in the H^2-norm.
§ DECLARATIONS
Ethics approval
Not applicable.
Competing interests
The author declares that there is no conflict of interest or competing interest.
Authors' contributions
All authors contributed equally to this work.
Funding
There is no funding source for this manuscript.
Availability of data and materials
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
entry_id: http://arxiv.org/abs/2306.11619v1
published: 20230620154756
title: Planetesimal formation at the gas pressure bump following a migrating planet II. Effects of dust growth
authors: ["Yuhito Shibaike", "Yann Alibert"]
primary_category: astro-ph.EP
categories: ["astro-ph.EP"]
II. Effects of dust growth
Physikalisches Institut & NCCR PlanetS, Universitaet Bern, CH-3012 Bern, Switzerland
[email protected]
Planetesimal formation is still mysterious. One of the ways to form planetesimals is to invoke a gas pressure bump in a protoplanetary disc. In our previous paper, we proposed a new scenario in which the dust piled up at a gas pressure bump created by a migrating planet forms planetesimals by streaming instability over a wide region of the disc as the planet migrates inward.
In this work, we consider the global time evolution of dust and investigate the detailed conditions and results of the planetesimal formation in our scenario.
We use a 1D grid single-sized dust evolution model, which can follow the growth of the particles by their mutual collision and their radial drift and diffusion. We calculate the time-evolution of the radial distribution of the peak mass and surface density of the dust in a gas disc perturbed by an embedded migrating planet and investigate if the dust satisfies the condition for planetesimal formation.
We find that planetesimals form in a belt-like region between the snowline and the position where the planet reaches its pebble-isolation mass when the strength of turbulence is 10^-4≤α≤10^-3, which is broadly consistent with observed values of α. The mechanism of the formation, streaming instability or mutual collision, depends on the timescale of the streaming instability. The total mass of planetesimals formed in this scenario also depends on α and is about 30-100 M_E if the planetary core already exists at the beginning and grows by gas accretion, but it decreases the later the planetary core forms. We also provide simple approximate expressions for the surface density and total mass of the planetesimals and find that the total planetesimal mass depends strongly on the dust mass.
We show that planetesimals form in a belt-like region by the combination of the dust pile-up at the gas pressure bump formed by a planet and its inward migration.
Planetesimal formation at the gas pressure bump following a migrating planet
Y. Shibaike1
Y. Alibert1
Received MM DD, 2020; accepted MM DD, 2020
============================================================================
§ INTRODUCTION
The formation of planetesimals, kilometer-sized building blocks of planets, has been investigated for a long time, but a lot of problems still remain. Especially, the so-called “drift barrier” is not solved yet. Planetesimals had been considered to form by mutual collisions of dust particles in protoplanetary discs. The particles, however, suffer head wind from the gas disc rotating with a sub-Kepler speed due to the gas pressure gradient and lose their angular momentum, which forces the particles to drift toward the central star before they grow to planetesimals <cit.>.
One of the solutions to avoid the loss of the particles by inward drift is invoking the gas pressure bump at some location in the disc. The gas pressure gradient is null at the bump, and drifting particles pile up there <cit.>. Many observations of the millimeter continuum emission from protoplanetary discs show ring and gap structures <cit.>, which are considered as the evidence of the dust pile-up at gas pressure bumps <cit.>. One of the most popular mechanisms to form the ring and gap structures is the gravitational interaction with embedded planets <cit.>, and some observations of protoplanets in the gaps support this mechanism <cit.>. Such dust concentrated locations are suitable for planetesimal formation by gravitational instabilities or by mutual sticking (collision) of the dust <cit.>. Especially, streaming instability occurs by the accumulation of dust and makes clumps of dust, which triggers the further gravitational instability and forms planetesimals <cit.>. <cit.> shows that if dust particles at the gas pressure bump form planetesimals by streaming instability, the dust rings in multiple protoplanetary discs observed by DSHARP survey observation <cit.> are better explained. Many previous works also argue that planetesimals can form at the gas pressure bump created by an embedded planet <cit.>. The planetesimals formed at the bump grow larger and parts of them are captured or scattered by the planet <cit.>.
In our previous paper, <cit.> (hereafter Paper 1), we proposed a new scenario in which planetesimals form by streaming instability at the gas pressure bump created by a migrating planet, resulting in planetesimal formation in a wide region of the protoplanetary disc. We developed a simple 1D Lagrangian particle model which can follow the radial distribution of fixed-sized dust in a gas disc perturbed by a migrating planet. We showed that planetesimals form in a wide region of the disc, and their total mass and formation region depend on the dust mass flux and the strength of turbulence in the disc. We also found that the surface density of formed planetesimals can be approximated by a simple equation. <cit.> reproduced the observed exoKuiper belts (i.e., planetesimal belts in extrasolar systems) by this scenario with a simple grid model of the global dust evolution. The surface density profiles of the formed planetesimals in <cit.> are consistent with the approximate expression by Paper 1.
In this paper, we investigate the detailed conditions and results for our planetesimal formation scenario by considering global dust evolution. We do not use the Lagrangian model developed in Paper 1 but use a grid model which can follow the time evolution of the radial profiles of the peak mass and surface density of dust particles. We assume the existence of a migrating planet (or a planetary core) carving the gas disc and investigate when and where the planetesimals form by streaming instability or by mutual collision by changing the strength of turbulence and the (poorly known) condition for streaming instability. Although this work is similar to <cit.>, we do not focus on the reproduction of observations but on the detailed investigation of the phenomena of planetesimal formation. Also, we consider an earlier stage of planet formation, when the planet does not migrate in Type II migration but does in Type I migration.
In Section <ref>, we explain the methods used in this work. We then show the results of the calculation depending on the timescale of streaming instability in Section <ref>. We also explain a case where the properties of streaming instability depends on the Stokes number of dust. In Section <ref>, we investigate the effects of change of disc properties. Furthermore, we investigate the effects of the planetary growth by gas accretion and the later formation of the planetary cores considering the shift of the migration type from the Type I to Type II and the time evolution of the disc. We also discuss the effects of the back reaction from dust to gas and the dust leak from gas pressure bumps. Finally, we conclude this work in Section <ref>.
§ METHODS
§.§ Gas disc model
First, we set a gas disc model. The unperturbed (i.e., not perturbed by a planet) gas surface density is assumed to follow a power law:
Σ_g,unp=Σ_g,1au(r/au)^{-p},
where r is the distance to the star, and Σ_ g,1au and p are constants. The disc temperature (in the midplane) is
T=T_1au(r/au)^{-q},
where T_1au and q are constants. We assume the constants to be Σ_g,1au=500 g cm^-2, T_1au=280 K, p=1, and q=1/2. This set of assumptions is consistent with “Model A” of Paper 1. The slope of the gas surface density is consistent with the observations of protoplanetary discs under the assumption that the dust-to-gas surface density ratio is uniform throughout the entire disc <cit.>. We set the snowline at the orbit where T=160 K, which is r_SL=3.06 au in this disc model.
We note that when the disc temperature is dominated by viscous heating, the temperature increases as the turbulence is stronger. In this paper, however, we fix the gas surface density and temperature while changing the strength of turbulence. We investigate the cases with a hotter disc in Section <ref> and with a time-evolving gas disc in Section <ref>.
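For concreteness, the unperturbed profiles above can be evaluated with a few lines of code. The following is a minimal sketch (not code from the paper), assuming the fiducial values quoted above; function and variable names are illustrative.

```python
import numpy as np

# Fiducial disc parameters quoted in the text (surface density in cgs)
SIGMA_G_1AU = 500.0   # g cm^-2
T_1AU = 280.0         # K
P_SLOPE = 1.0         # power-law index of Sigma_g
Q_SLOPE = 0.5         # power-law index of T
T_SNOW = 160.0        # K, assumed snowline temperature

def sigma_gas_unp(r_au):
    """Unperturbed gas surface density [g cm^-2] at r [au]."""
    return SIGMA_G_1AU * r_au**(-P_SLOPE)

def temperature(r_au):
    """Midplane temperature [K] at r [au]."""
    return T_1AU * r_au**(-Q_SLOPE)

# Snowline: T(r_SL) = 160 K  ->  r_SL = (T_1au / 160)^(1/q)
r_snowline = (T_1AU / T_SNOW)**(1.0 / Q_SLOPE)
print(f"r_SL = {r_snowline:.2f} au")   # ~3.06 au, as quoted in the text
```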
§.§ Gap formation by a migrating planet
Planets embedded in gas discs influence the discs and change the gas structure. We assume the presence of a single planet with fixed mass M_pl=20 M_E, migrating inward from r=50 au. In Section <ref>, we consider the growth of the planet by gas accretion and its later formation. The subscript “pl” indicates the properties of the planet and/or its location. The surface density of the local perturbed gas disc has been modeled by many previous works. We use a model provided by <cit.> in order to compare our results with the pebble-isolation mass provided by <cit.> (see next paragraph), which also uses the model of <cit.>[In Section <ref>, we use the model described in Paper 1, because the model by <cit.> is not accurate when the planet is heavy.]. The perturbed gas surface density profile is
Σ_g=Σ_g,unp{1-[f(r)K/(3π)]/[1+f_0 K/(3π)√(r_pl/r)]},
where r_ pl is the orbital radius of the embedded planet, and the parameter f_0 is fixed as 0.45. The factor K is defined as
K≡(M_pl/M_*)^2 (H_g,pl/r_pl)^{-5} α^{-1},
where M_*=1M_⊙ is the mass of the star, and α is the strength of turbulence of the gas <cit.>. We treat α as a constant (in space and time) and change the value as a parameter. The gas scale height (at r_ pl) is H_ g,pl=c_ s,pl/Ω_ K,pl, where the sound speed and the Kepler frequency are c_ s,pl=√(k_ BT_ pl/m_ g) and Ω_ K,pl=√(GM_*/r_ pl^3), respectively[These expressions are also valid without the subscripts.]. The Boltzmann constant and the gravitational constant are k_ B and G, respectively. The mean molecular mass is m_ g=3.9×10^-24 g. The function f(r), the scaled-out angular momentum flux by the shocking of the planetary wake, is
f(r)=
f_0, τ(r)<τ_ sh,
f_0√(τ_ sh/τ(r)), τ(r)≥τ_ sh.
where the shock position, τ_ sh, is given as <cit.>
τ_sh=1.89+0.53(M_pl/M_*)^{-1}(H_g,pl/r_pl)^3.
The parameter τ(r), representing an appropriately scaled distance from the planet, is
τ(r)=(3/2^{5/4})(H_g,pl/r_pl)^{-5/2}|∫_1^{r/r_pl}|s^{3/2}-1|^{3/2} s^{p/2+5q/4-11/4} ds|.
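The perturbed profile can be sketched numerically as below. This is an illustrative implementation of the equations as reconstructed above (in particular the fraction structure of the Σ_g expression), not the authors' code; the integral in τ(r) is evaluated with standard quadrature, and the example aspect ratio h_pl=0.05 is an assumed, typical value.

```python
import numpy as np
from scipy.integrate import quad

def gap_profile(r, r_pl, m_pl, m_star, h_pl, alpha, p=1.0, q=0.5, f0=0.45):
    """Ratio Sigma_g / Sigma_g,unp at radius r for a planet at r_pl.

    r and r_pl in the same units; m_pl and m_star in the same units; h_pl = H_g,pl/r_pl.
    """
    K = (m_pl / m_star)**2 * h_pl**(-5) / alpha
    tau_sh = 1.89 + 0.53 * (m_pl / m_star)**(-1) * h_pl**3          # shock position
    integrand = lambda s: abs(s**1.5 - 1.0)**1.5 * s**(p/2 + 5*q/4 - 11/4)
    integral, _ = quad(integrand, 1.0, r / r_pl)                    # tau(r) integral
    tau = 3.0 / 2**1.25 * h_pl**(-2.5) * abs(integral)
    f = f0 if tau < tau_sh else f0 * np.sqrt(tau_sh / tau)          # deposited flux f(r)
    return 1.0 - (f * K / (3*np.pi)) / (1.0 + f0 * K / (3*np.pi) * np.sqrt(r_pl / r))

# Example: gap depth at the location of a 20 M_E planet at 10 au, alpha = 4e-4
print(gap_profile(r=10.0, r_pl=10.0, m_pl=20*3.0e-6, m_star=1.0, h_pl=0.05, alpha=4e-4))
```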
Once a gap forms around a planet, the dust particles start to pile up at the gas pressure bump, and their accretion onto the planet stops. The planet mass where the dust (pebble) accretion stops is called “pebble-isolation mass” <cit.>. <cit.> found that it depends on the planet mass and the strength of turbulence,
M_PIM=h_pl^3 √(37.3α+0.01){1+0.2(√(α)/h_pl √(1/St_pl^2+4))^{0.7}} M_*,
where h_ pl=H_ g,pl/r_ pl is the aspect ratio of the disc at the orbital position of the planet. We define r_ PIM as the orbital position where the planet mass M_ pl (we fix it as 20M_ E) is equal to M_ PIM. The planet crosses the orbital position of r=r_ PIM during its inward migration outside the snowline when α≤10^-2.6 (see Section <ref> for details).
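A short sketch of how r_PIM can be obtained from this fit is given below; it is illustrative only. The aspect-ratio helper assumes the fiducial temperature profile, and the Earth-to-solar mass conversion and the root-bracketing interval are choices made here, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

M_SUN_IN_ME = 332946.0   # Earth masses per solar mass (approx.)

def aspect_ratio(r_au, t_1au=280.0, q=0.5, m_star=1.0):
    """H_g/r = c_s/(r Omega_K) for the fiducial temperature profile (illustrative)."""
    k_B, m_g, G, M_sun, au = 1.38e-16, 3.9e-24, 6.67e-8, 1.99e33, 1.496e13
    cs = np.sqrt(k_B * t_1au * r_au**(-q) / m_g)
    omega = np.sqrt(G * m_star * M_sun / (r_au * au)**3)
    return cs / (r_au * au * omega)

def m_pim(r_au, alpha, St=0.1, m_star=1.0):
    """Pebble-isolation mass [Earth masses] from the fit above (St fixed to 0.1)."""
    h = aspect_ratio(r_au, m_star=m_star)
    fac = 1.0 + 0.2 * (np.sqrt(alpha) / h * np.sqrt(1.0 / St**2 + 4.0))**0.7
    return h**3 * np.sqrt(37.3 * alpha + 0.01) * fac * m_star * M_SUN_IN_ME

# r_PIM: where a 20 M_E planet equals the pebble-isolation mass
alpha = 4e-4
r_pim = brentq(lambda r: m_pim(r, alpha) - 20.0, 1.0, 100.0)
print(f"r_PIM ~ {r_pim:.1f} au for alpha = {alpha}")   # close to the 13.4 au quoted later
```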
The ratio of the pressure gradient to the gravity of the central star, η, is important, because it determines the direction and speed of the drift of the particles (see Eq. (<ref>)). We calculate the ratio as
η=-(1/2)(H_g/r)^2 (∂ln(ρ_g c_s^2)/∂ln r),
where ρ_ g=Σ_ g/(√(2π)H_ g) is the (local) gas density in the midplane. We here define r_η0 as the orbital position where η is zero (due to the cavity of the gas disc by the planet) when the planet is at r=r_ PIM.
We consider the Type I migration of the planet. The migration timescale depends on the planet mass and the structures of the gas and temperature of the disc <cit.>,
τ_mig=1/(2.728+1.082p) (c_s,pl/(r_pl Ω_K,pl))^2 (M_*/M_pl) (M_*/(r_pl^2 Σ_g,unp)) Ω_K,pl^{-1}.
We assume the planet is at r=50 au at t=0 and migrates inward with a velocity equal to v_ pl=-r_ pl/τ_ mig. We consider the reduction of the migration speed and the shift from the Type I to Type II migration due to the deep gap formation with the planetary growth by gas accretion in Section <ref>. We also investigate the cases where the planet forms later in that section.
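This migration history can be reproduced approximately with the following sketch, which integrates dr_pl/dt = -r_pl/τ_mig for the fiducial disc; the constants and function names are illustrative assumptions. It brings a 20 M_E planet from 50 au to roughly 14 au after 0.5 Myr, broadly consistent with the travel time to the snowline described in the results.

```python
import numpy as np
from scipy.integrate import solve_ivp

# cgs constants (approximate)
G, M_SUN, AU, M_EARTH = 6.674e-8, 1.989e33, 1.496e13, 5.972e27
K_B, M_GAS, YR = 1.381e-16, 3.9e-24, 3.156e7

def tau_mig_yr(r_au, m_pl_me=20.0, m_star=1.0, p=1.0, sigma_1au=500.0, t_1au=280.0, q=0.5):
    """Type I migration timescale [yr] for the fiducial power-law disc, Eq. above."""
    r = r_au * AU
    omega = np.sqrt(G * m_star * M_SUN / r**3)
    cs = np.sqrt(K_B * t_1au * r_au**(-q) / M_GAS)
    sigma = sigma_1au * r_au**(-p)
    m_pl = m_pl_me * M_EARTH
    tau = (1.0 / (2.728 + 1.082 * p) * (cs / (r * omega))**2
           * (m_star * M_SUN / m_pl) * (m_star * M_SUN / (r**2 * sigma)) / omega)
    return tau / YR

# dr_pl/dt = -r_pl / tau_mig, starting from 50 au (time in years)
sol = solve_ivp(lambda t, y: -y / tau_mig_yr(y[0]), t_span=(0.0, 5.0e5), y0=[50.0], rtol=1e-8)
print(f"planet at {sol.y[0, -1]:.1f} au after {sol.t[-1]/1e6:.2f} Myr")
```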
§.§ Dust evolution
We include in our model the evolution of dust particles in the gas disc model. We use a single-sized dust evolution model proposed by <cit.>, which assumes that m_ d singly peaks the mass distribution of dust at each orbit r. We calculate the radial distribution of the surface density of dust particles, Σ_ d, and their peak mass, m_ d, by solving the following equations, Eqs. (<ref>) and (<ref>), simultaneously. The subscript “d” indicates the properties of the dust particles.
We consider the evolution of compact and spherical dust particles, the mass of a single particle being m_ d=(4π/3)R_ d^3ρ_ int, where R_ d is the radius of the particles, and ρ_ int=1.4 and 3.0 g cm^-3 are the internal density of the icy and rocky particles, respectively. Here, we assume that the particles are icy and rocky outside and inside the snowline, respectively.
The continuity equation of the dust particles is,
∂Σ_d/∂t+(1/r)(∂/∂r)[r v_r Σ_d-(ν/(1+St^2)) r Σ_g (∂Z_Σ/∂r)]=0,
where Σ_ d is the dust surface density, v_ r is the radial velocity of dust, ν=α c_ sH_ g is the gas viscosity, and Z_Σ=Σ_ d/Σ_ g is the dust-to-gas surface density ratio. The Stokes number (stopping time normalized by Kepler time), St=t_ stopΩ_ K, determines the motion of the particles. The first and second terms in the parentheses represent the drift and diffusion of the particles, respectively.
The Stokes number of the dust particles is
St=(π/2)(ρ_int R_d/Σ_g) max(1, 4R_d/(9λ_mfp)),
where λ_ mfp=m_ g/(σ_ molρ_ g) is the mean free path of the gas molecules. Their collisional cross section is σ_ mol=2×10^-15 cm^2.
The radial drift velocity of the particles is calculated by <cit.>
v_drift=-(2St/(St^2+1)) η v_k,
where v_ k=rΩ_ k is the Kepler velocity. The radial velocity of the particles due to their diffusion is
v_diff=-(ν/(1+St^2))(1/r)(∂ln Z_Σ/∂ln r).
The total radial velocity of the particles is v_ r=v_ drift+v_ diff. We reduce the inward dust mass flux, Ṁ_ d=-2π rv_ rΣ_ d, just inside the snowline to be half of that just outside when v_ r<0 and increase the flux outside the snowline to the double of that just inside when v_ r>0 to express the evaporation and re-condensation of the icy material of the particles.
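The Stokes number and the drift velocity can be sketched as follows; this is an illustrative evaluation at 10 au for a 1 cm icy particle, assuming the fiducial disc. The logarithmic pressure-gradient value 11/4 used for η is specific to the fiducial slopes p=1 and q=1/2.

```python
import numpy as np

# cgs constants and fiducial disc quantities
K_B, M_GAS, G, M_SUN, AU = 1.38e-16, 3.9e-24, 6.67e-8, 1.99e33, 1.496e13
SIGMA_MOL = 2.0e-15  # cm^2, molecular collision cross section

def stokes_number(R_d, sigma_g, rho_g, rho_int=1.4):
    """Stokes number covering the Epstein / Stokes regimes, Eq. above."""
    lam_mfp = M_GAS / (SIGMA_MOL * rho_g)
    return 0.5 * np.pi * rho_int * R_d / sigma_g * max(1.0, 4.0 * R_d / (9.0 * lam_mfp))

def v_drift(St, eta, v_K):
    """Radial drift velocity, Eq. above (negative = inward)."""
    return -2.0 * St / (St**2 + 1.0) * eta * v_K

# Example at 10 au for a 1 cm icy particle
r = 10.0 * AU
sigma_g = 500.0 / 10.0                           # g cm^-2
cs = np.sqrt(K_B * 280.0 * 10.0**(-0.5) / M_GAS)
omega = np.sqrt(G * M_SUN / r**3)
H_g = cs / omega
rho_g = sigma_g / (np.sqrt(2.0 * np.pi) * H_g)
eta = 0.5 * (H_g / r)**2 * (11.0 / 4.0)          # |d ln(rho_g c_s^2)/d ln r| = p + 3/2 + q/2 = 11/4
St = stokes_number(1.0, sigma_g, rho_g)
print(f"St = {St:.3f}, v_drift = {v_drift(St, eta, r * omega)/1e2:.1f} m/s")
```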
The growth of the particles due to their mutual collision is <cit.>
∂m_d/∂t+v_r ∂m_d/∂r=ϵ_grow (2√(π) R_d^2 Δv_dd/H_d) Σ_d,
where ϵ_ grow, Δ v_ dd, and H_ d are the sticking efficiency, collision velocity, and dust scale height, respectively.
The particles break up rather than merge when the collision speed is too fast. We model the sticking efficiency as <cit.>
ϵ_grow=min{1, -ln(Δv_dd/v_cr)/ln 5},
where the critical fragmentation velocities for the collisions of rocky and icy particles are v_cr=1 and 10 m s^-1, respectively <cit.>[This expression can be used even if dust grows to meter size, because numerical simulations show that the fragmentation does not depend on the number of monomers of dust aggregates <cit.>.].
The collision velocity between the dust particles is
Δ v_ dd=√(Δ v_ B^2+Δ v_ drift^2+Δ v_ϕ^2+Δ v_ z^2+Δ v_ diff^2),
where Δ v_ B, Δ v_ drift, Δ v_ϕ, Δ v_ z, and Δ v_ diff are the relative velocities induced by their Brownian motion, radial drift, azimuthal drift, vertical sedimentation, and diffusion, respectively <cit.>. The relative velocity induced by Brownian-motion between the particles with the same mass is Δ v_ B=√(16k_ BT/(π m_ d)). The relative velocity induced by the radial drift is Δ v_ drift=|v_ drift( St_1)-v_ drift( St_2)|, where St_1 and St_2 are the Stokes numbers of the two colliding particles. The relative velocity induced by the azimuthal drift is Δ v_ϕ=|v_ϕ( St_1)-v_ϕ( St_2)|, where v_ϕ=-η v_ K/(1+ St^2), and that by the vertical motion is Δ v_ z=|v_ z( St_1)-v_ z( St_2)|, where v_ z=-Ω_ K StH_ d/(1+ St). We assume St_2=0.5 St_1, because the single-size simulation reproduces the results by a full-size simulation very well with that assumption <cit.>. For the relative velocity induced by diffusion, we use following three limiting expressions derived from <cit.>,
Δ v_diff=
√(α)c_ s Re_ t^1/4| St_1- St_2|, St_1≪ Re_ t^-1/2,
√(3α)c_ s St_1^1/2, Re_ t^-1/2≪ St_1≪ 1,
√(α)c_ s(11+ St_1+11+ St_2)^1/2, 1≪ St_1.
where Re_t=ν/ν_ mol is the turbulence Reynolds number. The molecular viscosity is ν_ mol=v_ thλ_ mfp/2, where v_ th=√(8/π)c_ s is the thermal gas velocity.
The dust scale height is given by <cit.>,
H_d=H_g(1+(St/α)(1+2St)/(1+St))^{-1/2},
and the midplane dust density is ρ_ d,mid=Σ_ d/(√(2π)H_ d).
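A minimal sketch of the settling relation and the resulting midplane density ratio (the quantity used in the next subsection as the streaming-instability criterion) is given below; the example numbers (St=0.1, α=4×10^-4, Z_Σ=0.03) are illustrative assumptions.

```python
import numpy as np

def dust_scale_height(H_g, St, alpha):
    """Dust scale height from the settling/stirring balance, Eq. above."""
    return H_g * (1.0 + St / alpha * (1.0 + 2.0 * St) / (1.0 + St))**(-0.5)

def midplane_ratio(sigma_d, sigma_g, H_d, H_g):
    """Midplane dust-to-gas density ratio Z_rho = rho_d,mid / rho_g,mid."""
    return (sigma_d / (np.sqrt(2*np.pi) * H_d)) / (sigma_g / (np.sqrt(2*np.pi) * H_g))

# Example: St = 0.1, alpha = 4e-4, surface density ratio Z_Sigma = 0.03 at a dust pile-up
H_g = 1.0  # arbitrary units; only the ratio H_d/H_g matters here
H_d = dust_scale_height(H_g, St=0.1, alpha=4e-4)
print(H_d / H_g, midplane_ratio(0.03, 1.0, H_d, H_g))
```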
§.§ Planetesimal formation
We calculate when, where, and how much planetesimal mass forms in our scenario. In this work, we consider two mechanisms of planetesimal formation: streaming instability and mutual collision of particles. First, we consider the condition for planetesimal formation by streaming instability. Streaming instability can enhance the accumulation of dust particles, which helps the condition for gravitational instability, ρ_d,mid>ρ_Roche≡9Ω_K^2/(4π G) (equivalently 9M_*/(4π r^3)), to be reached. We define the condition for planetesimal formation as the midplane dust-to-gas density ratio, Z_ρ≡ρ_d,mid/ρ_g,mid, being larger than the critical density ratio ϵ_crit=1 <cit.>. We also consider the case in which ϵ_crit depends on St in Section <ref>. We assume that planetesimal formation only occurs outside the orbit of the migrating planet in order to focus on the planetesimal formation at the gas pressure bump created by the planet[Without this assumption, planetesimals episodically form inside the planetary orbit due to waves forming in radial profiles of dust when the pebble front crosses the gap, which should not be real.].
The change of the planetesimal surface density due to streaming instability is
dΣ_pls,SI/dt=x_SI Σ_d=(ϵ_SI/τ_SI) Σ_d,
where the efficiency of streaming instability is assumed ϵ_ SI=0.1 <cit.>[Although ϵ_ SI has been treated as a free parameter in previous works <cit.>, its variety is implicitly expressed together with the variety of τ_ SI in this work.]. The timescale of streaming instability, τ_ SI, is an important parameter of this work. We consider the cases with short timescale (τ_ SI=10 years <cit.>, in Section <ref>) and with long timescale (τ_ SI=10^3T_ K, where T_ K=2π/Ω_ K is the orbital period <cit.>, in Section <ref>). We also investigate the cases where the timescale depends on St in Section <ref>.
We also consider planetesimal formation due to mutual collision of particles. When the dust radius R_ d is larger than R_ d,max=1 m, we define that the particles become planetesimals. This is a valid definition, because the rapid growth of particles starts when the particles are smaller than 1 m (see the third column of Fig. <ref>)[We also check that the particle radius becomes much larger than 1 m immediately when we do not consider the planetesimal formation due to mutual collision.]. In every time step, we check this condition after checking the condition for streaming instability.
Although we use a single-sized dust evolution model in this work, the particles have size frequency distribution (SFD) at each r in reality. Hence, we assume an “imaginary” SFD and regard the mass of the particles larger than R_ d,max in the SFD as the mass of newly formed planetesimals due to mutual collision. We assume the SFD as dN∝ a^-q_ dda, where N is the number of the particles larger than a, and the minimum radius as R_ d,min. In that case, the change of the planetesimal surface density due to mutual collision is
dΣ_pls,MC/dt= d/dt[(R_d^{4-q_d}-R_d,max^{4-q_d})/(R_d^{4-q_d}-R_d,min^{4-q_d}) Σ_d].
Here, we assume that R_ d,min=0.1 μ m and q_ d=3.5.
The total planetesimal formation rate is
dΣ_pls,tot/dt= dΣ_pls,SI/dt+ dΣ_pls,MC/dt,
where the total planetesimal surface density is Σ_pls,tot=Σ_pls,SI+Σ_pls,MC. At the same time, the dust surface density is reduced by planetesimal formation,
dΣ_d/dt=- dΣ_pls,tot/dt.
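The bookkeeping of the two formation channels can be illustrated with a schematic explicit-Euler step. This is a simplified sketch, not the scheme of the paper: in particular, the part of the assumed SFD above 1 m is converted at once rather than at the rate given above, and all input values are hypothetical.

```python
import numpy as np

def dsigma_pls_SI(sigma_d, tau_SI, eps_SI=0.1):
    """Planetesimal formation rate by streaming instability, Eq. above."""
    return eps_SI / tau_SI * sigma_d

def mass_fraction_above(R_peak, R_max=100.0, R_min=1.0e-5, q_d=3.5):
    """Mass fraction of the assumed power-law SFD (dN ~ a^-q_d da) above R_max [cm]."""
    if R_peak <= R_max:
        return 0.0
    e = 4.0 - q_d
    return (R_peak**e - R_max**e) / (R_peak**e - R_min**e)

def step(sigma_d, sigma_pls, R_peak, tau_SI, dt):
    """One schematic time step: planetesimals are added, the same mass is removed from dust."""
    dSI = dsigma_pls_SI(sigma_d, tau_SI) * dt
    dMC = mass_fraction_above(R_peak) * sigma_d   # mass already grown past 1 m (simplified)
    dpls = dSI + dMC
    return sigma_d - dpls, sigma_pls + dpls

print(step(sigma_d=1.0, sigma_pls=0.0, R_peak=150.0, tau_SI=10.0 * 3.15e7, dt=3.15e7))
```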
§ RESULTS
§.§ Short streaming instability timescale
§.§.§ Evolution of gas and solids
We show in Fig. <ref> the results with the short SI (streaming instability) timescale (τ_ SI=10 yrs). The figure represents the evolution of the surface densities of the gas, dust, and planetesimals with α=10^-3.4=4×10^-4. The embedded planet carves a deeper gap in the gas, as it migrates inward (the sky blue curves). Thus, the dust particles accumulate more at the outer edge of the gap as the planet migrates inward (blue curves). The figure also shows that the pebble front, the orbital position where the drift timescale of dust becomes shorter than the growth timescale, and dust (pebbles) starts to drift toward the central star <cit.>, moves outward (the bold curves around 100 au). This phenomenon is not related to the embedded planet. Figure <ref> also shows that planetesimals form by streaming instability between the snowline and the orbital position where the inward drift of dust starts to be halted (red curves). The formed planetesimal surface density is about 2 g cm^-2 at the inner edge and about 0.6 g cm^-2 at the outer edge of the formation region.
Figure <ref> shows the detailed evolution of the dust. The first column shows that the radius is smaller than R_d,max=1 m, so that all the planetesimals should be formed by streaming instability. The Stokes number of dust is smaller than 0.1 when the dust drifts inward, but it increases when the dust accumulates (the second column). The Stokes number in the accumulation is about 0.1 when the accumulation starts (t=0.35 Myr) and is larger than 0.1 in the full accumulation (t=0.53 and 0.63 Myr). The accumulation of dust is not sufficient to meet the condition for planetesimal formation (Z_ρ≥1) at the beginning of the accumulation (t=0.35 Myr), but the condition is reached when the drift of dust is stopped (t=0.53 Myr) (the third column). After that, the particles disappear inside the outer edge of the gap, and the inward mass flux of the drifting dust is zero (the fourth column). This condition for the rapid accumulation of dust is consistent with the one proposed by <cit.>, ∂Ṁ_d/∂r<0 (Eq. (30) in the paper). The inward mass flux is uniform, almost constant, and equal to Ṁ_d,pl=1.5×10^-4 M_E yr^-1 outside the gap.
Once the planet (and the gas pressure bump) crosses the snowline, the midplane dust-to-gas density ratio decreases and planetesimals cannot form. This is because the Stokes number of the dust becomes smaller due to the fragile rocky particles (the second column), which makes the vertical diffusion of dust more efficient and lowers ρ_d,mid (the third column). The dust mass flux is about half of that outside the snowline (the fourth column), which is another reason.
We proposed an approximate expression of the planetesimal surface density in Paper 1,
Σ_pls,est ≡ Ṁ_d/(2π r v_pl)
= 8.8 (Σ_g,1au/500 g cm^-2)^{-1} (T_1au/280 K) (M_pl/20 M_E)^{-1}
× (M_*/M_⊙)^{1/2} (Ṁ_d/1.5×10^-4 M_E yr^-1) (r/au)^{-1} g cm^-2,
where Ṁ_ d is the inward dust mass flux (see Appendix <ref> for more general expressions). Figure <ref> shows that our results are very well approximated when we substitute Ṁ_ d=1.5×10^-4 M_ E yr^-1, which is obtained from our results (see the fourth column of Fig. <ref>), into Eq. (<ref>). This means that all dust drifting into the formation place (around where η=0) is converted immediately to planetesimals once the formation starts. This is also shown in Fig. <ref> that the planetesimal mass (red solid curve) increases linearly along with the slope of the cumulative dust mass drifting into the formation place with Ṁ_ d,pl=1.5×10^-4 M_ E yr^-1 (red dashed line). The dust mass (blue solid curve) also decreases linearly before the beginning of planetesimal formation along with the slope of the dust mass assuming constant loss with the same mass flux with Ṁ_ d,pl (blue dashed line), but it decreases more once the planetesimals start to form (0.53≤ t≤0.6 Myr). This is because, although the rate of the mass converting from dust to planetesimals is the same with the one losing from the disc before the planetesimal formation starts, the dust exists inside the gas pressure bump continues to disappear gradually also after the formation starts (see Fig. <ref>). This is also the reason why the slope of the solid (sum of the dust and planetesimals) mass profile in Fig. <ref> gradually becomes zero. Once the gas pressure bump crosses the snowline, the planetesimal formation stops, and the increase of the planetesimal mass also stops (t=0.66 Myr).
§.§.§ Planetesimal formation regions
We then investigate the formation regions of planetesimals by changing the value of α. Figure <ref> shows that planetesimals form when 10^-4≤α≤10^-3, which is broadly consistent with the measured values of α in many observed protoplanetary discs <cit.>. The figure also shows that the formation mechanism is streaming instability in all cases. This is because the dust piling up at the bump converts to planetesimals through the instability before it can grow to planetesimal sizes by mutual collision. We find that belt-like planetesimal formation regions exist between the snowline and the position where the planet reaches its pebble-isolation mass (Eq. (<ref>)), r_PIM. Planetesimals do not form inside the snowline, as we explained in Section <ref>. The pebble-isolation mass is the mass the planet needs in order to make the gap deep enough to stop the dust (pebble) accretion, meaning that all of the dust drifting into the region piles up, which in turn triggers streaming instability. For the calculation of r_PIM in this work, we fix the Stokes number as St=0.1.
When α<10^-3.4, the outer edge of the formation region is slightly outside r_ PIM, and the distance between the two orbital positions is larger as α is smaller. On the other hand, when α>10^-3.4, the outer edge is inner than r_ PIM, and the distance between the two orbital positions is larger as α is larger. This is because to get the condition for planetesimal formation, Z_ρ needs to increase beyond unity against the turbulence. In other words, it is the diffusion of the particles, which prevents the accumulation of the dust. Figure <ref> shows that the orbital position where the largest Z_ρ outside the planetary orbit reaches unity is outside r=r_ PIM when α=10^-4. The position is on r=r_ PIM when α=10^-3.4 and is inside when α=10^-3. This result is consistent with Fig. <ref>. The reasons why the profiles in Fig. <ref> wander at their outer parts are that the pebble front has the largest value of Z_ρ until the rapid accumulation of dust at the gas pressure bump starts, and the pebble front also makes waves in the dust profiles when it crosses the gap created by the planet (especially when α=10^-4).
Figure <ref> shows that the total mass of the formed planetesimals is larger as α is smaller due to the α dependence of the outer edge of the formation region. When α=10^-4, the total mass reaches about 60 M_ E. We estimate the total mass of the planetesimals by
M_ pls,tot,est≡∫^r_ PIM_r_ SL2π rΣ_ pls,estdr.
The figure shows this estimate roughly reproduces the results of our calculations. The difference at the high and low α is because the precise location of the outer edge of the planetesimal formation region is different from r_ PIM, as we explained above.
§.§ Long streaming instability timescale
We then investigate the planetesimal formation with the long SI timescale (τ_ SI=10^3T_ K). In the case of the short SI timescale, all planetesimals form with streaming instability independent from the strength of the turbulence. On the other hand, in the case of the long SI timescale, the formation mechanism depends on the turbulence strength.
Figures <ref> and <ref> represent the profiles of the dust evolution and the planetesimal surface density with the long SI timescale, respectively. The first column of Fig. <ref> shows that the dust radius is smaller than R_d,max=1 m, so that the planetesimals are formed by streaming instability, as in the case of the short SI timescale. However, the left panel of Fig. <ref> shows that the radial profile of the planetesimal surface density is lower than the approximation (Eq. (<ref>)) in the outer part of the planetesimal formation region. This means that only part of the drifting dust entering the formation place of planetesimals converts to planetesimals, because the approximation (Eq. (<ref>)) assumes that all dust converts to planetesimals immediately. The rest of the dust piles up there and makes Z_ρ larger than ϵ_crit=1, the local condition for planetesimal formation (the second column of Fig. <ref>). These interpretations are consistent with the time evolution of the dust and planetesimal mass shown in the left panel of Fig. <ref>. The panel shows that the slope of the planetesimal mass with the long SI timescale (red solid curve) is much smaller than that with the short SI timescale (dotted red curve), i.e., the case in which all of the dust mass converts to planetesimals, especially at the beginning of the planetesimal formation (0.55<t<0.6 Myr). The profiles of the solid mass (solid and dotted purple curves) are the same for both SI timescales, and the dust mass in the long-SI-timescale case does not decrease as it does when we assume the short SI timescale. This also means that dust particles not converted to planetesimals pile up at the gas pressure bump. The sharp decrease of the dust (and solid) mass at t=0.65 Myr is because the piled-up dust evaporates when it crosses the snowline.
On the other hand, the third column of Fig. <ref> shows that the radius reaches R_ d,max=1 m when α=10^-4. At the same time, the density ratio Z_ρ is larger than ϵ_ crit=1 (the fourth column). This means that the planetesimals are formed by both streaming instability and mutual collision. The right panel of Fig. <ref> shows that planetesimals are formed by both mechanisms but mainly by mutual collision when the turbulence is weak. The surface density of planetesimals formed by mutual collision is about 100 times larger than the one of planetesimals formed by streaming instability. The panel also shows that the surface density of planetesimals formed by mutual collision is very well approximated by Eq. (<ref>). However, dust also piles up at the formation place with Z_ρ larger than ϵ_ crit=1 (the fourth column of Fig.<ref>), because Z_ρ becomes easily large with α=10^-4 (i.e., weak diffusion) compared to α=10^-3.4. The right panel of Fig. <ref> shows that all mass of dust drifting into the formation place converts to the mass of planetesimals (mainly) by mutual collision once the planetesimals start to form (t=0.3 Myr). As a result, the solid mass (i.e., total mass of the dust and planetesimals) is conserved after that.
Figure <ref> shows that the planetesimal formation region is between r=r_ SL and r_ PIM, which is the same result with the short SI timescale case including the deviation of the outer edge from r=r_ PIM. The figure also shows that all planetesimals form by streaming instability when α≥10^-3.5, but most of the planetesimals form by mutual collision when α≤10^-3.6. The left panel also shows that the planetesimal surface density of the outer part of the formation region is smaller than the one with the short SI timescale when α≥10^-3.5. This is because only part of the dust drifting into the formation place (i.e., the gas pressure bump) converts to planetesimals, as we explained above. Except for these cases, the surface density of the planetesimals (formed by both mechanisms) are well approximated by Eq. (<ref>) for any strength of turbulence. Figure <ref> also shows that the dominant planetesimal formation mechanism is streaming instability when α≥10^-3.5 and is mutual collision when α≤10^-3.6. When α≥10^-3.5, the total mass is much smaller than the approximation by Eq. (<ref>), because the planetesimal surface density of the outer part of the formation region is smaller than the approximation by Eq. (<ref>).
§.§ Effects of the Stokes number dependence of streaming instability
Previous 3D hydrodynamical simulations have shown that the condition and timescale of streaming instability depend on the Stokes number of dust particles. We consider such a case according to the results of <cit.>. In this case, the logarithm of the critical density ratio is
log ϵ_crit=A(log St)^2+B log St+C,
with
A=0, B=0, C=log 2.5 for St≤0.015,
A=0.48, B=0.87, C=-0.11 for St>0.015.
The streaming instability timescale depends on the Stokes number of the particles,
τ_SI=
2700 Ω_K^{-1} for St≤0.015,
40.5 St^{-1} Ω_K^{-1} for St>0.015,
as shown by the approximation of the results of <cit.> (see Appendix <ref>).
Figure <ref> represents the surface density and formation regions of planetesimals when streaming instability depends on the Stokes number. The figure shows that the profiles of planetesimals are similar to the case with the short SI timescale (see Fig. <ref>). All planetesimals form by streaming instability, and the planetesimal surface density is well approximated by Eq. (<ref>). The planetesimal formation region lies between r_SL and r_PIM, as in the other cases. Planetesimals form even when α=10^-2.9, and the outer edge of the formation region for each α is slightly farther out than that with the short τ_SI. This is because ϵ_crit is smaller than unity when 0.015< St(<1) (Eq. (<ref>)), which is the case for the drifting dust (see the third column of Fig. <ref>).
§ DISCUSSION
§.§ Disc properties dependence
We investigate the effects of change of disc properties. We consider the cases with various initial dust-to-gas surface density ratio, gas surface density, and disc temperature as described in Table <ref>. In this section, the condition for planetesimal formation by streaming instability is the same with the one used in Section <ref>. Then, we find that planetesimals form perfectly or mainly by streaming instability in any cases. Figure <ref> represents the surface density and total mass of the planetesimals (including both planetesimals formed by streaming instability and formed by mutual collision) with various disc properties.
The left panel of Fig. <ref> shows the disc properties dependence of the planetesimal surface density. It depends on the dust mass and weakly on the disc temperature but not on the gas disc mass. This dependence can be explained by updating the approximation of planetesimal surface density, Eq. (<ref>). According to <cit.>, the inward dust mass flux is estimated by the following equation:
Ṁ_d=9.5×10^-5 (Σ_g,1au/500 g cm^-2) (Z_Σ,0/0.01)^{5/3}
×(M_*/M_⊙)^{1/3} (t/Myr)^{-1/3} M_E yr^-1,
and this is consistent with our result, Ṁ_ d=1.5×10^-4M_ E yr^-1, when we substitute t=0.25 Myr into Eq. (<ref>) (see Appendix <ref> for more general expressions). Then, by substituting Eq. (<ref>) for Eq (<ref>), we get a general expression:
Σ_pls,est
=33.5/(2.728+1.082p) (Z_Σ,0/0.01)^{5/3} (T/280 K) (M_pl/20 M_E)^{-1}
×(M_*/M_⊙)^{1/2} (t/Myr)^{-1/3} (r/au)^{-1/2} g cm^-2,
which depends on the dust mass, the disc temperature, and the slope of the gas surface density but not on the disc mass (see also Appendix <ref> for detailed derivation). In the case of our simulations,
Σ_pls,est=8.8 (Z_Σ,0/0.01)^{5/3} (T_1au/280 K) (r/au)^{-1} g cm^-2.
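These scalings are straightforward to evaluate. The sketch below transcribes Eqs. (<ref>) and (<ref>) with their fiducial normalizations; it is illustrative only, and the sample epoch and radius are arbitrary.

```python
def mdot_dust(t_myr, sigma_g_1au=500.0, z_sigma0=0.01, m_star=1.0):
    """Inward pebble mass flux [M_E / yr], Eq. above."""
    return (9.5e-5 * (sigma_g_1au / 500.0) * (z_sigma0 / 0.01)**(5.0/3.0)
            * m_star**(1.0/3.0) * t_myr**(-1.0/3.0))

def sigma_pls_general(r_au, t_myr, z_sigma0=0.01, m_pl_me=20.0, m_star=1.0,
                      p=1.0, t_1au=280.0, q=0.5):
    """General approximation of the planetesimal surface density [g cm^-2], Eq. above."""
    T = t_1au * r_au**(-q)          # local disc temperature for the fiducial profile
    return (33.5 / (2.728 + 1.082 * p) * (z_sigma0 / 0.01)**(5.0/3.0) * (T / 280.0)
            * (m_pl_me / 20.0)**(-1) * m_star**0.5 * t_myr**(-1.0/3.0) * r_au**(-0.5))

print(mdot_dust(0.25))                     # ~1.5e-4 M_E/yr, the value quoted in the text
print(sigma_pls_general(10.0, t_myr=0.5))  # g cm^-2 at 10 au
```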
The left panel of Fig. <ref> shows that the obtained planetesimal surface density with various parameters fits the approximation lines (Eq. (<ref>), dotted lines with corresponding colours) very well. The surface density is 2^5/3=3.2 times higher than that of the fiducial case when Z_Σ,0 is two times higher (green) and is slightly higher when T_ 1au is 1.25 times higher (orange). On the other hand, the surface density is the same with the one of the fiducial case when Σ_ g,1au is two times higher (light blue).
We also plot the positions of the snowline (r_ SL) and where the planet reaches its pebble-isolation mass (r_ PIM) in the left panel of Fig. <ref>. When the disc is hot, r_ PIM is small (8.60 au), because the pebble-isolation mass depends on the sound speed (see Eq. (<ref>)). On the other hand, r_ PIM is at the same position with the fiducial case (13.4 au) when the dust or disc mass is changed (black lines). The position of the snowline (where the temperature is 160 K) is changed from the fiducial case (3.06 au) to 4.78 au only when the disc temperature is higher.
The right panel of Fig. <ref> shows that the total planetesimal mass also depends on the disc properties, and it fits well with the approximation by Eq. (<ref>) (dotted curves). When Z_Σ,0 is changed, the total mass is in proportion to Σ_ pls,est, because the positions of the inner and outer edges of the planetesimal formation region are fixed (see Eq. (<ref>)). Hence, the total mass is 2^5/3=3.2 times heavier when Z_Σ,0 is two times higher (green). As a result, the total planetesimal mass could be about 200 M_ E when Z_Σ,0=0.02 and α=10^-4. When Σ_ g,1au is large, the total planetesimal mass is the same with the one of the fiducial case, because both surface density and formation region of planetesimals do not depend the gas surface density (light blue). When T_ 1au is 1.25 times higher, the planetesimal surface density is higher in proportion to the temperature (see Eq. (<ref>)), but the formation region is much narrower (orange). As a result, the total planetesimal mass is smaller than that of the fiducial case.
§.§ Effects of the planetary growth and the later formation of the planetary core
In the previous sections, we have considered the cases with simple assumptions to understand how planetesimals form in belt-like regions. Here, we investigate more realistic situation considering the evolution of the gas disc, later formation of the embedded planet, growth of the planet by gas accretion, and Type II migration of the planet.
In this section, we improve the gas disc model used in the previous sections (Eq. (<ref>)) to express the time and radial reduction of the gas surface density <cit.>,
Σ_g,unp=Σ_g,1au(r/au)^{-γ}exp{-(r/r_c)^{2-γ}},
where Σ_ g,1au=500exp(-t/τ_ disc) g cm^-2 with τ_ disc=3 Myr, γ=1.5-q=1 (see Eq. (<ref>)), and r_ c=150 au. The outer edge of the disc (calculation region) is 300 au as well as the assumptions in the previous sections.
Planets grown to around the pebble isolation mass also start gas accretion. We consider the growth of the embedded planet by gas accretion as,
dM_pl/dt=min{(dM_pl/dt)_KH, (dM_pl/dt)_disc, Ṁ_g},
where the first, second, and third terms of the right-hand side represent the gas accretion by the Kelvin–Helmholtz-like contraction of the envelope, the accretion of gas from the protoplanetary disc into the Hill sphere, and the limit due to the global gas accretion rate, respectively <cit.>. The first term is motivated by <cit.>,
(dM_pl/dt)_KH=10^-5 M_E yr^-1 (M_pl/10 M_E)^4 (κ/1.0 cm^2 g^-1)^{-1},
where we assume κ=0.05 cm^2 g^-1 as the opacity of the envelope. The second term is given by
(dM_pl/dt)_disc=0.29/(3π) (H_g,pl/r_pl) (M_pl/M_*)^{4/3} (Ṁ_g/α) (Σ_g,pl/Σ_g,unp),
where Σ_ g,pl is the gas surface density at the planetary orbit inside the gap <cit.>. The global gas accretion rate is <cit.>
Ṁ_g=3πνΣ_g,unp{1-2(2-γ)(r/r_c)^{2-γ}}.
As the planet mass increases, the analytical gap model we used in the previous sections is not accurate <cit.>. Therefore, in this section, we use the model described in Paper 1 (see Appendix <ref> for the details).
The type of the planetary migration also shifts from the Type I to Type II as the gap around the planet is deeper. In order to express the Type II migration as well, we adjust the migration timescale, Eq. (<ref>), as follows <cit.>:
τ_mig,adj=τ_mig(Σ_g,pl/Σ_g,unp)^{-1}=(1+0.04K)τ_mig,
where the factor K is defined as Eq. (<ref>).
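A compact sketch of the accretion limiter and the Type II slow-down is given below. It transcribes the equations as printed above (note in particular that the (H_g,pl/r_pl) factor enters linearly as written); the units, the Earth-to-solar mass conversion, and the example gap-depth value are assumptions made here, not values from the paper.

```python
import numpy as np

M_E_PER_MSUN = 332946.0

def dmdt_planet(m_pl_me, m_star_msun, h_pl, alpha, mdot_g_me_yr, sigma_ratio, kappa=0.05):
    """Planetary gas accretion rate [M_E/yr]: minimum of the three limiters, Eqs. above.
    sigma_ratio = Sigma_g,pl / Sigma_g,unp (gap depth); mdot_g_me_yr is the global
    disc accretion rate expressed in the same units as the returned rate."""
    dm_kh = 1.0e-5 * (m_pl_me / 10.0)**4 * (kappa / 1.0)**(-1)
    q_ratio = m_pl_me / (m_star_msun * M_E_PER_MSUN)
    dm_disc = (0.29 / (3.0 * np.pi) * h_pl * q_ratio**(4.0/3.0)
               * mdot_g_me_yr / alpha * sigma_ratio)
    return min(dm_kh, dm_disc, mdot_g_me_yr)

def tau_mig_adjusted(tau_type1, K):
    """Type I timescale stretched by the gap factor (1 + 0.04 K), Eq. above."""
    return (1.0 + 0.04 * K) * tau_type1

print(tau_mig_adjusted(1.0, K=30))   # migration ~2.2x slower for this example gap
```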
The approximation of the planetesimal surface density, Eq. (<ref>), is then improved,
Σ_pls,est
=33.5(1+0.04K)/(2.728+1.082p) (Z_Σ,0/0.01)^{5/3} (T/280 K) (M_pl/20 M_E)^{-1}
×(M_*/M_⊙)^{1/2} (t/Myr)^{-1/3} (r/au)^{-1/2} g cm^-2,
where we assume the effect of the radial reduction of Σ_ g,unp is negligible.
Figure <ref> presents the cases where the above effects are included. In all cases, the planets grow large and carve deep and wide gas gaps during their inward migration. The positions where the planets start to grow rapidly lie farther out as the strength of turbulence is smaller. After the accretion starts, the migration speed decreases, because the type of migration shifts from Type I to Type II. Finally, the masses of the planets grow to ∼1000 M_E, and their migration stops. The migration speed before the start of the rapid planetary growth is slower than that of the simple Type I migration case. The migration speed is slower as α is smaller, because the gap is deeper (Eq. (<ref>)). After the rapid growth starts, the migration speed does not depend on α so much, because the gas accretion rate is smaller as the turbulence is stronger (Eq. (<ref>)), which cancels out the above effect.
The first column shows that planetesimals form (by streaming instability) from the start of the calculation, where the planetary orbit is 50 au (see the top panel at t=0.5 Myr). This outer edge of the formation region is farther than that in the previous sections, because the gap model used in this section is different from the other sections' one. The surface density of the formed planetesimals is well reproduced by the updated approximation by Eq (<ref>) (the black dashed curve) and continuously increases as the planet migrates inward. Then, the pebble front reaches the outer edge of the disc by 1.0 Myr, and the inward dust flux decreases, resulting in the rapid reduction of the planetesimal surface density at 26 au (the second top panel), which is not expressed in the approximation. After that, planetesimal formation continues until the stop of the migration of the planet (2.0-10.0 Myr). Small amount of planetesimals also form by mutual collision at this stage (green). Since the planet stops before it reaches the snowline, the inner edge of the planetesimal formation region is at 9 au, which is outer than that of our previous results without the planetary growth and the Type II migration.
The second column shows the case where α=10^-3.4. Planetesimals start to form at 27 au, which is inner than that with α=10^-4 (the top panel). This trend of the position of the outer edge of the planetesimal formation region is the same with the cases without the planetary growth and the Type II migration. At 13 au, the slope of the surface density of planetesimals starts to be steeper than that without the additional effects (Σ_ pls∝ r^-1), because the planet starts the rapid growth, and so the migration speed becomes slow (the second top). This change of the slope is well reproduced by the approximation (Eq. (<ref>)) showing that the planetesimal surface density is proportional to M_ plr^-1 when K≫25 and T∝ r^-1. However, the increase of the planetesimal surface density stops at 8 au, because the dust inward flux decrease after the pebble front reaches the outer edge of the disc. Finally, the planet crosses the snowline, but the outer edge of the gap (i.e., the gas pressure maximum) is still outside the snowline, and the inner edge of the planetesimal formation region is outside the snowline as well although it is inner than the case with α=10^-4.
The third column shows the profiles with α=10^-3. The start of planetesimal formation is later than that with weaker turbulence, when the pebble front has already reached the outer edge of the disc and the inward dust flux has decreased (the second top panel). Since the planet rapidly grows by gas accretion and the migration speed decreases, the slope of the planetesimal surface density is steeper than that without the planetary growth and the Type II migration, which is well reproduced by the approximation (Eq. (<ref>)). The planet migrates inward to 1 au, where the pressure maximum also reaches the snowline (the bottom panel). The inner edge of the planetesimal formation region is then at the snowline as well as the cases without the planetary growth and the type II migration.
The fourth column shows the case where α=10^-3.4 and the planetary core forms at t_pl,0=2.0 Myr. The top panel shows the properties at t=2.5 Myr. Since the amount of gas and dust remaining in the disc is smaller than in the other cases, the planetary growth is less efficient and the inward dust flux is smaller. As a result, the position where planetesimal formation starts (i.e., the outer edge of the planetesimal formation region) is farther in than in the case with the earlier formation of the planetary core (the second column), and the planetesimal surface density is smaller. The planetesimal surface density is also smaller than the approximation, which does not consider the decrease of the inward dust flux after the pebble front reaches the outer edge of the disc. Also, the planet migrates only to just outside the snowline, so the inner edge of the planetesimal formation region lies farther out than in the earlier core-formation case.
Figure <ref> shows the final distribution of the planetesimal surface density. As we explained in the previous paragraphs, the growth of the planet makes its migration slower, which changes the profiles from those without the growth (the black curve). Due to the slower migration, the embedded planet stops at the point where the gas pressure bump has not reached the snowline yet, which makes the inner edge of the planetesimal formation region outer. The timing of the pebble front reaches the outer edge of the disc (i.e., the inward dust mass flux decreases) is also an important factor for the profiles of the belt-like planetesimal formation region. Also, if the formation of the embedded planet is late, the dust inward flux has already been small, resulting in the low planetesimal surface density and the narrow planetesimal formation region.
Figure <ref> also shows that the belt-like planetesimal formation region is formed with the planetary growth. In the case where the planet exists from the start of the calculation (the left panel), the planetesimal formation region spreads outside r_ PIM, the orbital position where the planet reaches the pebble isolation mass (same with the previous sections' one; the planet mass, the gas disc, and the Stokes number of the dust are fixed), farther than the fiducial case in the previous sections. This is simply because the gas gap model we use here is different from the previous one. The inner edge of the planetesimal formation region is outside the snowline, because the migration speed of the planet decreases as the planet grows heavy, and the gas disc disappears before the outer edge of the gap reaches the snowline. The α dependence of the orbital position of the inner edge of the formed planetesimal belt reflects the α dependence of the migration speed as we discussed in the previous paragraphs. The panel also shows that the slope of the planetesimal surface density is gentle at the inner part of the belt, or even a peak is formed when α≤10^-3.4, which is because the pebble front reaches the outer edge of the disc and then the dust mass flux flowing into the planetesimal formation place decreases.
The right panel of Fig. <ref> shows that the planetesimal formation region with t_ pl,0=2 Myr has similar α dependence to that with t_ pl,0=0 Myr (the left panel), but it is narrower, and the value of the planetesimal surface density is much lower. This is because the pebble front has already reached the outer edge of the disc, and the inward dust mass flux has decreased, as we interpreted Figs. <ref> and <ref> in the previous sections.
Figure <ref> shows that the total mass of the formed planetesimals can be 30-100 M_E when the planet is formed at t_pl,0=0 yr (the purple curve). This is higher than in the cases without the planetary growth and the Type II migration (the black curve), because the planetesimal formation starts at outer orbital positions than in the previous section due to the change of the gas gap model. On the other hand, the total mass where the planet is formed at t_pl,0=1 Myr (green) and 2 Myr (sky blue) is about 10 and 100 times smaller than that with t_pl,0=0 yr, respectively. This is because the planetesimal surface density is smaller and the widths of the planetesimal formation regions are narrower, as we discussed in the above paragraph (Fig. <ref>). We note that Fig. <ref> shows that the planetesimal formation region may spread farther than r=50 au, and the planetesimal mass may be heavier than 100 M_E, when α≤10^-3.7 and t_pl,0=0 yr, if the planet forms farther out than we assume.
§.§ Effects of the back-reaction from dust to gas
We do not consider the effects of the back-reaction from dust to gas, which could change the gas structure at the pressure bump and prevent the accumulation of dust. A gas and dust 2D (radial and vertical) hydrodynamical simulation by <cit.> shows that the deformation of the gas pressure bump by the back-reaction prevents direct gravitational instability, and the size of the planetesimals formed by streaming instability becomes smaller even if they form. However, a 2D simulation including the stellar vertical gravity shows that gravitational instability occurs at the gas pressure bump <cit.>. A gas and dust 2D (radial and azimuth) hydrodynamical simulation with a fixed-orbit planet including a simple dust growth model by <cit.> shows that the back-reaction makes the gas pressure bump flatter, and extreme dust accumulation is suppressed. However, the dust-to-gas density ratio in the midplane Z_ρ is still about unity, which satisfies the condition for planetesimal formation by streaming instability. A similar simulation but with a migrating planet and a fixed size of dust by <cit.> shows that the gas pressure bump does not constantly follow the inward migration of the planet, and multiple dust rings form when α≤3×10^-4. If this occurs even when the dust growth and the planetesimal formation are considered, the radial profile of the planetesimal surface density inside the formation region will be different from our results. On the other hand, a recent gas and dust 3D hydrodynamical simulation with a fixed size of dust by <cit.> shows that, although the accumulation is moderate when Z_ρ>1, the dust ring is narrower rather than wider when Z_ρ<1, which is different from the results of the previous 2D simulations. Therefore, more precise 3D simulations considering the dust growth with a migrating planet should be conducted in order to predict the precise formation process of the planetesimals in the future.
§.§ Effects of the dust leak
We use a dust evolution model that represents the dust mass at each distance by the single peak (largest) mass. This is consistent with full-size simulations because the mass is dominated by the peak (largest) mass. However, the dust has a mass distribution in reality, and small dust escapes relatively easily from the accumulation at the gas pressure bump due to the diffusion. This is obvious from the condition for the accumulation of dust, which is determined by the ratio of the speeds of diffusion and drift <cit.> (see Eqs. (<ref>) and (<ref>)),
|v_drift/v_diff| ∼ (St/α)(∂lnΣ_g/∂ln r)(∂ln Z_Σ/∂ln r)^{-1}.
Hence, if the fragmentation of the piled-up dust is efficient, and a lot of small dust is formed, the gas pressure bump may not be able to maintain the accumulation of dust.
However, a recent full-size simulation by <cit.> shows that small particles leak from the gas pressure bump, but the dust-to-gas surface density ratio maintains Z_Σ≳0.01 for ∼1 Myr when the critical fragmentation speed is v_ cr=10 m s^-1, which is sufficient for triggering streaming instability outside the snowline <cit.>. Therefore, our scenario of planetesimal formation should still work even if the leak of small dust is considered. A recent 2D gas and fixed-sized dust shearing-box simulation by <cit.> shows that dust particles smaller than St∼0.1 cannot pile up at the gas pressure bump, and the trap efficiency is ∼80% even for the particles with St≳0.1. This result suggests that the surface density and total mass of actually formed planetesimals could be smaller than our results, but a significant leak of dust should not happen due to the quick growth of dust at the gas pressure bump predicted in our simulations.
§.§ Effects of the vertical stirring by the planet
Recent 3D simulations show that an embedded planet vertically stirs dust settling onto the midplane <cit.>. This effect can reach the outer region of the protoplanetary disc as the planet is heavy. Here, we briefly check how this vertical stirring changes our results by mimicking the situation.
We treat the strength of turbulence α in Eq. (<ref>), which dominates the vertical equilibrium distribution, as α_vert=10^-2. We assume this value is spatially and temporally constant. Figure <ref> shows that the final distribution of the planetesimal surface density is similar to that of the normal case. The total formed planetesimal mass is 59 M_E, which is also similar to that of the normal case, 52 M_E. In the case of the vertical stirring, however, the starting point of planetesimal formation (i.e., the outer edge of the planetesimal formation region) is farther in than in the normal case. This is because the dust density in the midplane is lower than that of the normal case owing to the vertical stirring, which delays the time at which the condition for streaming instability is satisfied. This trend is also shown in Fig. <ref>, where α is changed. At the inner part of the planetesimal distribution, the surface density of the vertical stirring case is higher than that of the normal case. This is because the pebble front reaches the disc outer edge later than in the normal case due to the less efficient collisional growth of dust on the midplane: the vertical stirring lowers the midplane dust density, and the collision rate becomes smaller. However, since the vertical stirring is weaker as the distance to the planet increases, the speed of the pebble front may not change so much, especially when the planet (or planetary core) is not heavy. On the other hand, the difference in the starting points of the planetesimal formation will remain in the case of a lighter planet.
The presence of an embedded planet may also make the dust ring wider. It will change the starting position of planetesimal formation and may lower the planetesimal surface density if the planetesimal formation rate is also affected by the planet. However, the big picture of the planetesimal formation mechanism will not change, similar to the effects of the dust back-reaction (see Section <ref>).
§ CONCLUSIONS
A planet carves the protoplanetary disc and creates a gas pressure bump. Dust drifting from the outer region of the disc piles up there and forms planetesimals. As the planet migrates inward, the planetesimal formation site also moves inward, and the formation region spreads over the inner disc. As a result, planetesimals form in a wide belt-like region of the disc <cit.>.
We investigated this scenario for planetesimal formation by additionally considering the global dust evolution in a protoplanetary disc for a wide range of α, the strength of turbulence. We showed that dust particles pile up at the bump and form planetesimals by streaming instability and mutual collisions. As the planet migrates inward, the formation region lies roughly between the snowline and the orbital position where the planet reaches its pebble-isolation mass when 10^-4≤α≤10^-3, which is broadly consistent with observed values of α <cit.>. As α decreases, the planetesimal formation region becomes wider and the total mass of the formed planetesimals becomes larger.
The formation mechanism depends on the SI (streaming instability) timescale. For a short SI timescale, all planetesimals form by streaming instability independently of the value of α. For a long SI timescale, on the other hand, all planetesimals form by streaming instability when α≥10^-3.5, but most of them form by mutual collisions when α≤10^-3.6. We also investigated the case in which the condition for streaming instability and the SI timescale depend on the Stokes number of the dust <cit.>. The results are almost the same as those with the short SI timescale, except that the outer edge of the planetesimal formation region is slightly farther out.
The planetesimal surface density is ∼10 g cm^-2 around the inner edge and ∼0.1-1 g cm^-2 around the outer edge of the formation region. This is consistent with the results of <cit.> and means that almost all dust drifting into the formation site is immediately converted to planetesimals there. The total planetesimal mass depends on α and reaches about 60 M_ E with α=10^-4 and a typical initial dust-to-gas surface density ratio (i.e., dust mass). The total planetesimal mass also depends significantly on the dust mass, and it can be about 200 M_ E when the initial dust-to-gas surface density ratio is 0.02. We also showed that the surface density and total mass of the planetesimals can be approximated with simple expressions.
Furthermore, when the growth of the embedded planet by gas accretion and the shift to Type II migration are considered, the profiles of the planetesimal surface density change from those obtained with the simple assumptions. The slowdown of the planet's migration makes the slopes of the profiles steeper in their inner regions, but it also reduces the surface density once the pebble front reaches the outer edge of the disc and the inward dust (pebble) mass flux decreases. When 10^-4≤α≤10^-3, the total mass of the formed planetesimals is about 30-100 M_ E if the planetary core already exists at t=0 yr. However, the total mass is about 10 and 100 times smaller when the planetary core forms at t=1 Myr and 2 Myr, respectively, because most of the dust (pebbles) has already fallen onto the star before planetesimal formation starts.
We thank the referee, Joanna Drążkowska, for the very valuable comments. We also thank Christoph Mordasini and Takahiro Ueda for constructive and useful discussions. This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606.
§ CLUMPING TIMESCALE OF DUST
The clumping timescale of dust, in other words the SI timescale, depends on the Stokes number of the dust particles. We approximate (by eye) the results of a recent vertically stratified gas and dust hydrodynamical simulation by <cit.>. Figure <ref> shows the results of that work and our approximation of them, Eq. (<ref>).
§ ANALYTICAL EXPLANATION OF THE APPROXIMATE EXPRESSIONS
The analytical expression for the inward dust (pebble) flux provided by <cit.> can be written in a more general form:
Ṁ_ d=9.5×10^-5(Σ_ unp,g/500 g cm^-2)(Z_Σ,0/0.01)^5/3
×(M_*/M_⊙)^1/3(t/ Myr)^-1/3(r/ au) M_ E yr^-1.
When Σ_ unp,g=Σ_ g,1au(r/ au)^-p,
Ṁ_ d=9.5×10^-5(Σ_ 1au/500 g cm^-2)(Z_Σ,0/0.01)^5/3
×(M_*/M_⊙)^1/3(t/ Myr)^-1/3(r/ au)^1-p M_ E yr^-1.
When p=1, the r dependence cancels out, and we get Eq. (<ref>), which is exactly the same expression as Eq. (14) of <cit.>. Under this assumption (p=1), the dust mass flux is uniform. When we also substitute t=0.25 Myr into Eq. (<ref>),
Ṁ_ d=1.5×10^-4(Σ_ 1au/500 g cm^-2)(Z_Σ,0/0.01)^5/3(M_*/M_⊙)^1/3 M_ E yr^-1,
which is consistent with the results of our simulations.
By substituting Eq. (<ref>) into the upper expression of Eq. (<ref>), together with Eqs. (<ref>) and (<ref>), we obtain general approximate expressions for the planetesimal surface density:
Σ_pls,est
=33.5/(2.728+1.082p)(Σ_ unp,g/500 g cm^-2)^-1(T/280 K)(M_ pl/20 M_ E)^-1
×(M_*/M_⊙)^1/2(Ṁ_ d/1.5×10^-4 M_ E yr^-1)(r/ au)^-1 g cm^-2,
or
Σ_pls,est
=33.5/(2.728+1.082p)(Σ_ 1au/500 g cm^-2)^-1(T_ 1au/280 K)(M_ pl/20 M_ E)^-1
×(M_*/M_⊙)^1/2(Ṁ_ d/1.5×10^-4 M_ E yr^-1)(r/ au)^p-q-1 g cm^-2.
When we substitute p=1 and q=1/2 into Eq. (<ref>), we get the lower expression of Eq. (<ref>).
If we substitute Eq. (<ref>) into Eqs. (<ref>) and (<ref>), we get Eq. (<ref>) and
Σ_pls,est
=33.5/(2.728+1.082p)(Z_Σ,0/0.01)^5/3(T_ 1au/280 K)(M_ pl/20 M_ E)^-1
×(M_*/M_⊙)^1/2(t/ Myr)^-1/3(r/ au)^-q-1/2 g cm^-2,
respectively. Here, Σ_pls,est does not depend on Σ_ g,unp, except for the weak dependence through the coefficient 33.5/(2.728+1.082p), because it cancels out. When p=1 and q=1/2,
Σ_pls,est
=5.6(Z_Σ,0/0.01)^5/3(T_ 1au/280 K)(M_ pl/20 M_ E)^-1
×(M_*/M_⊙)^1/2(t/ Myr)^-1/3(r/ au)^-1 g cm^-2.
When we also substitute t=0.25 Myr, M_ pl=20 M_ E, and M_*=M_⊙ into Eq. (<ref>), we get Eq. (<ref>).
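For convenience, the following Python sketch evaluates the approximate planetesimal surface density in the p=1, q=1/2 case, as reconstructed above; the default arguments are the fiducial normalisation values, and the function is only a convenience wrapper around the expression, not part of our simulation code.

    # Sketch: approximate planetesimal surface density Sigma_pls,est (p=1, q=1/2 case),
    # following the expression above. Inputs use the normalisation units of the text.
    def sigma_pls_est(Z0=0.01, T_1au=280.0, M_pl=20.0, M_star=1.0, t_Myr=1.0, r_au=10.0):
        """Z0: initial dust-to-gas ratio, T_1au: temperature at 1 au [K],
        M_pl: planet mass [M_E], M_star: stellar mass [M_sun],
        t_Myr: time [Myr], r_au: orbital distance [au]. Returns g cm^-2."""
        return (5.6 * (Z0 / 0.01) ** (5.0 / 3.0) * (T_1au / 280.0)
                * (M_pl / 20.0) ** -1 * M_star ** 0.5
                * t_Myr ** (-1.0 / 3.0) * r_au ** -1)

    print(sigma_pls_est(r_au=10.0))   # ~0.56 g cm^-2 at 10 au for the fiducial values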
§ GAP MODEL USED IN THE CASES WITH THE PLANETARY GROWTH
In Section <ref>, we use the gap model of Paper 1 <cit.> to describe the cases with planetary growth. This model is more accurate than that of <cit.> when the planet is massive. The perturbed gas surface density is
Σ_ g=Σ_ g,unp max(s_ K, s_ min),
where s_ K=max(s_ Kepler, s_ Rayleigh). The factor s_ Kepler is
s_ Kepler= exp(-C K/(9|x|^3)) (|x|>Δ)
exp(-C K/(9Δ^3)) (|x|≤Δ),
where C=0.798, Δ=1.3, and x=(r-r_ pl)/H_ g,pl. The factor s_ Rayleigh is
s_ Rayleigh= exp(-(5/6)x_ m^2+(5/4)x_ m|x|-(1/2)x^2) (|x|>Δ)
exp(-(5/6)x_ m^2+(5/4)x_ mΔ-(1/2)Δ^2) (|x|≤Δ),
where x_ m={(4/3)CK}^1/5 is the outer edge of the marginal Rayleigh stable region (i.e., |x|<x_ m). The factor s_ min is given by <cit.>:
s_ min=Σ_ g,pl/Σ_ g,unp=1/(1+0.04K).
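A minimal Python sketch of the gap model above, assuming the reconstructed piecewise forms of s_Kepler and s_Rayleigh; the gap-depth parameter K is treated as a given input, and the value of K in the usage example is illustrative only.

    import numpy as np

    C, DELTA = 0.798, 1.3

    def gap_factor(x, K):
        """Sigma_g / Sigma_g,unp at scaled distance x = (r - r_pl)/H_g,pl."""
        ax = max(abs(x), DELTA)                      # |x| is replaced by Delta inside |x| <= Delta
        s_kepler = np.exp(-C * K / (9.0 * ax ** 3))
        x_m = ((4.0 / 3.0) * C * K) ** 0.2           # outer edge of the marginal Rayleigh-stable region
        s_rayleigh = np.exp(-(5.0 / 6.0) * x_m ** 2 + (5.0 / 4.0) * x_m * ax - 0.5 * ax ** 2)
        s_min = 1.0 / (1.0 + 0.04 * K)
        return max(max(s_kepler, s_rayleigh), s_min)

    # Example: gap profile for K = 1000 out to six gas scale heights.
    for x in np.linspace(0.0, 6.0, 7):
        print(f"x = {x:.1f}: Sigma_g/Sigma_g,unp = {gap_factor(x, K=1000.0):.3f}")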
|
http://arxiv.org/abs/2306.02660v1
|
20230605074842
|
Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach
|
[
"Chiheb Ben Hammouda",
"Nadhir Ben Rached",
"Raúl Tempone",
"Sophia Wiechert"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"math.OC",
"q-bio.MN",
"q-bio.QM",
"stat.CO",
"60H35, 60J75, 65C05, 93E20"
] |
Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach
Chiheb Ben Hammouda, Nadhir Ben Rached, Raúl Tempone, Sophia Wiechert
=============================================================================
We propose a novel alternative approach to our previous work (Ben Hammouda et al., 2023) to improve the efficiency of Monte Carlo (MC) estimators for rare event probabilities for stochastic reaction networks (SRNs). In the same spirit of (Ben Hammouda et al., 2023), an efficient path-dependent measure change is derived based on a connection between determining optimal importance sampling (IS) parameters within a class of probability measures and a stochastic optimal control formulation, corresponding to solving a variance minimization problem.
In this work, we propose a novel approach to address the encountered curse of dimensionality by mapping the problem to a significantly lower-dimensional space via a Markovian projection (MP) idea. The output of this model reduction technique is a low-dimensional SRN (potentially even one dimensional) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained by solving a related optimization problem via a discrete L^2 regression. By solving the resulting projected Hamilton–Jacobi–Bellman (HJB) equations for the reduced-dimensional SRN, we obtain projected IS parameters, which are then mapped back to the original full-dimensional SRN system, resulting in an efficient IS-MC estimator for rare events probabilities of the full-dimensional SRN. Our analysis and numerical experiments reveal that the proposed MP-HJB-IS approach substantially reduces the MC estimator variance, resulting in a lower computational complexity in the rare event regime than standard MC estimators.
Keywords: stochastic reaction networks, tau-leap, importance sampling, stochastic optimal control, Markovian projection, rare event.
§ INTRODUCTION
This paper proposes an efficient estimator for rare event probabilities for a particular class of continuous-time Markov processes, stochastic reaction networks (SRNs). We design an automated importance sampling (IS) approach based on the approximate explicit tau-leap (TL) scheme to build a Monte Carlo (MC) estimator for rare event probabilities of SRNs. The used IS change of measure was introduced in <cit.>, wherein the optimal IS controls were determined via a stochastic optimal control (SOC) formulation. In that same work, we also presented a learning-based approach to avoid the curse of dimensionality. Building on that work, we propose an alternative method for high-dimensional SRNs that leverages dimension reduction through Markovian projection (MP) and then recover the optimal IS controls of the full-dimensional SRNs as a mapping from the solution in lower-dimensional space, potentially one. To the best of our knowledge, we are the first to establish the MP framework for the SRN setting to solve an IS problem.
An SRN (refer to Section <ref> for a brief introduction and <cit.> for more details)
describes the time evolution of a set of species through reactions and can be found in a wide range of applications, such as biochemical reactions, epidemic processes <cit.>, and transcription and translation in genomics and virus kinetics <cit.>. For a d-dimensional SRNs, 𝐗:[0,T]→^d, with the given final time T>0, we aim to determine accurate and computationally efficient MC estimations for the expected value 𝔼[g(𝐗(T))]. The observable g:^d→ is a given scalar function of 𝐗, where indicator functions g(𝐱)=1_{𝐱∈ℬ} are of interest to estimate the rare event probability ℙ(𝐗(T)∈ℬ)≪ 1, where ℬ⊂^d.
The quantity of interest, 𝔼[g(𝐗(T))], is the solution to the corresponding Kolmogorov backward equations <cit.>. Since solving these ordinary differential equations (ODEs) in closed form is infeasible for most SRNs, numerical approximations based on discretized schemes are used to derive solutions.
A drawback of these approaches is that, without using dimension reduction techniques, the computational cost scales exponentially with the number of species d. To avoid the curse of dimensionality, we propose estimating 𝔼[g(𝐗(T))] using MC methods.
Numerous schemes have been developed to simulate the exact sample paths of SRNs. These include the stochastic simulation algorithm introduced by Gillespie in <cit.> and the modified next reaction method proposed by Anderson in <cit.>. However, when SRNs involve reaction channels with high reaction rates, simulating exact realizations of the system can be computationally expensive. To address this issue, Gillespie <cit.> and Aparicio and Solari <cit.> independently proposed the explicit-TL method (see Section <ref>), which approximates the paths of 𝐗 by evolving the process with fixed time steps while maintaining constant reaction rates within each time step.
Additionally, other simulation schemes have been proposed to handle situations with well-separated fast and slow time scales <cit.>.
In order to compute MC estimates of 𝔼[g(𝐗(T))] more efficiently, different variance reduction techniques have been proposed in the context of SRNs. In the spirit of the multilevel MC (MLMC) idea <cit.>, various MLMC-based methods <cit.> have been introduced to overcome different challenges in this context. Moreover, as the naive MC and MLMC estimators have high computational costs when used for estimating rare event probabilities, different IS approaches <cit.> have been proposed.
To estimate various statistical quantities efficiently for SRNs (specifically rare event probabilities), we use the path-dependent IS approach originally introduced in <cit.>. This class of probability measure change is based on modifying the rates of the Poisson random variables used to construct the TL paths. In <cit.>, it is shown how optimal IS controls are obtained by minimizing the second moment of the IS estimator (equivalently, the variance), representing the cost function of the associated SOC problem, and that the corresponding value function solves a dynamic programming relation (see Section <ref> for revising these results). In this work, we generalize the discrete-time dynamic programming relation by a set of continuous-time ODEs, the Hamilton–Jacobi–Bellman (HJB) equations, allowing the formulation of optimal IS controls in continuous time. Compared to the discrete-time IS control formulation presented in <cit.>, the continuous-time formulation offers the advantage that it provides a curve of IS controls over time instead of a discrete set. This allows its application for any time stepping in the IS-TL paths and thereby eliminates the need for ad-hoc interpolations often needed in the discrete setting.
In the multidimensional setting, the cost of solving the backward HJB equations increases exponentially with respect to the dimension d (curse of dimensionality). In <cit.>, we proposed a learning-based approach to reduce this effect. In that approach, the value function is approximated using an ansatz function, the parameters of which are learned through a stochastic optimization algorithm (see Figure <ref> for a schematic illustration of the approach). In this work, we present an alternative method using a dimension reduction approach for SRNs (see Figure <ref> for a schematic illustration of the approach). The proposed methodology is to adapt the MP idea originally introduced in <cit.> for the setting of diffusion-type stochastic differential equations (SDEs) to the SRN framework, resulting in a significantly lower-dimensional process, preserving the marginal distribution of the original full-dimensional SRN. The propensities characterizing the lower-dimensional MP process can be approximated using L^2 regression. Using the resulting low-dimensional SRN, we derive an approximate value function and, consequently, near-optimal IS controls, while reducing the effect of the curse of dimensionality. By mapping the IS controls to the original full-dimensional SRNs, we derive an unbiased IS-MC estimator for the TL scheme. Compared to the learning-based approach presented in <cit.>, this novel MP-IS approach eliminates the need for an ansatz function to model the value function. This approach allows its application to general observables g that differ from indicator functions for rare event estimation, because no prior knowledge regarding the shape of the value function and suitable ansatz functions is required.
To the best of our knowledge, we are the first to establish the MP idea for SRNs and apply it to derive an efficient pathwise IS for MC methods. Initially, the MP idea was introduced for Itô stochastic processes in <cit.> and was later generalized to martingales and semimartingales <cit.>. In addition, MP has been widely applied for dimension reduction in SDEs <cit.>, particularly in financial applications <cit.>. For instance, in <cit.>, solving HJB equations for an MP process was pursued but in the setting of Itô SDEs with the application of pricing American options. In <cit.>, MP was used for control problems and IS problems for rare events in high-dimensional diffusion processes with multiple time scales. In this work, we introduce the general dimension reduction framework of MP for SRNs such that it can be applied to other problems beyond the selected IS application. (e.g., solving the chemical master equation <cit.> or the Kolmogorov backward equations <cit.>).
The remainder of this work is organized as follows. Sections <ref>, <ref>, <ref>, and <ref> recall the relevant SRN, TL, MC, and IS notations and definitions from <cit.>. Next, Section <ref> reviews the connection between IS and SOC by introducing the IS scheme, the value function, and the corresponding dynamic programming theorem from <cit.> in Section <ref>. Then, Section <ref> extends the framework to a continuous-time formulation leading to the continuous-time value function and deriving the corresponding HJB equations.
Section <ref> presents the MP technique for SRNs and shows how the projected dynamics can be computed using L^2 regression. Next, Section <ref> addresses the curse of dimensionality of high-dimensional SRNs occurring from the optimal IS scheme in Section <ref> by combining the IS scheme with MP (Section <ref>) to derive near-optimal IS controls. Finally, Section <ref> presents the numerical results for the rare event probability estimation to demonstrate the efficiency of the proposed MP-IS approach compared to a standard TL-MC estimator.
§.§ Stochastic Reaction Networks (SRNs)
We recall from <cit.> that an SRN describes the time evolution for a homogeneously mixed chemical reaction system, in which d distinct species interact through J reaction channels. Each reaction channel ℛ_j , j=1…,J, is given by the relation
α_j,1 S_1+…+α_j,d S_d θ_j→β_j,1 S_1+…+β_j,d S_d,
where α_j,i molecules of species S_i are consumed and β_j,i molecules are produced. The positive constants {θ_j}_j=1^J represent the reaction rates.
This process can be modeled by a Markovian pure jump process, 𝐗:[0,T]×Ω→^d, where (Ω, ℱ, ℙ) is a probability space.
We are interested in the time evolution of the state vector,
𝐗(t) = (X_1(t), …, X_d(t)) ∈^d ,
where the i-th component, X_i(t), describes the abundance of the ith species present in the system at time t.
The process 𝐗 is a continuous-time, discrete-space Markov process characterized by Kurtz's random time change representation <cit.>:
𝐗(t)= 𝐱_0+∑_j=1^J Y_j (∫_0^t a_j(𝐗(s)) ds ) ν_j,
where Y_j:_+×Ω→ are independent unit-rate Poisson processes and the stoichiometric vector is defined as ν_j=(β_j,1-α_j,1,…,β_j,d-α_j,d) ∈^d.
The propensity a_j(·) for reaction channel ℛ_j is derived from the stochastic mass-action kinetic principle and
obeys
a_j(𝐱):=θ_j ∏_i=1^d x_i!/(x_i-α_j,i)!1_{x_i≥α_j,i}
where x_i is the counting number for species S_i.
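As an illustration of how the mass-action propensity above can be evaluated in practice, the following Python sketch computes a_j(x) from the reactant coefficients α_j and the rate θ_j; it is a generic helper written for this text, not code from the paper.

    from math import perm

    def propensity(x, alpha_j, theta_j):
        """Mass-action propensity a_j(x) = theta_j * prod_i x_i!/(x_i - alpha_{j,i})!,
        returning 0 whenever some species has fewer copies than the reaction consumes."""
        a = theta_j
        for xi, aji in zip(x, alpha_j):
            if xi < aji:
                return 0.0
            a *= perm(xi, aji)          # falling factorial x_i!/(x_i - alpha_{j,i})!
        return a

    # Example: bimolecular reaction E + S -> C with theta = 0.001 and 100 copies of E and S.
    print(propensity((100, 100, 0, 0), (1, 1, 0, 0), 0.001))   # -> 10.0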
§.§ Explicit Tau-Leap Approximation
The explicit-TL scheme is a pathwise approximate method based on Kurtz's random time change representation (<ref>) <cit.>. It was originally introduced to overcome the computational drawbacks of exact methods, which become computationally expensive when many reactions fire during a short time interval.
For a uniform time mesh {t_0=0, t_1,...,t_N= T} with step size Δ t=T/N and a given initial value 𝐗(0)=𝐱_0 , the explicit-TL approximation for 𝐗 is defined by
𝐗̂_0 := 𝐱_0
𝐗̂^Δ t_k :=max(0,𝐗̂^Δ t_k-1+∑_j=1^J𝒫_k-1,j(a_j(𝐗̂^Δ t_k-1) Δ t) ν_j) 1 ≤ k ≤ N,
where {𝒫_k,j(r_k,j)}_{1≤ j≤ J } are independent Poisson random variables, conditioned on the current state 𝐗̂^Δ t_k, with respective rates r_k,j:=a_j(𝐗̂^Δ t_k)Δ t. The maximum in (<ref>) is applied entry-wise. In each TL step, negative entries of the state are projected to zero to prevent the process from exiting the lattice (i.e., producing negative values).
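A minimal Python sketch of the explicit-TL scheme in (<ref>): propensities are frozen over each step, Poisson reaction counts are drawn, and negative entries are projected to zero. The propensity function is supplied by the user (e.g., the mass-action form above); the single-species decay network in the usage lines is illustrative only.

    import numpy as np

    def tau_leap_path(x0, nu, propensities, T, dt, rng):
        """x0: initial state (d,), nu: stoichiometric matrix (d, J),
        propensities: callable x -> (J,) array, T: final time, dt: step size."""
        x = np.array(x0, dtype=float)
        path = [x.copy()]
        for _ in range(int(round(T / dt))):
            a = np.asarray(propensities(x))          # frozen rates a_j(X_k)
            counts = rng.poisson(a * dt)             # P_{k,j} ~ Poisson(a_j(X_k) dt)
            x = np.maximum(0.0, x + nu @ counts)     # entry-wise projection to >= 0
            path.append(x.copy())
        return np.array(path)

    # Usage: single-species decay X -> 0 with unit rate and X(0) = 1000.
    path = tau_leap_path([1000], np.array([[-1.0]]), lambda x: np.array([1.0 * x[0]]),
                         T=1.0, dt=2 ** -5, rng=np.random.default_rng(0))
    print(path[-1])   # on average close to 1000 * exp(-1) ~ 368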
§.§ Biased Monte Carlo estimator
We let 𝐗 be an SRN and g: ^d→ be a scalar observable.
For a given final time T, we estimate 𝔼[g(𝐗(T))] using the standard MC-TL estimator:
μ_M :=1/M∑_m=1^M g(𝐗̂^Δ t_[m](T))
where {𝐗̂^Δ t_[m](T)}_m=1^M are independent TL samples.
The global error for the proposed MC estimator has the following error decomposition:
|𝔼[g(𝐗(T))]-μ_M|≤|𝔼[g(𝐗(T))]-𝔼[g(𝐗̂^Δ t(T))]|_Bias+|𝔼[g(𝐗̂^Δ t(T))]-μ_M|_Statistical Error.
Under some assumptions, the TL scheme has a weak order, Δ t <cit.>, that is, for sufficiently small Δ t,
|𝔼[g(𝐗(T))- g(𝐗̂^Δ t(T) )] |≤ CΔ t
where C>0.
The bias and statistical error can be bound equally using TOL/2 to achieve the desired accuracy, TOL, with a confidence level of 1-α for α∈ (0,1), which can be achieved by the step size:
Δ t(TOL)= TOL/2· C
and
M^*(TOL)=C_α^2· 4·Var[g(𝐗̂^Δ t(T))]/TOL^2
sample paths, where the constant C_α is the (1-α/2)-quantile for the standard normal distribution. We select C_α=1.96 for a 95% confidence level corresponding to α =0.05.
Given that the computational cost to simulate a single path is Δ t^-1, the expected total computational complexity is TOL^-3.
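The following small Python sketch implements the two expressions above for the step size and the number of paths; the weak-error constant C and the variance are problem-dependent inputs, and the numbers in the usage line are placeholders.

    import math

    def tl_mc_parameters(tol, C_weak, var_g, C_alpha=1.96):
        """Step size and number of TL paths balancing bias and statistical error at TOL/2."""
        dt = tol / (2.0 * C_weak)
        M = math.ceil(C_alpha ** 2 * 4.0 * var_g / tol ** 2)
        return dt, M

    dt, M = tl_mc_parameters(tol=1e-2, C_weak=1.0, var_g=0.05)
    print(dt, M)   # cost per path ~ 1/dt, so the total work scales like TOL^-3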
§.§ Importance Sampling
Using IS techniques <cit.> can reduce the computational cost of the crude MC estimator through variance reduction in (<ref>). For a general motivation, we refer to <cit.>, Section 1.4. To illustrate the IS method, let us consider the general problem of estimating 𝔼[g(Y)], where g is a given observable and Y is a random variable taking values in ℝ with the probability density function ρ_Y. We let ρ̂_Z be the probability density function for an auxiliary real random variable Z. The MC estimator under the IS measure is
μ_M^IS=1/M∑_j=1^M L(Z_[j])· g(Z_[j]),
where Z_[j] are independent and identically distributed samples from ρ̂_Z for j=1,…,M and the likelihood factor is given by L(Z_[j]):=ρ_Y(Z_[j])/ρ̂_Z(Z_[j]). The IS estimator retains the expected value of (<ref>) (i.e., 𝔼[L(Z)g(Z)]=𝔼[g(Y)]), but the variance can be reduced due to a different second moment 𝔼[(L(Z)· g(Z))^2].
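The measure-change mechanism above can be illustrated with a toy example unrelated to SRNs: estimating P(Y > 4) for Y ~ N(0,1) by sampling from the shifted proposal Z ~ N(4,1) and reweighting with the likelihood ratio, which in this case has the closed form exp(8 - 4Z). The Python sketch below only illustrates the generic IS estimator, not the TL-path measure change used later.

    import numpy as np

    rng = np.random.default_rng(0)
    M = 10 ** 5
    Z = rng.normal(loc=4.0, scale=1.0, size=M)   # proposal samples Z ~ N(4, 1)
    L = np.exp(8.0 - 4.0 * Z)                    # likelihood ratio rho_Y(Z) / rho_hat_Z(Z)
    print(np.mean(L * (Z > 4.0)))                # IS estimate of P(Y > 4) ~ 3.17e-5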
Determining an auxiliary probability measure that substantially reduces the variance compared with the original measure is challenging and strongly depends on the structure of the considered problem. In addition, the derivation of the new measure must come with a moderate additional computational cost to ensure an efficient IS scheme. This work uses the path-dependent change of probability measure introduced in <cit.>, employing an IS measure derived from changing the Poisson random variable rates in the TL paths. Section <ref> recalls the SOC formulation for optimal IS parameters from <cit.> and extends it with a novel HJB formulation. We conclude this consideration in Section <ref>, combining the IS scheme with a dimension reduction approach to reduce the computational cost.
§ IMPORTANCE SAMPLING VIA STOCHASTIC OPTIMAL CONTROL FORMULATION
§.§ Dynamic Programming for Importance Sampling Parameters
This section revisits the connection between optimal IS measure determination within a class of probability measures, and the SOC formulated originally in <cit.>.
We let 𝐗 be an SRN as defined in Section <ref> and let 𝐗̂^Δ t denote its TL approximation as given by (<ref>). Then, the goal is to derive a near-optimal IS measure to estimate 𝔼[g(𝐗(T))]. We limit ourselves to the parameterized class of IS schemes used in <cit.>:
𝐗_n+1^Δ t =max(0,𝐗_n^Δ t+∑_j=1^JP̅_n,jν_j) , n=0,…,N-1,
𝐗_0^Δ t =𝐱_0,
where the measure change is obtained by modifying the Poisson random variable rates of the TL paths:
P̅_n,j=𝒫_n,j(δ_n,j^Δ t(𝐗^Δ t_n)Δ t), n=0,…, N-1, j=1,…,J .
In (<ref>), δ_n,j^Δ t(𝐱)∈𝒜_𝐱,j is the control parameter at time step n, under reaction j, and in state 𝐱∈ℕ^d. In addition, 𝒫_n,j(r_n,j) are independent Poisson random variables, conditioned on 𝐗^Δ t_n, with the respective rates r_n,j:=δ_n,j^Δ t(𝐗^Δ t_n)Δ t. The set of admissible controls is
𝒜_𝐱,j={0} ,if a_j(𝐱)=0
{y∈ℝ: y>0} ,otherwise.
In the following, we use the vector notation (δ_n^Δ t(𝐱))_j:=δ_n,j^Δ t(𝐱) and (𝐏̅_n)_j:=P̅_n,j for j=1,…,J.
The corresponding likelihood ratio of the path {𝐗^Δ t_n: n=0,…,N} for the IS parameters δ_n^Δ t(𝐱) ∈×_j=1^J 𝒜_𝐱,j is
L((𝐏̅_0,…,𝐏̅_N-1),(δ_0^Δ t(𝐗^Δ t_0),…,δ_N-1^Δ t(𝐗^Δ t_N-1)))=∏_n=0^N-1 L_n(𝐏̅_n,δ_n^Δ t(𝐗^Δ t_n)),
where the likelihood ratio update at time step n is
L_n(𝐏̅_n,δ_n^Δ t(𝐗^Δ t_n))
=∏_j=1^Jexp(-(a_j(𝐗_n^Δ t)-δ_n,j^Δ t(𝐗^Δ t_n))Δ t)(a_j(𝐗_n^Δ t)/δ_n,j^Δ t(𝐗^Δ t_n))^P̅_n,j
=exp(-(∑_j=1^J a_j(𝐗_n^Δ t)-δ_n,j^Δ t(𝐗^Δ t_n))Δ t) ·∏_j=1^J(a_j(𝐗_n^Δ t)/δ_n,j^Δ t(𝐗^Δ t_n))^P̅_n,j.
To simplify the notation, we use the convention that a_j(𝐗_n^Δ t)/δ_n,j^Δ t(𝐗^Δ t_n)=1, whenever both a_j(𝐗_n^Δ t)=0 and δ_n,j^Δ t(𝐗^Δ t_n)=0 in (<ref>). From (<ref>), this results in a factor of 1 in the likelihood ratio for reactions where a_j(𝐗_n^Δ t)=0.
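A minimal Python sketch of one IS-TL step under this change of measure: the Poisson counts are drawn with the controlled rates δ_{n,j}Δ t, and the likelihood factor L_n is computed from (<ref>), with the convention a_j/δ_j := 1 when both vanish handled explicitly. This is an illustrative re-implementation, not the authors' code.

    import numpy as np

    def is_tl_step(x, nu, a, delta, dt, rng):
        """x: state (d,), nu: stoichiometry (d, J), a: propensities a_j(x) (J,),
        delta: IS controls delta_{n,j}(x) (J,). Returns (next state, likelihood factor L_n)."""
        counts = rng.poisson(delta * dt)
        x_next = np.maximum(0.0, x + nu @ counts)
        ratio = np.divide(a, delta, out=np.ones_like(a, dtype=float), where=delta > 0.0)
        L_n = np.exp(-np.sum(a - delta) * dt) * np.prod(ratio ** counts)
        return x_next, L_n

    # Tiny usage: one-species decay network with the rate doubled as IS control.
    x, L = is_tl_step(np.array([50.0]), np.array([[-1.0]]),
                      a=np.array([5.0]), delta=np.array([10.0]),
                      dt=0.1, rng=np.random.default_rng(0))
    print(x, L)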
Using the introduced change of measure (<ref>), the quantity of interest can be expressed with respect to the new measure:
𝔼[g(𝐗̂^Δ t_N)]=𝔼[L((𝐏̅_0,…,𝐏̅_N-1),(δ_0^Δ t(𝐗^Δ t_0),…,δ_N-1^Δ t(𝐗^Δ t_N-1)))· g(𝐗^Δ t_N)],
with the expectation on the right-hand side of (<ref>) taken with respect to the dynamics in (<ref>).
Next, we recall the connection between the optimal second moment minimizing IS parameters {δ_n^Δ t(𝐱)}_n=0,…,N-1; 𝐱∈ℕ^d and the corresponding discrete-time dynamic programming relation from <cit.>. We revisit the definition of the discrete-time value function u_Δ t(·,·) in Definition <ref>, allowing the formulation of the dynamic programming equations in Theorem <ref>. The proof and further details for Theorem <ref> are provided in <cit.>.
For a given Δ t>0, the discrete-time value function u_Δ t(·,·) is defined as the optimal (infimum) second moment for the proposed IS estimator. For time step 0 ≤ n ≤ N and state 𝐱∈ℕ^d,
u_Δ t(n,𝐱)
=inf_{δ^Δ t_k}_k=n,…,N-1∈𝒜^N-n𝔼[g^2(𝐗_N^Δ t)∏_k=n^N-1 L_k^2(𝐏̅_k,δ_k^Δ t(𝐗_k^Δ t))| 𝐗_n^Δ t=𝐱],
where 𝒜=_𝐱∈ℕ^d_j=1^J𝒜_𝐱,j∈ℝ^ℕ^d × J is the admissible set for the IS parameters, and u_Δ t(N,𝐱)=g^2(𝐱), for any 𝐱∈ℕ^d.
For 𝐱∈ℕ^d and the given step size Δ t>0, the discrete-time value function u_Δ t(n,𝐱) fulfills the dynamic programming relation:
u_Δ t(N,𝐱) =g^2(𝐱)
and for n =N-1,…,0, and 𝒜_𝐱:=_j=1^J𝒜_𝐱,j,
u_Δ t(n,𝐱) =inf_δ_n^Δ t(𝐱)∈𝒜_𝐱exp((-2∑_j=1^J a_j(𝐱)+∑_j=1^Jδ_n,j^Δ t(𝐱))Δ t)
×∑_𝐩∈ℕ^J(∏_j=1^J(Δ t ·δ_n,j^Δ t(𝐱))^p_j/p_j! (a_j(𝐱)/δ_n,j^Δ t(𝐱))^2p_j)· u_Δ t(n+1,max(0,𝐱+ ν𝐩)),
where ν=(ν_1, …,ν_J)∈ℤ^d× J.
Analytically solving the minimization problem (<ref>) is challenging due to the infinite sum. In <cit.>, the problem is solved by approximating the value function (<ref>) using a truncated Taylor expansion of the dynamic programming (<ref>). To overcome the curse of dimensionaliy, a learning-based approach for the value function was proposed. Instead, in this work, we utilize a continuous-time SOC formulation, leading to a set of coupled d-dimensional ODEs, the HJB equations (refer to Section <ref>). We deal with the curse of dimensionality issue by using a dimension reduction technique, namely the MP, as explained in Section <ref>.
§.§ Derivation of Hamilton–Jacobi–Bellman (HJB) Equations
In Corollary <ref>, the discrete-time dynamic programming relation in Theorem <ref> is replaced by its analogous continuous-time relation, resulting in a set of ODEs known as the HJB equations. The continuous-time value function ũ(·,𝐱):[0,T]→ℝ, 𝐱∈ℕ^d, is the limit of the discrete value function u_Δ t(·,𝐱) as the step size Δ t approaches zero. In addition, the IS controls δ(·,𝐱): [0,T]→𝒜_𝐱 become time-continuous curves for 𝐱∈ℕ^d.
For all 𝐱∈ℕ^d, the continuous-time value function ũ(t, 𝐱) fulfills (<ref>) for t∈ [0,T]:
ũ(T, 𝐱) =g^2(𝐱)
-dũ/dt (t, 𝐱) =inf _δ(t,𝐱) ∈𝒜_𝐱(-2 ∑_j=1^J a_j(𝐱)+∑_j=1^J δ_j(t,𝐱)) ũ(t, 𝐱)+∑_j=1^J a_j(𝐱)^2/δ_j(t,𝐱)ũ(t, max(0, 𝐱+ν_j)),
where δ_j(t,𝐱):=(δ(t,𝐱))_j.
The proof of the corollary is presented in Appendix <ref>.
If ũ(t,𝐱)>0 for all 𝐱∈ℕ^d and t∈[0,T], we can solve the minimization problem in (<ref>) in closed form, such that the optimal controls are given by
δ̃_j(t,𝐱) = a_j(𝐱)√(ũ(t, max(0, 𝐱+ν_j))/ũ(t,𝐱))
and (<ref>) simplifies to
dũ/dt (t, 𝐱) =-2∑_j=1^J a_j(𝐱)( √(ũ(t,𝐱)ũ(t,max(0,𝐱+ν_j)))-ũ(t,𝐱)).
To estimate rare event probabilities with an observable g(𝐱)=1_x_i>γ, we could encounter ũ(t,𝐱)=0 for some 𝐱∈ℕ^d; therefore, we modify (<ref>) by approximating the final condition g(𝐱) using a sigmoid:
g̃(𝐱)=1/1+exp(-b-β x_i) >0
with appropriately chosen parameters b∈ℝ and β∈ℝ. By incorporating the modified final condition, we obtain an approximate value function by solving (<ref>) using an ODE solver (e.g., from MATLAB). When using the numerical solver, we truncate the infinite state space ℕ^d using sufficiently large upper bounds. The approximated near-optimal IS controls are then expressed by (<ref>). By the truncation of the infinite state space and the approximation of the final condition g by g̃, we introduce a bias to the value function. This can impact the amount of variance reduction in the IS-MC forward run; however, the IS-MC estimator itself remains bias-free.
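A minimal Python sketch of this backward solve for a one-dimensional birth–death network on a truncated state space, using scipy's solve_ivp in the reversed time variable τ = T - t; the network rates, the sigmoid parameters, and the truncation bound are illustrative choices, not values from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    S_max, T = 60, 1.0
    c, mu = 10.0, 0.5                              # birth/death rates (illustrative)
    nus = [+1, -1]
    a = lambda x, j: c if j == 0 else mu * x       # propensities of the two reactions

    b, beta = -30.0, 2.0                           # sigmoid roughly mimicking 1_{x > 15}
    g_tilde = 1.0 / (1.0 + np.exp(-b - beta * np.arange(S_max + 1)))

    def rhs(tau, u):                               # du/dtau with tau = T - t (backward time)
        du = np.zeros_like(u)
        for x in range(S_max + 1):
            for j, nu in enumerate(nus):
                x_next = min(max(0, x + nu), S_max)          # truncation / projection
                du[x] += 2.0 * a(x, j) * (np.sqrt(u[x] * u[x_next]) - u[x])
        return du

    sol = solve_ivp(rhs, (0.0, T), g_tilde ** 2, dense_output=True)
    u_tilde = lambda t, x: sol.sol(T - t)[x]                 # approximate value function
    delta = lambda t, x, j: a(x, j) * np.sqrt(
        u_tilde(t, min(max(0, x + nus[j]), S_max)) / u_tilde(t, x))
    print(u_tilde(0.0, 0), delta(0.0, 20, 0))                # value at t=0 and one IS control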
The cost for the ODE solver scales exponentially with the dimension d of the SRNs, making this approach infeasible for high-dimensional SRNs. Section <ref> presents a dimension reduction approach for SRNs employed in Section <ref> to derive suboptimal IS controls for a lower-dimensional SRN. We later demonstrate how these controls are mapped to the full-dimensional SRN system.
In Corollary <ref> and Theorem <ref>, we present two alternative methods to express the value function (<ref>) and the IS controls. Utilizing the HJB framework, we can derive continuous controls across time. This allows any time stepping Δ t in the IS-TL forward run and eliminates the need for ad-hoc interpolations.
§ MARKOVIAN PROJECTION FOR STOCHASTIC REACTION NETWORKS
§.§ Formulation
To address the curse of dimensionality problem when deriving near-optimal IS controls, we project the SRN to a lower-dimensional network while preserving the marginal distribution of the original high-dimensional SRN system. We adapt the MP idea originally introduced in <cit.> for the setting of diffusion type stochastic differential equations to the SRNs framework. For an d-dimensional SRN state vector, 𝐗(t), we introduce a projection to a d̅-dimensional space such that 1≤d̅≪ d:
P:ℝ^d→ℝ^d̅: 𝐱↦𝐏𝐱,
where 𝐏∈ℝ^d̅× d is a given matrix. This section develops a general MP framework for arbitrary projections with d̅≥ 1. However, the choice of the projection depends on the quantity of interest. In particular, when considering rare event probabilities with an observable g(𝐱)=1_{x_i>γ}, γ∈ℝ as we do in Section <ref>, the projection operator is of the form
P(𝐱)=(0,…,0, 1, 0,…,0) 𝐱, where the single nonzero entry 1 is in the i-th position.
The projected process S(t):=P(𝐗(t)), for t ∈ [0,T], is non-Markovian. Theorem <ref> shows that a d̅ dimensional SRN, S̅(t) exists that follows the same conditional distribution as S(t) conditioned on the initial state 𝐗(0)=𝐱_0 for all t ∈ [0,T].
We let S̅(t) be a d̅-dimensional stochastic process whose dynamics are given by the following:
S̅(t)= P(𝐱_0)+∑_j=1^JY̅_j (∫_0^t a̅_j(τ,S̅(τ)) dτ) P(ν_j)_=:ν̅_j,
for t∈[0,T], where Y̅_j denotes independent unit-rate Poisson processes and a̅_j, j=1,…,J, are characterized by
a̅_j(t,s):=𝔼[a_j(𝐗(t))|P(𝐗(t))=s, 𝐗(0)=𝐱_0 ] , for 1≤ j≤ J, s∈ℕ^d̅.
Thus, S(t)|_{𝐗(0)=𝐱_0}=P(𝐗(t))|_{𝐗(0)=𝐱_0} and S̅(t)|_{𝐗(0)=𝐱_0} have the same distribution for all t∈[0,T].
The proof for Theorem <ref> is given in Appendix <ref>.
The propensities of the full-dimensional process {a_j}_j=1^J follow the mass-action kinetics in (<ref>) (i.e., a time-homogeneous function of the state), whereas the resulting propensities, a̅_j of the MP-SRN S̅ are time-dependent (see (<ref>)).
Reactions with P(ν_j)=0 do not contribute to the MP propensity in (<ref>). For reactions with P(ν_j) ≠ 0, it may occur that their corresponding projected propensity is known analytically. We denote the index set of reactions requiring an estimation of (<ref>) (e.g., via a L^2 regression as described in Section <ref>) by 𝒥_MP. This index set is described as follows:
𝒥_MP:={1≤ j ≤ J: P(ν_j)≠0 and a_j(𝐱)≠ f(P(𝐱)) for all functions f:ℝ^d̅→ℝ_(*)},
where condition (*) excludes reaction channels for which the MP propensity is only dependent on s and given in closed form by a̅_j(t,s)=f(s) for the function f.
§.§ Discrete L^2 Regression for Approximating Projected Propensities
To approximate the Markovian propensity a̅_j for j∈𝒥_MP, we reformulate (<ref>) as a minimization problem and then use discrete L^2 regression as described below.
We let V:={f:[0,T]×ℝ^d̅→ℝ: ∫_0^T𝔼[f(t,P(𝐗(t)))^2]dt<∞}. Then, the projected propensities via the MP for j∈𝒥_MP are approximated by
a̅_j(·,·) =argmin_h∈ V∫_0^T𝔼[( a_j(𝐗(t))-h(t,P(𝐗(t))))^2]dt
≈argmin_h∈ V𝔼[1/N∑_n=0^N-1( a_j(𝐗̂^Δ t_n)-h(t_n,P(𝐗̂^Δ t_n)))^2]
≈argmin_h∈ V1/M∑_m=1^M1/N∑_n=0^N-1( a_j(𝐗̂^Δ t_[m],n)-h(t_n,P(𝐗̂^Δ t_[m],n)))^2 ,
where {𝐗̂^Δ t_[m]}_m=1^M are M independent TL paths with a uniform time grid 0=t_0<t_1<…<t_N=T with step size Δ t.
To solve (<ref>), we use a discrete L^2 regression approach. For the case d̅=1, we employ a set of basis functions of V, {ϕ_p(·,·)}_p∈Λ, where Λ⊂ℕ^2 is a finite index set. In Remark <ref>, we provide more details on the choice of the basis. Consequently, the projected propensities via MP are approximated by
a̅_j(t,s)≈∑_p∈Λc_p^(j)ϕ_p(t,s), j ∈𝒥_MP
where the coefficients c_p^(j) must be derived for j∈𝒥_MP and p∈Λ.
Next, we derive the linear systems of equations, solved by {c_p^(j)}_p∈Λ from (<ref>) for j∈𝒥_MP.
For a given one-dimensional indexing of {1,…,M}×{0,…,N-1}, the corresponding design matrix D∈ℝ^MN×|Λ| is given by
D_k,p=ϕ_p(t_n,P(𝐗̂^Δ t_[m],n)), for k=(m,n)∈{1,…,M}×{0,…,N-1}, p∈Λ.
Further, we set ψ_k^(j)=a_j(𝐗̂^Δ t_[m],n) (ψ^(j)∈ℝ^MN) for k∈{1,…,M}×{0,…,N-1}, and j∈𝒥_MP.
Then, the minimization problem in (<ref>) becomes
c^(j) =argmin_{c_p}_p∈Λ1/MN∑_m=1^M∑_n=0^N-1( a_j(𝐗̂^Δ t_[m],n)-∑_p∈Λ c_pϕ(t_n,P(𝐗̂^Δ t_[m],n)))^2
=argmin_𝐜∈ℝ^#Λ(ψ^(j)-Dc)^⊤(ψ^(j)-Dc)
= argmin_𝐜∈ℝ^#Λψ^(j)^⊤ψ^(j)-2𝐜^⊤D^⊤ψ^(j) +𝐜^⊤D^⊤D𝐜_=:I(𝐜).
We minimize I(𝐜) with respect to 𝐜 by solving
∂ I(𝐜)/∂𝐜 = -2D^⊤ψ^(j) +2D^⊤D𝐜=0
and obtain the normal equation for j∈𝒥_MP:
(D ^⊤D)𝐜^(j)=D ^⊤ψ^(j).
For the case d̅=1, the normal equation with a set of polynomials {ϕ_p}_p∈Λ on ℝ^2 can be used to derive the MP propensity a̅_j for j∈𝒥_MP. We use the standard basis {ϕ_(i_1,i_2)}_(i_1,i_2)∈Λ for a two-dimensional index set Λ, where
ϕ_(i_1,i_2): ℝ^2→ℝ, (t,x) ↦ t^i_1x^i_2.
For better stability <cit.>, we use the Gram–Schmidt orthogonalization algorithm to determine an orthonormal set of functions for the empirical scalar product:
⟨ϕ_i, ϕ_j⟩_ρ, M=1/N∑_n=0^N-11/M∑_m=1^Mϕ_i(t_n, P 𝐗̂^Δ t_[m],n) ϕ_j(t_n,P 𝐗̂^Δ t_[m],n)
to find an orthonormal set of functions.
We base the empirical scalar product and the normal equation (<ref>) on the same set of TL paths, {𝐗̂^Δ t_[m]}_m=1,…,M, such that the matrix condition number becomes cond(D ^⊤D)=1 and D^⊤ D=diag(T/Δ tM,…,T/Δ tM) <cit.>.
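A minimal Python sketch of this regression step for a single MP propensity with the monomial basis {t^i_1 s^i_2}. For brevity it solves the least-squares problem directly with numpy's lstsq rather than assembling the normal equation with a Gram–Schmidt-orthonormalised basis; both yield the same fit. The training triples (t_n, P(X̂_n), a_j(X̂_n)) would come from TL paths of the full SRN; synthetic data are used here only to keep the example runnable.

    import numpy as np

    Lambda = [(i1, i2) for i1 in range(3) for i2 in range(3)]   # index set {0,1,2}^2

    def fit_mp_propensity(t, s, a_vals, basis=Lambda):
        """t, s, a_vals: arrays flattened over all paths m and time steps n."""
        D = np.column_stack([t ** i1 * s ** i2 for (i1, i2) in basis])   # design matrix
        coeff, *_ = np.linalg.lstsq(D, a_vals, rcond=None)
        return lambda tq, sq: sum(cp * tq ** i1 * sq ** i2
                                  for cp, (i1, i2) in zip(coeff, basis))

    # Synthetic illustration: pretend a_j(X(t)) depends quadratically on the projection.
    rng = np.random.default_rng(0)
    t = rng.uniform(0.0, 1.0, size=5000)
    s = rng.integers(0, 30, size=5000).astype(float)
    a_vals = 0.05 * s * (s - 1) + 0.2 * t * s + rng.normal(0.0, 0.1, size=5000)
    a_bar = fit_mp_propensity(t, s, a_vals)
    print(a_bar(0.5, 10.0))   # fitted conditional expectation, close to 4.5 + 1.0 = 5.5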
§.§ Computational Cost of Markovian Projection
The computational work to derive an MP for an SRN with J reactions, based on M TL paths with time step Δ t and an orthonormal set of polynomials (see Remark <ref>) of size #Λ, splits into three types of costs:
W_MP(#Λ,Δ t,M)≈ M· W_TL(Δ t) + W_G-S(#Λ,Δ t,M)+W_L^2(#Λ,Δ t,M),
where W_TL, W_G-S, and W_L^2 denote the computational costs to simulate a TL path, derive an orthonormal basis (as described in Remark <ref>), and derive and solve the normal equation in (<ref>), respectively. The dominant terms of these costs are as follows:
W_TL(Δ t) ≈T/Δ t· J · C_Poi,
W_G-S(#Λ,Δ t,M) ≈ M ·T/Δ t·(#Λ)^3,
W_L^2(#Λ,Δ t,M) ≈ M·T/Δ t·((#Λ)^2+#𝒥_MP·#Λ),
where C_Poi represents the cost to simulate one realization of a Poisson random variable. The main computational cost results from deriving an orthonormal basis (see Remark <ref>). A more detailed derivation of the cost terms is provided in Appendix <ref>. For many applications, such as the MP-IS approach presented in Section <ref>, the MP must be computed only once, such that the computational cost W_MP(#Λ,Δ t,M) can be regarded as an off-line cost.
§ IMPORTANCE SAMPLING FOR HIGHER-DIMENSIONAL STOCHASTIC REACTION NETWORKS VIA MARKOVIAN PROJECTION
Next, we employ MP to overcome the curse of dimensionality when deriving IS controls from solving (<ref>). Specifically, we solve the HJB equations in (<ref>) for a reduced-dimensional MP system as explained in Section <ref>. Given a suitable projection P:ℝ^d→ℝ^d̅ and a corresponding final condition g̃:ℕ^d̅→ℝ, the HJB equations (<ref>) for the MP process are
ũ_d̅(T, s) =g̃^2(s), s∈ℕ^d̅
dũ_d̅/dt(t, s) =-2∑_j=1^J a̅_j(t,s)( √(ũ_d̅(t,s)ũ_d̅(t,max(0,s+ν̅_j)))-ũ_d̅(t,s)), t∈[0,T], s∈ℕ^d̅.
For observables of the type g(𝐱)=1_{x_i>γ}, we use an MP to a (d̅=1)-dimensional process via projection (<ref>), and the final condition is approximated by a positive sigmoid (see (<ref>)).
The solution of (<ref>) is the value function ũ_d̅ of the d̅-dimensional MP process.
To obtain continuous-time IS controls for the d-dimensional SRN, we substitute the value function ũ(t,𝐱) of the full-dimensional process in (<ref>) with the value function ũ_d̅(t,P(𝐱)) of the MP-SRN:
δ̅_j(t,𝐱)=a_j(𝐱)√(ũ_d̅(t, max(0, P(𝐱+ν_j)))/ũ_d̅(t,P(𝐱))) for 𝐱∈ℕ^d, t∈[0,T].
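A minimal Python sketch of this mapping for the coordinate projection P(𝐱) = x_i used for the rare-event observables: given the full-dimensional propensities and a value function ũ_d̅(t, s) of the one-dimensional MP process (e.g., from an HJB solve on a truncated grid), it returns the J controlled rates. The toy value function in the usage lines is made up purely for illustration.

    import numpy as np

    def mp_is_controls(x, t, propensities, nu, species_i, u_bar):
        """Return the controlled rates delta_bar_j(t, x), j = 1..J, for state x at time t."""
        a = propensities(x)                          # full-dimensional propensities a_j(x)
        s = x[species_i]                             # projected state P(x)
        delta = np.empty_like(a)
        for j in range(len(a)):
            s_next = max(0.0, s + nu[species_i, j])  # projection of x + nu_j, clipped at 0
            delta[j] = a[j] * np.sqrt(u_bar(t, s_next) / u_bar(t, s))
        return delta

    # Toy usage with a made-up value function that increases with s (projected species index 2):
    u_bar_toy = lambda t, s: np.exp(0.1 * s - t)
    nu = np.array([[-1, 1, 1], [-1, 1, 0], [1, -1, -1], [0, 0, 1]], dtype=float)
    prop = lambda x: np.array([0.001 * x[0] * x[1], 0.005 * x[2], 0.01 * x[2]])
    print(mp_is_controls(np.array([100.0, 100.0, 5.0, 0.0]), 0.5, prop, nu, 2, u_bar_toy))

Reactions with P(ν_j)=0 receive δ̅_j = a_j, i.e., they are left unchanged by this mapping, consistent with the observation above that such reactions do not contribute to the MP propensities.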
In the presented approach, we map the value function of the d̅-dimensional MP process to the full-dimensional SRNs. Alternatively, one could also map the optimal controls from the d̅-dimensional MP-SRN to the full-dimensional SRNs, leading to the following controls:
δ̃^d̅_j(t,𝐱)=a̅_j(P(𝐱))√(ũ_d̅(t, max(0, P(𝐱)+P(ν_j)))/ũ_d̅(t,P(𝐱))), for 𝐱∈ℕ^d, t∈[0,T].
The numerical experiments demonstrate that this approach results in a comparable variance reduction to the approach presented in (<ref>).
In (<ref>), when utilizing ũ_d̅ as the value function for the d-dimensional control, we introduce a bias to the optimal IS controls by approximating ũ(t,𝐱) by ũ_d̅(t,P(𝐱)) for 𝐱∈ℕ^d and t∈[0,T]. For the case d̅=d, we have ũ_d̅(t,P(𝐱))=ũ(t,𝐱) and the MP produces the optimal IS control for the full-dimensional SRNs. For d̅<d, this equality does not hold, since the interaction (correlation effects) between non-projected species are not taken into account in the MP SRNs, because the MP only ensures that the marginal distributions of P(𝐗(t))|_{𝐗(0)=𝐱_0} and S̅(t)|_{𝐗(0)=𝐱_0} are identical. This can be seen in examples in which reactions occur with P(ν_j)=0. Those reactions, are not present in the MP and; thus, are not included in the IS scheme. For the extreme case, d̅=1, we expect to achieve the least variance reduction which could be already substantial and satisfactory for many examples as we show in our numerical experiments.
However, examples could exist where a projection to dimension d̅=1 is insufficient to achieve a desired variance reduction. In this case, we can adaptively choose a better projection with increased dimension d̅=1,2,… until a sufficient variance reduction is achieved. This will imply an increased computational cost in the MP and in solving the HJB equations (<ref>) for 𝐱∈ℕ^d̅. Investigating the effect of d̅ on improving the variance reduction of our approach is left for a future work.
To derive an MP-IS-MC estimator for a given uniform time grid 0=t_0≤ t_1≤…≤ t_N=T with step size Δ t, we generate IS paths using the scheme in (<ref>) with IS control parameters δ_n,j^Δ t(𝐱)=δ̅_j(t_n,𝐱), as in (<ref>), for j=1,…,J, 𝐱∈ℝ^d, n=0,…,N-1. Figure <ref> presents a schematic illustration of the entire derivation of the MP-IS-MC estimator.
This computational work consists of three cost contributions:
W_MP-IS-MC (#Λ,Δ t,M,M_fw)
≈ W_MP(#Λ,Δ t,M)+W_HJB(#Λ)+W_forward(Δ t, M_fw),
where W_MP(#Λ,Δ t,M) denotes the off-line cost to derive the MP (see (<ref>)), W_HJB(#Λ) represents the cost to solve the HJB (<ref>) for the d̅-dimensional MP-SRN, and W_forward(Δ t, M_fw) indicates the cost of deriving M_fw IS paths. The cost to solve the HJB (<ref>) W_HJB(#Λ) depends on the used solver, and the cost for the forward run has the following dominant terms:
W_forward(Δ t, M_fw)≈ M_fw·T/Δ t· (J · C_Poi+C_lik+#𝒥_MP· C_δ),
where C_δ is the cost to evaluate (<ref>).
In this work, we use the described MP for dimension reduction to derive a sub-optimal change of measure for IS, but the same MP framework can be used for other applications, such as solving the chemical master equation <cit.> or the Kolmogorov backward equations <cit.>. We intend to explore these directions in a future work.
§ NUMERICAL EXPERIMENTS AND RESULTS
Through Examples <ref> and <ref>, we demonstrate the advantages of the proposed MP-IS approach compared with the standard MC approach for rare event estimations. We numerically demonstrate that the proposed approach achieves a substantial variance reduction compared with standard MC estimators when applied to SRNs with various dimensions.
[Michaelis–Menten enzyme kinetics <cit.>]
The Michaelis-Menten enzyme kinetics are enzyme-catalyzed reactions describing the interaction of an enzyme E with a substrate S, resulting in a product P:
E+Sθ_1→ C, Cθ_2→ E+S,
Cθ_3→ E+P,
where θ = (0.001,0.005,0.01)^⊤.
We consider the initial state 𝐗_0=(E(0),S(0),C(0),P(0))^⊤=(100, 100, 0, 0)^⊤ and the final time T=1. The corresponding propensity and the stoichiometric matrix are given by
a(x)=([ θ_1 E S; θ_2 C; θ_3 C ]), ν=([ -1 1 1; -1 1 0; 1 -1 -1; 0 0 1 ]).
The observable of interest is g(𝐱)=1_{x_3>22}.
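The following Python sketch sets up the Michaelis–Menten network above as data for a TL simulation and runs a crude TL-MC estimate of P(C(T) > 22); with only 10^4 paths, most or all samples miss the event, which is exactly the regime the MP-IS estimator is designed for. The script is an illustration written for this text, not the code used for the reported experiments.

    import numpy as np

    theta = np.array([0.001, 0.005, 0.01])
    x0 = np.array([100.0, 100.0, 0.0, 0.0])          # (E, S, C, P)
    nu = np.array([[-1,  1,  1],
                   [-1,  1,  0],
                   [ 1, -1, -1],
                   [ 0,  0,  1]], dtype=float)

    def propensities(x):
        return np.array([theta[0] * x[0] * x[1], theta[1] * x[2], theta[2] * x[2]])

    def tl_final_state(rng, T=1.0, dt=2 ** -4):
        x = x0.copy()
        for _ in range(int(round(T / dt))):
            counts = rng.poisson(propensities(x) * dt)
            x = np.maximum(0.0, x + nu @ counts)
        return x

    rng = np.random.default_rng(1)
    M = 10_000
    hits = sum(tl_final_state(rng)[2] > 22 for _ in range(M))
    print(f"crude TL-MC estimate of P(C(T) > 22): {hits / M:.1e}")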
[Goutsias's model of regulated transcription <cit.>]
The model describes a transcription regulation through the following six molecules:
Protein monomer (M), Transcription factor (D),
mRNA (RNA), Unbound DNA (DNA),
DNA bound at one site (DNA· D), DNA bound at two sites (DNA· 2D).
These species interact through the following 10 reaction channels
RNA θ_1→ RNA+M, M θ_2→∅,
DNA· D θ_3→ RNA+DNA· D,
RNA θ_4→∅,
DNA+D θ_5→ DNA· D,
DNA· D θ_6→ DNA+D,
DNA· D+D θ_7→ DNA· 2D,
DNA· 2D θ_8→ DNA· D+D,
2M θ_9→ D,
D θ_10→ 2M,
where (θ_1,…,θ_10)=(0.043, 0.0007, 0.0715, 0.0039, 0.0199, 0.479, 0.000199, 8.77×10^-12, 0.083, 0.5). As the initial state, we use 𝐗_0=(M(0),D(0),RNA(0),DNA(0),DNA· D(0),DNA·2D(0))=(2,6,0,0,2,0), and the final time is T=1. We aim to estimate the rare event probability ℙ(D(T)>8).
§.§ Markovian Projection Results
Through simulations for Examples <ref> and <ref>, we numerically demonstrate that the distribution of the MP process S̅(T)|_{𝐗_0=𝐱_0} matches the conditional distribution of the projected process S(T)|_{𝐗_0=𝐱_0}=P(𝐗(T))|_{𝐗_0=𝐱_0}, as shown in Theorem <ref>. For both examples, we use an MP projection with d̅=1, using the projection given in (<ref>), where the projected species is indexed as i=3 in Example <ref> and as i=2 in Example <ref>.
The MP is based on M=10^4 TL sample paths with a step size of Δ t=2^-4 and uses the orthonormal basis of polynomials described in Remark <ref> with Λ = {0,1,2}×{0,1,2} for the L^2 regression. Figure <ref> shows the relative occurrences of states at final time T with M_fw=10^4 sample paths, comparing the TL distribution of P(𝐗(t))|_{𝐗_0=𝐱_0} and the MP estimate of S̅(T)|_{𝐗_0=𝐱_0}. We set a step size of Δ t=2^-4 for the forward runs. In both examples, the one-dimensional MP process mimics the distribution of the state of interest X_i(T) of the original SRNs. Further quantification and analyses of the MP error are left for future work. In this work, a detailed analysis of the MP error is less relevant because the MP is used as a tool to derive IS controls for the full-dimensional process, and the IS is bias-free with respect to the TL scheme.
§.§ Makovian Projection-Importance Sampling Results
For the numerical experiments, we use a six-dimensional and a four-dimensional SRNs with the observable g(𝐱)=1_{x_i>γ}, where i and γ are specified in Examples <ref> and <ref>. Figure <ref> indicates that this observable leads to a rare event probability estimation for which an MC estimate is insufficient. We use the workflow in Figure <ref> with separate simulations for various Δ t values for the MP-IS simulations. The MP is based on M=10^4 TL sample paths each. The MP-IS-MC estimator, the sample variance, and the kurtosis estimate are based on M_fw=10^6 IS sample paths.
The relative error is more relevant than the absolute error for rare event probabilities. Therefore, we display the squared coefficient of variation <cit.> in the simulations results, which is given by the following for a random variable X:
Var_rel[X]=Var[X]/𝔼[X]^2.
The kurtosis is a good indicator of the robustness of the variance estimator (see <cit.> for the connection between the sample variance and kurtosis).
Figure <ref> shows the simulation results for the four-dimensional Example <ref> for different step sizes Δ t. The quantity of interest is a rare event probability with a magnitude of 10^-5. For a step size of Δ t=2^-10, the proposed MP-IS approach reduces the squared coefficient of variation by a factor of 10^6 compared to the standard MC-TL approach. The last plot in Figure <ref> indicates that the kurtosis of the proposed MP-IS approach is below the kurtosis for standard TL for all observed step sizes Δ t, confirming that the proposed approach results in a robust variance estimator.
The second application of the proposed IS approach is the six-dimensional Example <ref>. Figure <ref> shows that this rare event probability has a magnitude of 10^-3. We observe that, for Δ t ≤ 2^-3, the squared coefficient of variation of the proposed MP-IS approach is reduced compared to the standard TL-MC approach. For a step size of Δ t=2^-10, this is a variance reduction of a factor of ≈ 500. Note that this example achieves less variance reduction than Example <ref> due to a less rare quantity of interest. For most step sizes Δ t, the kurtosis of the proposed IS approach is moderately increased compared to the standard TL estimator, with decreasing kurtosis for smaller Δ t. This outcome indicates a potentially unstable variance estimator for coarse time steps of Δ t > 2^-7. For finer time steps, we expect a robust variance estimator.
§ CONCLUSION
In conclusion, this work presented an efficient IS scheme for estimating rare event probabilities for SRNs. We utilized a class of parameterized IS measure changes originally introduced in <cit.>, for which near-optimal IS controls can be derived through a SOC formulation. We showed that the value function associated with this formulation can be expressed as a solution of a set of coupled ODEs, the HJB equations. One challenge encountered in solving the HJB equations is the curse of dimensionality, arising from the high-dimensional SRN. To address this issue, we introduced a dimension reduction approach for the setting of SRNs, namely MP. Then, we used a discrete L^2 regression to approximate the propensity and the stoichiometric vector of the MP-SRN. We demonstrated how the MP-SRN can be used for solving a significantly lower-dimensional HJB system, and how the resulting parameters are then mapped back to the full-dimensional SRNs to derive near-optimal IS controls. Our numerical simulations showed substantial variance reduction for the MP-IS-MC estimator compared to the standard MC-TL estimator for rare event probability estimations.
Acknowledgments
This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-2019-CRG8-4033. This work was performed as part of the Helmholtz School for Data Science in Life, Earth and Energy (HDS-LEE) and received funding from the Helmholtz Association of German Research Centres and the Alexander von Humboldt Foundation. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
§ PROOF FOR COROLLARY <REF>
For 𝐱∈ℕ^d, we define ũ(·, 𝐱;Δ t) as the continuous smooth extension of u_Δ t(·, 𝐱) (defined in (<ref>)) on [0,T]. Consequently, we denote the continuous-time IS controls by δ(·,𝐱): [0,T]→𝒜_𝐱 for 𝐱∈ℕ^d.
Then, the Taylor expansion of ũ(t+Δ t, 𝐱;Δ t) in t results in the following:
ũ(t+Δ t, 𝐱;Δ t)=ũ(t, 𝐱;Δ t)+Δ t ∂_t ũ(t, 𝐱;Δ t)+𝒪(Δ t^2), 𝐱∈ℕ^d.
By the definition of the value function <ref>, the final condition is given by
ũ(T, 𝐱;Δ t)=g^2(𝐱), 𝐱∈ℕ^d.
For t=T-Δ t, …, 0, we apply to (<ref>) from Theorem <ref> a Taylor expansion around Δ t=0 to the exponential term and (<ref>) to ũ(t+Δ t, 𝐱;Δ t):
ũ(t, 𝐱;Δ t) =
inf_δ(t,𝐱)∈𝒜_𝐱exp((-2∑_j=1^J a_j(𝐱)+∑_j=1^Jδ_j(t,𝐱))Δ t)
×∑_𝐩∈ℕ^J(∏_j=1^J(Δ t ·δ_j(t,𝐱))^p_j/p_j!(a_j(𝐱)/δ_j(t,𝐱))^2p_j)·ũ(t+Δ t,max(0,𝐱+ ν𝐩);Δ t)
=
inf_δ(t,𝐱) ∈𝒜_𝐱(1+ (-2 ∑_j=1^J a_j(𝐱)+∑_j=1^J δ_ j(t,𝐱))Δ t +𝒪(Δ t^2))
×[∑_𝐩∈ℕ^J(∏_j=1^J(Δ t ·δ_ j(t,𝐱))^p_j/p_j!(a_j(𝐱)/δ_ j(t,𝐱))^2p_j).
.·( ũ(t, max( 0,𝐱+ν𝐩);Δ t)+Δ t ∂_t ũ(t, max( 0,𝐱+ν𝐩);Δ t )+𝒪(Δ t^2))]
(*)⟹- ∂_t ũ(t, 𝐱;Δ t) =inf _δ(t,𝐱) ∈𝒜_𝐱(-2 ∑_j=1^J a_j(𝐱)+∑_j=1^J δ_ j(t,𝐱)) ũ(t, 𝐱;Δ t)+𝒪(Δ t)
+(1+𝒪(Δ t)) [∑_𝐩≠0Δ t^|𝐩|-1(∏_j=1^J a_j(𝐱)^2 p_j/p_j ! ·( δ_ j(t,𝐱))^p_j).
.·( ũ(t, max( 0,𝐱+ν𝐩);Δ t) +𝒪(Δ t) )],
where |𝐩|:=∑_j=1^Jp_j, and δ_ j(t,𝐱):=(δ(t,𝐱))_j. In (*), we split the sum, rearrange the terms, divide by Δ t, and collect the terms of 𝒪(Δ t).
The limit for Δ t → 0 in (<ref>) is denoted by ũ(t,𝐱), leading to (<ref>) for 0<t<T and 𝐱∈ℕ^d.
§ PROOF FOR THEOREM <REF>
We let f:ℝ^d̅→ℝ be an arbitrary bounded continuous function and S̅ be defined in (<ref>). We consider the following weak approximation error:
ε_T:=𝔼[f(P(𝐗(T))) |𝐗(0)=𝐱_0 ]-𝔼[f(S̅(T)) |S̅(0)=P(𝐱_0)].
For t ∈ [0,T], we define the cost to go function as
v̅(t,s):=𝔼[ f(S̅(T))|S̅(t)=s].
Then, we can represent the weak error in (<ref>) as follows:
ε_T=𝔼[v̅(T,P(𝐗(T)))|𝐗(0)=𝐱_0]-v̅(0,P(𝐱_0)).
Using Dynkin's formula <cit.> and (<ref>), we can express the first term in (<ref>) as follows:
𝔼 [v̅(T,P(𝐗(T)))|𝐗(0)=𝐱_0]
=v̅(0,P(𝐱_0))+∫_0^T 𝔼[∂_tv̅(τ,P(𝐗(τ))).
.+∑_j=1^J a_j(𝐗(τ))( v̅(τ,P(𝐗(τ)+ν_j))-v̅(τ,P(𝐗(τ)))) |𝐗(0)=𝐱_0]dτ.
The Kolmogorov backward equations of S̅ (<cit.> ) are given as
∂_τv̅(τ,s)=-∑_j=1^J a̅_j(τ,s)( v̅(τ,s+ν̅_j)-v̅(τ,s)), 𝐬∈ℕ^d̅,
implying that the weak error simplifies to
ε_T
=∑_j=1^J ∫_0^T 𝔼[a_j(𝐗(τ))v̅(τ,P(𝐗(τ)+ν_j))-a̅_j(τ,P(𝐗(τ))) v̅(τ,P(𝐗(τ))+ν̅_j)|𝐗(0)=𝐱_0]
-𝔼[(a_j(𝐗(τ))-a̅_j(τ,P(𝐗(τ))))v̅(τ,P(𝐗(τ))) |𝐗(0)=𝐱_0]dτ.
Next, we choose a̅_j and ν̅_j for j=1,…,J such that ε_T=0 for any function f. We consider the second term in (<ref>) and use the tower property to obtain
𝔼[(a_j(𝐗(τ))-a̅_j(τ,P(𝐗(τ))))v̅(τ,P(𝐗(τ))) | 𝐗(0)=𝐱_0]
=𝔼[ 𝔼[ (a_j(𝐗(τ))-a̅_j(τ,P(𝐗(τ))))v̅(τ,P(𝐗(τ)))| P(𝐗(τ)), 𝐗(0)=𝐱_0]| 𝐗(0)=𝐱_0]
=𝔼[( 𝔼[a_j(𝐗(τ))| P(𝐗(τ)), 𝐗(0)=𝐱_0]-a̅_j(τ,P(𝐗(τ))) )v̅(τ,P(𝐗(τ)))|𝐗(0)=𝐱_0].
To ensure that (<ref>)=0 for any function f, we obtain the following:
a̅_j(τ,P(𝐗(τ)))= 𝔼[a_j(𝐗(τ))| P(𝐗(τ)), 𝐗(0)=𝐱_0], j=1,…,J.
Applying (<ref>) and the tower property for the first term, we derive
𝔼[a_j(𝐗(τ))v̅(τ,P(𝐗(τ)+ν_j))-a̅_j(τ,P(𝐗(τ))) v̅(τ,P(𝐗(τ))+ν̅_j)|𝐗(0)=𝐱_0]
=𝔼[ 𝔼[a_j(𝐗(τ))v̅(τ,P(𝐗(τ)+ν_j))..
..-a̅_j(τ,P(𝐗(τ))) v̅(τ,P(𝐗(τ))+ν̅_j)| P(𝐗(τ)), 𝐗(0)=𝐱_0]|𝐗(0)=𝐱_0]
=𝔼[ 𝔼[a_j(𝐗(τ))|P(𝐗(τ)), 𝐗(0)=𝐱_0]v̅(τ,P(𝐗(τ))+P(ν_j))).
.-a̅_j(τ,P(𝐗(τ))) v̅(τ,P(𝐗(τ))+ν̅_j)|𝐗(0)=𝐱_0]
=𝔼[ 𝔼[a_j(𝐗(τ))|P(𝐗(τ)), 𝐗(0)=𝐱_0] .
·.(v̅(τ,P(𝐗(τ))+P(ν_j))-v̅(τ,P(𝐗(τ))+ν̅_j) )| 𝐗(0)=𝐱_0].
Moreover, Equation (<ref>) becomes zero for any function f using
ν̅_j=P(ν_j), j=1,…,J.
With this choice for a̅_j and ν̅_j, we derive ε_T=0. The derivation holds for arbitrary bounded and smooth functions f, for all fixed times T; thus, the process S(t)=P(𝐗(t)) has the same conditional distribution as S̅(t) conditioned on the initial value 𝐗(0)=𝐱_0.
§ MARKOVIAN PROJECTION COST DERIVATION
We present details on the computational cost of MP, as provided in (<ref>):
* The number of operations to generate one TL paths is given by
W_TL(Δ t)=T/Δ t· (C_prop+J· C_Poi+d(J+2)),
where C_prop is the cost of one evaluation of the propensity function (<ref>). The dominant cost in (<ref>) is C_Poi (the cost of generating a Poisson random variable).
* The number of operations for the Gram–Schmidt algorithm, as described in Remark <ref>, is given by
W_G-S(#Λ,Δ t,M)=#Λ·(C_inner+#Λ+1)+(#Λ-1)#Λ/2(2#Λ+C_inner),
where C_inner is the cost of the evaluation of the empirical inner product (<ref>) given by
C_inner=T/Δ t· M (2+2C_pol)+3 = 𝒪(T/Δ t· M ·#Λ).
The cost C_pol in (<ref>) is the computational cost for one evaluation of a polynomial in the space <ϕ_p>_p∈Λ, which is 𝒪(#Λ).
In the simulations, we apply the setting #Λ≪T/Δ t· M (see Section <ref>, using the parameter #Λ=9, M=10^4, T/Δ t=2^4). Therefore, the dominant cost in (<ref>) is 𝒪(M ·T/Δ t·(#Λ)^3).
* The cost W_L^2(#Λ,Δ t,M) is split into two: the cost to (1) derive and (2) solve the normal equation (<ref>). The number of operations to derive the design matrix D is M·T/Δ t·#Λ· C_pol, and the cost to derive one right-hand side (Ψ^(j))_j∈𝒥_MP is M·T/Δ t· C_prop. In (<ref>), the cost for the matrix product D^⊤ D is 𝒪(#Λ^2· M·T/Δ t), and the cost for #𝒥_MP matrix-vector products is 𝒪(#𝒥_MP·#Λ· M·T/Δ t). Finally, solving (<ref>) costs 𝒪(#𝒥_MP·#Λ^3), which is a nondominant term under the given setting, #Λ≪T/Δ t· M.
|
http://arxiv.org/abs/2306.06179v1
|
20230609180706
|
Hidden symmetries of ReLU networks
|
[
"J. Elisenda Grigsby",
"Kathryn Lindsey",
"David Rolnick"
] |
cs.LG
|
[
"cs.LG",
"math.CO",
"math.GT",
"57R70, 57Q99, 52B70, 52C35",
"I.2.6"
] |
Hidden Symmetries of ReLU Networks
J. Elisenda Grigsby, Kathryn Lindsey, David Rolnick (equal contribution)
Department of Mathematics, Boston College, Boston, USA
School of Computer Science, McGill University, Montreal, Canada
Mila – Quebec AI Institute, Montreal, Canada
Correspondence: J. Elisenda Grigsby, [email protected]
Keywords: deep learning theory, functional dimension, parameter space, linear region, activation pattern, bent hyperplane arrangement
The parameter space for any fixed architecture of feedforward ReLU neural networks serves
as a proxy during training for the associated class of functions – but how faithful is this representation? It is known that many different parameter settings θ can determine the same function f. Moreover, the degree of this redundancy is inhomogeneous: for some networks, the only symmetries are permutation of neurons in a layer and positive scaling of parameters at a neuron, while other networks admit additional hidden symmetries. In this work, we prove that, for any network architecture where no layer is narrower than the input, there exist parameter settings with no hidden symmetries. We also describe a number of mechanisms through which hidden symmetries can arise, and empirically approximate the functional dimension of different network architectures at initialization. These experiments indicate that the probability that a network has no hidden symmetries decreases towards 0 as depth increases, while increasing towards 1 as width and input dimension increase.
§ INTRODUCTION
The success of deep learning relies upon the effectiveness of neural networks in expressing a wide variety of functions. However, it is generally impractical to explicitly write down the function computed by a network, so networks of a given architecture are described and learned via parameter vectors (encompassing weights and biases). The space of parameter vectors serves as a convenient proxy for the space of functions represented by a given network architecture, but it is an imperfect proxy since it is possible for two different parameter vectors to map to the same function.
Indeed, for any fully connected neural network with ReLU activation, it has been observed that the following transformations to the parameters are symmetries – i.e., they do not change the function computed by the network <cit.>:
* Permutation (P). Reordering the neurons in any hidden layer, along with the corresponding permutation of the weights and biases associated with them,
* Scaling (S). For any neuron in any hidden layer, multiplying the incoming weights and the bias by any c > 0, while dividing the outgoing weights by c.
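A small numpy sketch, written for this text rather than taken from the paper, that checks numerically that the permutation (P) and scaling (S) operations above leave the function of a randomly initialized two-layer ReLU network unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    n0, n1, n2 = 3, 5, 2
    W1, b1 = rng.normal(size=(n1, n0)), rng.normal(size=n1)
    W2, b2 = rng.normal(size=(n2, n1)), rng.normal(size=n2)

    def net(x, W1, b1, W2, b2):
        return W2 @ np.maximum(0.0, W1 @ x + b1) + b2

    perm = rng.permutation(n1)                      # permutation of the hidden neurons
    c = rng.uniform(0.5, 2.0, size=n1)              # positive scaling factors
    W1p, b1p = (c[:, None] * W1)[perm], (c * b1)[perm]   # scale incoming weights/bias, then permute
    W2p = (W2 / c[None, :])[:, perm]                     # divide outgoing weights, then permute

    x = rng.normal(size=n0)
    print(np.allclose(net(x, W1, b1, W2, b2), net(x, W1p, b1p, W2p, b2)))   # -> True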
Such symmetries can have important implications for gradient-based learning algorithms that operate on parameters. Many authors have considered methods to optimize neural networks accounting for scaling symmetries (see e.g. <cit.>). While networks trained on the same data from different initializations are far apart in parameter space, they express similar functions; recent work suggests that such networks may in fact be close in parameter space if one accounts for permutation symmetries (see e.g. <cit.>).
It remains unknown, however, in what cases permutation and scaling are the only symmetries admitted by the parameters of a neural network, and how often there are other hidden symmetries (formalization in Definition <ref>). <cit.> prove that under certain conditions, no hidden symmetries exist, and indeed that under these conditions it is possible to reverse-engineer a network's parameters up to permutation and scaling. <cit.> prove that for all architectures with non-increasing widths, there exist parameter settings with no hidden symmetries. Work by <cit.> on the functional dimension of networks suggests that a wide variety of hidden symmetries may exist depending on the parameter setting.
Our key results in this paper are as follows:
* We prove (Theorem <ref>) that if all layers in a fully connected ReLU network are at least as wide as the input layer, then there exists some setting of the parameters such that the network has no hidden symmetries. Indeed, we show that a positive-measure subset of parameter space admits no hidden symmetries.
* We describe four mechanisms through which hidden symmetries can arise (Subsection <ref>). In particular, we prove (Proposition <ref>) that if the image of the domain in a hidden layer is contained in a subspace of positive codimension, then the hyperplanes associated to the neurons of the next layer map are not uniquely determined.
* We experimentally estimate the functional dimension of randomly initialized network parameter settings. Our results suggest that the probability that a network has no hidden symmetries decreases with depth, but increases as input dimension and width increase together.
§ RELATED WORK
Several important lines of work have considered the symmetries of the parametric representations of deep ReLU networks and their implications for learning. One focus area has been in designing optimization methods for neural networks that are invariant to scaling symmetries at individual neurons. Approaches for achieving this goal include path normalization <cit.>, manifold optimization <cit.>, proceeding in a different vector space <cit.>, and projection onto a normalized manifold <cit.>.
Another fruitful direction of work has been in understanding how permutation symmetries in parameter space affect connectivity of the loss landscape. <cit.> consider when different parameter permutations of a trained network are connected via piecewise linear paths in parameter space with low loss. <cit.> show that linearly interpolating between different permutations of a network leads to flat regions of the loss landscape. Several recent works have shown that, if permutation symmetries are taken into account, then it is possible to interpolate between networks trained from different initializations, while maintaining a low loss barrier <cit.>.
In <cit.>, the authors define and study the moduli space of neural network functions using quiver representation theory. This theory provides a framework for extracting global symmetries of the parameter space of a network architecture from symmetries of the computational graph and the activation functions involved (see also <cit.>). <cit.> build on these ideas to define neural teleportation algorithms aimed at using symmetries in the loss landscape to improve the efficiency of gradient descent in finding a minimal-loss solution. In <cit.> the authors argue that symmetries in the loss landscape have associated conserved quantities that impact training dynamics.
A number of works have explicitly considered which symmetries are admitted by different ReLU networks.
A number of authors consider the relationship between the parameters of a ReLU network and the geometry of its bent hyperplane arrangement (aka fold set); <cit.>, <cit.>, and <cit.> use these properties to reverse-engineer the parameters of certain networks up to permutation and scaling symmetries, and <cit.> proves that for certain architectures there exist parameter settings without hidden symmetries.
In particular, in <cit.>, the authors provide a geometric condition on the bent hyperplane arrangement of a parameter ensuring that the parameters can be reverse-engineered up to permutation and scaling, and hence admit no hidden symmetries. It follows nearly immediately that all depth-2 networks, and a positive-measure subset of parameters for any depth-3 architecture, have no hidden symmetries. In <cit.>, the authors prove that a positive measure subset of parameters in every non-widening (n_0 ≥ n_1 ≥…≥ n_d) architecture has no hidden symmetries. In the present work, we prove the complementary result that a positive measure subset of parameters in every architecture whose hidden layers are at least as wide as the input layer (that is, n_0 ≤ n_ℓ for all ℓ < d) has no hidden symmetries. An example of a family of architectures for which the question of the existence of parameters without hidden symmetries remains unresolved after the present work is an architecture of the form (n_0, n_1, n_2, n_3, n_4) with n_0 < n_1 and n_2 < n_0.
<cit.> study the functional dimension of a network parameter setting – the dimension of the space of functions that can be achieved by infinitesimally perturbing the parameters – proving an upper bound on functional dimension that we conjecture is achieved for almost all parameter settings without hidden symmetries (cf. Lemma <ref>).
§ NOTATION AND BACKGROUND
We consider fully connected neural networks with ReLU activation, denoting by (n_0, …, n_d) the architecture with input dimension n_0, hidden layer widths n_1,n_2,…,n_d-1, and output dimension n_d.
Formally, let σ: ℝ^n →ℝ^n denote the function that applies the activation function ReLU(x) := max{0,x} component-wise. For an architecture (n_0, …, n_d), we define a parameter space Ω := ℝ^D, where a parameter θ := (W^1, b^1, …, W^d-1, b^d-1, W^d) ∈Ω consists of weight matrices W^i ∈ℝ^n_i × n_i-1 for i = 1, …, d and bias vectors b^i ∈ℝ^n_i for i = 1, …, d-1. Accordingly, D := ∑_i=1^d n_i(n_i-1 + 1) - n_d. From a parameter θ we define a neural network function:
F_θ: ℝ^n_0 \xrightarrow{F^1} ℝ^n_1 \xrightarrow{F^2} … \xrightarrow{F^d} ℝ^n_d,
with layer maps given by:
F^i(x) := σ(W^i x + b^i) for 1 ≤ i < d, and F^d(x) := W^d x.
Note that for any θ∈Ω, F_θ is a finite piecewise-linear function – that is, a continuous function for which the domain may be decomposed as the union of finitely many closed, convex pieces, on each of which the function is affine.
Following notation in Definition 4 of <cit.>, let F_(ℓ) := F^ℓ∘…∘ F^1 denote the composition of layer maps from the domain, ending with the ℓth layer map, and let F^(ℓ) := F^d ∘…∘ F^ℓ denote the composition of the layer maps ending at the codomain, beginning with the ℓth layer map. In particular, F_θ = F^(ℓ+1)∘ F_(ℓ).
We refer to the components of F_(ℓ) as the neurons in the ℓth layer. The pre-activation map z_(ℓ),i: ℝ^n_0→ℝ associated to the ith neuron in the ℓth layer is given by:
z_(ℓ),i(x) = π_i(W^ℓ(F_(ℓ-1)(x)) + b^ℓ),
where π_i: ℝ^n_ℓ→ℝ denotes the projection onto the ith component.
Following <cit.>, we refer to the zero-set of the pre-activation map for the ith neuron in the ℓth layer as its associated bent hyperplane, Ĥ^ℓ_i := z_(ℓ),i^-1{0}.
The following notions from <cit.> (see also <cit.>, <cit.>, and <cit.>), will play a crucial role in the proofs of our results. A (ternary) activation pattern (aka neural code or sign sequence) for a network architecture (n_0, …, n_d) is an N–tuple s ∈{-1,0,+1}^N of signs for N = ∑_i=1^d n_i (Def. <ref>). Each activation pattern determines a (frequently empty) subset of ℝ^n_0 called its associated activation region. Informally, an activation region is the collection of points x for which the pre-activation sign of a non-input neuron matches the sign of the corresponding component of s (Def. <ref>, or Def. 1 of <cit.> and Def. 13 of <cit.>.) A linear region of a finite piecewise linear function is a maximal connected set
on which the function is affine-linear.
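As a concrete illustration (a toy NumPy sketch with a hypothetical architecture and random parameters, not tied to any example in this paper), the ternary activation pattern of an input point can be read off from the signs of the pre-activations layer by layer; the number of distinct patterns with all nonzero entries observed on a random sample gives a crude lower bound on the number of non-empty activation regions.

import numpy as np

rng = np.random.default_rng(1)
widths = (2, 5, 5, 1)  # a toy architecture (n_0, n_1, n_2, n_3)

# Hidden layer maps carry weights and biases; the final layer map is linear.
hidden = [(rng.normal(size=(m, n)), rng.normal(size=m))
          for n, m in zip(widths[:-2], widths[1:-1])]
W_out = rng.normal(size=(widths[-1], widths[-2]))

def ternary_pattern(x):
    """Return the ternary labeling (s^1, ..., s^d) of the input point x."""
    signs, h = [], x
    for W, b in hidden:                     # hidden layer maps F^1, ..., F^{d-1}
        z = W @ h + b                       # pre-activation outputs of this layer
        signs.append(tuple(np.sign(z).astype(int)))
        h = np.maximum(z, 0.0)
    z_out = W_out @ h                       # final (linear) layer map F^d
    signs.append(tuple(np.sign(z_out).astype(int)))
    return tuple(signs)

samples = (rng.uniform(-5, 5, size=widths[0]) for _ in range(20000))
patterns = {ternary_pattern(x) for x in samples}
print(f"{len(patterns)} distinct activation patterns observed on the sample")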
A ReLU network map F_θ: ℝ^n_0→ℝ^n_d for an architecture (n_0, …, n_d) is said to satisfy the Linear Regions Assumption (LRA) (Def. <ref>) if each linear region is the closure of a single non-empty activation region corresponding to an activation pattern with all nonzero entries.
<cit.> proved that for almost all parameters in any fixed architecture (n_0, …, n_d), the bent hyperplane (zero set of the pre-activation output) associated to a non-input neuron has codimension 1 (i.e., dimension n_0 -1) in the domain.[Note that it may also be empty.]
Moreover, it is proved in <cit.> that for almost all parameters in any fixed architecture (n_0, …, n_d), the intersection of k bent hyperplanes has codimension k
in the domain. Following <cit.>, we shall call a network whose bent hyperplanes satisfy this enhanced condition supertransversal.[The formal definition of supertransversality, given in Definition <ref>, is stronger than what is stated here, but implies it.]
It is proved in Theorem 2 of <cit.> that if a supertransversal ReLU network map F_θ satisfies LRA,[Note that <cit.> do not need the LRA to be satisfied on the entire domain–only on a relevant subset for their algorithm. See Section <ref> in the Appendix.] and each pair of bent hyperplanes associated to each pair of neurons in each pair of adjacent layers has non-empty intersection, then the network admits no hidden symmetries. Indeed, the authors detail an algorithm allowing the parameters to be extracted from the local geometry of the intersections, up to permutation and scaling.
Accordingly, we will say that a network map F_θ associated to a parameter θ∈Ω satisfies the transverse pairwise-intersection condition, abbreviated TPIC, if its associated bent hyperplane arrangement is supertransversal, and each pair of bent hyperplanes associated to each pair of neurons in each pair of adjacent layers has non-empty intersection.
See Figure <ref>.
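For intuition, TPIC can be probed empirically when n_0 = 2 (a toy NumPy check with random parameters of our own choosing, not one of the paper's constructions): the first-layer bent hyperplanes are ordinary lines, and a sign change of the second-layer pre-activation z_(2),j along the line H^1_i certifies, by continuity, that Ĥ^1_i ∩Ĥ^2_j is non-empty. The absence of a sign change on the sampled range is only evidence, not a proof, that the intersection is empty.

import numpy as np

rng = np.random.default_rng(2)
n0, n1, n2 = 2, 4, 3
W1, b1 = rng.normal(size=(n1, n0)), rng.normal(size=n1)
W2, b2 = rng.normal(size=(n2, n1)), rng.normal(size=n2)

def z2(x):
    """Pre-activation outputs of the second layer at the input x."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

ts = np.linspace(-100.0, 100.0, 20001)
for i in range(n1):
    w, b = W1[i], b1[i]
    p0 = -b * w / (w @ w)                              # a point on the line H^1_i
    v = np.array([-w[1], w[0]]) / np.linalg.norm(w)    # a direction spanning H^1_i
    vals = np.array([z2(p0 + t * v) for t in ts])      # z_(2),j along the line, for all j
    for j in range(n2):
        meets = np.any(np.sign(vals[1:, j]) != np.sign(vals[:-1, j]))
        status = "intersection found" if meets else "none detected on this range"
        print(f"pair (i={i}, j={j}): {status}")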
§ MAIN RESULT
Let (n_0, …, n_d) be a feedforward ReLU network architecture satisfying (n_0 = k) ≤ n_ℓ for all ℓ, and let Ω denote its parameter space. A positive-measure subset of Ω has no hidden symmetries.
Proof Sketch:
A more explicit version of Theorem 2 of <cit.>, stated in Lemma <ref>, tells us that any network map F_θ satisfying TPIC and LRA
on a neighborhood of the intersections admits no hidden symmetries.
By Proposition <ref>, TPIC is an open condition. That is, any parameter satisfying TPIC has an open neighborhood of parameters also satisfying TPIC. Lemma <ref> furthermore tells us that any parameter satisfying LRA in a neighborhood of the intersections has an open neighborhood on which LRA is satisfied in a neighborhood of the intersections.
We proceed by induction on the depth, d. When d=1, there is nothing to prove, so our true base case is d=2. In this case, we need only show that we can choose a combinatorially stable parameter θ∈Ω that satisfies (i) supertransversality, (ii) each bent hyperplane from ℓ = 2 has non-empty intersection with each hyperplane from ℓ = 1, and (iii) LRA on a neighborhood of the pairwise intersections.
Since the set of parameters θ satisfying (i) and (iii) has full measure in Ω, it is routine – though technical – to guarantee that these conditions are satisfied once we find a single parameter satisfying (ii). Details are given in the appendix.
To arrange that each bent hyperplane from ℓ=2 has non-empty intersection with each hyperplane from ℓ = 1, we make use of so-called positive-axis hyperplanes. Generically, an affine hyperplane intersects each coordinate axis of ℝ^n on either the positive or negative side. A positive-axis hyperplane intersects all coordinate axes on the positive side. Equivalently, a positive-axis hyperplane is describable as the zero set of an affine-linear equation with positive weights and negative bias:
H := {x⃗∈ℝ^n | w⃗·x⃗ + b = 0}
for w⃗ = (w_1, …, w_n) satisfying w_i > 0 for all i and b < 0.
The important property of a positive-axis hyperplane is that it has non-empty intersection with every origin-based ray contained in the non-negative orthant, 𝕆^≥ 0⊆ℝ^n: indeed, for a ray {tv⃗ | t ≥ 0} with v⃗∈𝕆^≥ 0∖{0}, we have w⃗·v⃗ > 0, so the ray meets H at the parameter value t = -b/(w⃗·v⃗) > 0.
We now use the fact (Lemma <ref>) that for almost all parameters θ, the image under F^1 of almost every unbounded ray in ℝ^n_0 is a ray in 𝕆^≥ 0⊆ℝ^n_1 (not necessarily based at the origin). Since every affine hyperplane in ℝ^n_0 contains an (n_0 - 2)–dimensional sphere of unbounded rays, we can ensure, by perturbing the parameters if necessary, that the image of every hyperplane H^1 ⊆ℝ^n_0 associated to F^1 has non-empty intersection with any given positive-axis hyperplane H^2 ⊆ℝ^n_1 with sufficiently high bias. We then apply the following lemma, with G=F^1, H = F^2, to each pair S = F^1(H^1) for H^1 ⊆ℝ^n_0 a hyperplane associated to the first layer map F^1 and S' = H^2 ⊆ℝ^n_1 a sufficiently high bias positive-axis hyperplane associated to the second layer map F^2 to ensure that the bent hyperplanes in the domain, ℝ^n_0, have non-empty pairwise intersection, as desired.
Let G: A → B and H: B → C be functions, let F= H ∘ G be their composition, and let S, S' ⊆ B be subsets of the intermediate domain. Then G^-1(S) ∩ G^-1(S') is non-empty iff G(A) ∩ S ∩ S' is non-empty.
Immediate from the fact that
G^-1(S) ∩ G^-1(S') = G^-1(G(A) ∩ S ∩ S').
The inductive step in the construction of a combinatorially stable depth d network from a combinatorially stable depth d-1 network satisfying TPIC is more intricate, but the key idea is to notice that adding a layer to the network preserves all previous bent hyperplanes and their intersections, so all that is needed is to choose parameters for the final layer ensuring that the new bent hyperplanes associated to the final layer have non-empty pairwise intersection with the bent hyperplanes from the penultimate layer.
This is where we will need to use one more key fact about the images of activation regions under ReLU neural network maps, which, when combined with Lemma <ref> above, will allow us to conclude that certain points of intersection between the images of hyperplanes in the penultimate layer and hyperplanes associated to the final layer can be pulled back to obtain points of intersection between the corresponding bent hyperplanes in the domain:
Let (n_0, …, n_d) be a ReLU neural network architecture with (n_0 = k) ≤ n_ℓ for all ℓ. For almost all parameters θ∈Ω, if C ⊆ℝ^k is an activation region of F_θ with activation pattern s_C = (s^1_C, …, s^d_C), then F_θ(C) is a polyhedral set of dimension min{dim(s^1_C), …, dim(s^d_C), k}.
Here, dim(s^ℓ_C) refers to the number of +1's in the binary tuple that forms the activation pattern s^ℓ_C associated to the output neurons of the ℓth layer map, F^ℓ, on the activation region C (cf. Section <ref> in the Appendix). The above proposition has the following useful corollary, which allows us to conclude that any points of intersection between (bent) hyperplanes in the penultimate layer can be pulled back faithfully to points of intersection between the corresponding bent hyperplanes in the domain:
Let (n_0, …, n_d) be a ReLU network architecture and θ∈Ω as above. If C is an activation region of F_θ with activation pattern satisfying dim(s^ℓ_C) = k for all ℓ, then F_θ restricted to the interior of C is a homeomorphism onto its image.
The linear regions assumption is satisfied for this construction because sufficiently many neurons from previous layers are active, which implies that we can distinguish activation regions (cf. Lemma 8 of <cit.>). The stability of this construction on a closed subset of the domain containing the pairwise intersections (Proposition <ref>) follows from genericity and supertransversality, which allows us to find a positive measure open neighborhood of θ that also admits no hidden symmetries.
See Figure <ref> for an illustration of what a bent hyperplane arrangement produced by our construction looks like for architecture (n_0, …, n_3) = (2,5,3,3).
§ MECHANISMS BY WHICH HIDDEN SYMMETRIES ARISE
For any feedforward ReLU architecture (n_0, …, n_d) with at least one hidden layer (i.e. d ≥ 2), the set of parameters admitting hidden symmetries has positive measure (<cit.>).
There are numerous mechanisms by which hidden symmetries can arise. We give a partial list below.
* A stably unactivated neuron (Def. <ref>).
The positive half-space in ℝ^n_ℓ associated to a neuron of the layer map F^ℓ, for
1 < ℓ < d, could have empty intersection with F_(ℓ-1)(ℝ^n_0), the image of ℝ^n_0 under the earlier layer maps. If this intersection remains empty under small perturbations of the parameters defining this neuron (while keeping the other neurons fixed), such perturbations will not alter the function.
In particular, the image of ℝ^n_0 in any hidden layer ℝ^n_ℓ is contained in the closed positive orthant,
so a neuron whose associated half-space has (stably) empty intersection with the positive orthant results in a hidden symmetry. See Theorem 7.3 and Lemma 7.4 of <cit.> for a probabilistic lower bound on this phenomenon, and Figure <ref> for an illustration.
* A pair of neurons in consecutive layers that are never co-active. As noted in <cit.>, it is possible for two neurons in adjacent layers of a network to never be simultaneously active (cf. Definition <ref>). In such a case, the weight between these neurons is generically able to be perturbed without changing the function computed by the network. This is because either (a) the upstream neuron is inactive, in which case the downstream neuron receives zero input from it regardless of the weight between them, or (b) the downstream neuron is inactive, in which case it outputs zero regardless of the input received from the upstream neuron. See Figure <ref>.
* A ReLU of a later layer may collapse complexity constructed by earlier layers, negating the impact of the parameters from those layers. One way this could happen is if multiple linear regions of F^i-1∘…∘ F^1(ℝ^n_0), 1 < i < d are collapsed by one or more ReLUs in layer F^i to form a single linear region (violating LRA). This collapsing could erase the effect of parameters from these earlier layers. See Figure <ref>.
* The (relevant part of the) image in a hidden layer may be contained in a subspace of positive co-dimension. Suppose, for 1 < i < d, the image of the domain in the ith hidden layer, F^i-1∘…∘ F^1(ℝ^n_0), is contained in an affine-linear subspace A ⊂ℝ^n_i of positive codimension. Then for any neuron N of the (i+1)th layer, denoting by H the associated oriented and co-normed hyperplane in ℝ^n_i, there is a 1-parameter family of oriented, co-normed hyperplanes {H_t} obtained by rotating H around its intersection with A that all give rise to the same map (Lemma <ref>). See Figure <ref>.
Proposition <ref> says that, given a neuron map of the (k+1)th layer, if the part of Im_(k) := F^k ∘…∘ F^1(ℝ^n_0) where that neuron is nonnegative is contained in an affine-linear subspace of ℝ^n_k of positive codimension, then the hyperplane associated to that neuron is not uniquely determined.
Let η:ℝ^n_k→ℝ^1 be a neuron given by η = σ∘ A for some fixed affine-linear map A:x⃗↦x⃗· n_H - b_H. Suppose
* there exists an affine-linear hyperplane S ⊂ℝ^n_k such that
Im_(k)∩{x |η(x) ≥ 0}⊆ S,
* the hyperplanes S and H are in general position
* no unbounded ray contained in Im_(k) has a slope that is orthogonal to n⃗_H ≠ 0.
Then there is a 1-parameter family of (distinct) hyperplanes {H_t}_t in ℝ^n_k such that
η∘ F^k ∘…∘ F^1 = f_H_t∘ F^k ∘…∘ F^1
where f_H_t: ℝ^n_k→ℝ is a neuron associated to the hyperplane H_t (equipped with some suitable choice of oriented conorm).
The proof of Proposition <ref> is in <ref>.
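Returning to the first mechanism, the following NumPy sketch (a toy construction of ours, not a configuration appearing in the paper) builds a second-hidden-layer neuron with nonpositive incoming weights and a negative bias. Since that neuron only ever receives nonnegative inputs, its pre-activation is negative on all of ℝ^n_0 and it never activates, so its outgoing weights can be changed arbitrarily without changing the realized function; the check below verifies this consequence on random inputs.

import numpy as np

rng = np.random.default_rng(3)
n0, n1, n2, n3 = 3, 4, 4, 1
W1, b1 = rng.normal(size=(n1, n0)), rng.normal(size=n1)
W2, b2 = rng.normal(size=(n2, n1)), rng.normal(size=n2)
W3 = rng.normal(size=(n3, n2))

# Force neuron 0 of the second hidden layer to be unactivated on all of R^{n_0}:
W2[0] = -np.abs(W2[0])   # nonpositive incoming weights
b2[0] = -1.0             # negative bias, so W2[0] @ (nonnegative vector) + b2[0] <= -1 < 0

def forward(W3_, x):
    h1 = np.maximum(W1 @ x + b1, 0.0)   # first hidden layer output, always nonnegative
    h2 = np.maximum(W2 @ h1 + b2, 0.0)  # second hidden layer output; coordinate 0 is always 0
    return W3_ @ h2

W3_perturbed = W3.copy()
W3_perturbed[:, 0] += rng.normal(size=n3)   # change the dead neuron's outgoing weights

for _ in range(1000):
    x = rng.normal(scale=10.0, size=n0)
    assert np.allclose(forward(W3, x), forward(W3_perturbed, x))
print("changing the outgoing weights of the unactivated neuron does not change F_theta")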
§ EXPERIMENTS
We conducted an empirical investigation of hidden symmetries at various parameter settings. It is computationally impractical to directly rule out the existence of hidden symmetries at a parameter using, e.g., the geometry of the bent hyperplane arrangement. Accordingly, in our experiments we rely on a relationship between the symmetries of a parameter and its functional dimension to probe the prevalence of hidden symmetries for a variety of architectures.
Recall that the functional dimension dim_fun(θ) of a parameter θ is, informally, the number of linearly independent ways the function F_θ can be altered by perturbing the parameter θ. Following the formal Definition <ref>, we can approximate dim_fun(θ) by evaluating the function F_θ at a finite subset Z ⊂ℝ^n_0 of points in input space, stacking the outputs into a single vector of dimension |Z|· n_d, and calculating the rank of the Jacobian of this vector with respect to θ. The functional dimension dim_fun(θ) is the supremum of this rank over all sets Z. Intuitively, this is because we are evaluating the number of coordinates of θ that independently affect the value of the function F_θ at some point in its domain (recognizing that some weights and biases may not have an effect on F_θ except on a limited subset of input space).
In our experiments, we consider networks with n_d=1, and to approximate the functional dimension, we evaluate the set of gradient vectors {∇_θ F_θ(z)}_z∈ Z over a finite subset of m points Z={z_1,…,z_m}⊂^n_0 in input space (we use points sampled i.i.d. from the zero-centered unit normal). Then, for m sufficiently large, we have:
dim_fun(θ) ≈ rank [[ ∇_θ F_θ(z_1); ⋮; ∇_θ F_θ(z_m) ]].
We initialize networks with weights drawn i.i.d. from normal distributions with variance 2/fan-in, according to standard practice for ReLU networks <cit.>, and biases drawn i.i.d. from a normal distribution with very small variance (arbitrarily set to 0.01). To improve the quality of the approximation in (<ref>), we use m sample points for m equal to 100 times the maximum possible value for dim_fun(θ), according to the upper bound given in <cit.> (in Appendix <ref>, we show that our experimental conclusions are not dependent on the choice of m). Note that our approach yields approximations of dim_fun(θ) which are also necessarily lower bounds to it; in particular, any computed value that attains the theoretical upper bound on dim_fun(θ) is guaranteed to be accurate and thus is consistent with the parameter admitting no hidden symmetries.
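The estimator above can be written down in a few lines. The following PyTorch sketch is a minimal illustration only: it uses a toy depth-4, width-5 architecture, takes m to be a small multiple of the raw parameter count D (rather than 100 times the theoretical upper bound on dim_fun used in our experiments), and relies on the default numerical tolerance of the rank computation.

import torch

torch.manual_seed(0)

def make_params(widths):
    """Weights ~ N(0, 2/fan_in), biases ~ N(0, 0.01); the final layer map has no bias."""
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        params.append(torch.randn(n_out, n_in) * (2.0 / n_in) ** 0.5)
        params.append(torch.randn(n_out) * 0.1)
    return [p.requires_grad_(True) for p in params[:-1]]   # drop the output bias

def f(params, x):
    h = x
    for W, b in zip(params[0:-1:2], params[1:-1:2]):        # hidden layer maps
        h = torch.relu(W @ h + b)
    return (params[-1] @ h).squeeze()                       # linear output layer, n_d = 1

def estimate_functional_dim(widths, m):
    params = make_params(widths)
    rows = []
    for _ in range(m):
        z = torch.randn(widths[0])                          # z ~ N(0, I) in input space
        grads = torch.autograd.grad(f(params, z), params)   # gradient of F_theta(z) w.r.t. theta
        rows.append(torch.cat([g.flatten() for g in grads]))
    return torch.linalg.matrix_rank(torch.stack(rows)).item()

widths = (5, 5, 5, 5, 1)   # depth 4, width 5 (toy example)
D = sum(n_out * (n_in + 1) for n_in, n_out in zip(widths[:-1], widths[1:])) - widths[-1]
print(f"parameter count D = {D}, "
      f"estimated functional dimension = {estimate_functional_dim(widths, m=10 * D)}")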
In Figure <ref>, we plot the distribution of approximate functional dimensions for networks with depth d=4,5,6 and with n_0=n_1=⋯=n_d-1 equal to 5,10,15. For each architecture, we consider 5000 different choices of θ∈Ω, computing the fraction that lead to networks with a given approximated functional dimension. Thus, we find that for depth 4, the widths 5,10,15 result in 25%, 48%, and 66% of networks, respectively, having the maximum possible functional dimension (marked with black dots in the Figure), while for depth 6, the widths 5,10,15 result in 1%, 3%, and 5% of networks, respectively, having the maximum possible functional dimension.
We observe that increasing the depth d (while keeping the width fixed) results in a decreased probability of full functional dimension, and thus the likely absence of hidden symmetries (cf. Lemma <ref>). By contrast, increased width (with n_0=n_1=⋯=n_d-1, i.e. varying the input dimension and width together, while keeping depth fixed) is associated with an increasing probability of full functional dimension.
We offer the following explanations of possible drivers of these observed phenomena. Increasing the depth increases the variance and higher moments associated with properties such as the activation of individual neurons <cit.>, thereby increasing the likelihood that functional dimension is decreased via mechanisms (<ref>) or (<ref>). By contrast, increasing the input dimension and width increases the chance that the bent hyperplanes associated with two neighboring neurons will intersect. (Intuitively, this is because a hyperplane will intersect a bent hyperplane unless the latter “curves away” in all dimensions, which becomes exponentially unlikely as the dimension increases, in the same way that the probability a matrix is positive definite decays exponentially with dimension <cit.>). However, further study is required, and it is worth noting that a counteracting factor as the width increases may be that the maximum functional dimension increases, so the support of the distribution also increases and the probability assigned to any individual functional dimension, including the maximum, is less than it might be for a distribution with smaller support.
We also note that the distributions of approximate functional dimensions appear to approach smooth unimodal curves if the probability of full functional dimension is low (as in the Depth 6 plots), but are strongly multimodal when there is a high probability of full functional dimension. In the inset panels of the figure, we show zoomed-in versions of the upper ends of certain distributions, detailing the multiple peaks in the distribution. We note that for each such multimodal distribution, the peaks appear to be spaced by a value equal to the width of the network. We note that of the mechanisms we consider for hidden symmetries, mechanism (<ref>) (a stably inactivated neuron) should reduce functional dimension by 2×width (the number of incoming and outgoing weights of the neuron), while (<ref>) (two neurons that are never co-active) reduces functional dimension by one (the weight between the neurons). Thus, neither of these mechanisms should apply in this case, and mechanisms (<ref>) or (<ref>) may apply; the phenomenon bears further investigation.
In Figure <ref>, we show the results of a similar set of experiments, where the input dimension n_0 is instead kept fixed at 5 as the widths n_1=n_2=⋯=n_d-1 of the hidden layers vary.
While we again observe in this case that the probability of full functional dimension decreases with depth, this probability slightly decreases with width, unlike the previous scenario (Figure <ref>).
As in Figure <ref>, we again note that when the distributions are multimodal, the gaps between the modes are spaced according to the width n_1=⋯=n_d-1.
§ CONCLUSIONS AND FURTHER QUESTIONS
We have performed both a theoretical and an empirical investigation of the following question: How faithfully does the parameter space of a feedforward ReLU network architecture model its associated function class?
Our investigation centers on a relationship between well-established symmetries of parameter space (operations on parameters that leave the resulting function unchanged)
and the functional dimension of a parameter (informally, the true dimension of the local search space for any gradient-based optimization algorithm). It was established in <cit.> that the functional dimension is inhomogeneous across parameter space, but the prevalence of this inhomogeneity and specifics about its dependence on architecture was previously unknown.
In the theoretical component of this work, we significantly expand the collection of architectures containing parameters with no hidden symmetries beyond the restricted classes considered in <cit.> and <cit.>. We also provide a partial list of geometric mechanisms that give rise to positive-dimensional spaces of hidden symmetries.
Our empirical investigation strongly suggests that the probability distribution on the functional dimension at initialization is both interesting and architecture-dependent. In particular, under standard assumptions on the probability distribution on the parameters, the expected value of the functional dimension appears to scale positively with width and negatively with depth. It also appears to be multimodal when the ratio of the width to the depth is high, with modes separated by integer multiples of the width. Further investigation of these effects
may help us understand which mechanisms dominate in producing hidden symmetries, at various depth vs. width scales.
In future work, we hope to investigate how functional dimension evolves during training, since parameters with lower functional dimension are associated to lower-complexity functions that are more likely to generalize well to unseen data. Since lower functional dimension corresponds to higher-dimensional spaces of local symmetries, low functional dimension should induce local flatness of the loss landscape. Comparing this conjecture to recent work suggesting that stochastic gradient descent favors flat minima of the loss landscape,[Critical points for which the Hessian of the loss has many eigenvalues close to 0.] this could at least partially explain any implicit regularization behavior of SGD for feedforward ReLU neural network architectures.
§ ACKNOWLEDGMENTS
J.E.G. acknowledges support from Simons Collaboration grant 635578 and NSF grant DMS - 2133822. K.L. acknowledges support from NSF grants DMS-2133822 and DMS-1901247.
D.R. acknowledges support from the Canada CIFAR AI Chairs Program and an NSERC Discovery Grant.
§ COMBINATORIAL/GEOMETRIC BACKGROUND AND NOTATION
Let
𝕆^≥ 0:= {(x_1, …, x_n) ∈ℝ^n | x_i ≥ 0 ∀ i}
denote the non-negative orthant in ℝ^n. Letting
ℝ^n_k := {(x_1, …, x_n) ∈ℝ^n | x_k+1 = … = x_n = 0}
denote the initial coordinate k–plane, we will denote by
𝕆^≥ 0_k := 𝕆^≥ 0∩ℝ^n_k
the distinguished k-face of the non-negative orthant which is obtained by intersecting with the initial coordinate k–plane.
Let θ∈Ω be a parameter in the parameter space of a feedforward ReLU network architecture (n_0, …, n_d), and let F_θ be defined as in Equations <ref> and <ref>.
Letting z^ℓ_i = π_i (W^ℓ x + b^ℓ) denote the ith component of the pre-activation output of the ℓth layer map F^ℓ:ℝ^n_ℓ-1→ℝ^n_ℓ of F_θ, we denote its zero set by H^ℓ_i := (z^ℓ_i)^-1{0}⊆ℝ^n_ℓ - 1. Note that for almost all parameters, H^ℓ_i is an affine hyperplane.
Accordingly, we associate to each layer map F^ℓ the set
𝒜^ℓ = {H^ℓ_1, …, H^ℓ_n_ℓ}⊆ℝ^n_ℓ-1,
which for almost all parameters is a hyperplane arrangement. In the course of the inductive proof of our main theorem, we will need notation for the preimages of the hyperplanes in the previous layer:
𝒜̀^ℓ = {H̀^ℓ_i}_i=1^n_ℓ := {F_ℓ^-1(H^ℓ_i)}_i=1^n_ℓ⊆ℝ^n_ℓ-2.
and in the domain (the reader easily checks that the latter are precisely the bent hyperplanes defined in Equation <ref>):
𝒜̂^ℓ = {Ĥ^ℓ_i}_i=1^n_ℓ := {F_(ℓ)^-1(H_i^ℓ)}_i=1^n_ℓ⊆ℝ^n_0
𝒜^ℓ is said to be generic if for all subsets
{H^ℓ_i_1 , … , H^ℓ_i_p}⊆𝒜^ℓ,
it is the case that H^ℓ_i_1∩…∩ H^ℓ_i_p is an affine-linear subspace of ℝ^n_ℓ-1 of dimension n_ℓ-1 - p, where a negative-dimensional intersection is understood to be empty.
A layer map F^ℓ is said to be generic if 𝒜^ℓ is generic. A parameter θ or the corresponding network map F_θ is said to be generic if all of its layer maps are generic.
It is well-established in the hyperplane arrangement literature (cf. <cit.>) that generic arrangements are full measure. It follows (cf. <cit.>) that generic network maps are full measure in parameter space.
§.§ Decompositions of Polyhedral Sets
Recalling that a polyhedral set in ℝ^n_ℓ -1 is an intersection of finitely many closed half spaces, a hyperplane arrangement in ℝ^n_ℓ-1 induces a polyhedral decomposition of ℝ^n_ℓ -1 into finitely many polyhedral sets. The face structure on these polyhedral sets gives the decomposition the structure of a polyhedral complex. By pulling back these polyhedral complexes to the domain, ℝ^n_0, and taking intersections, we inductively obtain the canonical polyhedral complex 𝒞(F_θ) as follows.
For ℓ∈{1,…,d}, denote by R^ℓ the polyhedral complex on ℝ^n_ℓ-1 induced by the hyperplane arrangement associated to the ℓth layer map, F^ℓ. Inductively define polyhedral complexes 𝒞(F_(1)),…,𝒞(F_(d)) on ℝ^n_0 as follows: Set 𝒞(F_(1)) = 𝒞(F^1) := R^1 and, for ℓ = 2,…,d, set
𝒞(F_(ℓ)) := {S ∩(F_(ℓ-1))^-1(Y) | S ∈𝒞(F_(ℓ-1)), Y ∈ R^ℓ}.
Set 𝒞(F_θ) := 𝒞(F_(d)). See <cit.> and <cit.> for more details.
It was proved in <cit.> that on a full measure subset of parameter space, the n_0-cells of the canonical polyhedral complex, 𝒞(F), are the closures of the activation regions, and the (n_0-1)-skeleton of 𝒞(F) is the bent hyperplane arrangement associated to F_θ.
We will also need the following terminology and results (cf. <cit.>) pertaining to the structure of polyhedral sets. See <cit.> and <cit.> for additional details.
A polyhedral set P ⊂ℝ^n is said to be pointed if it has a face of dimension 0. The convex hull of a set S ⊂ℝ^n is the intersection of all convex subsets of ℝ^n that contain S. A cone in ℝ^n is a set C such that if x,y ∈ C and λ, μ≥ 0, then λ x + μ y ∈ C. Let P and Q be polyhedral sets embedded in ℝ^n. The Minkowski sum of P and Q is P + Q := {p+q | p ∈ P, q ∈ Q}. The characteristic cone of P, denoted Cone(P), is the maximal set { y | x+y ∈ P for all x ∈ P} that also has the structure of a cone. Note that a polyhedral set is unbounded iff its characteristic cone is nontrivial (i.e., contains a nonzero vector). Moreover, every polyhedral set P has a decomposition as the Minkowski sum of a bounded polyhedral set (polytope) P_B and Cone(P). If P is pointed, we may take P_B to be the convex hull of its 0–cells, cf. Theorem 8.5 of <cit.>.
A ternary activation pattern (aka ternary neural code or ternary sign sequence) for a network architecture (n_0, …, n_d) with N neurons is a ternary tuple s ∈{-1,0,+1}^N. The ternary labeling of a point x ∈ℝ^n_0 is the sequence of ternary tuples
s_x := (s_x^1, …, s_x^d) ∈{-1,0,+1}^n_1 +… + n_d
indicating the sign of the pre-activation output of each neuron of F_θ at x.
Explicitly, letting F_θ be defined as in Equations <ref> and <ref>, x ∈ℝ^n_0 be any input vector, and the pre-activation output z_(ℓ),i(x) of the ith neuron in the ℓth layer at x be as in Equation <ref>, the components of s_x^ℓ = (s^ℓ_x,1, … , s^ℓ_x,n_ℓ) are defined by s^ℓ_x,i = sgn(z_(ℓ),i(x)) (using the convention sgn(0) = 0).
Moreover, for all parameters θ it follows immediately from the definitions that the ternary labeling is constant on the interior of each cell of 𝒞(F_θ), inducing a ternary labeling s_C on each cell C of 𝒞(F_θ) <cit.>. If s^ℓ_x,i≤ 0 at an input vector x (resp., s^ℓ_C,i≤ 0 on a cell C), we say that the ith neuron in the ℓth layer is off or turned off at x (resp., on C).
Fix a parameter θ∈Ω. A neuron (say it is the ith neuron of layer ℓ) is said to be stably unactivated at θ if there exists an open neighborhood U ⊂Ω of θ such s^ℓ_x,i(u) ≤ 0 for every x ∈ℝ^n_0 and every u ∈ U, where
s^ℓ_x,i(u) denotes the corresponding coordinate of ternary coding with respect to the parameter u.
The activation region of F_θ corresponding to a ternary activation pattern s is a maximal connected component of the set of input vectors x ∈ℝ^n_0 for which the ternary labeling s_x equals s.
A ±-activation pattern is a ternary activation pattern in which every coordinate is nonzero, and a ±-activation region is an activation region associated to a ±-activation pattern.
Note that any ±-activation region is an open set.
Neuron i of layer ℓ and neuron j of layer ℓ+1 are called never coactive if
{x ∈ℝ^n_0| s^ℓ_x,i = s^ℓ+1_x,j = 1} = ∅.
For generic, supertransversal (Definition <ref>) networks, it follows from <cit.> that the ±-activation regions of 𝒞(F_θ) are precisely the interiors of the n_0–cells of 𝒞(F_θ).
For s = (s^1, …, s^d) ∈{-1,0,+1}^d a ternary d–tuple let
𝕆^≥ 0_s := {x ∈𝕆^≥ 0 | x_i = ReLU(s_ix_i)}
denote the face of the non-negative orthant consisting of points whose ith component is 0 when s^i ≤ 0.
The following lemma is immediate from the definitions.
Let F be a ReLU neural network map of architecture (n_0, …, n_d), 𝒞(F) its canonical polyhedral complex, and C a cell of 𝒞(F) with ℓth ternary label s_C^ℓ. Then F_(ℓ)(C) is contained in 𝕆^≥ 0_s_C^ℓ.
The dimension of a ternary label s, denoted dim(s), is the dimension of the face 𝕆^≥ 0_s. Equivalently, dim(s) is the number of +1's in the tuple s.
§ TRANSVERSALITY
Recall the following classical notions (cf. <cit.> and Section 4 of <cit.>):
<cit.> Let X be a smooth manifold with or without boundary, Y and Z smooth manifolds without boundary, Z a smoothly embedded submanifold of Y, and f:X → Y a smooth map. We say that f is transverse to Z and write f ⋔ Z if
df_p(T_pX) + T_f(p)Z = T_f(p)Y
for all p ∈ f^-1(Z).
Let 𝒞 be a polyhedral complex in ℝ^n, let f: |𝒞| →ℝ^r be a map which is smooth on all cells of 𝒞, and let Z be a smoothly embedded submanifold (without boundary) of ℝ^r. We say that f is transverse on cells to Z and write f ⋔_c Z if the restriction of f to the interior, int(C), of every cell C of 𝒞 is transverse to Z (in the sense of Definition <ref>).
Note that we use the convention that the interior of a 0–cell is the 0–cell itself.
The definition above can be extended so that Z is the domain of a polyhedral complex in ℝ^r. This was essentially carried out in <cit.>:
<cit.> Let 𝒞 be a polyhedral complex in ℝ^n, let f: |𝒞| →ℝ^r be a map which is smooth on all cells of 𝒞, and let 𝒵 be a polyhedral complex in ℝ^r. We say that f and 𝒵 are transverse on cells and write f ⋔_c |𝒵| if the restriction of f to the interior of every cell of 𝒞 is transverse to the interior of every cell of |𝒵| (in the sense of Definition <ref>).
Transversality for intersections of polyhedral complexes implies that each non-empty cell in the intersection complex has the expected dimension. The following extension of the classical Map Transversality Theorem to polyhedral complexes is immediate (cf. <cit.> Cor. 4.7, <cit.>):
Let 𝒞 be a polyhedral complex in ℝ^n, let f: |𝒞| →ℝ^r be a map which is smooth on all cells of 𝒞, and let 𝒵 be a polyhedral complex in ℝ^r for which f ⋔_c 𝒵. Then for every pair of cells C ∈𝒞 and Z ∈𝒵, f^-1(Z) ∩ int(C) is a (possibly empty) smoothly embedded submanifold of int(C) whose codimension in int(C) equals the codimension of Z in ℝ^r.
We use the standard convention that a manifold of negative dimension is empty. In particular, if f: |𝒞| →ℝ^r and 𝒵⊆ℝ^r are transverse on cells as above, and C ∈𝒞 is a 0–cell, then f^-1(Z) ∩ C = ∅ for all cells Z of positive codimension.
Let 𝒞 be a polyhedral complex with domain |𝒞| ⊆ℝ^n, and F: |𝒞| →ℝ^r a map that is affine-linear on cells. Then F(𝒞) is a polyhedral complex in ℝ^r.
Note that F(𝒞) need not be imbedded, nor even immersed.
We begin by showing that the image, F(C), of a k–dimensional cell (polyhedral set) C ∈𝒞 is itself a polyhedral set in ℝ^r. By definition, C is the solution set of finitely many affine-linear inequalities. That is, there exists (for some m) an m × n matrix A and a vector b ∈ℝ^m such that
C := {x ∈ℝ^n | Ax ≥ b}.
Let V = aff(C) be the k–dimensional affine hull of C and choose any point p ∈ V. Noting that F is affine-linear on V, let j denote the rank of F restricted to V.
Now choose an (affine) basis ℬ = {v_1, …, v_k} for V whose final (k-j) vectors form a basis for the affine kernel of the map F restricted to V. That is, all vectors v of the form
v = p + ∑_i=j+1^k a_iv_i
satisfy F(v) = F(p).
Let W = {v_1, …, v_j}. By construction,
F|_V = F' ∘π_V → W
can be realized as the composition of the projection map π_V → W: V → W and an affine-linear isomorphism F': W → F(V). It can be seen using the Fourier-Motzkin elimination method (cf. Sec. 12.2 in <cit.>) that the image of C under π_V → W is a polyhedral set, and it is immediate that the image of a polyhedral set under an affine-linear isomorphism is a polyhedral set. It follows that the image of C under F is a polyhedral set in ℝ^r. The continuity of F ensures that if C' is a face of C, then F(C') will be a face of F(C), so the image of |𝒞| under F will be the domain of a polyhedral complex, as desired.
Let 𝒞 be a polyhedral complex in ℝ^n, F: |𝒞| →ℝ^r a map that is affine-linear on cells, and 𝒵 a polyhedral complex in ℝ^r. Let F(𝒞) denote the polyhedral complex in ℝ^r that is the image of 𝒞. If i: |F(𝒞)| →ℝ^m is the inclusion map, then i ⋔_c 𝒵 iff F ⋔_c 𝒵.
Let C be a cell of 𝒞 with image F(C) ∈ F(𝒞), and let Z ∈𝒵. We will show that when F (resp., i) is restricted to int(C) (resp., to int(F(C))), F|_int(C) ⋔ int(Z) iff i|_int(F(C)) ⋔ int(Z).
In the following, choose an affine-linear extension of F to all of ℝ^n and call it F|^ℝ^n. Note that any such extension can be decomposed as a projection onto the affine hull of C followed by an affine isomorphism onto the affine hull of F(C), as described in the proof of Lemma <ref>. Let c := dim(C), c' := dim(F(C)), z := dim(F^-1(Z)), and z' := dim(Z). Begin by noting that i|_int(F(C)) ⋔ int(Z) iff F(C) ∩ Z is a polyhedral set of dimension (c'+z') - r iff either they have empty intersection or the interiors of F(C) and Z have non-empty intersection and the affine hulls of F(C) and Z intersect in an affine-linear space of codimension (r-c')+(r-z') (that is, of dimension (c'+z') - r).
The statement that F|_C ⋔ Z iff i|_F(C)⋔ Z is vacuous in the empty intersection case.
In the non-empty intersection case, since the affine-linear map F restricted to int(C) is either a homeomorphism onto its image or a linear projection map onto a set homeomorphic to its image, we have int(F(C)) ∩ int(Z) ≠∅ iff int(C) ∩ int(F^-1(Z)) ≠∅. Moreover, the rank-nullity theorem applied to F|^ℝ^n tells us that int(F(C)) ∩ int(Z) has dimension (c'+z') - r iff int(C) ∩ int(F^-1(Z)) has dimension (c+z) - n.
It follows that for all cells C of 𝒞 and Z of 𝒵, F|_int(C) ⋔ Z iff i|_int(F(C)) ⋔ Z, and the statement follows.
Let F be a ReLU neural network of depth d, and let the (i-1)st layer map, F^i-1: ℝ^n_i-2→ℝ^n_i-1 (resp., the composition of maps from the ith layer map, F^(i): ℝ^n_i-1→ℝ^n_d) be viewed as maps that are affine-linear on cells of their respective canonical polyhedral decompositions, 𝒞(F^i-1) (resp. 𝒞(F^(i))). If, for all 2 ≤ i ≤ d, we have
F^i-1⋔_c 𝒞(F^(i)),
then we call F a supertransversal neural network.
Informally, we can think of supertransversality as the right generalization of the genericity condition for hyperplane arrangements to bent hyperplane arrangements. Recall that a hyperplane arrangement is generic if every k–fold intersection of hyperplanes in the arrangement is an affine linear subspace of dimension n-k. Analogously, it follows from the definitions that every k–fold intersection of bent hyperplanes associated to a generic, supertransversal network intersect in a (possibly empty) polyhedral complex of dimension n-k.
An important result proved in <cit.> (see also Theorem 3 of <cit.>) is the following:
For any neural network architecture, the set of parameters associated to generic, supertransversal marked neural network functions is full measure in parameter space, ℝ^D.
Let s ∈ (Ω = ℝ^D) be a generic, supertransversal (Definition <ref>) parameter for a ReLU neural network of architecture (n_0, …, n_d). We say that s satisfies the transverse pairwise intersection condition (TPIC) for all adjacent layer maps if Ĥ^ℓ_i ⋔_c Ĥ^ℓ + 1_j ≠∅ for all i,j, ℓ. That is, every pair of bent hyperplanes in adjacent layers has non-empty transverse intersection.
In the language of <cit.> and <cit.>, a generic, supertransversal parameter s satisfies the transverse pairwise intersection condition (TPIC) for all adjacent layer maps iff every pair of nodes in every pair of adjacent layers of the dependency graph is connected by an edge.
§ UNBOUNDED POLYHEDRAL SETS AND SUFFICIENTLY HIGH-BIAS POSITIVE-AXIS HYPERPLANES
In order to choose parameters whose bent hyperplane arrangement satisfies (TPIC), we will need to establish some results about the images of unbounded polyhedral sets under generic, supertransversal ReLU neural network layer maps. We will also need to understand the intersections of these images with sufficiently high-bias positive-axis hyperplanes.
The following proposition ensures that the images of the nested unbounded polyhedral sets 𝒮_1 ⊆𝒮_2 ⊆…⊆𝒮_d referenced in the proof of the main theorem are unbounded in the layers of the neural network.
Let F_θ: ℝ^n_0→ℝ^n_d be a generic, supertransversal ReLU network map of architecture (n_0=k, n_1, …, n_d) with n_ℓ≥ k for all ℓ, and let 𝒮 be an unbounded polyhedral set of dimension k in the canonical polyhedral complex 𝒞(F_θ). If the sign sequence s_𝒮 = (s^1, …, s^d) associated to 𝒮 satisfies s^i = (+1, …, +1, -1, …, -1), with k entries equal to +1 followed by n_i - k entries equal to -1, for all i ≤ℓ, then F_(ℓ)(𝒮) is an unbounded polyhedral set of dimension k contained in 𝕆^≥ 0_k ⊆ℝ^n_ℓ.
The fact that F_(ℓ)(S) is a polyhedral set contained in 𝕆^≥ 0_k ⊆ℝ^n_ℓ is a consequence of Lemmas <ref> and <ref>, so we need only prove that its image is unbounded, of dimension k.
We will prove this by induction on ℓ. When ℓ = 1, 𝒞(F_θ) = 𝒞(A^1) for the generic hyperplane arrangement 𝒜^1 associated to F^1. Since 𝒮 is a pointed (since n_1 ≥ k) unbounded polyhedral set of dimension k, its boundary contains unbounded 1–cells (rays) R_i = {x_i + tv_i | t ≥ 0} based at {x_i} with slopes {v_i}. Note that because 𝒜^1 is generic, each 0–cell x_i is a k–fold intersection of distinct hyperplanes from 𝒜^1, and each R_i is contained in a (k-1)–fold intersection of distinct hyperplanes from 𝒜^1. It follows that 𝒮 has at least k unbounded facets, each contained in a different hyperplane of 𝒜^1. Reindex if necessary so k of these are H_1^1, …, H_k^1 and then flip co-orientations so that the sign sequence on 𝒮 is
s^1_𝒮 = (+1, …, +1, -1, …, -1), with k entries equal to +1 followed by n_1 - k entries equal to -1.
We have chosen the first k neurons of F^1 to be “on" on (the interior of) 𝒮. This implies that if we let w_1, …, w_n_1 be the weight vectors and b_1, …, b_n_1 the biases associated to F^1, we have w_i · x_j +b_i≥ 0 for all 1 ≤ i ≤ k, with w_i · x_j +b_i = 0 iff x_j ∈ H_i.
As to the slopes {v_i} of the unbounded 1–cells (rays) {R_i}, it is immediate that each v_i ∈ Cone(𝒮) (cf. <cit.>),
and by reindexing if necessary we may assume {v_1, …, v_k} is a basis for ℝ^n_0 = k since 𝒮 has dimension k. Moreover, we claim that w_i · v_j ≥ 0 for all 1 ≤ i, j ≤ k with w_i · v_j = 0 iff R_j ⊆ H_i.
To see this claim, note first that if w_i · x_j + b_i > 0, then x_j ∉H_i, so R_j ⊄H_i. We also see that w_i · v_j ≥ 0, since otherwise w_i · (x_j + tv_j) + b_i would be negative for all sufficiently large t > 0, contradicting the fact that the unbounded ray R_j is contained in 𝒮, on which w_i · x + b_i ≥ 0. But w_i · v_j ≠ 0, since this would imply v_j ∈ H_i_1∩…∩ H_i_k-1∩ w_i^⊥, which contradicts the genericity of 𝒜^1.
If w_i· x_j + b_i= 0, then x_j ∈ H_i and hence R_j ⊂ H_i iff w_i · v_j = 0, as desired.
Now let
W = [[ w_1^T; ⋮; w_n_ℓ^T ]]
be the matrix whose row vectors are the weight vectors
{w_i} associated to 𝒜^1 and let
V := [[ v_1 ⋯ v_k ]].
Then the ith column of WV is precisely the pre-activation image of the vector v_i in ℝ^n_1. Moreover, since 𝒜 is generic, each k× k minor of WV has rank k. We also just saw above that the initial k × k minor of WV is unaffected by the ReLU activation, since all entries are ≥ 0.
Since the post-activation rank of WV is the dimension of (F^1(𝒮)), we conclude that F^1(𝒮) is unbounded, of dimension k, as desired, and the base case is complete.
Now suppose 𝒮 satisfies the assumptions and we know that F_(ℓ -1)(𝒮) ⊆𝕆^≥ 0_k ⊆ℝ^n_ℓ -1 is unbounded of dimension k.
As in the proof of the base case, consider the unbounded rays R_i of F_(ℓ-1)(𝒮), their basepoints x_i, and their slopes v_i ∈ Cone(F_(ℓ-1)(𝒮)). As before, assume that v_1, …, v_k gives a basis for the initial k–plane of ℝ^n_ℓ-1 and let w_1, …, w_n_ℓ be the weight vectors of F^ℓ, and let W be the matrix whose rows are w_i^T and V the matrix whose columns are v_j. By exactly the same argument as before, we see that the initial k × k minor of WV is unaffected by the ReLU activation, and since F^ℓ is generic and super-transversal to all previous layer maps, each k × k minor of WV is rank k, which–as before–implies that F_(ℓ)(𝒮) = F^ℓ(F_(ℓ-1)(𝒮)) is also unbounded, of dimension k, as desired.
Let F_θ: ℝ^n_0→ℝ^n_d be a generic, supertransversal ReLU network map of architecture (n_0=k, n_1, …, n_d) with n_ℓ≥ k for all ℓ, and let 𝒮 be a non-empty unbounded polyhedral set of dimension k in the canonical polyhedral complex 𝒞(F_θ). If the sign sequence s_𝒮 = (s^1, …, s^d) associated to 𝒮 satisfies s^i = (+1, …, +1, -1, …, -1), with k entries equal to +1 followed by n_i - k entries equal to -1, for all i ≤ℓ, then F_(i) restricted to the (non-empty) interior of 𝒮 is a rank k affine-linear map, hence a homeomorphism onto its image, for all i ≤ℓ.
Let 𝒮 be a non-empty unbounded polyhedral set of dimension k in 𝕆^≥ 0_k ⊆ℝ^n_ℓ-1, viewed as the domain of a polyhedral complex that also contains all of its faces. Suppose H ⊆ℝ^n_ℓ - 1 is a hyperplane for which H ⋔_c 𝒮≠∅. For any n_ℓ≥ k, we can always find n_ℓ hyperplanes H_1, …, H_n_ℓ satisfying:
* H_i ⋔_c 𝒮≠∅,
* the restricted hyperplane arrangement 𝒦 = {K_1, … K_n_ℓ} is generic, and
* the bounded subcomplex of the restricted hyperplane arrangement {K_1, … K_n_ℓ} is non-empty and contained in the interior of 𝒮.
Let H_1 := H. Choose a point p ∈ H ∩𝒮 in the (non-empty) interior of 𝒮, and a small open neighborhood N(p) ⊆𝒮. We can pick slight (non-generic) perturbations H_2, …, H_n_ℓ of H so that p ∈ H_i for all i. Since transverse intersection is an open condition, we can by further perturbation insure that H_2, …, H_n_ℓ still intersect 𝒮 transversely. Because generic hyperplane arrangements are dense and open in parameter space, we can by further perturbation insure that the restricted hyperplane arrangement {K_1, …, K_n_ℓ} is generic in the initial coordinate k–plane ℝ^n_ℓ_k and that the bounded subcomplex of this arrangement is contained in N(p), as desired.
Let F: ℝ^n_ℓ -1→ℝ^n_ℓ be a generic ReLU neural network layer map with associated co-oriented generic hyperplane arrangement 𝒜, and let 𝒞(F) = 𝒞(𝒜) be the associated polyhedral decomposition of ℝ^n. For almost all positive weight vectors w⃗∈ℝ^n_ℓ there exists a negative bias b ∈ℝ such that the corresponding positive-axis hyperplane:
H := {x⃗∈ℝ^n_ℓ | w⃗·x⃗ + b = 0}.
has non-empty transverse intersection with F(C) for each unbounded n_ℓ-1–cell C in 𝒞(𝒜) whose ternary labeling has dimension ≥ 1.
By an argument analogous to the one in the proof of Proposition <ref>, we see that the image of any unbounded polyhedral set C in 𝒞(𝒜) of dimension d≥ 1 whose associated sign sequence s_C has dimension dim(s_C) (Definition <ref>) is an unbounded polyhedral set of dimension min{d, dim(s_C)}, hence contains an unbounded ray in the non-negative orthant 𝕆^≥ 0. But for every ray R contained in the non-negative orthant and every positive weight vector there exists a sufficiently high negative bias such that the corresponding sufficiently-high bias positive-axis hyperplane intersects R. Since transverse intersection is an open condition, we can perturb w slightly so this intersection is transverse.
Let B ⊆ℝ^n be a bounded set, and let S ⊆ℝ^n be any unbounded pointed polyhedral set of dimension n. There exists v⃗∈ Cone(S) such that the translate of B by v⃗ is in S. That is, there exists v⃗∈ Cone(S) such that b + v⃗∈ S for all b ∈ B.
Let P be any polytope containing B and let V be its (finite) set of 0–faces, so P is the convex hull of V.
It suffices to prove that when Cone(S) is non-empty and has dimension n (as is the case for any unbounded polyhedral set S of dimension n), the set S + (-Cone(S)) is all of ℝ^n. This will imply that P, being the convex hull of finitely many points, can be realized as P' + w for P' the convex hull of finitely many points in S and w ∈ -Cone(S).
But the fact that ℝ^n = S + (-Cone(S)) follows immediately from the fact that there exist vectors v_1, …, v_n ∈ Cone(S) that form a basis for ℝ^n.
The following result is well-known (cf. <cit.>), but we include a proof here for completeness.
Let ℝ^D be the parameter space for a feedforward ReLU network architecture. There exists an algebraic set S of positive codimension (and Lebesgue measure 0) such that every parameter θ∈ℝ^D ∖ S is generic.
A parameter is generic iff each hyperplane arrangement associated to each layer map is generic. Since there are finitely many layer maps, and the union of finitely many algebraic sets is an algebraic set, we need only prove that the statement of the lemma holds for a parameter associated to a single layer map.
Classical results in linear algebra relating the solution sets of homogeneous and inhomogeneous linear systems tell us that an arrangement of n_ℓ affine hyperplanes in ℝ^n_ℓ-1 is generic iff the associated bias-free (central) hyperplane arrangement has the property that every k–fold intersection of (non-affine) hyperplanes intersects in a linear subspace of codimension k (where a linear subspace of codimension >n_ℓ is by definition empty).
The rank-nullity theorem tells us that this happens iff every weight matrix associated to a k–fold subset of the central arrangement has full rank, min{k, n_ℓ-1}. Noting that the rank of a k × n_ℓ - 1 matrix is the maximum m for which some m × m minor has nonzero determinant, and the determinant is a polynomial equation in the matrix entries, we conclude that away from an algebraic set the parameters are generic. Since a non-empty algebraic set always has positive codimension (and hence Lebesgue measure 0), the conclusion follows.
Let θ_0 ∈ℝ^D be a generic parameter. There exists an open neighborhood U of θ_0 such that every parameter θ∈ U is generic.
An algebraic set is closed, hence its complement is open.
§ LINEAR REGIONS ASSUMPTION (LRA) AND TRANSVERSE PAIRWISE INTERSECTION CONDITION (TPIC)
A linear region of a continuous, finitely piecewise linear function f:ℝ^n_0→ℝ^n_d is any maximal connected set S ⊂ℝ^n_0 such that the restriction of f to S is affine-linear.
Note that each linear region is a closed set.
Let T ⊆ℝ^n_0 be a union of the closures of ± activation regions for a ReLU network map F_θ: ℝ^n_0→ℝ^n_d.
F_θ is said to satisfy the Linear Regions Assumption (LRA) on T if each linear region of the restriction of F_θ to T is its intersection with the closure of a single ±-activation region of F_θ.
The following Lemma is an immediate consequence of the definition of LRA. (Compare Remark <ref>.)
For a parameter that satisfies LRA on T ⊆ℝ^n_0 as above, the intersection of T with the union of the bent hyperplanes coincides with the domain of the (n_0-1)-skeleton of the canonical polyhedral complex.
In order to deduce that there are no hidden symmetries on a positive measure subset of parameter space, it will be important for us to know that the required LRA condition is satisfied not just for the construction we give in Section <ref> but also on a full measure subset of an open neighborhood of the associated parameter. We turn to establishing the foundations for proving this now.
Recall the following definition and result (cf. <cit.>, <cit.>), which tell us that for most pairs (θ, x) ∈Ω×ℝ^n_0, each coordinate of the realized function F_θ(x) is expressible as a polynomial in the coordinates of the parameter and the input.
Let Ω be the parameter space for ReLU network architecture (n_0, …, n_d). Denote by ℱ: Ω×ℝ^n_0→ℝ^n_d the function ℱ(θ,x) := F_θ(x).
Let Ω be the parameter space for ReLU network architecture (n_0,…,n_d). Let θ∈Ω, and suppose x ∈ℝ^n_0 is in a ±–activation region for F_θ.
Then there is an open neighborhood of (θ,x) ∈Ω×ℝ^n_0 on which each coordinate of ℱ is a polynomial in the coordinates of θ and x.
Indeed, if θ is moreover a generic and supertransversal parameter and x ∈ℝ^n_0 is any point in the domain, it is well-known that we can calculate this polynomial explicitly in terms of the ternary labeling on x and the parameters of θ, cf. Lemma 8 of <cit.>. For completeness, we describe this process here. We first establish some notation, then describe how the polynomial is calculated in Lemma <ref>.
The augmented computational graph G̃ for the feedforward ReLU network architecture (n_0, …, n_d) is the graded oriented graph:
* with n_ℓ ordinary vertices and 1 distinguished vertex of grading ℓ for ℓ = 0, …, d-1, and n_d ordinary vertices of grading d,
* for every ℓ = 0, …, d-1, every vertex of grading ℓ is connected by a single oriented edge to every ordinary vertex of grading ℓ+1, oriented toward the vertex of grading ℓ+1.
One obtains the augmented computational graph for an architecture from the standard computational graph for the architecture by adding an extra marked vertex for each non-output layer, whose purpose is to record the bias term in each affine-linear map. Accordingly, given a parameter θ one obtains a labeling of the edges of the augmented computational graph:
* the edge from the distinguished vertex of layer ℓ to the kth ordinary vertex of layer ℓ+1 is labeled with b^ℓ+1_k, the kth component of the bias vector for F^ℓ+1,
* the edge from the ith ordinary vertex of layer ℓ-1 to the jth ordinary vertex of layer ℓ is labeled with W^ℓ_ji.
Associated to every oriented path γ is a corresponding monomial, m(γ), in the parameters obtained by taking the product of the parameters on the edges traversed along γ. See Figure <ref>.
An augmented computational graph for architecture (2,3,3,1). The ordinary vertices are black, and the distinguished vertices are red. Black edges are labeled with weights, and red edges are labeled with biases. A complete path is one that ends at an output vertex and begins either at an input vertex or at one of the distinguished vertices. In the diagram above, we have blurred out vertices corresponding to inactive neurons associated to an input vector x with ternary label s_x = (s_x^1,s_x^2,s_x^3) = ((-1,0,+1), (+1,0,0), (+1)). The open paths associated to this ternary label are the ones in the diagram above with solid (non-dashed) edges. The reader can check that there are three open, complete paths γ, γ', γ”∈Γ_x,*, whose monomials are m(γ) = b_3^1W^2_13W^3_11, m(γ') = b_1^2W^3_11, and m(γ”) = b_1^3. There is a unique open, complete path γ_1 ∈Γ_1, with monomial m(γ_1) = W^1_31W^2_13W^3_11 and a unique open, complete path γ_2 ∈Γ_2, with monomial m(γ_2) = W^1_32W^2_13W^3_11.
Let θ∈Ω be a generic, supertransversal parameter, and let x ∈ℝ^n_0 be any point in the domain, with associated ternary labeling s_x = (s^1_x, …, s^d_x). A path γ is said to be open at x for parameter θ if every node along γ has ternary labeling +1.
Let G̃ be the augmented computational graph for the ReLU network architecture (n_0, …, n_d). A path γ is said to be complete if it ends at a vertex in the output layer and begins at either a vertex of the input layer or at one of the distinguished vertices in a non-input layer. For θ∈Ω and each 1 ≤ k ≤ n_d we will denote by
* Γ_x^θ,k the set of complete paths that are open at x for the parameter θ and end at the kth node of the output layer,
* by Γ_x,i^θ,k⊆Γ_x^θ,k the subset of Γ_x^θ,k beginning at input node i, and
* by Γ_x,*^θ,k⊆Γ_x^θ,k the subset of Γ_x^θ,k beginning at one of the distinguished vertices.
The following lemma is well-known to the experts (e.g. Lemma 8 of <cit.>).
Let θ∈Ω be a generic, supertransversal parameter, and let x = (x_1, …, x_n_0) ∈ℝ^n_0 be any point in the domain. For k=1, …, n_d, the polynomial associated to the kth output component of F_θ at x is given by
ℱ_k(θ,x) = ∑_γ∈Γ_x,*^θ,k m(γ) + ∑_i=1^n_0 x_i∑_γ∈Γ_x,i^θ,k m(γ) .
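To make the path-sum formula concrete, here is a minimal, self-contained Python/numpy sketch (ours, not part of the original text) that enumerates the open, complete paths of a small randomly initialized network and checks that the sum of their monomials reproduces the ordinary forward pass; the architecture, the random seed, and the convention of gating only hidden neurons (the output layer carries no ReLU) are illustrative choices.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
arch = [2, 3, 3, 1]                      # architecture (n_0, n_1, n_2, n_3)
W = [rng.normal(size=(arch[l + 1], arch[l])) for l in range(len(arch) - 1)]
b = [rng.normal(size=arch[l + 1]) for l in range(len(arch) - 1)]
d = len(W)

def forward_with_pattern(x):
    """Forward pass; also records which hidden neurons are open (ternary label +1)."""
    a, pattern = np.asarray(x, float), []
    for l in range(d):
        z = W[l] @ a + b[l]
        if l < d - 1:
            pattern.append(z > 0)        # strict inequality: open neurons
            a = np.maximum(z, 0.0)
        else:
            a = z                        # affine output layer, no ReLU
    return a, pattern

def path_sum(x, k=0):
    """Sum of monomials m(gamma) over open, complete paths ending at output node k."""
    x = np.asarray(x, float)
    _, pattern = forward_with_pattern(x)
    open_nodes = [np.flatnonzero(p) for p in pattern]    # open neurons per hidden layer
    total = 0.0
    # paths starting at an input vertex i and passing through one open neuron per hidden layer
    for i in range(arch[0]):
        for nodes in product(*open_nodes):
            m = W[0][nodes[0], i]
            for l in range(1, d - 1):
                m *= W[l][nodes[l], nodes[l - 1]]
            total += m * W[d - 1][k, nodes[-1]] * x[i]
    # paths starting at the distinguished (bias) vertex feeding hidden layer l+1
    for l in range(d - 1):
        for j in open_nodes[l]:
            for nodes in product(*open_nodes[l + 1:]):
                m, prev = b[l][j], j
                for t, nxt in enumerate(nodes, start=l + 1):
                    m *= W[t][nxt, prev]
                    prev = nxt
                total += m * W[d - 1][k, prev]
    return total + b[d - 1][k]           # the path from the last distinguished vertex

x = rng.normal(size=arch[0])
print(forward_with_pattern(x)[0][0], path_sum(x))        # the two values agree
```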
Fix θ and any ternary labeling s. Define
(ℝ^n_0×Ω)_s := {(x,θ) ∈ℝ^n_0×Ω| the ternary labeling of x with respect to θ is s }.
Then ℱ restricted to (ℝ^n_0×Ω)_s is a vector of polynomial functions. In the proof of the Lemma below, we will use the notation ℱ_s for this vector of polynomial functions.
In the Lemma below, recall from Section <ref> that Ĥ_i^ℓ denotes the bent hyperplane in the domain ℝ^n_0 associated to the ith neuron of the ℓth layer map.
There exists an algebraic, measure 0 set B ⊂Ω such that for any generic, supertransversal parameter θ∈Ω∖ B
and any point x ∈Ĥ_i^ℓ - 1∩Ĥ_j^ℓ (for parameter θ),
if Γ_x∖Γ_x,* is non-empty, then the LRA is satisfied on the union, T_ij^ℓ, of the closures of the four ±-activation regions adjacent to x=p_ij^ℓ.
Fix i,j. Fix a ternary labeling s ∈{-1,0,1}^n_1×…×{-1,0,1}^n_d. Suppose there exists a point x ∈Ĥ_i^ℓ - 1∩Ĥ_j^ℓ with ternary labeling s=s_x.
By supertransversality, every non-empty bent hyperplane besides Ĥ_1^ℓ-1 and Ĥ_1^ℓ has positive minimal distance to x. It follows that a sufficiently small neighborhood of x contains points only in the closures of the four ±-activation regions adjacent to x.
By permuting neurons in layers ℓ-1 and ℓ if necessary, we may assume without loss of generality that i=j=1. It follows that the first component of each of the ternary labels s_x^ℓ-1 and s_x^ℓ is 0. Accordingly, the ternary labelings of the four ±-activation regions adjacent to x agree with those of x except at the first coordinates of s^ℓ-1, s^ℓ . It is therefore natural to label the adjacent ±-activation regions by ++, +-, -+, – according to whether the 1st coordinate of s^ℓ-1, s^ℓ is ± 1. Let x_++, x_+-, x_-+, x_– be points in the corresponding ±-activation regions adjacent to x, and let s_++, s_+-, s_-+, s_– be the corresponding ternary labelings.
The assumption that Γ_x ∖Γ_x,* is non-empty tells us that there is at least one open, complete path at x, which implies that each of the ternary labelings s_x^1, …, s_x^d has at least one +1 component. In other words, there is at least one neuron active in each layer at the input x = p^ℓ_ij.
Let ℱ_s_++, ℱ_s_+-, ℱ_s_-+, ℱ_s_– be the vector of polynomials as in Remark <ref>.
Lemma <ref> tells us how to compute these four polynomials.
We will show that the vectors of polynomials ℱ_s_++, ℱ_s_+-, ℱ_s_-+, ℱ_s_–
are pairwise distinct. I.e., viewing the summands of the polynomial components as consisting of a coefficient that is an algebraic expression in θ and a variable x_i, we will show that different polynomials have different coefficients.
Then we let B_i,j,s denote the set of parameters θ such that two or more of the restrictions ℱ_s_++(θ, ·), ℱ_s_+-(θ,·), ℱ_s_-+(θ,·), ℱ_s_–(θ, ·) coincide. The set B_i,j,s is an algebraic set (a finite union of zero sets of polynomials) and hence has measure 0. It follows that the set B := ⋃_i,j,s B_i,j,s is an algebraic set of measure 0 that has the desired properties.
Thus, we turn to proving that ℱ_s_++, ℱ_s_+-, ℱ_s_-+, ℱ_s_– are pairwise distinct polynomials. Explicitly, let v_1^ℓ-1 (resp., v_1^ℓ) be the first ordinary vertex in layer ℓ -1 (resp., in layer ℓ). Consider the set of paths in Γ_x_++ passing through an ordinary vertex from layer ℓ - 1 and an ordinary vertex from layer ℓ. It is immediate that this set can be decomposed into the disjoint union of:
* the set of paths through both v_1^ℓ-1 and v_1^ℓ, which we will denote by Γ_11
* the set of paths through v_1^ℓ-1 and not v_1^ℓ, which we will denote by Γ_1*
* the set of paths through v_1^ℓ and not v_1^ℓ-1, which we will denote by Γ_*1
* the set of paths through neither v_1^ℓ-1 nor v_1^ℓ, which we will denote by Γ_**
Now note that Γ_x_– = Γ_x. Since Γ_x is non-empty each of Γ_x_-+, Γ_x_+-, Γ_x_++ is non-empty as well, which implies that the polynomial in (θ,x) associated to each of these sets is nonzero.
Next, note that Γ_x_+-= Γ_x_–∪Γ_1*. Since there is at least one neuron in each layer active at x_+-, the set Γ_1* is non-empty, and hence the polynomial
∑_γ∈Γ_1* m(γ)
is nonzero. This tells us that the polynomials ℱ_s_– and ℱ_s_+- associated to Γ_x_– and Γ_x_+- are distinct (as functions of two variables θ and x, and affine-linear in x).
Similarly, the polynomial associated to Γ_x_-+ is distinct from that associated to Γ_x_–.
Indeed, the fact that each of Γ_11, Γ_1*, Γ_*1, and Γ_** contains a path (and hence a monomial containing a distinct weight) not present in the others implies, by an analogous argument, that the polynomials associated to Γ_x_++, Γ_x_+-, Γ_x_-+, Γ_x_– are all pairwise distinct.
In <cit.>, it is proved that for a generic, supertransversal parameter θ, the map
s: 𝒞(F_θ) →{-1,0,+1}^n_1 + … + n_d
that assigns to each cell C of the polyhedral complex, 𝒞(F_θ), its ternary activation pattern, s_C, is well-defined, injective, and has the property that C is a k-cell of 𝒞(F) if and only if s(C) has exactly n_0 - k entries which are 0.
That is, there is no ambiguity in defining a ternary activation pattern for a cell C of 𝒞(F_θ), each possible ternary activation pattern s ∈{-1,0,+1}^n_1 + … + n_d is in the image of at most one polyhedral set C in 𝒞(F_θ), and the dimension of C as a polyhedral set is n_0 -k, where k is the number of 0's
in s_C.
Moreover, we state the following additional result (implicit in <cit.>), which tells us that the presence of a ternary activation pattern is stable under (almost all) sufficiently small perturbations of the parameter:[Note that the absence of a ternary activation pattern is not necessarily stable in this way. See the definition of combinatorial stability and related discussion in <cit.>.]
Let θ' ∈Ω be a generic, supertransversal parameter, C' ∈𝒞(F_θ') a cell in the corresponding polyhedral complex and s_C' its associated ternary activation pattern.
There exists an open neighborhood N of θ' ∈Ω
such that for each θ∈ N, θ is generic, supertransversal, and there exists a non-empty cell C in 𝒞(F_θ) with ternary activation pattern s_C = s_C'.
The assumption that θ' is generic and supertransversal tells us that the ternary activation pattern of a cell C' in 𝒞(F_θ') gives us a precise recipe for realizing C' as the intersection of bent hyperplanes and “bent" half-spaces.[The complement of a bent hyperplane is the union of at most two open connected components. These are what we mean by bent half-spaces. Note that a bent hyperplane may be empty, in which case exactly one of the two bent half-spaces is also empty.] Moreover, if C' has dimension (n_0-k), s_C' will have k 0's (corresponding to k intersecting bent hyperplanes) and (n_0-k) ± 1's (corresponding to (n_0 - k) intersecting bent half-spaces). We also note (cf. Lem. 12 of <cit.> and Lemma <ref>) that the set of generic, supertransversal parameters is full measure in parameter space.
Now suppose that C' is a non-empty cell in 𝒞(F_θ'), and p' is a point in the interior of C'.
Let ℋ_0 (resp., ℋ_±) denote the set of bent hyperplanes of F_θ' associated to ternary label 0 (resp., ternary labels ± 1) in s_C'.
By transversality on cells, there is an open neighborhood N_0 of the subset of the parameters defining the bent hyperplanes in ℋ_0 for which every parameter θ∈ N_0 defines a collection of |ℋ_0| bent hyperplanes with intersection that is both transverse on cells and non-empty.
Moreover, since p' has positive minimal distance to every bent hyperplane in ℋ_±, and every such bent hyperplane is closed (though not necessarily compact), there is some positive δ for which a neighborhood of p' of radius δ contains only the bent hyperplanes in ℋ_0. This implies that there is a sufficiently small open neighborhood N_± of the parameters defining the bent hyperplanes in ℋ_± for which C' is in the same bent half-space for the bent hyperplanes in ℋ_± for every parameter in N_±.
Letting N=N_0 ∩ N_± and further restricting to a neighborhood with generic parameters (Lemma <ref>) if necessary, we obtain a neighborhood N of θ' ∈Ω for which every θ∈ N is generic and supertransversal, and 𝒞(F_θ) contains a non-empty cell C with s_C = s_C', as desired.
Recall (Definition <ref>) that a generic, supertransversal parameter θ satisfies TPIC if every pair of bent hyperplanes in adjacent layers has non-empty transverse intersection.
For any architecture, TPIC is an open condition. That is, if θ∈Ω is a generic, supertransversal parameter satisfying TPIC, then there exists an open neighborhood of θ∈Ω on which all parameters satisfy TPIC.
The proof of Lemma <ref> showed that for a parameter θ and point x in the intersection of two bent hyperplanes, the polynomials for the four associated sign sequences are distinct. However, it did not show that there is a neighborhood N ⊂Ω of θ such that all four sign sequences are actually realized on ℝ^n_0 near x for all θ' ∈ N. The next Lemma combines the persistence of cells realizing sign sequences given by Proposition <ref> with Lemma <ref>.
Suppose θ' ∈Ω is a generic, supertransversal parameter, let x = p_ij^ℓ be a point in Ĥ_i^ℓ-1∩Ĥ_j^ℓ, and let T_ij^ℓ be the union of the closures of the four ±-activation regions adjacent to x=p_ij^ℓ (as in Lemma <ref>). If Γ_x∖Γ_x,* is non-empty, then there is an open neighborhood N of θ' ∈Ω and an algebraic, measure 0 set S ⊂ N for which every θ∈ N ∖ S satisfies:
* There are four non-empty ±-activation regions of F_θ with the same ternary labelings as those in T_ij^ℓ
* Letting T_ij^ℓ(θ) denote the (non-empty) closure of these four ± activation regions, LRA is satisfied on T_ij^ℓ(θ).
Proposition <ref> tells us that there is an open neighborhood of θ' for which each θ in the neighborhood satisfies (i).
Moreover Lemma 14 of <cit.> and Lemmas <ref> and <ref> tell us that away from a closed (algebraic) set of measure 0, the parameters θ in this open neighborhood satisfy LRA on T_ij^ℓ(θ), as desired.
Let θ∈Ω be a generic, supertransversal parameter satisfying TPIC. For each i,j,ℓ, choose a point p_ij^ℓ∈Ĥ^(ℓ-1)_i ∩Ĥ^ℓ_j. Let T_i,j^ℓ denote the union of the closures of the four ±–activation regions adjacent to p_ij^ℓ, and let T = ⋃_i,j,ℓ T_i,j^ℓ. If F_θ satisfies LRA on T, then θ can be recovered from F_θ up to permutation and positive-rescaling.
The proof of Theorem 2 and associated algorithm in <cit.> require only that the LRA is satisfied in the (closures of) the four adjacent ±-activation regions for one point in each relevant transverse pairwise intersection.
Lemma <ref> tells us that in order for a parameter satisfying TPIC to admit no hidden symmetries, it need not satisfy LRA everywhere but only on the union of the closures of all activation regions near the intersection points (the set T in the statement of Lemma <ref>). Accordingly, we will say that a parameter θ∈Ω satisfying the assumptions of Lemma <ref> satisfies TPIC and LRA on a neighborhood of the pairwise intersections.
§ MAIN THEOREM AND PROOF
Let (n_0, …, n_d) be a neural network architecture satisfying (n_0 = k) ≤ n_ℓ for all ℓ, and let Ω = ℝ^D denote its parameter space. There exists a positive measure subset Y ⊂Ω for which each θ∈ Y satisfies TPIC and LRA on a neighborhood of the pairwise intersections, hence has no hidden symmetries.
In the course of the proof we will need the following additional notation. Let 𝒜^ℓ = {H_1^ℓ, …, H_n_ℓ^ℓ} be a hyperplane arrangement in ℝ^n_ℓ-1. If the initial coordinate k–plane ℝ^n_ℓ-1_k is transverse to H_i^ℓ, then the intersection, H_i^ℓ∩ℝ^n_ℓ-1_k, is a hyperplane in ℝ^n_ℓ-1_k. In this case we will use:
K_i^ℓ:= H_i^ℓ⋔ℝ^n_ℓ-1_k
to denote this hyperplane in ℝ^n_ℓ-1_k. If 𝒜^ℓ⋔_c ℝ^n_ℓ-1_k, then this implies that all of the hyperplanes in 𝒜^ℓ intersect ℝ^n_ℓ-1_k transversely, and we will use 𝒦^ℓ to denote the corresponding hyperplane arrangement in ℝ^n_ℓ-1_k.
We now proceed to prove the theorem by construction. Our strategy will be to find a particular generic, supertransversal parameter θ' ∈Ω satisfying TPIC and LRA on a neighborhood of the intersections. It will then follow by Lemmas <ref> and <ref> that every parameter θ away from a measure zero set in a neighborhood of θ' satisfies TPIC and LRA on a neighborhood of the intersections. The conclusion of the theorem will then follow.
Our construction will be by induction on d. We will find it convenient to prove the following strictly stronger (and unavoidably technical) conclusion, since it helps with the inductive construction: There exists a generic, supertransversal choice of parameters whose associated sequence of polyhedral refinements 𝒞(F_1 = F_(1)) ≽…≽𝒞(F=F_(d)) contains a nested sequence of unbounded k–cells 𝒮_1 ⊇…⊇𝒮_d for which the ℓ-th ternary labeling on 𝒮_ℓ is s^ℓ = (+1, …, +1_k, -1, … -1_n_ℓ - k) for all ℓ≤ d and which satisfies the additional conditions that
* 𝒮_ℓ⋔_c (Ĥ_i^ℓ + 1⋔_c Ĥ_j^ℓ + 2) ≠∅ for all i,j when ℓ≤ d-2, and
* ℝ^n_ℓ_k ⋔_c 𝒞(𝒜^ℓ+1), and the preimage of the bounded subcomplex of 𝒞(𝒦^ℓ+1) under the map F_(ℓ) is contained in the interior of 𝒮_ℓ for all ℓ≤ d-1.
Note that condition (i) above implies TPIC, and the assumption that the ℓth ternary labeling on 𝒮_ℓ has k +1's for all ℓ is enough to guarantee that for each pairwise intersection x = p_ij^ℓ the set Γ_x ∖Γ_x,* referenced in Lemma <ref> is non-empty, hence LRA will be satisfied in a neighborhood of the intersections.
When d=1, choose 𝒜^1 = {H^1_1, …, H^1_n_1} to be any generic arrangement of hyperplanes. Choose any unbounded k–cell of
𝒞(𝒜^1) and call it 𝒮_1. Note that we can alter the ordering and co-orientations on the hyperplanes of 𝒜^1 (without affecting the arrangement) to ensure that the first k neurons of F^1 are active on 𝒮_1 and the remainder are inactive. That is, the ternary labeling on 𝒮_1 is s^1 = (+1, …, +1_k, -1, … -1_n_1 - k). The rest of the conditions are vacuously true.
Now suppose d=2. By Corollary <ref>, we know that F^1 restricted to the interior of 𝒮_1 is a homeomorphism onto the interior of the k–dimensional polyhedral set F^1(𝒮_1) in 𝕆^≥ 0_k ⊆ℝ^n_1. Moreover, F^1(𝒮_1) is unbounded by Proposition <ref>. Lemma <ref> then tells us that there exists a positive-axis sufficiently high-bias hyperplane H ⊂ℝ^n_1 for which F^1(𝒮_1) ⋔_c H ≠∅.
Proposition <ref> ensures we can choose sufficiently small perturbations H_1^2, …, H_n_2^2 of H so that all of the hyperplanes in 𝒜^2 are transverse to 𝒮_1 ⊆ℝ^n_1, the parameters associated to 𝒜^1, 𝒜^2 are generic and supertransversal, and the bounded subcomplex of the restricted hyperplane arrangement 𝒦^2 = 𝒜^2 ∩ℝ^n_1_k is contained in the interior of F^1(𝒮_1). Another application of Corollary <ref> allows us to conclude that the preimage of the bounded subcomplex of 𝒞(𝒦^2) is contained in the interior of 𝒮_1, and hence the technical inductive conclusion (ii) is satisfied. Since d = 2, the technical inductive conclusion (i) is vacuous, so the d=2 case is proven.
Now assume d > 2. By the inductive assumption, there exists a generic, supertransversal choice of parameters for F^1, …, F^d-1 for which the technical inductive conditions above are satisfied for all ℓ up through d-1. Therefore, we need only choose generic parameters for F^d that are supertransversal to all previous choices of parameters and for which in the polyhedral refinement 𝒞(F_(d-1)) of 𝒞(F_(d-2)) we have identified an unbounded k–cell 𝒮_d-1⊆𝒮_d-2 with ternary labeling s^d-1 = (+, …, +_k, -, … -_n_d-1 - k),
* 𝒮_d-2⋔_c (Ĥ_i^d-1⋔_c Ĥ_j^d) ≠∅ for all i,j, and
* ℝ^n_d-1_k ⋔_c 𝒞(𝒜^d), and the preimage of the bounded subcomplex of 𝒞(𝒦^d) under the map F_(d-1) is contained in the interior of 𝒮_d-1.
Proceed by choosing any unbounded k–cell contained in 𝒮_d-2 in the polyhedral refinement 𝒞(F_(d-1)) and call it 𝒮_d-1. As before, we are free to choose the ordering and co-orientations on the hyperplanes of 𝒜^d-1 without altering 𝒜̂^d-1 or the polyhedral refinement 𝒞(F_(d-1)). So we can arrange for 𝒮_d-1 to have the desired ternary labeling.
By Corollary <ref> we know that F_(d-1) restricted to the interior of 𝒮_d-1 is a homeomorphism onto the interior of an unbounded k–dimensional polyhedral set in 𝕆^≥ 0_k ⊆ℝ^n_d-1, and we can therefore choose a sufficiently high bias positive-axis hyperplane H in ℝ^n_d-1 so that F_(d-1)(𝒮_d-1) ⋔_c H ≠∅. By Proposition <ref> we can find small perturbations H_1^d, …, H_n_d^d such that the preimage of the bounded subcomplex of the restricted generic hyperplane arrangement is contained in F_(d-1)(𝒮_d-1), which we may assume is unbounded by Proposition <ref>.
Moreover, Lemma <ref> tells us that since H ⊆ℝ^n_d-1 is a sufficiently high bias positive-axis hyperplane,
F^d-1(H^d-1_i) ⋔ H ≠∅
for all i. Since non-empty transverse intersection is an open condition, we also know that
F^d-1(H^d-1_i) ⋔_c H^d_j ≠∅
for all i,j. An application of Lemma <ref> then tells us that
H^d-1_i ⋔H̀^d_j ≠∅
for all i,j.
We would now like to conclude that condition (i) is satisfied for ℓ = d-2, but although we know that F_(d-2)(𝒮_d-2) ⋔_c H_i^d-1≠∅ and H^d-1_i ⋔_c H̀^d_j ≠∅, we don't yet know that H^d-1_i ∩H̀^d_j is contained in the unbounded polyhedral set F_(d-2)(𝒮_d-2), so we cannot yet conclude that the three-fold intersections 𝒮_d-2⋔_c (Ĥ_i^d-1⋔_c Ĥ_j^d) are non-empty.
To arrange for this, choose one point of intersection from H^d-1_i ∩H̀^d_j for each i,j. This is a finite, hence bounded, set. Call it B.
We now appeal to Lemma <ref>, which tells us that there is some vector v such that the translate B + v is contained in the interior of F_(d-2)(𝒮_d-2). Since we can achieve this translation by altering just the biases of F^d and F^d-1, we have now arranged that condition (i) is satisfied. The inductive proof that our construction satisfies TPIC is now complete, and hence a positive measure subset of Ω has no hidden symmetries, as desired.
§ FUNCTIONAL DIMENSION, FIBERS, AND SYMMETRIES
Roughly speaking, the functional dimension of a parameter θ∈Ω_n_0,…,n_d is the dimension of the space of functions F_θ realizable by infinitesimally perturbing θ. We will now make this more precise.
Suppose Z = {z_1,…,z_k} is a finite collection of points in the domain ℝ^n_0 of F_θ. For any θ, we may record the result of evaluating the map F_θ at all k points of Z as a single “unrolled” vector, i.e.
E_Z(θ) := (F_θ(z_1),…,F_θ(z_k)) ∈ℝ^kn_d.
We can then measure how many degrees of freedom we have to vary this data locally at θ by considering the rank of the total (Jacobian) derivative of the map θ↦ E_Z(θ), rank(JE_Z(θ)).
Of course, because ReLU is not differentiable at 0, there is a (Lebesgue) measure 0 set of pairs (θ,x) in Ω_n_0,…,n_d×ℝ^n_0 at which the total derivative does not exist. Consequently, we will wish to restrict our attention to pairs (θ,x) at which all relevant partial derivatives exist.
A point x ∈ℝ^n_0 is parametrically smooth for a parameter θ∈Ω_n_0,…,n_d if (θ,x) is a smooth point for the parameterized family
ℱ: Ω_n_0,…,n_d×ℝ^n_0→ℝ^n_d defined by
ℱ(θ,x) = F_θ(x), where F_θ:ℝ^n_0→ℝ^n_d is the neural network map determined by the parameter θ.
The functional dimension of a parameter θ∈Ω_n_0,…,n_d is
dim_fun(θ) := sup{rank(JE_Z(θ)) | Z⊂ℝ^n_0 is a finite set of parametrically smooth points for θ}
where the supremum is taken over all sets Z consisting of finitely many points in ℝ^n_0.
In practice, in experiments such as those in <ref>, we ignore the issue of differentiability, assuming that all points in a randomly selected set Z ⊂ℝ^n_0 are parametrically smooth for the parameter; the assumption is supported by the fact that ℱ is smooth except on a set of measure 0.
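For readers who want to reproduce this kind of estimate, the following self-contained numpy sketch (ours) approximates dim_fun(θ) as the numerical rank of a finite-difference Jacobian of the unrolled evaluation map E_Z; the architecture, the number of sample points, the step size, and the rank tolerance are illustrative assumptions rather than the settings used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)
arch = [2, 5, 5, 1]                       # small illustrative architecture (n_0, ..., n_d)
shapes = list(zip(arch[:-1], arch[1:]))
D = sum(n_in * n_out + n_out for n_in, n_out in shapes)   # dimension of parameter space

def unpack(theta):
    """Split the flat parameter vector into weight matrices and bias vectors."""
    Ws, bs, i = [], [], 0
    for n_in, n_out in shapes:
        Ws.append(theta[i:i + n_in * n_out].reshape(n_out, n_in)); i += n_in * n_out
        bs.append(theta[i:i + n_out]); i += n_out
    return Ws, bs

def net(theta, X):
    """Batched ReLU network F_theta applied to the rows of X."""
    Ws, bs = unpack(theta)
    A = X
    for l, (Wl, bl) in enumerate(zip(Ws, bs)):
        Z = A @ Wl.T + bl
        A = Z if l == len(Ws) - 1 else np.maximum(Z, 0.0)
    return A

theta = rng.normal(size=D)
Z = rng.normal(size=(100, arch[0]))       # finite sample of (assumed smooth) domain points

def E(theta):                             # unrolled evaluation map theta -> R^{|Z| * n_d}
    return net(theta, Z).ravel()

# Central finite differences ignore the measure-zero nondifferentiable set, as in the text.
eps = 1e-6
J = np.stack([(E(theta + eps * e) - E(theta - eps * e)) / (2 * eps) for e in np.eye(D)],
             axis=1)                      # Jacobian of E at theta, shape (|Z|*n_d, D)
sv = np.linalg.svd(J, compute_uv=False)
rank = int(np.sum(sv > 1e-6 * sv[0]))
print("approximate functional dimension:", rank, "of", D, "raw parameters")
```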
The fiber (with respect to the realization map) of a parameter θ∈Ω is the set {θ̃∈Ω| F_θ̃ = F_θ}.
Recall that elements of (P), the set of permutations, and elements of (S), the scalings, act on Ω. Moreover, these actions commute, so the semigroup generated by (P) and (S) equals {p ∘ s | p ∈(P), s ∈(S)}. The following definition formalizes the notion that a hidden symmetry describes two parameters that define the same function, but do not differ by elements of (P) and (S).
A parameter θ∈Ω has no hidden symmetries if the semigroup generated by (P) and (S) acts transitively on the fiber of θ.
In other words, θ has no hidden symmetries if, given any two parameters θ_1 and θ_2 such that F_θ_1 =F_θ_2 = F_θ, there exists p ∈(P) and s ∈(S) such that p ∘ s(θ_1) = θ_2.
* A pointwise symmetry of θ is a permutation (i.e a bijection, not necessarily continuous) of the fiber of θ.
* A local symmetry of θ is a homeomorphism of the fiber of θ equipped with the subspace topology (as a subset of Ω).
* A global symmetry of Ω is a homeomorphism T:Ω→Ω such that F_T(θ) = F_θ for all θ∈Ω.
As shown in <cit.>, there exists a fiber and a permutation of that fiber that cannot be extended to a homeomorphism of an open neighborhood of that fiber. This is because it is possible for a fiber to contain two parameters θ_1 and θ_2 (of the same architecture) such that any arbitrarily small neighborhood of θ_1 gives rise to functions that cannot be realized by parameters in arbitrarily small neighborhoods of θ_2. Said otherwise, there exist pointwise symmetries that cannot be extended to local symmetries. Our definition of no hidden symmetries (Definition <ref>) is a statement about pointwise symmetries – that every pointwise symmetry of θ is the restriction to the fiber of θ of an element p ∘ s of the group of global symmetries generated by (P) and (S).
The theoretical upper bound on functional dimension (<cit.>) comes from taking account of (only) the well-known global symmetries (P) and (S). The intuition is that the set of functions realizable by parameters in a small neighborhood U of θ should be modeled by the quotient of U by the equivalence relation defined by (P) and (S).
Let U be an open ball in Ω such that any two points u_1, u_2 ∈ U satisfy F_u_1 = F_u_2 if and only if u_2 = p ∘ s (u_1) for some p ∈(P) and s ∈(S). Then the functional dimension of any parameter θ∈ U attains the theoretical upper bound.
Denote by ∼ the equivalence relation on U defined by the semigroup generated by (P) and (S). Denote by ℱ|_U the set of functions realizable by parameters in U, equipped with the compact-open topology.
The condition that F_u_1 = F_u_2 if and only if u_2 = p ∘ s (u_1) for some p ∈(P) and s ∈(S) is equivalent to the statement that the quotient space U / ∼ is homeomorphic to ℱ|_U. It follows that for any parameter θ∈ U, dim_fun(θ) is the Euclidean dimension of the quotient space U / ∼. The result follows.
§.§ Proof of Proposition <ref>
The following Lemma will be used to prove Proposition <ref>.
Let S be a hyperplane in ℝ^n. Let A:ℝ^n→ℝ^1 be an affine-linear map given by
A(x⃗) = x⃗·n⃗_H - b_H.
Let o⃗∈ℝ^n be a vector orthogonal to S (i.e. if s⃗_1, s⃗_2 are two points in S, then (s⃗_2-s⃗_1) ·o⃗ = 0). For each t ∈ℝ, define the affine-linear map A_t:ℝ^n →ℝ by
A_t(x⃗) = x⃗· (n⃗_H + t o⃗) - b_H - ts⃗_H ·o⃗
for any fixed point s⃗_H ∈ S ∩{x | A(x) = 0}.
Then A and A_t coincide on S.
Since A, A_t are affine-linear maps, it suffices to show that they agree at a point (in particular, at the point s⃗_H) and that
A(s⃗_1) - A(s⃗_2) = A_t(s⃗_1) - A_t(s⃗_2)
for all s⃗_1,s⃗_2 ∈ S.
First, observe that
A(s⃗_H) = s⃗_H ·n⃗_H - b_H = 0
by definition, and
A_t(s⃗_H) = s⃗_H · (n⃗_H + t o⃗) - b_H - t s⃗_H ·o⃗ = (s⃗_H ·n⃗_H - b_H) + t s⃗_H ·o⃗ - t s⃗_H ·o⃗ = 0.
Next, for any points s_1,s_2 ∈ S,
A(s⃗_1) - A(s⃗_2) = (s⃗_1 - s⃗_2) ·n⃗_H
and
A_t(s⃗_1) - A_t(s⃗_2) = (s⃗_1 - s⃗_2) ·n⃗_H + (s⃗_1 - s⃗_2) ·o⃗t = (s⃗_1 - s⃗_2) ·n⃗_H
since o⃗ is perpendicular to (s⃗_1 - s⃗_2).
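A quick numerical sanity check of the lemma (ours; all dimensions and random choices are arbitrary) constructs S as the zero set of x·u = c, takes o = u, picks a point s_H ∈ S ∩ {A = 0}, and verifies that A and A_t agree on points of S for several values of t:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# the hyperplane S = {x : x.u = c}, with o = u orthogonal to S
u = rng.normal(size=n); u /= np.linalg.norm(u)
c = rng.normal()
o = u

# the affine map A(x) = x.n_H - b_H, and a point s_H in S with A(s_H) = 0
n_H, b_H = rng.normal(size=n), rng.normal()
s_H, *_ = np.linalg.lstsq(np.stack([u, n_H]), np.array([c, b_H]), rcond=None)

def A(x):
    return x @ n_H - b_H

def A_t(x, t):
    return x @ (n_H + t * o) - b_H - t * (s_H @ o)

# random points projected onto S
X = rng.normal(size=(5, n))
X_S = X - (X @ u - c)[:, None] * u        # rows satisfy x.u = c, i.e. they lie on S
for t in (0.5, -2.0, 10.0):
    print(np.max(np.abs(A(X_S) - A_t(X_S, t))))   # ~1e-15: A and A_t coincide on S
```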
Fix a nonzero vector o⃗ that is orthogonal to the hyperplane S.
For t ∈ℝ, let A_t be the map constructed in Lemma <ref>, set f_H_t := σ∘ A_t, and denote the co-oriented hyperplane associated to A_t by H_t. By construction, f_H_t and η coincide on S.
Denote by H_t^+ (resp. H^+) the closed nonnegative half-spaces associated to A_t (resp. A). It suffices to show that there exists ϵ > 0 such that |t| < ϵ implies
H_t^+ ∩Im_(k) = H^+ ∩Im_(k).
(The desired one-parameter family is then the family parametrized by t with |t| < ϵ).
For convenience, define G_k := F^k ∘…∘ F^1. Consider the complex
M := G_k(𝒞(G_k)) ∩ H^-.
(Here 𝒞(G_k) denotes the canonical polyhedral complex for G_k; we take the image (in ℝ^n_k) of this complex under G_k, which is itself a polyhedral complex, and then intersect it with the closed half-space H^-.)
By condition (<ref>),
Im_(k)∩{x |η(x) = 0} = Im_(k)∩ H ⊆ S.
Consequently, each cell of M either contains a cell in H ∩ S as a face, or is a positive distance away from H. Hence there exists ϵ_1 > 0 such that |t| < ϵ_1 implies the intersection of any bounded cell of M with H_t^+ is contained in S ∩ H or is empty.
By condition (<ref>), |M| does not contain any unbounded geometric rays parallel to H.
Consequently, there exists ϵ_2 > 0 such that |t| < ϵ_2 implies the intersection of any unbounded cell of M with H_t^+ is empty. Set ϵ = min{ϵ_1,ϵ_2}. Then when |t| < ϵ, H_t^+ ∩Im_(k) = H^+ ∩Im_(k).
§ SUPPLEMENTARY EXPERIMENTS
In this appendix, we consider the effect of varying the number m of sample points when approximating the functional dimension. For a fixed network architecture of depth 4 and width 5, Figure <ref> shows the fraction of networks attaining the maximum possible value for dim_fun(θ), as a function of m, where m is shown as a multiple of the maximum possible functional dimension. We observe that the curve is very flat in the region of x=100 (the value used in Section <ref>), suggesting that further increasing m would likely not change the results meaningfully.
Figures <ref> and <ref> below consider the effect of choosing a much smaller value of m. These figures are analogous to Figures <ref> and <ref>, but with m equal to twice instead of 100 times the maximum possible value for dim_fun(θ). Each figure aggregates results for 20,000 different choices of θ∈Ω. We note that the bounds obtained on the functional dimension are unsurprisingly somewhat weaker for this very low value of m, but the distributions of approximate functional dimension still show the same patterns as observed in Section <ref>, indicating the robustness of our conclusions.
Department of Physics and Astronomy, University of Padua,
Vicolo Osservatorio 3, I-35122 Padova (Italy)
[email protected]
[email protected]
The structural scaling relations (SSRs) of galaxies, the observed correlations between effective radius, effective surface intensity and velocity dispersion, are important tools for understanding how evolution proceeds.
In this paper we aim to demonstrate that the evolution of the SSRs back in time is governed by the combination of the virial theorem (VT) and the relation L = L'_0 σ^β, where the parameters β and L'_0 vary with time and from galaxy to galaxy.
Using the WINGS database for the galaxies at redshift z=0 and the Illustris-1 and Illustris-TNG databases of artificial galaxies, for the galaxies up to redshift z=4, we analyse the SSRs back in time and, by means of simple algebraic expressions for L'_0 and β (functions of time and other physical quantities), we derive the expected paths followed by galaxies in the various SSRs toward the distributions observed at z=0.
The distribution of galaxies in the SSRs is ultimately related to the evolution in luminosity and velocity dispersion that is empirically mirrored by the L = L'_0 σ^β law. Furthermore, the β parameter works as a thermometer of the virialization of a galaxy. This parameter can assume either positive or negative values, and its absolute value attains high values when the galaxy is close to the virial condition, while it tends to zero when the galaxy is far from it.
As the SSRs change with time, the method we are proposing allows us to decipher the temporal evolution of galaxies.
The scaling relations of galaxies back in time:
the road toward virialization
M. D'Onofrio^1 (Corresponding author: Mauro D'Onofrio)
C. Chiosi^1
Received January 2023; accepted June 2023
=================================================================================================
§ INTRODUCTION
The structural scaling relations (SSRs) of galaxies, the mutual correlations between the main measured structural parameters, such as the effective radius R_e, the effective surface intensity I_e, the total stellar mass M_s, the luminosity L, and the central velocity dispersion σ, have been recognized long ago as important tools for understanding the evolution of these stellar systems and for deriving fundamental cosmological information
<cit.>.
In particular, the SSRs of early-type galaxies (ETGs), that are much easier to obtain, have been used in the past as distance indicators for measuring the Hubble constant <cit.>, for testing the expansion of the Universe <cit.>, for mapping the velocity fields of galaxies <cit.>, and for measuring the variation of the mass-to-light ratio across time <cit.>.
Among the various SSRs, the Fundamental Plane (FP) relation for the ETGs <cit.> log R_e = a logσ + b log I_e + c, is probably the most studied one of the last 30 years. The tilt of this relation with respect to the prediction of the virial theorem (VT) has been the subject of many studies
<cit.>, invoking different physical mechanisms at work. We remember for example: i) the systematic change of the stellar mass-to-light ratio (M_s/L) <cit.>; ii) the structural and dynamical non-homology of ETGs <cit.>; iii) the dark matter content and distribution <cit.>; iv) the star formation history (SFH) and initial mass function (IMF) <cit.>; v) the effects of environment <cit.>; vi) the effects of dissipation-less mergers <cit.>; vii) the gas dissipation <cit.>; viii) the non regular sequence of mergers with progressively decreasing mass ratios <cit.>; ix) the multiple dry mergers of spiral galaxies <cit.>.
A similar long list can be compiled for the small intrinsic scatter of the FP (≈0.05 dex in the V-band), where among the claimed possible physical causes, we have: 1) the variation in the formation epoch; 2) the dark matter content; 3) the metallicity or age trends; 4) the variations of the mass-to-light ratio M/L <cit.>, etc..
Despite all these efforts, it is still unclear today why the FP is so tight and uniform when seen edge-on, while in its projections (i.e., in the I_e–R_e, the I_e–σ, and the R_e–σ planes) the distribution of galaxies presents well defined structures, where regions with large clumps of objects and big scatter are observed together with regions where no galaxies are present (the so-called Zone of Exclusion, ZoE), and where clearly non-linear distributions are well visible. The mutual dependence of the SSRs, the peculiar shape of the observed distributions and the link among the various FP projections have never found a single and robust explanation in which the tilt and the scatter of the FP are understood.
The same difficulties are encountered when we consider one particular projection of the FP: the Faber-Jackson relation (FJ) <cit.>, the correlation observed between the total luminosity L and the central velocity dispersion σ. Even in this case the observed trend is not that predicted by the VT. In addition to this, it has been shown that the FJ relation is not consistent with the distribution observed in the I_e–R_e plane, in the sense that it is not possible to transform one space into the other (and vice versa) adopting the observed classical correlations <cit.>.
In other words, the underlying questions behind the nature of the SSRs of galaxies are: can we find a single explanation for the tilt of the FP and the shapes of the distributions observed in its projections?
Is it possible to reconcile the FJ relation and the I_e–R_e plane?
Is it possible to account for the mutual relationship among the various projections of the FP? How are these planes linked to each other? How do the SSRs change going back in time? Why is the FP so tight?
A new perspective to simultaneously explain the tilt of the FP and the observed distributions of galaxies in the FP projection planes has been advanced by <cit.>. The novelty of their approach is based on the assumption that the luminosity of galaxies follows a relation of the form:
L(t) = L'_0(t) σ(t)^β(t).
where t is the time, σ the velocity dispersion, and the proportionality coefficient L'_0 and the exponent β are all function of time and, even more importantly, they can vary from galaxy to galaxy.
This empirical relation is formally equivalent to the FJ relation for ETGs, but has a profoundly different physical meaning. In this relation β and L'_0 are free time-dependent parameters that can vary considerably from galaxy to galaxy, according to the mass assembly history and the evolution of the stellar content of each object. The new relation mirrors the effects of the evolutionary history of a galaxy on its luminosity and stellar velocity dispersion, parameters that can both vary across time because galaxies evolve, merge, and interact.
In previous papers on this subject we called attention to some of the advantages offered by the joint use of the VT and the L = L'_0 σ^β law <cit.>. Accepting the idea of a variable β parameter, taking either positive or negative values, yields a simple explanation of the shifts of galaxies along the SSRs. Furthermore, it allows us to understand the physical reasons for the observed distributions of galaxies in the various projection planes. This approach seems to be the correct one because it is able to simultaneously account for: i) the tilt of the FP, ii) the existence of the ZoE, and iii) the shifts of galaxies in the FP projections, which are closely connected with the variations of σ and L through the β parameter.
In the present study we take advantage of what we learned from joining the VT and the L = L'_0 σ^β law to analyze how galaxies move along the SSRs at high redshift. To this aim, since current observational data at high redshift are not sufficient for our purposes, we adopt the data of the Illustris-1 <cit.> and the Illustris-TNG <cit.> simulations from z=0 up to z=4 and look at the possible changes in the properties of galaxies suggested by the simulations.
The paper is organized as follows: in Sec. <ref> we briefly describe the samples of galaxies (both real and simulated) we have used in our work, we present the basic SSRs at z=0 and we explain why one can trust in the simulated data at higher redshift. In Sec. <ref> we summarize the basic equations of the problem and in Sec. <ref> we show how the SSRs change with redshift and how the β parameter is able to account of the observed distributions at each epoch. In Sec. <ref> we discuss the β parameter as a thermometer of the virialization condition. In Sec. <ref> we discuss the history of mass assembly for a few test galaxies and investigate how β changes as function of time and history of mass assembly. In Sec. <ref> we present our conclusions. Finally, in Appendix <ref> we present a toy model of dry and wet mergers to estimate the variation of a galaxy luminosity as consequence of merger and companion star formation.
For the sake of internal consistency with the previous studies of this series, in our calculations with the Illustris-1 database we adopt the same values of the Λ-CDM cosmology used by <cit.>:
Ω_m = 0.2726, Ω_Λ= 0.7274, Ω_b = 0.0456, σ_8 = 0.809, n_s = 0.963, H_0 = 70.4 km s^-1 Mpc^-1. Slightly different cosmological parameters are used for the Illustris-TNG simulations: Ω_m = 0.3089, Ω_Λ= 0.6911, Ω_b = 0.0486, σ_8 = 0.816, n_s = 0.967, H_0 = 67.74 km s^-1 Mpc^-1 <cit.>. Since the systematic differences in M_s, R_e, L, I_e, and σ are either small or nearly irrelevant to the aims of this study, no re-scaling of the data is applied.
§ OBSERVATIONAL DATA AND MODEL GALAXIES
The observational data used here are the same adopted in our previous works on this subject <cit.>. The data at redshift z∼ 0 have been extracted from the WINGS and Omega-WINGS databases
<cit.>.
The samples used here do not have the same size in each plot because the spectroscopic database is only a sub-sample of the whole optical photometric sample (containing ∼ 32700 galaxies). For this reason, in some of our plots we can appreciate the distribution of the whole photometric sample, while in others only the subsamples with available measured stellar velocity dispersions or available stellar masses are visible.
The subsample with measured stellar masses M_s contains approximately 1200 galaxies. The masses were estimated by <cit.> by means of the spectral synthesis analysis. This provided the measurements of the stellar masses and of the star formation rate (SFR) at different cosmic epochs (among many other quantities).
The cross-match between the spectroscopic and photometric samples gives here only 480 ETGs with available masses and velocity dispersions. The sample spans a magnitude range from M_V∼-16 to M_V∼-23 mag, a central velocity dispersion range from σ∼50 to σ∼300 km s^-1, and masses from 10^8.5 to 10^12 solar masses[The measured parameters for the real galaxies are always shown in our plots with black dots. For the reason just explained, in each plot containing real observations the number of galaxies is not always the same.].
The morphological types of the galaxies were measured with the software MORPHOT for the whole photometric dataset. The final morphological type T is quite robust, coming from the combination of different approaches <cit.>.
The error on the measured parameters is ≃20%. The error bars are not shown in our plots, because they are much smaller than the observed range of variation of the structural parameters in the SSRs. The small size of the errors does not affect the overall distribution of galaxies. Furthermore, no quantitative analysis, such as fits of the data or statistical evaluations, has been made here.
The sample of real data at z∼0 is used only to demonstrate that the simulated galaxies reproduce the SSRs of the local objects quite well, and that there are therefore good reasons to trust the simulations when we look at the behavior of the SSRs at much higher redshift.
The analysis of the SSRs at high redshift is unfortunately still difficult for galaxies above z∼1.0, because the observational surveys at these redshifts contain only a few, sparse data. Some empirical evidence, however, exists for a varying tilt of the FP with redshift <cit.>.
Given such difficulties we decided to perform our analysis of the SSRs at high redshift using the database of artificial galaxies provided by the Illustris-1 and Illustris-TNG simulations. The hydrodynamic simulations, like the Illustris databases, are today the best models available to compare theory with observations, despite the fact that several problems still bias their results.
The first set of artificial galaxies, named Illustris-1, appeared in 2014 <cit.>. Later on,
a number of works demonstrated that Illustris-1 suffers from a number of problems: it yields an unrealistic population of ETGs with incorrect colours, it lacks morphological information, the sizes of the less massive galaxies are too big, and the star formation rates are not always comparable with observations <cit.>. In addition to this, there is the claim in the literature that Illustris-1 does not produce a realistic red sequence of galaxies, due to insufficient quenching of the star formation and consequently too few red galaxies <cit.>, while Illustris-TNG produces a much better result <cit.>. There is also the problem of the insufficient number of red galaxies with respect to the observed population of ETGs. As far as the internal structure of the Illustris-1 galaxies is concerned, <cit.> measured the Sérsic index, the axis ratio and the radii, and found that too few bulge-dominated objects are produced, in tension with observations. In contrast, the Illustris-TNG galaxies have much better internal structural parameters <cit.>.
For this reason Illustris-1 was superseded in 2018 by Illustris-TNG <cit.>.
In this work we considered only the subsample named Illustris-TNG-100, which is briefly referred to below as Illustris-TNG. This sample has approximately the same volume and resolution as Illustris-1 and used the same initial conditions (updated for the different cosmology) adopted by Illustris-1.
Among the many tabulated quantities provided for the galaxies of Illustris-1, we worked in particular with the V-band photometry, the mass and half-mass radii of the stellar particles (i.e., integrated stellar populations), for the most massive clusters, for which Cartesian comoving coordinates (x', y', z') are available.
We have analyzed in our previous papers the projected light and mass profiles using the z'=0 plane as a reference plane. Starting from the V magnitudes and positions of the stellar particles, we computed the effective radius R_e, the radial surface brightness profile in units of r/R_e, the best-fit Sérsic index, and the line-of-sight velocity dispersion.
The values of R_e were calculated considering only the star particles inside the friends-of-friends (FoF) groups of galaxies and the galaxies inside the FoF groups of clusters. We have set z'=0 to project the coordinates of the stellar particles inside galaxies, so that the velocity dispersion is calculated along the z'-axis. The sample does not contain galaxies with masses lower than 10^9 solar masses at z=0 because for these objects it was impossible to derive R_e. The total stellar mass has been used here.
The data-set for each value of the redshift extracted from the Illustris-1 simulation and used here contains ∼ 2400 galaxies of all morphological types. A full description of this data-set was given in <cit.> and <cit.>.
From the TNG-100 dataset we selected the first 1000 objects, ordered with decreasing stellar masses, coming out from the online Search Galaxy/Subhalo Catalog[See https://www.tng-project.org/data/]. In this case we used the half-mass stellar radius instead of the effective radius R_e. This radius is not so different from the effective radius and its use does not change in any way the conclusions reached here. The data have been extracted at redshift z=4, z=3, z=2, z=1 and z=0 in order to be consistent with those used for Illustris-1.
The choice of using both Illustris-1 and Illustris-TNG has the following reasons: i) we want to be consistent with our previous works on this subject; ii) the differences in M_s, R_e, I_e, L, and σ between Illustris-1 and Illustris-TNG do not significantly bias the values of the β and L'_0 parameters of the L = L'_0 σ^β law <cit.>; iii) the two data samples are in some way complementary, since Illustris-TNG has better measurements of the half-mass radii of less massive galaxies, while Illustris-1 is much richer in massive objects; iv) the two simulations agree on the physical parameters of the massive objects.
The detailed analysis of the differences between the Illustris-1 and Illustris-TNG data has not been addressed here because there are already several studies on this subject <cit.>. One of the issues of major tension between the two suites of models concerns the radii of the low-mass galaxies (roughly M_s ≤ 5×10^10 M_⊙), where the Illustris-TNG radii are about a factor of two smaller than those of Illustris-1, while above this mass they are nearly equal <cit.>.
Figure <ref> shows the distributions of the Illustris-1 (red lines) and Illustris-TNG100 (black lines) data for several parameters used here at two redshift epochs: z=0 (solid lines) and z=4 (dashed lines).
From the figure we see that the effective radii of Illustris-1 are systematically a bit larger than those of TNG100. Another significant difference is found in the distributions of the total luminosity and total stellar mass. As already said, the Illustris-1 sample does not contain objects with masses lower than 10^9 solar masses at z=0. It follows that the distribution of masses and luminosities appears different for the TNG sample: it is much smoother and flatter than that of Illustris-1, which appears to peak at approximately 9 dex for the objects at z=0. The range covered by luminosities and masses, however, is quite similar.
The other parameters appear more or less superposed. We will see later that such differences compromise neither the analysis done here nor the main conclusions.
The intrinsic problems of the simulations are of little relevance for our analysis because: i) we do not make use of the colors of galaxies or of the SFRs; ii) we have demonstrated <cit.> that the two samples of Illustris-1 and Illustris-TNG produce very similar distributions of the β and L'_0 parameters of the L = L'_0 σ^β law; iii) we will show here that the SSRs at high redshift of the two samples are very similar; iv) the point-mass view of the galaxies adopted here ensures that our analysis is not too much affected by the problems of the simulations.
For both Illustris-1 and Illustris-TNG we did not extract information on the morphology of the galaxies. For this reason, ETGs and late-type galaxies (LTGs) are mixed in our plots. This choice originates from the observation that the SSRs of ETGs and LTGs are almost identical. This is clearly seen in the two panels of Figure <ref>, which show two of these relations for the ETGs (open circles) and LTGs (filled black circles).
The two distributions are very well superposed in both diagrams. The only exception is that very large R_e are observed only for the most massive ETGs.
This is only partially in agreement with Fig. 11 of <cit.>, which shows that ETGs and LTGs follow quite similar trends, but with small systematic differences between the two morphological types. The data of the WINGS database do not suggest any significant difference in the SSRs of LTGs and ETGs. We believe that the effective radii measured in their work are affected by a systematic bias due to the method used to derive R_e. While for the WINGS galaxies the effective radius was measured as the circle enclosing half the total luminosity, in the Huertas-Company work the semi-major axis of the best-fitting Sérsic model was used. This choice can likely introduce a systematic effect due to the inclination of the galaxies and the intrinsic shape of the light profiles. In any case, the inclusion of LTGs is a potential source of bias.
We remark in addition that the completeness of the samples is not critical for the conclusions drawn here. In fact, we do not attempt any statistical analysis of the data, nor do we fit any distribution to derive correlations. The data are only used to qualitatively show how the distribution of galaxies in the various planes can change at the different cosmic epochs and how the L = L'_0 σ^β law and the β parameter can at least qualitatively account for the variations expected/observed across time.
The kind of analysis carried out here is indeed somewhat independent of the level of precision reached by the models of the different sources, because we are mainly interested in presenting the method for deciphering the information encrypted in the observational data of the SSRs. The only hypothesis made here is that we can trust the results of the simulations at high redshift. This hypothesis is based on the fact that the simulations are able to reproduce some features of the distributions seen in the FP projections at redshift z∼ 0 and the tilt of the FP at z∼1 (see below). The artificial galaxies match the observations quite well, reproducing the position of the brightest cluster galaxies and the existence of the Zone of Exclusion (ZoE). All this makes us confident that the simulations produce galaxies with luminosities, stellar masses and effective radii not too far from those of real galaxies.
Figure <ref> shows the four most important SSRs for the WINGS and Illustris data. The left upper panel plots the stellar mass M^* versus the effective radius R_e[The symbols M_s and M^* both used in this work always refer to the total stellar mass in solar units.]. The WINGS data (black dots), the Illustris-1 data (red dots) and the Illustris-TNG data (blue dots) at z=0 are well visible.
We note that the M^*–R_e relation is clearly non-linear. The galaxies of small masses are distributed nearly horizontally, while the brightest galaxies follow a tail with a slope close to 1. The real and simulated data nicely superimpose over the same range of mass, even if the effective radii of Illustris-1 for the less luminous galaxies are systematically greater than the observational ones. In contrast, Illustris-TNG gives much smaller radii for the low-mass galaxies. This is a well-known fact already discussed in our papers of this series <cit.>.
Both observations and simulations suggest the presence of the tail for the brightest ETGs, in which radii and masses are almost identical in observations and simulations. The different number of objects in the tail is due to different volumes sampled and to the way in which
the samples have been created: the WINGS and Illustris-1 datasets include only objects from clusters of galaxies, where large ETGs are frequent, while Illustris-TNG takes galaxies from the general field. In addition, the total volume of the surveys is different for WINGS, Illustris-1 and Illustris-TNG.
The right upper panel of Fig. <ref> shows the I_e–R_e plane obtained with the same data
(here the sample of WINGS galaxies is much smaller than in Fig. <ref> because only a subsample is involved). Also in this case the most important fact to note is that the simulations correctly reproduce the presence of the tail of the brightest ETGs, which is clearly separated from the cloudy distribution of the less luminous galaxies.
This tail, already seen in the original paper of <cit.>, has a slope close to -1 (that predicted by the VT) and has been attributed to the peculiar evolution of the brightest galaxies, which grow in mass by minor mergers <cit.>. Our conclusion is therefore that both simulations catch the presence of some peculiar features of the I_e–R_e plane: the cloudy distribution of the faint galaxies, the tail formed by the brightest ETGs, and the ZoE, the region totally empty of galaxies above the dashed black line in Fig. <ref>.
The lower panels of Fig. <ref> show the other projections of the FP: the I_e–σ and R_e–σ planes. Again we observe that both simulations are quite well superposed on the observational data. In particular, the simulations are able to reproduce the curvature observed in the two distributions.
The good agreement between observations and simulations at z=0 is a good starting point. It tells us that the simulations are able to reproduce the main features of the SSRs at z=0.
However, since our aim is to use simulations to infer the possible behavior of the SSRs at earlier cosmic epochs, we need at least one further proof that simulations are able to catch the structural parameters of galaxies at much higher redshifts. To prove this we have used the data of <cit.>, who have analyzed the FP at z∼1. In our Fig. <ref> we can appreciate that the FP at this redshift epoch coming from the Illustris data (red and blue circles as before) is in good agreement with the observed one (black filled circles). The tilt of the plane is in practice identical, with a value of the a coefficient lower than 1 (for the Coma cluster the tilt gives a∼1.2). The tilt is different from that measured for the local clusters, and this is an indication that there is an evolution of the structural parameters that seems to be reproduced by the simulations. Even in this case we note that the radii of the galaxies in the simulations are systematically a bit larger than those measured for the real galaxies, but this does not change the FP tilt of the simulated galaxies. Probably, since the total luminosity of the galaxies is quite well reproduced by the simulations, the combination of the remaining parameters is correct and the different effective radii simply change in such a way that the galaxies shift along the FP and not orthogonally to it.
In concluding this section, we observe that the artificial galaxies in the simulations are in quite good agreement with the real galaxies as far as the main structural parameters are concerned, even at much higher redshifts. The differences with respect to real galaxies mainly concern the stellar content, the colors and the star formation rates, but these differences do not seem to affect the general behavior of the SSRs. For this reason we believe that it is possible to extract some information on the evolution of galaxies by looking at the distributions of galaxies in the SSRs. When high-redshift data become available in good numbers, we will be able to better compare observations and simulations and extract useful information on the evolution of galaxies.
§ THE BASIC EQUATIONS OF OUR FRAMEWORK
Before starting the discussion of the main SSRs predicted for the most distant cosmic epochs, it is important to summarize here the main conclusions drawn by <cit.> using the combination of the VT and the L = L'_0 σ^β law[From here on we drop the time notation for simplicity.]. This combination is the key novelty of their approach and a necessary premise for understanding what follows. The two equations representing the VT and the L = L'_0 σ^β law are:
σ^2 = (G/k_v) (M_s/R_e)
σ^β = L/L'_0 = 2π I_e R^2_e/L'_0.
In these equations β and L'_0 are free time-dependent parameters that depend on the peculiar history of each object.
From these equations one can derive all the mutual relationships existing among the parameters M_s, R_e, L, I_e, σ characterizing a galaxy. We find:
I_e = \Pi\, R_e^{\gamma}
for the I_e-R_e plane, where
\gamma = \frac{(2/\beta) - (1/2)}{(1/2) - (1/\beta)}
and Π is a factor that depends on k_v, M/L, β, and L'_0. It is given by
\Pi = \left[ \left(\frac{2\pi}{L'_0}\right)^{1/\beta} \left(\frac{L}{M_s}\right)^{1/2} \left(\frac{k_v}{2\pi G}\right)^{1/2} \right]^{\frac{1}{1/2 - 1/\beta}}.
Then we have:
I_e = \left[ \frac{G}{k_v}\,\frac{L'_0}{2\pi}\, M_s\, \Pi^{3/\gamma} \right]^{\frac{\beta-2}{1+3/\gamma}} \sigma^{\frac{\beta-2}{1+3/\gamma}}
for the I_e-σ relation and
R_e = \left[ \frac{G}{k_v}\,\frac{L'_0}{2\pi}\,\frac{M_s}{\Pi} \right] \sigma^{\frac{\beta-2}{3+\gamma}}
for the R_e-σ relation.
In addition we have:
R_e = \left[ \left(\frac{G}{k_v}\right)^{\beta/2} \frac{L'_0}{2\pi}\,\frac{1}{\Pi} \right]^{\frac{2(\beta-2)}{\beta^2-6\beta+12}} M_s^{\frac{\beta^2-2\beta}{\beta^2-6\beta+12}}
for the R_e-M_s relation.
It is important to note here that in all these equations the slopes of the log relations depend only on β. This means that, when a galaxy changes its luminosity L and its velocity dispersion σ and β has a well defined value (either positive or negative), the effects of the motion in the L-σ plane are propagated into all the FP projections. In these planes the galaxies cannot move in arbitrary directions, but are forced to move only along the directions (slopes) predicted by the β parameter in the above equations. In this sense the β parameter is the link we are looking for between the FJ (and the FP) and the observed distributions in the FP projections.
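Since the directions of motion depend only on β, it is convenient to tabulate them. The following Python sketch simply transcribes the exponents of the above relations, exactly as they are written here, for a few illustrative values of β (β=1 and β=2 are excluded because the corresponding denominators vanish).

```python
import numpy as np

def slopes(beta):
    """Slopes of the log scaling relations as functions of beta only,
    following the relations given in the text."""
    gamma = (2.0/beta - 0.5) / (0.5 - 1.0/beta)     # I_e - R_e slope
    ie_sigma = (beta - 2.0) / (1.0 + 3.0/gamma)     # I_e - sigma slope
    re_sigma = (beta - 2.0) / (3.0 + gamma)         # R_e - sigma slope
    re_ms = (beta**2 - 2.0*beta) / (beta**2 - 6.0*beta + 12.0)  # R_e - M_s slope
    return gamma, ie_sigma, re_sigma, re_ms

for beta in (-50.0, -5.0, 0.5, 3.0, 5.0, 50.0):
    g, i_s, r_s, r_m = slopes(beta)
    print(f"beta={beta:6.1f}  gamma={g:7.2f}  I_e-sigma={i_s:7.2f}  "
          f"R_e-sigma={r_s:6.2f}  R_e-M_s={r_m:5.2f}")
```

For |β| → ∞ the I_e-R_e slope tends to -1 and the R_e-M_s slope to 1, the values quoted below for fully virialized galaxies.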
In addition, the combination of eqs. (<ref>) gives us another important equation. It is now possible to write a FP-like equation valid for each galaxy depending on the β and L'_0 parameters:
\log R_e = a \log\sigma + b \langle\mu\rangle_e + c
where the coefficients
a = \frac{2+\beta}{3}
b = 0.26
c = -10.0432 + 0.333\left(-\log\frac{G}{k_v} - \log\frac{M}{L} - 2\log(2\pi) - \log L'_0\right)
are written in terms of β and L'_0. We note that this is the equation of a plane whose slope depends on β and whose zero-point depends on L'_0. The similarity with the FP equation is clear. The novelty is that the FP is an equation derived from the fit of a distribution of real objects, while here each galaxy independently follows an equation formally identical to the classical FP but with a profoundly different physical meaning. In this case, since β and L'_0 are time dependent, the equation represents the instantaneous plane on which a generic galaxy is located in the log σ-⟨μ⟩_e-log R_e space and consequently in all its projections.
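A small sketch of how these instantaneous coefficients can be evaluated is given below; the input numbers are arbitrary placeholders, and the constant term is copied verbatim from the equation above (its zero point depends on the units adopted in the text).

```python
import numpy as np

def fp_coefficients(beta, logL0, log_G_over_kv, log_ML):
    """Coefficients of the instantaneous FP-like plane
    log R_e = a log sigma + b <mu>_e + c, as written in the text."""
    a = (2.0 + beta) / 3.0
    b = 0.26
    c = -10.0432 + 0.333 * (-log_G_over_kv - log_ML
                            - 2.0 * np.log10(2.0 * np.pi) - logL0)
    return a, b, c

# Arbitrary illustrative inputs (assumptions, not fitted values)
a, b, c = fp_coefficients(beta=3.0, logL0=1.5,
                          log_G_over_kv=np.log10(4.301e-6 / 5.0), log_ML=0.7)
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")
```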
Finally, the combination of the above equations allows us to determine the values of β and L'_0, the two critical evolutionary parameters. This is possible by writing the following equations:
\beta\left[\log I_e + \log\frac{G}{k_v} + \log\frac{M_s}{L} + \log(2\pi) + \log R_e\right] + 2\log L'_0 - 2\log(2\pi) - 4\log R_e = 0
\beta\log\sigma + \log L'_0 + 2\log\sigma + \log\frac{k_v}{G} - \log M_s - \log(2\pi) - \log I_e - \log R_e = 0.
Posing now:
A = \log I_e + \log\frac{G}{k_v} + \log\frac{M_s}{L} + \log(2\pi) + \log R_e
B = -2\log(2\pi) - 4\log R_e
A' = \log\sigma
B' = 2\log\sigma - \log\frac{G}{k_v} - \log M_s - \log(2\pi) - \log I_e - \log R_e
we obtain the following system:
A\beta + 2\log L'_0 + B = 0
A'\beta + \log L'_0 + B' = 0
with solutions:
\beta = \frac{-2\log L'_0 - B}{A}
\log L'_0 = \frac{A'B/A - B'}{1 - 2A'/A}.
The key result is that the parameters L, M_s, R_e, I_e, and σ of a galaxy fully determine the evolution that is encoded in the parameters β and L'_0.
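A minimal sketch of this inversion is given below; the observables are placeholder numbers, and the adopted values of G and k_v (and the corresponding units) are illustrative assumptions rather than the ones used for the samples discussed here.

```python
import numpy as np

log = np.log10
G_over_kv = 4.301e-6 / 5.0   # illustrative G/k_v in kpc (km/s)^2 / M_sun units

def beta_L0(M_s, R_e, L, I_e, sigma):
    """Solve the 2x2 linear system A*beta + 2*log L'_0 + B = 0,
    A'*beta + log L'_0 + B' = 0 for beta and log L'_0,
    with A, B, A', B' defined as in the text."""
    A  = log(I_e) + log(G_over_kv) + log(M_s / L) + log(2 * np.pi) + log(R_e)
    B  = -2 * log(2 * np.pi) - 4 * log(R_e)
    Ap = log(sigma)
    Bp = (2 * log(sigma) - log(G_over_kv) - log(M_s)
          - log(2 * np.pi) - log(I_e) - log(R_e))
    logL0 = (Ap * B / A - Bp) / (1.0 - 2.0 * Ap / A)
    beta = (-2.0 * logL0 - B) / A
    return beta, logL0

# Placeholder observables: M_s [M_sun], R_e [kpc], L [L_sun], I_e, sigma [km/s]
L_tot = 5e10
print(beta_L0(M_s=1e11, R_e=5.0, L=L_tot,
              I_e=L_tot / (2 * np.pi * 5.0**2), sigma=130.0))
```

Note that β and log L'_0 diverge when 1 - 2A'/A approaches zero, the condition discussed later as a signature of strict virialization.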
Considering that each structural parameter is known with a maximum error of ∼20%, the individual values of β cannot be trusted too much. On average, instead, we will show that the galaxies move in the SSRs only along the directions defined by β.
Given this premise, we proceed now to show the basic SSRs at much higher redshifts.
§ THE SSRS AT HIGH REDSHIFT
To explore the behavior of the SSRs at high redshifts we can only rely on simulations, because we do not have enough observational data for galaxies at high redshift.
Fortunately, despite the small systematic overestimate of the effective radii, the Illustris-1 and Illustris-TNG data are sufficiently good to be trusted even at high redshifts. Furthermore, as shown by <cit.>, both Illustris-1 and Illustris-TNG produce a very similar distribution for the β parameter.
Thanks to this, the simulated data can provide a reliable insight on the evolution of the SSRs with time. In the following we will show the results for both the Illustris-1 and Illustris-TNG samples currently available to us.
Figures <ref> and <ref> present the I_e-R_e plane from z=4 (upper left) to z=0 (bottom right) for Illustris-1 and Illustris-TNG, respectively. For Illustris-1 the whole sequence of redshifts is z=4 (upper left panel), z=3, z=2.2, z=1.6, z=1.0, z=0.6, z=0.2, z=0 (bottom right panel), as indicated. In all the panels, the colored dots indicate galaxies with β > 0 and the black points those with β < 0. Crosses and open squares indicate galaxies with SFR greater than or lower than the average <SFR>, respectively, at each redshift epoch.
The same color code is also used for the arrows indicating the slope corresponding to the mean value of β, calculated from eq. (<ref>) for each object of the simulation. This mean value provides approximately the direction of motion in this plane for most of the galaxies, as expected from eq. (<ref>).
The sequence of panels indicates that the tail, well visible at z=0 for the brightest galaxies, starts to appear at z∼ 1-1.5. This epoch probably corresponds to the time at which minor mergers, either with or without star formation, on already formed massive objects became the typical event, thus increasing both the mass and radius of galaxies <cit.>. It is interesting to note that the directions of the arrows, whose slope depends on eq. (<ref>) (see Table <ref>), flip progressively with z, in particular for the positive β's, assuming a value close to -1 (as predicted by the VT) approximately at z∼ 0.6, and remaining constant thereafter.
This slope gives the only possible direction of motion of galaxies in the plane at each epoch as the evolution proceeds and β changes. The brightest galaxies, which likely reached full virial equilibrium far in the past, are no longer affected by strong episodes of star formation, and start to move along this direction at z∼ 1-1.5, forming the tail we observe today. Notably, even the galaxies with β≤ 0 progressively reach the same slope. This happens because several objects have large negative β values. As we will see later, both positive and negative values of β are possible and, as demonstrated by <cit.>, this is a necessary condition for reproducing the distribution observed in one projection starting from that observed in another (and vice versa).
A further thing to note is that the galaxies with strong SFR (greater than <SFR>) have in general a positive β. When some object with negative β appears on top of the distribution, it has SFR > <SFR>. Only later on, when the present epoch is approached, do we start to see, in the upper part of the cloud, galaxies with negative β and SFR < <SFR> (the black open squares). These objects might be relatively small compact galaxies where star formation is over (a possible candidate for this class of galaxies could be M32). Notably, the galaxies with the highest surface brightness have positive β at high redshift, and only later is this region of the plot populated by objects with negative β and low SFR.
A very similar behavior is observed in Fig. <ref> when we use the Illustris-TNG data. As in Fig. <ref>, the colored dots indicate the galaxies with positive β and the arrows the mean value of β. With respect to the previous figure we note that the colored arrows do not change their directions very much. We attribute this behavior to the different sizes of the two samples.
For the Illustris-TNG data the arrows always have a slope close to -1, a fact that indicates quite large values of β. In any case, the trend of populating the upper region of high surface brightness with objects with negative β and low <SFR> is confirmed.
One might now ask what produces the tail of the brightest objects in the I_e-R_e plane, given that the arrows point approximately in the same direction at all redshift epochs. This can be better understood by looking at the other FP projections and
by taking into account that the physical mechanisms at work in small and large galaxies can be different.
Figure <ref> is even more impressive in showing how the changes of β across time determine the motions in the FP projections (the symbols and color code have the same meaning as in Fig. <ref>).
In the I_e-σ plane the curvature formed by the brightest galaxies is much more pronounced. We note again that, as β increases, the colored arrows point to the direction of the tail that we see at z∼ 0. In the figure we also note that the galaxies with negative β are preferentially at the bottom of the cloud distribution and their number decreases up to z∼0.6-1.0. Then they increase again in number and tend to crowd the top region of the distribution. As before, the upper region of the distribution with high I_e is populated by objects with high SFR at the most remote epochs, and only approaching z=0 do galaxies with negative β and low SFR appear on top of the distribution.
Again the TNG data (Fig. <ref>) give a similar picture of this plane. Now the change of direction due to positive and negative values of β is much more evident, and we understand that the tail originates when β increases and the galaxies progressively become more virialized.
Figures <ref> and <ref> display the R_e-σ plane with the Illustris-1 and Illustris-TNG data, respectively. Again the slopes of the arrows predicted by eqs. (<ref>), (<ref>), and (<ref>) are in good agreement with those inferred from the observed distribution of real galaxies and explain the tail formed by the bright galaxies.
The same can be said for the R_e-M_s plane (Figs. <ref> and <ref>). In both planes the tail formed by the brightest galaxies stands out clearly.
The slope of the tail in this plane is very close to 1, as predicted by the VT. The bottom right panel (that at z=0) shows in particular that objects with both large negative and large positive values of β begin to climb the tail as soon as their mass exceeds about 10^11 M_⊙.
As in the previous planes, the TNG data exhibit quite similar mean values for the slopes predicted at the different redshifts. The slope is close to 1, and this means that the majority of the galaxies are quite well virialized at z=0. A further notable thing is that the TNG100 data indicate the presence of quite massive objects with small radii, not visible in Illustris-1. These objects might be the class of compact massive galaxies with high I_e also visible in Figs. <ref> and <ref>. Notably, we can see that all these objects have negative β and very low SFR. In other words, they are isolated compact massive galaxies where star formation stopped a long time ago.
The picture we have illustrated here using the Illustris-1 and Illustris-TNG data clearly reveals a progressive trend of the galaxies toward full virial equilibrium, as indicated by the slopes of the arrows when |β| →∞. This condition is reached by the most massive galaxies approximately at z∼1.5-1.0.
In general the galaxies with the largest radii have β>0 both in simulations and observations. This behavior is compatible with the predictions for minor mergers, in which galaxies might increase their radius without significantly changing their mass and luminosity <cit.>.
Finally, we have to remark that the FJ relation does not change very much with redshift. As the redshift decreases from z=4 to z=0 through six intermediate steps, we observe that some galaxies have negative β and others have positive β. The fits of the observed distributions reveal that the slope of the FJ relation progressively decreases passing from nearly 4 (at z=4) to nearly 2 (at z=0).
One may legitimately ask why the scatter of the FJ relation does not increase with time if there are objects that move nearly perpendicularly to the trend indicated by the observed distribution. The same trend is visible with the TNG data (not plotted here).
We believe that the scatter cannot increase as a consequence of the merger activity because the maximum possible variation in luminosity that a galaxy might experience does not exceed a factor of two (when a galaxy approximately doubles its mass by merging with a similar object of the same mass and stellar content), which in log units corresponds to a shift of ∼ 0.3, very small compared with the scale spanned by the data values.
To support this statement, in Appendix <ref> we present a toy model predicting the effects that the total luminosity of a galaxy with mass M_1, age T_1 and luminosity L_1 would undergo as a consequence of a merger with another object of mass M_2, age T_2 and luminosity L_2. The event may or may not be followed by star formation engaging a certain amount of gas with mass M_3.
Using reasonable values for the masses and luminosities of the three components (see eq. (<ref>) in Appendix <ref>), we may expect that
the total luminosity first increases and then decreases on a timescale that depends on the amount of matter engaged in the burst of activity. In any case the luminosity evolution is fast up to a few 10^8 years after the burst (turnoff mass about 3 M_⊙), slows down up to 10^9 years (turnoff mass about 2 M_⊙), and then becomes even slower afterwards. The estimated fading rate of the luminosity, about |Δ (log L/L_⊙)| ≃ 0.015 per Gyr and per unit SSP mass, must be multiplied by 5.8 to get the real fading rate per Gyr <cit.>. Consequently it is very unlikely to catch a galaxy exactly at the time of its maximum luminosity. Equation (<ref>) allows us to quickly evaluate the effects of mergers with different combinations of masses and ages of the galaxies involved. However, the examples shown in Appendix <ref> demonstrate that, except for the case of a merger between two objects of comparable mass, in which the luminosity and mass of the resulting object are double the original ones, mergers among objects of different mass and age, likely undergoing some star formation during the merger, generate objects that in practice keep no trace of the merger but simply retain the properties (mass and luminosity) of the most massive component. More details are not of interest here.
The main conclusion of this section can be summarized as follows. The hypothesis that the VT and the L = L'_0 σ^β law work together to govern the evolution of mass M_s, luminosity L, radius R_e, surface brightness I_e and velocity dispersion σ leads to a coherent and self-consistent explanation of all the scale relations of galaxies, together with a reasonable explanation for the tilt of the FP, as demonstrated by <cit.>.
§ THE IMPORTANT ROLE OF β
To better understand the effects played by β it is necessary to think about the possible variations of I_e and R_e when L and σ vary in the L-σ plane. There are six possible changes of L and σ in this plane: σ either decreases, increases or remains constant, and the same holds for L. The effective relationship between the two variables depends in turn on β: e.g., when β is negative there is not necessarily a decrease in luminosity, and when β is positive a decrease in luminosity might also occur <cit.>. The ambiguity in the direction of evolution can only be solved by looking at the movements of the galaxies in the different SSRs, in particular by observing how R_e and I_e behave.
When the luminosity of a galaxy changes, both the effective radius R_e and the mean effective surface intensity I_e vary. This happens because R_e is not a true physical radius, like e.g. the virial radius (which depends only on the total mass), but is the radius of the circle that encloses half the total luminosity of the galaxy. Since galaxies have different stellar populations with different ages and metallicities, it is very unlikely that a change in luminosity does not change the whole shape of the luminosity profile and therefore the value of R_e. If the luminosity decreases passively, in general one could expect a decrease of R_e and an increase of I_e. On the other hand, if a shock induced by harassment or stripping induces an increase of L (and a small decrease in σ), we might expect an increase of R_e and a decrease of I_e.
The observed variations of these parameters depend strongly on the type of event that a galaxy is experiencing (stripping, shocks, feedback, merging, etc.).
In general, one should keep in mind that the three variables L, R_e and I_e are strongly coupled to each other and that even a small variation in L might result in large changes of R_e and I_e.
In summary, as already pointed out, the variations of the parameter β with time are responsible for all the changes observed in the FP projections. This means that the FP problem should be considered from an evolutionary point of view, where time plays an important role and the effects of evolution are visible in all the FP projections. The single SSRs are snapshots of an evolving situation. The L = L'_0 σ^β law captures this evolution in the right way, predicting the correct direction of motion of each galaxy in the basic diagnostic planes.
We now show that the parameter β changes with the cosmic epochs and that such variations are in turn related mainly to the change of the mean surface intensity I_e due to the natural variation of the star formation activity with time. We will see that β tends to be low when star formation is high and vice versa. A large scatter is however present at all epochs. Furthermore, we will show that β increases considerably if and when the galaxy attains the condition of full virialization, i.e., when the two variables M_s and R_e combine in such a way as to yield the measured velocity dispersion (i.e., that measured for the stellar content).
Figure <ref> shows the β-log(I_e) plane. The dots of different colors represent the galaxies at different redshifts, using the same color code as Fig. <ref>. From this plot it is clear that β increases and log(I_e) on average decreases when the cosmic epoch approaches z=0 (light gray dots). In the remote epochs (z=4) and up to z∼1.5 we observe an almost linear dependence of β on log(I_e), in which β ranges from 0 to ∼20. This is an indication that at such epochs the galaxies are still far from full virialization. The real data of WINGS (black dots) are very well superposed on the simulation data, showing a large spread with large positive and negative values of β.
This behavior of β is connected with the average star formation rate at the different cosmic epochs. This is clearly seen in Fig. <ref>, where we note that, when the SFR is high, the values of β are close to 0-10. The large scatter in β starts to be visible with the gray dots at z∼0.6-1.0, the same epoch at which we first saw the tail of high-luminosity galaxies in the SSRs.
Figure <ref> also helps to show that β attains large positive and negative values preferentially in galaxies with masses higher than ∼ 10^10 M_⊙.
Finally, Fig. <ref> shows β versus the quantity [log(2T) - log(Ω)], which is a proxy of the virial condition, being the difference between the kinetic and potential energy of the stellar systems. The figure clearly indicates that |β| increases, while β can be either positive or negative, when the difference of the two energies approaches zero[The difference predicted by the VT is 0, but the calculated energies depend on R_e, which is not exactly the virial radius.]. Note that at high redshift β remains very close to 0. This means that the galaxies are still far from virial equilibrium. In contrast, at low redshifts the peak of β falls in the interval 0 to 20 (z=1) and 0 to 50 (z=0), with larger and larger spreads toward both large positive and large negative values.
In Fig. <ref> we show the histogram of the number frequency distribution (N/N_tot) of β in the model galaxies at different redshifts. There is not much difference between the histograms of the Illustris-1 and Illustris-TNG samples. The most remarkable features to note are that (i) at high redshifts (z ≥ 2) the distribution peaks fall in the interval -4 ≤β≤ 0, with a small tail of positive values in the interval 0 ≤β≤ 4; (ii) the distribution gradually spreads to higher values of |β| at low redshifts (1 and 0), and both positive and negative values of β are present; (iii) finally, at low redshifts the peaks are visible both in the positive and negative range of the β values and |β| can attain large values.
The reason why we observe such a large dispersion in β is that the term 1 - 2A'/A in the denominator of eq. (<ref>) becomes very close to zero. Consequently both log(L'_0) and β diverge. Depending on the direction from which zero is approached, one can have either very large positive or very large negative values of β. As already discussed, this happens when the system is in conditions of strict virialization. The sign of β depends on the particular history of the variables M_s, R_e, L, and I_e, in other words on whether the term 2A'/A tends to 1 from below (β > 0) or from above (β<0). From an operational point of view we may define a "state close to strict virialization" when |β| > 20.
Notably, the Illustris-1 and Illustris-TNG models agree very well with the observational data (black dots in these panels). The inclusion of real dynamics and the hierarchical scenario provide much better conditions to bring the action of virialization into evidence. The hierarchical scenario, through mergers, ablation of stars and gas, harassment, secondary star formation, inflation of dimensions by energy injections of various kinds, etc., induces strong variations of the fundamental parameters of a galaxy and hence strong temporary deviations from the virial conditions. However, after this has happened, the virial conditions are soon recovered over a suitable timescale. This can be short or long depending on the amount of mass engaged in the secondary star-forming activity and the amount of time elapsed since the star-forming event took place <cit.>. As a consequence of all this, detecting systems on their way back to virial equilibrium is likely a frequent event, thus explaining the high dispersion seen in the β-I_e plane. The value of β evaluated for each galaxy can provide a useful hint about the equilibrium state reached by the system. Most likely, the condition of strict virial equilibrium is a transient phenomenon that can occur several times during the life of a galaxy. This is suggested by the large number of galaxies with very small or very high β.
§ THE HISTORY OF MASS ASSEMBLY
The Illustris-1 and Illustris-TNG simulations have made clear that the history of mass assembly of galaxies is not simple, but goes through repeated episodes of mass accretion and mass removal.
Figures <ref> and <ref> show the main SSRs and the β-z plane, respectively. Eight single galaxies of different mass and evolutionary history, extracted randomly from the sample, are displayed in these plots. These galaxies are taken from our Illustris-1 sample. Each galaxy is indicated by a broken line of different color, while the mass assembly history of each object is represented by the series of dots of the same color. Along each line there are eight points, one for each value of the redshift from z=4 to z=0 according to the list already presented in the previous sections. A very similar figure is obtained with the TNG data and therefore has not been plotted here.
The β values of these 8 galaxies at different redshifts are shown in Fig. <ref> and are very close to 0 at all epochs, with the exception of the blue track. Using the β values one can enter Table <ref> and derive the possible directions of motion at each redshift in each of the planes.
For example, the yellow and green objects always have values of β close to 0 (slightly positive). These correspond to slopes of about -2, 2.5, and -1.5 in the corresponding FP projections. They are therefore objects of the "big cloud" where galaxies can move in every possible direction. The blue galaxy, on the other hand, reaches quite high values of β, and it is possible to see that it moves in a direction with slope close to -1 in the I_e-R_e plane.
It is clear from Fig. <ref> that the galaxies do not move in the planes of the SSRs in a continuous and uniform way; rather, they randomly change their position at different epochs. In the same figure we can also note that the galaxies with blue, red, brown, and magenta colors are more massive, more luminous, and have higher σ than the others even at early epochs (z=4). Even more important, we emphasize that the same galaxies at epochs closer to us than z∼1.5 are able to reach both large positive and negative values of β. Their β values start low and gradually increase; in other words, they may reach the state of virial equilibrium. In contrast, the less massive and fainter galaxies always have low β (see Figs. <ref> and <ref>), close to 0, and are located in the region of low σ, I_e, and L. The dwarf galaxies never reach the condition of full virialization.
As already pointed out, the condition of full virial equilibrium can be a transient state, in the sense that once reached it cannot be maintained forever if a galaxy undergoes events such as mergers, stripping and harassment that may push it away from this condition. However, the virial equilibrium can be recovered again on a suitable time scale, which of course depends on the relative intensity of the disturbing event. For instance, in the case of a merger between two galaxies of comparable mass, most likely accompanied by intense star formation, the resulting system will not be in virial equilibrium and will take quite some time to reach this condition. On the contrary, the merger between two galaxies of significantly different mass, likely accompanied by modest star formation, will only slightly depart from the virial condition. If so, we expect that after a certain redshift only the massive galaxies remain unperturbed by mergers and can move toward the condition of strict virialization, while the low-mass ones are still far from this ideal condition. The few objects displayed in Fig. <ref> are typical examples of the above situations.
§ DISCUSSION AND CONCLUSIONS
The aim of this paper is to show that combining the VT with the L = L'_0 σ^β relation (as a proxy of evolution, in which β and L'_0 vary from galaxy to galaxy and in the course of time) is rich in positive consequences
<cit.>. The variation of β and L'_0 with time traces the path followed by each galaxy in the various SSRs. The L = L'_0 σ^β law together with the VT yields scaling relations among I_e, R_e, σ, and M_s that nicely reproduce the data and, more importantly, strongly suggest the existence of a system of two equations in the unknowns β and L'_0, with coefficients that are functions of M_s, R_e, L, and I_e, which for each galaxy determines the values of β and L'_0. With the aid of these relations we can determine the instantaneous position and direction of motion of a galaxy on the FP and its projection planes. Because of this, and limited to ETGs, we named these equations the fundamental equations of galaxy structure and evolution <cit.>.
With this study we show that the Illustris-1 and Illustris-TNG databases give basic parameters of galaxies in satisfactory agreement with the observational data for the galaxies at z ≃ 0. They indeed reproduce some distinct features observed in the FP projections, such as the tail of the bright ETGs, the ZoE, and the clumps of small mass objects.
Based on these simulated data, we look at the SSRs at different epochs (from redshift z=0 up to redshift z=4), highlighting their expected behaviour. In summary we show that:
* The SSRs change with time;
* The variations of the SSRs can be explained by the variation of the β parameter driving the L = L'_0 σ^β law;
* When β varies with time, a galaxy can move in the SSRs only along some well defined directions that ultimately depend on β. These directions change with time and, going toward z=0, progressively acquire the slope exhibited by the most massive galaxies that lie along the tail of the bright ETGs at z=0;
* The parameter β can take both positive and negative values across time. Based on this, we suggest that the parameter β can be considered as a thermometer gauging the virialization conditions. As a matter of fact, β can be either large and positive or large and negative when the galaxies are close to virial equilibrium;
* The only galaxies that can reach the virial state are those that became massive enough (above 10^9-10^10 M_⊙) already at high redshift (z=4). These are no longer disturbed by merging events, which in any case become rare after z∼ 1.5 and/or are not influential in terms of changing the mass ratio between donor and accretor;
* Finally, the L = L'_0 σ^β relation can be considered as an empirical way of capturing the temporal evolution of galaxies. The values of β (and L'_0) mirror the history of mass assembly and luminosity evolution of a galaxy.
The conclusion is that the SSRs are full of astrophysical information about galaxy evolution.
M.D. thanks the Department of Physics and Astronomy of the Padua University for the financial support.
§ MERGERS AND BURSTS OF STAR FORMATION
In order to better quantify the effect of a merger on the integrated light of a galaxy, we present here an elementary model of population synthesis. Let us consider two galaxies with total stellar masses M_1 and M_2 and total luminosities L_1 and L_2 (either bolometric or in some passband). The luminosity is generated by the stars already existing in each galaxy. The two galaxies are supposed to merge. The merger event may or may not be accompanied by star formation induced by the merger itself. Let M_3 be the mass of gas (belonging to one of the galaxies or both) that is eventually turned into newly born stars. For simplicity we treat this event as a single stellar population (SSP) of total mass M_3 generating a total luminosity L_3. If no star formation occurs at the merger, M_3=0 and L_3=0.
The total mass of the system is
M= M_1 + M_2 + M_3
and the ratios of the mass of each component to the total mass are
α_1 = M_1/M,   α_2 = M_2/M,   α_3 = M_3/M,
(with no star formation M_3=0). Let us suppose that the most massive object of the three is M_1, followed by M_2 and M_3 (with M_2 > M_3). Therefore, we have the sequence α_1 > α_2 > α_3.
The total luminosity is
L = L_1 + L_2 + L_3
and the corresponding ratios of the luminosity of each component to the total luminosity are
h_1 = L_1/L,   h_2 = L_2/L,   h_3 = L_3/L.
With no star formation L_3=0 and h_3=0.
As the luminosity of a galaxy depends not only on the mass but also on the age (the luminosity gets fainter with increasing age), and may undergo large and fast variations in the presence of star formation, the sequence h_1 > h_2 > h_3 can easily be violated. It may happen that h_2 > h_1 and h_3 > h_1 and/or h_3 > h_2. In summary, we have defined the following identities for the total mass and total luminosity, in which the contribution of each component is made explicit
M = (α_1 + α_2 + α_3) M
L = ( h_1 + h_2 + h_3) L
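This bookkeeping is straightforward to script. The following sketch (with purely illustrative masses and luminosities) returns the fractions α_i and h_i used in the examples later in this appendix:

```python
def merger_fractions(M1, L1, M2, L2, M3=0.0, L3=0.0):
    """Mass fractions alpha_i and luminosity fractions h_i of the two
    merging galaxies and of the newly formed SSP (M3, L3); M3 = L3 = 0
    corresponds to a dry merger."""
    M = M1 + M2 + M3
    L = L1 + L2 + L3
    alphas = (M1 / M, M2 / M, M3 / M)
    hs = (L1 / L, L2 / L, L3 / L)
    return M, L, alphas, hs

# Illustrative dry merger of an old massive and an old small galaxy
M, L, alphas, hs = merger_fractions(M1=1e12, L1=1e11, M2=1e11, L2=1e10)
print([round(a, 3) for a in alphas], [round(h, 3) for h in hs])
# -> alphas ~ (0.909, 0.091, 0.0), hs ~ (0.909, 0.091, 0.0)
```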
Given these premises, we briefly present the key ingredients of our analysis, namely the relation with time of the mass and luminosity of single stars. This gives an idea of the timescale over which the mass and light of the most massive objects in any generation of stars vary with age. Each generation of stars forms a single stellar population (SSP) with a certain abundance of chemical elements. In a SSP, stars are distributed in mass according to some initial mass function (IMF), in which typically the number of stars per mass interval increases with decreasing stellar mass: many more faint stars of low mass than bright stars of high mass. Since stars evolve and die, the total light emitted by a SSP decreases and the SSP becomes fainter and redder with time. Finally, galaxies are made of stars born at different epochs and dying at different times. Therefore the stellar content of a galaxy can be conceived as a manifold of SSPs of different ages, hence emitting different amounts of light of different colors. The total light is the integral of the light emitted by each SSP weighted by the rate of star formation over the whole history of a galaxy. All of this is a function of time and chemical composition.
In Fig. <ref> we show the time dependence of the mass (left panel) and luminosity (right panel) of single stars (lifetime and luminosity are taken at the stage of brightest luminosity attained by the long-lived evolutionary phases) of different mass (in the interval 0.6 to 120 M_⊙). In both panels the colors indicate the metallicity and the ticks along each curve mark the value of the mass; blue is for low metal abundance and red for high metal content. In Fig. <ref> we show the luminosity versus time relationship for SSPs of different chemical composition as indicated (left panel), and for model galaxies of different mass as indicated (right panel). In the left panel we display the luminosity (in solar units) in the V-passband of the Johnson-Bessell system of the SSPs with different metallicities (solid lines with different colors, where blue is for low metal content and red for high metal content). The black solid line is the best fit of the luminosities for all metallicities. In the same panel, and limited to the best fit, we also show the luminosity in the B-passband (dashed line). In the right panel we display the V-luminosity in the same photometric system of model galaxies with infall during the whole lifetime of the galaxies. This is the luminosity integrated over the many generations of stars formed in the galaxy under a suitable star formation rate. Chemical enrichment of the gas out of which stars are formed is taken into account.
Stellar models, SSPs, and model galaxies are taken from the Padua library of stellar models, isochrones, and SSPs <cit.>. The model galaxies are calculated by the authors and are described in <cit.>. The SSPs in use here are for the Salpeter initial mass function with slope x=2.35 (in number of stars per mass interval); the SSP mass and luminosity are denoted M_SSP and L_SSP, with M_SSP=5.826 M_⊙.
Notable features of these diagrams are:
(i) The luminosity of single stars may vary by more than two orders of magnitude, and the lifetime goes from a few million years (Myr) to more than ten billion years (Gyr) as the mass decreases from 120 to 0.6 M_⊙, with little dependence on the initial chemical composition (at least for our purposes). The interval of variation is much narrower passing to SSPs and even more so to galaxy models. In SSPs, this is simply due to the integration over the mass under some initial mass function; the more numerous low-mass stars of lower luminosity somewhat dilute the contribution of the brighter, less numerous stars. In galaxy models, supposing that each star formation event in the time interval dt can be represented by a SSP of suitable chemical composition emitting the total luminosity l_ssp(t), the total light L is given by
L = ∫_0^T_GΨ(t) l_ssp(t) dt
where Ψ(t) is the current rate of star formation in suitable units.
(ii) The remarkable reduction in the luminosity interval passing from single stars to SSPs and model galaxies is simply due to the integration of the contribution of single stars to the total light of the SSPs of given age and to the integration over time of the contributions from the many generations of SSPs weighted by the star formation rate.
(iii) The luminosity of SSPs varies by about a factor of a hundred, with little dependence on the chemical composition, passing from young to old SSPs. From these SSPs we can derive the following best fits
log l_V = -0.685 log T +6.886
log l_B = -0.835 log T +8.278
where l_V and l_B are the luminosities in solar units in the V and B passbands of the Johnson-Bessell photometric system and the age T is in years. These luminosities refer to the SSP mass M_SSP=5.826 M_⊙. To apply them to a galaxy of stellar mass M_s we must use the transformations (a short numerical sketch of these fits is given after this list of points)
log L_V = log l_V - log M_SSP + log M_s
log L_B = log l_B - log M_SSP + log M_s
where M_s is in solar units and it is identified with the galaxy mass.
(iv) In the model galaxies, initially the luminosity increases by a factor of ten up to a peak corresponding to the maximum of the star formation rate at about 1.5 Gyr and then decreases by about the same factor down to the present day value.
(v) As expected the total luminosity varies by orders of magnitude among galaxies of different mass.
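To make points (iii)-(v) concrete, the sketch below applies the best-fit V-band fading law to a burst of given mass placed on top of a dry pair of galaxies; the burst mass, the companion luminosities, and the SSP ages are illustrative assumptions (chosen in the spirit of the wet-merger example discussed below) and are not values drawn from the Padua library.

```python
import numpy as np

M_SSP = 5.826  # mass of the reference SSP in M_sun

def L_V_ssp(age_yr, mass):
    """V-band luminosity (L_sun) of an SSP of given mass, rescaled from the
    best fit log l_V = -0.685 log T + 6.886, valid for M_SSP = 5.826 M_sun."""
    log_lV = -0.685 * np.log10(age_yr) + 6.886
    return 10**(log_lV - np.log10(M_SSP) + np.log10(mass))

# Burst of M_3 = 1e10 M_sun on top of a dry pair (L_1 = 1e11, L_2 = 1e10 L_sun)
L1, L2, M3 = 1e11, 1e10, 1e10
for age in (1e8, 1e9, 1e10):   # burst age in yr (illustrative)
    L3 = L_V_ssp(age, M3)
    L = L1 + L2 + L3
    print(f"age = {age:.0e} yr: L3 = {L3:.2e} L_sun, "
          f"h3 = {L3 / L:.2f} of the total light")
```

The burst contribution drops quickly with age, which is the quantitative basis of the statement that catching a galaxy at maximum burst luminosity is unlikely.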
All this has an immediate effect on the maximum variations of the total luminosity in mergers among galaxies of different mass and with different intensities of the accompanying star formation during the merger event. To illustrate the point we present here a few examples corresponding to typical situations occurring among real galaxies:
Dry mergers. With this assumption we always have M_3=0 and L_3=0.
(a) The case of two identical objects (same mass, same stellar content, and same age) with e.g. M_1 = M_2 = 10^12 M_⊙ and luminosity L_1 = L_2 = 10^11 L_⊙. The luminosities have been derived from the models in Fig.<ref> (right panel) at the present age. In this case the mass and luminosity of the resulting object are two times the starting value (α_1=α_2 =0.5, h_1=h_2=0.5).
(b) The case of two objects with different mass but same age (old). Suppose M_1 = 10^12 M_⊙,
L_1= 10^11 L_⊙, and M_2 = 10^11 M_⊙, L_2= 10^10 L_⊙. The total mass is
M= 1.1× 10^12 M_⊙ and the total luminosity is L= 1.1× 10^11 L_⊙. Therefore,
α_1=0.909, h_1=0.909 and α_2=0.091, h_2=0.091.
The total luminosity is L = (0.909 + 0.091) L, i.e., the light is dominated by the more massive galaxy. The merger has increased the total luminosity by about 10%. This is a sort of lower limit, because a dry merger between two galaxies of the same age differing in mass by more than a factor of ten would in practice be undetectable.
(c) In the case of two galaxies with different mass and different age, hence different luminosities, the total luminosity may show the effect of the younger object. Consider the following galaxies M_1 = 10^12 M_⊙, L_1= 10^11 L_⊙ (the old object) and M_2 = 10^11 M_⊙,
L_2= 5× 10^10 L_⊙ (the young object, taken near the peak value). After merging, the total mass is M= 1.1 × 10^12 M_⊙ while the total luminosity is L ≃ 1.5×10^11 L_⊙. Therefore,
α_1=0.909, h_1=0.667 and α_2=0.091, h_2=0.333. The contribution from the less massive but younger object is important, about half that produced by the more massive but older object.
Other combinations of masses and ages, hence luminosities, can be easily derived from eq. (<ref>).
Effects due to differences in the mean chemical composition of the stellar content can be ignored to a first-order approximation.
Wet mergers. In this case, in eq. (<ref>), M_3 ≠ 0 and L_3 ≠ 0. The major difference with respect to the previous cases is that the stellar activity induced by the merger can be approximated by a single giant SSP with its own mass and luminosity. The total light emitted by M_3 is L_3 = (L_SSP/M_SSP)× M_3. Now the interval spanned by the luminosity of an aging SSP is much wider than before, so depending on the age the effects can be large. For the merging galaxies we assume M_1 = 10^12 M_⊙, L_1=10^11 L_⊙, and M_2= 10^11 M_⊙, L_2= 10^10 L_⊙; finally, for the mass of the giant SSP simulating the induced star formation event we adopt
M_3= 10^10 M_⊙. The total mass is M=1.11 × 10^12 M_⊙. For the associated luminosity we adopt three values corresponding to a very young, high-luminosity SSP with L_3= 1× 10^11 L_⊙; an intermediate-age, lower luminosity SSP with L_3= 1× 10^10 L_⊙; and an old, low-luminosity SSP with L_3= 1× 10^9 L_⊙. In the first case the total luminosity is L= 2.1× 10^11 L_⊙, whose relative components are h_1=0.476, h_2= 0.048, and h_3=0.476. The burst of star formation contributes half of the total light and 1% of the total mass. This is a rapidly transient situation that fades down on a short timescale. In the second case, the total light is L=1.2× 10^11 L_⊙, and the relative contributions to the luminosity are h_1= 0.834, h_2=0.083, h_3=0.083; the burst of star formation contributes about 10% of the light, equal to the contribution of the less massive, old galaxy. In the last case, the SSP is very old, the three relative contributions are h_1=0.908, h_2=0.091, and h_3=0.009, and the occurrence of the burst is nearly undetectable.
Other combinations of the parameters can be tested with similar results.
What we learn from these simple tests is that, in the case of dry mergers, except for a merger between objects of similar mass (in which the mass and light are increased by a factor of two), mergers among objects with different masses and ages scarcely affect the light of the originally dominant object. Wet mergers with induced star formation more efficiently leave their fingerprints on the post-merger light of a galaxy. Unfortunately, the bright phase is of short duration, namely the timescale required by the SSP to evolve from a turnoff in the range of bright massive stars (say 20 M_⊙, with a lifetime of a few Myr) down to a turnoff in the range of low-mass stars (say below 2 M_⊙, with a lifetime of about 1 Gyr); see the panel of Fig. <ref>. Therefore, it is by far more probable to catch a galaxy that underwent a wet merger when the burst of star formation has already faded down to low luminosities.
This is an interesting result that could explain why the Faber-Jackson relation and the Fundamental Plane we see today show little scatter in the observational distribution of galaxies.
|
http://arxiv.org/abs/2306.09333v1
|
20230615175848
|
Dynamics of magnetization at infinite temperature in a Heisenberg spin chain
|
[
"Eliott Rosenberg",
"Trond Andersen",
"Rhine Samajdar",
"Andre Petukhov",
"Jesse Hoke",
"Dmitry Abanin",
"Andreas Bengtsson",
"Ilya Drozdov",
"Catherine Erickson",
"Paul Klimov",
"Xiao Mi",
"Alexis Morvan",
"Matthew Neeley",
"Charles Neill",
"Rajeev Acharya",
"Igor Aleiner",
"Richard Allen",
"Kyle Anderson",
"Markus Ansmann",
"Frank Arute",
"Kunal Arya",
"Abraham Asfaw",
"Juan Atalaya",
"Joseph Bardin",
"A. Bilmes",
"Gina Bortoli",
"Alexandre Bourassa",
"Jenna Bovaird",
"Leon Brill",
"Michael Broughton",
"Bob B. Buckley",
"David Buell",
"Tim Burger",
"Brian Burkett",
"Nicholas Bushnell",
"Juan Campero",
"Hung-Shen Chang",
"Zijun Chen",
"Benjamin Chiaro",
"Desmond Chik",
"Josh Cogan",
"Roberto Collins",
"Paul Conner",
"William Courtney",
"Alexander Crook",
"Ben Curtin",
"Dripto Debroy",
"Alexander Del Toro Barba",
"Sean Demura",
"Agustin Di Paolo",
"Andrew Dunsworth",
"Clint Earle",
"E. Farhi",
"Reza Fatemi",
"Vinicius Ferreira",
"Leslie Flores",
"Ebrahim Forati",
"Austin Fowler",
"Brooks Foxen",
"Gonzalo Garcia",
"Élie Genois",
"William Giang",
"Craig Gidney",
"Dar Gilboa",
"Marissa Giustina",
"Raja Gosula",
"Alejandro Grajales Dau",
"Jonathan Gross",
"Steve Habegger",
"Michael Hamilton",
"Monica Hansen",
"Matthew Harrigan",
"Sean Harrington",
"Paula Heu",
"Gordon Hill",
"Markus Hoffmann",
"Sabrina Hong",
"Trent Huang",
"Ashley Huff",
"William Huggins",
"Lev Ioffe",
"Sergei Isakov",
"Justin Iveland",
"Evan Jeffrey",
"Zhang Jiang",
"Cody Jones",
"Pavol Juhas",
"D. Kafri",
"Tanuj Khattar",
"Mostafa Khezri",
"Mária Kieferová",
"Seon Kim",
"Alexei Kitaev",
"Andrey Klots",
"Alexander Korotkov",
"Fedor Kostritsa",
"John Mark Kreikebaum",
"David Landhuis",
"Pavel Laptev",
"Kim Ming Lau",
"Lily Laws",
"Joonho Lee",
"Kenneth Lee",
"Yuri Lensky",
"Brian Lester",
"Alexander Lill",
"Wayne Liu",
"William P. Livingston",
"A. Locharla",
"Salvatore Mandrà",
"Orion Martin",
"Steven Martin",
"Jarrod McClean",
"Matthew McEwen",
"Seneca Meeks",
"Kevin Miao",
"Amanda Mieszala",
"Shirin Montazeri",
"Ramis Movassagh",
"Wojciech Mruczkiewicz",
"Ani Nersisyan",
"Michael Newman",
"Jiun How Ng",
"Anthony Nguyen",
"Murray Nguyen",
"M. Niu",
"Thomas O'Brien",
"Seun Omonije",
"Alex Opremcak",
"Rebecca Potter",
"Leonid Pryadko",
"Chris Quintana",
"David Rhodes",
"Charles Rocque",
"N. Rubin",
"Negar Saei",
"Daniel Sank",
"Kannan Sankaragomathi",
"Kevin Satzinger",
"Henry Schurkus",
"Christopher Schuster",
"Michael Shearn",
"Aaron Shorter",
"Noah Shutty",
"Vladimir Shvarts",
"Volodymyr Sivak",
"Jindra Skruzny",
"Clarke Smith",
"Rolando Somma",
"George Sterling",
"Doug Strain",
"Marco Szalay",
"Douglas Thor",
"Alfredo Torres",
"Guifre Vidal",
"Benjamin Villalonga",
"Catherine Vollgraff Heidweiller",
"Theodore White",
"Bryan Woo",
"Cheng Xing",
"Jamie Yao",
"Ping Yeh",
"Juhwan Yoo",
"Grayson Young",
"Adam Zalcman",
"Yaxing Zhang",
"Ningfeng Zhu",
"Nicholas Zobrist",
"Hartmut Neven",
"Ryan Babbush",
"Dave Bacon",
"Sergio Boixo",
"Jeremy Hilton",
"Erik Lucero",
"Anthony Megrant",
"Julian Kelly",
"Yu Chen",
"Vadim Smelyanskiy",
"Vedika Khemani",
"Sarang Gopalakrishnan",
"Tomaž Prosen",
"Pedram Roushan"
] |
quant-ph
|
[
"quant-ph"
] |
Understanding universal aspects of quantum dynamics is an unresolved problem in statistical mechanics. In particular, the spin dynamics of the 1D Heisenberg model were conjectured to belong to the Kardar-Parisi-Zhang (KPZ) universality class based on the scaling of the infinite-temperature spin-spin correlation function. In a chain of 46 superconducting qubits, we study the probability distribution, P(ℳ), of the magnetization transferred across the chain's center. The first two moments of P(ℳ) show superdiffusive behavior, a hallmark of KPZ universality. However, the third and fourth moments rule out the KPZ conjecture and allow for evaluating other theories. Our results highlight the importance of studying higher moments in determining dynamic universality classes and provide key insights into universal behavior in quantum systems.
Dynamics of magnetization at infinite temperature in a Heisenberg spin chain
Google Quantum AI and Collaborators
Received ... ; accepted ...
============================================================================
In statistical physics, the notion of universality is a powerful assertion; it implies that systems with entirely different microscopic interactions can share the same emergent macroscopic description due to having certain basic physical properties in common. It is a triumph of this assertion that, for instance, the Ising model prevails in our understanding of the zero-temperature phase transitions in a wide class of systems <cit.>. The basic ingredients commonly affecting universality classes are the collective behavior of constituent elements, symmetries, conservation laws, and dimensionality, as described by the renormalization group (RG) theory <cit.>. In contrast to rather well-understood low-temperature universality classes, which are determined by ground-state physics, we have limited knowledge of the universality classification of dynamical phases of matter at finite temperatures, where contributions from the entire spectrum must be considered <cit.>.
It has been observed in several dynamical systems that the long-time behavior permits a few-parameter hydrodynamical description, suggesting the existence of universality <cit.>. The emergence of a hydrodynamical description relies on reaching local, and subsequently, global equilibrium <cit.>. This fate is less certain in systems with an extensive set of conserved quantities, i.e., integrable systems, which are known to evade thermalization, and their universal behaviors are discussed in the framework of generalized hydrodynamics <cit.>.
Distinct microscopic models or dynamics belong to the same universality class if they share a single scale-invariant limit under a RG flow <cit.>. A universality class is commonly characterized by scaling exponents and scaling functions, and it is rather implausible to extract them all experimentally. Therefore, experiments, e.g., on quantum processors, cannot prove that a set of observed dynamics belongs to a given class, but they can falsify a universality conjecture <cit.> by examining its predictions. They can also probe numerically and theoretically challenging regions of the parameter space, which has proven advantageous for studying universal behaviors <cit.>.
Superconducting quantum processors offer high wavefunction sampling rates, which has enabled them to outperform classical computers in sampling tasks <cit.>. On these processors one can go beyond mean expectation values and provide “snapshots” of an observable, which allows for measuring quantum fluctuations and the probability distribution of the observable. The capability of collecting full counting statistics could have fundamental consequences for our understanding of dynamical universalities. In particular, it is commonly assumed that the scaling functions and exponents of the first few moments uniquely determine a universality class, and there have not yet been any instances where the higher moments of an observable have led to a different classification.
Spin dynamics in the one-dimensional (1D) XXZ model have been the subject of numerous recent studies <cit.>. This integrable model describes nearest-neighbor exchange interactions between spin-1/2 particles, with the Hamiltonian <cit.>
Ĥ=∑_i ( S^x_i S^x_i+1+ S^y_i S^y_i+1 + Δ S^z_i S^z_i+1),
where S^x, S^y, and S^z are spin-1/2 operators, and Δ is the anisotropy parameter. When Δ=1, this system is the Heisenberg model, a paradigmatic model of quantum magnetism that possesses a global SU(2) rotational symmetry. The spin dynamics in the Heisenberg model exhibit characteristics consistent with the Kardar-Parisi-Zhang (KPZ) universality class, which was originally introduced to describe the stochastic, nonlinear dynamics of driven interfaces and has proven to apply to a wide range of classical systems <cit.>. The KPZ-like behavior of the spin dynamics is surprising due to the absence of stochasticity and nonlinearity in the Heisenberg model.
In a 1D chain of N_Q=46 superconducting qubits, we simulate this spin model by periodic (Floquet) application of high-fidelity 2-qubit unitary fSim(θ, ϕ) gates (Fig. <ref>A, see SM and Ref. Neill2021accurately). Here, θ sets the amplitude of hopping between adjacent qubit lattice sites, and ϕ is the conditional phase angle imparted when two spin excitations are adjacent to each other. Within each cycle, two-qubit fSim(θ, ϕ) gates are applied between all neighboring pairs in the chain, resulting in the cycle unitary:
Û_F = ∏_{even bonds} fSim(θ, ϕ) ∏_{odd bonds} fSim(θ, ϕ).
In the limit θ, ϕ → 0, Û_F is the Trotter–Suzuki expansion of the XXZ Hamiltonian (<ref>), with Δ = sin(ϕ/2)/sin(θ). Away from this limit, there is no unique Hamiltonian associated with Û_F, but Eqs. (<ref>) and (<ref>) still share symmetries and are both integrable by the Bethe ansatz <cit.>.
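For readers who wish to reproduce the Floquet dynamics on a small chain, a minimal NumPy statevector sketch of Û_F is given below. The chain length and gate angles are illustrative, the fSim matrix convention is the commonly used one and may differ by phase conventions from the experimental calibration, and no claim is made that this matches the processor-level implementation.

```python
import numpy as np

def fsim(theta, phi):
    """fSim(theta, phi) in the |00>, |01>, |10>, |11> basis (assumed convention)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -1j * s, 0],
                     [0, -1j * s, c, 0],
                     [0, 0, 0, np.exp(-1j * phi)]])

def layer(n, gate, start):
    """Full 2^n x 2^n unitary applying `gate` on bonds (start, start+1), (start+2, start+3), ..."""
    ops, q = [], 0
    while q < n:
        if q >= start and (q - start) % 2 == 0 and q + 1 < n:
            ops.append(gate)
            q += 2
        else:
            ops.append(np.eye(2))
            q += 1
    U = ops[0]
    for op in ops[1:]:
        U = np.kron(U, op)
    return U

n, theta, phi = 10, 0.4 * np.pi, 0.8 * np.pi            # illustrative chain and angles
g = fsim(theta, phi)
U_cycle = layer(n, g, start=0) @ layer(n, g, start=1)   # odd-bond layer acts first

# Sharp domain wall (mu -> infinity): left half excited, right half empty
state = np.zeros(2**n, dtype=complex)
state[int('1' * (n // 2) + '0' * (n // 2), 2)] = 1.0
for _ in range(5):                                      # five Floquet cycles
    state = U_cycle @ state
print(abs(np.linalg.norm(state) - 1.0) < 1e-10)         # unitarity check
```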
To study dynamics under the unitary evolution (<ref>), we generate domain-wall initial states with an adjustable contrast parameter μ (Fig. <ref>B). Specifically, we initialize the chain in a set of product states such that the left and right halves have average magnetization ±tanh(μ), respectively:
ρ(t=0) ∝ (e^2μ S^z)^⊗ N_Q/2 ⊗ ( e ^-2μ S^z) ^⊗ N_Q/2.
When μ→∞, the system approaches a pure domain-wall state with the two sides fully magnetized in opposite directions. Only for μ=0 is the initial state an infinite-temperature thermal state that preserves SU(2) symmetry. When μ ≠ 0, the magnetization is preferentially along the z-axis, breaking the SU(2) rotational symmetry of the Heisenberg model.
A natural measure of spin transport is the total transferred magnetization, ℳ(t), defined as twice the net number of excitations that have crossed the middle of the chain after t cycles. In our experiment, we sample over initial bitstring states with probabilities given by Eq. (<ref>). For each initial state, we prepare the qubits in that state and then apply t cycles of fSim gates. Let N_R,1(b) be the number of excitations (“1"s) in the right half of bitstring b. The transferred magnetization ℳ is the stochastic variable defined by
ℳ(t)/2 = N_R,1(b_t) - N_R,1(b_i),
where b_i is the initial bitstring, sampled from Eq. (<ref>), and b_t is the associated final bitstring sampled at t. For example, if the initial bitstring is 111010 and the final bitstring is 110110, then the transferred magnetization is 2. Since the dynamics are number-conserving, the transferred magnetization is also the net number of zeros that have crossed from the right to the left. Repeating the experiment many times, we construct the probability distribution of ℳ, P(ℳ).
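The initial-state sampling and the bookkeeping of ℳ are simple to express in code. The sketch below (plain NumPy, purely illustrative) samples product states with contrast μ and reproduces the bitstring example quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_initial_bitstring(n_qubits, mu):
    """Sample a product state: left half has P(1) = (1 + tanh(mu)) / 2,
    right half has P(1) = (1 - tanh(mu)) / 2, following rho(t=0) above."""
    p_left = 0.5 * (1.0 + np.tanh(mu))
    half = n_qubits // 2
    left = rng.random(half) < p_left
    right = rng.random(half) < (1.0 - p_left)
    return np.concatenate([left, right]).astype(int)

def transferred_magnetization(b_initial, b_final):
    """M = 2 * [N_{R,1}(b_t) - N_{R,1}(b_i)], i.e. twice the net number of
    excitations that crossed the chain's center."""
    half = len(b_initial) // 2
    return 2 * (b_final[half:].sum() - b_initial[half:].sum())

# Bitstring example quoted in the text
b_i = np.array([1, 1, 1, 0, 1, 0])
b_t = np.array([1, 1, 0, 1, 1, 0])
print(transferred_magnetization(b_i, b_t))     # -> 2

# Left/right average occupation of a sampled 46-qubit initial state
b0 = sample_initial_bitstring(46, mu=0.5)
print(b0[:23].mean(), b0[23:].mean())          # ~ (1 +/- tanh(0.5)) / 2
```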
Fig. <ref>B shows measurement instances for three values of μ. The left column in each panel shows an instance of the initial state for the given μ, and the subsequent columns show typical bitstrings evolved from that state. As excitations (spin flips) propagate through the chain, smaller domains become more probable.
In Fig. <ref>C, we show histograms of ℳ at different times, starting in a pure (μ=∞) domain wall. Due to locality of the circuit, |ℳ(t)| is upper-bounded by 2 t. Consequently, the distribution is narrow and centered around a small value at t=1, since only a few excitations have crossed the middle of the chain, and becomes wider at later times.
In the context of spin transport, the first and second (variance) moments of ℳ have been extensively studied both theoretically and experimentally <cit.>. Taking advantage of our tunable fSim gates, we explore how these two moments depend on the anisotropy parameter, Δ. Fig. <ref>A shows the mean of ℳ over time for values of Δ equal to 0.16 (purple), 1 (orange) and 1.6 (green), and an initial domain wall height of μ=0.5. We observe markedly different scaling behaviors in the three regimes. Eliminating the initial transient cycles, we fit a power law, ⟨ℳ⟩∼ t^1/z, to the data over cycles 10–23 and extract scaling exponents of z =1.12 ± 0.04, z = 1.6± 0.1 and z = 1.9± 0.2. These are in close agreement with theoretical predictions for the ballistic (z=1)<cit.>, superdiffusive (z=3/2)<cit.>, and diffusive (z=2)<cit.> behaviors, respectively. Observation of superdiffusive propagation for isotropic interactions (Δ=1), measured here and also in other works <cit.>, has been interpreted as a signature of the KPZ universality class.
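The exponent extraction described here is a log-log fit over a fixed window of cycles; a minimal sketch follows, where the synthetic input arrays are placeholders rather than measured values.

```python
import numpy as np

def dynamical_exponent(cycles, M_mean, t_min=10, t_max=23):
    """Fit <M> ~ t^(1/z) over the window [t_min, t_max] and return z."""
    mask = (cycles >= t_min) & (cycles <= t_max)
    slope, _ = np.polyfit(np.log(cycles[mask]), np.log(M_mean[mask]), 1)
    return 1.0 / slope

# Synthetic superdiffusive data (placeholder): <M> ~ t^(2/3)
t = np.arange(1, 24)
m = 1.7 * t**(2.0 / 3.0)
print(dynamical_exponent(t, m))   # -> ~1.5
```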
Numerical simulations of these domain-wall dynamics are shown with solid dark lines in Fig. <ref>A. Numerical simulations often rely on approximation schemes, which could lead to inaccurate results. In contrast, here we perform exact statevector sampling up to cycle 18 without any approximations. This is achieved by taking advantage of the fact that ⟨ ℳ(t) ⟩ only depends on the spins within the light cone of width 2t, and can thus be determined exactly by simulating shorter chains. This simplification also allows for arriving at analytical results for all moments of ℳ at early cycles. Nevertheless, the computational cost grows exponentially, and with the resources used here, the simulations at cycles 14, 16, and 18 take about 1, 2, and 14 hours, respectively (see SM).
Importantly, the slight discrepancies between the observed and predicted exponents are also seen in the exact statevector simulations up to t=18 cycles (colored lines), suggesting that these deviations primarily stem from finite-time and large-μ effects and are less affected by experimental imperfections. Indeed, the exact scaling exponents are only expected in the long-time limit and as μ→0. By simulating the effects of noise in our system (lighter lines in Fig. <ref>A), we find that these are almost negligible for Δ<1 and Δ=1, and somewhat larger for Δ>1, due to the lower values of ℳ in the diffusive regime. This effect is also noticeable in the magnetization transfer distributions in Fig. <ref>B. Since the distribution is narrower in the Δ > 1 case, the noise has a larger effect on the shape of the distribution here than for the other two values of Δ. The error in this case is found to be predominantly caused by combined occurrences of T1 errors and 0→ 1 readout errors, which are not eliminated by post-selection (SM). By including this effect in the simulation, we find good agreement in all three regimes.
Superdiffusive transport, ⟨ℳ⟩ ∼ t^2/3, at Δ = 1 is a characteristic of systems within the KPZ universality class. Moreover, numerical studies found that the spin-spin correlation function coincides with the KPZ scaling function <cit.>, which has led to the conjecture that near-equilibrium spin transport in the Heisenberg model belongs to the KPZ universality class <cit.>. This universality class is associated with the classical nonlinear stochastic KPZ equation ∂ h/∂ t = ν∇^2 h + λ (∇ h)^2 + η(x,t), which was originally introduced <cit.> to describe the dynamics of driven interfaces as a height field h(x,t), where ν, λ, η set the strength of the smoothening diffusion, roughening nonlinear growth, and stochasticity terms, respectively. The conjecture asserts that at late times the magnetization profile behaves similarly to ∂ h(x,t) / ∂ x. Consequently,
lim_{μ→0} ℳ(t) ⟷ 2h(0,t) - h(-∞,t) - h(∞,t).
To further examine the universality class of the Heisenberg spin dynamics, two aspects are of particular importance. First, since the universal behavior is expected to depend on whether the system is in equilibrium, it is essential to measure the dependence on μ. Second, while the scaling exponent of the mean is consistent with the KPZ universality class, further insights can be gained by examining higher moments (the “full counting statistics”) of P(ℳ). Due to the reduced signal-to-noise ratio, measuring higher moments at small μ is experimentally challenging. We utilize our fast sampling capability to measure P(ℳ) as a function of μ and t (Refs. <cit.>). Figs. <ref>C,D show the temporal evolution of ℳ and its variance for various values of μ ranging from 0.2 to 1. We find that, at small μ, the dynamical exponents of both the mean and the variance are close to 3/2. For larger μ, the dynamical exponent of the mean approaches 5/3, consistent with recent numerical results <cit.> (SM).
Next, we extract the skewness 𝒮 and kurtosis 𝒬 of P(ℳ),
𝒮 = α_3 /α_2^3/2, 𝒬 = α_4 /α_2^2 - 3,
where α_k=⟨ (ℳ-⟨ℳ⟩)^k⟩ is the k^th central moment. In Fig. <ref>A, we show the temporal dependence of 𝒮 for μ ranging from 1.5 to 0.1. Consistent with Ref. <cit.>, 𝒮 approaches a value of about 0.3 for μ >1. However, as μ is reduced towards the equilibrium point, we observe that 𝒮 goes to zero. Fig. <ref>B shows that for later cycles, the initial strong time dependence of 𝒬 weakens. By averaging over cycles 16 to 23, we obtain a kurtosis of -0.05± 0.02 (Fig. <ref>B).
In order to test the KPZ universality conjecture, one needs to study the infinite-time (t →∞) and near-equilibrium (μ→ 0) limits. These limits are experimentally inaccessible. However, if there exists a function f(μ, t) such that the moments are functions of f(μ,t), then one may be able to extrapolate measured values at finite μ and t to these unattainable limits. We empirically find that the zero crossing of 𝒮 scales as t_0∼μ ^-1.49 (SM), suggesting that 𝒮 may be a function of μ t^2/3. Indeed, after excluding the initial transient behavior, 𝒮 does appear to be a single-valued function of μ t^2/3 (Fig <ref>C, SM).
KPZ has been conjectured to apply to high-temperature thermal states at late times, corresponding to taking μ→0 first and then t→∞ (Ref. <cit.>). In this case, P(ℳ) should become the Baik-Rains distribution <cit.>. However, this distribution is skewed (Table <ref>), whereas our measurements suggest that 𝒮 = 0 (Fig. <ref>C), as is also dictated by symmetry.
One might also search for KPZ universality away from μ=0, corresponding to a different order in taking these noncommuting limits. When taking t→∞ first, the appropriate probability distribution to compare P(ℳ) against is the Tracy-Widom (TW) distribution <cit.>, which has 𝒮 of about 0.22. This order of limits corresponds to large μ t^2/3 in Fig. <ref>C, where we indeed find 𝒮 consistent with this distribution, as also seen in earlier experiments <cit.>. However, 𝒬 of the TW GUE (Gaussian unitary ensemble) distribution is 0.09, whereas we find 𝒬 = -0.05± 0.02. The emergence of KPZ dynamics in this order of limits is further ruled out by numerical and theoretical predictions that the dynamics become diffusive (z=2) on a timescale t∼ 1/μ^3 (Refs. <cit.>).
One could consider taking the two limits simultaneously in a way that the dynamics do not become diffusive, e.g., by holding μ t^2/3 constant. The correct distribution to compare against is TW GUE in this case as well. If we take the limit with μ t^2/3 fixed at a large value, we find 𝒮 consistent with TW GUE, but the measured 𝒬 is still inconsistent with the TW GUE prediction of 0.09, ruling out KPZ dynamics on the timescales accessible in the experiment. While it remains possible that KPZ dynamics will emerge at much later times (i.e., 𝒬 will increase to 0.09), we see no evidence or motivation for this.
An outstanding question is why only lower-point observables seem to behave consistently with KPZ universality. Intriguingly, other systems have been identified that exhibit similar behavior. One such system is a nonlinear fluctuating hydrodynamic (NLFH) model with two coupled stochastic modes <cit.>, which predicts 𝒮=0, consistent with the Heisenberg spin chain. However, it suggests 𝒬=0.14, differing from what we observe, perhaps because not all aspects of the model are universal. Another such system is the classical Landau-Lifshitz (CLL) magnet <cit.>, which predicts 𝒮=0 and a 𝒬 that is negative and close to zero at these time scales <cit.>. These are consistent with our experimental results. It is rather surprising that this classical system is so successful in capturing the behavior of a quantum spin chain with enhanced quantum fluctuations due to confinement <cit.>.
Studies of the universal aspects of quantum dynamics have attracted notable interest recently; nevertheless, a complete classification of their universal properties is still lacking. Our findings suggest these classifications could involve unanticipated subtleties. Our first result, also observed by others, is the superdiffusive transport characterized by ⟨ℳ⟩∼ t^2/3, shown in Fig. <ref>A. While this anomalous diffusion is suggestive of the known KPZ universality classes, this classification is not compatible with our second finding—the vanishing of 𝒮 and 𝒬 near equilibrium (Figs. <ref>C, D). Despite the apparent success of the CLL model, a full understanding requires the development of a systematic spacetime RG framework that could establish the origin of the KPZ-like behavior starting from the microscopic dynamics of the Heisenberg model. Quantum processors have the potential to help with such RG studies. For example, the multi-scale entanglement renormalization ansatz (MERA) applies ideas of RG flow to tensor networks and quantum circuits <cit.>.
Our observations are rooted in the interplay of integrability, quantum fluctuations, and symmetry and have proved to be challenging to describe using an effective quantum field theory. The observed discrepancies with KPZ predictions suggest that the infinite-temperature dynamics in the Heisenberg chain—if universal—belong to a yet-to-be-discovered dynamical universality class.
*Acknowledgments:
We acknowledge discussions with I. Bloch, V.B. Bulchandani, A. Morningstar, and R. Vasseur. V.K. acknowledges support from the US Department of Energy, Office of Science, Basic Energy Sciences, under Early Career Award No. DE-SC0021111, the Alfred P. Sloan Foundation through a Sloan Research Fellowship, and the Packard Foundation through a Packard Fellowship in Science and Engineering. S.G., V.K., and T.P. acknowledge the hospitality of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara (supported by NSF Grant PHY-1748958). R.S. is supported by the Princeton Quantum Initiative Fellowship. T.P. is supported by Program P1-0402 of Slovenian Research Agency (ARRS).
*Author contributions:
T. Prosen, S. Gopalakrishnan, and V. Khemani proposed the experiment and helped guide and interpret it. P. Roushan selected the proposal and advised the experimental effort. E. Rosenberg implemented the experiment. E. Rosenberg and T. Andersen collected the experimental data. E. Rosenberg developed and ran the numerical simulations. R. Samajdar provided theoretical input and implemented additional numerics. P. Roushan, T. Andersen, E. Rosenberg, R. Samajdar, T. Prosen, and S. Gopalakrishnan contributed to writing. The Google Quantum AI team fabricated the processor, built the cryogenic and control systems, optimized the processor performance, and provided the tools that enabled execution of this experiment. C. Neill, X. Mi, A. Morvan, and J. Hoke helped develop fSim gates.
^† Google Quantum AI and Collaborators:
E. Rosenberg, T. I. Andersen, …, T. Prosen, P. Roushan
^* These authors contributed equally to this work.
^‡ Corresponding author: [email protected]
^‡ Corresponding author: [email protected]
Supplementary Materials for
“Dynamics of magnetization at infinite temperature in a Heisenberg spin chain”
§ EXPERIMENTAL TECHNIQUES AND DEVICE CHARACTERIZATION
§.§ Overview
The experiments are performed using 46 frequency-tunable superconducting transmon qubits. The qubits are prepared in a random bitstring state according to the probabilities set by the initial imbalance μ: qubits on the left side of the chain are prepared in |1⟩ with probability p=e^μ/(e^μ + e^-μ), otherwise |0⟩, and qubits on the right are prepared in |0⟩ with probability p, otherwise |1⟩. The system is then evolved with alternating layers of fSim gates <cit.>, which implement a Floquet XXZ model. Finally, all 46 qubits are measured in the computational basis. Because ideal fSim gates are number-conserving, we post-select on the measured bitstrings having the correct number of 1s, effectively mitigating against photon loss, which otherwise causes the number of 1s to decay. After sampling over N_ states initial bitstring states, we compute the expectation value of the kth power of the transferred magnetization as
⟨ℳ(t)^k ⟩ = 1/N_ states∑_i 1/N^ counts_i(t)∑_j N^ counts_ij(t) (2(N_1^R(j) - N_1^R(i) ))^k,
where i is the initial bitstring and j is the measured bitstring, N_i^ counts(t) is the total number of counts that survive postselection after t cycles when the initial state is i, and N^ counts_ij(t) is the number of times the bitstring j is measured after t cycles when the initial state is i. N_i^ counts(t) = ∑_j N^ counts_ij(t). N_1^R(i) is the number of 1s in the right half of the binary representation of i. Moments are computed as
α_k(t) = ⟨(ℳ(t) - ⟨ℳ(t)⟩)^k⟩
= ∑_i=0^k \binom{k}{i} ⟨ℳ(t)^k-i⟩( - ⟨ℳ(t) ⟩)^i,
where the second line is written in terms of the experimentally measured quantities, Eq. (<ref>).
Finally, the skewness 𝒮(t) and kurtosis 𝒬(t) are computed as
𝒮(t) = α_3(t)/α_2(t)^3/2
𝒬(t) = α_4(t)/α_2(t)^2 - 3.
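A minimal Python sketch of this estimation pipeline is shown below, assuming the post-selected data are stored as a dictionary of counts keyed by (initial bitstring, measured bitstring); the data layout, helper names, and toy counts are our own illustrative choices, not the format used in the experiment.

import numpy as np
from math import comb

def two_n1_right(bits):
    # 2 * N_1^R: twice the number of 1s in the right half of the chain
    return 2 * sum(bits[len(bits) // 2:])

def raw_moments(counts, k_max=4):
    # counts: {(initial_bitstring, measured_bitstring): post-selected shots}
    # returns <M^k> for k = 0 .. k_max, averaged uniformly over initial states
    initial_states = sorted({i for i, _ in counts})
    raw = np.zeros(k_max + 1)
    for i in initial_states:
        shots = {j: c for (i2, j), c in counts.items() if i2 == i}
        total = sum(shots.values())
        for k in range(k_max + 1):
            raw[k] += sum(c * (two_n1_right(j) - two_n1_right(i)) ** k
                          for j, c in shots.items()) / total
    return raw / len(initial_states)

def skewness_kurtosis(raw):
    # central moments via the binomial expansion above, then S and Q
    mean = raw[1]
    alpha = [sum(comb(k, n) * raw[k - n] * (-mean) ** n for n in range(k + 1))
             for k in range(5)]
    return alpha[3] / alpha[2] ** 1.5, alpha[4] / alpha[2] ** 2 - 3.0

# Toy input: one initial state and three number-conserving outcomes.
counts = {((1, 1, 0, 0), (1, 0, 1, 0)): 40,
          ((1, 1, 0, 0), (1, 1, 0, 0)): 50,
          ((1, 1, 0, 0), (0, 1, 0, 1)): 10}
print(skewness_kurtosis(raw_moments(counts)))   # (0.0, -2.0) for this toy data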
Statistical uncertainties of each of these quantities are computed using the remove-one jackknife method, wherein one initial state is removed from the sample, and the variation of the quantity of interest (e.g. of ⟨ℳ(t)⟩, α_2(t), 𝒮(t), or 𝒬(t)), depending on which state is removed, is used to estimate the statistical uncertainty of that quantity. The jackknife method is also used to estimate bias, which is found to be negligible compared to the statistical uncertainties. The jackknife method is described in more detail in Section <ref>.
In addition to post-selecting on number conservation, we apply several additional error avoidance and mitigation techniques:
(1) We post-select on the causal possibility of the observed bitstring. For example, if an initial bitstring on 8 qubits is 11011000, then it is not possible, in a noiseless system, to observe 01011001 at cycle 1. The rightmost 1 appeared acausally (likely by readout error) and hence the bitstring is filtered out even though it contains the correct number of 1s. We have an efficient algorithm (described in Section <ref>) for checking whether a given observed bitstring is causally possible after t cycles from a given initial bitstring. This filtering mostly affects the earliest few cycles, for which the number of causally connected bitstrings is small. This filtering is a small effect compared to the number-conserving post-selection, which keeps an exponentially decaying number of bitstrings as a function of cycle number (see Figure <ref>A).
(2) Because the effects of amplitude damping (T_1) on our experiment are worse for initial bitstring states with more 1s, when an initial state is more than half-full, i.e., the number of 1s is greater than 23, we relabel the |0⟩ and |1⟩ states: we start in the initial bitstring b̅_i instead of b_i and then replace each measured bitstring b_j with b̅_j, where b̅ means applying a NOT operator to all of the bits in b. The advantage gained from this technique is illustrated in Figure <ref>C.
Figure <ref>A shows exponential decay of the fraction of counts that survive the post-selection. We call the decay constant the algorithmic relaxation time, T_1^A, which, as illustrated in Figure <ref>C, is about 3 cycles at half filling. If one naively estimates T_1^A at half-filling from
e^-t/T_1^A?=∏_i=1^23 e^-t/T_1^(i)⟹1/T_1^A?=∑_i=1^231/T_1^(i),
where T_1^(i) is the T_1 of qubit i, measured at its idle frequency, we obtain an estimate for T_1^A of over 7 cycles, even if we pick the sum to be over the 23 qubits with the shortest T_1 out of the 46 total.
There are two main mechanisms expected to cause discrepancies between the algorithmic T1 and the estimate based on single-qubit T1 values. First, when the coupling is turned on, the coupler is brought close to the qubits in frequency, allowing noise in the coupler to affect the qubit. This can also enable noise-induced transitions from the qubit to the coupler. Second, the relevant T_1 for the experiment is not the T_1 at the idle frequency, even though that is what is typically optimized for and reported. The fSim gates are implemented as in Refs. <cit.>; pairs of qubits are tuned to their interaction frequencies in a trapezoidal coupler pulse, the amplitude and duration of which are tuned to obtain the desired SWAP and controlled-phase angles. The resulting fSim gate includes single-qubit phases, which must be calibrated to zero by applying physical Z rotations <cit.>. Physical Z gates are fixed-duration 10-ns gates in which the qubit frequency is detuned from the idle frequency f_0 to the frequency f_z. In the frame rotating at the idle frequency, the qubit accumulates a phase of 2π (f_z - f_0) × 10 ns. Therefore, the full range of phases from -π to π can be obtained by the range of frequencies |f_z - f_0| ≤ 0.05 GHz. Figure <ref>A shows T_1 as a function of frequency for a typical qubit in our chain, indicating the idle frequency, the interaction frequencies with the two neighboring qubits, and the range of frequencies used for physical-Z rotations. It is readily seen that, although the idle frequency may be optimized to give a long T_1, other frequencies used during the circuit execution have T_1s that can be about a factor of 2 shorter. Figure <ref>B shows how the frequencies for each of the 46 qubits in our chain vary over the course of a cycle. Clearly, T_1 during the circuit execution differs from T_1 at the idle frequency, and the factor-of-two difference between the measured and predicted algorithmic relaxation time is plausibly explained.
Post-selection largely mitigates against amplitude damping, characterized by the algorithmic relaxation time, at the cost of an exponential overhead in the number of shots required. However, some errors make it past the post-selection. In particular, although amplitude damping causes the number of 1s to decrease, 0→ 1 readout errors cause it to increase. Therefore, when both amplitude damping and 0→ 1 readout error occur, the measured bitstrings can have the correct number of 1s and pass the post-selection. As evident in Figure <ref>A, by later cycles, the vast majority of bitstrings have had some amplitude damping, and on 46 qubits, it is likely that at least one 0→ 1 readout error will occur (typical readout error rates are shown in Figure <ref>), so this is a non-negligible effect. It manifests as excitations appearing to jump nonlocally along the chain, moving from the side with high concentration to the side with low concentration faster than they would without noise. As described in Section <ref>, we perform simulations including this effect and find that it explains most of the discrepancy between the noiseless simulation and the experiment. Evidently, the quality of our post-selected experimental results could be improved by changing how the readout calibration is done; the readout centers could be chosen to decrease the 0→ 1 error rate at the expense of the 1→ 0 error rate (and hence requiring more shots). We leave this modified calibration technique for future work.
There are other sources of errors that are not mitigated by post-selection. These include dephasing, leakage (occupation of the |2⟩ state), and control errors (fSim angle miscalibrations). Because the coupling strengths used here are not particularly high, leakage is not expected to be a dominant source of error. We characterize single- and two-qubit dephasing, as well as control errors, and include these effects in our simulations. However, we find that most of the observed discrepancy is explained by T_1 and readout errors alone (Figure <ref>).
The data included in this paper were collected over the course of several months, on different sets of qubits and two different devices. In order to ensure consistent data quality, readout error rates, single-qubit error rates, and two-qubit cross-entropy benchmarking (XEB) fidelities were measured periodically, typically after every 10 initial states; we kept only data for which the maximum error rates were below thresholds that we set.
§.§ Causal filter
Here we present the causal filter with which we post-select our measured bitstrings. We illustrate the algorithm with the two bitstrings mentioned in Section <ref>. Suppose that the initial bitstring is 11011000 and we want to know whether 01011001 is a possible measurement outcome after one cycle.
To answer this question, we determine the minimum number of cycles required to obtain 01011001 in the ideal dynamics. First, we assign identities to the “1"s in the initial and final bitstring determined by their order. This is illustrated in Figure <ref>, where we assign colors to the “1"s in both the initial bitstring (on the left) and the final bitstring (on the right). We then consider each layer of fSim gates and move the excitations if it is allowed by the gates and if doing so brings the excitations closer to their desired locations. For example, in the first layer, the blue and orange excitations are blocking each other from moving (since color is determined by order). The green one could move up, but doing so would bring it further from its final position, so it does not move. The red excitation moves down because that brings it closer to its final position. From the figure, it is clear that at least 1.5 cycles are needed to obtain 01011001 from 11011000. Therefore, this bitstring would be filtered out if seen after only one cycle.
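The following Python sketch implements the greedy procedure just described, under the assumption that the even bonds (0,1),(2,3),… act in the first layer of each cycle and the odd bonds in the second; this layer convention, the helper names, and the safety cap are ours.

def min_cycles(initial, final):
    # Greedy estimate of the minimum number of cycles needed to reach `final`
    # from `initial` under the brick-wall circuit; even bonds act first in each
    # cycle (our assumed convention).
    src = [i for i, b in enumerate(initial) if b == 1]
    dst = [i for i, b in enumerate(final) if b == 1]
    assert len(src) == len(dst), "number conservation violated"
    pos, layer = list(src), 0
    while pos != dst:
        if layer > 4 * len(initial) ** 2:       # safety cap for the sketch
            return float("inf")
        occupied = set(pos)
        offset = layer % 2                      # 0: bonds (0,1),(2,3),...; 1: (1,2),(3,4),...
        for k, p in enumerate(pos):
            step = (dst[k] > p) - (dst[k] < p)  # move toward the target position
            if step == 0:
                continue
            q = p + step
            if min(p, q) % 2 != offset or q in occupied:
                continue                        # no gate on this bond, or site blocked
            occupied.remove(p)
            occupied.add(q)
            pos[k] = q
        layer += 1
    return layer / 2.0                          # two layers per cycle

# Example from the text: 1.5 cycles, so the outcome is filtered out at cycle 1.
print(min_cycles([1, 1, 0, 1, 1, 0, 0, 0], [0, 1, 0, 1, 1, 0, 0, 1]))  # 1.5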
§.§ Gate calibration
We implement fSim gates using the same trapezoidal coupler pulses used in previous works <cit.>. We note some differences between how gates are calibrated here versus in previous works. (1) In Ref. <cit.>, the fSim gates were not of uniform duration across pairs. Here, we adjust the padding (the idle time in Figure <ref>B) so that the gate duration (including the added padding) is uniform across pairs, leading to the neat alignment in time across qubits shown in Figure <ref>B. (2) In Ref. <cit.>, Floquet calibration of the fSim angles allowed them to be controlled with high precision. Because control errors (called disorder in Figure <ref>) are a negligible source of error for us, we instead use unitary tomography to calibrate the fSim angles. This enables us to calibrate gates quickly and in a way that is mostly automated. We tried Floquet calibration, which allows for more precise calibrations of the angles at the cost of a higher overhead, but found that it did not improve our gate fidelities. (3) We iteratively calibrate the hold time T and coupling strength g_max of the trapezoidal pulse by measuring the fSim angles θ and ϕ at points in a small cross shape in the (T, g_max) plane centered at the previous guess, fitting the polynomials
θ = f((b_1 g_max + b_0)(T + T_b))
ϕ = (c_1 g_max + c_0)(T + T_c),
where f is the triangle-wave function illustrated in Figure <ref>. This technique allows for fast calibration of fSim gates without relying on expensive 2d sweeps and in a way that is more robust to noise than gradient descent. Figure <ref> illustrates this calibration procedure.
An advantage of being able to quickly optimize T and g_ max for the desired fSim angles is that we can now put this in an outer loop that adjusts the interaction frequencies. Indeed, during gate calibration, we set a minimum two-qubit XEB fidelity and optimize the interaction frequencies (re-optimizing T and g_ max each time) until all qubit pairs achieve the desired fidelity.
§.§ Jackknife estimate of uncertainties
We use the “delete one" jackknife to estimate the statistical uncertainty of our skewness, kurtosis, and dynamical exponent measurements <cit.>. For finite μ, where we average over initial states, define θ̂_(i) to be the quantity of interest, for example the skewness or kurtosis, computed with initial state i removed. Define θ̂_(.) = 1/N_s∑_i θ̂_(i), where N_s is the number of initial states. Then we use
σ_θ̂ = √(N_s - 1/N_s∑_i( θ̂_(i) - θ̂_(.))^2)
as our estimate of the statistical uncertainty of the quantity θ̂.
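A minimal Python sketch of this delete-one jackknife is shown below; for simplicity the sketch pools all shots of the retained initial states when evaluating the statistic, whereas the analysis in the text weights initial states equally, and the toy data are ours.

import numpy as np

def skewness(x):
    d = x - x.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def jackknife_uncertainty(samples_by_state, statistic):
    # Delete-one jackknife over initial states; `samples_by_state[i]` holds the
    # transferred-magnetization samples for initial state i, and `statistic`
    # maps a 1D array of samples to a scalar (e.g. skewness).
    n = len(samples_by_state)
    theta_i = np.array([
        statistic(np.concatenate([s for k, s in enumerate(samples_by_state) if k != i]))
        for i in range(n)])
    theta_dot = theta_i.mean()
    return np.sqrt((n - 1) / n * np.sum((theta_i - theta_dot) ** 2))

# Toy example: skewness uncertainty over five hypothetical initial states.
rng = np.random.default_rng(0)
samples = [rng.choice([-2, 0, 2], size=200, p=[0.2, 0.5, 0.3]) for _ in range(5)]
print(jackknife_uncertainty(samples, skewness))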
The averages of skewness or kurtosis over cycle number are uncertainty-weighted averages, and the uncertainty of the average is computed directly using Eq. (<ref>).
In the case of μ=∞, there is only one initial state, so we instead perform the jackknife estimate by deleting each one of the shots. This gives us the uncertainty of the skewness and kurtosis at each cycle. Because the shots, unlike the initial states, are different at each cycle number, we cannot use the jackknife to directly compute the uncertainty of the cycle-averaged skewness and kurtosis for μ=∞. Instead, in this case, we treat the skewness and kurtosis at each cycle as independent random variables, so the uncertainty of their weighted average is
σ_ weighted avg = 1/√(∑_t w_t),
where the weight, w_t, at cycle t is 1/σ_t^2, where σ_t is the uncertainty of the quantity (either skewness or kurtosis) at cycle t.
§ SIMULATION TECHNIQUES AND NUMERICAL RESULTS
Our main quantity of interest, the transferred magnetization ℳ, counts the number of excitations that have moved across the center of the 1D chain. A significant simplification stems from the fact that, at early times, excitations far from the center have not had time to cross the center of the chain. As a result, we can imagine an infinitely large system and only simulate a finite number of sites in order to study it. In particular, at cycle t, it is only necessary to simulate 2t sites [For a particular initial bitstring state, the transferred magnetization is only independent of system size when N_Q ≥ 4t-2, as one would expect from the causality structure of the circuit, but after averaging over initial bitstrings (or equivalently in the mixed initial state), we find the transferred magnetization is exactly independent of system size as long as N_Q ≥ 2t sites.]. As a result, the optimal simulation technique varies depending on the cycle number. Through cycle 8 (16 qubits), we obtain exact results by simulating the full density matrix. Beyond that point, density matrix simulations become costly, so for cycles 9–18 (18–36 qubits), we instead sample random initial bitstring states, as done in the experiment, and apply exact statevector simulation to these initial states. For the pure domain wall case (μ=∞), we employ tensor-network simulations using the time-evolving block decimation (TEBD) algorithm <cit.> to extend the simulations to cycle 23.
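The light-cone simplification is easy to reproduce with a plain statevector simulator; the sketch below (not the qsim/cuQuantum code used for the results) evolves 2t qubits under brick-wall fSim layers and returns the exact distribution of ℳ for one initial bitstring. The big-endian bit ordering and the even-bonds-first layer convention are our assumptions.

import numpy as np

def fsim(theta, phi):
    return np.array([[1, 0, 0, 0],
                     [0, np.cos(theta), 1j * np.sin(theta), 0],
                     [0, 1j * np.sin(theta), np.cos(theta), 0],
                     [0, 0, 0, np.exp(-1j * phi)]])

def apply_gate(psi, gate, q, n):
    # apply a 4x4 gate to neighboring qubits (q, q+1) of an n-qubit statevector
    psi = np.moveaxis(psi.reshape([2] * n), [q, q + 1], [0, 1]).reshape(4, -1)
    psi = gate @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), [0, 1], [q, q + 1])
    return psi.reshape(-1)

def m_distribution(bits, theta, phi, cycles):
    # exact distribution of the transferred magnetization for one initial bitstring
    n = len(bits)
    psi = np.zeros(2 ** n, complex)
    psi[int("".join(map(str, bits)), 2)] = 1.0
    gate = fsim(theta, phi)
    for _ in range(cycles):
        for offset in (0, 1):                   # even bonds, then odd bonds
            for q in range(offset, n - 1, 2):
                psi = apply_gate(psi, gate, q, n)
    n1r0 = sum(bits[n // 2:])
    dist = {}
    for idx, p in enumerate(np.abs(psi) ** 2):
        if p > 1e-12:
            b = [(idx >> (n - 1 - j)) & 1 for j in range(n)]
            m = 2 * (sum(b[n // 2:]) - n1r0)
            dist[m] = dist.get(m, 0.0) + p
    return dist

# Example: a sharp domain wall on 2t = 6 qubits evolved for t = 3 cycles.
t = 3
print(sorted(m_distribution([1] * t + [0] * t, 0.4 * np.pi, 0.8 * np.pi, t).items()))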
§.§ Analytical results
For very early times, the matrices involved are small enough that it is possible to obtain relatively simple analytical results. Some of these are tabulated here:
§.§.§ Cycle 1
To compute the transferred magnetization at cycle 1, it is only necessary to consider two qubits with an fSim gate between them. Therefore, the probability distribution of the transferred magnetization takes a simple form. For positive integer power k, we have:
⟨ℳ̂^k ⟩ = 2^k sin^2θ tanhμ for odd k,
⟨ℳ̂^k ⟩ = 2^k-1 sin^2θ (1 + tanh^2 μ) for even k.
In particular, the mean, variance, skewness, and kurtosis are
⟨ℳ̂⟩ = 2 sin^2 θtanhμ
α_2 = 2 sin^2θ(1 +cos(2θ)tanh^2μ)
𝒮 = 2 √(2)((2 sin^4θtanh^2μ + 1) (sinh(2μ) + cosh(2 μ) + 1)^2 - 3 (sinh(4 μ) + cosh(4 μ) + 1) sin^2(θ)) tanhμ/(sinh(2 μ) + cosh(2μ) + 1)^2√((cos(2 θ)tanh^2μ + 1)^3)sinθ
= √(2) μ (2cscθ - 3sinθ) + O(μ^3)
𝒬 = 2csc^2θ - 3 + O(μ^2)
Observe that, for μ≪ 1, the skewness and kurtosis are both positive for small θ, i.e. the Trotter limit, and negative for large θ. The crossover happens at θ = arcsin(√(2/3)) ≈ 0.3 π. In Figure 3 of the main text, we choose θ = 0.4π, which is why we observe negative skewness and kurtosis. In continuous-time Hamiltonian dynamics, we expect the opposite signs.
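These cycle-1 expressions can be spot-checked with a few lines of Python, since at cycle 1 only the two central qubits matter and the populations reduce to a classical three-outcome distribution (ϕ does not enter); the parameter values below are arbitrary.

import numpy as np

mu, theta = 0.37, 0.4 * np.pi
p = np.exp(mu) / (2 * np.cosh(mu))            # P(left qubit = 1) = P(right qubit = 0)

p_plus = p ** 2 * np.sin(theta) ** 2          # a 1 hops left -> right: M = +2
p_minus = (1 - p) ** 2 * np.sin(theta) ** 2   # a 1 hops right -> left: M = -2

moment = lambda k: 2 ** k * p_plus + (-2) ** k * p_minus

print(np.isclose(moment(1), 2 * np.sin(theta) ** 2 * np.tanh(mu)))             # odd k
print(np.isclose(moment(2), 2 * np.sin(theta) ** 2 * (1 + np.tanh(mu) ** 2)))  # even k
var = moment(2) - moment(1) ** 2
print(np.isclose(var, 2 * np.sin(theta) ** 2 * (1 + np.cos(2 * theta) * np.tanh(mu) ** 2)))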
§.§.§ Cycle 2
It is also possible to obtain analytical expressions at cycle 2. In particular, the mean and variance of the transferred magnetization are
⟨ℳ̂⟩ = 2μsin^2θ(cos^4 θ (3 + cosϕ) + 2 sin^2 θ) + O(μ^3)
α_2 = sin^4 θ (1-cosϕ) + 1/8 (3+cosϕ)(7 sin^2 θ + sin^2(3θ)) + O(μ^2)
§.§ Simulation cost and runtime
To perform statevector simulations out to cycle 18 (36 qubits), we use NVIDIA's cuQuantum <cit.> and its interface with qsim <cit.>. cuQuantum supports multi-GPU quantum simulations, and with eight 80-GB NVIDIA A100 GPUs, available to virtual machines running in Google Cloud's compute services, we can simulate up to 36 qubits. On this platform, a noiseless 18-cycle simulation takes about 17.6 seconds per initial state. However, the memory required to store the state increases exponentially in the cycle number, as shown in Figure <ref>. The cuQuantum implementation stores 2^2t complex numbers. The memory footprint could be reduced by taking advantage of number conservation, in which case only \binom{2t}{t} complex numbers would be needed to represent the state.
§.§ Noisy simulations
In this subsection, we describe how we performed the noisy simulations shown in Figure <ref> and Figure 2 of the main text. The simulations with disorder were performed in a straightforward way; we simply measured the actual fSim angles, including the single-qubit phases, using unitary tomography, and used the measured angles in the simulation, simulating only the 2t qubits about the center, as in the noiseless statevector simulations. The simulations with dephasing were also performed in a straightforward way; we simply averaged over many circuits, adding high- and low-frequency Gaussian noise to the fSim angles, as well as Z-rotations between the fSim gates with random angles that vary both within a circuit and across shots.
The simulation of amplitude damping and readout error is slightly more involved. For each initial 46-qubit bitstring, we consider separately the 2t qubits about the center and the remaining 46-2t outer qubits. For the center qubits, we perform a noisy simulation using cirq/qsim <cit.> that includes the measured amplitude damping as gates applied between the layers of fSim gates. The outer qubits are treated as if no two-qubit gates are applied; qubits prepared in |1⟩ are stochastically flipped to |0⟩ with a probability 1-e^-t/T_1. The resulting bitstrings from the center and outer qubits are concatenated back together, and then bits are randomly flipped according to the 0→1 readout error rate e_0 and the 1→0 readout error rate e_1. Finally, the same post-selection that is used in the experiment is applied to the simulated bitstrings, so that only those conserving the number of 1s and satisfying the causality constraints survive. The readout error rates used here are those measured on the device at the time the experiment was run, including the qubit-by-qubit variations. The amplitude damping rate, T_1, is obtained as in Figure <ref>A and is approximated as being the same across all qubits.
§.§ Length independence
As demonstrated in Table <ref>, the transferred magnetization is independent of the length of the chain as long as the chain consists of at least 2t qubits, where t is the cycle number.
§.§ Crossing time
In Figure 3 of the main text (panels C and D), we plot the skewness and kurtosis as functions of μ t^2/3. The data collapse observed in Fig. 3C is seen even more clearly in the numerics, shown in Figure <ref>. In the inset to panel A, we see that the time at which the skewness becomes positive scales like μ^-3/2, and, in panel B, we see a collapse of the numerical data when plotted as a function of μ t^2/3. The power law scaling sensibly predicts that the crossing time becomes infinite as μ→ 0, which makes sense because the skewness is always 0 at μ=0. The kurtosis, however, cannot be a function of μ t^2/3 because it is not constant when μ=0.
Figure <ref> shows that the skewness also appears to collapse reasonably well when plotted as a function of μ t^1/3, thus conveying the difficulty of estimating the exponent of t by eye. In order to do so in an unbiased manner, we define a quantitative measure of data collapse based on the appropriately normalized sum of fit residuals (where the fit describes the purportedly universal scaling function). Using μ t^γ as the scaling variable, for varying γ, we find that this metric is minimized for γ∼ 0.65. This is consistent with our initial observation of μ t^2/3 seemingly yielding the best data collapse.
§.§ Sweeps of anisotropy and imbalance
The transport characteristics of the XXZ model are strongly dependent on the anisotropy parameter, Δ. In particular, ballistic, superdiffusive and diffusive behaviors are expected in the regimes where Δ<1, Δ=1 and Δ>1, respectively. Moreover, the KPZ conjecture was only proposed for the isotropic point (Δ=1). In order to get a better sense of the parameter space in which we are operating, we performed 2D numerical sweeps of the anisotropy Δ and the initial imbalance
μ. The results are shown in Figure <ref>. They illustrate a sign change in the skewness and kurtosis close to the Heisenberg point, Δ=1, as well as a clear change in the dynamical exponent.
§ FURTHER EXPERIMENTAL DATA
§.§ Dynamical exponent
In Fig. <ref>, we plot both the experimentally observed and numerically simulated dynamical exponents of the mean and the variance, as a function of the initial imbalance, μ. At large μ, we find that the dynamical exponent is higher than the superdiffusive value of 3/2. The observed values are consistent with Ref. <cit.>, where it was found that the dynamical exponent drifts from 3/2 at small initial imbalance m to approximately 5/3 when m is about 1.
§.§ Pure domain walls
In addition to the results presented in the main paper, we also studied the pure domain wall, μ=∞. This is a simpler experiment because it does not require any averaging over initial states. It is also easier for classical simulations; TEBD simulations converge at least to cycle 23, allowing us to check our experimental results at later cycles than is possible at finite μ with the simulation techniques used here. Experiment and simulation results are shown in Figure <ref>. Our findings are largely consistent with expectations (see Section 6.1 of <cit.> for a review). We observe an absence of transport in the easy-axis regime (Δ > 1), with the observed transport in the experiment consistent with amplitude damping and readout errors (see Figure <ref>). In the easy-plane regime (Δ < 1), we observe ballistic transport. In the isotropic (Δ = 1) case, we observe transport with a dynamical exponent of about 5/3, consistent with the finding of Ref. <cit.> and differing from the 3/2 dynamical exponent of KPZ. Figure <ref> compares the skewness and kurtosis at the isotropic point with the KPZ predictions. While the skewness is close to the KPZ prediction, it can be seen in the simulation results that it continues to increase above the KPZ value. The kurtosis approaches the KPZ prediction, but it is not clear that it has stopped increasing.
§ THE KARDAR-PARISI-ZHANG UNIVERSALITY CLASS
In 1985, Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang <cit.> set out to study the general properties of stochastically growing interfaces, examples of which include flame fronts and tumors. They abstracted these situations to a height function h(x⃗,t) that obeys
∂ h/∂ t = ν∇^2 h + λ/2(∇ h)^2 + η(x⃗, t),
where η(x⃗, t), at each position and time, is an independent zero-mean Gaussian random variable with variance proportional to a parameter D. Taking a spatial derivative turns this into the stochastic Burgers equation for the slope ∇ h. The ∇^2 h is a diffusive term, and the (∇ h)^2 is a nonlinearity. In general, an equation describing a growing interface may have higher powers of the slope, such as (∇ h)^4, but these are irrelevant to the large-scale physics in the renormalization group (RG) sense. Indeed, in 1+1 dimensions (d=1), the KPZ equation [Eq. (<ref>)] has divergences that must be regularized, e.g., by putting it on a spatial lattice or imposing a maximum cutoff wavenumber Λ in Fourier space. Kardar, Parisi, and Zhang considered an RG flow in which (1) high-wavenumber modes are integrated out (Λ→ e^-lΛ), (2) space and time are rescaled so that the new smallest length scale is labeled by the same numerical value as the old smallest length scale (x⃗→ e^-lx⃗, t→ e^-zlt), and (3) the height function is rescaled (h→ e^-(d+χ)lh). This procedure can be thought of as coarse graining and zooming out, while changing units so that it appears as though the zoom-out did not occur. Kardar, Parisi, and Zhang showed that Eq. (<ref>) is a fixed point of this RG flow, for specific choices of the scaling exponents and parameters. In 1+1 dimensions in particular, if z=3/2, χ=1/2, and λ^2 D/ν^3 = 2, then Eq. (<ref>) is invariant under the rescaling and coarse graining procedure. Further, this fixed point is stable; if λ^2 D/ν^3 > 2, then it will flow down to 2 under the coarse graining. Conversely, if λ^2 D/ν^3 < 2, then it will flow up to 2.
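As a purely illustrative companion to this discussion, the following Python sketch integrates the 1+1-dimensional KPZ equation with a naive finite-difference Euler-Maruyama scheme; the discretization, parameter values, and grid size are our own choices and are not tuned for quantitative accuracy.

import numpy as np

rng = np.random.default_rng(0)
L, dx, dt, steps = 512, 1.0, 0.01, 10000
nu, lam, D = 1.0, 1.0, 1.0
h = np.zeros(L)                               # flat initial interface

for _ in range(steps):
    lap = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx ** 2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)
    noise = rng.normal(0.0, np.sqrt(2 * D * dt / dx), L)
    h += dt * (nu * lap + 0.5 * lam * grad ** 2) + noise

# For a flat initial condition the interface width grows as t^(1/3) in the KPZ class.
print("interface width after t =", steps * dt, ":", np.std(h))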
Kardar, Parisi, and Zhang therefore proposed a new universality class (the KPZ universality class) into which their equation falls. They conjectured that a variety of other systems, such as ballistic deposition (e.g., snow falling and sticking together), and the Eden model (a random growth model), are in this universality class, based on scaling exponents observed in earlier numerical experiments <cit.>.
The KPZ equation and universality class have since been studied in great detail. For example, Kurt Johansson <cit.> studied the asymmetric exclusion process, a model in this universality class, for an initial state corresponding to a wedge-shaped h(x,t=0), and found that a quantity corresponding to the regularized height function, 2h(0,t) - h(∞,t) - h(-∞, t), follows the Tracy-Widom (TW) distribution for the largest eigenvalue in the Gaussian Unitary Ensemble (GUE) at late times. Prähofer and Spohn <cit.> generalized this result, using a mapping from the polynuclear growth model, which is in the KPZ universality class, to random permutations and from there to random Gaussian matrices to again identify the asymptotic probability distribution of the regularized height function. The precise distribution depends on the initial conditions, and there are three cases that are relevant for us: (1) flat, meaning h(x,0) = 0, (2) stationary, meaning that h(x_i, 0) = h(x_i-1, 0) + η_i, where η_i is randomly ± 1 with equal probabilities, and (3) wedge-shaped, meaning that h(x,0) = -|x|/δ. The wedge-shaped initial condition leads to the same distribution as the curved initial condition studied by Prähofer and Spohn <cit.> and is unaffected by nonzero variance in h(x_i, 0), unlike the flat initial condition, which becomes the stationary initial condition with the addition of fluctuations. Prähofer and Spohn found that the asymptotic probability distributions of the regularized height function in these three cases are (1) GOE Tracy-Widom, (2) Baik-Rains, and (3) GUE Tracy-Widom, in agreement with <cit.>. The corresponding values of skewness and kurtosis are listed in Table <ref>.
Prähofer and Spohn <cit.> further used the polynuclear growth model to solve for the two-point correlation function,
C(x,t) = ⟨(h(x,t) - h(0,0) - t⟨∂_t h⟩)^2 ⟩,
assuming stationary initial conditions, which imply that C(x,0) = D/ν |x|. The slope-slope correlation function can then be obtained as
⟨∂_x h(0,0) ∂_x h(x,t)⟩ = 1/2∂^2_xC(x,t).
C(x,t) takes the form
C(x,t) = t^2/3g( const· x/t^2/3),
where g(y) is a universal scaling function. Defining the scaling function f(y) = 1/4 g”(y), which is proportional to the slope-slope correlation function, Prähofer and Spohn obtained exact numerical solutions for f(y), which they found to behave as f(y) ∼ e^-0.295 |y|^3 for large y, falling off faster than a Gaussian.
Evidence for anomalous transport in the Heisenberg spin chain at nonzero temperature was first found in the late 1990s. Sachdev and Damle <cit.> explained diffusive (z=2) nonzero-temperature transport in the easy-axis (Δ > 1) XXZ model even though quasiparticles propagate ballistically, whereas other works <cit.> found ballistic (z=1) behavior at finite temperature in the easy-plane (Δ < 1) regime, suggesting anomalous transport at Δ = 1.
The first numerical evidence for anomalous transport in the infinite-temperature Δ = 1 Heisenberg spin chain was provided in 2011 by Ref. <cit.>. The z=3/2 exponent was demonstrated numerically in 2017 <cit.>, in partially polarized domain wall initial states similar to those studied in our work, which approach the infinite-temperature state as μ→ 0. The 3/2 exponent alone was not enough for the authors to propose that the Heisenberg spin chain is in the KPZ universality class, as it could have other explanations. In 2019 <cit.>, however, they found numerically, for both the continuous-time Heisenberg model and the Floquet version studied here, that the two-point spin-spin correlation function at infinite temperature precisely matched the KPZ prediction for the slope-slope correlation function, Eq. (<ref>), including the deviations from Gaussian at the tails. They therefore proposed that the infinite-temperature spin-1/2 Heisenberg model is in the KPZ universality class, with σ_i^z ↔∂_x h(x_i), and that the infinite-temperature initial condition on the spin chain side corresponds to the stationary initial state (see Table <ref>) on the KPZ side. In their work, they used the finite-μ domain wall states studied here as a computational tool for obtaining the two-point correlation function at infinite temperature.
A number of works have proposed theoretical explanations for the observed z=3/2 dynamical exponent in the Heisenberg spin chain (e.g., <cit.>, see <cit.> for a review). The picture that emerges from these theoretical explanations is that the z=3/2 dynamical exponent and the two-point correlation function are universal for 1D integrable quantum systems with a non-Abelian global symmetry <cit.>, which, in the spin-1/2 Heisenberg model, is SU(2).
The z=3/2 dynamical exponent has also been observed in several experiments <cit.>. Ref. <cit.>, for example, studied the transferred magnetization, which, assuming σ_z ↔∂_x h, corresponds to 2h(0,t) - h(∞, t) - h(-∞, t) for domain-wall initial states with several initial imbalances μ. They confirmed the KPZ predictions that the mean and variance of the transferred magnetization both grow as t^2/3. They also measured the skewness which, in a nonzero-μ domain wall initial state, is expected to asymptote to 0.2241 (see Table <ref>). They measured 0.33 ± 0.08, where the uncertainty is one standard deviation, a result consistent with the KPZ prediction. They also confirmed that breaking either integrability or SU(2) symmetry causes the dynamics to become either ballistic (z=1) or diffusive (z=2). However, they measured the skewness for a domain wall with a very large initial imbalance (μ=1.5), whereas the KPZ dynamics are expected to emerge at small μ.
However, there is a problem with the conjecture that the infinite-temperature (μ=0) Heisenberg model is in the KPZ universality class. As Refs. <cit.> point out, the probability distribution of the transferred magnetization in this state must be symmetric; excitations are just as likely to move from the right side of the chain to the left as from the left to the right. Therefore, all of the odd moments of this distribution must be zero. This differs from the Baik-Rains distribution, which has a nonzero skewness of 0.359 <cit.>; see Table <ref>.
If one reversed the order of limits, first taking t→∞ and then μ→0, the resulting transferred magnetization distribution may be skewed because even an infinitesimal domain wall breaks the mirror symmetry of the μ=0 state. However, this does not resolve the issue because the late-time behavior at nonzero μ has been shown to be diffusive rather than KPZ <cit.>. Ref. <cit.> would suggest otherwise, namely, that KPZ dynamics emerge even for μ>0. In that work, the Heisenberg spin chain is coarse-grained and the global SU(2) symmetry is promoted to a gauge symmetry, with a dynamical gauge field specifying the direction of the local Bethe vacuum in each lattice cell. A long-wavelength torsional mode of the gauge field is shown to obey a stochastic Burgers equation when the quasiparticle occupancy is uniform across cells, a condition that also holds for the μ>0 domain-wall states. However, this work does not connect the dynamics of the torsional mode with the dynamics of σ_i^z, the variable that has been observed to play the role of ∂_x h. A summary of the different regimes discussed here and ways in which they differ from the KPZ universality class is shown in Table <ref>.
There are other ways of taking the limit, illustrated in Fig. <ref>. So far, we have considered taking μ→0 first (purple line) or t→∞ first (red line), neither of which can result in KPZ dynamics. Taking a simultaneous limit can avoid these theoretical arguments. In particular, diffusive dynamics are expected to emerge at a time that scales as 1/μ^3 <cit.>. Therefore, if t is scaled with μ in such a way that it remains less than 1/μ^3, for infinitesimal μ, but still approaches infinity, the theoretical arguments against KPZ are avoided and the experiment has a chance of providing new information. In Fig. <ref>, the orange curve, μ∼ t^-1/3 indicates the threshold for diffusive dynamics; any curve approaching the origin that remains above the orange curve will result in diffusion. The purple curve results in symmetrically-distributed transferred magnetization, unlike the KPZ prediction, which is skewed. The green curve is an example of a way of taking the limit that is not ruled out theoretically. Although there is no prior theoretical motivation for taking the limit in any order other than μ→0 first, the collapsing behavior observed in Fig. <ref> suggests that the skewness may be a function of μ t^2/3, which is constant along the μ∼ t^-2/3 curve. Further, from Fig. <ref> and Fig. 3 of the main text, if μ t^2/3 is fixed to a large number (at least about 10), the skewness appears to be consistent with that of the TW GUE probability distribution. However, we experimentally find a kurtosis of about -0.05± 0.02 (Fig. 3 of the main text and Fig. <ref>), inconsistent with the TW GUE kurtosis of 0.09. This rules out KPZ dynamics on the timescale of the experiment. It does not rule out KPZ dynamics at much later times, but we do not see evidence for these dynamics either.
Therefore, the challenge is to explain why the dynamical exponent and two-point correlation function, at infinite temperature, look like KPZ, universally across integrable 1D quantum systems with a global non-Abelian symmetry, and yet other observables, such as the skewness of the transferred magnetization when μ=0, differ. In response to this challenge, Refs. <cit.> proposed and studied a classical Landau-Lifshitz (CLL) model and Refs. <cit.> a non-linear fluctuating hydrodynamics (NLFH) model. Both systems predict a symmetric distribution for the transferred magnetization. CLL predicts an excess kurtosis close to 0, whereas the NLFH model predicts 0.14. Although CLL agrees nicely with the findings reported in this work, it is an example of another system with similar behavior rather than an explanation of why these features should appear in the Heisenberg spin chain.
§ UNIFYING FSIM CONVENTIONS
§.§ Placement of the phase angle
In our work, we use the following definition of the fSim gate:
U_fSim=(
1 0 0 0
0 cos(θ) isin(θ) 0
0 isin(θ) cos(θ) 0
0 0 0 e^-iϕ),
which is a fully general number-conserving two-qubit gate up to single-qubit Z rotations. Another natural choice would be to split the phase between the |00⟩ and |11⟩ states:
U_fSim'=(
e^-iϕ/2 0 0 0
0 cos(θ) isin(θ) 0
0 isin(θ) cos(θ) 0
0 0 0 e^-iϕ/2).
This latter definition, which is more directly related to the trotterized Heisenberg Hamiltonian, is related to ours by
U'_ fSim = e^-i σ_1^zϕ/4 e^-i σ_2^zϕ/4 U_ fSim = U_ fSim e^-i σ_1^zϕ/4 e^-i σ_2^zϕ/4.
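This identity is easy to verify numerically; the short Python check below uses arbitrary angles and our own helper for the single-qubit Z rotation.

import numpy as np

theta, phi = 0.4 * np.pi, 0.8 * np.pi
U = np.array([[1, 0, 0, 0],
              [0, np.cos(theta), 1j * np.sin(theta), 0],
              [0, 1j * np.sin(theta), np.cos(theta), 0],
              [0, 0, 0, np.exp(-1j * phi)]])
Uprime = U.copy()
Uprime[0, 0] = np.exp(-1j * phi / 2)
Uprime[3, 3] = np.exp(-1j * phi / 2)

def rz(a):                             # exp(-i a sigma_z / 2)
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

Z1 = np.kron(rz(phi / 2), np.eye(2))   # exp(-i sigma_1^z phi/4)
Z2 = np.kron(np.eye(2), rz(phi / 2))   # exp(-i sigma_2^z phi/4)
print(np.allclose(Uprime, Z1 @ Z2 @ U), np.allclose(Uprime, U @ Z1 @ Z2))  # True True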
It is not immediately obvious that our experiment is insensitive to whether we use U_ fSim or U'_ fSim, but this turns out to be the case. The transferred magnetization is independent of whether one uses U'_ fSim or U_ fSim as long as the number of cycles t is at most N_Q/2. We verified this similarly to how we verified that the transferred magnetization is independent of N_Q as long as t ≤ N_Q/2 in Table <ref>. For example, with μ=0.5, and (θ, ϕ) = (0.4π, 0.8π), a density matrix simulation on four qubits gives a kurtosis after two cycles of -0.30867052, to this many digits, regardless of whether one uses U_ fSim or U'_ fSim, and an 8-qubit simulation gives a kurtosis of -0.12588028 at cycle 4 regardless of which fSim gate one uses. The choice matters starting at cycle N_Q/2+1, but we are not interested in times past N_Q/2 because we have already seen (Table <ref>) that finite size effects appear there.
Similarly, the transferred magnetization is unchanged, even past time N_Q/2, under either θ→-θ or ϕ→-ϕ.
§.§ Comparison to the η, λ parameterization
Following Ref. <cit.>, it can be shown that Floquet application of the gate unitary,
(
1 0 0 0
0 sinη/sin(λ+η) sinλ/sin(λ+η) 0
0 sinλ/sin(λ+η) sinη/sin(λ+η) 0
0 0 0 1
),
gives the desired Heisenberg Hamiltonian evolution with Δ=cos(η) in the limit λ→ 0. Here, λ is imaginary and η is real in the gapless (Δ<1) regime, while λ is real and η is imaginary in the gapped (Δ>1) regime. We will here consider the latter case; however, a similar derivation applies for Δ<1 as well.
Setting this equal to e^iϕ/2 U_ fSim' (Eq. <ref>), we require that:
ie^iϕ/2sin(θ)=sinλ/sin(λ+η)
Comparing the magnitudes and phases of the two sides of this equation, one finds, respectively:
tan^2(θ)=-sin^2(λ)/sin^2(η),
tan(ϕ/2)=itan (λ)/tan(η).
Eliminating λ and using Δ=cos(η), we have:
Δ^2tan^2(θ) =tan^2(ϕ/2)(1+(1-Δ^2)tan^2(θ)),
Δ =sin(ϕ/2)/sin(θ)
Rate-Splitting Multiple Access for Simultaneous Multi-User Communication and Multi-Target Sensing
Kexin Chen, Yijie Mao, Member, IEEE, Longfei Yin, Student Member, IEEE, Chengcheng Xu,
and Yang Huang, Member, IEEE
This work has been supported in part by the National Nature Science Foundation of China under Grant 62201347; and in part by Shanghai Sailing Program under Grant 22YF1428400. (Corresponding author: Yijie Mao)
K. Chen and Y. Mao are with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China (email: [email protected]; [email protected]).
L. Yin is with the Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, U.K (email: [email protected]).
C. Xu is with the College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China (email: [email protected]).
Y. Huang is with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China (email: [email protected]).
July 31, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper, we initiate the study of rate-splitting multiple access (RSMA) for a mono-static integrated sensing and communication (ISAC) system, where the dual-functional base station (BS) simultaneously communicates with multiple users and detects multiple moving targets.
We aim at optimizing the ISAC waveform to jointly maximize the max-min fairness (MMF) rate of the communication users and minimize the largest eigenvalue of the Cramér-Rao bound (CRB) matrix for unbiased estimation. The CRB matrix considered in this work is general as it involves the estimation of angular direction, complex reflection coefficient, and Doppler frequency for multiple moving targets.
Simulation results demonstrate that RSMA maintains a larger communication and sensing trade-off than conventional space-division multiple access (SDMA) and it is capable of detecting multiple targets with a high detection accuracy.
The finding highlights the potential of RSMA as an effective and powerful strategy for interference management in the general multi-user multi-target ISAC systems.
Rate-splitting multiple access (RSMA), integrated sensing and communication (ISAC),
Cramér-Rao bound
§ INTRODUCTION
Integrated sensing and communication (ISAC) has gained recognition as a promising enabling technology for 6G and beyond.
The integration of these two functionalities into a unified framework is expected to unlock a wide range of new use cases and applications by elevating the overall performance, such as spectral efficiency and sensing accuracy.
Additionally, in light of the conflict and resource competition between remote sensing and communication, ISAC offers two significant advantages: integration gain and coordination gain, which stem from the shared utilization of spectrum, hardware architecture, and a collaborative signal processing platform.
One major challenge in ISAC lies in designing a dual-functional waveform, which is typically divided into the following three categories: sensing-centric design, communication-centric design, and joint design <cit.>.
In this work, we dedicate to the joint design category, which has been a mainstream in ISAC.
Another challenge introduced by ISAC is the interference between communication and sensing, which occurs due to the overlapping frequency bands.
Recent studies have shown that rate-splitting multiple access (RSMA), an advanced multiple access and dynamic interference management strategy, is capable of addressing this challenge and achieving flexible and robust interference management between communication and sensing as well as among communication users
<cit.>.
The concept of RSMA-assisted ISAC was initially explored in <cit.>, which highlights that the common stream of RSMA serves dual purposes. It not only better manages interference between communication users, but also acts as a radar sequence to match the transmit beampattern.
Further, <cit.> showed the advantages of RSMA-assisted ISAC in a practical scenario with partial channel state information at the transmitter (CSIT) and moving communication users. <cit.> extended RSMA-assisted ISAC from terrestrial communications to satellite communications.
Additionally,
<cit.> demonstrated that RSMA improves the trade-off between max-min fairness (MMF) rate and Cramér-Rao bound (CRB) of the single sensing target.
However, all aforementioned works are limited to the study of single-target sensing.
To the best of our knowledge, the performance of RSMA in multi-user multi-target ISAC has not been investigated yet.
In this work, we initiate the study of RSMA in a multi-user multi-target ISAC system. To measure the sensing performance of multiple moving targets, we derive a more general CRB radar sensing metric by involving the estimation of angular direction, complex reflection coefficient, and Doppler frequency for the targets. Simulation results demonstrate that RSMA-assisted multi-user multi-target ISAC maintains a better communication and sensing trade-off than space division multiple access (SDMA) and it detects multiple targets with a high detection accuracy.
§ SYSTEM MODEL
We consider a mono-static ISAC system assisted by RSMA, where the transmit antennas are shared between communicating with the users and sensing the moving targets.
The base station (BS), equipped with
N_t transmit antennas and N_r receive antennas, simultaneously communicates with K single-antenna downlink communication users and detects M moving targets. The communication users and moving targets are indexed by 𝒦={ 1,… ,K } and ℳ={ 1,… ,M }, respectively.
Consider the simple yet practical 1-layer RSMA model <cit.>, where the message U_k intended for communication user k is split into a common message U_c,k and a private message U_p,k. All common parts are collectively encoded into a single common stream s_c, and the private parts are separately encoded into private streams {s_p,k}_k=1^K.
Considering N transmission and radar pulse blocks in one coherent processing interval (CPI) indexed by 𝒩={ 1,… ,N }, the transmit data stream vector at each time index n is 𝐬[n]=[s_c[n],s_p,1[n],…,s_p,K[n]]^T∈ℂ^(K+1)× 1. The streams are linearly precoded by the precoding matrix 𝐖=[𝐰_c,𝐰_1,…, 𝐰_K]∈ℂ^N_t× (K+1), which remains constant during one CPI. The transmit signal at time index n is
𝐱[n]=𝐖𝐬[n]=𝐰_cs_c[n]+∑_k∈𝒦𝐰_ks_p,k[n],
where the data streams are assumed to satisfy 1/N∑_n∈𝒩𝐬[n]𝐬[n]^H=𝐈_K+1, implying that the entries are mutually independent and of unit power.
Hence, the covariance matrix of the transmit signal can be calculated by 𝐑_x=1/N∑_n∈𝒩𝐱[n]𝐱[n]^H=𝐖𝐖^H.
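As an illustration of this signal model, the Python sketch below draws random precoders and unit-power streams and checks that the sample covariance approaches 𝐖𝐖^H; the dimensions, the QPSK stream model, and the random precoders are placeholders of ours.

import numpy as np

rng = np.random.default_rng(0)
Nt, K, N = 4, 2, 10000
W = (rng.standard_normal((Nt, K + 1)) + 1j * rng.standard_normal((Nt, K + 1))) / np.sqrt(2)

# Unit-power, mutually independent common and private streams (QPSK here).
S = (rng.choice([1.0, -1.0], (K + 1, N)) + 1j * rng.choice([1.0, -1.0], (K + 1, N))) / np.sqrt(2)
X = W @ S                                            # x[n] = w_c s_c[n] + sum_k w_k s_{p,k}[n]

R_x = X @ X.conj().T / N                             # sample covariance over the CPI
print(np.allclose(R_x, W @ W.conj().T, atol=0.1))    # True: R_x approaches W W^H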
The signal received at the kth communication user at time index n is given as
y_k[n]
=𝐡_k^H𝐱[n]+z_k[n]
=𝐡_k^H𝐰_cs_c[n]+∑_i∈𝒦𝐡_k^H𝐰_is_p,i[n]+z_k[n], ∀ k∈𝒦,
where 𝐡_k∈ℂ^N_t× 1 is the communication channel between the BS and user k. It is assumed to be perfectly known at the BS and communication users. z_k[n] is the additive white Gaussian noise (AWGN) received at user k, which follows the distribution of 𝒞𝒩(0,σ _c^2).
As the transmit signal is also utilized for detecting the moving targets, the radar echo signal received at the BS at time index n is defined as
𝐲_s[n]=∑_m∈ℳα _me^j2πℱ_D_mnT𝐛(θ _m)𝐚^T(θ _m)𝐱[n]+𝐳_s[n],
where {α _m}_m=1^M represent the complex reflection coefficients proportional to the targets' radar cross-section (RCS). {ℱ_D_m}_m=1^M are the Doppler frequencies for different targets with ℱ_D_m=2v_mf_c/c, where v_m denotes the velocity of the mth moving target and c, f_c are the speed of light and carrier frequency, respectively. T represents the symbol period.
{θ_m}_m=1^M denote the interested targets' direction of departure (DoD) as well as the direction of arrival (DoA), which are equal in a mono-static system. 𝐚(θ_m)=[1,e^jπsin (θ_m),…,e^jπ(N_t-1)sin (θ_m) ]^T∈ℂ^N_t× 1, ∀ m ∈ℳ is the transmit steering vector, where the distance between adjacent array elements is half-wavelength. And 𝐛(θ_m)∈ℂ^N_r× 1 denotes the receive steering vector defined in the same way as 𝐚(θ_m).
𝐳_s[n] ∈ℂ^N_r× 1 denotes the AWGN following 𝒞𝒩(0^N_r× 1,𝐐), where 𝐐=σ _s^2𝐈_N_r.
For notation simplicity, equation (<ref>) is equivalently rewritten as
𝐲_s[n]=𝐁𝐔𝐄[n]𝐀^T𝐱[n]+𝐳_s[n],
where
𝐀=[𝐚(θ _1),…,𝐚(θ _M)],
𝐁=[𝐛(θ _1),…,𝐛(θ _M)],
α=[α _1,…,α _M]^T,
θ=[θ _1,… ,θ _M]^T,
𝐔=diag(α),
𝐄[n]=diag([e^j2πℱ_D_1nT, …, e^j2πℱ_D_MnT]^T).
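The following Python sketch generates the echo signal according to the model above; the array sizes, carrier frequency, target parameters, and noise level are placeholder values of ours.

import numpy as np

def steering(theta, n_ant):
    # half-wavelength ULA steering vector, as in the system model
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))

rng = np.random.default_rng(2)
Nt, Nr, M = 4, 4, 2
fc, c, T = 3e9, 3e8, 1e-6
theta = np.deg2rad([-20.0, 35.0])                 # target angles
alpha = np.array([0.8 + 0.2j, 0.5 - 0.6j])        # reflection coefficients
v = np.array([10.0, -15.0])                       # target velocities (m/s)
fD = 2 * v * fc / c                               # Doppler frequencies

A = np.stack([steering(t, Nt) for t in theta], axis=1)   # N_t x M transmit steering
B = np.stack([steering(t, Nr) for t in theta], axis=1)   # N_r x M receive steering
U = np.diag(alpha)
sigma_s = 0.1

def echo(x, n):
    E = np.diag(np.exp(1j * 2 * np.pi * fD * n * T))
    z = sigma_s * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)
    return B @ U @ E @ A.T @ x + z

x = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
print(echo(x, 0).shape)                           # (N_r,)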
§ PERFORMANCE METRICS AND PROBLEM FORMULATION
In this section, we specify the respective performance metrics for communication and radar sensing, namely, the MMF rate and CRB. The corresponding optimization problem to jointly maximize these two metrics is then presented.
§.§ Metrics for Multi-User Communication
We select MMF rate to evaluate the communication performance in the considered multi-user multi-target ISAC system. Following the principle of 1-layer RSMA <cit.>, each user sequentially decodes the common stream and its own private stream. The corresponding achievable rates for the common and private streams at each user are given as
R_c,k=log_2(1+|𝐡_k^H𝐰_c|^2/∑_i∈𝒦|𝐡_k^H𝐰_i|^2+σ_c^2), ∀ k∈𝒦,
R_p,k=log_2(1+|𝐡_k^H𝐰_k|^2/∑_i∈𝒦,i≠ k|𝐡_k^H𝐰_i|^2+σ_c^2), ∀ k∈𝒦.
In order for each user to successfully decode the common stream, the achievable common rate is defined by R_c=min_k∈𝒦{R_c,k}=∑_k∈𝒦C_k, with C_k denoting the allocated rate for transmitting user k's common message. Therefore, the achievable rate of user k is R_tot,k=C_k+R_p,k, ∀ k∈𝒦, of which the minimum value min_k∈𝒦{R_tot,k} is the MMF rate.
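A small Python sketch of these rate expressions is given below; the uniform split of the common rate, the matched-filter-like precoders, and the total-power normalization are simplifying choices of ours (in the paper the common-rate allocation and precoders are optimization variables and the power constraint is per-antenna).

import numpy as np

def rsma_mmf_rate(H, W, sigma2):
    # Common/private rates of the expressions above, with a uniform split of the
    # common rate across users (illustrative choice only).
    Nt, K = H.shape
    wc, Wp = W[:, 0], W[:, 1:]
    Rc, Rp = np.zeros(K), np.zeros(K)
    for k in range(K):
        hk = H[:, k]
        p_priv = np.abs(hk.conj() @ Wp) ** 2
        Rc[k] = np.log2(1 + np.abs(hk.conj() @ wc) ** 2 / (p_priv.sum() + sigma2))
        Rp[k] = np.log2(1 + p_priv[k] / (p_priv.sum() - p_priv[k] + sigma2))
    C = np.full(K, Rc.min() / K)      # equal share of the decodable common rate
    return (C + Rp).min()             # MMF rate for this allocation

rng = np.random.default_rng(1)
Nt, K, sigma2 = 4, 2, 1.0
H = (rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K))) / np.sqrt(2)
W = np.concatenate([H.sum(axis=1, keepdims=True), H], axis=1)     # crude precoders
W *= np.sqrt(Nt) / np.linalg.norm(W)                              # total power P = Nt
print("MMF rate (bit/s/Hz):", rsma_mmf_rate(H, W, sigma2))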
§.§ Metrics for Multi-Target Sensing
We choose the commonly used radar sensing metric CRB for target estimation. It is the lower bound on the variance of unbiased estimators <cit.> and is equivalent to the inverse of the Fisher information matrix (FIM) denoted by 𝐅, which means 𝐂𝐑𝐁=𝐅^-1.
The matrix 𝐅 involves the estimation of the angular direction, the complex reflection coefficient, and the Doppler frequency of the multiple moving targets. The parameter vector of the mth target is defined as ξ_m=[θ_m, Re(α_m), Im(α_m), ℱ_D_m]^T, ∀ m∈ℳ, thus extending 𝐅 to a 4M-dimensional matrix as
𝐅=2[ Re(𝐅_11) Re(𝐅_12) -Im(𝐅_12) -Im(𝐅_14); Re^T(𝐅_12) Re(𝐅_22) -Im(𝐅_22) -Im(𝐅_24); -Im^T(𝐅_12) -Im^T(𝐅_22) Re(𝐅_22) Re(𝐅_24); -Im^T(𝐅_14) -Im^T(𝐅_24) Re^T(𝐅_24) Re(𝐅_44) ],
with 𝐅_pq, p,q∈{ 1,2,4 } being calculated as
𝐅_11 =(Ḃ^H𝐐^-1Ḃ)⊙ (𝐔^∗𝐀^H𝐑_x^∗𝐀𝐔)⊙Σ_1
+(Ḃ^H𝐐^-1𝐁)⊙ (𝐔^∗𝐀^H𝐑_x^∗Ȧ𝐔)⊙Σ_1
+(𝐁^H𝐐^-1Ḃ)⊙ (𝐔^∗Ȧ^H𝐑_x^∗𝐀𝐔)⊙Σ_1
+(𝐁^H𝐐^-1𝐁)⊙ (𝐔^∗Ȧ^H𝐑_x^∗Ȧ𝐔)⊙Σ_1 ,
𝐅_12 =(Ḃ^H𝐐^-1𝐁)⊙ (𝐔^∗𝐀^H𝐑_x^∗𝐀)⊙Σ_1
+(𝐁^H𝐐^-1𝐁)⊙ (𝐔^∗Ȧ^H𝐑_x^∗𝐀)⊙Σ_1,
𝐅_14 =(Ḃ^H𝐐^-1𝐁)⊙ (𝐔^∗𝐀^H𝐑_x^∗𝐀𝐔)⊙Σ_2
+(𝐁^H𝐐^-1𝐁)⊙ (𝐔^∗Ȧ^H𝐑_x^∗𝐀𝐔)⊙Σ_2,
𝐅_22 =(𝐁^H𝐐^-1𝐁)⊙ (𝐀^H𝐑_x^∗𝐀)⊙Σ_1,
𝐅_24 =(𝐁^H𝐐^-1𝐁)⊙ (𝐀^H𝐑_x^∗𝐀𝐔)⊙Σ_2,
𝐅_44 =(𝐁^H𝐐^-1𝐁)⊙ (𝐔^∗𝐀^H𝐑_x^∗𝐀𝐔)⊙Σ_3,
where
Ȧ=[ ∂𝐚(θ _1)/∂θ _1 ,…, ∂𝐚(θ _M)/∂θ _M ] , Ḃ=[ ∂𝐛(θ _1)/∂θ _1,…, ∂𝐛(θ _M)/∂θ _M ],
(Σ_1)_ij=∑_n∈𝒩e^j2π (ℱ_D_j-ℱ_D_i)nT, ∀ i,j∈ℳ,
(Σ_2)_ij=∑_n∈𝒩2π nTe^j2π (ℱ_D_j-ℱ_D_i)nT, ∀ i,j∈ℳ,
(Σ_3)_ij=∑_n∈𝒩(2π nT)^2e^j2π (ℱ_D_j-ℱ_D_i)nT, ∀ i,j∈ℳ,
and 𝐀, 𝐁, 𝐔 are specified in (<ref>). 𝐑_x=𝐖𝐖^H, 𝐐=σ _s^2𝐈_N_r are the respective covariance matrices of the transmit signal and the AWGN at the BS. The detailed derivation procedure is provided in the Appendix.
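For completeness, a small numerical sketch of the Doppler-related matrices Σ_1, Σ_2, Σ_3 and, as an example, of the 𝐅_44 block is given below; it assumes that 𝐀, 𝐁, 𝐔, 𝐑_x, and σ_s^2 have already been formed from the signal model, and the helper names are hypothetical.

```python
import numpy as np

def sigma_mats(fD, N, T):
    """Sigma_1/2/3: entries sum_n (2π n T)^p e^{j2π(fD_j - fD_i) n T} for p = 0, 1, 2."""
    n = np.arange(1, N + 1)
    diff = fD[None, :] - fD[:, None]                          # fD_j - fD_i  (M x M)
    phase = np.exp(1j * 2 * np.pi * diff[..., None] * n * T)  # M x M x N
    s1 = phase.sum(-1)
    s2 = (2 * np.pi * n * T * phase).sum(-1)
    s3 = ((2 * np.pi * n * T) ** 2 * phase).sum(-1)
    return s1, s2, s3

def f44_block(A, B, U, Rx, sigma_s2, s3):
    """F_44 = (B^H Q^{-1} B) ⊙ (U^* A^H Rx^* A U) ⊙ Sigma_3 with Q = sigma_s2 * I."""
    Qinv = np.eye(B.shape[0]) / sigma_s2
    return (B.conj().T @ Qinv @ B) * (U.conj() @ A.conj().T @ Rx.conj() @ A @ U) * s3
```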
Remark: The CRB metric derived in this work is more general than the one considered in existing works <cit.>.
On the one hand, the FIM in (<ref>) involves the estimation of multiple moving targets, which contains the single-target FIM in <cit.> as a special case. On the other hand, the FIM in (<ref>) accounts for the Doppler frequency in the radar echo signal. This contrasts with <cit.>, which considers multiple targets but neglects the Doppler frequency, i.e., (Σ_1)_ij=N in our notation.
§.§ Problem Formulation
In this work, we aim at jointly maximizing the MMF rate of the communication users and minimizing the largest eigenvalue of the CRB matrix of multiple moving targets. Note that the latter is equivalent to maximizing the smallest eigenvalue of 𝐅<cit.>. The ISAC waveform optimization problem is
max_𝐜,𝐖,g min_k∈𝒦{R_tot,k}+μ g
s.t. 𝐅≽ g𝐈_4M,
𝐜≥0,
R_c,k≥∑_i∈𝒦C_i, ∀ k∈𝒦,
diag(𝐖𝐖^H)=P1^N_t× 1/N_t,
where μ is the regularization parameter, through which we can shift the priority between the communication and sensing functionalities, and 𝐜=[C_1,…,C_K]^T∈ℝ^K× 1 is the common rate allocation among the communication users. 𝐈_4M is the identity matrix with the same dimension 4M as 𝐅. Constraint (<ref>) guarantees that the matrix (𝐅-g𝐈_4M) is positive semi-definite, where g is an auxiliary variable that lower-bounds the smallest eigenvalue of 𝐅. (<ref>) guarantees the non-negativity of the common rate allocations. Constraint (<ref>) ensures that each communication user can decode the common stream successfully. The power at the BS is constrained by (<ref>), with P denoting the total power budget<cit.>.
§ OPTIMIZATION FRAMEWORK
In this section, we present the optimization framework designed to solve problem (<ref>). Building upon the existing SCA-based algorithm introduced in <cit.> for minimizing the trace of CRB with quality of service (QoS) rate constraints, we extend it to solve problem (<ref>), which jointly maximizes the MMF rate and minimizes the largest eigenvalue of CRB.
By defining 𝐇_k=𝐡_k𝐡_k^H, 𝐖_c=𝐰_c𝐰_c^H, and 𝐖_k=𝐰_k𝐰_k^H, ∀ k∈𝒦, where rank(𝐖_c)=1 and rank(𝐖_k)=1, we have 𝐑_x=𝐖𝐖^H=𝐖_c+∑_k∈𝒦𝐖_k. Problem (<ref>) is then equivalently transformed into a semi-definite programming (SDP) problem.
To deal with the non-convex rate expressions, we introduce auxiliary variables r_m and 𝐫=[r_p,1,…,r_p,K]^T∈ℂ^K×1, where the former is the MMF rate of communication users and the latter denotes the lower bounds of corresponding private rate {R_p,k}_k=1^K.
We also introduce slack variables {φ _i,k}_k=1^K, {δ _i,k}_k=1^K, i∈{c, p}, where {e^φ _i,k}_k=1^K, i∈{c, p} are upper bounds of the interference-plus-noise term in common and private rate expressions, respectively. Correspondingly, {e^δ _i,k}_k=1^K, i∈{c, p} are lower bounds of the signal term plus interference-plus-noise term in common and private rate expressions.
Considering the high computational complexity caused by the nonlinear expressions {e^δ _i,k}_k=1^K, i∈{c, p}, we utilize slack variables {ζ_i,k}_k=1^K, i∈{c, p} to denote the upper bounds of them. With all the aforementioned slack variables, problem (<ref>) can be equivalently transformed as
(𝒫_2) max_𝐜, 𝐖_c, {𝐖_k}_k=1^K, 𝐫, r_m, g, δ, φ, ζ r_m+μ g
s.t. diag(𝐖_c+∑_k∈𝒦𝐖_k)=P1^N_t× 1/N_t, ∀ k ∈𝒦,
𝐖_c≽0, 𝐖_k≽0, ∀ k ∈𝒦,
rank(𝐖_c)=1, rank(𝐖_k)=1, ∀ k ∈𝒦,
C_k+r_p,k≥ r_m, ∀ k ∈𝒦,
∑_j∈𝒦C_jln2≤δ _c,k-φ _c,k, ∀ k ∈𝒦,
r_p,kln2≤δ _p,k-φ _p,k, ∀ k ∈𝒦,
e^φ _c,k≥∑_i∈𝒦tr(𝐖_i𝐇_k)+σ_c^2, ∀ k ∈𝒦,
e^φ _p,k≥∑_i∈𝒦,i≠ ktr(𝐖_i𝐇_k)+σ_c^2, ∀ k ∈𝒦,
ζ_i,klnζ_i,k≥δ _i,kζ_i,k, i∈{c, p}, ∀ k ∈𝒦,
ζ_c,k ≤∑_i∈𝒦tr(𝐖_i𝐇_k)+tr(𝐖_c𝐇_k)+σ _c^2, ∀ k ∈𝒦,
ζ_p,k ≤∑_i∈𝒦tr(𝐖_i𝐇_k)+σ_c^2, ∀ k ∈𝒦,
(<ref>), (<ref>).
The transformed problem (<ref>) remains non-convex due to constraints (<ref>)-(<ref>). We then approximate the convex left hand sides (LHS) e^φ _i,k, ζ_i,klnζ_i,k, i∈{c, p}, ∀ k ∈𝒦 by respectively employing the first-order Taylor approximation at points φ _i,k^[t], ζ _i,k^[t], i∈{c, p}, ∀ k ∈𝒦 at each iteration t.
The non-convex constraints (<ref>)-(<ref>) are therefore approximated at iteration t as (<ref>)-(<ref>). Constraints (<ref>) are approximated at iteration t and further transformed into the equivalent second-order cone (SOC) forms as (<ref>).
(1+φ _c,k-φ _c,k^[t])e^φ _c,k^[t]≥∑_i∈𝒦tr(𝐖_i𝐇_k)+σ_c ^2, ∀ k ∈𝒦,
(1+φ _p,k-φ _p,k^[t])e^φ _p,k^[t]≥∑_i∈𝒦,i≠ ktr(𝐖_i𝐇_k)+σ_c^2,
∀ k ∈𝒦,
‖[2√(ζ_i,k^[t]), δ _i,k+ζ _i,k-(1+lnζ_i,k^[t]) ]‖_2
≤-δ _i,k+ζ _i,k+(1+lnζ_i,k^[t]), i∈{c, p}, ∀ k ∈𝒦.
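As a sanity check on the SOC reformulation above, the short script below numerically verifies (on random positive points) that the SOC constraint is equivalent to the linearized constraint δ_i,kζ_i,k ≤ (1+lnζ_i,k^[t])ζ_i,k − ζ_i,k^[t]; the sampling ranges and tolerance are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    zeta_t = rng.uniform(0.1, 5.0)            # expansion point ζ^{[t]} > 0
    zeta = rng.uniform(0.1, 5.0)
    delta = rng.uniform(-2.0, 2.0)
    # Linearized constraint:  δ ζ ≤ (1 + ln ζ^{[t]}) ζ − ζ^{[t]}
    lin_ok = delta * zeta <= (1 + np.log(zeta_t)) * zeta - zeta_t + 1e-9
    # SOC form: ||[2√ζ^{[t]}, δ+ζ−(1+lnζ^{[t]})]||_2 ≤ −δ+ζ+(1+lnζ^{[t]})
    lhs = np.hypot(2 * np.sqrt(zeta_t), delta + zeta - (1 + np.log(zeta_t)))
    rhs = -delta + zeta + (1 + np.log(zeta_t))
    soc_ok = (lhs <= rhs + 1e-9) and (rhs >= 0)
    assert lin_ok == soc_ok
```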
To handle the rank-one constraints (<ref>), we define 𝐮_c,max^[t] and {𝐮_k,max^[t]}_k=1^K as the normalized eigenvectors related to the maximum eigenvalues of 𝐖_c^[t] and {𝐖_k^[t]}_k=1^K, through which we then move (<ref>) to the objective function by
C_rank= ρ{[tr(𝐖_c)-(𝐮_c,max^[t])^H𝐖_c𝐮_c,max^[t]]
+ ∑_k∈𝒦[tr(𝐖_k)-(𝐮_k,max^[t])^H𝐖_k𝐮_k,max^[t]] },
where ρ<0 is a penalty parameter; since each bracketed term is non-negative, maximizing the objective with this penalty drives C_rank toward zero and thereby promotes rank-one solutions.
Based on the aforementioned approximations, we solve problem (<ref>) via a sequence of convex subproblems. At iteration t, we solve the following subproblem based on the optimal solution 𝐖_c^[t-1], {𝐖_k^[t-1]}_k=1^K, δ^[t-1], φ^[t-1], ζ^[t-1] obtained at the previous iteration.
(𝒫_3) max_𝐜, 𝐖_c, {𝐖_k}_k=1^K, 𝐫, r_m, g, δ, φ, ζ r_m+μ g+C_rank
s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>)-(<ref>), (<ref>),
(<ref>), (<ref>)-(<ref>).
Since the solution of problem (<ref>) at iteration t-1 is a feasible solution at iteration t, and the objective function is monotonically increasing and bounded due to the transmit power constraint, the convergence of the successive convex approximation procedure is guaranteed. Thus, the optimized MMF rate and 𝐖 are obtained, and based on the latter, the CRB matrix can be calculated.
§ NUMERICAL RESULTS
In this section, we demonstrate the performance of the proposed RSMA-assisted multi-user multi-target ISAC model. The respective performance metrics for communication and sensing are the MMF rate and the trace of the weighted CRB, which is expressed as tr(Λ𝐅^-1), where Λ=diag([λ _1,…,λ _4M]^T)∈ℂ^4M×4M denotes the weights of different target parameters. Unless otherwise specified, the weights are set as {λ _i} _i=1^4M=1<cit.>. When varying numbers of targets are considered, for fairness of comparison, we consider the sensing performance metric as the average trace of CRB defined by tr(𝐅^-1)/M.
Unless otherwise specified, we consider the scenario where the BS is equipped with N_t=4 transmit antennas and N_r=9 receive antennas and serves K=4 communication users. The total power budget at the BS is P=20 dBm and the noise power is σ_c^2=0 dBm without loss of generality. The radar SNR is calculated by SNR_radar=|α |P/σ _s^2=-20 dB, where |α_m|=1, ∀ m∈ℳ. We consider N=1024 radar pulse blocks in one CPI.
The channels between the BS and the communication users are assumed to be Rayleigh fading with each entry following the complex Gaussian distribution 𝒞𝒩(0,1). We consider 7 different targets, among which targets 1-3 are located at 45^∘, 30^∘, and 15^∘ with velocities of 10 m/s, 14 m/s, and 18 m/s, respectively. Targets 4-7 are located at 0^∘, 34^∘, 18^∘, and 9^∘, all with the same velocity of 10 m/s. We use linearly precoded SDMA as a baseline, which is obtained by disabling the common stream of RSMA. The results are averaged over 100 channel realizations.
Fig. <ref> illustrates the trade-off between communication and sensing performance when M=1 (target 1), M=2 (targets 1-2), and M=3 (targets 1-3). We observe that RSMA exhibits an explicit trade-off region gain over SDMA. Owing to the additional degree of freedom (DoF) introduced by the common stream, RSMA achieves a higher MMF rate than SDMA at the rightmost corner point. As the number of targets increases, the trade-off regions of both RSMA and SDMA shrink, since the beamforming power available for each individual target decreases. Surprisingly, we observe that RSMA is capable of detecting more targets than SDMA while maintaining the QoS of the communication users, which shows the great potential of RSMA to enhance the sensing functionality.
Fig. <ref> shows the trade-off comparison between RSMA and SDMA when the angle difference between M=2 targets, defined by Δ u=sin(θ_1)-sin(θ_2), varies. To isolate the influence of the angle difference between the targets, Δ f=ℱ_D_1-ℱ_D_2=0 is considered. There are N_t=4 transmit antennas and N_r=5 receive antennas at the BS. We set SNR_radar=10 dB, N=256, and generate K=4 communication user channels randomly following the complex Gaussian distribution<cit.>. As the angle difference grows from Δ u=0.16 (targets 7, 4) and Δ u=0.31 (targets 6, 4) to Δ u=0.56 (targets 5, 4), the sensing metric tr(CRB) decreases, owing to the reduced interference between the radar echo signals at the receiver. We observe that, for all angle differences, the proposed RSMA scheme consistently outperforms SDMA.
To further evaluate the sensing capability of the proposed ISAC system, we employ the Capon beamformer to estimate the angles and velocities of the targets, which maximizes SINR and reduces noise and interference while ensuring that the desired signal is not distorted.
The Capon beamformer 𝐰_p∈ℂ^N_r× 1 is expressed as
𝐰_p(θ)=𝐑_y^-1𝐛(θ )/𝐛^H(θ )𝐑_y^-1𝐛(θ ),
where 𝐑_y=1/N∑_n∈𝒩𝐲_s[n]𝐲_s[n]^H is the covariance of the received signal. The Capon estimation of the complex reflection coefficient is defined as the minimizer of the following cost-function
α̂(θ, v) =argmin_αE{ |𝐰_p^H(θ )𝐲_s[n]
-𝐰_p^H(θ ) α e^j 2πℱ_DnT𝐛(θ )𝐚^T(θ)𝐱[n]|^2}
=E{𝐰_p^H(θ )𝐲_s[n]𝐱^H[n]𝐚^∗(θ ) e^-j 2πℱ_DnT/𝐚^T(θ)𝐱[n]𝐱^H[n]𝐚^∗(θ )}.
Substituting (<ref>) into (<ref>), we have
α̂(θ, v)
=𝐛^H(θ)𝐑_y^-1𝐘_s𝐃𝐗^H𝐚^∗(θ)/𝐛^H(θ)𝐑_y^-1𝐛(θ)p(θ)N,
where p(θ)=𝐚^T(θ)𝐑_x𝐚^∗(θ) is the transmit beampattern, 𝐘_s=[𝐲_s[1],…,𝐲_s[N]], 𝐗=[𝐱[1],…,𝐱[N]], and 𝐃=diag([e^-j 2πℱ_DT,…,e^-j 2πℱ_DNT]^T). Obtaining α̂ at each grid point of (θ, v) requires a two-dimensional search <cit.>
{θ_m, v_m}_m=1^M=argmax_θ, v|α̂(θ, v)|^2.
Therefore, we obtain M peak points corresponding to the M targets. Fig. <ref> shows two peak points, which correspond to targets 4 and 5. The precoding matrix used here is the optimized solution of (<ref>) when N_t=4, N_r=5, SNR_radar=10 dB, N=256, and μ=10^-3. The parameters of the two moving targets are correctly estimated, indicating the high detection accuracy of RSMA.
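A possible implementation of the Capon spectrum |α̂(θ, v)|^2 defined above is sketched below; the grid-search structure follows the equations in this subsection, while the function name and the assumption that the transmit signal 𝐗 is available at the receiver are illustrative.

```python
import numpy as np

def capon_spectrum(Ys, X, theta_grid, v_grid, T, fc, Nr, Nt):
    """|alpha_hat(theta, v)|^2 on a (theta, v) grid; peaks give target estimates.
    Ys: Nr x N received echoes, X: Nt x N transmit signal (assumed known)."""
    c = 3e8
    N = Ys.shape[1]
    n = np.arange(1, N + 1)
    Ry = Ys @ Ys.conj().T / N                       # sample covariance of the echoes
    Ry_inv = np.linalg.inv(Ry)
    Rx = X @ X.conj().T / N
    spec = np.zeros((len(theta_grid), len(v_grid)))
    for i, th in enumerate(theta_grid):
        a = np.exp(1j * np.pi * np.arange(Nt) * np.sin(th))   # transmit steering a(theta)
        b = np.exp(1j * np.pi * np.arange(Nr) * np.sin(th))   # receive steering b(theta)
        denom_b = (b.conj() @ Ry_inv @ b).real
        p_theta = (a @ Rx @ a.conj()).real                     # transmit beampattern p(theta)
        for j, v in enumerate(v_grid):
            fD = 2 * v * fc / c
            d = np.exp(-1j * 2 * np.pi * fD * n * T)           # diagonal of D
            num = b.conj() @ Ry_inv @ Ys @ (d[:, None] * X.conj().T) @ a.conj()
            spec[i, j] = np.abs(num / (denom_b * p_theta * N)) ** 2
    return spec
```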
§ CONCLUSION
This work initiates the study of RSMA in a mono-static ISAC system with multiple communication users and multiple sensing targets. We derive a general CRB sensing metric which embraces the estimation of angular direction, complex reflection coefficient, and Doppler frequency for multiple targets. By designing the transmit waveform to maximize the MMF rate of multiple users and minimize the largest eigenvalue of CRB of multiple moving targets, we show that RSMA achieves a better communication and sensing trade-off than conventional linearly precoded SDMA in multi-user multi-target ISAC.
Additionally, the trade-off gain of RSMA grows with increasing angle difference between targets.
Therefore, we conclude that RSMA offers a highly effective interference management solution. It has great potential for synergizing with ISAC in future wireless networks.
§ APPENDIX
§ DERIVATION OF THE FIM IN (<REF>)
The CRB is the lower bound on the variance of unbiased estimators and is given by 𝐂𝐑𝐁=𝐅^-1. The matrix 𝐅 is related to the four parameters of each target, ξ_m=[θ_m, Re(α_m), Im(α_m), ℱ_D_m]^T, ∀ m ∈ℳ. With the definition 𝐯[n]=𝐲_s[n]-𝐳_s[n]=𝐁𝐔𝐄[n]𝐀^T𝐱[n], we note that
𝐅_ℱ_D_iℱ_D_j=2Re[tr{∑_n∈𝒩∂𝐯[n]^H/∂ℱ_D_i𝐐^-1∂𝐯[n]/∂ℱ_D_j}], ∀ i,j∈ℳ,
the partial derivative can be calculated as
∂𝐯[n]/∂ℱ_D_i=𝐁𝐔 ( j2π nT )𝐄[n]𝐞_i𝐞_i^T𝐀^T𝐱[n], ∀ i∈ℳ,
where 𝐞_i is the ith column of 𝐈_M. Since tr(𝐀𝐁)=tr(𝐁𝐀) and 𝐔, 𝐄[n] are diagonal matrices, (<ref>) can be rewritten as
𝐅_ℱ_D_iℱ_D_j =2Re [tr{∑_n∈𝒩 ( 𝐁𝐔 ( j2π nT )𝐄[n]𝐞_i𝐞_i^T𝐀^T×
𝐱[n] )^H𝐐^-1 ( 𝐁𝐔 ( j2π nT )𝐄[n]𝐞_j𝐞_j^T𝐀^T𝐱[n] ) }]
=2Re [tr{∑_n∈𝒩𝐞_i^T𝐁^H𝐐^-1𝐁𝐞_j𝐞_j^T𝐔𝐄[n]j2π nT
×𝐀^T𝐱[n]𝐱[n]^H𝐀^∗𝐄^H[n] (-j2π nT )𝐔^H𝐞_i} ]
=2Re{(𝐁^H𝐐^-1𝐁)_ij (𝐔^∗𝐀^H𝐑_x^∗𝐀𝐔)_ij
×(Σ_3)_ij}, ∀ i,j∈ℳ,
where (· )_ij refers to the ith row and jth column element of the matrix. Thus we obtain 𝐅_ℱ_Dℱ_D=2Re(𝐅_44), with 𝐅_44 specified in (<ref>). Other terms of FIM can be calculated in the same way with the corresponding partial derivative.
Hence, we obtain the FIM in (<ref>).
|
http://arxiv.org/abs/2306.12067v1
|
20230621073229
|
Optimal Algorithms for Stochastic Bilevel Optimization under Relaxed Smoothness Conditions
|
[
"Xuxing Chen",
"Tesi Xiao",
"Krishnakumar Balasubramanian"
] |
math.OC
|
[
"math.OC",
"cs.LG",
"stat.ML"
] |
Stochastic bilevel optimization usually involves minimizing an upper-level (UL) function that depends on the arg-min of a strongly-convex lower-level (LL) function. Several algorithms utilize Neumann series to approximate certain matrix inverses involved in estimating the implicit gradient of the UL function (hypergradient). The state-of-the-art StOchastic Bilevel Algorithm (SOBA) <cit.> instead uses stochastic gradient descent steps to solve the linear system associated with the explicit matrix inversion. This modification enables SOBA to match the lower bound of sample complexity for the single-level counterpart in non-convex settings. Unfortunately, the current analysis of SOBA relies on the assumption of higher-order smoothness for the UL and LL functions to achieve optimality. In this paper, we introduce a novel fully single-loop and Hessian-inversion-free algorithmic framework for stochastic bilevel optimization and present a tighter analysis under standard smoothness assumptions (first-order Lipschitzness of the UL function and second-order Lipschitzness of the LL function). Furthermore, we show that by a slight modification of our approach, our algorithm can handle a more general multi-objective robust bilevel optimization problem. For this case, we obtain the state-of-the-art oracle complexity results demonstrating the generality of both the proposed algorithmic and analytic frameworks. Numerical experiments demonstrate the performance gain of the proposed algorithms over existing ones.
§ INTRODUCTION
Bilevel optimization is gaining increasing popularity within the machine learning community due to its extensive range of applications, including meta-learning <cit.>, hyperparameter optimization <cit.>, data augmentation <cit.>, and neural architecture search <cit.>. The objective of bilevel optimization is to minimize a function that is dependent on the solution of another optimization problem:
min_x∈⊆^d_xΦ(x) := f(x, y^*(x)) s.t. y^*(x) = _y∈^d_y g(x, y)
where the upper-level (UL) function f (a.k.a. outer function) and the lower-level (LL) function g (a.k.a. inner function) are two real-valued functions defined on ^d_x×^d_y. The feasible set is either ^d_x itself or a convex compact subset of ^d_x, and the function g is strongly convex. We call x the outer variable and y the inner variable. The objective function
Φ(x) is called the value function. In this paper, we consider the stochastic setting in which only the stochastic oracles of f and g are available:
(UL) f(x, y) = 𝔼_ξ∼_f[ F(x, y; ξ) ], (LL) g(x, y) = 𝔼_ϕ∼_g[ G(x, y; ϕ) ].
Stochastic bilevel optimization can be considered as an extension of bilevel empirical risk minimization <cit.>, allowing for the effective handling of online and streaming data (ξ, ϕ).
In many instances, the analytical expression of y^*(x) is unknown and can only be approximated using an optimization algorithm. This adds to the complexity of problem (<ref>) compared to its single-level counterpart. Under regular conditions such that Φ is differentiable, the hypergradient ∇Φ(x) derived by the chain rule and the implicit function theorem is given by
∇Φ(x) = ∇_1 f(x, y^*(x)) - ∇_12^2 g(x, y^*(x)) z^*(x),
where z^*(x)∈^d_y is the solution of a linear system:
z^*(x) = [∇_22^2g(x,y^*(x))]^-1∇_2 f(x, y^*(x)).
Solving (<ref>) using only stochastic oracles poses significant challenges since there is no direct unbiased estimator available for [∇_22^2g(x,y^*(x))]^-1 and also for ∇Φ(x) as a consequence.
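To make the hypergradient expression concrete, the following sketch evaluates the two displayed equations for a toy quadratic bilevel problem (an assumption made purely for illustration, not an example from the paper) and checks the result against finite differences.

```python
import numpy as np

rng = np.random.default_rng(4)
dx, dy = 5, 7
A = rng.standard_normal((dy, dy)); A = A @ A.T + dy * np.eye(dy)   # ∇²_22 g, positive definite
Bm = rng.standard_normal((dy, dx))
y_d = rng.standard_normal(dy)

# Toy problem: f(x, y) = ½‖y − y_d‖² + ½‖x‖²,  g(x, y) = ½ yᵀA y − yᵀ(Bm x).
def hypergradient(x):
    y_star = np.linalg.solve(A, Bm @ x)                 # argmin_y g(x, y)
    grad1_f = x                                         # ∇_1 f(x, y*)
    grad2_f = y_star - y_d                              # ∇_2 f(x, y*)
    z_star = np.linalg.solve(A, grad2_f)                # solves ∇²_22 g · z = ∇_2 f
    grad12_g = -Bm.T                                    # ∇²_12 g(x, y*), shape d_x × d_y
    return grad1_f - grad12_g @ z_star                  # hypergradient ∇Φ(x)

def Phi(x):
    y_star = np.linalg.solve(A, Bm @ x)
    return 0.5 * np.sum((y_star - y_d) ** 2) + 0.5 * np.sum(x ** 2)

x0 = rng.standard_normal(dx)
eps = 1e-6
fd = np.array([(Phi(x0 + eps * e) - Phi(x0 - eps * e)) / (2 * eps) for e in np.eye(dx)])
print(np.max(np.abs(fd - hypergradient(x0))))           # ≈ 0
```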
To mitigate the estimation bias, many existing methods <cit.> employ a Hessian Inverse Approximation (HIA) subroutine, which involves drawing a mini-batch of stochastic Hessian matrices and computing a truncated Neumann series <cit.>. However, this subroutine comes with an increased computational burden and introduces an additional factor of log(ϵ^-1) in the sample complexity. Some alternative methods <cit.> calculate the explicit inverse of the stochastic Hessian matrix with momentum updates. To circumvent the need for explicit Hessian inversion and the HIA subroutine, recent works <cit.> propose running Stochastic Gradient Descent (SGD) steps to approximate the solution z^*(x) of the linear system (<ref>). In particular, the state-of-the-art Stochastic Bilevel Algorithm (SOBA) only utilizes SGD steps to simultaneously update three variables: the inner variable y, the outer variable x, and the auxiliary variable z. Remarkably, SOBA achieves the same complexity lower bound as its single-level counterpart (Φ∈_L^1,1 [ _L^p,p denotes p-times differentiability with Lipschitz k-th order derivatives for 0<k≤ p.]) in the non-convex setting <cit.>.
Despite the superior computational and sample efficiency of SOBA, its current theoretical framework assumes high-order smoothness for the UL function f and the LL function g such that z^*(x) has a Lipschitz gradient. Specifically, unlike the typical assumptions in stochastic bilevel optimization, namely f∈_L^1,1 and g∈_L^2,2, the current theory of SOBA requires f∈_L^2,2 and g∈_L^3,3. Furthermore, assuming g is strongly convex and the partial gradient of the UL function with respect to the inner variable y is bounded for all pairs (x, y^*(x)) (i.e., ‖∇_2 f(x, y^*(x))‖≤ L_f for all x), there exists a subset relation among three function classes as follows (Lemma 2.2 in <cit.>):
{f∈_L^2,2, g∈_L^3,3} ⊂ {f∈_L^1,1, g∈_L^2,2} ⊂ {Φ∈_L^1,1}.
In light of this, it can be concluded that the standard smoothness condition {f∈_L^1,1, g∈_L^2,2} is sufficient to ensure the first-order Lipschitzness of Φ, which is the standard assumption in the single-level setting. Therefore, a natural question follows:
Is it possible to develop a fully single-loop and Hessian-inversion-free algorithm for solving stochastic bilevel optimization problems that achieves an optimal sample complexity of 𝒪(ϵ^-2) under the standard smoothness assumptions {f∈_L^1,1, g∈_L^2,2}?
In this paper, we provide an affirmative answer to the aforementioned question. Our contributions can be summarized as follows:
* We propose a class of fully single-loop and Hessian-inversion-free algorithms, named Moving-Average SOBA (MA-SOBA), which builds upon the SOBA algorithm by incorporating an additional sequence of average hypergradients. Unlike SOBA, MA-SOBA achieves an optimal sample complexity of 𝒪(ϵ^-2) under standard smoothness assumptions, without relying on high-order smoothness. Moreover, the introduced sequence of average hypergradients converges to ∇Φ(x), thus offering a reliable termination criterion in the stochastic setting.
* We expand the scope of MA-SOBA to tackle a broader class of problems, specifically the min-max multi-objective bilevel optimization problem with significant applications in robust machine learning. We introduce MORMA-SOBA, an algorithm that can find an ϵ-first-order stationary point of the μ_λ-strongly-concave regularized formulation while achieving a sample complexity of 𝒪(n^5μ_λ^-4ϵ^-2), which fills a gap (in terms of the order of ϵ-dependency) in the existing literature.
* We conduct experiments on several machine learning problems. Our numerical results show the efficiency and superiority of our algorithms.
Related Work. The concept of bilevel optimization was initially introduced in the work of <cit.>. Since then, numerous gradient-based bilevel optimization algorithms have been proposed, broadly categorized into two groups: ITerative Differentiation (ITD) based methods <cit.> and Approximate Implicit Differentiation (AID) based methods <cit.>. The ITD-based algorithms typically involve approximating the solution of the inner problem using an iterative algorithm and then computing an approximate hypergradient through automatic differentiation. However, a major drawback of this approach is the necessity of storing each iterate of the inner optimization algorithm in memory. The AID-based algorithms leverage the implicit gradient given by (<ref>), which requires the solution of a linear system characterized by (<ref>). Extensive research has been conducted on designing and analyzing deterministic bilevel optimization algorithms with strongly-convex functions; see <cit.> and the references cited therein.
In recent years, there has been a growing interest in stochastic bilevel optimization, especially in the setting of a non-convex function and a strongly-convex function. To address estimation bias, one set of methods uses SGD iterations for the inner problem and employs truncated stochastic Neumann series to approximate the inverse of the Hessian matrix in z^*(x) <cit.>. The analysis of such methods was refined by <cit.> to achieve convergence rates similar to those of SGD. However, the Neumann approximation subroutine introduces an additional factor of log(ϵ^-1) in the sample complexity. Some alternative approaches <cit.> calculate the explicit inverse of the stochastic Hessian matrix with momentum updates. Nevertheless, these methods encounter challenges related to computational complexity in matrix inversion and numerical stability.
To avoid the need for explicit Hessian inversion and the Neumann approximation, recent algorithms <cit.> propose running SGD steps to approximate the solution z^*(x) of the linear system (<ref>). One such algorithm <cit.> employs a double-loop approach and achieves an optimal sample complexity of 𝒪(ϵ^-2) under regular assumptions; however, it requires a growing batch size inversely proportional to ϵ. On the other hand, the single-loop algorithm SOBA <cit.> achieves the same complexity lower bound with a constant batch size. Unfortunately, the current analysis of SOBA relies on the assumption of higher-order smoothness for the UL and LL functions. In this work, we introduce a novel algorithmic framework that differs slightly from SOBA but achieves the optimal sample complexity in theory without higher-order smoothness assumptions. A summary of our results and comparison to prior work is provided in Table <ref>.
In addition, there exist several variance reduction-based methods following the line of research by <cit.>. Some of these methods achieve an improved sample complexity of 𝒪(ϵ^-1.5) and match the lower bounds of their single-level counterparts when stochastic functions F_ξ and G_ϕ satisfy mean-squared smoothness assumptions and the algorithm is allowed simultaneous queries at the same random seed <cit.>. However, since we are specifically considering smoothness assumptions on f and g, we will not delve into the comparison with these methods.
The most recent advancements in (stochastic) bilevel optimization focus on several new ideas: (i) addressing constrained lower-level problems <cit.>, (ii) handling lower-level problems that lack strong convexity <cit.>, (iii) developing fully first-order (Hesssian-free) algorithms <cit.>, (iv) establishing convergence to the second-order stationary point <cit.>, and (v) expanding the framework to encompass multi-objective optimization problems <cit.>. It is promising to apply some of these advancements to our specific framework. However, in this work, we contribute to multi-objective bilevel problems with a slight modification of our approach. Other directions are left as future work.
Notation. We use ‖·‖ to denote the ℓ^2 norm and 𝟏_n the all-one vector in ^n. Δ_n = {λ|λ_i≥ 0, ∑_i=1^nλ_i=1} denotes the probability simplex. Π_𝒮(·) denotes the orthogonal projection onto a set 𝒮.
§ PROPOSED FRAMEWORK: THE MA-SOBA ALGORITHM
Similar to <cit.>, our algorithm initiates with inexact hypergradient descent techniques and seeks to offer an alternative in the stochastic setting. To provide a clear illustration, let us initially consider the deterministic setting. The framework keeps track of three sequences, denoted as {x^k, y^k, z^k}, and updates them using D_x, D_y, D_z as follows:
(inner) y^k+1 = y^k - β_k ∇_2 g(x^k, y^k) = y^k - β_k D_y(x^k, y^k, z^k)
(aux) z^k+1 = z^k - γ_k{∇_22^2g(x^k,y^*(x^k))z^k - ∇_2 f(x^k,y^*(x^k))}
≈ z^k - γ_k {∇_22^2g(x^k,y^k)z^k - ∇_2 f(x^k,y^k)} = z^k -γ_k D_z(x^k, y^k, z^k) (bias introduced here)
(outer) x^k+1 = x^k - α_k{∇_1 f(x^k, y^*(x^k)) - ∇_12^2 g(x^k, y^*(x^k)) z^*(x^k)} = x^k - α_k ∇Φ(x^k)
≈ x^k - α_k {∇_1 f(x^k, y^k) - ∇_12^2 g(x^k, y^k) z^k} = x^k - α_k D_x(x^k, y^k, z^k) (bias introduced here)
where (<ref>) is the GD step to minimize g(x^k, ·), (<ref>) is the inexact hypergradient descent step, and (<ref>) is the GD step for a quadratic minimization problem whose solution is z^*(x^k), i.e.,
z^*(x^k) = _z {1/2⟨∇_22^2g(x^k,y^*(x^k)) z, z⟩ - ⟨∇_2 f(x^k,y^*(x^k)), z⟩}.
Given that the above approximate update rules do not involve Hessian matrix inversion, SOBA can directly utilize the stochastic oracles of ∇_1 f, ∇_2 f, ∇_2 g, ∇_22^2 g, ∇_12^2 g to obtain unbiased estimators of D_x, D_y, D_z in Eq.(<ref>), (<ref>), (<ref>). This approach circumvents the requirement for a Neumann approximation subroutine or a direct matrix inversion. However, because the update rule for y only performs one SGD step at each iteration k, the value of y^k does not coincide with y^*(x^k). As a result, a certain bias is introduced in the partial gradient of z in Eq.(<ref>). Similarly, when estimating the hypergradient ∇Φ(x), another bias term arises in Eq.(<ref>). Although the bias decreases to zero as y^k→ y^*(x^k) and z^k→ z^*(x^k) under standard smoothness assumptions, as indicated by Lemma 3.4 in <cit.>, the current analysis of SOBA requires more regularity on f and g to carefully handle the bias: it assumes that f has a Lipschitz Hessian and g has a Lipschitz third-order derivative.
The inability to obtain an unbiased gradient estimator is a common characteristic in stochastic optimization involving nested structures; see, for example, stochastic compositional optimization <cit.> as a specific case of (<ref>). One popular approach is to introduce a sequence of dual variables that approximates the true gradient by aggregating all past biased stochastic gradients using a moving averaging technique <cit.>. Motivated by this approach, we introduce another sequence of variables, denoted as {h^k}, and update it at k-th iteration given the past iterates _k as follows:
h^k+1 = (1-θ_k) h^k + θ_k w^k+1, 𝔼[w^k+1|_k] = D_x(x^k, y^k, z^k), θ_k∈(0,1].
Following the update rule in the constrained setting (a convex compact feasible set in ^d_x) <cit.>, the outer variable is updated as x^k+1 = x^k + α_k(Π_(x^k - τ h^k) - x^k), which reduces to the GD step in the unconstrained case. Denote the stochastic oracles of ∇_1 f(x^k, y^k), ∇_2 f(x^k, y^k), ∇_2 g(x^k, y^k), ∇_22^2 g(x^k, y^k), ∇_12^2 g(x^k, y^k) at the k-th iteration as u_x^k+1, u_y^k+1, v^k+1, H^k+1, J^k+1, respectively. We present our method, referred to as Moving-Average SOBA (MA-SOBA), in Algorithm <ref>.
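A minimal sketch of the resulting single-loop iteration is given below for a toy quadratic bilevel problem with noisy oracles; the problem instance, noise level, step sizes, box constraint, and the exact ordering of the updates are illustrative assumptions, not the tuned configuration of Algorithm <ref>.

```python
import numpy as np

rng = np.random.default_rng(5)
dx, dy = 5, 7
A = rng.standard_normal((dy, dy)); A = A @ A.T / dy + np.eye(dy)   # ∇²_22 g, positive definite
Bm = rng.standard_normal((dy, dx)); y_d = rng.standard_normal(dy)
noise = 0.1                                  # oracle noise level (illustrative)

# Stochastic oracles for f(x,y) = ½‖y−y_d‖² + ½‖x‖² and g(x,y) = ½ yᵀA y − yᵀ(Bm x).
u_x = lambda x, y: x + noise * rng.standard_normal(dx)                # ∇_1 F
u_y = lambda x, y: y - y_d + noise * rng.standard_normal(dy)          # ∇_2 F
v   = lambda x, y: A @ y - Bm @ x + noise * rng.standard_normal(dy)   # ∇_2 G
H   = lambda x, y: A + noise * rng.standard_normal((dy, dy))          # ∇²_22 G
J   = lambda x, y: -Bm.T + noise * rng.standard_normal((dx, dy))      # ∇²_12 G

x, y, z, h = np.zeros(dx), np.zeros(dy), np.zeros(dy), np.zeros(dx)
alpha, beta, gamma, theta, tau = 0.02, 0.05, 0.05, 0.1, 1.0           # step sizes (illustrative)
for k in range(5000):
    w  = u_x(x, y) - J(x, y) @ z                  # stochastic D_x at (x^k, y^k, z^k)
    Dy = v(x, y)                                  # stochastic D_y
    Dz = H(x, y) @ z - u_y(x, y)                  # stochastic D_z
    h  = (1 - theta) * h + theta * w              # moving-average hypergradient
    x  = x + alpha * (np.clip(x - tau * h, -10, 10) - x)   # projection onto a box constraint
    y  = y - beta * Dy
    z  = z - gamma * Dz
print(np.linalg.norm(h))                          # rough stationarity proxy
```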
§ THEORETICAL ANALYSIS
In this section, we provide convergence rates of MA-SOBA under standard smoothness conditions on f, g and regular assumptions on the stochastic oracles. We also present a proof sketch and discuss in detail the assumptions made in the literature. The complete proofs are deferred to the Appendix.
§.§ Preliminaries and Assumptions
As we consider the general setting in which the feasible set can be either ^d_x or a closed compact subset of ^d_x, we use the notion of gradient mapping to characterize first-order stationarity, a classical measure widely used in the literature as a convergence criterion for nonconvex constrained problems <cit.>. For τ>0, we define the gradient mapping of Φ at a point x̅ in the feasible set as 𝒢(x̅, ∇Φ(x̅), τ) := 1/τ (x̅ - Π(x̅ - τ∇Φ(x̅))), where Π denotes the projection onto the feasible set. In the unconstrained case, the gradient mapping simplifies to ∇Φ(x̅). Our main goal in this work is to find an ϵ-stationary solution to (<ref>), in the sense of 𝔼[‖𝒢(x̅, ∇Φ(x̅), τ)‖^2]≤ϵ.
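A small helper illustrating the gradient mapping is shown below; the Euclidean-ball feasible set is an arbitrary example of a convex compact constraint.

```python
import numpy as np

def gradient_mapping(x, grad, tau, proj):
    """G(x, grad, tau) = (x - proj(x - tau * grad)) / tau; equals grad when proj is the identity."""
    return (x - proj(x - tau * grad)) / tau

# Example with a Euclidean ball of radius r as the feasible set (illustrative choice).
proj_ball = lambda v, r=1.0: v if np.linalg.norm(v) <= r else r * v / np.linalg.norm(v)
gm = gradient_mapping(np.ones(3), np.array([0.5, -0.2, 0.1]), tau=0.5, proj=proj_ball)
print(gm)
```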
We first state some regularity assumptions on the functions f and g.
The functions f and g satisfy:
(a) (Smoothness: f∈_L^1,1 and g∈_L^2,2) ∇ f, ∇ g, and ∇^2 g are L_∇ f-, L_∇ g-, and L_∇^2 g-Lipschitz continuous, respectively.
(b) (SC) g is μ_g-strongly convex in y.
(c) ‖∇_2 f(x, y^*(x))‖≤ L_f < ∞ for all x∈.
The above assumption serves as a sufficient condition for the Lipschitz continuity of ∇Φ, y^*(x), and z^*(x), as well as of D_x, D_y, and D_z in Eq. (<ref>), (<ref>), (<ref>). The inclusion of high-order smoothness assumptions (f∈_L^2,2 and g∈_L^3,3) in the current analysis of SOBA <cit.> is primarily intended to ensure the Lipschitzness of ∇ z^*(x). However, the necessity of such assumptions is questionable, given that ∇ z^*(x) is not involved in designing the algorithm. Furthermore, the Lipschitz continuity of f or the uniform boundedness of ∇_2 f assumed in several previous works is unnecessary. Instead, the boundedness of ∇_2 f is only required at the pairs (x, y^*(x)), as stated in (c).
Next, we discuss assumptions made on the stochastic oracles.
For any k≥ 0, define _k denotes the sigma algebra generated by all iterates with superscripts not greater than k, i.e., _k = σ{h^1, …, h^k, x^1,…, x^k, y^1, …, y^k, z^1,…,z^k}.
The stochastic oracles of ∇_1 f(x^k, y^k), ∇_2 f(x^k, y^k), ∇_2 g(x^k, y^k), ∇_22^2 g(x^k, y^k), ∇_12^2 g(x^k, y^k), denoted as u_x^k+1, u_y^k+1, v^k+1, H^k+1, J^k+1 respectively, used in Algorithm <ref> at k-th iteration are unbiased with bounded variance given _k. They are conditionally independent with respect to _k.
The unbiasedness and bounded variance assumptions on stochastic oracles are standard and typically satisfied in several practical stochastic optimization problems <cit.>. It is important to highlight that we explicitly impose these assumptions on the stochastic oracles, unlike Assumption 3.6 in <cit.>, which assumes [v^k+1^2|_k] ≤ B_y^2 (1+ D_y(x^k, y^k, z^k)^2) and [H^k+1z^k - u_y^k+1^2|_k] ≤ B_z^2 (1+ D_z(x^k, y^k, z^k)^2). In this case, B_y and B_z represent constants in terms of the Lipschitz constants (L) and variance bounds (σ^2). Moreover, Assumption 3.7 in <cit.> assumes [w^k+1^2|_k] ≤ B_x^2 holds for a constant B_x, which is considerably stronger than our assumptions and may not hold for a broad class of problems.
§.§ Convergence Results
We have the following theorem characterizing the convergence of MA-SOBA.
Define x_+^k = Π_(x^k - τ h^k). Suppose Assumptions <ref> and <ref> hold. Then there exist positive constants c_1, c_2, c_3, τ>0 such that if α_k ≡Θ(1/√(K)), β_k = c_1α_k, γ_k = c_2α_k, θ_k = c_3α_k, in Algorithm <ref>, then the iterates in Algorithm <ref> satisfy
1/K∑_k=1^K 1/τ^2‖x_+^k - x^k‖^2 = 𝒪(1/√(K)), 1/K∑_k=1^K‖h^k - ∇Φ(x^k)‖^2 = 𝒪(1/√(K)),
which imply
1/K∑_k=1^K𝔼[‖1/τ(x^k - Π_(x^k - τ∇Φ(x^k)))‖^2] = 𝒪(1/√(K)).
That is to say, when uniformly randomly selecting a solution x^R from {x^1, …, x^K}, the sample complexity of Algorithm <ref> for finding an ϵ-stationary point is 𝒪(ϵ^-2).
In contrast to most existing methods, in MA-SOBA the introduced sequence of dual variables {h^k} converges to the exact hypergradient ∇Φ(x), even in the presence of estimation bias. This property provides a reliable termination criterion in practice. In addition, similar results with an extra factor of log(K) in the convergence rate can be established under decreasing α_k <cit.>.
§.§ Proof Sketch of Theorem <ref>
Define V_k = 1/τ^2‖x_+^k - x^k‖^2 + ‖h^k - ∇Φ(x^k)‖^2. To obtain (<ref>), we consider the merit function W_k:
W_k = Φ(x^k) - η_(x^k, h^k, τ) + ‖y^k - y_*^k‖^2 + ‖z^k - z_*^k‖^2,
where η_(x,h,τ)= ⟨ h, x_+ - x⟩ + 1/2τ‖x_+-x‖^2.
By leveraging the moving average updates of x^k (line 2 of Algorithm <ref>), we can obtain
∑_k=0^Kα_k[V_k] =(∑_k=0^K(α_k[[w^k+1|_k] - ∇Φ(x^k)^2] + α_k^2)),
which reduces the error analysis to controlling the hypergradient estimation bias, i.e., [w^k+1|_k] - ∇Φ(x^k)^2.
This term, by the construction of w^k+1, satisfies
!∑_k=0^Kα_k[[w^k+1|_k] - ∇Φ(x^k)^2]= (∑_k=0^Kα_k[x_+^k - x^k^2 + y^k-y_*^k^2 + z^k - z_*^k^2]).
It is worth noting that <cit.> requires the existence and Lipschitzness of ∇^2 f and ∇^3g to ensure the Lipschitzness of ∇ z^*(x) (see (<ref>)) which is used in proving the sufficient decrease of z^k - z_*^k^2. In contrast, based on the moving average updates of x^k and h^k, our refined analysis does not necessitate such high-order smoothness assumptions to obtain that
∑_k=0^Kα_k[y^k-y_*^k^2 + z^k - z_*^k^2] = (∑_k=0^Kα_k[x_+^k - x^k^2]).
The proof of Theorem <ref> can then be completed by choosing appropriate α_k, c_1, c_2, c_3, τ > 0.
§ MIN-MAX BILEVEL OPTIMIZATION
To incorporate robustness in the multi-objective setting where each objective can be expressed as a bilevel optimization problem in (<ref>), the following mini-max bilevel problem formulation was proposed in <cit.>:
min_x∈ max_1≤ i ≤ n Φ_i(x) := f_i(x, y_i^*(x)) s.t. y_i^*(x) = _y_i∈^d_y_i g_i(x, y_i), 1≤ i≤ n.
Note that (<ref>) can be reformulated as a general nonconvex-concave min-max optimization problem (with a bilevel substructure):
min_x∈ max_λ∈Δ_nΦ(x,λ) := ∑_i=1^nλ_i Φ_i(x).
Instead of solving (<ref>) directly, in this work, we focus on solving the following regularized version,
min_x∈ max_λ∈Δ_nΦ_μ_λ(x,λ) := Φ(x,λ) - μ_λ/2‖λ - 𝟏_n/n‖^2.
Note that in (<ref>) we include an ℓ^2 regularization term that penalizes the discrepancy between λ and 𝟏_n/n. When μ_λ = 0, it corresponds to equation (<ref>), and as μ_λ→ +∞, it enforces λ = 𝟏_n/n, leading to direct minimization of the average loss. It is important to note that minimizing the worst-case loss (i.e., max_1≤ i≤ nf_i(x,y_i^*(x))) does not necessarily imply the minimization of the average loss (i.e., 1/n∑_i=1^nf_i(x, y_i^*(x))). Therefore, in practice, it may be preferable to select an appropriate μ_λ>0 <cit.> to strike a balance between these two types of losses.
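For a fixed x with per-objective values Φ_i(x), the inner maximization in the regularized formulation has the closed-form solution λ^* = Π_Δ_n(𝟏_n/n + Φ(x)/μ_λ), where Φ(x) stacks the Φ_i(x). The sketch below (with made-up loss values) illustrates how μ_λ interpolates between the worst-case and the average weighting.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto the probability simplex Δ_n."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

phi = np.array([0.3, 0.3, 0.45, 0.3])     # per-objective losses Φ_i(x) at a fixed x (illustrative)
n = len(phi)
for mu in [1e-3, 0.1, 10.0]:
    lam = proj_simplex(np.ones(n) / n + phi / mu)   # inner maximizer for this μ_λ
    print(mu, lam.round(3), (lam @ phi).round(3))
# Small μ_λ: weight concentrates on the worst objective; large μ_λ: λ ≈ 1_n/n (average loss).
```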
§.§ Proposed Framework: the MORMA-SOBA Algorithm
The proposed algorithm, which we refer to as Multi-Objective Robust MA-SOBA (MORMA-SOBA), for solving (<ref>) is presented in <ref>. In addition to the basic framework of Algorithm <ref>, we also maintain a moving-average step in the updates of λ^k for solving the max part of problem <ref>. It is worth noting that in its single-level counterpart without the inner variable y, the proposed algorithm is fundamentally similar to the single-timescale averaged algorithm proposed in the general nonconvex-strongly-concave setting <cit.>. Moreover, our algorithm framework can be leveraged to solve the distributionally robust compositional optimization problem discussed in <cit.>.
In contrast to our approach in (<ref>), the work of <cit.> on the min-max bilevel problem attempted to combine <cit.> and <cit.> to solve the nonconvex-concave problem (<ref>). However, we identified an issue in <cit.> related to ambiguity and inconsistency in the expectation and filtration, which may not be easily resolved within their current proof framework. As a consequence, their current proof is unable to demonstrate 𝔼[max_i∈ [n]‖y_i^k - y_i^*(x^(k-1))‖^2 ]≤𝒪(√(n)K^-2/5) as claimed in Theorem 1 (10b) of <cit.>. Thus, the subsequent arguments regarding the convergence analysis of x and λ are incorrect (at least in their current form); see Section <ref> for further discussion. Moreover, the practical implementation of MORBiT incorporates momentum and weight decay techniques to optimize the simplex variable λ, which can be seen as a means of solving the regularized formulation in (<ref>).
§.§ Convergence Results
We first present additional assumptions required in the analysis of MORMA-SOBA.
For any k≥ 0, the functions Φ_i(x) and ∇Φ_i(x) are bounded, the functions f_i are L_f-Lipschitz continuous in the second input, and their stochastic versions are unbiased with bounded variance, i.e., there exist b_Φ, L_Φ, L_f, σ_f,0≥ 0 such that
|Φ_i(x)|≤ b_Φ, ‖∇Φ_i(x)‖≤ L_Φ, |f_i(x, y) - f_i(x,ỹ)|≤ L_f‖y - ỹ‖, for all x, y, ỹ, 1≤ i≤ n,
s^k+1 = (s_1^k+1, ..., s_n^k+1), 𝔼[s_i^k+1|_k] = f_i(x^k, y_i^k), 𝔼[ |s_i^k+1 -f_i(x^k, y_i^k)|^2|_k]≤σ_f,0^2.
⋃_i=1^n{u_x, i^k+1, u_y, i^k+1, v_i^k+1, H_i^k+1, J_i^k+1}∪{s^k+1} are conditionally independent with respect to _k.
We have the following convergence theorem for MORMA-SOBA.
Suppose Assumptions <ref>, <ref> (for all f_i, g_i) and Assumption <ref> hold. Then there exist positive constants c_1, c_2, c_3, τ_x, τ_λ>0 such that if α_k ≡Θ(1/√(nK)), β_k = c_1α_k, γ_k = c_2α_k, θ_k = c_3α_k, μ_λ<1 in Algorithm <ref>, then the iterates in Algorithm <ref> satisfy
1/K∑_k=0^K𝔼[‖1/τ_x(x^k - Π_(x^k - τ_x∇Ψ_μ_λ(x^k)) )‖^2] = 𝒪(n^2/μ_λ^2√(K)),
where Ψ_μ_λ(x):= max_λ∈Δ_nΦ_μ_λ(x, λ). That is to say, when uniformly randomly selecting a solution x^R from {x^1, …, x^K}, the sample complexity (the total number of calls to stochastic oracles) of finding an ϵ-stationary point by Algorithm <ref> is 𝒪(n^5μ_λ^-4ϵ^-2).
Theorem <ref> indicates that Algorithm <ref> is capable of generating an ϵ-first-order stationary point of min_x Ψ_μ_λ(x) with K≳ n^5μ_λ^-4ϵ^-2. As μ_λ→ 0, the problem (<ref>) changes towards the nonconvex-concave problem (<ref>) and the sample complexity becomes worse, which to some extent implies the difficulty of directly solving (<ref>). Define x_+^k = Π_(x^k - τ_x h_x^k), λ_+^k = Π_Δ_n(λ^k + τ_λh_λ^k). Our proof of Theorem <ref> mainly relies on analyzing Ṽ_k as follows:
Ṽ_k = 1/τ_x^2x_+^k - x^k^2 + h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2 + 1/τ_λ^2λ_+^k - λ^k^2 + h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2,
which is commonly used in min-max optimization (see, e.g., <cit.>). The construction of the first two terms follows the same idea in Section <ref>, and for the last two terms we have
τ_λ^2μ_λ^2λ^k - λ_*^k^2 = (λ_+^k - λ^k^2 + τ_λ^2h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2).
Therefore, Ṽ_k = 0 implies x^k = Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k)) as well as λ^k = λ_*^k, which further imply the gradient mapping of problem min_xΨ_μ_λ(x) at x^k is 0, and hence the validity of our optimality measure. We defer the proof details to Section <ref> in the Appendix.
Note that in Theorem <ref> we explicitly characterize the dependency on n and μ_λ in the convergence rate and the sample complexity. It is worth noting that two variants of stochastic gradient descent ascent (SGDA) algorithms for solving nonconvex-strongly-concave min-max optimization problems (without bilevel substructures)
have been studied in <cit.>. While such algorithms are not immediately applicable to (<ref>) due to the additional bilevel substructure, it is instructive to compare with those methods assuming direct access to y_i^*(x) in (<ref>). Specifically, we observe that the sample complexities of the SGDA variant with batch size M=Θ(n^1.5ϵ^-1) in <cit.> and of the moving-average SGDA variant with 𝒪(1) batch size in <cit.> for solving (<ref>), assuming direct access to y_i^*(x), are 𝒪(n^4μ_λ^-2ϵ^-2) and 𝒪(n^5μ_λ^-4ϵ^-2)[Note that Φ_μ_λ(x, λ) in (<ref>) is quadratic in λ, and these two sample complexities are obtained under this special case, i.e., ∇_2^2f(x,y) = -μ𝐈 applied to <cit.>.], respectively. Our results in Theorem <ref> indicate that the sample complexity of the proposed algorithm for solving min-max bilevel problems has the same dependency on n and μ_λ as that of the moving-average SGDA introduced in <cit.> for solving min-max single-level problems, while also computing y_i^*(x) instead of assuming direct access to it.
§ EXPERIMENTS
While our contributions primarily focus on theoretical aspects, we also conducted experiments to validate our results. We first compare the performance of MA-SOBA with other benchmark methods on two common tasks proposed in previous works <cit.>: hyperparameter optimization for ℓ^2-penalized logistic regression and data hyper-cleaning on the corrupted MNIST dataset. Our experiments are performed with the aid of the recently developed Benchopt package <cit.> and the open-sourced bilevel optimization benchmark[<https://github.com/benchopt/benchmark_bilevel>]. For a fair comparison, we exclusively consider benchmark methods that do not utilize variance reduction techniques in Table <ref>: (i) <cit.>; (ii) <cit.>; (iii) <cit.>/ <cit.>; (iv) <cit.>. Noting that the two methods in (iii) differ only in their time scales, we use a single representative for this class of approach. Also, we omit the comparison with <cit.> below, given that it is essentially a double-loop method with increasing batch sizes. Detailed setups and additional experiments on all other methods are deferred to the Appendix. The tunable parameters of the benchmark methods are selected in the same manner as in the open-sourced benchmark.
In the first task, we fit binary classification models on the IJCNN1 dataset[<https://www.csie.ntu.edu.tw/ cjlin/libsvmtools/datasets/binary.html>]. The functions f and g of problem (<ref>) are the average logistic loss on the validation set and the training set, respectively, with ℓ^2 regularization for g. In Figure <ref>, we plot the suboptimality gap against the runtime for each method. Surprisingly, we observed that MA-SOBA achieves lower objective values after several iterations compared to all benchmark methods. This improvement can be attributed to the convergence of the average hypergradients {h^k}. These findings demonstrate the practical superiority of our algorithmic framework, even with the same sample complexity results.
In the second task, we conduct data hyper-cleaning on the MNIST dataset as introduced in <cit.>. Data hyper-cleaning aims to train a multinomial logistic regression model on the corrupted training set and determine a weight for each training sample. These weights should approach zero for samples with corrupted labels. We randomly replace each label in the training set of MNIST with a label drawn from {0,1,…, 9} with probability p. The task can be formulated as the bilevel optimization problem (<ref>) with the inner variable y being the regression coefficients and the outer variable x being the sample weights. The function g is the sample-weighted cross-entropy loss on the corrupted training set with ℓ^2 regularization. The function f is the cross-entropy loss on the validation set. We report the test error in Figure <ref>. We observe that MA-SOBA outperforms the other benchmark methods by achieving lower test errors faster.
To demonstrate the practical performance of MORMA-SOBA, we conduct experiments on the robust multi-task representation learning problem introduced in <cit.> on the FashionMNIST dataset <cit.>. Each bilevel objective Φ_i in this setup represents a distinct learning “task” i∈[n] with its own training and validation sets. The optimization variable comprises a shared representation network, parameterized by the outer variable x, along with per-task linear models parameterized by the inner variables y_i. The function f_i is the average cross-entropy loss over the i-th validation set, and the function g_i is the ℓ^2-regularized cross-entropy loss over the i-th training set. The goal is to learn a shared representation and per-task models that generalize well on each task, which is usually achieved by solving a single-objective problem that minimizes (1/n)∑_i Φ_i. In Figure <ref>, we compare our algorithm with the existing min-max bilevel algorithm MORBiT <cit.> in terms of the average loss ((1/n)∑_i Φ_i) and the maximum loss (max_i Φ_i). The results demonstrate the superiority of MORMA-SOBA over MORBiT in terms of lowering both the maximum loss and the average loss at a faster rate.
§ CONCLUSION
In this work, we propose a novel class of algorithms (MA-SOBA) for solving the stochastic bilevel optimization problem in (<ref>) by introducing a moving-average step to estimate the hypergradient. We present a refined convergence analysis of our algorithm, achieving the optimal sample complexity without relying on the high-order smoothness assumptions employed in the literature. Furthermore, we extend our algorithmic framework to tackle a generic min-max bilevel optimization problem within the multi-objective setting, identifying and addressing a theoretical gap present in the literature.
§.§.§ Acknowledgements
We thank the authors of <cit.> for clarifications regarding their paper.
§ EXPERIMENTAL DETAILS
All the experiments were conducted using Python. The first two tasks, involving the comparison of MA-SOBA with other stochastic bilevel optimization algorithms, utilized the Benchopt package <cit.> and the open-sourced bilevel benchmark <cit.>[<https://github.com/benchopt/benchmark_bilevel>]. The final task, focused on robust multi-objective representation learning, was implemented in PyTorch, following the source code provided by <cit.>[<https://github.com/minimario/MORBiT>].
§.§ Experimental Details for MA-SOBA
Setup. In our experiments, we strictly adhere to the settings provided in bench_bilevel, as detailed in Appendix B.1 of <cit.>. The previous results and setups of <cit.> have also been available in <https://benchopt.github.io/results/benchmark_bilevel.html>. For completeness, we provide a summary of the setup below.
* To avoid redundant computations, we utilize oracles for the function F_ξ, G_ϕ, which provide access to quantities such as ∇_1 F_ξ(x,y), ∇_2 F_ξ(x, y), ∇_2 G_ϕ(x,y), ∇_22^2 G_ϕ(x,y)v, and ∇_12^2 G_ϕ(x,y)v, although this approach may violate the independence assumption in Assumption <ref>.
* In all our experiments, we employ a batch size of 64 for all methods, even for and that theoretically require increasing batch sizes.
* For methods involving an inner loop (, , ), we perform 10 inner steps per each outer iteration as proposed in those papers.
* For methods that involve the Neumann approximation for the Hessian vector product (such as , , , and ), we perform 10 steps of the subroutine per outer iteration. For , we perform 10 steps of SGD to approximate the inversion of the linear system.
* The step sizes and momentum parameters used in all benchmark algorithms are directly adopted from the fine-tuned parameters provided by <cit.>. From a grid search, we select the best constant step sizes for MA-SOBA.
At present, we have excluded <cit.> from the benchmark due to the unavailability of an open-sourced implementation and its limited reported improvement over existing methods.
§.§.§ Hyperparameter Optimization on IJCNN1
In this experiment, we focus on selecting the regularization parameters for a multi-regularized logistic regression model on the IJCNN1 dataset, where we have one hyperparameter per feature. Specifically, the problem can be formulated as:
min_ν∈^d Φ(ν) := 𝔼_(X, Y) ∼_val[ℓ(⟨ω^*(ν), X⟩ , Y)] =: f(ν, ω^*(ν))
s.t. ω^*(ν) = argmin_ω∈^d 𝔼_(X, Y) ∼_train[ℓ(⟨ω, X⟩, Y)] + 1/2ω^⊤diag(e^ν_1, …, e^ν_d)ω =: g(ν, ω)
In this case, |_train| = 49,990, |_val| = 91,701, and d=22. For each sample, the covariate and label are denoted as (X, Y), where X∈^22 and Y∈{0,1}. The inner variable (ω∈^22) is the regression coefficient. The outer variable (ν∈^22) is a vector of regularization parameters. The loss function ℓ(y', y) = -ylog (y') - (1-y) log(1-y') is the log loss.
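A minimal NumPy sketch of the inner and outer objectives in this formulation is given below, using a tiny synthetic stand-in for IJCNN1 (the data, sizes, and function names are assumptions made only for illustration).

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_loss(p, y):
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)).mean()

def inner_obj(nu, omega, X_tr, y_tr):
    """g(ν, ω): training log loss + ½ ωᵀ diag(e^ν) ω  (per-feature ℓ² regularization)."""
    return log_loss(sigmoid(X_tr @ omega), y_tr) + 0.5 * omega @ (np.exp(nu) * omega)

def outer_obj(nu, omega_star, X_val, y_val):
    """f(ν, ω*(ν)): validation log loss at the inner minimizer."""
    return log_loss(sigmoid(X_val @ omega_star), y_val)

# Tiny synthetic stand-in for IJCNN1 (dimensions are illustrative, not the real dataset).
rng = np.random.default_rng(6)
d, n_tr, n_val = 22, 200, 100
X_tr, X_val = rng.standard_normal((n_tr, d)), rng.standard_normal((n_val, d))
w_true = rng.standard_normal(d)
y_tr = (X_tr @ w_true > 0).astype(float)
y_val = (X_val @ w_true > 0).astype(float)
nu, omega = np.zeros(d), np.zeros(d)
print(inner_obj(nu, omega, X_tr, y_tr), outer_obj(nu, omega, X_val, y_val))
```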
To complement the comparison presented in the main paper, we conducted additional experiments comparing all benchmark methods, including the variance-reduction-based ones. In Figure <ref>, we plot the suboptimality gap (Φ(x) - Φ^*) against runtime and the number of calls to the oracles. Unfortunately, the previous results obtained for two of the baselines on the IJCNN1 dataset are not reproducible at the moment due to some conflicts in the current developer version of the benchmark package; as reported in <cit.>, their curves follow similar trends to the remaining baselines and eventually reach comparable levels. Following a grid search, we have selected the parameters of MA-SOBA as α_kτ = 0.02, β_k=γ_k=0.01, and θ_k = 0.1. As shown in Figure <ref>, our proposed method significantly outperforms the non-variance-reduction baselines, achieving a slightly lower suboptimality gap than the state-of-the-art variance-reduction-based method.
§.§.§ Data Hyper-Cleaning on MNIST
The second experiment we perform involves data hyper-cleaning on the MNIST dataset. The dataset is partitioned into a training set _train, a validation set _val, and a test set _test, where |_train|=20,000, |_val|=5,000, and |_test|=10,000. Each sample is represented as a vector X of dimension 784, where the input image is flattened. The corresponding label takes values from the set {0,1,…,9}. We use Y∈^10 to denote its one-hot encoding. Each sample in the training set is corrupted with probability p by replacing its label with a random label {0,1,…,9}. The task of data hyper-cleaning can be formulated into the bilevel optimization problem as below:
min_ν∈^|_train| Φ(ν) := 𝔼_(X, Y) ∼_val[ℓ(W^*(ν) X, Y)] =: f(ν, W^*(ν))
s.t. W^*(ν) = argmin_W∈^10× 784 1/|_train|∑_(X_i, Y_i)∈_trainσ(ν_i)ℓ(WX_i, Y_i^corrupted) + C_r‖W‖^2 =: g(ν, W),
where the outer variable (ν∈^20,000) is a vector of sample weights for the training set, the inner variable W∈^10× 784, and ℓ is the cross entropy loss and σ is the sigmoid function. The regularization parameter C_r=0.2 following <cit.>. The objective of data hyper-cleaning is to train a multinomial logistic regression model on the training set and determine a weight for each training sample using the validation set. The weights are designed to approach zero for corrupted samples, thereby aiding in the removal of these samples during the training process.
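The corresponding objectives can be written compactly as in the sketch below; the function names are hypothetical and the implementation is only meant to mirror the formulation above, not the benchmark code.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def cross_entropy(P, Y_onehot):
    return -(Y_onehot * np.log(P + 1e-12)).sum(axis=1)

def inner_obj(nu, W, X_tr, Y_tr_corrupt, C_r=0.2):
    """g(ν, W): sample-weighted (σ(ν_i)) cross-entropy on the corrupted training set + C_r‖W‖²."""
    weights = 1.0 / (1.0 + np.exp(-nu))
    losses = cross_entropy(softmax(X_tr @ W.T), Y_tr_corrupt)
    return (weights * losses).mean() + C_r * np.sum(W ** 2)

def outer_obj(W_star, X_val, Y_val):
    """f(ν, W*(ν)): unweighted cross-entropy on the clean validation set."""
    return cross_entropy(softmax(X_val @ W_star.T), Y_val).mean()
```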
To supplement the comparison presented in the main paper, we conducted additional experiments that involved comparing all benchmark methods, including the variance reduction-based method. Following a grid search, we have selected the parameters in as α_kτ = 10^3, β_k = γ_k = 10^-2, and θ_k=10^-1. In Figure <ref>, we plot the test error against runtime and the number of calls to oracles with different corruption probability p∈{0.5, 0.7, 0.9}. We observe that has comparable performance to the state-of-the-art method . Remarkably, is the fastest algorithm to reach the best test accuracy when p=0.5.
§.§.§ Moving Average vs. Variance Reduction
Through empirical studies, we have demonstrated that our proposed method, MA-SOBA, which utilizes a moving-average (MA) technique, achieves performance comparable to the state-of-the-art variance-reduction-based approach using SAGA-type updates <cit.>. In this context, we would like to highlight the key differences and the relationship between these two techniques.
We start by presenting the update rules for the sequence of estimated gradients {g^k} used by the variance reduction technique SAGA <cit.> and by our moving-average (MA) method for the single-level problem.
SAGA (finite-sum): min_x 1/n∑_i=1^n f_i(x)
g^k = ∇ f_i_k(x^k) - ∇ f_i_k(x̅_i_k) + 1/n∑_j=1^n∇ f_j(x̅_j)
The SAGA update is designed for finite-sum problems with offline batch data. At each iteration k, the algorithm randomly selects an index i_k ∈ [n] and updates the gradient variable g^k using a reference point x̅_i_k, which corresponds to the last point at which ∇ f_i_k was evaluated. However, it should be noted that SAGA requires storing the previously evaluated gradients ∇ f_j(x̅_j) in a table, which can be memory-intensive when the sample size n or the dimension d is large. In the finite-sum setting, there exist several other variance reduction methods, such as <cit.>, that can be employed to further improve the dependence on the number of samples n for bilevel optimization problems. However, such recursive variance-reduction methods require gradient evaluations at both x^k and x^k-1 in each iteration.
MA (expectation): min_x 𝔼_ξ[f(x;ξ)]
g^k = (1-α_k) g^k-1 + α_k ∇ f(x^k; ξ^k+1)
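The contrast between the two estimators can be made concrete on a toy least-squares instance, as in the sketch below; the problem, step size, and iteration counts are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 50, 10
A = rng.standard_normal((n, d)); b = rng.standard_normal(n)
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])          # ∇f_i for f_i(x) = ½(a_iᵀx − b_i)²
full_grad = lambda x: A.T @ (A @ x - b) / n

x = np.zeros(d)

# SAGA estimator: needs a table storing the last gradient evaluated for every component i.
table = np.stack([grad_i(x, i) for i in range(n)])
table_avg = table.mean(axis=0)
i = rng.integers(n)
g_saga = grad_i(x, i) - table[i] + table_avg
table_avg += (grad_i(x, i) - table[i]) / n
table[i] = grad_i(x, i)

# Moving-average estimator: a single d-dimensional buffer, works with streaming samples.
h, alpha_k = np.zeros(d), 0.2
for _ in range(100):
    i = rng.integers(n)
    h = (1 - alpha_k) * h + alpha_k * grad_i(x, i)       # for fixed x, h tracks ∇f(x) on average
print(np.linalg.norm(h - full_grad(x)))                  # small residual due to averaging
```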
Unlike variance reduction techniques, the moving average methods can solve the general expectation-form problem with online and streaming data using a simple update per iteration. In addition, the moving average techniques offer two advantages compared to variance reduction-based methods:
Theoretical Assumption. All variance reduction methods, including <cit.>, <cit.>, <cit.>, <cit.>, and others, typically rely on assuming mean-squared smoothness assumptions. In particular, for stochastic optimization problems in the form of min_x {f(x) = 𝔼[F(x,ξ)]}, the definition of mean-squared smoothness (MSS) is:
(MSS) 𝔼_ξ[∇ F(x, ξ) - ∇ F(x', ξ)^2] ≤ L^2 x-x' ^2.
However, MSS is a stronger assumption than the general smoothness assumption on f:
∇ f(x) - ∇ f(x')≤ L x- x' .
By Jensen’s inequality, we have that MSS is stronger than the general smoothness assumption on f:
∇ f(x) - ∇ f(x')^2 ≤𝔼_ξ[∇ F(x, ξ) - ∇ F(x', ξ)^2].
In this work, the theoretical results of the proposed methods are built only on the smoothness assumptions on the UL and LL functions f, g, without further assuming MSS on F_ξ and G_ϕ. It is worth noting that a clear distinction in the lower bounds of sample complexity for solving single-level stochastic optimization has been proven in <cit.>. Specifically, they establish a separation between the MSS assumption on F_ξ and the smoothness assumption on f (𝒪(ϵ^-1.5) vs. 𝒪(ϵ^-2)).
Thus, it is important to emphasize that MA-SOBA achieves the optimal sample complexity 𝒪(ϵ^-2) under our weaker assumptions.
Practical Implementation. Variance reduction methods often entail additional space complexity, double-loop implementations, or double oracle computations per iteration. These requirements can be unfavorable for large-scale problems with limited computing resources. For instance, in the second task, the runtime improvement achieved by variance reduction is limited. This limitation can be attributed to the dimensionality of the variables ν (with a dimension of 20,000) and W (with a dimension of 10 × 784). The benefit of variance reduction methods is expected to be even less significant for more complex problems involving computationally expensive oracle evaluations.
§.§ Experimental Details for MORMA-SOBA
We adopt the same setup as described in <cit.>, which can be summarized as follows.
Setup. We consider binary classification tasks generated from the FashionMNIST data set where we select 8 “easy” tasks (lowest loss ∼ 0.3 from independent training) and 2 “hard” tasks (lowest loss ∼ 0.45 from independent training) for multi-objective robust representation learning:
* “easy” tasks: (0, 9), (1, 7), (2, 7), (2, 9), (4, 7), (4, 9), (3, 7), (3, 9)
* “hard” tasks: (0, 6), (2, 4)
For each task i∈[10] above, we partition its dataset into the training set _i^train, validation set _i^val, and test set _i^test. We also generate 7 (unseen) binary classification tasks for testing:
* “easy” tasks: (1, 9), (2, 5), (4, 5), (5, 6)
* “hard” tasks: (2, 6), (3, 6), (4, 6)
We train a shared representation network that maps the 784-dimensional (vectorized 28x28 images) input to a 100-dimensional space. Subsequently, each task learns a binary classifier based on this shared representation. To learn a shared representation and per-task models that generalize well on each task, we aim to solve the following min-max bilevel optimization problem:
min_E∈^100× 784 max_1≤ i ≤ n Φ_i(E) := 𝔼_(X, Y)∼_i^val[ℓ( W_i^*(E) ReLU(EX) + b_i^*(E), Y)] =: f_i(E, (W_i^*, b_i^*))
s.t. (W_i^*(E), b_i^*(E)) = argmin_W_i∈^10× 100, b_i∈^10 𝔼_(X, Y)∼_i^train[ℓ(W_i ReLU(EX) + b_i, Y)] + ρW_i_F^2 =: g_i(E, (W_i, b_i)), 1≤ i≤ n,
where ReLU(EX) is the shared representation and W_i, b_i are the per-task weight and bias.
Each sample is represented as a vector X of dimension 784, obtained by flattening the input image. The corresponding label takes values in the set {0,1,…,9}, and we use Y∈^10 to denote its one-hot encoding. Each bilevel objective Φ_i above represents a distinct binary classification task i∈[n]. The optimization involves a shared representation network, parameterized by the outer variable E∈^100× 784, along with per-task linear models parameterized by the inner variables (W_i, b_i). The function f_i is the average cross-entropy loss over the validation set _i^val, and the function g_i is the ℓ^2-regularized cross-entropy loss over the training set _i^train.
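To make the objectives above concrete, the following is a minimal PyTorch-style sketch of g_i and f_i with a shared linear encoder E and per-task heads (W_i, b_i); the random data, variable names, and the plain SGD step are illustrative assumptions rather than the actual experiment code.

```python
import torch
import torch.nn.functional as F

rho = 5e-4
E = torch.randn(100, 784, requires_grad=True)            # shared encoder (outer variable)
heads = [(torch.randn(10, 100, requires_grad=True),      # per-task inner variables (W_i, b_i)
          torch.zeros(10, requires_grad=True)) for _ in range(10)]

def task_logits(i, X):
    W, b = heads[i]
    return F.relu(X @ E.T) @ W.T + b                      # ReLU(EX) is the shared representation

def inner_loss(i, X, y):
    """g_i: cross-entropy on D_i^train plus rho * ||W_i||_F^2."""
    W, _ = heads[i]
    return F.cross_entropy(task_logits(i, X), y) + rho * W.pow(2).sum()

def outer_loss(i, X, y):
    """f_i: average cross-entropy on D_i^val, evaluated at the inner solution."""
    return F.cross_entropy(task_logits(i, X), y)

# Example: one SGD step on the inner variables of task 0 for a random batch.
X, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
gW, gb = torch.autograd.grad(inner_loss(0, X, y), heads[0])
with torch.no_grad():
    heads[0][0].sub_(0.01 * gW)
    heads[0][1].sub_(0.01 * gb)
```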
In the experiment, the regularization parameter in the lower-level function is ρ=5× 10^-4. The implementation of the baseline follows the manner described in <cit.>. Specifically, the code of <cit.> uses vanilla SGD with a learning rate scheduler and incorporates momentum and weight decay to optimize each variable:
* Outer variable: learning rate = 0.01, momentum = 0.9, weight_decay = 10^-4
* Inner variable: learning rate = 0.01, momentum = 0.9, weight_decay = 10^-4
* Simplex variable: learning rate = 0.3, momentum = 0.9, weight_decay = 10^-4
In addition, it adopts straightforward iterative auto-differentiation to calculate the hypergradient, without using the Neumann approximation of the Hessian inverse.
For the implementation of our method, the regularization parameter μ_λ in <ref> is set to 0.01. All remaining parameters are chosen as constant values, as listed below:
* Outer variable: τ_x= 1, α_k = 0.02,
* Inner variable: β_k = 0.02
* Auxiliary variable: γ_k=0.02
* Simplex variable: τ_λ = 1, α_k = 0.02
* Average gradient: θ_k=0.6
Both evaluated methods use batch sizes of 8 and 128 to compute g_i for each inner step and f_i for each outer iteration, respectively. In addition to Figure <ref>, which showcases the performance on the 10 seen tasks used for representation learning, we present Figure <ref>. This figure displays the maximum/average loss values against the number of iterations on test sets consisting of the 10 seen tasks and 7 unseen tasks. Our proposed approach demonstrates superior performance in terms of faster reduction of both the maximum and average loss.
§ PROOFS
We will prove Theorems <ref> and <ref> in Sections <ref> and <ref> respectively. In each section we first establish the relation between the optimality measure (see V_k, Ṽ_k in Sections <ref> and <ref>) and the gradient mapping, which reduces the proof of the main theorems to proving the convergence of the primal variables (x^k in Theorem <ref> or (x^k, λ^k) in Theorem <ref>) and the dual variables (h^k in Theorem <ref> or (h_x^k, h_λ^k) in Theorem <ref>). Then we bound the hypergradient estimation error, the primal convergence and the dual convergence separately. In our notation convention, the superscript k usually denotes the iteration number and the subscript i represents variables related to the functions f_i, g_i. For a function #, L_# denotes its Lipschitz constant.
We first specify the constants in Assumption <ref>.
Assumption <ref>.
For any k≥ 0, let _k denote the sigma algebra generated by all iterates with superscripts not greater than k, i.e., _k = σ{h^1, h^2, ..., h^k, x^1,..., x^k, y^1, ..., y^k, z^1,...,z^k}.
The stochastic oracles used in Algorithm <ref> at k-th iteration are unbiased with bounded variance given _k, i.e., there exist positive constants σ_f,1, σ_f,2, σ_g,1, σ_g,2 such that
[u_x^k+1|_k] = ∇_1 f(x^k, y^k), [u_x^k+1 - ∇_1 f(x^k, y^k)^2|_k]≤σ_f,1^2,
[u_y^k+1|_k] = ∇_2 f(x^k, y^k), [u_y^k+1 - ∇_2 f(x^k, y^k)^2 |_k]≤σ_f,2^2,
[v^k+1|_k] = ∇_2 g(x^k, y^k), [v^k+1 - ∇_2 g(x^k, y^k)^2|_k ]≤σ_g,1^2,
[H^k+1|_k] = ∇_22^2 g(x^k,y^k), [H^k+1 - ∇_22^2g(x^k,y^k)^2|_k]≤σ_g,2^2,
[J^k+1|_k] = ∇_12^2g(x^k, y^k), [J^k+1 - ∇_12^2g(x^k, y^k)^2|_k] ≤σ_g,2^2.
In addition, they are conditionally independent conditioned on _k.
Next we state some technical lemmas that will be used in both sections.
Suppose f(x) is μ-strongly convex and L-smooth. For any x and γ < 2/(μ + L), define x^+ = x - γ∇ f(x) and x^* = argmin_x f(x). Then we have
x^+ - x^*≤ (1-γμ)x-x^*.
See, e.g., Lemma 10 in <cit.>.
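As a quick numerical sanity check of this contraction (a toy quadratic of our own choosing, not part of the proof):

```python
import numpy as np

# f(x) = 0.5 * x' diag(mu, L) x is mu-strongly convex and L-smooth, with x* = 0.
mu, L = 0.5, 4.0
gamma = 1.0 / (mu + L)                       # admissible: gamma < 2 / (mu + L)
D = np.diag([mu, L])
x = np.array([1.0, -2.0])
x_plus = x - gamma * (D @ x)                 # gradient step
print(np.linalg.norm(x_plus) <= (1 - gamma * mu) * np.linalg.norm(x))   # expect True
```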
Define
κ = max(L_∇ f/μ_g, L_∇ g/μ_g), z^*(x) = (∇_22^2g(x, y^*(x)))^-1∇_2f(x, y^*(x)).
Suppose Assumption <ref> holds. Then Φ(x), y^*(x), z^*(x) are differentiable, and ∇Φ(x), y^*(x), z^*(x) are L_∇Φ-, L_y^*-, L_z^*-Lipschitz continuous respectively, with their expressions given by
∇Φ(x) = ∇_1 f(x, y^*(x)) - ∇_12^2 g(x, y^*(x))(∇_22^2g(x, y^*(x)))^-1∇_2 f(x, y^*(x)),
∇ y^*(x) = -∇_12^2 g(x, y^*(x))(∇_22^2g(x, y^*(x)))^-1.
and the constants given by
L_y^* = L_∇ g/μ_g = (κ), L_z^* = √(1 + L_y^*^2)(L_∇ f/μ_g + L_fL_∇_22^2g/μ_g^2) = (κ^3),
L_∇Φ = L_∇ f + 2L_∇ fL_∇ g + L_f^2L_∇^2 g/μ_g + 2L_fL_∇ gL_∇^2 g+L_∇ fL_∇ g^2/μ_g^2 + L_fL_∇^2 gL_∇ g^2/μ_g^3 = (κ^3).
Moreover, we have
z^*(x)≤L_f/μ_g.
See Lemma 2.2 in <cit.> for the proof of (<ref>), Lipschitz continuity of ∇Φ and y^*. For the Lipschitz continuity of z^* we have for any x, x̃, we know
z^*(x)- z^*(x̃)
= (∇_22^2g(x, y^*(x)))^-1∇_2f(x, y^*(x)) - (∇_22^2g(x̃, y^*(x̃)))^-1∇_2f(x̃, y^*(x̃))
≤ (∇_22^2g(x, y^*(x)))^-1∇_2f(x, y^*(x)) - (∇_22^2g(x̃, y^*(x̃)))^-1∇_2f(x, y^*(x))
+ (∇_22^2g(x̃, y^*(x̃)))^-1∇_2f(x, y^*(x)) - (∇_22^2g(x̃, y^*(x̃)))^-1∇_2f(x̃, y^*(x̃))
≤ L_f(∇_22^2g(x,y^*(x)))^-1∇_22^2g(x,y^*(x)) - ∇_22^2g(x̃,y^*(x̃))(∇_22^2g(x,y^*(x)))^-1
+ 1/μ_g∇_2f(x, y^*(x)) - ∇_2f(x̃, y^*(x̃))
≤ L_f L_∇_22^2g/μ_g^2√(x - x̃^2 + y^*(x) - y^*(x̃)^2) + L_∇ f/μ_g√(x - x̃^2 + y^*(x) - y^*(x̃)^2)
≤ L_z^*x - x̃,
where the first inequality uses triangle inequality, the second and third inequalities use Assumption <ref>, and the fourth inequality uses the Lipschitz continuity of y^*(x). The inequality in (<ref>) holds since g(x,·) is μ_g-strongly convex and ∇_2f(x, y^*(x))≤ L_f (see Assumption <ref>).
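The hypergradient expression for ∇Φ(x) above can be checked numerically on a toy quadratic bilevel instance; the following sketch (the toy functions and names are our own assumptions) compares it against finite differences of Φ:

```python
import numpy as np

# Toy instance: f(x, y) = 0.5*||y - a||^2 + 0.5*||x||^2,
#               g(x, y) = 0.5*y'Ay - x'By with A positive definite,
# so y*(x) = A^{-1} B' x and Phi(x) = f(x, y*(x)) is available in closed form.
rng = np.random.default_rng(0)
dx, dy = 3, 4
A = 2.0 * np.eye(dy)
B = rng.standard_normal((dx, dy))
a = rng.standard_normal(dy)

y_star   = lambda x: np.linalg.solve(A, B.T @ x)
grad1_f  = lambda x, y: x                      # grad_1 f
grad2_f  = lambda x, y: y - a                  # grad_2 f
grad12_g = lambda x, y: -B                     # grad_12^2 g (dx x dy)
grad22_g = lambda x, y: A                      # grad_22^2 g

def hypergrad(x):
    y = y_star(x)
    z = np.linalg.solve(grad22_g(x, y), grad2_f(x, y))    # z*(x)
    return grad1_f(x, y) - grad12_g(x, y) @ z             # expression for grad Phi(x)

Phi = lambda x: 0.5 * np.sum((y_star(x) - a) ** 2) + 0.5 * np.sum(x ** 2)
x0 = rng.standard_normal(dx)
num = np.array([(Phi(x0 + 1e-6 * e) - Phi(x0 - 1e-6 * e)) / 2e-6 for e in np.eye(dx)])
print(np.allclose(hypergrad(x0), num, atol=1e-5))          # expect True
```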
For any convex compact set , function η_(x,h,τ) defined in Section <ref> is differentiable
and ∇η_ is L_∇η_-Lipschitz continuous, with the closed form expression and constant given by
∇_1η_(x,h,τ) = -h + 1/τ(x - d̅), ∇_2η_(x,h,τ) = d̅ - x, L_∇η_ = 2 √((1+1/τ)^2 + (1 + τ/2)^2).
where d̅ is defined as
d̅ = _d∈{h, d - x> + 1/2τd-x^2} = Π_(x - τ h),
which satisfies the optimality condition
h + 1/τ(d̅ - x), d - d̅>≥ 0, for all d∈.
See Lemma 3.2 in <cit.>.
§.§ Proof of Theorem <ref>
For simplicity, we summarize the notations that will be used in Section <ref> as follows.
κ = max(L_∇ g/μ_g, L_∇ f/μ_g), w^k+1 =u_x^k+1 - J^k+1z^k,
y_*^k = y^*(x^k) = _y∈^d_y g(x^k, y), z_*^k = (∇_22^2g(x^k, y_*^k))^-1∇_2f(x^k, y_*^k),
Φ(x) = f(x, y^*(x)), η_(x, h, τ) = min_d∈ X{h, d - x> + 1/2τd-x^2}.
In this section we suppose Assumptions <ref> and <ref> hold.
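For readability, the following minimal sketch summarizes the single-loop update structure that the analysis in this section relies on: an SGD step on y, a stochastic step on the linear system defining z, a moving-average hypergradient estimate h, and a projected moving-average step on x. It is our own reconstruction from the update rules used in the proofs (the oracle interface and names are assumptions), not a verbatim restatement of Algorithm <ref>.

```python
def one_iteration(x, y, z, h, oracle, proj_X, alpha, beta, gamma, theta, tau):
    """One step of the single-loop scheme analyzed here (sketch).

    oracle(x, y) returns stochastic estimates (u_x, u_y, v, H, J) of
    grad_1 f, grad_2 f, grad_2 g, grad_22^2 g, grad_12^2 g at (x, y);
    proj_X is the Euclidean projection onto the convex compact set X."""
    u_x, u_y, v, H, J = oracle(x, y)
    w = u_x - J @ z                          # hypergradient estimate w^{k+1}
    x_plus = proj_X(x - tau * h)             # x_+^k uses the current average h^k
    x_new = x + alpha * (x_plus - x)         # moving-average primal step
    h_new = (1.0 - theta) * h + theta * w    # h^{k+1} = (1-theta_k) h^k + theta_k w^{k+1}
    y_new = y - beta * v                     # lower-level SGD step
    z_new = z - gamma * (H @ z - u_y)        # stochastic step toward z*(x)
    return x_new, y_new, z_new, h_new
```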
We assume stepsizes in Algorithm <ref> satisfy
β_k = c_1α_k, γ_k = c_2α_k, θ_k = c_3α_k,
where c_1, c_2, c_3 > 0 are constants to be determined. We will utilize the following merit function in our analysis:
W_k = W_k,1 + W_k,2,
W_k,1 = Φ(x^k) -inf_x∈Φ(x) - 1/c_3η_(x^k,h^k,τ)
W_k,2 = 1/c_1y^k - y_*^k^2 + 1/c_2z^k - z_*^k^2.
By definition of η_, we can verify that W_k,1≥ 0. Moreover, as discussed in Section <ref>, we consider the following optimality measure:
V_k = 1/τ^2x_+^k - x^k^2 + h^k - ∇Φ(x^k)^2.
The following Lemma characterizes the relation between V_k and gradient mapping of problem <ref>.
Suppose Assumptions <ref> and <ref> hold. In Algorithm <ref> we have
1/τ^2x^k - Π_(x^k - τ∇Φ(x^k)) ^2≤ 2V_k.
Note that we have
x^k - Π_(x^k - τ∇Φ(x^k))^2
≤ 2(x_+^k - x^k^2 + Π_(x^k - τ h^k) - Π_(x^k - τ∇Φ(x^k))^2)
≤ 2(x_+^k - x^k^2 + τ^2h^k - ∇Φ(x^k) ^2) = 2V_k,
where the first inequality uses Cauchy-Schwarz inequality and the second inequality uses the non-expansiveness of projection onto a convex compact set. This completes the proof.
Next we present a technical lemma about the variance of w^k+1 and the bound for h^k+1 - h^k.
Suppose Assumptions <ref> and <ref> hold. In Algorithm <ref> we have
[w^k+1 - [w^k+1|_k]^2] ≤σ_w,k+1^2
σ_w,k+1^2 := σ_w^2 + 2σ_g,2^2[z^k - z_*^k^2], σ_w^2 = σ_f,1^2 + 2σ_g,2^2L_f^2/μ_g^2,
[h^k+1 - h^k^2]≤σ_h, k^2,
σ_h, k^2 := 2θ_k^2[h^k - ∇Φ(x^k)^2 + [w^k+1|_k] -∇Φ(x^k)^2]+θ_k^2σ_w,k+1^2
We first consider w^k. Note that
w^k+1 - [w^k+1|_k] = u_x^k+1 - [u_x^k+1|_k] - (J^k+1 - [J^k+1|_k])z^k.
Hence we know
[w^k+1 - [w^k+1|_k]^2|_k]
= [u_x^k+1 - [u_x^k+1|_k]^2|_k] +
[J^k+1 - [J^k+1|_k]^2|_k]z^k^2
≤ σ_f,1^2 + 2σ_g,2^2z_*^k^2 + 2σ_g,2^2z^k - z_*^k^2 ≤σ_f,1^2 + 2σ_g,2^2L_f^2/μ_g^2 + 2σ_g,2^2z^k - z_*^k^2,
where the first equality uses independence, the first inequality uses Cauchy-Schwarz inequality, and the second inequality uses (<ref>). This proves (<ref>). Next for h^k+1 - h^k we have
[h^k+1 - h^k^2|_k]
= θ_k^2[h^k - [w^k+1|_k]^2|_k] + θ_k^2[w^k+1 - [w^k+1|_k] ^2|_k]
≤ 2θ_k^2[h^k - ∇Φ(x^k)^2|_k] + 2θ_k^2[[w^k+1|_k] -∇Φ(x^k) ^2 |_k] + θ_k^2σ_w,k+1^2,
which proves (<ref>) by taking expectation on both sides.
We would like to highlight that in (<ref>), we explicitly characterize the upper bound of the variance of w^k+1, which contains [z^k -z_*^k^2] and requires further analysis. In contrast, Assumption 3.7 in <cit.> directly assumes the second moment of D_x^t is uniformly bounded, i.e.,
[D_x^t^2]≤ B_x^2 for some constant B_x ≥ 0,
Note that D_x^t in <cit.> is the same as our w^k+1 (see (<ref>), line 5 of Algorithm <ref> and definition of w^k+1 in (<ref>)). The second moment bound can directly imply the variance bound, i.e.,
[D_x^t - [D_x^t]^2]≤[D_x^t^2]≤ B_x^2.
This implies that some stronger assumptions are needed to guarantee Assumption 3.7 in <cit.>, as also pointed out by the authors (see discussions right below it). Instead, our refined analysis does not require that.
§.§.§ Hypergradient Estimation Error
Note that Assumptions 3.1 and 3.2 in <cit.> state that the upper-level function f is twice differentiable, the lower-level function g is three times differentiable and ∇^2f, ∇^3g are Lipschitz continuous so that z_*^k, as a function of x^k (see (<ref>)), is smooth, which is a crucial condition for (63) - (67) in <cit.>, which follows the analysis in Equation (49) in <cit.>. In this section we show that, by incorporating the moving average technique recently introduced to decentralized bilevel optimization <cit.>, we can remove this additional assumption. We have the following lemma characterizing the error induced by y^k and z^k.
Suppose Assumptions <ref> and <ref> hold. If the stepsizes satisfy
β_k < 2/(μ_g + L_∇ g), γ_k ≤min(1/(4μ_g), 0.06μ_g/σ_g,2^2)
then in Algorithm <ref> we have
∑_k=0^Kα_k[y^k - y_*^k^2]≤ C_yx∑_k=0^Kα_k [x_+^k - x^k^2] + C_y,0 + C_y,1(∑_k=0^Kα_k^2)
∑_k=0^Kα_k[z^k-z_*^k^2] ≤ C_zx∑_k=0^Kα_k [x_+^k - x^k^2] + C_z,0 + C_z,1(∑_k=0^Kα_k^2).
where the constants are defined as
C_yx = 2L_y^*^2/c_1^2μ_g^2, C_y,0 = 1/c_1μ_g[y^0 - y_*^0^2], C_y,1=2c_1σ_g,1^2/μ_g,
C_zx = 5L_f^2/μ_g^2(L_∇_22^2g^2/μ_g^2 + 1)2L_y^*^2/c_1^2μ_g^2 + 4L_z^*^2/c_2^2μ_g^2
C_z,0 = 5L_f^2/μ_g^2(L_∇_22^2g^2/μ_g^2 + 1)·1/c_1μ_g[y^0 - y_*^0^2] + 1/c_2μ_g[z^0 - z_*^0^2]
C_z,1 = 5L_f^2/μ_g^2(L_∇_22^2g^2/μ_g^2 + 1)·2c_1σ_g,1^2/μ_g + 2c_2σ_w^2/μ_g.
We first consider the error induced by y^k. We have
y^k+1 - y_*^k+1^2 ≤(1 + β_kμ_g)y^k+1 - y_*^k^2 + (1 + 1/β_kμ_g)y_*^k+1 - y_*^k^2
≤(1 + β_kμ_g)y^k+1 - y_*^k^2 + (α_k^2/β_kμ_g + α_k^2)L_y^*^2x_+^k - x^k^2,
where the first inequality uses Cauchy-Schwarz inequality:
u + v^2 ≤ (1 + c)(u^2 + 1/cv^2), for any vectors u, v and constant c>0.
Thanks to the moving average step of x^k, our analysis of y_*^k+1 - y_*^k is simplified compared to that in <cit.>. We also have
[y^k+1 - y_*^k^2|_k] = [y^k - β_k∇_2g(x^k, y^k) - y_*^k - β_k(v^k+1 - ∇_2g(x^k, y^k))^2|_k]
≤ y^k - β_k∇_2g(x^k, y^k) - y_*^k^2 + β_k^2σ_g,1^2
≤ (1-β_kμ_g)^2y^k - y_*^k^2 + β_k^2σ_g,1^2
where the first inequality uses Assumption (<ref>) and Lemma <ref>, and the second inequality uses Lemma <ref> (which requires strong convexity of g, Lipschitz continuity of ∇_2g, and the first inequality in (<ref>)). Combining (<ref>) and (<ref>), we know
[y^k+1 - y_*^k+1^2|_k]
≤ (1 + β_kμ_g)(1-β_kμ_g)^2y^k - y_*^k^2 + (α_k^2/β_kμ_g + α_k^2)L_y^*^2x_+^k - x^k^2 + (1 + β_kμ_g)β_k^2σ_g,1^2
≤ (1- β_kμ_g)y^k - y_*^k^2 + 2α_k^2L_y^*^2/β_kμ_gx_+^k - x^k^2 + 2β_k^2σ_g,1^2.
where the second inequality uses β_k < 2/(μ_g + L_∇ g) ≤ 1/μ_g. Taking summation (k from 0 to K) on both sides and taking expectation, we know
∑_k=0^Kβ_kμ_g[y^k - y_*^k^2]≤[y^0 - y_*^0^2] + ∑_k=0^K2α_k^2L_y^*^2/β_kμ_g[x_+^k - x^k^2] + ∑_k=0^K2β_k^2σ_g,1^2,
which proves the first inequality in (<ref>) by dividing c_1μ_g on both sides.
Next we analyze the error induced by z^k. Our analysis is substantially different from <cit.>. We first notice that
z^k+1 - z_*^k+1^2 ≤(1 + γ_kμ_g/3)z^k+1 - z_*^k^2 + (1 + 3/γ_kμ_g)z_*^k+1 - z_*^k^2
≤(1 + γ_kμ_g/3)z^k+1 - z_*^k^2 + (3α_k^2/γ_kμ_g + α_k^2 )L_z^*^2x_+^k - x^k^2
where the first and second inequalities use the Cauchy-Schwarz inequality together with the Lipschitz continuity of z^*(x).
For z^k+1 - z_*^k, we may follow the analysis of SGD under the strongly convex setting:
z^k+1 - z_*^k = z^k - γ_k(H^kz^k - u_y^k) - z_*^k= z^k - γ_k∇_22^2g(x^k, y^k)z^k + γ_k∇_2 f(x^k, y^k) - z_*^k
- γ_k(H^k+1 - ∇_22^2g(x^k, y^k))z^k + γ_k(u_y^k - ∇_2 f(x^k, y^k))
which gives
[z^k+1 - z_*^k^2|_k]
≤ z^k - γ_k∇_22^2g(x^k, y^k)z^k + γ_k∇_2 f(x^k, y^k) - z_*^k^2 + γ_k^2σ_g,2^2z^k^2 + γ_k^2σ_f,1^2
= (I - γ_k∇_22^2g(x^k, y^k))(z^k - z_*^k) - γ_k(∇_22^2g(x^k,y^k)z_*^k - ∇_2 f(x^k, y^k))^2 + γ_k^2σ_g,2^2z^k^2 + γ_k^2σ_f,1^2
≤ (1 + γ_kμ_g/2)(I - γ_k∇_22^2g(x^k, y^k))(z^k - z_*^k)^2
+ (1 + 2/γ_kμ_g)γ_k(∇_22^2g(x^k,y^k)z_*^k - ∇_22^2g(x^k, y_*^k)z_*^k + ∇_2 f(x^k, y_*^k) - ∇_2 f(x^k, y^k))^2
+ 2γ_k^2σ_g,2^2(z^k - z_*^k^2 + z_*^k^2)+ γ_k^2σ_f,1^2
≤ ((1 + γ_kμ_g/2)(1-γ_kμ_g)^2 + 2γ_k^2σ_g,2^2)z^k-z_*^k^2
+ (4γ_k/μ_g + 2γ_k^2)(L_∇_22^2g^2z_*^k^2 + L_∇_2 f^2)y^k - y_*^k^2 + 2γ_k^2σ_g,2^2z_*^k^2 + γ_k^2σ_f,1^2.
≤ (1 - 4γ_kμ_g/3)z^k-z_*^k^2 + (4γ_k/μ_g + 2γ_k^2)(L_∇_22^2g^2L_f^2/μ_g^2 + L_f ^2)y^k - y_*^k^2 + (2σ_g,2^2L_f^2/μ_g^2 + σ_f,1^2)γ_k^2,
where the first inequality uses Assumption <ref>, the second inequality uses Cauchy-Schwarz inequality and the definition of z_*^k, the third inequality uses Cauchy-Schwarz inequality and the fact that g is μ_g-strongly convex, and the fourth inequality uses Cauchy-Schwarz inequality, (<ref>) and
-γ_kμ_g/6 + 2γ_k^2σ_g,2^2 + γ_k^3μ_g^3/2≤ 0,
which is a direct result from the bound of γ_k in (<ref>). It is worth noting that our estimation can be viewed as a refined version of (72) - (75) in <cit.>.
Combining (<ref>) and (<ref>) we may obtain
[z^k+1 - z_*^k+1^2|_k]
≤ (1 + γ_kμ_g/3)[z^k+1 - z_*^k^2|_k] + (3α_k^2/γ_kμ_g + α_k^2 )L_z^*^2x_+^k - x^k^2
≤ (1 + γ_kμ_g/3)[(1 - 4γ_kμ_g/3)z^k-z_*^k^2 + (4γ_k/μ_g + 2γ_k^2)(L_∇_22^2g^2L_f^2/μ_g^2 + L_f^2)y^k - y_*^k^2]
+ (1 + γ_kμ_g/3)(2σ_g,2^2L_f^2/μ_g^2 + σ_f,1^2)γ_k^2 + (3α_k^2/γ_kμ_g + α_k^2 )L_z^*^2x_+^k - x^k^2
= (1 - γ_kμ_g)z^k-z_*^k^2 + (4γ_k/μ_g + 10γ_k^2/3 + 2γ_k^3μ_g/3)(L_∇_22^2g^2L_f^2/μ_g^2 + L_f^2)y^k - y_*^k^2
+ σ_w^2(γ_k^2 + γ_k^3μ_g/3) + (3α_k^2/γ_kμ_g + α_k^2 )L_z^*^2x_+^k - x^k^2
≤ (1 - γ_kμ_g)z^k-z_*^k^2 + 5γ_kL_f^2/μ_g(L_∇_22^2g^2/μ_g^2 + 1)y^k - y_*^k^2 + 2σ_w^2γ_k^2 + 4α_k^2L_z^*^2/γ_kμ_gx_+^k - x^k^2,
where the equality uses the definition of σ_w^2 in (<ref>) and the third inequality uses γ_kμ_g ≤1/4. Taking summation (k from 0 to K) and expectation, we know
∑_k=0^Kγ_kμ_g[z^k-z_*^k^2] ≤ [z^0 - z_*^0^2]+ ∑_k=0^K5γ_kL_f^2/μ_g(L_∇_22^2g^2/μ_g^2 + 1) [y^k - y_*^k^2]
+ ∑_k=0^K2σ_w^2γ_k^2 + ∑_k=0^K4α_k^2L_z^*^2/γ_kμ_g[x_+^k - x^k^2].
This completes the proof of the second inequality in (<ref>) by dividing c_2μ_g on both sides and replacing ∑_k=0^Kα_k[y^k - y_*^k^2] with its upper bound in (<ref>).
Suppose Assumptions <ref> and <ref> hold. We have
[w^k+1|_k] - ∇Φ(x^k)^2≤ 3 ((L_∇ f^2 + L_∇^2 g^2)y^k - y_*^k^2 + L_∇ g^2z^k - z_*^k^2),
Note that we have the following decomposition:
[w^k+1|_k] - ∇Φ(x^k)
= [u_x^k+1|_k] - ∇_1 f(x^k, y_*^k) - ([J^k+1|_k]z^k - ∇_12^2g(x^k,y_*^k)z_*^k)
= ∇_1 f(x^k, y^k) - ∇_1 f(x^k, y_*^k) - ∇_12^2g(x^k, y^k)( z^k - z_*^k) - (∇_12^2g(x^k, y^k) - ∇_12^2g(x^k,y_*^k))z_*^k.
which, together with Cauchy-Schwarz inequality, implies
[w^k+1|_k] - ∇Φ(x^k)^2
≤ 3∇_1 f(x^k, y^k) - ∇_1 f(x^k, y_*^k)^2 + 3∇_12^2g(x^k, y^k)( z^k - z_*^k)^2 + 3(∇_12^2g(x^k, y^k) - ∇_12^2g(x^k,y_*^k))z_*^k^2
≤ 3 ((L_∇ f^2 + L_∇^2 g^2)y^k - y_*^k^2 + L_∇ g^2z^k - z_*^k^2).
§.§.§ Primal Convergence
Suppose Assumptions <ref> and <ref> hold. If
α_k≤min(τ^2/20c_3, c_3/2τ(c_3L_∇Φ + L_∇η_), 1), τ < 1, c_3 ≤1/10,
then in Algorithm <ref> we have
∑_k=0^Kα_k/τ^2[x_+^k-x^k^2]≤ 2/τ[W_0,1] + 3∑_k=0^Kα_k[∇Φ(x^k) - [w^k+1|_k]^2]
+ 1/2∑_k=0^Kα_k[h^k - ∇Φ(x^k)^2] + ∑_k=0^K(α_k^2σ_g,2^2[z^k - z_*^k^2] + α_k^2σ_w^2),
The L_∇Φ-smoothness of Φ(x) and L_∇η_-smoothness of η_ in Lemma <ref> and <ref> imply
Φ(x^k+1) - Φ(x^k)≤α_k∇Φ(x^k), x_+^k - x^k> + L_∇Φ/2x^k+1-x^k^2
and
η_(x^k,h^k,τ) - η_(x^k+1,h^k+1,τ)
≤ -h^k + 1/τ(x^k - x_+^k), x^k - x^k+1> + x_+^k - x^k, h^k - h^k+1>
+ L_∇η_/2(x^k+1-x^k^2 + h^k+1 - h^k^2)
= α_kh^k, x_+^k - x^k> + α_k/τx_+^k - x^k^2 +θ_kh^k, x_+^k - x^k> - θ_kw^k+1, x_+^k - x^k>
+ L_∇η_/2(x^k+1-x^k^2 + h^k+1 - h^k^2)
≤ - θ_k/τx_+^k - x^k^2 - θ_kw^k+1, x_+^k-x^k> + L_∇η_/2(x^k+1-x^k^2 + h^k+1 - h^k^2),
where the first inequality uses L_∇η_-smoothness of ∇η_, and the second inequality uses the optimality condition (<ref>) (with d = x^k). Hence by computing (<ref>) + (<ref>)/ c_3 and taking conditional expectation with respect to _k we know
α_k/τx_+^k-x^k^2
≤ 1/c_3([η_(x^k+1,h^k+1,τ)|_k] - η_(x^k,h^k,τ)) + Φ(x^k) - [Φ(x^k+1)|_k]
+ α_k∇Φ(x^k) - [w^k+1|_k], x_+^k - x^k> + (c_3L_∇Φ + L_∇η_)/2c_3x^k+1-x^k^2
+ L_∇η_/2c_3[h^k+1 - h^k^2|_k].
= W_k,1 - [W_k+1,1|_k] + α_k∇Φ(x^k) - [w^k+1|_k], x_+^k - x^k>
+ (c_3L_∇Φ + L_∇η_)/2c_3x^k+1-x^k^2 + L_∇η_/2c_3[h^k+1 - h^k^2|_k]
≤ W_k,1 - [W_k+1,1|_k] + α_k(τ∇Φ(x^k) - [w^k+1|_k]^2 + 1/4τ x_+^k - x^k^2)
+ α_k/4τx_+^k-x^k^2 + 5/2c_3τ[h^k+1 - h^k^2|_k],
where the second inequality uses Young's inequality and the following inequalities:
α_k^2(c_3L_∇Φ + L_∇η_)/2c_3≤α_k/4τ, L_∇η_<5/τ when (<ref>) holds.
Note that by (<ref>) we know
5/c_3τ^2[h^k+1 - h^k^2]
≤ 10c_3α_k^2/τ^2[h^k - ∇Φ(x^k)^2 + [w^k+1|_k] -∇Φ(x^k)^2]
+ 5c_3α_k^2/τ^2σ_w^2 + 10c_3α_k^2σ_g,2^2/τ^2[z^k - z_*^k^2].
≤ α_k/2[h^k - ∇Φ(x^k)^2] + α_k[[w^k+1|_k] -∇Φ(x^k)^2]
+ α_k^2σ_w^2 + α_k^2σ_g,2^2[z^k - z_*^k^2],
where the second inequality uses (<ref>). Taking summation and expectation on both sides of (<ref>) and using (<ref>), we obtain (<ref>).
§.§.§ Dual Convergence
Suppose Assumptions <ref> and <ref> hold. In Algorithm <ref> we have
∑_k=0^Kα_k[h^k - ∇Φ(x^k)^2]≤ 1/c_3[h^0 - ∇Φ(x^0)^2] + 2∑_k=0^Kα_k[[w^k+1|_k] - ∇Φ(x^k)^2]
+ 2L_∇Φ^2/c_3^2∑_k=0^Kα_k[x_+^k - x^k^2] + 2c_3σ_g,2^2∑_k=0^Kα_k^2[z^k - z_*^k^2] + ∑_k=0^Kc_3α_k^2σ_w^2.
Note that by moving average update of h^k, we have
h^k+1 - ∇Φ(x^k+1)
= (1 - θ_k)h^k + θ_k(w^k+1 - [w^k+1|_k]) + θ_k[w^k+1|_k] - ∇Φ(x^k+1)
= (1 - θ_k)(h^k - ∇Φ(x^k)) + θ_k([w^k+1|_k] - ∇Φ(x^k)) + ∇Φ(x^k) - ∇Φ(x^k+1)
+ θ_k(w^k+1 - [w^k+1|_k])
Hence we know
[h^k+1 - ∇Φ(x^k+1)^2|_k]
= (1 - θ_k)(h^k - ∇Φ(x^k)) + θ_k([w^k+1|_k] - ∇Φ(x^k)) + ∇Φ(x^k) - ∇Φ(x^k+1)^2
+ θ_k^2[w^k+1 - [w^k+1|_k]^2|_k]
≤ (1 - θ_k)h^k - ∇Φ(x^k)^2
+ θ_k[w^k+1|_k] - ∇Φ(x^k) + 1/θ_k(∇Φ(x^k) - ∇Φ(x^k+1))^2 +θ_k^2σ_w,k+1^2
≤ (1 - θ_k)h^k - ∇Φ(x^k)^2
+ 2θ_k[w^k+1|_k] - ∇Φ(x^k)^2 + 2/θ_k∇Φ(x^k) - ∇Φ(x^k+1)^2 + θ_k^2σ_w,k+1^2
≤ (1 - θ_k)h^k - ∇Φ(x^k)^2
+ 2θ_k[w^k+1|_k] - ∇Φ(x^k)^2 + 2α_k^2L_∇Φ^2/θ_kx_+^k - x^k^2 + θ_k^2σ_w,k+1^2,
where the first equality uses the fact that x^k, h^k, x^k+1, are all _k-measurable and are independent of w^k+1 given _k, the first inequality uses the convexity of ·^2 and (<ref>), the second inequality uses Cauchy-Schwarz inequality, the third inequality uses the Lipschitz continuity of ∇Φ in Lemma <ref>, and the update rules of x^k+1. Taking summation, expectation on both sides of (<ref>), dividing c_3 and using (<ref>), we know (<ref>) holds.
§.§.§ Proof of Theorem <ref>
Now we are ready to prove Theorem <ref>. From Lemma <ref> we know it suffices to bound V_k. By definition of V_k in (<ref>), (<ref>) and (<ref>) we have
∑_k=0^Kα_k[V_k] = ∑_k=0^K(α_k/τ^2[x_+^k-x^k^2] + α_k[h^k - ∇Φ(x^k)^2])
≤ 2L_∇Φ^2/c_3^2∑_k=0^Kα_k[x_+^k - x^k^2] +1/2∑_k=0^Kα_k[h^k - ∇Φ(x^k)^2]
+ 5∑_k=0^Kα_k[∇Φ(x^k) - [w^k+1|_k]^2] + (1 + 2c_3)σ_g,2^2∑_k=0^Kα_k^2[z^k - z_*^k^2]
+ 2/τ[W_0,1] + 1/c_3[h^0 - ∇Φ(x^0)^2] + (1+c_3)σ_w^2(∑_k=0^Kα_k^2),
≤ 2L_∇Φ^2/c_3^2∑_k=0^Kα_k[x_+^k - x^k^2] +1/2∑_k=0^Kα_k[h^k - ∇Φ(x^k)^2]
+ 15∑_k=0^Kα_k[(L_∇ f^2 + L_∇^2 g^2)y^k - y_*^k^2 + L_∇ g^2z^k - z_*^k^2] + L_∇ g^2∑_k=0^Kα_k[z^k - z_*^k^2]
+ 2/τ[W_0,1] + 1/c_3[h^0 - ∇Φ(x^0)^2] + (1+c_3)σ_w^2(∑_k=0^Kα_k^2)
≤ C_vx∑_k=0^Kα_k [x_+^k - x^k^2] + C_vh∑_k=0^Kα_k[h^k - ∇Φ(x^k)^2] + C_v,0 + C_v,1(∑_k=0^Kα_k^2),
where we assume
(1 + 2c_3)σ_g,2^2α_k≤ L_∇ g^2,
in the second inequality. The constants are defined as
C_vx = 15(L_∇ f^2 + L_∇^2 g^2)C_yx + 16L_∇ g^2C_zx + 2L_∇Φ^2/c_3^2 , C_vh = 1/2,
C_v,0 = 15(L_∇ f^2 + L_∇^2 g^2)C_y,0 + 16L_∇ g^2C_z,0 + 2/τ[W_0,1] + 1/c_3[h^0 - ∇Φ(x^0)^2],
C_v,1 = 15(L_∇ f^2 + L_∇^2 g^2)C_y,1 + 16L_∇ g^2C_z,1 + (1+c_3)σ_w^2.
Using constants defined in Lemma <ref>, we know
C_vx= (κ^8/c_1^2 + κ^4/c_2^2 + κ^6/c_3^2), C_vh = (1),
C_v,0 = (κ^5/c_1 + κ^2/c_2 + 1/τ), C_v,1 = (c_1κ^5 + c_2κ^2).
Hence we can pick
α_k ≡Θ(1/√(K)), τ = Θ(κ^-4), c_1 = Θ(1), c_2 = Θ(1), c_3 = Θ(1)
so that the conditions ((<ref>), (<ref>) and (<ref>)) in previous lemmas hold. Then we have
1/K∑_k=0^K[V_k] = (κ^5/√(K)).
which, together with Lemma <ref>, proves Theorem <ref>.
§.§ Proof of Theorem <ref>
In this section we present our proof of Theorem <ref>. For simplicity, we summarize the notations that will be used in our proof as follows.
L_∇ f = max_1≤ i≤ nL_∇ f_i, L_∇ g = max_1≤ i≤ nL_∇ g_i, L_∇^2 g = max_1≤ i≤ nL_∇^2 g_i, μ_g = max_1≤ i≤ nμ_g_i,
κ = max(L_∇ g/μ_g, L_∇ f/μ_g), u_x^k+1 = ∑_i=1^nu_x,i^k+1, w^k+1 = ∑_i=1^nλ_i^k(u_x,i^k+1 - J_i^k+1z_i^k),
λ_*(x) = _λ∈Δ_nΦ_μ_λ(x,λ), λ_*^k = λ_*(x^k),
y_*,i^k = y_i^*(x^k) = _y∈^d_y g_i(x^k, y), z_*,i^k = (∇_22^2g_i(x^k, y_*,i^k))^-1∇_2f_i(x^k, y_*,i^k),
Φ_i(x) = f_i(x, y_i^*(x)), Φ^k = (Φ_1(x^k), ..., Φ_n(x^k)),
Ψ(x) = max_λ∈Δ_nΦ_μ_λ(x,λ) = max_λ∈Δ_n(∑_i=1^nλ_iΦ_i(x) -μ_λ/2λ - _n/n^2),
η_X(x, h, τ) = min_d∈ X{h, d - x> + 1/2τd-x^2}, where X = or Δ_n.
In this subsection we suppose Assumptions <ref>, <ref> hold for all f_i, g_i and Assumption <ref> holds. We suppose stepsizes in Algorithm <ref> satisfy
β_k = c_1α_k, γ_k = c_2α_k, θ_k = c_3α_k,
where c_1, c_2, c_3 > 0 are constants to be determined. We will utilize the following merit function in our analysis:
W̃_k = W̃_k,1 + W̃_k,2, W̃_k,1 = W̃_k,1^(1) + W̃_k,1^(2)
W̃_k,1^(1) = Ψ(x^k) - Φ_μ_λ(x^k,λ^k) - 1/c_3η_Δ_n(λ^k,-h_λ^k,τ_λ)
W̃_k,1^(2) = Ψ(x^k) -inf_x∈Ψ(x) - 1/c_3η_(x^k,h_x^k,τ_x)
W̃_k,2 = ∑_i=1^n(1/c_1y_i^k - y_*,i^k^2 + 1/c_2z_i^k - z_*,i^k^2).
By definition of Ψ, η_, η_Δ_n, we can verify that W̃_k,1^(1)≥ 0, W̃_k,1^(2)≥ 0. Moreover, as discussed in Section <ref>, we consider the following optimality measure:
Ṽ_k,1 = 1/τ_x^2x_+^k - x^k^2 + h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2,
Ṽ_k,2 = 1/τ_λ^2λ_+^k - λ^k^2 + h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2,
Ṽ_k = 1/τ_x^2x_+^k - x^k^2 + h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2_Optimality of min problem + 1/τ_λ^2λ_+^k - λ^k^2 + h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2_Optimality of max problem.
The following lemma provides some smoothness of functions that we will use in our proof.
Functions ∇Ψ(·), ∇_1Φ_μ_λ(·, λ), ∇_1Φ(·, λ), ∇_1Φ_μ_λ(x,·), ∇_1Φ(x,·), ∇_2Φ_μ_λ(·, λ),
∇_2Φ_μ_λ(x, ·) are L_∇Ψ, L_∇Φ, L_∇Φ, L_∇_1Φ_μ_λ, L_∇_1Φ_μ_λ, L_∇_2Φ_μ_λ, μ_λ-Lipschitz continuous respectively, with the constants given by
L_∇Ψ = n/μ_λ(L_Φ^2 + b_ΦL_∇Φ) + L_∇Φ, L_∇_1Φ_μ_λ = L_∇_2Φ_μ_λ = √(n)L_Φ
For ∇Ψ we first notice that the nonconvex-strongly-concave problem in (<ref>) can be reformulated as a bilevel problem:
min_x∈Ψ(x) = Φ_μ_λ(x, λ^*(x)) s.t. λ^*(x) = _λ∈Δ_n(-Φ_μ_λ(x,λ)) = μ_λ/2λ - _n/n^2 - ∑_i=1^nλ_iΦ_i(x).
By Lemma <ref> we know
∇Ψ(x) = ∇_1Φ_μ_λ(x,λ^*(x)) - ∇_12^2Φ_μ_λ(x,λ^*(x))(∇_22^2Φ_μ_λ(x,λ^*(x)))^-1∇_2Φ_μ_λ(x,λ^*(x))
= ∑_i=1^nλ_i^*(x)∇Φ_i(x) + 1/μ_λ(∇Φ_1(x), ..., ∇Φ_n(x))[ [ Φ_1(x); ⋮; Φ_n(x) ] - μ_λ(λ^*(x) - _n/n)]
= 1/μ_λ∑_i=1^nΦ_i(x)∇Φ_i(x) + 1/n∑_i=1^n∇Φ_i(x),
from which we know ∇Ψ(·) is L_∇Ψ-Lipschitz continuous since
Φ_i(x)∇Φ_i(x) - Φ_i(x̃)∇Φ_i(x̃)
≤ Φ_i(x)∇Φ_i(x) - Φ_i(x)∇Φ_i(x̃) + Φ_i(x)∇Φ_i(x̃) - Φ_i(x̃)∇Φ_i(x̃)
≤ (L_Φ^2 + b_ΦL_∇Φ)x-x̃.
Note that for any fixed λ∈Δ_n and x, x̃∈, we have
∇_1Φ_μ_λ(x,λ) = ∇_1Φ(x,λ) = ∑_i=1^nλ_i∇Φ_i(x),
and
∇_1Φ_μ_λ(x,λ) - ∇_1Φ_μ_λ(x̃, λ) = ∑_i=1^nλ_i(∇Φ_i(x) - ∇Φ_i(x̃))≤ L_∇Φx - x̃.
Similarly, for any fixed x∈ and λ, λ̃∈Δ_n we know
∇_1Φ_μ_λ(x,λ) - ∇_1Φ_μ_λ(x,λ̃) = ∑_i=1^n(λ_i-λ̃_i)∇Φ_i(x)≤√(n)L_Φλ - λ̃.
(<ref>), (<ref>) and (<ref>) imply ∇_1Φ_μ_λ(·, λ), ∇_1Φ(·, λ) are L_∇Φ-Lipschitz continuous and ∇_1Φ_μ_λ(x,·), ∇_1Φ(x,·) are L_∇_1Φ_μ_λ-Lipschitz continuous. Finally, for ∇_2Φ_μ_λ(x,λ) we have
∇_2Φ_μ_λ(x,λ) = (Φ_1(x), ..., Φ_n(x)) - μ_λ(λ - _n/n),
and thus ∇_2Φ_μ_λ(·,λ), ∇_2Φ_μ_λ(x,·) are √(n)L_Φ, μ_λ-Lipschitz continuous respectively.
Next we present a technical lemma that will be used in analyzing the strongly convex function over a convex compact set.
Suppose f(x) is μ-strongly convex and L-smooth over a convex compact set . For any τ≤1/L define x_+ = Π_(x - τ∇ f(x)) and x_* = _x∈ f(x), we have
(1 - √(1 - τμ))x - x_*≤x-x_+.
By Corollary 2.2.4 in <cit.> we know
1/τx-x_+, x - x_*> ≥1/2τx - x_+^2 + μ/2x - x_*^2 + μ/2x_+ - x_*^2
= (1/2τ + μ/2)x - x_+^2 + μx - x_*^2 - μx - x_+, x - x_*>
which implies
x - x_+x - x_*≥x-x_+, x - x_*> ≥1/2x - x_+^2 + rx - x_*^2
where r = μ/(1/τ + μ) ≤ 1/2. Applying Young's inequality to the left hand side of the above inequality, we know
1 + √(1-2r)/4rx - x_+^2 + r/1 + √(1-2r)x - x_*^2≥1/2x - x_+^2 + rx - x_*^2
which gives
x - x_+≥(1 - √(1-2r))x - x_*≥(1 - √(1 - τμ))x - x_*.
This completes the proof.
The next lemma shows the relation between the stationarity used in Theorem <ref> and our measure of optimality Ṽ_k in (<ref>).
Suppose Assumptions <ref>, <ref> hold for all f_i, g_i and Assumption <ref> holds. If τ_λμ_λ = 1, then in Algorithm <ref> we have
1/τ_x^2x^k - Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k)) ^2≤ 2(1/τ_x^2x_+^k - x^k^2 + h_x^k - ∇_1Φ_μ_λ(x^k, λ^k) ^2),
λ^k - λ_*^k^2≤2/μ_λ^2(1/τ_λ^2λ_+^k - λ^k^2 + h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2),
which together imply
1/τ_x(x^k - Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k)) )^2 + λ^k - λ_*^k^2≤max(2, 2/μ_λ^2)Ṽ_k.
The first inequality follows (<ref>):
x^k - Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k))^2
≤ 2(x_+^k - x^k^2 + Π_(x^k - τ_x h_x^k) - Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k))^2)
≤ 2(x_+^k - x^k^2 + τ_x^2h_x^k - ∇_1Φ_μ_λ(x^k, λ^k) ^2),
where the first inequality uses Cauchy-Schwarz inequality and the second inequality uses the non-expansiveness of projection onto a convex compact set. Recall that
λ_*^k = _λ∈Δ_nΦ_μ_λ(x^k, λ)
which is a minimizer (over the probability simplex) of a μ_λ-smooth and μ_λ-strongly convex function Φ_μ_λ(x^k, ·). Hence we know from Lemma <ref> that
μ_λ^2λ_*^k - λ^k^2
≤ (1 + √(1-τ_λμ_λ))^2/τ_λ^2λ^k - Π_Δ_n(λ^k + τ_λ∇_2Φ_μ_λ(x^k, λ^k))^2
≤ 2(1 + √(1-τ_λμ_λ))^2/τ_λ^2(λ_+^k - λ^k^2 + Π_Δ_n(λ^k + τ_λh_λ^k) - Π_Δ_n(λ^k + τ_λ∇_2Φ_μ_λ(x^k, λ^k))^2)
≤ 2(1 + √(1-τ_λμ_λ))^2/τ_λ^2(λ_+^k - λ^k^2 + τ_λ^2h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2),
where the second inequality uses Cauchy-Schwarz inequality and the third inequality uses non-expansiveness of the projection onto a convex compact set. Setting τ_λμ_λ= 1 completes the proof.
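The projection Π_Δ_n onto the probability simplex used above can be computed exactly by the standard sorting-based procedure; the sketch below is our own illustration and is not part of Algorithm <ref>:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {lam : lam >= 0, sum(lam) = 1} (O(n log n))."""
    u = np.sort(v)[::-1]                         # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Example: lam_+^k = project_simplex(lam + tau_lam * h_lam) for the lambda update.
print(project_simplex(np.array([0.4, 2.0, -0.3])))   # a valid point on the simplex
```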
Suppose Assumptions <ref>, <ref> hold for all f_i, g_i and Assumption <ref> holds. In Algorithm <ref> we have
[w^k+1 - [w^k+1|_k]^2] ≤σ_w,k+1^2
σ_w,k+1^2 := σ_w^2 + 2σ_g,2^2[∑_i=1^nλ_i^kz_i^k - z_*,i^k^2], σ_w^2 = σ_f,1^2 + 2σ_g,2^2L_f^2/μ_g^2
[h_x^k+1 - h_x^k^2]≤σ_h_x, k^2, [h_λ^k+1 - h_λ^k^2]≤σ_h_λ, k^2,
σ_h_x, k^2 := 2θ_k^2[h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2 + [w^k+1|_k] -∇_1Φ_μ_λ(x^k, λ^k) ^2] +θ_k^2σ_w,k+1^2
σ_h_λ, k^2 :=θ_k^2[h_λ^k - ∇_2Φ_μ_λ(x^k,λ^k)^2] + nθ_k^2σ_f,0^2.
We first consider w^k. Note that
w^k+1 - [w^k+1|_k] = ∑_i=1^nλ_i^k(u_x,i^k+1 - [u_x,i^k+1|_k] - (J_i^k+1 - [J_i^k+1|_k])z_i^k).
Hence we know
[w^k+1 - [w^k+1|_k]^2|_k]
= ∑_i=1^n(λ_i^k)^2([u_x,i^k - [u_x,i^k|_k]^2|_k] +
[J_i^k+1 - [J_i^k+1|_k]^2|_k]z_i^k^2 )
≤ ∑_i=1^nλ_i^k(σ_f,1^2 + 2σ_g,2^2z_*,i^k^2 + 2σ_g,2^2z_i^k - z_*,i^k^2)
≤ σ_f,1^2 + 2σ_g,2^2L_f^2/μ_g^2 + 2σ_g,2^2∑_i=1^nλ_i^kz_i^k - z_*,i^k^2,
which proves (<ref>). Next for h_x^k+1 - h_x^k we have
[h_x^k+1 - h_x^k^2|_k]
= θ_k^2[h_x^k - [w^k+1|_k]^2|_k] + θ_k^2[w^k+1 - [w^k+1|_k] ^2|_k]
≤ 2θ_k^2[h_x^k - ∇_1Φ(x^k,λ^k)^2|_k] + 2θ_k^2[[w^k+1|_k] -∇_1Φ(x^k, λ^k) ^2 |_k] + θ_k^2σ_w,k+1^2,
which proves the first inequality of (<ref>). Similarly we have
[h_λ^k+1 - h_λ^k^2|_k]
= θ_k^2[h_λ^k - [s^k+1|_k] + μ_λ(λ^k - _n/n)^2|_k] + θ_k^2[s^k+1 - [s^k+1|_k]^2|_k]
≤ θ_k^2[h_λ^k - ∇_2Φ_μ_λ(x^k,λ^k)^2|_k] + nθ_k^2σ_f,0^2,
which proves the second inequality of (<ref>).
§.§.§ Hypergradient Estimation Error
Suppose Assumptions <ref>, <ref> hold for all f_i, g_i and Assumption <ref> holds. In Algorithm <ref> if the stepsizes satisfy
β_k < 2/(μ_g + L_∇ g), γ_k ≤min(1/(4μ_g), 0.06μ_g/σ_g,2^2)
then we have
∑_k=0^Kα_k[∑_i=1^ny_i^k - y_*,i^k^2] ≤ nC_yx∑_k=0^Kα_k[x_+^k - x^k^2] + ∑_i=1^nC_y_i, 0 + nC_y,1(∑_k=0^Kα_k^2)
∑_k=0^Kα_k[∑_i=1^nz_i^k-z_*,i^k^2] ≤ nC_zx∑_k=0^Kα_k[x_+^k - x^k^2] + ∑_i=1^nC_z_i, 0 + nC_z,1(∑_k=0^Kα_k^2)
where constants C_yx, C_y,1, C_zx, C_z,1 are defined the same as those in Lemma <ref>. C_y_i, 0, C_z_i, 0 are defined as
C_y_i, 0 = 1/c_1μ_g[y_i^0 - y_*,i^0^2],
C_z_i,0 = 5L_f^2/μ_g^2(L_∇_22^2g^2/μ_g^2 + 1)·1/c_1μ_g[y_i^0 - y_*,i^0^2] + 1/c_2μ_g[z_i^0 - z_*,i^0^2].
Note that the proof follows almost the same reasoning as in Lemma <ref>. Since Assumptions <ref> and <ref> hold for all f_i, g_i, by replacing y^k, y_*^k, z^k, z_*^k with y_i^k, y_*,i^k, z_i^k, z_*,i^k respectively, we obtain similar results for each 1≤ i≤ n:
∑_k=0^Kα_k[y_i^k - y_*,i^k^2]≤ C_yx∑_k=0^Kα_k [x_+^k - x^k^2] + C_y_i,0 + C_y,1(∑_k=0^Kα_k^2)
∑_k=0^Kα_k[z_i^k-z_*,i^k^2] ≤ C_zx∑_k=0^Kα_k [x_+^k - x^k^2] + C_z_i,0 + C_z,1(∑_k=0^Kα_k^2).
Taking summation on both sides of (<ref>), we complete the proof.
The next lemma shows how the inequalities above are used in the error analysis of [w^k+1|_k] - ∇_1Φ(x^k, λ^k).
Suppose Assumptions <ref>, <ref> hold for all f_i, g_i and Assumption <ref> holds. We have
[w^k+1|_k] - ∇_1Φ_μ_λ(x^k, λ^k)^2≤ ∑_i=1^n3λ_i^k {(L_∇ f^2 + L_∇^2 g^2)y_i^k - y_*,i^k^2 + L_∇ g^2z_i^k - z_*,i^k^2},
[w^k+1|_k] - ∇Ψ(x^k)^2 ≤ ∑_i=1^n4λ_i^k {(L_∇ f^2 + L_∇^2 g^2)y_i^k - y_*,i^k^2 + L_∇ g^2z_i^k - z_*,i^k^2}
+ 8nL_Φ^2{λ_+^k - λ^k^2 + 1/μ_λ^2h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2}.
Note that we have the following decomposition:
[w^k+1|_k] - ∇_1Φ_μ_λ(x^k, λ^k)
= [u_x^k+1|_k] - ∑_i=1^nλ_i^k∇_1 f_i(x^k, y_*,i^k) - ∑_i=1^nλ_i^k ([J_i^k+1|_k]z_i^k - ∇_12^2g_i(x^k,y_*,i^k)z_*,i^k)
= ∑_i=1^nλ_i^k{∇_1 f_i(x^k, y_i^k) - ∇_1 f_i(x^k, y_*,i^k) - ∇_12^2g_i(x^k, y_i^k)( z_i^k - z_*,i^k)
- [∇_12^2g_i(x^k, y_i^k) - ∇_12^2g_i(x^k,y_*,i^k)]z_*,i^k}.
which, together with Cauchy-Schwarz inequality, implies
[w^k+1|_k] - ∇_1Φ_μ_λ(x^k, λ^k)^2
≤ 3∑_i=1^nλ_i^k (∇_1 f_i(x^k, y_i^k) - ∇_1 f_i(x^k, y_*,i^k))^2 + 3∑_i=1^nλ_i^k∇_12^2g_i(x^k, y_i^k)( z_i^k - z_*,i^k)^2
+ 3∑_i=1^n(∇_12^2g_i(x^k, y_i^k) - ∇_12^2g_i(x^k,y_*,i^k))z_*,i^k^2
≤ ∑_i=1^n3λ_i^k ((L_∇ f^2 + L_∇^2 g^2)y_i^k - y_*,i^k^2 + L_∇ g^2z_i^k - z_*,i^k^2).
Similarly we have
[w^k+1|_k] - ∇Ψ(x^k) = [w^k+1|_k] - ∇_1Φ_μ_λ(x^k, λ^k) + ∇_1Φ_μ_λ(x^k, λ^k) - ∇_1Φ_μ_λ(x^k, λ_*^k).
Applying Cauchy-Schwarz inequality, Assumption <ref> and Lemma <ref> to the above equation and (<ref>), we know
[w^k+1|_k] - ∇Ψ(x^k)^2
≤ 4∑_i=1^nλ_i^k (∇_1 f_i(x^k, y_i^k) - ∇_1 f_i(x^k, y_*,i^k))^2 + 4∑_i=1^nλ_i^k∇_12^2g_i(x^k, y_i^k)( z_i^k - z_*,i^k)^2
+ 4∑_i=1^n(∇_12^2g_i(x^k, y_i^k) - ∇_12^2g_i(x^k,y_*,i^k))z_*,i^k^2 + 4∇_1Φ(x^k,λ^k) - ∇_1Φ(x^k,λ_*^k)^2
≤ ∑_i=1^n4λ_i^k {(L_∇ f^2 + L_∇^2 g^2)y_i^k - y_*,i^k^2 + L_∇ g^2z_i^k - z_*,i^k^2} + 4nL_Φ^2λ^k-λ_*^k^2,
which together with Lemma <ref> completes the proof.
§.§.§ Primal Convergence
Suppose Assumptions <ref>, <ref> hold for all f_i, g_i and Assumption <ref> holds. If
α_k≤min(τ_x^2/20c_3, c_3/2τ_x(c_3L_∇Φ + L_∇η_), c_3/4τ_λ(L_∇η_Δ_n +c_3μ_λ), nτ_λL_Φ^2/L_Ψ + L_∇Φ, 1),
τ_x < 1, τ_λ = 1/μ_λ, c_3 ≤min(1/10, 1/8(μ_λ + 1)^2),
then in Algorithm <ref> we have
∑_k=0^Kα_k/τ_x^2[x_+^k-x^k^2]
≤ 2/τ_x[W̃_0,1^(1)] + 2∑_k=0^Kα_k[[w^k+1|_k] - ∇Ψ(x^k)^2]
+ ∑_k=0^Kα_k[[w^k+1|_k] -∇_1Φ_μ_λ(x^k,λ^k)^2] +1/2∑_k=0^Kα_k[h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2]
+ σ_g,2^2∑_k=0^Kα_k^2[∑_i=1^nλ_i^kz_i^k - z_*,i^k^2] + σ_w^2∑_k=0^Kα_k^2,
∑_k=0^Kα_k/τ_λ^2[λ_+^k - λ^k^2]
≤ 2/τ_λ[W̃_0,1^(2)] + 1/2∑_k=0^Kα_k[h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2] + 4L_f^2∑_k=0^Kα_k[∑_i=1^ny_i^k-y_*,i^k^2]
+ 13nL_Φ^2∑_k=0^Kα_k[x_+^k - x^k^2] + nσ_f,0^2∑_k=0^Kα_k^2.
The proof of the first inequality in (<ref>) is almost the same as that in (<ref>). Note that by replacing Φ, h^k, W_k,1 with Ψ, h_x^k, W̃_k,1, we know
α_k/τ_xx_+^k-x^k^2
≤ W̃_k,1^(1) - [W̃_k+1,1^(1)|_k] + α_k(τ_x∇Ψ(x^k) - [w^k+1|_k]^2 + 1/4τ_x x_+^k - x^k^2)
+ α_k/4τ_xx_+^k-x^k^2 + 5/2c_3τ_x[h_x^k+1 - h_x^k^2|_k],
Similar to (<ref>), from (<ref>) we have that
5/c_3τ_x^2[h_x^k+1 - h_x^k^2]
≤ 10c_3α_k^2/τ_x^2[h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2 + [w^k+1|_k] -∇_1Φ_μ_λ(x^k,λ^k)^2] + 5c_3α_k^2/τ_x^2σ_w^2
+ 10c_3α_k^2σ_g,2^2/τ_x^2[∑_i=1^nλ_i^kz_i^k - z_*,i^k^2].
≤ α_k/2[h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2] + α_k[[w^k+1|_k] -∇_1Φ_μ_λ(x^k,λ^k)^2] + α_k^2σ_w^2
+ α_k^2σ_g,2^2[∑_i=1^nλ_i^kz_i^k - z_*,i^k^2],
where the second inequality uses (<ref>). Taking summation and expectation on both sides of (<ref>) and using (<ref>), we obtain the first inequality in (<ref>).
For the second inequality in (<ref>), the L_∇Ψ-smoothness of Ψ(x) and L_∇η_-smoothness of η_ in Lemma <ref> imply
Ψ(x^k+1) - Ψ(x^k)≤α_k∇Ψ(x^k), x_+^k - x^k> + L_∇Ψ/2x^k+1-x^k^2,
and
η_Δ_n(λ^k,-h_λ^k,τ_λ) - η_Δ_n(λ^k+1,-h_λ^k+1,τ_λ)
≤ h_λ^k + 1/τ_λ(λ^k - λ_+^k), λ^k - λ^k+1> + λ_+^k - λ^k, -h_λ^k + h_λ^k+1> + L_∇η_Δ_n/2(λ^k+1-λ^k^2 + -h_λ^k+1 + h_λ^k^2)
= α_k-h_λ^k, λ_+^k - λ^k> + α_k/τ_λλ_+^k - λ^k^2 + θ_kλ_+^k - λ^k, s^k+1 -h_λ^k - μ_λ(λ^k - _n/n)>
+ L_∇η_Δ_n/2(λ^k+1-λ^k^2 + h_λ^k+1 - h_λ^k^2)
≤ - θ_k/τ_λλ_+^k - λ^k^2 + θ_ks^k+1 - μ_λ(λ^k - _n/n), λ_+^k-λ^k> + L_∇η_Δ_n/2(λ^k+1-λ^k^2 + h_λ^k+1 - h_λ^k^2).
We also have
Φ_μ_λ(x^k,λ^k) - Φ_μ_λ(x^k+1, λ^k+1)
= ∑_i=1^n(λ_i^kΦ_i(x^k) - λ_i^k+1Φ_i(x^k+1)) + μ_λ/2λ^k+1 - _n/n^2 - μ_λ/2λ^k - _n/n^2
= λ^k, Φ^k> - λ^k+1, Φ^k+1> + μ_λ/2(λ^k+1 - λ^k + λ^k - _n/n^2 - λ^k - _n/n^2)
= λ^k - λ^k+1, Φ^k> + λ^k+1, Φ^k - Φ^k+1> + μ_λα_kλ^k - _n/n, λ_+^k-λ^k> + μ_λ/2λ^k+1-λ^k^2
= α_kλ^k - λ_+^k, [s^k+1|_k] - μ_λ(λ^k - _n/n)> + α_kλ^k - λ_+^k, Φ^k - [s^k+1|_k]>
+ μ_λ/2λ^k+1-λ^k^2 + λ^k+1, Φ^k - Φ^k+1>
≤ α_kλ^k - λ_+^k, [s^k+1|_k] - μ_λ(λ^k - _n/n)> + α_kλ^k - λ_+^k, Φ^k - [s^k+1|_k]>
+ μ_λ/2λ^k+1-λ^k^2-α_k∇_1Φ(x^k, λ^k), x_+^k - x^k> + √(n)L_Φλ^k+1 - λ^kx_+^k - x^k
+ L_∇Φ/2x^k+1 - x^k^2.
where the inequality uses Lemma <ref> and (c) in Assumption <ref> to obtain
λ^k+1, Φ^k - Φ^k+1>
= ∑_i=1^nλ_i^k+1(Φ_i(x^k) - Φ_i(x^k+1))
≤ ∑_i=1^nλ_i^k+1(∇Φ_i(x^k), x^k - x^k+1> + L_∇Φ/2x^k - x^k+1^2)
= -α_k∇_1Φ(x^k, λ^k+1), x_+^k - x^k> + L_∇Φ/2x^k+1 - x^k^2
≤ -α_k∇_1Φ(x^k, λ^k), x_+^k - x^k> + √(n)L_Φλ^k+1 - λ^kx_+^k - x^k + L_∇Φ/2x^k+1 - x^k^2.
Taking conditional expectation with respect to _k on (<ref>) + (<ref>) / c_3 + (<ref>), we know
α_k/τ_λλ_+^k - λ^k^2
≤ W̃_k,1^(2) - [W̃_k+1,1^(2)|_k] + α_k∇Ψ(x^k) - ∇_1Φ(x^k, λ^k), x_+^k - x^k>
+ α_kλ^k - λ_+^k, Φ^k - [s^k+1|_k]> + (L_∇Ψ + L_∇Φ)/2x^k+1 - x^k^2
+ (L_∇η_Δ_n +c_3μ_λ)/2c_3λ^k+1-λ^k^2 + √(n)L_Φλ^k+1 - λ^kx_+^k - x^k + L_∇η_Δ_n/2c_3[h_λ^k+1 - h_λ^k^2|_k]
≤ W̃_k,1^(2) - [W̃_k+1,1^(2)|_k] + α_k√(n)L_Φλ^k-λ_*^kx_+^k - x^k
+ α_kL_fλ_+^k - λ^k(∑_i=1^ny_i^k-y_*,i^k^2)^1/2 + α_k√(n)L_Φλ_+^k - λ^kx_+^k - x^k
+ α_k^2(L_∇Ψ + L_∇Φ)/2x_+^k - x^k^2 + α_k^2(L_∇η_Δ_n +c_3μ_λ)/2c_3λ_+^k-λ^k^2 + L_∇η_Δ_n/2c_3[h_λ^k+1 - h_λ^k^2|_k]
≤ W̃_k,1^(2) - [W̃_k+1,1^(2)|_k] + α_k(1/16τ_λλ^k-λ_*^k^2 + 4nτ_λL_Φ^2x_+^k - x^k^2 )
+ α_k(1/8τ_λλ_+^k - λ^k^2 + 2τ_λL_f^2∑_i=1^ny_i^k-y_*,i^k^2) + α_k(1/8τ_λλ_+^k - λ^k^2 + 2nτ_λL_Φ^2x_+^k - x^k^2)
+ α_knτ_λL_Φ^2/2x_+^k - x^k^2 + α_k/8τ_λλ_+^k-λ^k^2
+ L_∇η_Δ_n/2c_3[h_λ^k+1 - h_λ^k^2|_k],
where the second inequality uses Lemma <ref>, and the third inequality uses Young's inequality and the conditions on α_k (see (<ref>)):
α_k/8τ_λ - α_k^2(L_∇η_Δ_n +c_3μ_λ)/2c_3≥ 0, α_k^2(L_∇Ψ + L_∇Φ)≤α_knτ_λL_Φ^2.
Recall that in Lemma <ref> we have
λ^k-λ_*^k^2≤ 2λ_+^k - λ^k^2 + 2/μ_λ^2h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2,
and by (<ref>) we know
L_∇η_Δ_n/c_3τ_λ[h_λ^k+1 - h_λ^k^2] ≤ 2c_3α_k^2(μ_λ + 1)^2([h_λ^k - ∇_2Φ_μ_λ(x^k,λ^k)^2] + nσ_f,0^2)
≤α_k/4[h_λ^k - ∇_2Φ_μ_λ(x^k,λ^k)^2] + nα_k^2σ_f,0^2.
where the second inequality uses
2c_3(μ_λ + 1)^2≤1/4, α_k≤ 1
in (<ref>). Combining (<ref>), (<ref>), and (<ref>), we have
α_k/τ_λ^2[λ_+^k - λ^k^2]
≤ 2/τ_λ[W̃_k,1^(2) - W̃_k+1,1^(2)] + α_k/2[h_λ^k - ∇_2Φ_μ_λ(x^k,λ^k)^2]
+ 4α_k L_f^2[∑_i=1^ny_i^k-y_*,i^k^2] + 13α_kn L_Φ^2[x_+^k - x^k^2] + nα_k^2σ_f,0^2,
which implies the second inequality in (<ref>) by taking summation.
§.§.§ Dual Convergence
Suppose Assumptions <ref>, <ref> hold for all f_i, g_i and Assumption <ref> holds. In Algorithm <ref> we have
∑_k=0^Kα_k[h_x^k - ∇_1Φ_μ_λ(x^k, λ^k)^2]
≤ 1/c_3[h_x^0 - ∇_1Φ_μ_λ(x^0, λ^0)^2] + 3∑_k=0^Kα_k[[w^k+1|_k] - ∇_1Φ_μ_λ(x^k, λ^k)^2]
+ 3L_∇Φ^2/c_3^2∑_k=0^Kα_k[x_+^k - x^k^2] + 3nL_Φ^2/c_3^2∑_k=0^Kα_k[λ_+^k - λ^k^2]
+ 2c_3σ_g,2^2∑_k=0^Kα_k^2[∑_i=1^nλ_i^kz_i^k - z_*,i^k^2] + c_3σ_w^2∑_k=0^Kα_k^2,
∑_k=0^Kα_k[h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2]
≤ 1/c_3[h_λ^0 - ∇_2Φ_μ_λ(x^0, λ^0)^2] + 3α_kL_f^2∑_i=1^n[y_i^k - y_*,i^k^2]
+ 3nL_Φ^2/c_3^2∑_k=0^Kα_k[x_+^k - x^k^2] + 3μ_λ^2/c_3^2∑_k=0^Kα_k[λ_+^k - λ^k^2] + nc_3σ_f,0^2∑_k=0^Kα_k^2.
The proof is similar to that of Lemma <ref>, except that we now have another λ^k to handle. Since ∇_1Φ(x,λ) = ∇_1Φ_μ_λ(x,λ) for all (x,λ) (see (<ref>)), for simplicity we omit the subscript μ_λ in ∇_1Φ_μ_λ(x,λ) in this proof. Note that by moving average update of h_x^k, we have
h_x^k+1 - ∇_1Φ(x^k+1, λ^k+1)
= (1 - θ_k)h_x^k + θ_k(w^k+1 - [w^k+1|_k]) + θ_k[w^k+1|_k] - ∇_1Φ(x^k+1, λ^k+1)
= (1 - θ_k)(h_x^k - ∇_1Φ(x^k, λ^k)) + θ_k([w^k+1|_k] - ∇_1Φ(x^k, λ^k))
+ ∇_1Φ(x^k, λ^k) - ∇_1Φ(x^k+1, λ^k+1) + θ_k(w^k+1 - [w^k+1|_k])
Hence we know
[h_x^k+1 - ∇_1Φ(x^k+1, λ^k+1)^2|_k]
= (1 - θ_k)(h_x^k - ∇_1Φ(x^k, λ^k)) + θ_k([w^k+1|_k] - ∇_1Φ(x^k, λ^k)) .
+ . ∇_1Φ(x^k, λ^k) - ∇_1Φ(x^k+1, λ^k+1)^2+ θ_k^2[w^k+1 - [w^k+1|_k]^2|_k]
≤ (1 - θ_k)h_x^k - ∇_1Φ(x^k, λ^k)^2 +θ_k^2σ_w,k+1^2
+ θ_k([w^k+1|_k] - ∇_1Φ(x^k, λ^k)) + 1/θ_k(∇_1Φ(x^k, λ^k) - ∇_1Φ(x^k+1, λ^k+1))^2
≤ (1 - θ_k)h_x^k - ∇_1Φ(x^k, λ^k)^2 + 3θ_k[w^k+1|_k] - ∇_1Φ(x^k, λ^k)^2 +θ_k^2σ_w,k+1^2
+ 3/θ_k∇_1Φ(x^k, λ^k) - ∇_1Φ(x^k+1, λ^k)^2 + 3/θ_k∇_1Φ(x^k+1, λ^k) - ∇_1Φ(x^k+1, λ^k+1)^2
≤ (1 - θ_k)h_x^k - ∇_1Φ(x^k, λ^k)^2 + 3θ_k[w^k+1|_k] - ∇_1Φ(x^k, λ^k)^2
+ 3α_k^2/θ_k(L_∇Φ^2x_+^k - x^k^2 + nL_Φ^2λ_+^k - λ^k^2) + θ_k^2σ_w,k+1^2,
where the first equality uses the fact that x^k, λ^k, h_x^k, x^k+1, λ^k+1 are all _k-measurable and are independent of w^k+1 given _k, the first inequality uses the convexity of ·^2 and (<ref>), the second inequality uses Cauchy-Schwarz inequality, the third inequality uses the Lipschitz continuity of ∇_1Φ in Lemma <ref>, and the update rules of x^k+1 and λ^k+1. Taking summation, expectation on both sides of (<ref>), dividing c_3, and applying (<ref>), we know the first inequality in (<ref>) holds.
Similarly we have
h_λ^k+1 - ∇_2Φ_μ_λ(x^k+1, λ^k+1)
= (1 - θ_k)h_λ^k + θ_k(s^k+1 - μ_λλ^k + μ_λ_n/n) - ∇_2Φ_μ_λ(x^k+1, λ^k+1)
= (1 - θ_k)(h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)) + θ([s^k+1|_k] - ∇_2Φ(x^k,λ^k))
+ ∇_2Φ_μ_λ(x^k, λ^k) - ∇_2Φ_μ_λ(x^k+1, λ^k+1) + θ_k(s^k+1 - [s^k+1|_k]).
where the second equality uses ∇_2Φ_μ_λ(x^k, λ^k) = ∇_2Φ(x^k, λ^k) - μ_λ(λ^k - _n/n). Hence we know
[h_λ^k+1 - ∇_2Φ_μ_λ(x^k+1, λ^k+1)^2|_k]
= (1 - θ_k)(h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)) + θ([s^k+1|_k] - ∇_2Φ(x^k,λ^k)) .
+ . ∇_2Φ_μ_λ(x^k, λ^k) - ∇_2Φ_μ_λ(x^k+1, λ^k+1)^2 + θ_k^2[s^k+1 - [s^k+1|_k]^2|_k]
≤ (1 - θ_k)h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2 + nθ_k^2σ_f,0^2
+ θ_k[s^k+1|_k] - ∇_2Φ(x^k,λ^k) + 1/θ_k(∇_2Φ_μ_λ(x^k, λ^k) - ∇_2Φ_μ_λ(x^k+1, λ^k+1))^2
≤ (1 - θ_k)h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2 + 3θ_k[s^k+1|_k] - ∇_2Φ(x^k,λ^k)^2 + nθ_k^2σ_f,0^2
+ 3/θ_k∇_2Φ_μ_λ(x^k, λ^k) - ∇_2Φ_μ_λ(x^k+1, λ^k)^2 + 3/θ_k∇_2Φ_μ_λ(x^k+1, λ^k) - ∇_2Φ_μ_λ(x^k+1, λ^k+1)^2
≤ (1 - θ_k)h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2 + 3θ_kL_f^2∑_i=1^ny_i^k - y_*,i^k^2
+ 3α_k^2/θ_k(nL_Φ^2x_+^k - x^k^2 + μ_λ^2λ_+^k - λ^k^2) + nθ_k^2σ_f,0^2,
where the third inequality uses Lemma <ref> and the fact that
[s^k+1|_k] = (f_1(x^k, y_1^k), ..., f_n(x^k, y_n^k)), ∇_2Φ(x^k,λ^k) = (f_1(x^k, y_*,1^k), ..., f_n(x^k, y_*,n^k))
Taking summation, expectation on both sides of (<ref>), and dividing c_3, we know the second inequality in (<ref>) holds.
§.§.§ Proof of Theorem <ref>
Now we are ready to present our main convergence results. Note that by Lemmas <ref> and <ref>, for Ṽ_k,1 we have
∑_k=0^Kα_k[Ṽ_k,1] = ∑_k=0^Kα_k/τ_x^2[x_+^k-x^k^2] + ∑_k=0^Kα_k[h_x^k - ∇_1Φ_μ_λ(x^k, λ^k)^2]
≤ 3L_∇Φ^2/c_3^2∑_k=0^Kα_k[x_+^k - x^k^2] + 1/2∑_k=0^Kα_k[h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2]
+ 2/τ_x[W̃_0,1^(1)] + 1/c_3[h_x^0 - ∇_1Φ_μ_λ(x^0, λ^0)^2] + 2∑_k=0^Kα_k[[w^k+1|_k] - ∇Ψ(x^k)^2]
+ 4∑_k=0^Kα_k[[w^k+1|_k] -∇_1Φ_μ_λ(x^k,λ^k)^2] + 3nL_Φ^2/c_3^2∑_k=0^Kα_k[λ_+^k - λ^k^2]
+ (1+2c_3)σ_g,2^2∑_k=0^Kα_k^2[∑_i=1^nλ_i^kz_i^k - z_*,i^k^2] + (1+c_3)σ_w^2(∑_k=0^Kα_k^2).
By Lemma <ref> we know
4∑_k=0^Kα_k[[w^k+1|_k] -∇_1Φ_μ_λ(x^k,λ^k)^2] + 2∑_k=0^Kα_k[[w^k+1|_k] - ∇Ψ(x^k)^2]
≤ ∑_k=0^Kα_k[∑_i=1^n20((L_∇ f^2 + L_∇^2 g^2)y_i^k - y_*,i^k^2 + L_∇ g^2z_i^k - z_*,i^k^2)]
+ ∑_k=0^K16nL_Φ^2α_k[λ_+^k - λ^k^2 + 1/μ_λ^2h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2].
Choosing
(1+2c_3)σ_g,2^2α_k≤ L_∇ g^2
in (<ref>), and using (<ref>), we know
∑_k=0^Kα_k[Ṽ_k,1]≤ C_v_1, xτ_x^2∑_k=0^Kα_k/τ_x^2[x_+^k - x^k^2] + C_v_1, h_x∑_k=0^Kα_k[h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2]
+ C_v_1, λτ_λ^2∑_k=0^Kα_k/τ_λ^2[λ_+^k - λ^k^2] + C_v_1, h_λ∑_k=0^Kα_k[h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2]
+ C_v_1, 0 + C_v_1, 1(∑_k=0^Kα_k^2),
where the constants are defined as
C_v_1, x = 20n(L_∇ f^2 + L_∇^2 g^2)C_yx + 21nL_∇ g^2C_zx + 3L_∇Φ^2/c_3^2, C_v_1, h_x = 1/2,
C_v_1, λ = (16+3/c_3^2)nL_Φ^2, C_v_1, h_λ = 16nL_Φ^2/μ_λ^2,
C_v_1, 0 = 20(L_∇ f^2 + L_∇^2 g^2)(∑_i=1^nC_y_i, 0) + 21L_∇ g^2(∑_i=1^nC_z_i, 0) + 2/τ_x[W̃_0,1^(1)]
+ 1/c_3[h_x^0 - ∇_1Φ_μ_λ(x^0, λ^0)^2],
C_v_1, 1 = 20n(L_∇ f^2 + L_∇^2 g^2)C_y,1 + 21nL_∇ g^2C_z,1.
For Ṽ_k,2 we have
∑_k=0^Kα_k[Ṽ_k,2] = ∑_k=0^Kα_k/τ_λ^2[λ_+^k - λ^k^2] + ∑_k=0^Kα_k[h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2]
≤ 3μ_λ^2/c_3^2∑_k=0^Kα_k[λ_+^k - λ^k^2] + 1/2∑_k=0^Kα_k[h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2]
+ 2/τ_λ[W̃_0,1^(2)] + 1/c_3[h_λ^0 - ∇_2Φ_μ_λ(x^0, λ^0)^2] + 7L_f^2∑_k=0^Kα_k[∑_i=1^ny_i^k-y_*,i^k^2]
+ (13 + 3/c_3^2)nL_Φ^2∑_k=0^Kα_k[x_+^k - x^k^2] + n(1 + c_3)σ_f,0^2(∑_k=0^Kα_k^2),
which implies
∑_k=0^Kα_k[Ṽ_k,2] ≤ C_v_2, xτ_x^2∑_k=0^Kα_k/τ_x^2[x_+^k - x^k^2] + C_v_2, h_x∑_k=0^Kα_k[h_x^k - ∇_1Φ_μ_λ(x^k,λ^k)^2]
+ C_v_2, λτ_λ^2∑_k=0^Kα_k/τ_λ^2[λ_+^k - λ^k^2] + C_v_2, h_λ∑_k=0^Kα_k[h_λ^k - ∇_2Φ_μ_λ(x^k, λ^k)^2]
+ C_v_2, 0 + C_v_2, 1(∑_k=0^Kα_k^2)
where the constants are defined as
C_v_2, x = 7nL_f^2C_yx + (13 + 3/c_3^2)nL_Φ^2, C_v_2, h_x = 0,
C_v_2, λ = 3μ_λ^2/c_3^2, C_v_2, h_λ = 1/2,
C_v_2, 0 = 7L_f^2(∑_i=1^nC_y_i, 0) + 2/τ_λ[W̃_0,1^(2)] + 1/c_3[h_λ^0 - ∇_2Φ_μ_λ(x^0, λ^0)^2]
C_v_2, 1 = 7nL_f^2C_y,1 + n(1+c_3)σ_f,0^2.
According to the definition of the constants in Lemmas <ref> and <ref>, we could obtain (for simplicity we omit the dependency on κ here)
C_v_1, x = (n/c_1^2 + n/c_2^2 + 1/c_3^2), C_v_1, h_x = 1/2 = (1), C_v_1,λ = (n + n/c_3^2), C_v_1, h_λ = (n/μ_λ^2),
C_v_1, 0 = (n/c_1 + n/c_2 + 1/c_3 + 1/τ_x + 1/c_3τ_x), C_v_1, 1 = (nc_1 + nc_2),
C_v_2, x = (n/c_1^2 + n + n/c_3^2), C_v_2, h_x = 0, C_v_2,λ = (1/c_3^2), C_v_2, h_λ = 1/2 = (1),
C_v_2, 0 = (n/c_1 + 1/c_3), C_v_2, 1 = (nc_1 + n + nc_3).
Hence we can pick α_k, c_1, c_2, c_3, τ_x, τ_λ such that
α_k ≡Θ(1/√(nK)), c_1 = c_2 = √(n), c_3 = Θ(1), τ_x = Θ(μ_λ/n), τ_λ = 1/μ_λ
which leads to
C_v_1, xτ_x^2 ≤1/2, C_v_2, xC_v_1,λτ_x^2τ_λ^2≤1/8, C_v_2, λτ_λ^2≤1/2,
and the conditions ((<ref>), (<ref>), and (<ref>)) in previous lemmas hold. Moreover, using the above conditions in (<ref>) and (<ref>), we can get
∑_k=0^Kα_k[Ṽ_k,1]≤1/2∑_k=0^Kα_k[Ṽ_k,1] + C_v_1,λτ_λ^2∑_k=0^Kα_k[Ṽ_k,2] + (n)
∑_k=0^Kα_k[Ṽ_k,2]≤1/2∑_k=0^Kα_k[Ṽ_k,2] + C_v_2,xτ_x^2∑_k=0^Kα_k[Ṽ_k,1] + (√(n)).
Combining the above two inequalities, we have
1/K∑_k=0^K[Ṽ_k,1] = (n^2/μ_λ^2√(K)), 1/K∑_k=0^K[Ṽ_k,2] = (n/√(K)),
which completes the proof of Theorem <ref> since we have
1/τ_x(x^k - Π_(x^k - τ_x∇Ψ_μ_λ(x^k)) )^2
≤ 2/τ_x^2x^k - Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k)) ^2 + 2/τ_x^2Π_(x^k - τ_x∇Ψ_μ_λ(x^k)) - Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k)) ^2
≤ 2/τ_x^2x^k - Π_(x^k - τ_x∇_1Φ_μ_λ(x^k, λ^k)) ^2 + 2nL_Φ^2λ^k - λ_*^k^2
≤ 4Ṽ_k,1 + 4nL_Φ^2/μ_λ^2Ṽ_k,2 = (n^2/μ_λ^2√(K))
where the second inequality uses non-expansiveness of projection operator and √(n)L_Φ-Lipschitz continuity of ∇_1Φ_μ_λ(x,·) in Lemma <ref>. Note that we have n^2 in the numerator since we explicitly write out the Lipschitz constant L_∇_1Φ_μ_λ.
§ DISCUSSIONS ON THE PRIOR WORK <CIT.>
In this section, we discuss several issues in the current form of <cit.>, which introduces a Multi-Objective Robust Bilevel Two-timescale optimization algorithm ().
The primary issue in the current analysis arises from the ambiguity and inconsistency regarding the expectation and filtration. As a consequence, the current form of the paper is unable to demonstrate [max_i∈ [n]y_i^k - y_i^*(x^(k-1))^2 ]≤(√(n)K^-2/5) as claimed in Theorem 1 (10b) of <cit.>, and the subsequent arguments are therefore incorrect. We discuss some of the mistakes made in <cit.> as follows.
We start by looking at Lemma 8 (informal) and Lemma 14 (formal) in <cit.> that characterize the upper bound of the ^(k+1) - ^(k) where ^(k) = [∑_i=1^nλ_i^(k)ℓ_i(x^(k))]. Here, the function ℓ_i is the function Φ_i(x) in our notation. The paper incorrectly asserted that
^k = ∑_i=1^nλ_i^(k)[ℓ_i(x^(k))].
To see why, let _k denote the sigma algebra generated by all iterates (x, y, λ) with superscripts not greater than k. It is important to note that both {λ_i^(k)} and x^(k) are random objects given the filtration
_k. The ambiguity lies in the lack of clarity regarding the randomness over which the expectation operation is performed. In fact, we can rewrite the claim of Lemma 14 in <cit.> without hiding the randomness. Let ^(k) = ∑_i=1^nλ_i^(k)ℓ_i(x^(k)). Then, we have
^(k+1) - ^(k)≤ (α) (∑_i=1^nλ_i^k y_i^k+1 - y_i^*(x^(k)))^2_≤max_i∈[n]y_i^k+1 - y_i^*(x^(k))^2
- 1/αx^k+1 - x^k^2 + (γ n) + (α) h_x^(k) - [h_x^(k)|_k]^2,
where α, β, γ are step sizes for x, y, and λ respectively. We hide the dependency on constants in their assumptions for simplicity. In addition, we want to emphasize that, unlike our notation, h_x^(k) and h_λ^(k) are stochastic gradients at step k. Therefore, h_x^(k) and h_λ^(k) are random objects given _k. By taking expectations over all the randomness above, we can see that Lemma 14 in <cit.> is incorrect because it states the bound in the form of max 𝔼[·] instead of 𝔼[max(·)]. Therefore, the subsequent arguments regarding the convergence of x, y, λ are incorrect, at least in the current form.
Regardless of the error, one may be able to proceed with the proof by utilizing Eq.(<ref>) since our ultimate goal is to demonstrate the convergence of [max_i∈ [n]y_i^k - y_i^*(x^(k-1))^2]. One possible direction is to utilize the basic recursive inequality of max_i∈ [n]y_i^k+1 - y_i^*(x^(k)) ^2. Observe that for each i∈[n], we can establish the following inequality similar to Lemma 13 in <cit.> without hiding the randomness:
y_i^(k+1) - y_i^*(x^(k))^2 ≤(1-(μ_gβ)) y_i^(k) - y_i^*(x^(k-1))^2 + (1/μ_gβ)x^k - x^k-1^2
+ (β^2) h_y,i^(k) - [h_y,i^(k)|_k]^2 + (β) y_i^(k) - y_i^*(x^(k-1)),h_y,i^(k) - [h_y,i^(k)|_k] >
However, the order of taking the expectation over the randomness and the maximum over i∈[n] adds complexity to the problem. The last inner-product term can only be zero when first taking the conditional expectation with respect to _k. When applying Young's inequality to bound this term, it inevitably introduces terms such as (β) h_y,i^(k) - [h_y,i^(k)|_k]^2 or (1) y_i^(k) - y_i^*(x^(k-1))^2, which make it challenging to proceed further with the convergence analysis.
Finally, we remark on the choice of the stationarity condition used in <cit.>. Although the algorithmic aspect of <cit.> is motivated by <cit.>, the notion of stationarity for λ in <cit.> is different from that in <cit.>. Under the notion of stationarity in <cit.> (Definition 3.7), Φ_1/2ℓ(·) is the Moreau envelope of Φ(·), which is defined after taking the max over y (i.e., λ in our notation) in Definition 3.5 in <cit.>, and a point x is ϵ-stationary when ∇Φ_1/2ℓ(x)≤ϵ. It is unclear whether (10a) and (10c) in <cit.> would imply similar convergence results under the notion of stationarity in Definition 3.7 in <cit.>.
entry_id: http://arxiv.org/abs/2306.02741v1
published: 20230605094151
title: ZIGNeRF: Zero-shot 3D Scene Representation with Invertible Generative Neural Radiance Fields
authors: ["Kanghyeok Ko", "Minhyeok Lee"]
primary_category: cs.CV
categories: ["cs.CV"]
Generative Neural Radiance Fields (NeRFs) have demonstrated remarkable proficiency in synthesizing multi-view images by learning the distribution of a set of unposed images. Despite the aptitude of existing generative NeRFs in generating 3D-consistent high-quality random samples within data distribution, the creation of a 3D representation of a singular input image remains a formidable challenge. In this manuscript, we introduce ZIGNeRF, an innovative model that executes zero-shot Generative Adversarial Network (GAN) inversion for the generation of multi-view images from a single out-of-domain image. The model is underpinned by a novel inverter that maps out-of-domain images into the latent code of the generator manifold. Notably, ZIGNeRF is capable of disentangling the object from the background and executing 3D operations such as 360-degree rotation or depth and horizontal translation. The efficacy of our model is validated using multiple real-image datasets: Cats, AFHQ, CelebA, CelebA-HQ, and CompCars.
§ INTRODUCTION
The remarkable success of generative adversarial networks (GANs) <cit.> has spurred significant advancements in realistic image generation with high quality. Particularly, following the emergence of StyleGAN <cit.>, numerous 2D-based generative adversarial network models have benefited from a deeper understanding of latent spaces <cit.>. Consequently, various computer vision tasks, such as conditional image generation and style transfer <cit.>, have shown substantial progress. However, 2D-based image generation models are constrained in their ability to generate novel view images due to their limited understanding of the underlying 3D geometry of real-world scenes.
To overcome this challenge, several studies have adopted the neural radiance field (NeRF) <cit.> approach, which encodes a scene into a multi-layer perceptron (MLP) to provide 3D rendering. Although conventional NeRF <cit.> has successfully facilitated the development of 3D-aware models and reduced computational costs in novel view synthesis tasks, it remains impractical to train a model overfitted to a single scene with multi-view images <cit.>. Consequently, various studies have extended NeRF by integrating it with generative models, i.e., generative NeRF. Generative NeRF <cit.> models can be trained on unposed real-world images, whereas conventional NeRF necessitates multiple images of a single scene <cit.>. Moreover, generative NeRF has been employed for obtaining conditional samples through techniques such as class label information <cit.> or text encoding <cit.>.
Despite the convenience and intuitiveness of these approaches, they possess limitations in image editing and generating 3D representations of specific inputs, such as out-of-domain images or real-world images. To enable more practical applications, generative NeRF models have also incorporated GAN inversion techniques <cit.> for the 3D representation of particular input images, including out-of-distribution or real-world images. However, previous studies have faced a constraint that necessitates fine-tuning on pre-trained models for specific images <cit.>. This requirement hinders the application of these models to numerous real samples simultaneously and renders the process time-inefficient, as it demands extensive fine-tuning.
In this study, we propose a novel zero-shot methodology for the generation of multi-view images, derived from input images unseen during the training process. This approach leverages a 3D-aware GAN inversion technique. Notably, our model proffers 3D-consistent renderings of unposed real images during inference, eliminating the need for supplementary fine-tuning.
Our architectural design bifurcates into two distinct components: the 3D-generation module and the 3D-aware GAN inversion module. The former is founded on the principles of GIRAFFE <cit.>, which successfully amalgamates the compositional attributes of 3D real-world scenes into a generative framework. To enhance the precision of 3D real-world reconstruction and improve image quality, we introduce modifications to the GIRAFFE module, specifically in the decoder and neural renderer. The 3D-aware GAN inverter, on the other hand, is trained with images synthesized from the generator. This strategic approach enables the inverter to accurately map the input image onto the generator's manifold, regardless of the objects' pose. Example results of our model are displayed in Fig. <ref>.
We subject our model to rigorous evaluation, utilizing five diverse datasets: Cats, CelebA, CelebA-HQ, AFHQ, and CompCars. Additionally, we demonstrate the model's robustness by inputting FFHQ images into a model trained on CelebA-HQ.
The primary contributions of this work are as follows:
* We present ZIGNeRF, a pioneering approach that delivers a 3D-consistent representation of real-world images via zero-shot estimation of latent codes. To our knowledge, this is the first instance of such an approach in the field.
* ZIGNeRF exhibits robust 3D feature extraction capabilities and remarkable controllability with respect to input images. Our model can perform 3D operations, such as a full 360-degree rotation of real-world car images, a feat not fully achieved by many existing generative NeRF models.
§ RELATED WORK
§.§ Neural Radiance Field (NeRF)
NeRF is an influential method for synthesizing photorealistic 3D scenes from 2D images. It represents a 3D scene as a continuous function using a multi-layer perceptron (MLP) that maps spatial coordinates to RGB and density values, and then generates novel view images through conventional volume rendering techniques. Consequently, NeRF significantly reduces computational costs compared to existing voxel-based 3D scene representation models <cit.>. However, the training method of NeRF, which overfits a single model to a single scene, considerably restricts its applicability and necessitates multiple structured training images, including camera viewpoints <cit.>.
§.§ Generative NeRF
Generative NeRFs optimize networks to learn the mapping from latent code to 3D scene representation, given a set of unposed 2D image collections rather than using multi-view supervised images with ground truth camera poses. Early attempts, such as GRAF <cit.> and pi-GAN <cit.>, demonstrated promising results and established the foundation for further research in the generative NeRF domain. Recent works on generative NeRF have concentrated on generating high-resolution 3D-consistent images. The recently proposed StyleNeRF <cit.> successfully generates high-resolution images by integrating NeRF into a style-based generator, while EG3D <cit.> exhibits impressive results with a hybrid architecture that improves computational efficiency and image quality.
However, real-life applications frequently necessitate conditional samples that exhibit the desired attribute rather than random samples in data distribution. We adopt GAN inversion as a conditional method, as opposed to class-based or text encoding conditional methods, which are prevalent in 2D generative models <cit.>. The aforementioned conditional generation techniques, such as class-based or text encoding methods, possess limitations. Firstly, the training dataset must include conditional information, such as labels or text corresponding to each sample. Secondly, they cannot provide 3D representation of real-world images as conditional input. We address these limitations in existing conditional generative NeRF models by introducing GAN Inversion into generative NeRF for conditional generation.
§.§ 3D aware GAN inversion
With the remarkable progress of GANs, numerous studies have endeavoured to understand and explore their latent space to manipulate the latent code meaningfully. GAN inversion represents the inverse process of the generator in GANs. Its primary objective is to obtain the latent code by mapping a given image to the generator's latent space. Ideally, the latent code optimized with GAN inversion can accurately reconstruct an image generated from the pre-trained generator. The output sample can be manipulated by exploring meaningful directions in the latent space <cit.>. Moreover, real-world images can be manipulated in the latent space using GAN inversion.
Several studies have investigated 3D GAN inversion with generative NeRF to generate multi-view images of input samples and edit the samples in 3D manifolds. Most previous works fine-tuned the pre-trained generator due to the utilization of optimization-based GAN inversion methods. However, additional steps for fine-tuning the generator for GAN inversion impose limitations in terms of adaptability and computational costs.
In this paper, we propose a novel inverter for 3D-aware zero-shot GAN inversion. The proposed inverter can map out-of-domain images into the latent space of the generator. Our model can generate 3D representations of real-world images without requiring additional training steps. The proposed 3D-aware zero-shot GAN inversion maximizes applicability since the trained model can be directly applied to out-of-domain images.
§ METHOD
This work seeks to generate multi-view images from an out-of-domain image by combining generative NeRF with GAN inversion. The proposed method, graphically delineated in Fig. <ref>, encompasses two distinct phases: the 3D-generation segment and the 3D-aware inverter. The first phase involves training the 3D-generation component, an architecture based on GIRAFFE, augmented by enhancements in the neural renderer and the discriminator modules to fortify and expedite the training process. In the second phase, the 3D-aware inverter is trained with the pre-trained generator. The novel inverter is designed to transform out-of-domain images into latent codes within the generator's latent space. Consequently, the generator can produce multi-view images of the out-of-domain image using the latent code derived from the inverter. Throughout the training of the inverter, we utilize the images generated from the generator, imbued with 3D information, as the training dataset. At test time, the inverter executes zero-shot inversion on real-world images, obviating the need for additional fine-tuning for unseen images. The proposed method thereby holds great promise for generating 3D-consistent multi-view images from real-world input images.
§.§ 3D Generation
Compositional Generative Neural Feature Field. Our 3D-generator represents a scene with a compositional generative neural feature field, a continuous function inherited from GIRAFFE. This is essentially a combination of feature fields, each representing an object in a single scene, with the background also treated as an object. In the 3D-generator, a 3D location 𝐱∈ ℝ^3, a viewing direction d ∈ 𝕊^2, and a latent code z∼𝒩(0,1) are mapped to a volume density σ∈ ℝ^+ and a high-dimensional feature f ∈ ℝ^M_f, rather than an RGB colour c ∈ ℝ^3.
Affine transformation is applied to objects in the scene so that each object can be controlled in terms of poses, which include scale, translation, and rotation:
T = {s, t, R},
where s and t indicate the scale and translation parameters, respectively, and R ∈ SO(3) determines the rotation. The affine transformation enables object-level control by generating the bounding box corresponding to T of a single object:
τ(x) = R·sI·x + t,
where I is the 3 × 3 identity matrix. Compositional generative neural feature field is parameterized with an MLP as follows:
C((σ_i, 𝐟_i)^N_i=1)=C(f_θ i(γ(τ^-1(x)), γ(τ^-1(d)), z_i)^N_i=1),
z = [z^1_s, z^1_a, ..., z^N_s, z^N_a],
where γ(·) is the positional encoding function <cit.>, applied separately to x and d, and C(·) is the composition operator that composites the feature fields of the N-1 objects and the background. We then volume render the composited volume density and feature field rather than directly outputting the final image. The 2D feature map, which is fed into the neural renderer for the final synthesized output, is obtained by the volume rendering function π_v:
π_v(C(σ, f))=F.
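To make this pipeline concrete, the following is a minimal sketch (our own, not the authors' code) of the object-level inverse transform and a density-weighted composition operator C in the style of GIRAFFE; all function and tensor names are ours.

```python
import torch

def inverse_transform(x, R, s, t):
    """Map a scene point into an object's canonical frame by inverting
    tau(x) = R * (s I) * x + t.  Shapes: x, t are (3,), R is (3, 3), s is a scalar."""
    return (R.T @ (x - t)) / s

def composite(sigmas, feats, eps=1e-8):
    """Density-weighted composition of N per-object feature fields, one plausible
    choice of the operator C (following GIRAFFE):
    sigma = sum_i sigma_i and f = sum_i sigma_i * f_i / sigma."""
    sigma = sigmas.sum(dim=0)                                          # (P,)
    f = (sigmas.unsqueeze(-1) * feats).sum(dim=0) / (sigma.unsqueeze(-1) + eps)
    return sigma, f                                                    # (P,), (P, M_f)

# Each per-object MLP f_theta_i would be queried on positionally encoded,
# canonically transformed points and directions together with its latent z_i;
# the composited (sigma, f) is then volume-rendered into the 2D feature map F.
```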
Neural renderer with residual networks. Our model outputs the final synthetic image by applying neural rendering to the feature map produced by volume rendering. We observe that the original neural renderer of GIRAFFE does not preserve features well. Furthermore, the learning rates of the decoder and the neural renderer are not synchronized; hence the training of the generator is unstable.
We improve the simple and unstable neural renderer of GIRAFFE. Our neural renderer replaces 3×3 convolution layer blocks with residual blocks <cit.> and employs the ReLU activation rather than leaky ReLU activation <cit.> for faster and more effective rendering. To stabilize the neural rendering, we adopt spectral normalization <cit.> as weight normalization. We experimentally verify that the modified neural renderer improves the stability of the training and the quality of the outputs. Our neural renderer, which maps the feature map F to the final image Î∈ ℝ^H× W× 3, is parameterized as:
π_θ(F)=Î.
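As an illustration of the renderer blocks described above, the following PyTorch sketch shows one spectrally normalized residual block with ReLU activations; the layer widths, upsampling choice, and number of blocks are our assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

class ResidualRenderBlock(nn.Module):
    """One block of the residual neural renderer pi_theta: 3x3 convolutions
    with spectral normalization, ReLU activations, and a skip connection."""
    def __init__(self, in_ch, out_ch, upsample=True):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2) if upsample else nn.Identity()
        self.conv1 = spectral_norm(nn.Conv2d(in_ch, out_ch, 3, padding=1))
        self.conv2 = spectral_norm(nn.Conv2d(out_ch, out_ch, 3, padding=1))
        self.skip = spectral_norm(nn.Conv2d(in_ch, out_ch, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, f):
        f = self.up(f)
        h = self.act(self.conv1(f))
        h = self.conv2(h)
        return self.act(h + self.skip(f))

# A stack of such blocks maps the feature map F (B, M_f, H_f, W_f) to the final
# image (B, 3, H, W); a last 3x3 convolution producing RGB is one option.
```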
Discriminator. As in the vanilla GAN <cit.>, the discriminator outputs a probability indicating whether the input image is real or fake. We replace the 2D CNN-based discriminator with residual blocks employing spectral normalization as weight normalization.
Objectives. The overall objective function of the 3D-generative part is:
L_G, D=L_GAN+λ L_R1,
where λ = 10. We use the GAN objective <cit.> with the R1 gradient penalty <cit.> to optimize the network.
§.§ 3D-aware Inverter
To invert a given image into latent codes within the generator's latent space, we introduce a novel inverter. The inverter is built by stacking residual encoder blocks with ReLU activations, as depicted in Fig. <ref>, followed by four linear output layers. The residual blocks extract features of the input image, and the linear output layers estimate z^obj_s, z^obj_a, z^bg_s, and z^bg_a of the input image.
The challenge of 3D-aware GAN inversion involves mapping multi-view images of a single object into a unique latent code. To construct a 3D-aware inverter, we opt to use the synthesized image Î as the training data. Given that we already possess the source parameters of the generated image, the inverter solely estimates the latent code z^predict of the input image. The generated training images equip the inverter to extract the feature of unseen images, which vary in viewing direction, scale, and rotation. Following the latent code inference, the pre-trained generator reconstructs the input image using z^predict and source parameters, which include camera pose, ξ^source, and compositional parameter, T^source = {s, t, R}:
I_θ(Î) = z^predict,
G_θ(z^predict, T^source ,ξ^source)= Î^reconst.
As the inverter learns to estimate the source latent code, we found that an L1 loss between the two latent codes in latent space was inadequate for reconstructing the scene. Thus, we opted to employ a GAN loss and an L1 image-level loss to generate a plausible image. In addition, we incorporated two perceptual losses, namely the Structural Similarity Index Measure (SSIM) <cit.> and the Learned Perceptual Image Patch Similarity (LPIPS) <cit.> loss, to preserve the fine details of the source image. The inverter can be optimized using the following function:
L_I =L_GAN(Î^predict)
+λ_1 L_latent(z^source, z^predict)
+λ_2 L_reconst(Î^source, Î^predict)
+λ_3 L_percept(Î^source, Î^predict),
where Î^predict indicates the image reconstructed by the pre-trained generator using z^predict. L_latent and L_reconst represent latent-level and image-level loss, respectively, both utilizing L1 loss. L_percept signifies image-level perceptual loss, employing the LPIPS loss and SSIM loss.
§.§ Training specifications
During the training phase, we randomly sample the latent codes z_s, z_a ∼𝒩 (0,1), and a camera pose ξ∼p_ξ. The parameters λ_1, λ_2, and λ_3 are set to 10, 100, and 1, respectively, for training the inverter. The model is optimized using the RMSProp optimizer <cit.>, with learning rates of 1 × 10^-4, 7 × 10^-5, and 1 × 10^-4 for the generator, the discriminator, and the inverter, respectively. We utilize a batch size of 32. For the first 100,000 iterations, the generator and the discriminator are trained, and the inverter is trained for the next 50,000 iterations. During the training process of the inverter, the generator and the discriminator remain frozen.
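The inverter training can be summarized by the sketch below, which combines the loss terms and weights above; the frozen generator/discriminator interfaces, the non-saturating GAN loss, and the perceptual-loss helpers are our assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def inverter_step(inverter, generator, discriminator, opt_inv,
                  z_src, T_src, xi_src, lpips_fn, ssim_fn,
                  lam1=10.0, lam2=100.0, lam3=1.0):
    """One optimization step of the 3D-aware inverter; the generator and the
    discriminator are frozen and only provide gradients to the inverter."""
    with torch.no_grad():
        img_src = generator(z_src, T_src, xi_src)            # synthesized training image
    z_pred = inverter(img_src)                                # estimated latent code
    img_pred = generator(z_pred, T_src, xi_src)               # reconstruction with source params

    loss_gan = F.softplus(-discriminator(img_pred)).mean()    # non-saturating GAN term (our choice)
    loss_latent = F.l1_loss(z_pred, z_src)                    # latent-level L1
    loss_reconst = F.l1_loss(img_pred, img_src)               # image-level L1
    loss_percept = lpips_fn(img_pred, img_src) + (1.0 - ssim_fn(img_pred, img_src))

    loss = loss_gan + lam1 * loss_latent + lam2 * loss_reconst + lam3 * loss_percept
    opt_inv.zero_grad()
    loss.backward()
    opt_inv.step()
    return loss.item()
```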
§ EXPERIMENTS
ZIGNeRF is evaluated with respect to zero-shot feature extraction, 3D controllability, and adaptability. We test on five real-world datasets: Cats, AFHQ <cit.>, CelebA <cit.>, CelebA-HQ <cit.>, and CompCar <cit.>. An additional dataset, FFHQ <cit.>, is used to demonstrate the robust adaptation capabilities of the proposed model. None of the input images shown in this section were used during the training process, thereby validating zero-shot 3D GAN inversion on unseen images. We begin with a visual validation of the proposed model, examining both the similarity between input and reconstructed images and 3D-consistent controllability. The model is then evaluated using the Fréchet Inception Distance (FID) <cit.> as a metric. We conclude with ablation studies to validate the efficacy of the loss function in optimizing the inverter.
§.§ Controllable 3D Scene Reconstruction
We visually demonstrate that our proposed model generates multi-view consistent images corresponding to the input image. Fig. <ref> showcases 3D reconstruction on CelebA-HQ <cit.> and AFHQ <cit.>, substantiating that the inverter successfully extracts facial features irrespective of gender or skin colour in human faces, and species in animal faces. Fig. <ref> exhibits the model's controllability and object disentanglement with CompCar <cit.>, indicating that the inverter estimates the latent code of the object and background effectively. Notably, the proposed model can facilitate 3D-consistent 360-degree rotation, a common limitation of generative NeRF methods. We further attest to the robustness of our model by applying it to FFHQ, as shown in Fig. <ref>.
§.§ Quantitative Evaluation
To thoroughly evaluate the efficacy of our proposed model, ZIGNeRF, we conduct experiments in both conditional and unconditional generation modes. The evaluation process involves a random sampling of 20,000 real images alongside 20,000 synthesized images, which is a conventional method to compare generative models. The results are displayed in Tab. 1.
In the context of the unconditional model, we generate samples using random latent codes. The training process entails 100,000 iterations. Notably, our model, ZIGNeRF, significantly outperforms the baseline GIRAFFE <cit.> model. As an illustration, for the CelebA(HQ) 256^2 dataset, ZIGNeRF achieves a score of 14.98, substantially lower than GIRAFFE's score of 23.14. This is indicative of the model's ability to produce higher-quality images with fewer iterations.
Turning to conditional synthesis, the latent codes estimated by the inverter are employed on randomly sampled real images. The training process for the generator is conducted over 100,000 iterations, while the inverter training comprises 50,000 iterations, during which the generator is kept static. When compared to GIRAFFE, ZIGNeRF demonstrates superior performance on conditional samples as well. For instance, on the AFHQ 128^2 dataset, our model attains a score of 14.02, marking a significant improvement over GIRAFFE's score of 35.03.
§.§ Ablation study
In the interest of validating the loss function deployed in training the inverter, we undertake an ablation study. The study scrutinizes the necessity of each loss component: latent loss, reconstruction loss, GAN loss, and perceptual loss. The imperative nature of each loss function is demonstrated through its incremental addition to the naive model, which is trained solely via latent code comparison. Fig. 7 illustrates the individual contribution of each loss function. It is observed that the naive model exhibits limited capability in reconstructing the input image. The reconstruction loss L_reconst aligns the reconstructed image with the input at an image-level. The GAN loss L_GAN is observed to enhance the realism of the reconstructed image, independent of improving the input-reconstructed image similarity. The full model elucidates that the perceptual loss L_percept plays a pivotal role in refining the expression of minute attributes, skin colour, and texture.
§ CONCLUSION
In this paper, we have proposed ZIGNeRF, an innovative technique that manifests a 3D representation of real-world images by infusing a 3D-aware zero-shot GAN inversion into generative NeRF. Our inverter is meticulously designed to map an input image onto a latent manifold, a learning process undertaken by the generator. During testing, our model generates a 3D reconstructed scene from a 2D real-world image, employing a latent code ascertained from the inverter. Rigorous experiments conducted with four distinct datasets substantiate that the inverter adeptly extracts features of input images with varying poses, thereby verifying the 3D controllability and immediate adaptation capabilities of our model.
Our novel approach carries the potential for wide application, given that our pipeline can be generally applied to other existing generative NeRFs. It is worth noting that this zero-shot approach is a pioneering contribution to the field, bringing forth a paradigm shift in 3D image representation. In future work, we envisage extending the proposed method by manipulating the inverted latent code for editing the input image, thereby further enhancing the capabilities of this innovative model.
§ INTRODUCTION OF SUPPLEMENTARY MATERIAL
In this supplemental document, we offer a detailed overview of the various architectural elements within the network – including the feature fields, the neural renderer, and the discriminator, all discussed in Section <ref>. Furthermore, in Section <ref>, we elucidate a quantitative analysis of our ablation study results, underscoring the efficacy of our loss functions during the training phase of the inverter. In conclusion, we bring forth additional qualitative findings on datasets such as CelebA-HQ <cit.>, CompCar <cit.>, and AFHQ <cit.>. Two novel experimental approaches are also introduced: the style-mixed 3D representation of two facial input images, and the generation of two objects within a single scene using a generator trained on single-object scenes.
§ NETWORK ARCHITECTURES
In this section, we provide the details of network architecture: feature fields, neural renderer, and the discriminator as exhibited in Fig. <ref> and <ref>.
Fig. <ref> presents a detailed overview of the architecture underpinning the feature fields and the neural renderer. The construct of the feature fields is parameterized via multi-layer perceptrons, colloquially referred to as MLPs, a feature vividly displayed in subfigure (a). This setup maps a three-dimensional point, the viewing direction, along with latent codes into a volume density and a feature. Subfigure (b) unravels the process behind the neural renderer blocks, demonstrating how these blocks transform a volume-rendered feature image, into the ultimate synthesized image.
Fig. <ref> explicates the architecture of the discriminator network, emphasizing the steps involved in processing the input image. Initially, the image is subjected to a series of residual convolution blocks, which are fortified with spectral normalization. This is followed by the execution of an average pooling operation. The process culminates with the derivation of the output probability, which is obtained post the final linear layer, again, involving spectral normalization.
§ SUPPLEMENTARY EXPERIMENTAL RESULTS
§.§ The Necessity of Loss Components in Training Session: A Quantitative Evaluation
Tab. <ref> offers a quantitative testament to the indispensable nature of the loss components used in the training session of the inverter. These encompass latent loss, reconstruction loss, GAN loss, and perceptual loss. It is observed that the Fréchet Inception Distance (FID) <cit.> experiences a steady enhancement with each loss component incrementally added to the naive model, which originally only employs the latent loss.
§.§ Extended Operational Results
In this section, we present the application results of the proposed model through Fig. <ref> and <ref>, showcasing style-mixed 3D synthesis and the generation of two objects within a single scene.
Our model demonstrates a unique ability to generate multiple objects within a single scene, even when trained on a dataset consisting primarily of single-object scenes. This is accomplished by leveraging multiple decoder segments within our network architecture. Although our empirical exploration has only been executed on one dataset, the theoretical underpinnings suggest a promising generalizability of this phenomenon. A testament to the robustness of our model is its successful exhibition of zero-shot learning capabilities, as evidenced by an experiment where two CompCars images are synthesized into one image. Like the generation of individual objects, each object within the composite scene retains the ability to undergo transformations such as longitudinal displacement and rotation.
Additionally, we incorporate style mixing in our model with the application of the inverter structure we proposed, utilizing the CelebA-HQ dataset. In the style mixing paradigm that we suggest, our inverter, producing two distinct outputs, generates a shape vector from one image, and an appearance vector from another. These two vectors are subsequently utilized as input for the generator to synthesize a novel object. This process further underscores the model's zero-shot learning capability.
§.§ Supplementary Results
Fig. <ref>, <ref>, and <ref> deliver additional examples on CelebA-HQ <cit.>, AFHQ <cit.>, and CompCar <cit.> datasets.
We embark on rigorous evaluation of our model using a diverse range of input images sourced from varied datasets. With the CelebA dataset, we assess the model's performance using faces of different genders, ages, and ethnic backgrounds, all of which yield impressive quality in output.
In the context of the AFHQ dataset, we utilize images from a variety of categories as input for our testing phase. It is worth noting that these results, encompassing distinct categories, are obtained using a single model with different conditional vector inputs, thereby highlighting the large capacity of our model.
The CompCars dataset allows us to experiment with 360-degree image generation using real image inputs representing various car models, colours, and camera poses. It is important to note that a significant advantage of our model is the freedom it provides in the longitudinal movement of objects, along with the capacity to alter the background. This flexibility underpins the model's capacity for highly controllable image synthesis, an attribute that holds immense potential for a wide array of applications.
Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications

Paul Pu Liang, Chun Kai Ling, Yun Cheng, Alex Obolenskiy, Yudong Liu, Rohan Pandey, Alex Wilf, Louis-Philippe Morency, and Ruslan Salakhutdinov

arXiv:2306.04539v1 [cs.LG], 7 June 2023
==================================================================================================================================================================================================================================================================================================================================
In many machine learning systems that jointly learn from multiple modalities, a core research question is to understand the nature of multimodal interactions: the emergence of new task-relevant information during learning from both modalities that was not present in either alone.
We study this challenge of interaction quantification in a semi-supervised setting with only labeled unimodal data and naturally co-occurring multimodal data (e.g., unlabeled images and captions, video and corresponding audio) but when labeling them is time-consuming.
Using a precise information-theoretic definition of interactions, our key contributions are the derivations of lower and upper bounds to quantify the amount of multimodal interactions in this semi-supervised setting.
We propose two lower bounds based on the amount of shared information between modalities and the disagreement between separately trained unimodal classifiers, and derive an upper bound through connections to approximate algorithms for min-entropy couplings. We validate these estimated bounds and show how they accurately track true interactions. Finally, two semi-supervised multimodal applications are explored based on these theoretical results: (1) analyzing the relationship between multimodal performance and estimated interactions, and (2) self-supervised learning that embraces disagreement between modalities beyond agreement as is typically done.
§ INTRODUCTION
A core research question in multimodal learning is to understand the nature of multimodal interactions across modalities in the context of a task: the emergence of new task-relevant information during learning from both modalities that was not present in either modality alone <cit.>. In settings where labeled multimodal data is abundant, the study of multimodal interactions has inspired advances in theoretical analysis <cit.> and representation learning <cit.> in language and vision <cit.>, multimedia <cit.>, healthcare <cit.>, and robotics <cit.>. In this paper, we study the problem of interaction quantification in a setting where there is only unlabeled multimodal data 𝒟_M = {(x_1,x_2)} but some labeled unimodal data 𝒟_i = {(x_i,y)} collected separately for each modality. This multimodal semi-supervised paradigm is reminiscent of many real-world settings with the emergence of separate unimodal datasets like large-scale visual recognition <cit.> and text classification <cit.>, as well as the collection of data in multimodal settings (e.g., unlabeled images and captions or video and audio <cit.>) but when labeling them is time-consuming <cit.>.
Using a precise information-theoretic definition of interactions <cit.>, our key contributions are the derivations of lower and upper bounds to quantify the amount of multimodal interactions in this semi-supervised setting with only 𝒟_i and 𝒟_M. We propose two lower bounds for interaction quantification: our first lower bound relates multimodal interactions with the amount of shared information between modalities, and our second lower bound introduces the concept of modality disagreement which quantifies the differences of classifiers trained separately on each modality.
Finally, we propose an upper bound through connections to approximate algorithms for min-entropy couplings <cit.>.
To validate our derivations, we experiment on large-scale synthetic and real-world datasets with varying amounts of interactions.
In addition, these theoretical results naturally yield new algorithms for two applications involving semi-supervised multimodal data:
* We first analyze the relationship between interaction estimates and downstream task performance when optimal multimodal classifiers are trained with access to labeled multimodal data. This analysis can help develop new guidelines for deciding when to collect and fuse labeled multimodal data.
* As a result of our analysis, we further design a new family of self-supervised learning objectives that capture disagreement on unlabeled multimodal data, and show that this learns interactions beyond the agreement conventionally used in the literature <cit.>. Our experiments show strong results on four datasets: relating cartoon images and captions <cit.>, predicting expressions of humor and sarcasm from videos <cit.>, and reasoning about multi-party social interactions <cit.>.
More importantly, these results shed light on the intriguing connections between disagreement, interactions, and performance. Our code is available at <https://github.com/pliang279/PID>.
§ PRELIMINARIES
§.§ Definitions and setup
Let 𝒳_i and 𝒴 be finite sample spaces for features and labels.
Define Δ to be the set of joint distributions over (𝒳_1, 𝒳_2, 𝒴).
We are concerned with features X_1, X_2 (with support 𝒳_i) and labels Y (with support 𝒴) drawn from some distribution p ∈Δ. We denote the probability mass function by p(x_1,x_2,y), where omitted parameters imply marginalization.
In many real-world applications <cit.>, we only have partial datasets from p rather than the full distribution:
* Labeled unimodal data 𝒟_1 = {(x_1,y): 𝒳_1 ×𝒴}, 𝒟_2 = {(x_2,y): 𝒳_2 ×𝒴}.
* Unlabeled multimodal data 𝒟_M = {(x_1,x_2): 𝒳_1 ×𝒳_2}.
𝒟_1, 𝒟_2 and 𝒟_M follow the pairwise marginals p(x_1, y), p(x_2, y) and p(x_1, x_2).
We define Δ_p_1,2 = { q ∈Δ: q(x_i,y)=p(x_i,y) ∀ y∈𝒴, x_i ∈𝒳_i, i ∈ [2] } as the set of joint distributions which agree with the labeled unimodal data 𝒟_1 and 𝒟_2, and Δ_p_1,2,12 = { r ∈Δ: r(x_1,x_2)=p(x_1,x_2), r(x_i,y)=p(x_i,y) } as the set of joint distributions which agree with all 𝒟_1, 𝒟_2 and 𝒟_M.
Despite partial observability, we often still want to understand the degree to which two modalities can interact to contribute new information not present in either modality alone, in order to inform our decisions on multimodal data collection and modeling <cit.>. We now cover background towards a formal information-theoretic definition of interactions and their approximation.
§.§ Information theory, partial information decomposition, and synergy
Information theory formalizes the amount of information that a variable (X_1) provides about another (X_2), and is quantified by Shannon's mutual information (MI) and conditional MI <cit.>:
I(X_1; X_2) = ∫ p(x_1,x_2) log [p(x_1,x_2) / (p(x_1) p(x_2))] dx, I(X_1;X_2|Y) = ∫ p(x_1,x_2,y) log [p(x_1,x_2|y) / (p(x_1|y) p(x_2|y))] dx dy.
The MI of two random variables X_1 and X_2 measures the amount of information (in bits) obtained about X_1 by observing X_2, and by extension, conditional MI is the expected value of MI given the value of a third (e.g., Y). However, the extension of information theory to three or more variables to describe the synergy between two modalities for a task remains an open challenge. Among many proposed frameworks, Partial information decomposition (PID) <cit.> posits a decomposition of the total information 2 variables X_1,X_2 provide about a task Y into 4 quantities: I_p({X_1,X_2}; Y) = R + U_1 + U_2 + S where I_p({X_1,X_2}; Y) is the MI between the joint random variable (X_1,X_2) and Y, redundancy R describes task-relevant information shared between X_1 and X_2, uniqueness U_1 and U_2 studies the task-relevant information present in only X_1 or X_2 respectively, and synergy S investigates the emergence of new information only when both X_1 and X_2 are present <cit.>:
(Multimodal interactions) Given X_1, X_2, and a target Y, we define their redundant (R), unique (U_1 and U_2), and synergistic (S) interactions as:
R = max_q ∈Δ_p_1,2 I_q(X_1; X_2; Y), U_1 = min_q ∈Δ_p_1,2 I_q(X_1; Y | X_2), U_2 = min_q ∈Δ_p_1,2 I_q(X_2; Y| X_1),
S = I_p({X_1,X_2}; Y) - min_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y),
where the notation I_p(·) and I_q(·) disambiguates mutual information (MI) under p and q respectively.
I(X_1; X_2; Y) = I(X_1; X_2) - I(X_1;X_2|Y) is a multivariate extension of information theory <cit.>. Most importantly, R, U_1, and U_2 can be computed exactly using convex programming over distributions q ∈Δ_p_1,2 with access only to the marginals p(x_1,y) and p(x_2,y) by solving an equivalent max-entropy optimization problem q^* = argmax_q ∈Δ_p_1,2 H_q(Y | X_1, X_2) <cit.>. This is a convex optimization problem with linear marginal-matching constraints (see Appendix <ref>). This gives us an elegant interpretation that we need only labeled unimodal data in each feature from 𝒟_1 and 𝒟_2 to estimate redundant and unique interactions.
§ ESTIMATING SYNERGY WITHOUT MULTIMODAL DATA
Unfortunately, S is impossible to compute via equation (<ref>) when we do not have access to the full joint distribution p, since the first term I_p(X_1, X_2;Y) is unknown.
Instead, we will aim to provide lower and upper bounds of the form S̲ ≤ S ≤ S̅ which depend only on 𝒟_1, 𝒟_2, and 𝒟_M.
§.§ Lower bounds on synergy
Our first insight is that while labeled multimodal data is unavailable, the outputs of unimodal classifiers may be compared against each other. Let δ_𝒴 = { r ∈ℝ_+^|𝒴| : ||r||_1 = 1 } be the probability simplex over labels 𝒴. Consider the set of unimodal classifiers ℱ_i ∋ f_i: 𝒳_i →δ_𝒴 and multimodal classifiers ℱ_M ∋ f_M: 𝒳_1 ×𝒳_2 →δ_𝒴.
The crux of our method is to establish a connection between modality disagreement and a lower bound on synergy.
(Modality disagreement) Given X_1, X_2, and a target Y, as well as unimodal classifiers f_1 and f_2, we define modality disagreement as α(f_1,f_2) = 𝔼_p(x_1,x_2) [d(f_1,f_2)] where d: 𝒴×𝒴→ℝ^≥0 is a distance function in label space scoring the disagreement of f_1 and f_2's predictions.
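In practice, α(f_1,f_2) can be estimated by averaging a label-space distance over the unlabeled paired data 𝒟_M; the sketch below is ours, with total variation distance as one possible choice of d.

```python
import torch

@torch.no_grad()
def modality_disagreement(f1, f2, loader_DM):
    """Estimate alpha(f1, f2) = E_{p(x1,x2)}[ d(f1(x1), f2(x2)) ] over unlabeled
    multimodal data; f1 and f2 return probability vectors over the labels."""
    def tv(p, q):                      # total variation distance, one choice of d
        return 0.5 * (p - q).abs().sum(dim=-1)
    total, count = 0.0, 0
    for x1, x2 in loader_DM:           # D_M: paired inputs, no labels
        total += tv(f1(x1), f2(x2)).sum().item()
        count += x1.shape[0]
    return total / count
```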
Quantifying modality disagreement gives rise to two types of synergy as illustrated in Figure <ref>: agreement synergy and disagreement synergy. As their names suggest, agreement synergy happens when two modalities agree in predicting the label and synergy arises within this agreeing information. On the other hand, disagreement synergy happens when two modalities disagree in predicting the label, and synergy arises due to disagreeing information.
Agreement synergy We first consider the case when two modalities contain shared information that leads to agreement in predicting the outcome. In studying these situations, a driving force for estimating S is the amount of shared information I(X_1;X_2) between modalities, with the intuition that more shared information naturally leads to redundancy which gives less opportunity for new synergistic interactions.
Mathematically, we formalize this by relating S to R <cit.>,
S = R - I_p(X_1;X_2;Y) = R - I_p(X_1;X_2) + I_p(X_1;X_2|Y).
implying that synergy exists when there is high redundancy and low (or even negative) three-way MI I_p(X_1;X_2;Y) <cit.>.
By comparing the difference in X_1,X_2 dependence with and without the task (i.e., I_p(X_1;X_2) vs I_p(X_1;X_2|Y)), 2 cases naturally emerge (see top half of Figure <ref>):
* 𝐒>𝐑: When both modalities do not share a lot of information as measured by low I(X_1;X_2), but conditioning on Y increases their dependence: I(X_1;X_2|Y) > I(X_1;X_2), then there is synergy between modalities when combining them for task Y. This setting is reminiscent of common cause structures. Examples of these distributions in the real world are multimodal question answering, where the image and question are less dependent (some questions like `what is the color of the car' or `how many people are there' can be asked for many images), but the answer (e.g., `blue car') connects the two modalities, resulting in dependence given the label. As expected, S = 4.92,R=0.79 for the VQA 2.0 dataset <cit.>.
* 𝐑>𝐒: Both modalities share a lot of information but conditioning on Y reduces their dependence: I(X_1;X_2)>I(X_1;X_2|Y), which results in more redundant than synergistic information. This setting is reminiscent of common effect structures.
A real-world example is in detecting sentiment from multimodal videos, where text and video are highly dependent since they are emitted by the same speaker, but the sentiment label explains away some of the dependencies between both modalities. Indeed, for multimodal sentiment analysis from text, video, and audio of monologue videos on MOSEI <cit.>, R=0.26 and S=0.04.
However, I_p(X_1;X_2|Y) cannot be computed without access to the full distribution p. In Theorem <ref>, we obtain a lower bound on I_p(X_1;X_2|Y), resulting in a lower bound for synergy.
(Lower-bound on synergy via redundancy) We can relate S to R as follows
S̲_R = R - I_p(X_1;X_2) + min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) ≤ S
We include the full proof in Appendix <ref>, but note that min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) is equivalent to a max-entropy optimization problem solvable using convex programming. This implies that S̲_R can be computed efficiently using only unimodal data 𝒟_i and unlabeled multimodal data 𝒟_M.
Disagreement synergy We now consider settings where two modalities disagree in predicting the outcome: suppose y_1 = argmax_y p(y|x_1) is the most likely prediction from the first modality, y_2 = argmax_y p(y|x_2) for the second modality, and y = argmax_y p(y|x_1,x_2) the true multimodal prediction. During disagreement, there are again 2 cases (see bottom half of Figure <ref>):
* 𝐔>𝐒: The multimodal prediction y = argmax_y p(y|x_1,x_2) is the same as one of the unimodal predictions (e.g., y=y_2), in which case unique information in modality 2 leads to the outcome.
A real-world dataset that we categorize in this case is MIMIC involving mortality and disease prediction from tabular patient data and time-series medical sensors <cit.> which primarily shows unique information in the tabular modality. The disagreement on MIMIC is high α=0.13, but since disagreement is due to a lot of unique information, there is less synergy S=0.01.
* 𝐒>𝐔: Multimodal prediction y is different from both y_1 and y_2, then both modalities interact synergistically to give rise to a final outcome different from both disagreeing unimodal predictions.
This type of joint distribution is indicative of real-world examples such as predicting sarcasm from language and speech - the presence of sarcasm is typically detected due to a contradiction between what is expressed in language and speech, as we observe from the experiments on MUStARD <cit.> where S=0.44 and α=0.12 are both relatively large.
We formalize these intuitions via Theorem <ref>, yielding a lower bound based on disagreement minus the maximum unique information in both modalities:
(Lower-bound on synergy via disagreement, informal) We can relate synergy S and uniqueness U to modality disagreement α(f_1,f_2) of optimal unimodal classifiers f_1,f_2 as follows:
S̲_U = α(f_1,f_2) · c - max(U_1,U_2) ≤ S
for some constant c depending on the label dimension |𝒴| and choice of label distance function d.
Theorem <ref> implies that if there is substantial disagreement α(f_1,f_2) between unimodal classifiers, it must be due to the presence of unique or synergistic information. If uniqueness is small, then disagreement must be accounted for by synergy, thereby yielding a lower bound S̲_U. Note that the notion of optimality in unimodal classifiers is important: poorly-trained unimodal classifiers could show high disagreement but would be uninformative about true interactions. We include the formal version of the theorem based on Bayes' optimality and a full proof in Appendix <ref>.
Hence, agreement and disagreement synergy yield separate lower bounds S̲_R and S̲_U. Note that these bounds always hold, so we could take S̲ = max{S̲_R, S̲_U}.
§.§ Upper bound on synergy
While the lower bounds tell us the least amount of synergy possible in a distribution, we also want to obtain an upper bound on the possible synergy, which together with the above lower bounds sandwich S.
By definition, S = I_p({X_1,X_2}; Y) - min_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y). Since the second term can be computed exactly from the unimodal marginals, upper bounding synergy amounts to upper bounding the total MI I_p({X_1,X_2}; Y) by maximizing it over distributions consistent with the observed data, which can be rewritten as
max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y) = max_r ∈Δ_p_1,2,12 { H_r(X_1, X_2) + H_r(Y) - H_r(X_1, X_2, Y) } = H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y),
where the second line follows from the definition of Δ_p_1,2,12. Since the first two terms are constant, an upper bound on S requires us to look amongst all multimodal distributions r ∈Δ which match the unimodal 𝒟_i and unlabeled multimodal data 𝒟_M, and find the one with minimum entropy.
Solving r^* = argmin_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y) is NP-hard, even for a fixed |𝒴| ≥ 4.
Theorem <ref> suggests we cannot tractably find a joint distribution which tightly upper bounds synergy when the feature spaces are large. Thus, our proposed upper bound is based on a lower bound on min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y), which yields
(Upper-bound on synergy)
S ≤ H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_12,y H_r(X_1, X_2, Y) - min_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y) = S̅,
where Δ_p_12,y = { r ∈Δ : r(x_1,x_2)=p(x_1,x_2), r(y)=p(y) }. The second optimization problem is solved with convex optimization. The first is the classic min-entropy coupling over (X_1, X_2) and Y, which is still NP-hard but admits good approximations <cit.>.
Proofs of Theorem <ref>, <ref>, and approximations for min-entropy couplings are deferred to Appendix <ref> and <ref>.
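For intuition, a simple greedy heuristic in the spirit of the approximation algorithms cited above builds a low-entropy coupling of p(x_1,x_2) and p(y), whose entropy approximates the min-entropy term in the theorem; the code below is our own sketch, not the exact procedure used in the paper.

```python
import numpy as np

def greedy_min_entropy_coupling(p, q, tol=1e-12):
    """Greedy heuristic for an approximately minimum-entropy coupling of two
    discrete marginals p and q (1-D arrays summing to 1): repeatedly place as
    much mass as possible on the cell pairing their largest remaining entries."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    M = np.zeros((len(p), len(q)))
    while p.sum() > tol and q.sum() > tol:
        i, j = int(p.argmax()), int(q.argmax())
        m = min(p[i], q[j])
        M[i, j] += m
        p[i] -= m
        q[j] -= m
    return M

def entropy_bits(P, eps=1e-12):
    P = P[P > eps]
    return float(-(P * np.log2(P)).sum())

# Approximate min_r H_r(X1, X2, Y) by coupling the flattened p(x1, x2) with p(y):
#   H_joint ~= entropy_bits(greedy_min_entropy_coupling(p_x1x2.ravel(), p_y))
```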
§ EXPERIMENTS
We design comprehensive experiments to validate these estimated bounds and show new relationships between disagreement, multimodal interactions, and performance, before describing two applications in (1) estimating optimal multimodal performance without multimodal data to prioritize the collection and fusion data sources, and (2) a new disagreement-based self-supervised learning method.
§.§ Verifying predicted guarantees and analysis of multimodal distributions
Synthetic bitwise datasets: We enumerate joint distributions over 𝒳_1, 𝒳_2, 𝒴∈{0,1} by sampling 100,000 vectors in the 8-dimensional probability simplex and assigning them to each p(x_1,x_2,y).
Using these distributions, we estimate p̂(y|x_1) and p̂(y|x_2) to compute disagreement and the marginals p̂(x_1,y), p̂(x_2,y), and p̂(x_1,x_2) to estimate the lower and upper bounds.
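A minimal sketch of how one such synthetic distribution and its observable marginals can be generated (a symmetric Dirichlet gives a uniform draw from the simplex; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# One synthetic bitwise distribution p(x1, x2, y) over {0,1}^3, drawn uniformly
# from the probability simplex on 8 outcomes via Dirichlet(1, ..., 1).
p = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)    # p[x1, x2, y]

# Pairwise marginals observable in the semi-supervised setting.
p_x1y  = p.sum(axis=1)     # p(x1, y)  from D_1
p_x2y  = p.sum(axis=0)     # p(x2, y)  from D_2
p_x1x2 = p.sum(axis=2)     # p(x1, x2) from D_M

# Unimodal predictors p(y | x_i) used to measure disagreement.
p_y_given_x1 = p_x1y / p_x1y.sum(axis=1, keepdims=True)
p_y_given_x2 = p_x2y / p_x2y.sum(axis=1, keepdims=True)
```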
[Figure: Our two lower bounds S̲_R and S̲_U track actual synergy S from below, and the upper bound S̅ tracks S from above. We find that S̲_R and S̲_U tend to approximate S better than S̅.]
Large real-world multimodal datasets: We also use the large collection of real-world datasets in MultiBench <cit.>: (1) MOSI: video-based sentiment analysis <cit.>,
(2) MOSEI: video-based sentiment and emotion analysis <cit.>, (3) MUStARD: video-based sarcasm detection <cit.>, (4) UR-FUNNY: video-based humor detection <cit.>, (5) MIMIC: mortality and disease prediction from tabular patient data and medical sensors <cit.>, and (6) ENRICO: classification of mobile user interfaces and screenshots <cit.>.
While the previous bitwise datasets with small and discrete support yield exact lower and upper bounds, this new setting with high-dimensional continuous modalities requires the approximation of disagreement and information-theoretic quantities: we train unimodal neural network classifiers f̂_θ(y|x_1) and f̂_θ(y|x_2) to estimate disagreement, and we cluster representations of X_i to approximate the continuous modalities by discrete distributions with finite support to compute lower and upper bounds.
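One way to carry out this discretization, sketched below with k-means and our own naming (the cluster count k is an assumed hyperparameter), is to quantize learned unimodal representations and form histogram estimates of the pairwise marginals from the corresponding datasets.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_codebooks(Z1, Z2, k=20, seed=0):
    """Cluster learned unimodal representations so each modality takes one of
    k discrete symbols (k-means is our choice of clustering)."""
    km1 = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Z1)
    km2 = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Z2)
    return km1, km2

def histogram2d(a, b, n_a, n_b):
    """Empirical joint distribution of two integer-coded arrays."""
    P = np.zeros((n_a, n_b))
    np.add.at(P, (np.asarray(a, int), np.asarray(b, int)), 1.0)
    return P / len(a)

# p(x1, y) from D_1, p(x2, y) from D_2, and p(x1, x2) from D_M, e.g.:
#   p_x1y  = histogram2d(km1.predict(Z1_labeled), y1, k, n_y)
#   p_x2y  = histogram2d(km2.predict(Z2_labeled), y2, k, n_y)
#   p_x1x2 = histogram2d(km1.predict(Z1_unlab), km2.predict(Z2_unlab), k, k)
```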
We summarize the following regarding the utility of each bound (see details in Appendix <ref>):
1. Overall trends: For the 100,000 bitwise distributions, we compute S, the true value of synergy assuming oracle knowledge of the full multimodal distribution, and compute S̲_R - S, S̲_U - S, and S̅ - S for each point. Plotting these points as a histogram in Figure <ref>, we find that the two lower bounds track actual synergy from below (S̲_R - S and S̲_U - S approaching 0 from below), and the upper bound tracks synergy from above (S̅ - S approaching 0 from above). The two lower bounds are quite tight, as we see that for many points S̲_R - S and S̲_U - S are close to 0, with an average gap of 0.18. The disagreement bound seems to be tighter empirically than the agreement bound: for half the points, S̲_U is within 0.14 and S̲_R is within 0.2 of S. For the upper bound, there is an average gap of 0.62. However, it performs especially well on high-synergy data: when S > 0.6, the average gap is 0.24, with more than half of the points within 0.25 of S.
On real-world MultiBench datasets, we show the estimated bounds and actual S (assuming knowledge of full p) in Table <ref>. The lower and upper bounds track true S: as estimated S̲ and S̅ increase from MOSEI to UR-FUNNY to MOSI to MUStARD, true S also increases.
For datasets like MIMIC with disagreement but high uniqueness, S̲_U can be negative, but we can rely on S̅ to give a tight estimate on low synergy. Unfortunately, our bounds do not track synergy well on ENRICO. We believe this is because ENRICO displays all interactions: R=0.73, U_1=0.38, U_2=0.53, S=0.34, which makes it difficult to distinguish between R and S using S̲_R or between U and S using S̲_U since no interaction dominates over the others, and S̅ is also quite loose relative to the lower bounds. Given these general observations, we now carefully analyze the relationships between interactions, agreement, and disagreement.
2. The relationship between redundancy and synergy: In Table <ref> we show the classic agreement XOR distribution where X_1 and X_2 are independent, but Y=1 sets X_1 ≠ X_2 to increase their dependence. I(X_1;X_2;Y) is negative, and S̲_R = 1 ≤ 1 = S is tight.
On the other hand, Table <ref> is an extreme example where the probability mass is distributed uniformly only when y=x_1=x_2 and 0 elsewhere. As a result, X_1 is always equal to X_2 (perfect dependence), and yet Y perfectly explains away the dependence between X_1 and X_2 so I(X_1;X_2|Y) = 0: S̲_R = 0 ≤ 0 = S. A real-world example is multimodal sentiment analysis from text, video, and audio on MOSEI, where R=0.26 and S=0.03, and as expected the lower bound is small: S̲_R = 0 ≤ 0.03 = S (Table <ref>).
3. The relationship between disagreement and synergy: In Table <ref> we show an example called disagreement XOR. There is maximum disagreement between the marginals p(y|x_1) and p(y|x_2): the likelihood for y is high when y is the opposite bit as x_1, but reversed for x_2. Given both x_1 and x_2, y seems to take a `disagreement' XOR of the individual marginals, i.e., p(y|x_1,x_2) = argmax_y p(y|x_1) XOR argmax_y p(y|x_2), which indicates synergy (note that an exact XOR would imply perfect agreement and high synergy). The actual disagreement is 0.15, synergy is 0.16, and uniqueness is 0.02, indicating a very strong lower bound S̲_U = 0.14 ≤ 0.16 = S. A real-world equivalent dataset is MUStARD, where the presence of sarcasm is often due to a contradiction between what is expressed in language and speech, so disagreement α=0.12 is the highest out of all the video datasets, giving a lower bound S̲_U = 0.11 ≤ 0.44 = S.
On the contrary, the lower bound is low when all disagreement is explained by uniqueness (e.g., y=x_1, Table <ref>), which results in S̲_U = 0 ≤ 0 = S (α and U cancel each other out). A real-world equivalent is MIMIC: from Table <ref>, disagreement is high (α=0.13) due to unique information U_1=0.25, so the lower bound informs us about the lack of synergy: S̲_U = -0.12 ≤ 0.02 = S.
Finally, the lower bound is loose when there is synergy without disagreement, such as agreement XOR (y = x_1 XOR x_2, Table <ref>) where the marginals p(y|x_i) are both uniform, but there is full synergy: S̲_U = 0 ≤ 1 = S. Real-world datasets which fall into agreement synergy include UR-FUNNY, where there is low disagreement in predicting humor (α=0.03) and relatively high synergy (S=0.18), which results in a loose lower bound S̲_U = 0.01 ≤ 0.18 = S.
4. On upper bounds for synergy: Finally, we find that the upper bound for MUStARD is quite close to real synergy, S̅ = 0.79 ≥ 0.44 = S. On MIMIC, the upper bound is the lowest, S̅ = 0.41, matching the lowest S=0.02. Some of the other examples in Table <ref> show bounds that are quite weak.
This could be because (i) there indeed exists high synergy distributions that match 𝒟_i and 𝒟_M, but these are rare in the real world, or (ii) our approximation used in Theorem <ref> is mathematically loose. We leave these as open directions for future work.
§.§ Application 1: Estimating multimodal performance for multimodal fusion
Now that we have validated the accuracy of these lower and upper bounds, we can apply them towards estimating multimodal performance without labeled multimodal data. This serves as a strong signal for deciding (1) whether to collect paired and labeled data from a second modality, and (2) whether one should use complex fusion techniques on collected multimodal data.
Method: Our approach for answering these two questions is as follows: given 𝒟_1, 𝒟_2, and 𝒟_M, we can estimate synergistic information based on our derived lower and upper bounds S̲ and S̅. Together with redundant and unique information which can be computed exactly, we will use the total information to estimate the performance of multimodal models trained optimally on the full multimodal distribution. Formally, we estimate optimal performance via a result from <cit.> and Fano's inequality <cit.>, which together yield tight bounds on performance as a function of total information I_p({X_1,X_2}; Y).
Let P_acc(f_M^*) = 𝔼_p [ 1[ f_M^*(x_1,x_2) = y ] ] denote the accuracy of the Bayes' optimal multimodal model f_M^* (i.e., P_acc (f_M^*) ≥ P_acc (f'_M) for all f'_M ∈ℱ_M). We have that
2^(I_p({X_1,X_2}; Y) - H(Y)) ≤ P_acc(f_M^*) ≤ (I_p({X_1,X_2}; Y) + 1) / log |𝒴|,
where we can plug in R + U_1 + U_2 + S̲ ≤ I_p({X_1,X_2}; Y) ≤ R + U_1 + U_2 + S̅ to obtain lower P̲_acc(f_M^*) and upper P̅_acc(f_M^*) bounds on optimal multimodal performance (refer to Appendix <ref> for the full proof). Finally, we summarize estimated multimodal performance as the average P̂_M = (P̲_acc(f_M^*) + P̅_acc(f_M^*))/2. A high P̂_M suggests the presence of important joint information from both modalities (not present in each) which could boost accuracy, so it is worthwhile to collect the full distribution p and explore multimodal fusion <cit.> to learn joint information over unimodal methods.
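A small helper (ours, with assumed argument names) shows how interaction estimates in bits translate into this accuracy range:

```python
import numpy as np

def estimated_multimodal_performance(R, U1, U2, S_lower, S_upper, H_y, n_classes):
    """Plug interaction estimates into the accuracy bounds above:
    I_p({X1,X2}; Y) is sandwiched by R + U1 + U2 + S_lower and ... + S_upper."""
    I_lo = R + U1 + U2 + S_lower
    I_hi = R + U1 + U2 + S_upper
    acc_lower = 2.0 ** (I_lo - H_y)                             # lower bound on P_acc(f_M*)
    acc_upper = min(1.0, (I_hi + 1.0) / np.log2(n_classes))     # Fano-style upper bound
    return acc_lower, acc_upper, 0.5 * (acc_lower + acc_upper)  # last value is P^hat_M
```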
Results: For each MultiBench dataset, we implement a suite of unimodal and multimodel models spanning simple and complex fusion. Unimodal models are trained and evaluated separately on each modality. Simple fusion includes ensembling by taking an additive or majority vote between unimodal models <cit.>. Complex fusion is designed to learn higher-order interactions as exemplified by bilinear pooling <cit.>, multiplicative interactions <cit.>, tensor fusion <cit.>, and cross-modal self-attention <cit.>. See Appendix <ref> for models and training details. We include unimodal, simple and complex multimodal performance, as well as estimated lower and upper bounds on optimal multimodal performance in Table <ref>.
[Figure: Datasets with higher estimated multimodal performance P̂_M tend to show improvements from unimodal to multimodal (left) and from simple to complex multimodal fusion (right).]
RQ1: Should I collect multimodal data?
We compare estimated performance P̂_M with the actual difference between unimodal and best multimodal performance in Figure <ref> (left).
Higher estimated P̂_M correlates with a larger gain from unimodal to multimodal. MUStARD and ENRICO show the most opportunity for multimodal modeling, but MIMIC shows less improvement.
RQ2: Should I investigate multimodal fusion?
From Table <ref>, synergistic datasets like MUStARD and ENRICO show best reported multimodal performance only slightly above the estimated lower bound, indicating more work to be done in multimodal fusion. For datasets with less synergy like MOSEI and MIMIC, the best multimodal performance is much higher than the estimated lower bound, indicating that existing fusion methods may already be quite optimal. We compare P̂_M with the performance gap between complex and simple fusion methods in Figure <ref> (right). We again observe trends between higher P̂_M and improvements with complex fusion, with large gains on MUStARD and ENRICO. We expect new methods to further improve the state-of-the-art on these datasets due to their generally high interaction values and low multimodal performance relative to the estimated lower bound P̲_acc(f_M^*).
§.§ Application 2: Self-supervised multimodal learning via disagreement
[Figure: Masked predictions do not always agree across modalities, as shown in this example from the Social-IQ dataset <cit.>. Adding a slack term enabling pre-training with modality disagreement yields strong performance improvements over baselines.]
Finally, we highlight an application of our analysis towards self-supervised pre-training, which is generally performed by encouraging agreement as a pre-training signal on large-scale unlabeled data <cit.> before supervised fine-tuning <cit.>.
However, our results suggest that there are regimes where disagreement can lead to synergy that may otherwise be ignored when only training for agreement. We therefore design a new family of self-supervised learning objectives that capture disagreement on unlabeled multimodal data.
Method: We build upon masked prediction that is popular in self-supervised pre-training: given multimodal data of the form (x_1,x_2) ∼ p(x_1,x_2) (e.g., x_1= caption and x_2= image), first mask out some words (x_1') before using the remaining words (x_1 \ x_1') to predict the masked words via learning f_θ(x_1'|x_1 \ x_1'), as well as the image x_2 to predict the masked words via learning f_θ(x_1'|x_2) <cit.>. In other words, maximizing agreement between f_θ(x_1'|x_1 \ x_1') and f_θ(x_1'|x_2) in predicting x_1':
ℒ_agree = d(f_θ(x_1'|x_1\ x_1'), x_1') + d(f_θ(x_1'|x_2), x_1')
for a distance d such as cross-entropy loss for discrete word tokens. To account for disagreement, we allow predictions on the masked tokens x_1' from two different modalities i,j to disagree by a slack variable λ_ij. We modify the objective such that each term only incurs a loss penalty if each distance d(x,y) is larger than λ as measured by a margin distance d_λ(x,y) = max (0, d(x,y) - λ):
ℒ_disagree = ℒ_agree + ∑_1 ≤ i < j ≤ 2 d_λ_ij (f_θ(x_1'|x_i), f_θ(x_1'|x_j))
These λ terms are hyperparameters, quantifying the amount of disagreement we tolerate between each pair of modalities during cross-modal masked pretraining (λ=0 recovers full agreement). We show this visually in Figure <ref> by applying it to masked pre-training on text, video, and audio using MERLOT Reserve <cit.>, and also apply it to FLAVA <cit.> for images and text experiments (see extensions to 3 modalities and details in Appendix <ref>).
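For illustration, a minimal two-modality version of this objective can be written as follows; the use of cross-entropy for the agreement terms, a symmetric KL divergence as the disagreement distance d, and all names are our choices rather than the exact MERLOT Reserve or FLAVA implementation.

```python
import torch
import torch.nn.functional as F

def masked_disagreement_loss(logits_m1, logits_m2, masked_tokens, lam=0.5):
    """Cross-modal masked prediction with a slack term that only penalizes
    disagreement between the two modalities beyond the margin lam."""
    # Agreement terms: both views must predict the masked tokens x_1'.
    loss_agree = (F.cross_entropy(logits_m1, masked_tokens)
                  + F.cross_entropy(logits_m2, masked_tokens))
    # Disagreement term with slack: hinge on a symmetric KL between predictions.
    logp1 = F.log_softmax(logits_m1, dim=-1)
    logp2 = F.log_softmax(logits_m2, dim=-1)
    d = 0.5 * (F.kl_div(logp2, logp1.exp(), reduction="batchmean")
               + F.kl_div(logp1, logp2.exp(), reduction="batchmean"))
    return loss_agree + torch.clamp(d - lam, min=0.0)
```

Setting lam=0 recovers the pure agreement objective, while larger lam tolerates more cross-modal disagreement during pre-training.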
Setup: We choose four settings with natural disagreement: (1) UR-FUNNY: humor detection from 16,000 TED talk videos <cit.>, (2) MUStARD: 690 videos for sarcasm detection from TV shows <cit.>, (3) Social IQ: 1,250 multi-party videos testing social intelligence knowledge <cit.>, and (4) Cartoon: matching 704 cartoon images and captions <cit.>.
Results: From Table <ref>, allowing for disagreement yields improvements on these datasets, with those on Social IQ, UR-FUNNY, MUStARD being statistically significant (p-value <0.05 over 10 runs). By analyzing the value of λ resulting in the best validation performance through hyperparameter search, we can analyze when disagreement helps for which datasets, datapoints, and modalities. On a dataset level, we find that disagreement helps for video/audio and video/text, improving accuracy by up to 0.6% but hurts for text/audio, decreasing the accuracy by up to 1%. This is in line with intuition, where spoken text is transcribed directly from audio for these monologue and dialog videos, but video can have vastly different information. In addition, we find more disagreement between text/audio for Social IQ, which we believe is because it comes from natural videos while the others are scripted TV shows with more agreement between speakers and transcripts.
We further analyze individual datapoints with disagreement. On UR-FUNNY, the moments when the camera jumps from the speaker to their presentation slides are followed by an increase in agreement since the video aligns better with the speech. In MUStARD, we observe disagreement between vision and text when the speaker's face expresses the sarcastic nature of a phrase. This changes the meaning of the phrase, which cannot be inferred from text only, and leads to synergy. We include more qualitative examples including those on the Cartoon captioning dataset in Appendix <ref>.
§ RELATED WORK
Multivariate information theory: The extension of information theory to 3 or more variables <cit.> remains an open problem. Partial information decomposition (PID) <cit.> was proposed as a potential solution that satisfies several appealing properties <cit.>. Today, PID has primarily found applications in cryptography <cit.>, neuroscience <cit.>, physics <cit.>, complex systems <cit.>, and biology <cit.>, but its application towards machine learning, in particular multimodality, is an exciting but untapped research direction. To the best of our knowledge, our work is the first to provide formal estimates of synergy in the context of unlabeled or unpaired multimodal data which is common in today's self-supervised paradigm <cit.>.
Understanding multimodal models: Information theory is useful for understanding co-training <cit.>, multi-view learning <cit.>, and feature selection <cit.>, where redundancy is an important concept. Prior research has also studied multimodal models via additive or non-additive interactions <cit.>, gradient-based approaches <cit.>, or visualization tools <cit.>. This goal of quantifying and modeling multimodal interactions <cit.> has also motivated many successful learning algorithms, such as contrastive learning <cit.>, agreement and alignment <cit.>, factorized representations <cit.>, as well as tensors and multiplicative interactions <cit.>.
Disagreement-based learning has been used to estimate performance from unlabeled data <cit.>, active learning <cit.>, and guiding exploration in reinforcement learning <cit.>. In multimodal learning, however, approaches have been primarily based on encouraging agreement in prediction <cit.> or feature space <cit.> in order to capture shared information. Our work has arrived at similar conclusions regarding the benefits of disagreement-based learning, albeit from different mathematical motivations and applications.
§ CONCLUSION
We proposed estimators of multimodal interactions when observing only labeled unimodal data and some unlabeled multimodal data, a general setting that encompasses many real-world constraints involving partially observable modalities, limited labels, and privacy concerns. Our key results draw new connections between multimodal interactions, the disagreement of unimodal classifiers, and min-entropy couplings. Future work should investigate more applications of multivariate information theory in designing self-supervised models, predicting multimodal performance, and other tasks involving feature interactions such as privacy-preserving and fair representation learning.
§ ACKNOWLEDGEMENTS
This material is based upon work partially supported by Meta, National Science Foundation awards 1722822 and 1750439, and National Institutes of Health awards R01MH125740, R01MH132225, R01MH096951 and R21MH130767.
PPL is partially supported by a Facebook PhD Fellowship and a Carnegie Mellon University's Center for Machine Learning and Health Fellowship.
RS is supported in part by ONR N000141812861, ONR N000142312368 and DARPA/AFRL FA87502321015.
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, NIH, Meta, Carnegie Mellon University’s Center for Machine Learning and Health, ONR, DARPA, or AFRL, and no official endorsement should be inferred. Finally, we would also like to acknowledge NVIDIA’s GPU support.
§ APPENDIX
§ BROADER IMPACT
Multimodal semi-supervised models are ubiquitous in a range of real-world applications with only labeled unimodal data and naturally co-occurring multimodal data (e.g., unlabeled images and captions, video and corresponding audio) but when labeling them is time-consuming. This paper is our attempt at formalizing the learning setting of multimodal semi-supervised learning, allowing us to derive bounds on the information existing in multimodal semi-supervised datasets and what can be learned by models trained on these datasets. We do not foresee any negative broad impacts of our theoretical results, but we do note the following concerns regarding the potential empirical applications of these theoretical results in real-world multimodal datasets:
Biases: We acknowledge risks of potential biases surrounding gender, race, and ethnicity in large-scale multimodal datasets <cit.>, especially those collected in a semi-supervised setting with unlabeled and unfiltered images and captions <cit.>. Formalizing the types of bias in multimodal datasets and mitigating them is an important direction for future work.
Privacy: When making predictions from multimodal datasets with recorded human behaviors and medical data, there might be privacy risks of participants. Following best practices in maintaining the privacy and safety of these datasets, (1) these datasets have only been collected from public data that are consented for public release (creative commons license and following fair use guidelines of YouTube) <cit.>, or collected from hospitals under strict IRB and restricted access guidelines <cit.>, and (2) have been rigorously de-identified in accordance with Health Insurance Portability and Accountability Act such that all possible personal and protected information has been removed from the dataset <cit.>. Finally, we only use these datasets for research purposes and emphasize that any multimodal models trained to perform prediction should only be used for scientific study and should not in any way be used for real-world harm.
§ DETAILED PROOFS
§.§ Information decomposition
Partial information decomposition (PID) <cit.> decomposes the total information that 2 variables provide about a task, I({X_1,X_2}; Y), into 4 quantities: redundancy R between X_1 and X_2, unique information U_1 in X_1 and U_2 in X_2, and synergy S. <cit.>, who first proposed PIDs, showed that they should satisfy the following consistency equations:
R + U_1 = I(X_1; Y),
R + U_2 = I(X_2; Y),
U_1 + S = I(X_1; Y | X_2),
U_2 + S = I(X_2; Y | X_1),
R - S = I(X_1; X_2; Y).
We choose the PID definition by <cit.>, where redundancy, uniqueness, and synergy are defined by the solution to the following optimization problems:
R = max_q ∈Δ_p I_q(X_1; X_2; Y)
U_1 = min_q ∈Δ_p I_q(X_1; Y | X_2)
U_2 = min_q ∈Δ_p I_q(X_2; Y| X_1)
S = I_p({X_1,X_2}; Y) - min_q ∈Δ_p I_q({X_1,X_2}; Y)
where Δ_p = { q ∈Δ: q(x_i,y)=p(x_i,y) ∀ y, x_i, i ∈{1,2}}, Δ is the set of all joint distributions over X_1, X_2, Y, and the notation I_p(·) and I_q(·) disambiguates MI under joint distributions p and q respectively. The key difference in this definition of PID lies in optimizing q ∈Δ_p to satisfy the marginals q(x_i,y)=p(x_i,y), but relaxing the coupling between x_1 and x_2: q(x_1,x_2) need not be equal to p(x_1,x_2). The intuition behind this is that one should be able to infer redundancy and uniqueness given only access to separate marginals p(x_1,y) and p(x_2,y), and therefore they should only depend on q ∈Δ_p which match these marginals. Synergy, however, requires knowing the coupling p(x_1,x_2), and this is reflected in equation (<ref>) depending on the full p distribution.
§.§ Computing q^*, redundancy, and uniqueness
According to <cit.>, it suffices to solve for q using the following max-entropy optimization problem: q^* = argmax_q ∈Δ_p H_q(Y | X_1, X_2); the same q^* equivalently solves any of the remaining problems defined for redundancy, uniqueness, and synergy.
This is a concave maximization problem with linear constraints. When 𝒳_i and 𝒴 are small and discrete, we can represent all valid distributions q(x_1,x_2,y) as a set of tensors Q of shape |𝒳_1| × |𝒳_2| × |𝒴| with each entry representing Q[i,j,k] = p(X_1=i,X_2=j,Y=k). The problem then boils down to optimizing over valid tensors Q ∈Δ_p that match the marginals p(x_i,y) for the objective function H_q(Y | X_1, X_2). We rewrite conditional entropy as a KL-divergence <cit.>, H_q(Y|X_1, X_2) = log |𝒴| - KL(q||q̃), where q̃ is an auxiliary product density of q(x_1,x_2) ·1/|𝒴| enforced using linear constraints: q̃(x_1, x_2, y) = q(x_1,x_2) / |𝒴|. The KL-divergence objective is recognized as convex, allowing the use of conic solvers such as SCS <cit.>, ECOS <cit.>, and MOSEK <cit.>.
Finally, optimizing over Q ∈Δ_p that match the marginals can also be enforced through linear constraints: the 3D-tensor Q summed over the second dimension gives q(x_1,y) and summed over the first dimension gives q(x_2,y), yielding the final optimization problem:
min_Q,Q̃ KL(Q||Q̃), s.t. Q̃(x_1, x_2, y) = Q(x_1,x_2) / |𝒴|,
∑_x_2 Q = p(x_1,y), ∑_x_1 Q = p(x_2,y), Q ≥ 0, ∑_x_1,x_2,y Q = 1.
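For concreteness, a minimal sketch of this convex program using CVXPY is shown below. This is not the authors' released implementation: the function and variable names are ours, the two input tables are assumed to share the same label marginal p(y), and we parameterize q(x_1,x_2,y) as one |𝒳_1| × |𝒳_2| block per label so that the marginal constraints stay linear.

```python
import numpy as np
import cvxpy as cp

def solve_q_star(p1y, p2y):
    """Illustrative sketch: max-entropy q* over Delta_p matching p(x1,y) and p(x2,y).

    p1y: array of shape (|X1|, |Y|); p2y: array of shape (|X2|, |Y|).
    Both tables must be consistent, i.e. p1y.sum(axis=0) == p2y.sum(axis=0) == p(y).
    Returns q*(x1, x2, y) as an array of shape (|X1|, |X2|, |Y|).
    """
    n1, ny = p1y.shape
    n2 = p2y.shape[0]
    Qy = [cp.Variable((n1, n2), nonneg=True) for _ in range(ny)]  # q(x1, x2, y) sliced by y
    Qsum = sum(Qy)                                                # q(x1, x2), affine in Qy
    constraints = []
    for y in range(ny):
        constraints += [cp.sum(Qy[y], axis=1) == p1y[:, y],       # marginal q(x1, y) = p(x1, y)
                        cp.sum(Qy[y], axis=0) == p2y[:, y]]       # marginal q(x2, y) = p(x2, y)
    # maximizing H_q(Y | X1, X2) is equivalent to minimizing sum_y KL(q(x1,x2,y) || q(x1,x2))
    objective = cp.Minimize(cp.sum(sum(cp.rel_entr(Qy[y], Qsum) for y in range(ny))))
    cp.Problem(objective, constraints).solve()   # requires an exponential-cone-capable solver (e.g. SCS)
    return np.stack([Qy[y].value for y in range(ny)], axis=-1)
```

Redundancy and uniqueness then follow by evaluating I_q^*(X_1;X_2;Y), I_q^*(X_1;Y|X_2), and I_q^*(X_2;Y|X_1) on the returned array.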
After solving this optimization problem, plugging q^* into (<ref>)-(<ref>) yields the desired estimators for redundancy and uniqueness: R = I_q^*(X_1; X_2; Y), U_1 = I_q^*(X_1; Y | X_2), U_2 = I_q^*(X_2; Y| X_1), and more importantly, can be inferred from access to only labeled unimodal data p(x_1,y) and p(x_2,y). Unfortunately, S is impossible to compute via equation (<ref>) when we do not have access to the full joint distribution p, since the first term I_p(X_1, X_2;Y) is unknown.
Instead, we will aim to provide lower and upper bounds in the form S̲ ≤ S ≤ S̅, so that we can have a minimum and maximum estimate on what synergy could be. Crucially, S̲ and S̅ should depend only on 𝒟_1, 𝒟_2, and 𝒟_M in the multimodal semi-supervised setting.
§.§ Lower bound on synergy via redundancy (Theorem <ref>)
We first restate Theorem <ref> from the main text to obtain our first lower bound linking synergy to redundancy:
(Lower-bound on synergy via redundancy, same as Theorem <ref>) We can relate S to R as follows
S̲_R = R - I_p(X_1;X_2) + min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) ≤ S
where Δ_p_1,2,12 = { r ∈Δ: r(x_1,x_2)=p(x_1,x_2), r(x_i,y)=p(x_i,y) }. min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) is a max-entropy convex optimization problem with linear constraints, which can be solved exactly using standard convex solvers.
By consistency equation (<ref>) S = R - I_p(X_1;X_2;Y) = R - I_p(X_1;X_2) + I_p(X_1;X_2|Y). This means that lower bounding the synergy is the same as obtaining a lower bound on the mutual information I_p(X_1,X_2|Y), since R and I_p(X_1;X_2) can be computed exactly based on p(x_1, y), p(x_2, y), and p(x_1,x_2). To lower bound I_p(X_1,X_2|Y), we consider minimizing it subject to the marginal constraints with p, which gives
min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) = min_r ∈Δ_p_1,2,12 H_r(X_1) - I_r(X_1;Y) - H_r(X_1|X_2,Y)
= H_p(X_1) - I_p(X_1;Y) - max_r ∈Δ_p_1,12 H_r(X_1|X_2,Y)
where in the last line the p_2 constraint is removed since H_r(X_1|X_2,Y) is fixed with respect to p(x_2,y). To solve max_r ∈Δ_p_1,12 H_r(X_1|X_2,Y), we observe that it is also a concave maximization problem with linear constraints. When 𝒳_i and 𝒴 are small and discrete, we can represent all valid distributions r(x_1,x_2,y) as a set of tensors R of shape |𝒳_1| × |𝒳_2| × |𝒴| with each entry representing R[i,j,k] = p(X_1=i,X_2=j,Y=k). The problem then boils down to optimizing over valid tensors R ∈Δ_p_1,12 that match the marginals p(x_1,y) and p(x_1,x_2). Given a tensor R representing r, our objective is the concave function H_r(X_1 | X_2, Y) which we rewrite as a KL-divergence log |𝒳_1| - KL(r||r̃) using an auxiliary distribution r̃ = r(x_2,y) ·1/|𝒳_1| and solve it exactly using convex programming with linear constraints:
min_R,R̃ KL(R||R̃), s.t. R̃(x_1, x_2, y) = R(x_2,y) / |𝒳_1|,
∑_x_2 R = p(x_1,y), ∑_y R = p(x_1,x_2), R ≥ 0, ∑_x_1,x_2,y R = 1.
with marginal constraints R ∈Δ_p_1,12 enforced through linear constraints on tensor R. Plugging the optimized r^* into (<ref>) yields the desired lower bound S̲_R = R - I_p(X_1;X_2) + I_r^*(X_1;X_2|Y).
§.§ Lower bound on synergy via disagreement (Theorem <ref>)
We first restate some notation and definitions from the main text for completeness. The key insight behind Theorem <ref>, a relationship between disagreement and synergy, is that while labeled multimodal data is unavailable, the outputs of unimodal classifiers may be compared against each other. Let δ_𝒴 = { r ∈ℝ_+^|𝒴| : ||r||_1 = 1 } be the probability simplex over labels 𝒴. Consider the set of unimodal classifiers ℱ_i ∋ f_i: 𝒳_i →δ_𝒴 and multimodal classifiers ℱ_M ∋ f_M: 𝒳_1 ×𝒳_2 →δ_𝒴.
(Unimodal and multimodal loss) The loss of a given unimodal classifier f_i ∈ℱ_i is given by L(f_i) = 𝔼_p(x_i,y)[ ℓ( f_i(x_i), y) ] for a loss function over the label space ℓ: 𝒴×𝒴→ℝ^≥0. We denote the same for multimodal classifier f_M ∈ℱ_M, with a slight abuse of notation L(f_M) = 𝔼_p(x_1,x_2,y)[ ℓ( f_M(x_1, x_2), y) ] for a loss function over the label space ℓ.
(Unimodal and multimodal accuracy) The accuracy of a given unimodal classifier f_i ∈ℱ_i is given by P_acc (f_i) = 𝔼_p [ 1[ f_i(x_i) = y ] ]. We denote the same for multimodal classifier f_M ∈ℱ_M, with a slight abuse of notation P_acc (f_M) = 𝔼_p [ 1[ f_M(x_1, x_2) = y ] ].
An unimodal classifier f_i^* is Bayes-optimal (or simply optimal) with respect to a loss function L if L(f_i^*) ≤ L(f'_i) for all f'_i ∈ℱ_i. Similarly, a multimodal classifier f_M^* is optimal with respect to loss L if L(f_M^*) ≤ L(f'_M) for all f'_M ∈ℱ_M.
Bayes optimality can also be defined with respect to accuracy, if P_acc (f_i^*) ≥ P_acc (f'_i) for all f'_i ∈ℱ_i for unimodal classifiers, or if P_acc (f_M^*) ≥ P_acc (f'_M) for all f'_M ∈ℱ_M for multimodal classifiers.
The crux of our method is to establish a connection between modality disagreement and a lower bound on synergy.
(Modality disagreement) Given X_1, X_2, and a target Y, as well as unimodal classifiers f_1 and f_2, we define modality disagreement as α(f_1,f_2) = 𝔼_p(x_1,x_2) [d(f_1,f_2)] where d: 𝒴×𝒴→ℝ^≥0 is a distance function in label space scoring the disagreement of f_1 and f_2's predictions,
where the distance function d must satisfy some common distance properties, following <cit.>:
(Relaxed triangle inequality) For the distance function d: 𝒴×𝒴→ℝ^≥0 in label space scoring the disagreement of f_1 and f_2's predictions, there exists c_d ≥ 1 such that
∀ŷ_1, ŷ_2, ŷ_3 ∈𝒴̂, d(ŷ_1, ŷ_2) ≤ c_d ( d(ŷ_1, ŷ_3) + d(ŷ_3, ŷ_2) ).
(Inverse Lipschitz condition) For the function d, it holds that for all f,
𝔼 [d(f(x_1,x_2), f^*(x_1,x_2))] ≤ |L(f)- L(f^*)|
where f^* is the Bayes optimal multimodal classifier with respect to loss L, and
𝔼 [d(f_i(x_i), f_i^*(x_i))] ≤ |L(f_i)- L(f_i^*)|
where f_i^* is the Bayes optimal unimodal classifier with respect to loss L.
(Classifier optimality) For any unimodal classifiers f_1,f_2 in comparison to the Bayes' optimal unimodal classifiers f_1^*,f_2^*, there exists constants ϵ_1,ϵ_2>0 such that
| L(f_1) - L(f_1^*) |^2 ≤ϵ_1, | L(f_2) - L(f_2^*) |^2 ≤ϵ_2
We now restate Theorem <ref> from the main text, obtaining S̲_U, our second lower bound on synergy, which links synergy to disagreement:
(Lower-bound on synergy via disagreement, same as Theorem <ref>) We can relate synergy S and uniqueness U to modality disagreement α(f_1,f_2) of optimal unimodal classifiers f_1,f_2 as follows:
S̲_U = α(f_1,f_2) · c - max(U_1,U_2) ≤ S
for some constant c depending on the label dimension |𝒴| and choice of label distance function d.
Theorem <ref> implies that if there is substantial disagreement between the unimodal classifiers f_1 and f_2, it must be due to the presence of unique or synergistic information. If uniqueness is small, then disagreement must be accounted for by the presence of synergy, which yields a lower bound.
The first part of the proof is due to an intermediate result by <cit.>, which studies how multi-view agreement can help train better multiview classifiers. We restate the key proof ideas here for completeness. The first step is to relate I_p(X_2;Y|X_1) to | L(f_1^*) - L(f^*) |^2, the difference in errors between the Bayes' optimal unimodal classifier f_1^* with the Bayes' optimal multimodal classifier f^* for some appropriate loss function L on the label space:
| L(f_1^*) - L(f^*) |^2 = | 𝔼_X 𝔼_Y|X_1,X_2ℓ (f^*(x_1,x_2), y) - 𝔼_X 𝔼_Y|X_1ℓ (f^*(x_1,x_2), y) |^2
≤ | 𝔼_Y|X_1,X_2ℓ (f^*(x_1,x_2), y) - 𝔼_Y|X_1ℓ (f^*(x_1,x_2), y) |^2
≤KL (p(y|x_1,x_2), p(y|x_1) )
≤𝔼_X KL (p(y|x_1,x_2), p(y|x_1) )
= I_p(X_2;Y|X_1),
where we used Pinsker's inequality in (<ref>) and Jensen's inequality in (<ref>). Symmetrically, | L(f_2^*) - L(f^*) |^2 ≤ I_p(X_1;Y|X_2), and via the triangle inequality through the Bayes' optimal multimodal classifier f^* and the inverse Lipschitz condition we obtain
𝔼_p(x_1,x_2) [d(f_1^*,f_2^*)] ≤𝔼_p(x_1,x_2) [d(f_1^*,f^*)] + 𝔼_p(x_1,x_2) [d(f^*,f_2^*)]
≤ | L(f_1^*) - L(f^*) |^2 + | L(f_2^*) - L(f^*) |^2
≤ I_p(X_2;Y|X_1) + I_p(X_1;Y|X_2).
Next, we relate disagreement α(f_1,f_2) to I_p(X_2;Y|X_1) and I_p(X_1;Y|X_2) via the triangle inequality through the Bayes' optimal unimodal classifiers f_1^* and f_2^*:
α(f_1,f_2) = 𝔼_p(x_1,x_2) [d(f_1,f_2)]
≤ c_d ( 𝔼_p(x_1,x_2) [d(f_1,f_1^*)] + 𝔼_p(x_1,x_2) [d(f_1^*,f_2^*)] + 𝔼_p(x_1,x_2) [d(f_2^*,f_2)] )
≤ c_d ( ϵ_1' + I_p(X_2;Y|X_1) + I_p(X_1;Y|X_2) + ϵ_2' )
≤ 2 c_d (max(I_p(X_1;Y|X_2), I_p(X_2;Y|X_1)) + max(ϵ_1', ϵ_2'))
where we used the classifier optimality assumption for the unimodal classifiers f_1, f_2 in (<ref>). Finally, we use the PID consistency equations relating U and S in (<ref>)-(<ref>) to complete the proof:
α(f_1,f_2) ≤ 2 c_d (max(I_p(X_1;Y|X_2), I_p(X_2;Y|X_1)) + max(ϵ_1', ϵ_2'))
= 2 c_d (max(U_1+S, U_2+S) + max(ϵ_1', ϵ_2'))
= 2 c_d (S + max(U_1, U_2) + max(ϵ_1', ϵ_2')),
In practice, setting f_1 and f_2 as neural network function approximators that can achieve the Bayes' optimal risk <cit.> results in max(ϵ_1', ϵ_2') = 0, and rearranging gives us the desired inequality.
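As a small illustration of how this bound is used in practice, the sketch below (our own code, not the authors') computes the modality disagreement α(f_1,f_2) from the two unimodal classifiers' predicted label distributions on unlabeled paired data and combines it with externally supplied estimates of U_1 and U_2; the squared label distance and the constant c are left as inputs since their exact values depend on the label space.

```python
import numpy as np

def disagreement_lower_bound(probs1, probs2, u1, u2, c=1.0):
    """Sketch of the disagreement-based lower bound on synergy.

    probs1, probs2: (N, |Y|) predicted label distributions of the two unimodal
    classifiers on the same N unlabeled multimodal examples.
    u1, u2: estimates of the unique information U1, U2 (e.g. from the q* program above).
    c: constant depending on |Y| and on the label distance d (taken as given here).
    """
    # modality disagreement alpha(f1, f2) with d(y1, y2) = squared distance
    alpha = float(np.mean(np.sum((probs1 - probs2) ** 2, axis=1)))
    return alpha * c - max(u1, u2)
```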
§.§ Proof of NP-hardness (Theorem <ref>)
Our proof is based on a reduction from the restricted timetable problem, a well-known scheduling problem closely related to constrained edge coloring in bipartite graphs.
Our proof description proceeds along 4 steps.
* Description of our problem.
* How the minimum entropy objective can engineer “classification” problems using a technique from <cit.>.
* Description of the RTT problem of <cit.>, how to visualize RTT as a bipartite edge coloring problem, and a simple variant we call Q-RTT which RTT reduces to.
* Polynomial reduction of Q-RTT to our problem.
§.§.§ Formal description of our problem
Recall that our problem was
min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y)
where
Δ_p_1,2,12 = { r ∈Δ: r(x_1,x_2)=p(x_1,x_2), r(x_i,y)=p(x_i,y) }. [Strictly speaking, the marginals p(x_1, x_2) and p(x_i, y) ought to be rational. This is not overly restrictive, since in practice these marginals often correspond to empirical distributions which would naturally be rational.]
Our goal is to find the minimum-entropy distribution over 𝒳_1 ×𝒳_2 ×𝒴 where the pairwise marginals over (X_1, X_2), (X_1, Y) and (X_2, Y) are specified as part of the problem. Observe that this description is symmetrical, X_i and Y could be swapped without loss of generality.
§.§.§ Warm up: using the min-entropy objective to mimic multiclass classification
We first note the strong similarity of our min-entropy problem to the classic min-entropy coupling problem in two variables, where the goal is to find the min-entropy joint distribution over 𝒳×𝒴 given fixed marginal distributions p(x) and p(y). This was shown to be an NP-hard problem which has found many practical applications in recent years. An approximate solution up to 1 bit can be found in polynomial time (and is in fact the same approximation we give to our problem). Our NP-hardness proof has a similar flavor to that of <cit.>, which is based on a reduction from the classic subset sum problem, exploiting the min-entropy objective to enforce discrete choices.
Subset sum There are d items with value c_1 … c_d ≥ 0, which we assume WLOG to be normalized such that ∑_i^d c_i = 1.
Our target sum is 0 ≤ T ≤ 1. The goal is to find if some subset 𝒮⊆ [d] exists such that ∑_i ∈𝒮 c_i = T.
Reduction from subset sum to min-entropy coupling <cit.> Let 𝒳 be the d items and 𝒴 be binary, indicating whether the item was chosen. Our joint distribution is of size |𝒳| × |𝒴|. We set the following constraints on marginals.
* p(x_i) = c_i for all i, (row constraints)
* p(include)=T, p(omit)=1-T, (column constraints)
Constraints (i) split the value of each item additively into nonnegative components to be included and not included from our chosen subset, while (ii) enforces that the items included sum to T. Observe that the min-entropy objective H(X,Y) = H(Y|X)+H(X), which is solely dependent on H(Y|X) since H(X) is a constant given marginal constraints on X. Thus, H(Y|X) is nonnegative and is only equal to 0 if and only if Y is deterministic given X, i.e., r(x_i, include) = 0 or r(x_i, omit) = 0. If our subset sum problem has a solution, then this instantiation of the min-entropy coupling problem would return a deterministic solution with H(Y|X)=0, which in turn corresponds to a solution in subset sum. Conversely, if subset sum has no solution, then our min-entropy coupling problem is either infeasible OR gives solutions where H(Y|X) > 0 strictly, i.e., Y|X is non-deterministic, which we can detect and report.
Relationship to our problem
Observe that our joint entropy objective may be decomposed
H_r(X_1, X_2, Y) = H_r(Y|X_1, X_2) + H_r(X_1, X_2).
Given that p(x_1, x_2) is fixed under Δ_p_1,2,12, our objective is equivalent to minimizing H_r(Y| X_1, X_2).
Similar to before, we know that H_r(Y| X_1, X_2) is nonnegative and equal to zero if and only if Y is deterministic given (X_1, X_2).
Intuitively, we can use 𝒳_1, 𝒳_2 to represent vertices in a bipartite graph, such that (X_1, X_2) are edges (which may or may not exist), and 𝒴 as colors for the edges. Then, the marginal constraints for p(x_1, x_2) could be used alongside the min-entropy objective to ensure that each edge has exactly one color. The marginal constraints p(x_1, y) and p(x_2, y) tell us (roughly speaking) the number of edges of each color that is adjacent to vertices in 𝒳_1 and 𝒳_2.
However, this insight alone is not enough; first, edge coloring problems in bipartite graphs (e.g., colorings in regular bipartite graphs) can be solved in polynomial time, so we need a more difficult problem. Second, we need an appropriate choice of marginals for p(x_i, y) that does not immediately `reveal' the solution. Our proof uses a reduction from the restricted timetable problem, one of the most primitive scheduling problems available (and closely related to edge coloring or multicommodity network flow).
§.§.§ Restricted Timetable Problem (RTT)
The restricted timetable (RTT) problem was introduced by <cit.>, and has to do with how to schedule teachers to classes they must teach. It comprises the following
* A collection of { T_1, …, T_n }, where T_i ⊆ [3]. These represent n teachers, each of which is available for the hours given in T_i.
* m students, each of which is available at any of the 3 hours
* A binary matrix R ∈{ 0, 1}^n × m. R_ij = 1 if teacher i is required to teach class j, and 0 otherwise. Since R_ij is binary, each class is taught by a given teacher at most once.
* Each teacher is tight, i.e., |T_i| = ∑_j=1^m R_ij. That is, every teacher must teach whenever they are available.
Suppose there are exactly 3 hours a day. The problem is to determine if there exists a meeting function
f: [n] × [m] × [3] →{ 0, 1},
where our goal is to have f(i,j,h) = 1 if and only if teacher i teaches class j at the h-th hour. We require the following conditions in our meeting function:
* f(i,j,h)=1 ⟹ h ∈ T_i. This implies that teachers are only teaching in the hours they are available.
* ∑_h ∈ [3] f(i,j,h) = R_ij for all i ∈ [n], j∈[m]. This ensures that every class gets the teaching they are required, as specified by R.
* ∑_i ∈ [n] f(i,j,h) ≤ 1 for all j ∈ [m] and h ∈ [3]. This ensures no class is taught by more than one teacher at once.
* ∑_j ∈ [m] f(i,j,h) ≤ 1 for all i ∈ [n] and h ∈ [3]. This ensures no teacher is teaching more than one class simultaneously.
<cit.> showed that RTT is NP-hard via a clever reduction from 3-SAT. Our strategy is to reduce RTT to our problem.
Viewing RTT through the lens of bipartite edge coloring
RTT can be visualized as a variant of constrained edge coloring in bipartite graphs (Figure <ref>). The teachers and classes are the two different sets of vertices, while R gives the adjacency structure. There are 3 colors available, corresponding to hours in a day. The task is to color the edges of the graph with these 3 colors such that
* No two edges of the same color are adjacent. This ensures students and classes are at most teaching/taking one session at any given hour (condition 3 and 4)
* Edges adjacent to teacher i are only allowed colors in T_i. This ensures teachers are only teaching in available hours (condition 1)
If every edge is colored while obeying the above conditions, then it follows from the tightness of teachers (in the definition of RTT) that every class is assigned their required lessons (condition 2). The decision version of the problem is to return if such a coloring is possible.
Time Constrained RTT (Q-RTT) A variant of RTT that will be useful is one where we impose restrictions on the number of classes being taught at each hour. We call this Q-RTT, where Q = (q_1,q_2,q_3) ∈ℤ^3. Q-RTT returns true if, in addition to the usual RTT conditions, we require the meeting function to satisfy
∑_i ∈ [n],j ∈ [m] f(i,j,h) = q_h.
That is, the total number of hours taught by teachers in hour h is exactly q_h.
From the perspective of edge coloring, Q-RTT simply imposes an additional restriction on the total number of edges of each color, i.e., there are q_k edges of color k for each k∈[3].
Obviously, RTT can be Cook reduced to Q-RTT: since there are only 3 hours and a total of g = ∑_i ∈ [n],j ∈ [m] R_ij lessons to be taught, there are at most 𝒪(g^2) ways of splitting the required number of lessons among the 3 hours. Thus, we can solve RTT by making at most 𝒪(g^2) calls to Q-RTT. This is polynomial in the size of RTT, and we conclude Q-RTT is NP-hard.
§.§.§ Reduction of Q-RTT to our problem
We will reduce Q-RTT to our problem.
Let α = 1/(∑_i,j R_ij + 3m ), where 1/α should be seen as a normalizing constant given by the number of edges in a bipartite graph. One should think of α as an indicator of the boolean TRUE and 0 as FALSE.
We use the following construction
* Let 𝒳_1 = [n] ∪𝒵, where 𝒵 = {Z_1, Z_2, Z_3}. From a bipartite graph interpretation, these form one set of vertices that we will match to classes. Z_1, Z_2, Z_3 are “holding rooms”, one for each of the 3 hours. Holding rooms are like teachers whose classes can be assigned in order to pass the time. They will not fulfill any constraints on R, but they can accommodate multiple classes at once.
We will explain the importance of these holding rooms later.
* Let 𝒳_2 = [m]. These form the other set of vertices, one for each class.
* Let 𝒴 = [3] ∪{ 0 }. 1, 2, and 3 are the 3 distinct hours, corresponding to edge colors. 0 is a special “null” color which will only be used when coloring edges adjacent to the holding rooms.
* Let p(i, j) = α· R_ij for all i ∈ [n], j ∈ [m], and p(i, j) = α for all i ∈𝒵, j ∈ [m]. Essentially, there is an edge between a teacher and a class if R dictates it. There are also always edges from every holding room to each class.
* For i ∈ [n], set p(i, ·, h) = α if h ∈ T_i, and 0 otherwise. For Z_i ∈𝒵, we set p(Z_i, ·, h) = α· q_i if h = 0, p(Z_i, ·, h) = α· (m-q_i) if h = i, and 0 otherwise.
In other words, at hour h, when a class is not assigned to some teacher (which would contribute to q_h), it must be placed in holding room Z_h.
* Let p(·, j, h) = α for h ∈ [3], and p(·, j, 0) = α·∑_i ∈ [n] R_i, j. The former constraint means that for each of the 3 hours, the class must be taking some lesson with a teacher OR in the holding room. The second constraint assigns the special “null” value to the holding rooms which were not used by that class.
A solution to our construction with 0 conditional entropy implies a valid solution to Q-RTT Suppose that our construction returns a distribution r such that every entry r(x_1,x_2,y) is either α or 0.
We claim that the meeting function f(i,j,h)=1 if r(i,j,h)=α and 0 otherwise solves Q-RTT.
* Teachers are only teaching in the hours they are available, because of our marginal constraint on p(i,·, h).
* Every class gets the teaching they need. This follows from the fact that teachers are tight and the marginal constraint p(i,·,h), which forces teachers to be teaching whenever they can. The students are getting the lessons from the right teachers because of the marginal constraint on p(i, j, ·), since teachers who are not supposed to teach a class have those marginal values set to 0.
* No class is taught by more than one teacher at once. This follows from marginal constraint p(·, j, h). For each of the hours, a class is with either a single teacher or the holding room.
* No teacher is teaching more than one class simultaneously. This holds again from our marginal constraint on p(i,·, h).
* Lastly, the total number of lessons (not in holding rooms) held in each hour h is q_h, as required by Q-RTT. To see why, we consider each color (hour). Each color (excluding the null color) is used exactly m times by virtue of p(·, j, h). Some of these uses are in holding rooms, others are with teachers. The former (over all classes) number m-q_h because of our constraint on p(Z_h, ·, h), which means that exactly q_h lessons are taught by teachers in hour h, as required.
A valid solution to Q-RTT implies a solution to our construction with 0 conditional entropy Given a solution to Q-RTT, we recover a candidate solution to our construction in a natural way. If teacher i is teaching class j in hour h, then color edge ij with color h, i.e., r(i,j,h)=α and r(i,j,h')=0 if h' ≠ h. Since in RTT each teacher and class can be assigned one lesson per hour at most, there will be no clashes with this assignment. For all other i ∈ [n], j∈[m] where R_ij=0, we assign r(i,j,·)=0. Now, we will also need to assign students to holding rooms. For h ∈ [3], we set r(Z_h, j, h) = α if class j was not assigned to any teacher in hour h. If class j was assigned some teacher in hour h, then r(Z_h, j, 0)=α, i.e., we give it the special null color. All other entries are given a value of 0. We can verify
* r is a valid probability distribution. The nonnegativity of r follows from the fact that α > 0 strictly. We need to check that r sums to 1. We break this down into two cases based on whether the first argument of r is some Z_h or i.
In Case 1, we have
∑_i ∈ [n], h ∈ [3] ∪{ 0 }, j ∈ [m] r(i,j,h)
= ∑_i ∈ [n], h ∈ [3], j ∈ [m] r(i,j,h)
= α·∑_i ∈ [n], j ∈ [m] R_ij,
where the first line follows from the fact that we never color a teacher-class edge with the null color, and the second line is because every class gets its teaching requirements satisfied. In Case 2, we know that by definition every class is matched to every holding room and assigned either the null color or that room's color, hence
∑_i ∈{Z_1, Z_2, Z_3}, h ∈ [3] ∪{ 0 }, j ∈ [m] r(i, j, h)
= α· 3m
Summing them up, we have α·( 3m + ∑_i ∈ [n], j ∈ [m] R_ij) = 1 (by our definition of α).
* This r distribution has only entries in α or 0. This follows by definition.
* This r distribution has minimum conditional entropy. For a fixed i,j, r(i,j,·) is either α or 0. That is, Y is deterministic given X_1,X_2, hence H(Y |X_1, X_2)=0.
* All 3 marginal constraints in our construction are obeyed. We check them in turn.
* Marginal constraint r(i, j) = p(i, j). When i ∈ [n]: (i) when R_ij=1, exactly one hour h is assigned to teacher i and class j, hence r(i,j)=α = p(i,j) as required; (ii) when R_ij=0, r(i,j) = 0 = p(i,j) as specified. Now when i ∈{Z_1, Z_2, Z_3 }, we have r(i,j)=α = p(i,j) since every holding room is either assigned its color to a class, or assigned the special null color.
* Marginal constraint r(i, h)=p(i,h). When i ∈ [n], this follows directly from tightness.
Similarly, when i ∈{ Z_1, Z_2 ,Z_3}, we have by definition of Q-RTT the assignments to holding rooms equal to m - q_h for hour h, and consequently, q_h null colors adjacent to Z_h as required.
* Marginal constraint r(j,h)=p(j,h). For every h ∈ [3], the class is assigned either to a teacher or to a holding room, so this is equal to α as required. For h = 0, i.e., the null color, the color is used exactly ∑_i ∈ [n] R_ij times (once for each hour in which class j is with a teacher, leaving that hour's holding room unused), giving r(j,0)=α·∑_i ∈ [n] R_ij as required.
Thus, if RTT returns TRUE, our construction will also return a solution with entries in { 0, α}, and vice versa.
Corollary The decision problem of whether there exists a distribution in r ∈Δ_p_1,2,12 such that H(Y| X_1, X_2) = 0 is NP-complete. This follows because the problem is in NP since checking if Y is deterministic (i.e., H(Y|X_1, X_2) = 0) can be done in polynomial time, while NP-hardness follows from the same argument as above.
§.§ Upper bound on synergy (Theorem <ref>)
We begin by restating Theorem <ref> from the main text:
(Upper-bound on synergy, same as Theorem <ref>).
S ≤ H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_12,y H_r(X_1, X_2, Y) - max_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y) = S̅
where Δ_p_12,y = { r ∈Δ : r(x_1,x_2)=p(x_1,x_2), r(y)=p(y) }.
Recall that this upper bound boils down to finding max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y). We have
max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y) = max_r ∈Δ_p_1,2,12{ H_r(X_1, X_2) + H_r(Y) - H_r(X_1, X_2, Y) }
= H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y)
≤ H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_12,y H_r(X_1, X_2, Y)
where the first two lines are by definition. The last line follows since Δ_p_12,y is a superset of Δ_p_1,2,12, which implies that minimizing over it yields a no larger objective.
In practice, we use the slightly tighter bound which maximizes over all the pairwise marginals,
max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y) ≤ H_p(X_1, X_2) + H_p(Y) - max{ min_r ∈Δ_p_12,y H_r(X_1, X_2, Y), min_r ∈Δ_p_1,x_2 H_r(X_1, X_2, Y), min_r ∈Δ_p_2,x_1 H_r(X_1, X_2, Y) }.
Estimating S̅ using min-entropy couplings
We only show how to compute min_r ∈Δ_p_12,y H_r(X_1, X_2, Y), since the other variants can be computed in the same manner via symmetry.
We recognize that by treating (X_1, X_2)=X as a single variable, we recover the classic min-entropy coupling over X and Y, which is still NP-hard but admits good approximations <cit.>.
There are many methods to estimate such a coupling, for example <cit.> give a greedy algorithm running in linear-logarithmic time, which was further proven by <cit.> to be a 1-bit approximation of the minimum coupling [This is a special case when there are 2 modalities. For more modalities, the bounds will depend on the sizes and number of signals.]. Another line of work was by <cit.>, which constructs an appropriate coupling and shows that it is optimal to within 1 bit of the lower bound H(p(x_1,x_2) ∧ p(y)), where ∧ is the greatest-lower-bound operator, which they showed in <cit.> can be computed in linear-logarithmic time. We very briefly describe this method; more details may be found in <cit.> directly.
Remark A very recent paper by <cit.> shows that one can get an approximation tighter than 1 bit. We leave the incorporation of these more advanced methods as future work.
Without loss of generality, suppose that 𝒳 and 𝒴 are ordered and indexed such that p(x) and p(y) are sorted in non-increasing order of the marginal constraints, i.e., p(X=x_i) ≥ p(X=x_j) for all i ≤ j. We also assume WLOG that the supports of X and Y are of the same size n, if they are not, then pad the smaller one with dummy values and introduce marginals that constrain these values to never occur (and set n accordingly if needed). For simplicity, we will just refer to p_i and q_j for the distributions of p(X=x_i) and p(Y=y_j) respectively.
Given 2 distributions p, q we say that p is majorized by q, written as p ≼ q if and only if
∑_i=1^k p_i ≤∑_i=1^k q_i for all k ∈ 1 … n
As <cit.> point out, there is a strong link between majorization and Schur-convex functions; in particular, if p ≼ q, then we have H(p) ≥ H(q).
Indeed, if we treat ≽ as a partial order and consider the set
𝒫^n = { p = (p_1, …, p_n) : p_i ∈ [0, 1], ∑_i^n p_i = 1, p_i≥ p_i+1}
as the set of finite (ordered) distributions with support size n and non-increasing probabilities, then we obtain a lattice with a unique greatest lower bound (∧) and least upper bound (∨). Then, <cit.> show that p ∧ q can be computed recursively as p ∧ q = α(p, q) = (a_1, …, a_n) where
a_i = min{∑_j=1^i p_j, ∑_j=1^i q_j} - ∑_j=1^i-1 a_j
It was shown by <cit.> that any coupling satisfying the marginal constraints given by p and q, i.e.,
M ∈ C(p, q) = { M = m_ij: ∑_j m_ij = p_i, ∑_i m_ij = q_j}
has entropy H(M) ≥ H(p ∧ q). In particular, this includes the min-entropy one.
Since we only need the optimal value of such a coupling and not the actual coupling per se, we can plug the value of H(p ∧ q) into the minimization term (<ref>), which yields an upper bound for max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y), and hence an upper bound on the synergy S itself.
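The recursion above is straightforward to implement; the following sketch (our own code, working in bits and assuming finite discrete supports) computes H(p ∧ q), the quantity plugged into the minimization term. Here p would be the flattened joint p(x_1,x_2) and q the label marginal p(y).

```python
import numpy as np

def glb_entropy(p, q):
    """Entropy (in bits) of the greatest lower bound p ∧ q of two distributions,
    following the recursion a_i = min(cumsum(p)_i, cumsum(q)_i) - sum_{j<i} a_j."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]    # non-increasing order
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    n = max(len(p), len(q))
    p = np.pad(p, (0, n - len(p)))                   # pad with zero-probability dummies
    q = np.pad(q, (0, n - len(q)))
    cum = np.minimum(np.cumsum(p), np.cumsum(q))     # partial sums of p ∧ q
    a = np.diff(np.concatenate(([0.0], cum)))
    a = a[a > 0]
    return float(-np.sum(a * np.log2(a)))
```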
§ EXPERIMENTAL DETAILS
§.§ Verifying lower and upper bounds
Synthetically generated datasets: To test our derived bounds on synthetic data, we randomly sampled 100,000 distributions of {X_1, X_2, Y} to calculate their bounds and compare with their actual synergy values. We set X_1, X_2, and Y as random binary values, so each distribution can be represented as a size 8 vector of randomly sampled entries that sum up to 1.
Results: We calculated the lower bound via redundancy (S̲_R), the lower bound via disagreement (S̲_U), and the upper bound (S̅) of all distributions and plotted them against the actual synergy value (Figure <ref>). We define a distribution to be on the boundary if its lower/upper bound is within 10% difference from its actual synergy value. We conducted least mean-square-error fitting on the distributions close to the boundary. We plot actual synergy against S̲_R in Figure <ref> (left), and find that it again tracks a lower bound of synergy. In fact, we can do better and fit a linear trend y=1.095x on the distributions along the margin (RMSE =0.0013).
We also plot actual synergy against the computed S̲_U in Figure <ref> (middle).
As expected, the lower bound closely tracks actual synergy. Similarly, we can again fit a linear model on the points along the boundary, obtaining y=1.098x with a RMSE of 0.0075 (see this line in Figure <ref> (middle)).
Finally, we plot actual synergy against the estimated upper bound S̅ in Figure <ref> (right). Again, we find that the upper bound consistently tracks the highest attainable synergy - we can fit a single constant y=x-0.2 to obtain an RMSE of 0.0022 (see this line in Figure <ref> (right)). This implies that our bound enables both accurate comparative analysis of relative synergy across different datasets, and precise estimation of absolute synergy.
Real-world datasets: We also use the large collection of real-world datasets in MultiBench <cit.>: (1) MOSI: video-based sentiment analysis <cit.>,
(2) MOSEI: video-based sentiment and emotion analysis <cit.>, (3) MUStARD: video-based sarcasm detection <cit.>, (4) UR-FUNNY: video-based humor detection <cit.>, (5) MIMIC: mortality and disease prediction from tabular patient data and medical sensors <cit.>, and (6) ENRICO: classification of mobile user interfaces and screenshots <cit.>.
While the previous bitwise datasets with small and discrete support yield exact lower and upper bounds, this new setting with high-dimensional continuous modalities requires the approximation of disagreement and information-theoretic quantities: we train unimodal neural network classifiers f̂_θ(y|x_1) and f̂_θ(y|x_2) to estimate disagreement, and we cluster representations of X_i to approximate the continuous modalities by discrete distributions with finite support to compute lower and upper bounds.
Implementation details: We first apply PCA to reduce the dimension of multimodal data. For the test split, we use unsupervised clustering to generate 20 clusters. We obtain a clustered version of the original dataset 𝒟={(x_1,x_2,y)} as 𝒟_cluster={(c_1,c_2,y)} where c_i∈{1,…,20} is the ID of the cluster that x_i belongs to. In our experiments, where 𝒴 is typically a classification task, we set the unimodal classifiers f_1 = p̂(y|x_1) and f_2 = p̂(y|x_2) as the Bayes optimal classifiers for multiclass classification tasks.
For classification, 𝒴 is the set of k-dimensional 1-hot vectors. Given two logits ŷ_1, ŷ_2 obtained from x_1, x_2 respectively, define d(ŷ_1, ŷ_2) = (ŷ_1-ŷ_2)^2. We have that c_d=1, and ϵ_1 = |L(f_1) - L(f_1^*)|^2 = 0 and ϵ_2 = |L(f_2) - L(f_2^*)|^2 = 0 for well-trained neural network unimodal classifiers f_1 and f_2 for Theorem <ref>. For datasets with 3 modalities, we perform the experiments separately for each of the 3 modality pairs, before taking an average over the 3 modality pairs. Extending the definitions of redundancy, uniqueness, and synergy, as well as our derived bounds on synergy for 3 or more modalities is an important open question for future work.
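A minimal sketch of the discretization step described above is given below; it is our own illustration, and the PCA dimension and random seeds are placeholders rather than the values used in the experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def discretize_modality(features, n_components=32, n_clusters=20, seed=0):
    """Replace each high-dimensional modality X_i by a cluster ID c_i in {0, ..., n_clusters-1}."""
    reduced = PCA(n_components=n_components, random_state=seed).fit_transform(features)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(reduced)

# the clustered test split {(c1, c2, y)} then yields empirical tables p(c1, y), p(c2, y)
# and p(c1, c2) that can be fed to the convex programs and coupling bounds above
```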
§.§ Relationships between agreement, disagreement, and interactions
1. The relationship between redundancy and synergy: We give some example distributions to analyze when the lower bound based on redundancy is high or low. The bound is high for distributions where X_1 and X_2 are independent, but Y=1 sets X_1 ≠ X_2 to increase their dependence (i.e., agreement XOR distribution in Table <ref>). Since X_1 and X_2 are independent but become dependent given Y, I(X_1;X_2;Y) is negative, and the bound is tight: S̲_R = 1 ≤ 1 = S. Visual Question Answering 2.0 <cit.> falls under this category, with S = 4.92, R=0.79, where the image and question are independent (some questions like `what is the color of the object' or `how many people are there' can be asked for many images), but the answer connects the two modalities, resulting in dependence given the label. As expected, the estimated lower bound captures this agreement synergy: S̲_R = 4.03 ≤ 4.92 = S.
Conversely, the bound is low for Table <ref> with the probability mass distributed uniformly only when y=x_1=x_2 and 0 elsewhere. As a result, X_1 is always equal to X_2 (perfect dependence), and yet Y perfectly explains away the dependence between X_1 and X_2 so I(X_1;X_2|Y) = 0: S̲_R = 0 ≤ 0 = S.
Note that this is an example of perfect redundancy and zero synergy - for an example with synergy, refer back to disagreement XOR in Table <ref> - due to disagreement there is non-zero I(X_1;X_2) but the label explains some of the relationships between X_1 and X_2 so I(X_1;X_2|Y) < I(X_1;X_2): S̲_R = -0.3 ≤ 1 = S.
A real-world example is multimodal sentiment analysis from text, video, and audio of monologue videos on MOSEI, with R=0.26 and S=0.04, and as expected the lower bound is small: S̲_R = 0.01 ≤ 0.04 = S.
2. The relationship between disagreement and synergy: To give an intuition of the relationship between disagreement, uniqueness, and synergy, we use one illustrative example shown in Table <ref>, which we call disagreement XOR. We observe that there is maximum disagreement between marginals p(y|x_1) and p(y|x_2): the likelihood for y is high when y is the same bit as x_1, but reversed for x_2. Given both x_1 and x_2: y seems to take a `disagreement' XOR of the individual marginals, i.e. p(y|x_1,x_2) = p(y|x_1) XOR p(y|x_2), which indicates synergy (note that an exact XOR would imply perfect agreement and high synergy). The actual disagreement is 0.15, synergy is 0.16, and uniqueness is 0.02, indicating a very strong lower bound S̲_U = 0.13 ≤ 0.16 = S. A real-world equivalent dataset is MUStARD for sarcasm detection from video, audio, and text <cit.>, where the presence of sarcasm is often due to a contradiction between what is expressed in language and speech, so disagreement α=0.12 is the highest out of all the video datasets, giving a lower bound S̲_U = 0.11 ≤ 0.44 = S.
On the contrary, the lower bound is low when all disagreement is explained by uniqueness (e.g., y=x_1, Table <ref>), which results in S̲_U = 0 ≤ 0 = S (α and U cancel each other out). A real-world equivalent is MIMIC involving mortality and disease prediction from tabular patient data and time-series medical sensors <cit.>. Disagreement is high α=0.13 due to unique information U_1=0.25, so the lower bound informs us about the lack of synergy: S̲_U = -0.12 ≤ 0.02 = S.
Finally, the lower bound is loose when there is synergy without disagreement, such as agreement XOR (y=x_1 XOR x_2, Table <ref>) where the marginals p(y|x_i) are both uniform, but there is full synergy: S̲_U = 0 ≤ 1 = S. Real-world datasets which fall into agreement synergy include UR-FUNNY where there is low disagreement in predicting humor α=0.03, and relatively high synergy S=0.18, which results in a loose lower bound S̲_U = 0.01 ≤ 0.18 = S.
Figure: Comparing the qualities of the bounds when there is agreement and disagreement synergy. During agreement synergy, S̲_R is tight, S̲_U is loose, and S̅ is tight. For disagreement synergy, S̲_R is loose, S̲_U is tight, and S̅ is loose with respect to the true S.
3. On upper bounds for synergy: We also run experiments to obtain estimated upper bounds on synthetic and MultiBench datasets. The quality of the upper bound shows some intriguing relationships with that of the lower bounds. For distributions with perfect agreement synergy such as y = x_1 XOR x_2 (Table <ref>), S̅ = 1 ≥ 1 = S is really close to true synergy, S̲_R = 1 ≤ 1 = S is also tight, but S̲_U = 0 ≤ 1 = S is loose. For distributions with disagreement synergy (Table <ref>), S̅ = 0.52 ≥ 0.13 = S far exceeds actual synergy, S̲_R = -0.3 ≤ 1 = S is much lower than actual synergy, but S̲_U = 0.13 ≤ 0.16 = S is tight (see relationships in Figure <ref>).
Finally, while some upper bounds (e.g., MUStARD, MIMIC) are close to true S, some of the other examples in Table <ref> show bounds that are quite weak.
This could be because (i) there indeed exist high-synergy distributions that match 𝒟_i and 𝒟_M, but these are rare in the real world, or (ii) our approximation used in Theorem <ref> is mathematically loose. We leave these as open directions for future work.
§ APPLICATION 1: ESTIMATING MULTIMODAL PERFORMANCE FOR FUSION
Formally, we estimate performance via a combination of <cit.> and Fano's inequality <cit.>, which together yield tight bounds on performance as a function of the total information I_p({X_1,X_2}; Y). We restate Theorem <ref> from the main text:
Let P_acc(f_M^*) = 𝔼_p [ 1[ f_M^*(x_1,x_2) = y ] ] denote the accuracy of the Bayes' optimal multimodal model f_M^* (i.e., P_acc (f_M^*) ≥ P_acc (f'_M) for all f'_M ∈ℱ_M). We have that
2^(I_p({X_1,X_2}; Y)-H(Y)) ≤ P_acc(f_M^*) ≤ (I_p({X_1,X_2}; Y) + 1)/log |𝒴|,
where we can plug in R+U_1+U_2+S̲ ≤ I_p({X_1,X_2}; Y) ≤ R+U_1+U_2+S̅ to obtain lower P̲_acc(f_M^*) and upper P̅_acc(f_M^*) bounds on optimal multimodal performance.
We use the bound from <cit.>, where we define the Bayes' optimal classifier f_M^* as the one that, given x_1,x_2, outputs the y maximizing p(Y=y|x_1,x_2) over all y ∈𝒴. The probability that this classifier succeeds is max_y p(Y=y|x_1,x_2), which is 2^-H_∞(Y|X_1=x_1,X_2=x_2), where H_∞(Y|X_1,X_2) is the min-entropy of the random variable Y conditioned on X_1,X_2. Over all inputs (x_1,x_2), the probability of accuracy is
P_acc(f_M^*) = 𝔼_x_1,x_2[ 2^-H_∞(Y|X_1=x_1,X_2=x_2) ] ≥ 2^-𝔼_x_1,x_2[ H_∞(Y|X_1=x_1,X_2=x_2) ]
≥ 2^-𝔼_x_1,x_2[ H_p(Y|X_1=x_1,X_2=x_2) ] ≥ 2^-H_p(Y|X_1,X_2) = 2^(I_p({X_1,X_2}; Y)-H(Y)).
The upper bound is based on Fano's inequality <cit.>. Starting with H_p(Y|X_1,X_2) ≤ H(P_err) + P_err· log(|𝒴| - 1) and assuming that Y is uniform over 𝒴, we rearrange the inequality to obtain
P_acc(f_M^*) ≤ (H(Y) - H_p(Y|X_1,X_2) + log 2)/log |𝒴| = (I_p({X_1,X_2}; Y) + 1)/log |𝒴|.
Finally, we summarize estimated multimodal performance as the average between the estimated lower and upper bounds on performance: P̂_M = (P̲_acc(f_M^*) + P̅_acc(f_M^*))/2.
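Numerically, this estimate reduces to a few lines; the sketch below is our own helper (working in bits and clipping to [0, 1]), taking lower/upper estimates of the total information together with H(Y) and |𝒴| as inputs.

```python
import numpy as np

def multimodal_accuracy_estimate(info_lo, info_hi, h_y, n_classes):
    """Estimated lower/upper bounds on Bayes-optimal multimodal accuracy, and their midpoint.

    info_lo, info_hi: lower/upper estimates of I_p({X1,X2}; Y) in bits,
    e.g. R + U1 + U2 + S_lower and R + U1 + U2 + S_upper.
    h_y: entropy of the label in bits; n_classes: |Y|.
    """
    p_lo = min(1.0, 2.0 ** (info_lo - h_y))               # lower bound 2^(I - H(Y))
    p_hi = min(1.0, (info_hi + 1.0) / np.log2(n_classes)) # Fano-based upper bound
    return p_lo, p_hi, 0.5 * (p_lo + p_hi)
```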
Unimodal and multimodal performance: Table <ref> summarizes all final performance results for each dataset, spanning unimodal models and simple or complex multimodal fusion paradigms, where each type of model is represented by the most recent state-of-the-art method found in the literature.
§ APPLICATION 2: SELF-SUPERVISED MULTIMODAL LEARNING VIA DISAGREEMENT
§.§ Training procedure
We continuously pretrain MERLOT Reserve Base on the datasets before finetuning. The continuous pretraining procedure is similar to Contrastive Span Training, with the difference that we add extra loss terms that correspond to modality disagreement. The pretraining procedure of MERLOT Reserve minimizes a sum of 3 component losses,
ℒ = ℒ_text + ℒ_audio + ℒ_frame
where each of the component losses is a contrastive objective. Each of the objectives aims to match an independent encoding of masked tokens of the corresponding modality with the output of a Joint Encoder, which takes as input the other modalities and, possibly, unmasked tokens of the target modality.
We modify the procedure by adding disagreement losses between modalities to the objective. This is done by replacing the tokens of a modality with padding tokens before passing them to the Joint Encoder, and then calculating the disagreement between representations obtained when replacing different modalities. For example, ℒ_frame uses a representation of video frames found by passing audio and text into the Joint Encoder. Excluding one of the modalities and passing the other one into the Encoder separately leads to two different representations, f̂_t for prediction using only text and f̂_a for prediction using only audio. The distance between the representations is added to the loss. Thus, the modified component loss is
ℒ_disagreement, frame = ℒ_frame + d_λ_text, audio( f̂_t, f̂_a )
where d_λ_text, audio(x, y)=max(0, d(x, y) - λ_text, audio), and d(x, y) is the cosine difference:
d(x, y) = 1 - (x·y)/(|x| |y|)
Similarly, we modify the other component losses by removing one modality at a time, and obtain the new training objective
ℒ_disagreement = ℒ_disagreement, text + ℒ_disagreement, audio + ℒ_disagreement, frame
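A minimal sketch of the added disagreement term is given below. It is written with NumPy for readability rather than reproducing the actual MERLOT Reserve training code, and the variable names are ours: r_a and r_b stand for the two single-modality predictions of the masked target (e.g. frame representations predicted from text only and from audio only).

```python
import numpy as np

def hinged_cosine_disagreement(r_a, r_b, lam):
    """d_lambda(r_a, r_b): batch mean of max(0, cosine_distance(r_a, r_b) - lambda).

    r_a, r_b: (N, D) arrays of single-modality predictions of the masked target.
    lam: slack below which disagreement is not penalized; lam = +inf disables the term.
    """
    cos = np.sum(r_a * r_b, axis=-1) / (
        np.linalg.norm(r_a, axis=-1) * np.linalg.norm(r_b, axis=-1) + 1e-8)
    return float(np.mean(np.maximum(0.0, (1.0 - cos) - lam)))
```

This value is then added to the corresponding contrastive component, mirroring the definition of ℒ_disagreement, frame above.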
§.§ Training details
We continuously pretrain and then finetune a pretrained MERLOT Reserve Base model on the datasets with a batch size of 8. During pretraining, we train the model for 960 steps with a learning rate of 0.0001, and no warm-up steps, and use the defaults for other hyperparameters. For every dataset, we fix two of {λ_text, audio, λ_vision, audio, λ_text, vision} to be +∞ and change the third one, which characterizes the most meaningful disagreement. This allows us to reduce the number of masked modalities required from 3 to 2 and thus reduce the memory overhead of the method. For Social-IQ, we set λ_text, vision to be 0. For UR-FUNNY, we set λ_text, vision to be 0.5. For MUStARD, we set λ_vision, audio to be 0. All training is done on TPU v2-8 accelerators, with continuous pretraining taking 30 minutes and using up to 9GB of memory.
§.§ Dataset level analysis
We visualize the impact of pairwise modality disagreement on model performance by fixing two modalities M_1, M_2 and a threshold t, and setting the modality pair-specific disagreement slack terms λ according to the rule
λ_a, b = t if a = M_1 and b = M_2, and +∞ otherwise.
This allows us to isolate d_λ_M_1, M_2 while ensuring that the other disagreement loss terms are 0. We also modify the algorithm to subtract d_λ_M_1, M_2 from the loss rather than adding it (see Section <ref>). By decreasing t, we encourage higher disagreement between the target modalities. In Figure <ref>, we plot the relationship between model accuracy and t for the MUStARD dataset to visualize how pairwise disagreement between modalities impacts model performance.
§.§ Datapoint level analysis
After continuously pretraining the model, we fix a pair of modalities (text and video) and find the disagreement in these modalities for each datapoint. We show examples of disagreement due to uniqueness and synergy in Figure <ref>. The first example shows a speaker using descriptive slides, leading to less unique information being present in the text and higher agreement between modalities. In the second example, the facial expression of the person shown does not match the text being spoken, indicating sarcasm and leading to disagreement synergy.
§.§ Alternative training procedure
We also explore an alternative training procedure, which involves subtracting the disagreements d_λ_a, b from the loss rather than adding them. This achieves the opposite effect of pushing modalities further away from each other if they disagree significantly. The reasoning behind this is that in some settings, such as sarcasm prediction in MUStARD, we expect modalities not just to disagree, but to store contradicting information, and disagreement between them should be encouraged. However, we find that the results obtained using this method are not as good as the ones obtained using the procedure outlined in Section <ref>.
Laboratoire Informatique d'Avignon – UPR 4128, F-84911, Avignon, France
{firstname}-{lastname}@univ-avignon.fr Laboratoire Hubert Curien – UMR 5516, F-42023, Saint-Etienne, France
[email protected]
Pattern Mining for Anomaly Detection in Graphs: Application to Fraud in Public Procurement
Lucas Potin 1
Rosa Figueiredo1
Vincent Labatut1
Christine Largeron2
July 31, 2023
==========================================================================================
In the context of public procurement, several indicators called red flags are used to estimate fraud risk. They are computed according to certain contract attributes and are therefore dependent on the proper filling of the contract and award notices. However, these attributes are very often missing in practice, which prohibits red flags computation. Traditional fraud detection approaches focus on tabular data only, considering each contract separately, and are therefore very sensitive to this issue. In this work, we adopt a graph-based method allowing leveraging relations between contracts, to compensate for the missing attributes. We propose PANG (Pattern-Based Anomaly Detection in Graphs), a general supervised framework relying on pattern extraction to detect anomalous graphs in a collection of attributed graphs. Notably, it is able to identify induced subgraphs, a type of pattern widely overlooked in the literature. When benchmarked on standard datasets, its predictive performance is on par with state-of-the-art methods, with the additional advantage of being explainable. These experiments also reveal that induced patterns are more discriminative on certain datasets. When applying PANG to public procurement data, the prediction is superior to other methods, and it identifies subgraph patterns that are characteristic of fraud-prone situations, thereby making it possible to better understand fraudulent behavior.
Cite as: L. Potin, R. Figueiredo, V. Labatut & C. Largeron. “Pattern Mining for Anomaly Detection in Graphs: Application to Fraud in Public Procurement”, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Springer, 2023. DOI: TBD
§ INTRODUCTION
Public procurement refers to the purchase of goods, services and works by a public authority (the buyer), from a legal entity governed by public or private law (the winner). In the European Union, when the contract exceeds some price threshold, the buyer must first advertise a call for tenders defining its needs in detail, and later the corresponding award notice, which describes the content of the contract eventually concluded with one or more winners. These documents must be published in the Official Journal of the European Union (OJEU). The online version of this journal, called the Tenders Electronic Daily (TED) <cit.>, publishes more than 650,000 procurement notices a year.
Consequently, the public procurement sector provides a huge amount of publicly available data.
Historically, anomalies in public procurement, which refer to doubtful behavior, are linked to specific characteristics associated with contracts. In the literature, these characteristics are called red flags, and are used as indicators of potential fraud <cit.>. For instance, modifying the contract price during the procedure, or receiving a single offer for a given call for tenders, are typically considered as red flags <cit.>. But the information required to compute these red flags is not always available. In the French subset of the TED, some essential attributes are largely missing <cit.>, e.g. the number of offers answering a call for tenders is not documented in 30% of the cases. For such contracts, one can compute only partial red flags, in the best of cases, or even no red flags at all.
Anomaly detection approaches are commonly used in fraud detection <cit.>. However, when applied to public procurement, most studies are based on tabular data <cit.>, i.e. each contract is considered separately, as a set of attribute values. Only a very few authors try to take advantage of the relationships between contracts by adopting a graph-based approach. Fazekas & Tóth propose the CRI, a composite score combining several red flags, and leverage graphs <cit.>, but only to visualize its distribution over their dataset. Wachs et al. <cit.> use graphs in order to estimate the proportion of red flags in the core agents, i.e. buyers and winners with the most frequent relationships, compared to the others. However, to the best of our knowledge, no method in the literature dedicated to anomaly or fraud detection in public procurement uses graphs to create predictive models.
This leads us to propose a graph-based method to identify anomalies in public procurement. Our work makes three main contributions. First, we propose the PANG framework (Pattern-Based Anomaly Detection in Graphs), that leverages pattern mining to solve this problem. When evaluated on a benchmark of standard datasets, its performance is on par with state-of-the-art methods, with the additional advantage of being explainable. In addition, it allows looking for different types of patterns, including induced subgraphs, which are generally overlooked in the literature. Our second contribution is to show empirically that such subgraphs can result in better classification performance on certain datasets. As a third contribution, we apply our generic framework to public procurement data, and identify the relevant patterns characterizing risky behaviors.
The rest of the article is structured as follows. Section <ref> gives an overview of the literature regarding graph anomaly detection and graph pattern mining. Section <ref> introduces the terminology used throughout this paper, as well as our problem formulation. Section <ref> describes our framework PANG and assesses its performance on standard datasets. Section <ref> applies PANG to public procurement. Finally, we comment the main aspects of our work in Section <ref>.
§ RELATED WORK
The goal of anomaly detection is to detect behaviors significantly differing from expected norms. The methods dealing with this task on graphs either focus on single elements (vertices, edges) or larger structures (subgraphs, graphs) <cit.>. When considering whole graphs, the task can be seen as a classification problem consisting in labelling the graph as normal or anomalous. The standard approach consists in building a vector-based representation of the graph, in order to apply classic data mining tools <cit.>. Most recent works focus on deep learning methods such as Graph Neural Networks (GNN) <cit.>, which not only learn this representation, but also tackle the classification task. However, one limitation of these methods lies in the lack of explainability: while some approaches have been proposed to make GNNs explainable <cit.>, achieving this goal is non-trivial, especially when considering graphs with edge features. An alternative is to build the representation in a more controlled way, in order to retain its semantics <cit.>. Among the methods following this path, pattern-based approaches rely on the subgraphs that compose the graphs <cit.>. They require retrieving the most characteristic of these patterns, generally the most frequent ones, in order to represent each graph in terms of absence or presence of these patterns.
There are different algorithms to extract frequent subgraphs from a collection of graphs <cit.>, i.e. patterns appearing in more graphs than a fixed threshold. The main issue encountered with this approach is the pattern explosion problem, which states that the number of patterns increases exponentially when decreasing this threshold.
To alleviate the computational cost, some algorithms mine more constrained patterns, such as closed frequent patterns <cit.>, maximal frequent patterns <cit.>, or approximate patterns <cit.>. As these notions are not the focus of this paper, we refer the reader to <cit.> for further details.
Moreover, all frequent patterns may not be relevant when dealing with a graph classification problem: some could occur equally in all classes, and thus provide no information to distinguish them. To overcome this issue, some methods have been proposed to mine discriminative patterns. Leap <cit.> relies on a notion of structural proximity when building its search tree, that lets it compare branches in order to avoid exploring those that are similar. CORK <cit.> is based on a metric that evaluates a pattern in relation to a collection of patterns already selected, which allows accounting for the proximity between frequent patterns. Moreover, this metric is submodular, and can thus be integrated into tools such as gSpan <cit.> to mine discriminative patterns efficiently. It also allows CORK to automatically select the number of patterns to extract. In <cit.>, the notion of discriminative pattern is extended in order to mine jumping emerging patterns: subgraphs appearing in only one class. However, this notion is very restricted, as it requires that a pattern never appears in one of the two classes. As a consequence, in practice, it often leads to very infrequent patterns <cit.>. Our objective is to propose a generic classification framework which allows choosing the number of discriminative patterns to keep, as well as their type, and then to apply it to identify fraud in public procurement.
§ PROBLEM FORMULATION
To detect fraud in public procurement, we adopt a network representation inspired by information retrieval or text mining, and previously successfully used for chemical compound classification <cit.>. In the same way that a document can be modeled as a bag-of-words, we propose to represent a graph as a bag-of-subgraphs, i.e. the set of its constituting subgraphs, called patterns. To do this, we construct a global dictionary constituted of the patterns appearing in a collection of attributed graphs. Based on this dictionary, each graph can then be represented as a fixed-length numerical vector, which can be used as an input by any standard machine learning algorithm.
In this section, we first describe how we define such vector-based representation, and then formulate our anomaly detection task as a classification problem.
An attributed graph is defined as a tuple G = (V,E,𝐗,𝐘) in which V is the set of n vertices, E the set of m edges of G, 𝐗 the n × d_v matrix whose row 𝐱_i is the d_v-dimensional attribute vector associated with vertex v_i ∈ V, and 𝐘 the m × d_e matrix whose row 𝐲_i is the d_e-dimensional attribute vector associated with edge e_i ∈ E.
As an illustration, we consider a collection of such graphs, as shown in Figure <ref>. In this example, each vertex has an attribute corresponding to its color (brown or purple), and so does each edge (green or red).
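As a minimal illustration of this data model, the sketch below builds a toy collection of attributed graphs with NetworkX; the specific vertices, edges and colors are hypothetical stand-ins for the graphs of the figure, not the actual example.

```python
import networkx as nx

def make_graph(vertex_colors, colored_edges):
    """Build an attributed graph from a vertex->color map and (u, v, color) triples."""
    g = nx.Graph()
    for v, color in vertex_colors.items():
        g.add_node(v, color=color)
    for u, v, color in colored_edges:
        g.add_edge(u, v, color=color)
    return g

# Hypothetical toy collection standing in for {G_1, ..., G_4}.
G1 = make_graph({1: "brown", 2: "brown", 3: "purple", 4: "purple", 5: "purple"},
                [(1, 2, "green"), (2, 3, "green"), (3, 4, "red"),
                 (3, 5, "green"), (4, 5, "red")])
G2 = make_graph({1: "brown", 2: "purple", 3: "purple"},
                [(1, 2, "green"), (2, 3, "red")])
collection = [G1, G2]
```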
Let us assume that each graph G has a label ℓ_G picked in ℒ ={ A, N}, denoting an anomalous or a normal graph, respectively. Importantly, this label is not known for all the graphs at our disposal. Let 𝒢 be the set of graphs whose label is known. The set 𝒢 can be split into two disjoint subsets: 𝒢 = 𝒢_A ∪𝒢_N (𝒢_A ∩𝒢_N = ∅). Set 𝒢_A contains the anomalous graphs, and 𝒢_N the normal ones. Using the labeled set of graphs 𝒢, our aim is to train a classifier able to predict the unknown label for the other graphs. For this purpose, we use a pattern-based graph representation.
Let G = (V,E,𝐗,𝐘) be an attributed graph. A graph P is a pattern of G if it is isomorphic to a subgraph H of G, i.e. ∃ H ⊆ G: P ≅ H.
As we consider attributed graphs, we adopt the definition of a graph isomorphism proposed by Hsieh et al. <cit.>, i.e. an isomorphism must preserve not only edges, but also vertex and edge attributes. We consider that P is a pattern for a set of graphs 𝒢 when P is a pattern of at least one of its graphs. Figure <ref> shows three examples of patterns of G_1, and therefore of 𝒢, from Figure <ref>.
It should be noted that, according to Definition <ref>, a pattern P may not include all the edges originally present in G between the considered vertices. We can restrict this definition by considering induced patterns. Similarly to Definition <ref>, P is an induced pattern of G if it is isomorphic to an induced subgraph H of G.
Let G = (V, E, 𝐗, 𝐘) be an attributed graph. The subgraph H = (V_H, E_H, 𝐗_H, 𝐘_H) induced by a vertex subset V_H ⊆ V is such that E_H = {(u,v) ∈ E : u,v ∈ V_H}, and 𝐗_H and 𝐘_H retain only the rows of 𝐗 and 𝐘 matching V_H and E_H, respectively.
In Figure <ref>, P_1 is an induced pattern of G_1. On the contrary, P_2 is a general pattern of G_1, but not an induced pattern, because edge (v_3,v_5) from G_1 has no image in P_2. We consider that P is an induced pattern of 𝒢 when P is an induced pattern of at least one of its graphs. To measure the importance of a pattern in 𝒢, we now need the notion of graph frequency.
The graph frequency GF(P,𝒢) of a pattern P in 𝒢 is the number of graphs in 𝒢 having P as a pattern:
GF(P,𝒢) = | { G ∈𝒢 : ∃ H ⊆ G s.t. P ≅ H } |.
It indicates the number of graphs having a specific pattern, but does not give any information about the number of times the pattern appears in these graphs. For this, we use the subgraph frequency.
The subgraph frequency SF(P,𝒢) of a pattern P in 𝒢 is its total number of occurrences over all G ∈𝒢:
SF(P,𝒢) = ∑_G ∈𝒢 | { H ⊆ G : P ≅ H } |.
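Both frequency measures can be sketched on top of NetworkX's VF2 matcher, continuing the toy example above (the monomorphism iterators are assumed to be available, as in recent NetworkX releases). Note that the monomorphism variants tolerate extra edges in the image, matching the general patterns defined earlier, whereas the induced case is handled separately; also, counting matchings counts mappings rather than distinct subgraphs, so symmetric patterns are counted once per automorphism.

```python
from networkx.algorithms import isomorphism as iso

node_eq = iso.categorical_node_match("color", None)
edge_eq = iso.categorical_edge_match("color", None)

def occurrences(pattern, graph):
    """Number of VF2 subgraph matchings of `pattern` in `graph` (extra edges allowed)."""
    gm = iso.GraphMatcher(graph, pattern, node_match=node_eq, edge_match=edge_eq)
    return sum(1 for _ in gm.subgraph_monomorphisms_iter())

def graph_frequency(pattern, collection):     # GF(P, G)
    return sum(1 for g in collection if occurrences(pattern, g) > 0)

def subgraph_frequency(pattern, collection):  # SF(P, G)
    return sum(occurrences(pattern, g) for g in collection)
```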
Graph frequency can be used to define the notion of closed pattern, which in turn allows finding a more compact set of relevant patterns.
A pattern P of 𝒢 is said to be closed if there is no supergraph P' of P, i.e. no graph P' having P as a subgraph, such that GF(P',𝒢) = GF(P,𝒢).
As a consequence, the set of closed patterns is a subset of the set of general patterns. In our example, there is no supergraph of P_1 appearing in 2 graphs, which makes it a closed pattern of 𝒢.
Regardless of the type of pattern, we note 𝒫_A and 𝒫_N the sets of patterns of 𝒢_A and 𝒢_N, respectively, and 𝒫 the complete set of patterns of 𝒢: 𝒫 = 𝒫_A ∪𝒫_N. Not all patterns are equally relevant to solve a given task. For instance, in Figure <ref>, P_3 is much more common than both other patterns in 𝒢 from Figure <ref>. To distinguish them, we rely on the discrimination score from <cit.>, that characterizes each pattern according to its frequency in the two subsets.
The discrimination score of a pattern P of 𝒢 is defined as disc(P) = |F(P,𝒢_A) - F(P,𝒢_N)|, where F is GF or SF.
Our definition generalizes that of <cit.>, so that it can be applied to both frequencies (GF and SF). A score close to 0 indicates a pattern that is as frequent in 𝒢_A as in 𝒢_N, while a higher score means that the pattern is more frequent in one of the two subsets.
We use this score to rank the patterns in 𝒫, and select the s most discriminative ones (1 ≤ s ≤ |𝒫|). Some methods, like CORK <cit.>, estimate s automatically, which can be an advantage or a drawback, depending on the level of control desired by the user.
The resulting subset 𝒫_s ⊆𝒫 constitutes our dictionary, which means that s lets us control the dimension of our graph representation. The representation of each graph G_i ∈𝒢 is a vector 𝐡_i ∈ℝ^s whose components measure how important each pattern of 𝒫_s is to G_i. These measures can be computed according to different formula, as discussed in Section <ref>. Finally, we build the matrix 𝐇∈ℝ^|𝒢| × s by considering the vector representations of all the graphs in 𝒢.
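The selection of 𝒫_s and the construction of 𝐇 can be sketched as follows, reusing the frequency helpers sketched above; the concrete weighting schemes are detailed in the next section.

```python
import numpy as np

def discrimination_score(pattern, anomalous, normal, freq=graph_frequency):
    return abs(freq(pattern, anomalous) - freq(pattern, normal))

def select_patterns(patterns, anomalous, normal, s, freq=graph_frequency):
    ranked = sorted(patterns,
                    key=lambda p: discrimination_score(p, anomalous, normal, freq),
                    reverse=True)
    return ranked[:s]                     # the dictionary P_s

def build_representation(collection, selected_patterns, binary=True):
    H = np.zeros((len(collection), len(selected_patterns)), dtype=int)
    for i, g in enumerate(collection):
        for j, p in enumerate(selected_patterns):
            count = occurrences(p, g)
            H[i, j] = int(count > 0) if binary else count
    return H
```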
Based on this graph representation, our anomaly detection problem amounts to classifying graphs with unknown labels as anomalous or normal. More formally, given the training set composed of a set of graphs 𝒢 = {G_i, i = 1, …, |𝒢|} with the labels ℓ_G_i∈ℒ and the vector representations 𝐡_𝐢, the goal is to learn a function f : ℝ^s→{A,N}, which associates a label (anomalous or normal) to the vector representation of an unlabeled graph.
§ PANG FRAMEWORK
§.§ Description of the Framework
To solve our classification problem, we propose the PANG framework (Pattern-Based Anomaly Detection in Graphs), whose source code is publicly available online[<https://github.com/CompNet/Pang/releases/tag/v1.0.0>]. A preliminary step consists in extracting the graphs, but as it is data-dependent, we defer its description to Section <ref>. The rest of the process is constituted of four steps, as represented in Figure <ref>:
* Identify all the patterns of 𝒢 and build 𝒫.
* Select the most discriminative patterns 𝒫_s among them.
* Use these patterns to build the vector-based representation of each graph.
* Train a classifier to predict the graph labels based on these representations.
Step #1: Pattern Identification
In order to create 𝒫, we use an existing graph pattern extractor. Several tools are available to enumerate patterns, such as gSpan <cit.>, FFSM <cit.>, or more recently TKG <cit.> and cgSpan <cit.>.
gSpan and cgSpan respectively search the frequent and closed frequent patterns in a set of graphs. Both rely on an iterative procedure, which starts from the simplest pattern possible, i.e. a single vertex with a specific attribute, in order to initialize the list of ranked frequent patterns. At each step, the algorithm takes the most frequent pattern according to this list, and tries to extend it by adding an edge. This expansion results in a set of new patterns, which are added or not to the ranked list, according to their frequency. This list is updated over the iterations, until it is no longer possible to find any new pattern with a frequency potentially higher than a predefined threshold.
In the case of cgSpan, the algorithm is able to find the set of closed frequent patterns, which, as explained before, is included in the set of frequent patterns. A smaller set of patterns allows reducing the computation time during the pattern mining phase, but also at post-processing, e.g. when computing the discrimination scores, since there are fewer patterns to consider, and consequently a smaller size for the vector representation.
We choose to use gSpan <cit.> and cgSpan <cit.>. The former mines an important number of frequent patterns while requiring less memory than TKG. The latter is able to efficiently identify closed patterns. Both algorithms are implemented in Java, and are available as a part of software SPMF <cit.>, which provides numerous tools for pattern mining.
The process used for the induced patterns is based on two steps: first, each pattern is extracted using one of these algorithms. Then, we filter the induced patterns using the ISMAGS algorithm <cit.> implemented in NetworkX <cit.>.
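A possible sketch of this induced-pattern filter is shown below. It assumes NetworkX's ISMAGS class, whose exact entry points may vary across versions; unlike the monomorphism-based check used for general patterns, ISMAGS matches node-induced subgraphs only.

```python
from networkx.algorithms import isomorphism as iso

node_eq = iso.categorical_node_match("color", None)
edge_eq = iso.categorical_edge_match("color", None)

def is_induced_pattern(pattern, graph):
    """True if `pattern` matches a node-induced subgraph of `graph`."""
    matcher = iso.ISMAGS(graph, pattern, node_match=node_eq, edge_match=edge_eq)
    return matcher.subgraph_is_isomorphic()

def keep_induced(patterns, collection):
    return [p for p in patterns
            if any(is_induced_pattern(p, g) for g in collection)]
```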
Step #2: Discriminative Pattern Selection
Next, we compute the discrimination score of each extracted pattern as explained in Definition <ref>. We keep the s most discriminative patterns to construct 𝒫_s.
Step # 3: Vector-Based Representation
Once we have 𝒫_s, we compute the vector representation of each graph in 𝒢. In this work, we use several approaches. First, we build a binary vector indicating the presence or absence of each pattern in the considered graph. In that case, for each graph G_i ∈𝒢 and each pattern P_j ∈𝒫, H_ij equals 1 if this pattern P_j is present in G_i and 0 otherwise.
This representation is somewhat limited, though, as it ignores how many times each pattern occurs in a graph. To solve this issue, we propose an integer representation based on the number of occurrences in the graph. This number is computed with the VF2 algorithm <cit.>, available in NetworkX <cit.>. Given a pattern P and a graph G, VF2 identifies the number of subgraph isomorphisms of P in G, which we store in H_ij.
Figure <ref> shows the representations obtained for the graphs of Figure <ref>, using the patterns from Figure <ref> as 𝒫_s. Vectors h^b_j and h^z_j denote the binary and integer representations of each graph G_j, respectively.
It is worth noting that two different graphs can have the same vector representation, as is the case for the binary representation of G_3 and G_4 in our example.
For the sake of consistency, we compute the discrimination scores based on GF when using the binary representation, and on SF when using the integer one.
Step #4: Classifier Training
After the previous step, each graph is represented by a fixed-size vector, no matter its number of vertices or edges. We leverage this representation to train a classifier to predict the graph labels. Our framework is general and allows any classifier, but we select C-SVM <cit.> in this article, as it gives the best experimental results (cf. Appendix <ref> for more classifiers).
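This last step then reduces to fitting any off-the-shelf classifier on the rows of 𝐇; a minimal sketch with scikit-learn's C-SVM is given below (the kernel and C value are illustrative defaults, not the tuned settings of our experiments).

```python
from sklearn.svm import SVC

def train_classifier(H_train, y_train, C=1.0):
    clf = SVC(C=C, kernel="rbf")
    clf.fit(H_train, y_train)      # labels in {"A", "N"}
    return clf

# predictions = train_classifier(H_train, y_train).predict(H_test)
```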
§.§ Assessment on Benchmarks
Before focusing on fraud detection in public procurement, we assess PANG on FOPPA, the public procurement dataset that we use in our application, as well as four real-world datasets commonly used in the literature as benchmarks. These last datasets, our protocol and the results of this first experiment are detailed in the following. The FOPPA data is described in Section <ref>.
Experimental Protocol
MUTAG <cit.> contains 188 graphs representing molecules, where vertices are atoms and edges bonds between them. The graphs are distributed over two classes, depending on the molecule mutagenicity. PTC_FR <cit.> contains 350 graphs, also representing molecules. There are also two graph classes, depending on the molecule carcinogenicity on male and female mice and rats. NCI1 <cit.> contains 4,110 graphs representing chemical compounds. Each vertex stands for an atom, while edges represent the bonds connecting them. Like before, there are two classes distinguishing the compounds depending on their carcinogenicity. D&D <cit.> is composed of 1,178 protein structures. Each vertex is an amino acid, and two vertices are connected if they are less than 6 angstroms apart. There are two graph classes corresponding to enzymes vs. non-enzymes. Table <ref> shows the main characteristics of these datasets: number of graphs, and average numbers of vertices and edges.
Regarding graph representations, we compute the six types proposed in PANG:
* PANG_GenBin: binary representation considering general patterns.
* PANG_GenOcc: integer representation considering general patterns.
* PANG_IndBin: binary representation using only induced patterns.
* PANG_IndOcc: integer representation using only induced patterns.
* PANG_CloBin: binary representation using only closed patterns.
* PANG_CloOcc: integer representation using only closed patterns.
We compare our results with four different types of baselines. First, as an alternative pattern-based method, we use CORK (cf. Section <ref>), which automatically estimates the size of the representation. The second baseline type is graph kernels. We use the kernel matrices of the graphs as representations, associating each row of the matrix with the corresponding graph. These matrices are computed from the implementation of the WL kernel <cit.> and the WL_OA <cit.> kernel, both available in the GraKel <cit.> library. The third type is whole graph embedding neural methods, for which we use Graph2Vec <cit.>, available in the KarateClub library <cit.>. We set an embedding size of 128, which is standard in the literature. For each of these representations, we train a C-SVM as indicated in Step 4 of Section <ref>.
The fourth baseline type is Graph Neural Networks, with DGCNN <cit.>. This method produces a graph representation, which can be fed to the SVM, but it can also perform the classification step directly. The results reported here are the best ones, obtained in this second setting, using the implementation from StellarGraph <cit.>, with the optimal parameter values as indicated in <cit.>.
Experimental Results
We adopt a 10-fold cross-validation to assess classifier performance. Table <ref> shows the average F-Score (with standard deviation) for the Anomalous class. Each column corresponds to one of the considered datasets: 4 benchmarks and FOPPA.
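For reference, this evaluation protocol can be sketched as follows, assuming a stratified 10-fold split over NumPy arrays H and y and reporting the F-Score of the Anomalous class (the exact fold seeds of our experiments are not reproduced here).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
from sklearn.svm import SVC

def cross_validate_f1(H, y, anomalous_label="A", n_splits=10, seed=0):
    scores = []
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(H, y):
        clf = SVC().fit(H[train_idx], y[train_idx])
        pred = clf.predict(H[test_idx])
        scores.append(f1_score(y[test_idx], pred, pos_label=anomalous_label))
    return float(np.mean(scores)), float(np.std(scores))
```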
No method dominates the others over all datasets, therefore we can assume that some graph representations are more relevant to model certain systems. We plan to investigate this question further, but this is out of this article's scope.
The performance of PANG is systematically above CORK, its most similar method. This is because, on the considered datasets, CORK identifies a very restricted set of discriminative patterns and trades classification performance against representation size.
Moreover, PANG is on par with the remaining methods on NCI1, D&D and PTC, and has the best performance on MUTAG and, importantly, on FOPPA, our application dataset. Thus, we assume that PANG is able to capture the same information as embedding- and GNN-based methods. On the one hand, it requires numerous patterns to be mined, and is therefore more time-consuming than these methods. On the other hand, it has the advantage of being interpretable, allowing us to identify the most discriminative patterns. This is why we apply it to fraud detection in public procurement, in Section <ref>.
§ PUBLIC PROCUREMENT USE CASE
In this section, we apply PANG to real data representing public procurement. We first describe the process used to extract graphs from a database of French public procurement contracts (Section <ref>), then we discuss our results (Section <ref>).
§.§ Extraction of the Graph Dataset
Raw Data
The FOPPA <cit.> database lists all French contract award notices published at the European level. Each such contract involves at least two economic agents: a buyer and a winner, and may be composed of several lots. It is described by a collection of attributes such as the total price, the number of offers, the bid ranking criteria, and whether the procedure was accelerated.
In this paper, we consider the specific subset of contracts concerning period 2015–19, containing 417,809 lots.
Contract Filtering
We could apply our graph extraction process to the whole set of French contracts, however this would result in a single graph, combining heterogeneous activity domains and agent types. Yet, some attributes, for example the weight of social and environmental criteria, directly depend on these domains and types <cit.>. Instead, we select only a part of the available data to constitute a collection of consistent contracts. For this purpose, we filter them according to five aspects: agent category, activity sector, temporal period, geographic region and size. Regarding the agents, we focus on municipalities, because they are very numerous, and automating their identification is more straightforward than for the other types of public agents. For each municipality present in the dataset, we build a subset of contracts containing not only its own contracts, but also those involving their winners, as well as the other municipalities with which they have obtained contracts. The other four filters allow us to control the size of these subsets of contracts, while retaining a certain homogeneity: we keep only those related to works, covering periods of one year, and involving only suppliers belonging to the same French administrative subdivision.
After this filtering, we obtain a collection of contract subsets containing a total of 25,252 contracts. For each contract, we compute a standard red flag from the literature, in order to model how fraudulent it could be. A contract is red flagged if the number of offers received is exactly 1, which reveals a lack of competition <cit.>.
Graph Extraction
For each contract subset obtained after the filtering, we extract a graph G. We consequently build a set of graphs, corresponding to 𝒢 in Section <ref>. In the context of public procurement, due to the complexity of the data, one can extract various types of graphs <cit.>, depending on what the vertices, edges, and their attributes, represent.
We use vertices to model agents, and edges to represent relationships between them, i.e. their joint involvement in at least one contract. Each vertex has an attribute, indicating whether the agent is a buyer or a winner, while each edge has an attribute related to the number of lots contracted between a buyer and a winner. We limit the latter to three levels: 1) exactly one lot; 2) between 2 and 5 lots; and 3) 6 lots or more. This allows us to identify cases where a buyer has many contracts with a single winner, a behavior generally associated with red flags in the literature <cit.>.
We consider that an edge is anomalous if it represents at least one red flagged contract, i.e. a contract that received exactly one offer. The label of a graph depends on its total number of anomalous edges: normal if there are fewer than 2, anomalous otherwise.
Our graph extraction method produces 389 normal and 330 anomalous graphs.
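The extraction and labelling rules above can be summarized by the following sketch; `contracts` is a hypothetical list of (buyer, winner, n_lots, n_offers) records for one filtered subset, and the per-pair aggregation of lots is our reading of the rule rather than the exact implementation.

```python
import networkx as nx
from collections import defaultdict

def lots_level(n_lots):                     # three-level edge attribute
    return 1 if n_lots == 1 else (2 if n_lots <= 5 else 3)

def extract_graph(contracts):
    total_lots, red_flagged = defaultdict(int), defaultdict(bool)
    g = nx.Graph()
    for buyer, winner, n_lots, n_offers in contracts:
        g.add_node(buyer, color="buyer")
        g.add_node(winner, color="winner")
        total_lots[(buyer, winner)] += n_lots
        red_flagged[(buyer, winner)] |= (n_offers == 1)   # lack of competition
    for (buyer, winner), n in total_lots.items():
        g.add_edge(buyer, winner, color=lots_level(n),
                   anomalous=red_flagged[(buyer, winner)])
    return g

def graph_label(g):
    n_anomalous = sum(d["anomalous"] for _, _, d in g.edges(data=True))
    return "A" if n_anomalous >= 2 else "N"
```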
Table <ref> shows the main characteristics of the resulting FOPPA dataset, which is publicly available online with our source code[<https://github.com/CompNet/Pang/releases/tag/v1.0.0>].
§.§ Results on Public Procurement Data
Comparison With a Tabular Representation
In order to study the impact of our graph-based representations, we compare them to a baseline relying on the traditional tabular approach. For each contract, we use as predictive features 15 fields available in FOPPA. We select only relevant fields such as the type of procedure, or the presence of a framework agreement. With these features, we aim to predict a binary class, based on the same red flag as before: the number of offers for the contract. Class 0 contains the contracts with more than 1 tender, and Class 1 those with a unique tender. Note that the predictive features are independent from the number of offers.
Like for the graphs, we train an SVM with 10-fold cross-validation, on the same 25,252 contracts obtained after the filtering step. However, the resulting prediction is defined at the contract level (one row in the tabular data), whereas PANG works at the agent level (one graph in the collection). To compare these results, we need to group the tabular predictions by agent. For this purpose, we proceed as in Section <ref>, by considering any agent with two red flagged contracts or more as anomalous.
Table <ref> compares the obtained performance with our best graph-based results. The F-Scores are averaged over the 10 folds, with standard deviation, for the Anomalous and Normal classes.
For the same contracts and classifier (C-SVM), the graphs allow us to predict fraudulent behaviors much more efficiently than the tabular data, notably for anomalous agents. This clearly confirms the interest of taking advantage of relationships between agents to tackle fraud detection, especially when red flags are missing.
Discrimination Score
When applied to our dataset, gSpan returns a total of 15,793 distinct patterns. Figure <ref>.a shows the distribution of their discrimination score. It is in [0;20] for most patterns (85%), which can thus be considered as non-discriminative.
Figure <ref>.b shows examples of 2 discriminative patterns, with respective scores of 64 and 91. Both of them include several relations with an intermediary number of lots, which are rather common in large graphs, and more often associated with anomalous graphs.
Impact of the Number of Discriminative Patterns
We now study how the performance is affected by the number s of patterns in 𝒫_s, i.e. the vector representation size.
Table <ref> shows how the F-Score changes depending on s, for anomalous and normal graphs. The last row indicates the performance obtained with all the identified patterns (s = |𝒫|). A representation based on only 100 patterns, i.e. less than 1% of the 15,793 patterns, is sufficient to reach the 0.8 bar for both classes. This represents around 90% of the maximal F-Score, obtained with all patterns. Therefore, only a small number of patterns are required to convey the information necessary to tackle the classification task.
Impact of the Type of Patterns
We also study how the type of pattern influences the constitution of 𝒫_s, and therefore the classification performance. For this purpose, we set s = 100, and compare the six representations proposed by PANG, as we did in Section <ref>.
Table <ref> shows the F-Score obtained with each representation, for both classes.
Representations based on induced and closed patterns systematically lead to better results. Yet, a manual examination of 𝒫_s reveals that the discrimination scores of their selected patterns are similar to the general case. The worst selected pattern reaches a score of 67 for general patterns, vs. 61 for induced and 64 for closed patterns. The difference lies in the nature of the selected patterns, which are more diverse than when mining general patterns. For induced and closed patterns, 𝒫_s includes respectively 16 and 13 patterns that do not appear when dealing with general patterns.
Interpretation of Fraudulent Behavior through Pattern Analysis
An important advantage of our framework is the identification of the most discriminative patterns, and thus the possibility to leverage human expertise to interpret these patterns and better understand the reasons why an agent is considered fraudulent. For illustration, Figure <ref> shows two discriminative patterns, P_4 and P_6.
Pattern P_4 represents a relationship between two winners and two buyers, with more than one contract between them. This type of pattern occurs more frequently in graphs with more contracts, which is typical of anomalous graphs. Pattern P_6 has a winner connected to several buyers, and only one of these edges is green. This can be interpreted as favoritism: a winner works much more with one municipality than with the others.
§ CONCLUSION
In this paper, we propose PANG, a pattern-based generic framework that represents graphs as vectors, by identifying and leveraging their most discriminative subgraphs. We show how PANG, coupled with a standard classifier such as SVM, can detect fraud in public procurement, by applying it to an existing database (FOPPA). Traditional fraud detection approaches typically use tabular data to compute red flags to estimate risk, and fail when these data are incomplete. PANG leverages relational information between economical agents, and our experiments confirm that the use of graphs makes it possible to overcome this issue. They also show that prediction performance can be improved by mining closed or induced patterns, which constitute a set of predictors less redundant than general patterns. Finally, in this context, a clear advantage of PANG relies on the explainability of these discriminative patterns, which can be interpreted and associated with human behaviors such as favoritism.
§ CLASSIFIER COMPARISON
Table <ref> complements the results presented in Section <ref>, by showing the performance obtained by a selection of classifiers on the FOPPA dataset, using the PANG_GenBin representation and all available patterns.
§ ETHICAL IMPLICATIONS
Anomaly detection can have ethical implications, for instance if the methods are used to discriminate against certain individuals. In this respect, however, our PANG methodological framework does not present any more risk than the supervised classification methods developed in machine learning.
Moreover, this work takes place in the framework of a project aiming, among other things, at proposing ways of automatically red flagging contracts and economic agents depending on fraud risk. Therefore, the method that we propose is meant to be used by public authorities to better regulate public procurement and the management of the related open data.
Finally, the data used in this article are publicly shared, and were collected from a public open data repository handled by the European Union. They do not contain any personal information, and cannot be used directly to infer any personal information, as they only describe the economic transactions of companies and public institutions regarding public procurement.
[arXiv:2306.03549v1, cond-mat.soft, published 2023-06-06]
^1Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France
^2Soft Condensed Matter, Debye Institute of Nanomaterials Science, Utrecht University, Utrecht, Netherlands
Due to their aperiodic nature, quasicrystals are one of the least understood phases in statistical physics. One significant complication they present in comparison to their periodic counterparts is the fact that any quasicrystal can be realized as an exponentially large number of different tilings, resulting in a significant contribution to the quasicrystal entropy. Here, we use free-energy calculations to demonstrate that it is this configurational entropy which stabilizes a dodecagonal quasicrystal in a binary mixture of hard spheres on a plane. Our calculations also allow us to quantitatively confirm that in this system all tiling realizations are essentially equally likely, with free-energy differences less than 0.0001k_BT per particle – an observation that could be related to the fact that only random tilings have been observed in soft matter quasicrystals. Owing to the simplicity of the model and its available counterparts in colloidal experiments, we believe that this system is an excellent candidate to achieve the long-awaited quasicrystal self-assembly on the micron scale.
A hard-sphere quasicrystal stabilized by configurational entropy
Etienne Fayen^1, Laura Filion^2, Giuseppe Foffi^1, and Frank Smallenburg^1
July 31, 2023
==============================================================================
Hard spheres have played a foundational role in our quest to understand classical phase behavior – from helping to understand how purely entropic systems can crystallize, to revealing new insights into the behavior of glassy materials, to nucleation, to melting in 2d, and many more <cit.>. Their success as a model system stems partly from their inherent simplicity, making them amenable to efficient simulations and analytical theories. Moreover, advances in colloidal particle synthesis have largely made it possible to quantitatively test theoretical and numerical predictions in the lab.
Until recently, quasicrystals were one of the few states of matter inaccessible by this simple model system. Quasicrystals are exotic structures which can display symmetries that are forbidden to periodic crystal phases. While highly controversial when first discovered, their place in material science is now well established, with their formation demonstrated in a growing number of both atomic <cit.> and colloidal <cit.> systems. Toy models that display quasicrystalline behavior have generally been fairly complex – many early models made use of non-additive binary mixtures of Lennard-Jones particles <cit.> and oscillatory interaction potentials <cit.>, while more recent work has also explored patchy particles <cit.>, anisotropic interactions <cit.>, and step-wise interactions <cit.>. Recently, however, we demonstrated the spontaneous self-assembly of two quasicrystal structures in binary mixtures of hard spheres on a plane <cit.>. This work opens the door to exploring the statistical physics of quasicrystals without the added complication of energetic interactions or orientational degrees of freedom – in a system that should be realizable in colloidal experiments<cit.>. One major question in the study of quasicrystals is the role of configurational entropy in their stability<cit.>. When systems such as hard spheres form quasicrystals, does this happen because the quasicrystal structure maximizes the freedom of particles to vibrate around their quasicrystalline lattice position? Or are they stabilized by the configurational entropy associated with the large number of possible quasicrystal realizations?
Here, using computer simulations and free-energy calculations, we show that the dodecagonal quasicrystal formed by hard spheres on a plane is stabilized by configurational entropy. In fact, without the configurational entropy the quasicrystal would be metastable with respect to a phase separation of periodic crystals. Instead the configurational entropy promotes a random tiling quasicrystal where – for this simple hard sphere model – all realizations contribute equally to the free energy.
As illustrated in Fig. <ref> we consider binary mixtures of hard spheres constrained to lie on a flat substrate. We focus on systems with a size ratio q = σ_S / σ_L, where σ_S(L) denotes the diameter of the small (large) spheres. Due to the confinement to a flat plane, the particles can move only in two dimensions, and hence in practice we simulate an effective mixture of non-additive hard disks, where the minimum distance of approach between two disks of unequal size is given by σ_LS = √(σ_S σ_L). Such mixtures are characterized by the composition x_S = N_S / N, with N_S(L) the number of small (large) spheres and N the total number of spheres. The last free parameter in this model is the packing fraction, which we define as η = (N_S σ_S^2 + N_L σ_L^2)π / 4A, with A the (two-dimensional) volume of the system. Note that since our binary hard-disk mixture is non-additive the total packing fraction might exceed 1 in some cases.
Previous work showed that a dodecagonal quasicrystalline phase (QC12) is stable at infinite pressure in this system for size ratios in the range 0.46 ≲ q ≤ 0.5<cit.>. Moreover, the quasicrystal also forms spontaneously in self-assembly simulations, demonstrating that it is kinetically accessible at finite pressures <cit.>. However, this does not prove the thermodynamic stability of this phase, as it could still be metastable with respect to competing periodic crystal phases. Here, we perform free-energy calculations to settle this question. We focus on mixtures with a size ratio q = 0.46.
Since the QC12 phase only appears for compositions x_S < 0.5, we only consider systems with compositions x_S ≤ 0.5. Interestingly, binary mixtures of hard spheres, not constrained to a plane, have been explored by DFT for size ratios around ∼ 0.8 in a search for quasicrystals with icosahedral symmetry, which turned out not to be stable <cit.>.
To prove the thermodynamic stability of the QC12 phase, we use explicit free-energy calculations based on both event-driven molecular dynamics simulations <cit.> and Monte Carlo simulations <cit.>. In particular, we calculate the free energy of different competing phases as a function of the pressure and composition using thermodynamic integration methods <cit.>. For the fluid phase, we use the ideal gas as a reference state. For the periodic crystal phases, we obtain a reference free energy using the Einstein molecule variant <cit.> of the Frenkel-Ladd method <cit.>. As candidate structures, we consider the phases that are expected to be stable (or nearly stable) at infinite pressure, namely the hexagonal, S1, Sigma, and QC12 phases <cit.>. The candidate phases are depicted in Figure <ref>.
Determining the stability of a quasicrystal using computer simulations presents challenges that are not present for other crystal phases. First, quasicrystals are non-periodic, and hence the finite-size effects of approximating their aperiodic structure with a periodic approximant should be carefully checked. More importantly, the quasicrystals we expect in colloidal systems are typically examples of so-called random-tiling quasicrystals, which results in a configurational entropy contribution to the total free energy of the phase. The quasicrystals of interest here, as well as dodecagonal quasicrystals discovered in soft matter experiments <cit.> and simulations <cit.>, are based on a random tiling of the plane by squares and equilateral triangles, with the large particles in the system forming the corners of both shapes. As the number of possible arrangements of these tiles scales exponentially in the number of particles, the freedom of choice in generating this configuration contributes to the total entropy of the phase, and hence needs to be taken into account in any free-energy calculations. This issue is most easily handled if we make the assumption that all realizations of the quasicrystal are equally likely, also known as the random tiling hypothesis <cit.>. If this is the case, then consistent with the approach of Ref. pattabhiraman2015 we can split the total free energy of our hard-sphere quasicrystal into two parts:
F_tot(N,A,T) = F_vib(N,A,T) - T S_conf,
where F_vib is the vibrational free energy of any given quasicrystal realization, S_conf is the configurational entropy associated with the quasicrystal tiling, and T is the temperature. The vibrational free energy can again be directly calculated for any given realization using the same Einstein molecule approach as we use for the periodic phases.
The configurational entropy of the QC12 square-triangle tiling is well studied <cit.>. When the ratio of the number of squares N_sq and triangles N_tr reaches N_sq/N_tr=√(3)/4, the random tiling ensemble reaches a maximum entropy, meaning that the number of tilings in the ensemble, or equivalently the number of possible configurations for the squares and triangles, is the highest. At this point, the random tiling ensemble forms a so-called random-tiling quasicrystal of 12-fold symmetry <cit.>.
The configurational entropy of the square-triangle tiling was first estimated with transfer matrix <cit.> and numerical <cit.> approaches, before exact analytical expressions were obtained with a Bethe ansatz <cit.>. Based on these works, the random tiling configurational entropy per particle is given by
S_conf/Nk_B = [ln(108) - 2√(3)ln(2+√(3))] (1 - x_S^QC12) ≈ 0.082.
Here, x_S is the composition of the system which corresponds to the ratio of squares and triangles required for a quasicrystal, i.e. x_S^QC12 = √(3) / (2 + 2√(3)) ≈ 0.317. Importantly, S_conf is sharply peaked at this composition, and -TS_conf is non-convex on either side of the maximum <cit.>, such that random tilings at any compositions other than x_S^QC12 are strongly entropically disfavored.
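As a quick numerical check of the configurational entropy and quasicrystal composition quoted above:

```python
from math import log, sqrt

x_qc = sqrt(3) / (2 + 2 * sqrt(3))                          # ~0.317
s_conf = (log(108) - 2 * sqrt(3) * log(2 + sqrt(3))) * (1 - x_qc)
print(round(x_qc, 3), round(s_conf, 3))                     # 0.317 0.082
```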
To explore the stability of the QC12 phase, we construct the phase diagram as a function of the composition x_S and pressure p. To this end, we transform the (Helmholtz) free energies obtained from our thermodynamic integration into Gibbs free energies using the equation of state of the respective phases. The coexistence regions are mapped out using common tangent constructions at constant pressure.
Note that we do not expect phases with x_S > 0.5 to play a role in this phase diagram, as the S1 phase (at x_S = 0.5) is the best-packed phase for this system and previous self-assembly studies did not show any self-assembly of higher-composition phases at x_S < 0.5.
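At each pressure, the common-tangent construction amounts to taking the lower convex hull of the (x_S, Gibbs free energy per particle) points of all candidate phases: hull vertices are stable state points, and hull edges connecting different phases mark two-phase coexistence regions. A minimal sketch (Andrew's monotone-chain lower hull; the input data are assumed to come from the thermodynamic integration described above) follows.

```python
def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Lower convex hull of (x_S, g, phase_tag) points at fixed pressure."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# e.g. lower_hull([(0.0, g_hex, "Hex_L"), (0.317, g_qc, "QC12"), (0.5, g_s1, "S1")])
```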
The resulting phase diagram is shown in Fig. <ref>, and clearly indicates a broad stable region for the QC12 phase. Additionally, we observe a binary S1 solid phase, a hexagonal solid of large particles, and a binary fluid phase. Note that in addition to these phases, we have confirmed that the Sigma phase, which is the first approximant to the dodecagonal quasicrystal, is not stable.
Although the QC12 phase coexists with other solid phases over most of its stability range, there is a narrow band of pressures where it coexists with a fluid that has a larger concentration of small particles. As self-assembly is likely to be easier to achieve from a fluid phase, this suggests that self-assembly of this phase may be easiest by starting from an off-stoichiometric fluid with x_S > x_S^QC12. This scenario is in line with earlier self-assembly observations <cit.>, and has previously been reported for other phases as well <cit.>.
An interesting question is whether the quasicrystal is stabilized purely by its vibrational entropy, as previously proposed for the same quasicrystal phase in particles interacting via a square-shoulder repulsive potential <cit.>, or whether the configurational entropy is essential for its stability. To check this, in Figure <ref> we compare the free energies of the dodecagonal quasicrystal and the competing coexistence of the hexagonal and S1 solids at the quasicrystal composition. Without the configurational entropy term, the Hex_L-S1 coexistence prevails and the quasicrystal is not stable. Clearly, for this system, the tiling contribution to the total entropy is critical for the quasicrystal stability. Unfortunately, this also implies that we can only conclude that the QC12 phase is stable if we are justified in the assumption that all tiling realizations are equally likely, and hence that Eq. <ref> is correct. In order to confirm this, we need to check that different configurations of the random tiling ensemble are degenerate in vibrational entropy.
To this end, we perform high-precision calculations of the vibrational entropy of various tiling realizations. In particular, we compare the vibrational entropy of several types of ideal quasicrystal configurations, as well as fully randomized tilings. Ideal quasicrystal configurations can be generated by so-called inflation methods, in which every tile of a tiling is replaced by a cluster of tiles. By iterating the inflation rules on an initial seed, one generates larger and larger patches of tiling that converge to a quasicrystalline configuration. Several inflation rules exist for the square-triangle tiling. The most common way of producing a fully deterministic quasicrystal tiling with dodecagonal symmetry is via the Schlottmann inflation rule <cit.>. Alternatively, a quasicrystal tiling with hexagonal symmetry can be constructed using the simpler Stampfli inflation rule <cit.>. A slight variation of the Stampfli uses random choices to generate a limited ensemble of random tiling realizations with 12-fold symmetry on average. Finally, configurations from the full random tiling ensemble can be sampled by reshuffling ideal configurations using so-called zipper moves that rearrange tiles along a closed path in the tiling <cit.>. More details on the generation of our tiling configurations can be found in the Supplemental Information (SI).
Note that in terms of quasicrystal language, these different types of tilings all have zero perpendicular strain, but differ in terms of the fluctuations of their representative surface in the perpendicular space <cit.>. In particular, the deterministic Schlottmann and Stampfli rules have minimal fluctuations in their representative surface, while the full random tiling ensemble has much stronger fluctuations.
To test whether the vibrational entropies of these different families of quasicrystal tilings are degenerate, we use again the Einstein molecule approach. We calculate the free energy of configurations from each of these families for several different system sizes. For the randomly generated tilings, we create 5 different random configurations by applying zipper moves to the approximant and calculate the free energy of each. The density is fixed at 1.5 σ_LL^-2 for all systems. Note that in order to minimize statistical error and reduce the error bars, we repeat the free-energy calculation for each configuration at least 100 times (see SI) and average over the results.
The results are shown in Fig. <ref>. The finite size scaling of the free energy appears to be non-linear for each structure, and adding the heuristic finite size correction term ln(N)/(2N) proposed in Ref. does not remove the non-linearity. One could argue that a linear regime is reached for very large system sizes, with an almost zero slope. Therefore, we perform no extrapolation and use the value of the free energy per particle for the largest systems as our estimate of the thermodynamic-limit value. We obtain β F/N = 5.50309 (5) for the Schlottmann quasicrystal, 5.50317 (4) for the random Stampfli quasicrystal, 5.50342 (4) for the Stampfli hexagonal quasicrystal and 5.50392 (4) for the average over the 5 largest realizations of the random tiling quasicrystal.
An important first observation is that the free energies of the 5 random-tiling quasicrystals generated at each system size are consistently degenerate within our errorbars (black clusters in Figure <ref>). The absence of any outliers gives us confidence that the vast majority of configurations in the random tiling ensemble indeed have essentially the same vibrational entropy. This observation quantitatively validates the assumption that all realizations are equally likely in our system and justifies the treatment of the QC12 as a random tiling phase with the configurational entropy given by Eq. <ref>.
The measurements show, nonetheless, that some configurations in the random tiling ensemble are special. The free energy of the inflated quasicrystals is consistently lower than that of the random configurations, with the difference on the order of 10^-3 k_BT per particle. This difference is much too small to affect the stability of the random quasicrystal phase, as can be seen by comparing it to the scale of free-energy differences in Fig. <ref>. However, it is measurable, and of the same order of magnitude as the free-energy difference between face-centered cubic (FCC) and hexagonal close-packed (HCP) crystals of monodisperse hard spheres <cit.>. Using a self-consistent field theory, Duan et al. also demonstrated a free-energy difference between ideal and random configurations of a dodecagonal quasicrystal in a system of tetrablock copolymers <cit.>. Moreover, we find that the ideal dodecagonal quasicrystal obtained with Schlottmann inflation has slightly more vibrational entropy than both the ideal hexagonal Stampfli and random Stampfli quasicrystals, although the difference with the latter is very small.
The vibrational entropy difference between random and ideal quasicrystals can be understood from the different local environments that can be found in the underlying tiling. For instance, ideal quasicrystals obtained by the inflation method contain no local environment formed of 4 squares meeting at the same vertex, while the randomized ones contain a non-zero concentration of such environments <cit.> (see Fig. <ref>). We expect, however, that first-neighbor local environments alone do not fully explain the entropy difference. Indeed, both the dodecagonal and hexagonal ideal quasicrystals have the same distribution of local environments when considering only the first neighbor shell. Hence, neighbor shells beyond the first one certainly play a non-negligible role.
From the point of view of quasicrystal theory, the vibrational entropy difference between ideal and random structures is an interesting illustration of phonon-phason coupling <cit.>, albeit very weak. The vibrational entropy of each system can be interpreted as stemming from the total entropy contribution from all phonon modes accessible to the quasicrystal. In this picture, the lower vibrational entropy of the random quasicrystals shows that the presence of phason modes in these systems hinders lattice vibrations, i.e. reduces the amplitude of the phonon modes.
In conclusion, our results demonstrate the thermodynamic stability of a dodecagonal quasicrystal in a binary mixture of hard spheres confined to lie on a flat substrate. As it consists of hard spheres, the quasicrystal considered here is inherently stabilized by entropy alone. Importantly, however, it is also an example of a quasicrystal that is stabilized by its configurational, rather than vibrational, entropy. This configurational entropy stems from the many different possible tiling realizations, which are – as shown by our precise free-energy calculations – nearly indistinguishable in terms of their vibrational freedom. Due to the tiny free-energy difference between different realizations, random tilings are overwhelmingly more likely to form than perfect inflationary tilings. We speculate that this observation could be related to the fact that, in soft matter, all the quasicrystalline systems observed thus far appear to indeed be random.
Note, however, that in some systems, sufficiently strong particle interactions could favor or suppress different sets of quasicrystal realizations, lowering the configurational entropy and potentially destabilizing the quasicrystal phase <cit.>.
Additionally, since sedimented systems of hard colloidal spheres can be readily realized in the lab <cit.>, this equilibrium quasicrystal is extremely promising for the creation and study of quasicrystals on the colloidal scale. Such a realization would be an important step forward in the study of (soft-matter) quasicrystals, as it would provide an ideal platform for the real-space study of e.g. defect dynamics, perpendicular strain relaxation, and other phenomena that are hard to study in molecular or atomic quasicrystals.
§ ACKNOWLEDGEMENTS
We thank Anuradha Jagannathan, Marianne Impéror-Clerc, Pavel Kalugin, and Alfons van Blaaderen for interesting and useful discussions.
EF, GF, and FS acknowledge funding from the Agence Nationale de la Recherche (ANR), grant ANR-18-CE09-0025. LF acknowledges funding from the Dutch Research Council (NWO) under the grant number OCENW.GROOT.2019.071. The authors acknowledge the use of the Ceres high-performance computer cluster at the Laboratoire de Physique des Solides to carry out the research reported in this article.
[arXiv:2306.03103v1, cs.HC, cs.CL, published 2023-06-02]
^1EPFL, Lausanne, Switzerland    ^2Google Research, Zürich, Switzerland
Sampling and Ranking for Digital Ink Generation on a tight computational budget
Andrei Afonin^1,†,‡, Andrii Maksai^2,‡, Aleksandr Timofeev^1,†, Claudiu Musat^2
^†Work done as a student researcher at Google Research, Zürich, Switzerland. ^‡These authors contributed equally to this work and share first authorship.
July 31, 2023
==============================================================================================================================================================================================================================
Digital ink (online handwriting) generation has a number of potential applications for creating user-visible content, such as handwriting autocompletion, spelling correction, and beautification.
Writing is personal and usually the processing is done on-device. Ink generative models thus need to produce high quality content quickly, in a resource constrained environment.
In this work, we study ways to maximize the quality of the output of a trained digital ink generative model, while staying within an inference time budget. We use and compare the effect of multiple sampling and ranking techniques, in the first ablation study of its kind in the digital ink domain.
We confirm our findings on multiple datasets - writing in English and Vietnamese, as well as mathematical formulas - using two model types and two common ink data representations. In all combinations, we report a meaningful improvement in the recognizability of the synthetic inks, in some cases more than halving the character error rate metric, and describe a way to select the optimal combination of sampling and ranking techniques for any given computational budget.
§ INTRODUCTION
Digital ink (online handwriting) offers users of digital surfaces a way of expression similar to pen and paper.
This mode of expression is gaining popularity with the increasing adoption of styluses and digital pens for tablets.
In its digital form, ink
is a medium that offers rich possibilities for personalized intelligent assistance for creativity and productivity.
One direct way of offering the assistance is via ink synthesis, enabling user-facing features such as handwriting autocompletion, spelling correction, beautification, assisted diagramming and sketching.
Making these assistance experiences convenient and comfortable requires maximizing the output quality of the models, while respecting privacy and latency constraints. The same is true of other types of generated content, but standards might be higher in the case of digital ink generation, for example:
* Since assistive handwriting content appears in the same space as the content generated by the user, it is vital that the generated content be readable and not look "out-of-place". Users of generative image models for content creation might be more forgiving of model mistakes, because there the model assists a creative process in which users do not necessarily know exactly what they are looking for.
* Personalized assistive handwriting often requires the models to observe the user's handwriting and transfer that style to the generated output. Unlike other modalities, handwriting is a personally-identifiable data. Therefore, it is important for the models to run on-device, rather than server-side.
* Generating suggestions (for example when doing autocompletion in handwriting) requires the models to be fast enough to produce their suggestions before the user has moved on or decided to add new content themselves. When the content is produced too slowly, it gets in the way of the user's flow rather than helping. This problem is further exacerbated by the constraint that the models run on-device.
In this work, given a trained generative model of digital ink and a computation budget, we aim to produce readable outputs as often as possible, under the assumption that the model runs on-device. To achieve this goal, we consider two classes of approaches that work well together.
Sampling. This constrained ink modelling problem resembles text and audio generation.
Following the work that has been done there <cit.>, we first concentrate on using perturbed probability distributions for sampling from autoregressive models. This improves the quality within a single inference call, by picking a sampling technique that minimizes the number of repetitive or incoherent samples. Examples of generated digital ink can be found in Fig. <ref>.
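For concreteness, Top-K and Top-P filtering of a categorical distribution (e.g. over the tokens or mixture components predicted at one autoregressive step) can be sketched as follows.

```python
import numpy as np

def top_k_filter(probs, k):
    keep = np.argsort(probs)[-k:]               # indices of the k most likely outcomes
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

def top_p_filter(probs, p):
    order = np.argsort(probs)[::-1]             # most likely first
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    out = np.zeros_like(probs)
    out[order[:cutoff]] = probs[order[:cutoff]]
    return out / out.sum()

# next_index = np.random.choice(len(probs), p=top_p_filter(probs, 0.9))
```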
Ranking. We additionally train ranking models to predict the recognizability of an ink. We employ these models by first generating a diverse set of candidates and then ranking them to select the best output. This improves the quality if the time budget allows for multiple inference calls.
Our proposed ranking approach would actually work for any binary quality measure (like thresholded L_2 distance in the style embedding space for style transfer <cit.> or edit-aware Chamfer distance for spelling correction <cit.>), but we focus on recognizability, since likely for any application of digital ink synthesis, the output should be recognizable.
Our contributions are as follows[A notebook accompanying this submission that can run inference on example models for each dataset, data representation, and model type, and includes test label sets, is available here: <https://colab.research.google.com/drive/1AkwmDOkEIkifbOYEBdcB9PrR_Ll-fcmz>]:
* We use sampling and ranking techniques for digital ink generation, and perform an ablation study on the ranking model objective, training, and tuning. To our knowledge, ours is the first work on this topic in the digital ink space.
* We show that selecting appropriate sampling parameters improves the quality of the output significantly compared to the typically used baselines, across multiple datasets, model types, and data representations.
* We show that ranking further improves the quality, and discover that depending on the computational budget, the highest quality ranking models may not lead to optimal quality. We provide practical way of selecting the ranking model.
§ RELATED WORK
Errors in autoregressive generative models. Autoregressive generative models often generate samples with artifacts <cit.>. Artifacts appear when the generation process gets stuck in either high- or low-probability regions of the sampling space, resulting in two types of errors: overconfidence errors (usually manifested as repeated tokens) <cit.> and incoherence errors, respectively. We show examples of such errors during the digital ink generation process in Fig. <ref>. This is also known as the likelihood trap <cit.> and stems from exposure bias <cit.>, i.e. the difference between training done with 'teacher forcing' and inference <cit.>.
Sampling. One common way of finding the trade-off between overconfidence and incoherence errors, often used in Text-to-Speech (TTS) and Natural Language Processing (NLP), is sampling <cit.>, which modifies the distribution from which the points in the autoregressive model are sampled. Sampling from original distribution is called ancestral sampling; popular sampling techniques that extend it include Top-K <cit.> and Top-P, or nucleus <cit.> sampling. Originally introduced for text generation, they propose picking a word from the distribution of the top most likely next words, limited by either number (in Top-K) or cumulative probability (in Top-P). Variations of the sampling techniques above include Typical sampling <cit.>, which selects components closest to a dynamically selected probability, Mirostat sampling <cit.>, which select K in Top-K sampling adaptively, and Beam search <cit.>.
Ranking models. Another way to improve generation quality is to generate several samples and choose the best one among them. This is frequently done in information retrieval domains such as question answering <cit.>, text summarization <cit.>, and code generation <cit.>. Approaches most similar to ours are the ones that use ranking models for conditional generative modeling. In <cit.>, the ranking model is trained to predict the best text continuation, with positive samples coming from real text and negative samples coming from different parts of the text and model-generated continuations. In <cit.>, two ranking models are trained to predict the match between the generated audio and the target label, as well as between the generated audio and the source audio used for style extraction. They are combined with weights specified by the user to rank audio generated in a specific style.
Handwriting synthesis.
Two of the most popular models for digital ink generation are multi-layer LSTMs with monotonic attention over the label <cit.> (also known in TTS as Tacotron <cit.>) and the encoder-decoder Transformer architecture <cit.>. Other architectures include VRNN <cit.> used in <cit.>, Neural ODEs <cit.>, and Diffusion models <cit.>.
These architectures underpin applications such as sketch generation <cit.> and completion <cit.>, style transfer <cit.>, beautification <cit.>, spelling correction <cit.>, and assisted diagramming <cit.>.
Metrics for evaluating the quality of digital ink generative models of text typically include Character Error Rate for text generation readability <cit.>, writer identification for style transfer <cit.>, and human evaluation <cit.>.
Most digital ink generation approaches use either ancestral sampling or greedy sampling, with the exception of <cit.>, which uses biased sampling <cit.> for the task of generating synthetic training data.
To our knowledge, no studies on the effects of sampling and ranking for digital ink generation have been performed. Similarly, no studies have looked at the relationship between the generation speed and quality.
§ METHOD
Given an autoregressive generative model of digital ink that takes a text label as input and produces a sequence representing digital ink as output, we are interested in maximizing the average quality M_Θ_S,Θ_R(S, B, R) of the model output, while guaranteeing that the maximum inference time does not exceed a certain threshold 𝒯_𝓂𝒶𝓍. Here, S is the sampling method used by the generative model, B is the size of the batch for generation, and R is an inference-time parameter of the ranking model; Θ_S are the fixed, trained weights of the generative model, and Θ_R are the trainable parameters of the ranking model, which we describe below.
During inference, given a label, the generative model will use sampling method S to produce a batch of B digital inks, which will be scored according to the ranking model Θ_R. The highest-ranking sample will be returned as the output; if B=1, the ranking model is bypassed. Fig. <ref> illustrates the approach.
Our main results concern the trade-off between the inference time and model output quality, and are presented in Sec. <ref>. The rest of this section is organized as follows: we describe our approach to measuring quality and inference time in Sec. <ref>; Sec. <ref> outlines the data representation for digital ink and sampling methods S that can be used with it; Sec. <ref> describes the ranking models we use and how to train them.
§.§ Evaluation
We propose an evaluation method linked to the system's usability.
Similar to other works <cit.>, as quality measure M we use the Character Error Rate (CER) of a trained handwriting recognition model on the generated samples. This stems from the assumption that the generated text is not useful if it is not readable, regardless of other attributes like style and beauty.
A second axis of interest for usability is inference time. We report the worst-case inference time per character: we measure the worst-case latency under the assumption that exceeding the budget makes the functionality unusable for users, and we measure time per character since processing time is expected to scale linearly with the sequence length.
§.§ Data representation and sampling
Two frequently used representations of digital ink data are the raw and curve representations, which both encode the ink as a sequence of input tokens in ℝ^d×{0,1}^2, with the first d values describing the shape of the stroke between two points, and the last 2 binary values indicating whether (i) a particular token is at the end of a stroke, and (ii) whether it is the last token in the sequence (end of ink). For the raw representation, d=2 and the values describe the offset between two adjacent points; for the curve representation, d=6 and the values describe the parameters of a Bezier curve fit to a segment of the stroke <cit.>.
Following the approach of <cit.> and most of the later literature on the topic, we parameterize the output distribution of every step of the autoregressive generative model by a set of parameters (π, μ, Σ, e_s, e_i), where π, μ, Σ describe the weights, means, and covariances of a mixture of Gaussians, from which ℝ^d stroke parameters are sampled, and e_s and e_i describe the parameters of Bernoulli distributions from which the pen-up (end-of-stroke) and end-of-sequence events are sampled. Σ is a full covariance matrix for raw features (d=2) and diagonal otherwise. We provide more details in Sec. <ref>.
Sampling. We consider two types of distortions for the output distribution: distortion of the mixture weights π and distortion of the diagonal components of the covariance matrix Σ. To distort the mixture weights, we consider several standard approaches: Top-K (parameterized by the value of K), and Top-P and Typical sampling (both parameterized by the value of P). To distort the covariance matrix, we subtract a sampling bias value b from the diagonal elements of the covariance matrix, before applying the softplus <cit.> function to it to ensure positive values. This reduces the variance after the model has been trained, to avoid sampling in low-confidence regions. The sampling parameters S=(s,m,b) are therefore the sampling method s∈{Top-K, Top-P, Typical}, the mixture parameter m, and the sampling bias value b.
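To make the distortion step concrete, the following sketch shows one way the mixture-weight and covariance distortions could be applied at a single decoding step, assuming the model emits pre-softplus diagonal covariances; the NumPy-based structure and function names are illustrative, not the paper's code, and Typical sampling is omitted for brevity.

import numpy as np

def distort_mixture_weights(pi, method="top_p", m=0.9):
    """Restrict the GMM mixture weights pi with Top-K (m = K components kept)
    or Top-P (m = cumulative probability kept) sampling, then renormalize."""
    order = np.argsort(pi)[::-1]                  # components sorted by weight, descending
    keep = np.zeros_like(pi, dtype=bool)
    if method == "top_k":
        keep[order[: int(m)]] = True
    else:  # "top_p"
        cutoff = np.searchsorted(np.cumsum(pi[order]), m) + 1
        keep[order[:cutoff]] = True
    pi = np.where(keep, pi, 0.0)
    return pi / pi.sum()

def distort_covariance(raw_sigma_diag, bias=5.0):
    """Subtract the sampling bias from the pre-softplus diagonal covariance
    values, then apply softplus to keep them positive."""
    return np.logaddexp(0.0, raw_sigma_diag - bias)   # numerically stable softplus

def sample_offset(pi, mu, raw_sigma_diag, method="top_p", m=0.9, bias=5.0, rng=None):
    """Draw one d-dimensional stroke offset from the distorted mixture
    (diagonal covariances only, for simplicity)."""
    rng = rng or np.random.default_rng()
    pi = distort_mixture_weights(np.asarray(pi, dtype=float), method, m)
    sigma = distort_covariance(np.asarray(raw_sigma_diag, dtype=float), bias)
    k = rng.choice(len(pi), p=pi)                 # pick a mixture component
    return rng.normal(mu[k], np.sqrt(sigma[k]))   # sample from that Gaussian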
§.§ Ranking models
Running a ranking model to order the generated samples may be computationally costly. For this reason, we differentiate between a process to rank all candidates and one that ranks only the most promising ones.
Following the approach commonly used in information retrieval <cit.>, our ranking approach is two-staged, with a "fast" ranker ℛ_1 that runs on all B generated outputs simultaneously, and a slower, more trustworthy "good" ranker ℛ_2, which is used to re-rank the samples ranked highest by ℛ_1. The inference time parameter R of the ranking model, introduced at the beginning of this section, is the number of top samples according to ℛ_1 that are re-ranked by ℛ_2. When R=B, this corresponds to using only ℛ_2, and when R=1, only ℛ_1 is used. We describe both rankers below, and provide more details about them in Sec. <ref>.
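A minimal sketch of the two-stage selection at inference time is given below; the callables for the generator and the two rankers are hypothetical placeholders for whatever models are deployed, and stopping early at the first perfectly recognizable candidate is an optimization mentioned later in the discussion.

def generate_and_rank(label, generator, r1_score, r2_is_recognizable,
                      batch_size_B=8, rerank_R=4, sampling=("top_p", 0.9, 5.0)):
    """Two-stage candidate selection sketch.

    generator(label, sampling, n) -> list of n candidate inks (assumed API)
    r1_score(ink) -> float, higher = more likely recognizable (fast ranker R1)
    r2_is_recognizable(ink, label) -> (bool, cer) from a recognizer (slow ranker R2)
    """
    candidates = generator(label, sampling, batch_size_B)
    if batch_size_B == 1:
        return candidates[0]                      # ranking is bypassed

    # Stage 1: the fast ranker orders all B candidates.
    ranked = sorted(candidates, key=r1_score, reverse=True)
    if rerank_R <= 1:
        return ranked[0]                          # only R1 is used

    # Stage 2: the recognizer re-ranks the top R candidates; stop early at the
    # first candidate whose recognition result matches the label exactly.
    best, best_cer = ranked[0], float("inf")
    for ink in ranked[:rerank_R]:
        ok, cer = r2_is_recognizable(ink, label)
        if ok:
            return ink
        if cer < best_cer:
            best, best_cer = ink, cer
    return best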
"Good" ranker ℛ_2. Since our goal is to generate samples with lowest possible Character Error Rate, an obvious choice for ℛ_2 to use the recognizer model that measures CER as the ranking model - that is, select the sample that is perfectly recognizable or has the lowest character error rate. However, running the recognizer on-device can be slow depending on the implementation, and we will see that having a faster first stage is beneficial.
"Fast" ranker ℛ_1.
Following the approach of <cit.>, our ℛ_1 ranker is a model learned to predict whether the generated sample is recognizable or not, that is, whether the recognizer would return the target label given the generated ink. In other words, this ranker is an approximation of the "good" ranker and tries to predict its output. Since inference time is one of the main focuses of our work, we consider a much simpler ranking model than the one described in <cit.>. Instead of looking at both the generated ink and target label, our ranker just uses the generated ink. It consists of two convolutional layers followed by global average pooling. We study this choice of ranking model in terms of inference speed and the types of errors that it can address in Sec. <ref>.
Training dataset for ℛ_1. As described above, the ℛ_1 ranker is trained to be a fast approximation of the ℛ_2 ranker, and it predicts whether synthesized ink is even close to being recognizable. To train ℛ_1, we don't use real data: we use the synthesizer to generate a sample for a given text label, and the ℛ_2 ranker to generate a binary label of whether the sample is recognizable (the recognition result matches the text label) or not. The pair of generated ink and binary label is the training data for ℛ_1 (more details in Sec. <ref>).
We first train the ranking model, and then select the sampling method S that performs best on the 𝒟_tune dataset. Doing the reverse would require training a ranking model for each possible sampling parameter setting, which would be prohibitively expensive. This means that during training of ℛ_1, the sampling method is not yet known. To accommodate this, we create the training dataset for ℛ_1 by generating samples with (s, m, b) selected at random for each sample. This allows ℛ_1 to be robust to any future selection of S, so that the sampling parameters can be chosen after the ranker is trained. We evaluate this method of training dataset creation in Sec. <ref>.
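The dataset construction for ℛ_1 could look roughly like the sketch below, with the generator and the ℛ_2 recognizer treated as black-box callables; the pool of sampling settings mirrors the one listed later in the implementation details, but the exact interfaces are assumptions.

import random

def build_r1_training_set(labels, generator, r2_is_recognizable, n_samples=100_000):
    """Create (ink, binary_label) pairs for training the fast ranker R1.

    For each synthetic sample, the sampling method and its parameters are drawn
    at random so that R1 stays robust to whichever sampling is tuned later."""
    methods = ["top_k", "top_p", "typical"]
    m_values = {"top_k": list(range(1, 11)),
                "top_p": [i / 10 for i in range(11)],
                "typical": [i / 10 for i in range(11)]}
    biases = [0, 1, 5, 25, 100, float("inf")]

    dataset = []
    for _ in range(n_samples):
        label = random.choice(labels)
        s = random.choice(methods)
        sampling = (s, random.choice(m_values[s]), random.choice(biases))
        ink = generator(label, sampling, 1)[0]
        ok, _ = r2_is_recognizable(ink, label)     # R2 supplies the binary target
        dataset.append((ink, int(ok)))
    return dataset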
§ RESULTS
§.§ Setup
To show that both sampling and ranking bring forth significant improvements in generation quality, and show the robustness of the proposed approach, we will evaluate it on 4 datasets across 3 different languages, with two frequently used model types, and two data representations.
We consider 4 digital ink datasets for text generation: English <cit.> and <cit.>, Vietnamese <cit.>, and an internal dataset of mathematical expressions. We use the two data representations described in Sec. <ref>, raw and curve, and evaluate two different model types, Tacotron <cit.> and Transformer <cit.>.
§.§ Implementation details
For both Tacotron and Transformer, we use 10-component Gaussian mixtures in the model output. For Tacotron, we use one-hot encoding of labels and 3 layers of size 256 in the decoder. For the Transformer, we use 2 layers with 4 attention heads and embedding size 64 in the label encoder, and 6 layers with 4 attention heads and embedding size 128 in the decoder. We use the Pre-LN implementation <cit.>. We train the models with Adam with a global clipnorm of 0.1, a learning rate of 1e-3 for Tacotron, and the learning rate schedule described in <cit.> for the Transformer. Models are trained for 2× 10^6 steps with batch size 256. For training the ℛ_1 ranker, we generate 10^5 samples with labels from the generator training data as the training set, and 1000 samples with labels from the generator validation data as the validation set. As described in Sec. <ref>, for each sample we select a sampling method at random to generate it. The pool of sampling methods includes Top-P and Typical sampling with m∈{0.0,0.1,…,1.0}, Top-K sampling with m∈{1,2,…,10}, and sampling biases b∈{0,1,5,25,100,∞}. The ℛ_2 ranker is a state-of-the-art recognizer trained on internal data not related to the public datasets; it is an LSTM-CTC model with 6 layers of size 216 <cit.>, combined with word and character language models during beam search decoding, similar to <cit.>.
For , we use testset_v for validation, testset_f for tuning sampling parameters (via grid search over all possible samplings), and testset_t for testing. For , we use the version of the dataset split by individual words. Since this dataset does not have the tuning subset, we use validation data labels for tuning sampling parameters. For , since this dataset does not have tuning or testing subset, we extracted 1500 labels whose lengths have the same mean and variance as the validation data, from the labels present in the IAMonDO dataset (we include these labels with the submission for clarity). Models were implemented in Tensorflow and the time measurements were done after conversion to TFLite on a Samsung Galaxy Tab S7+ tablet.
§.§ Baselines
Sampling model baseline. We compare the model with tuned sampling parameters to a model with a fixed sampling method. Since different works in the literature consider different sampling methods, to allow a fair comparison with them, as a baseline we report the best result with S=(Top-P,m,b), m∈{0.0, 1.0}, b∈{0.0,∞}, that is, greedy or ancestral sampling of the component with infinite or zero bias for the offset parameters. We will refer to the optimal sampling method as S_opt, and to the baseline as S_base.
Ranking model baseline. We compare the ℛ_1 ranker that predicts the recognizability of the generated ink, described in Sec. <ref>, with an approach described in <cit.>, which trains a model to distinguish between real and synthesized samples, with the goal of selecting the most "real-looking" samples. We will refer to it as ℛ_base.
§.§ Quantitative analysis
Effect of sampling and ranking
In Table <ref>, we compare the results of applying different sampling and ranking techniques for all datasets, model types, and data types.
A first major finding of our study is that tuning the sampling technique helps in almost all cases - in 13 cases out of 16, with the remaining ones being ties.
The second conclusion is that using a ranking model helps in all cases.
There is still a significant gap between the performance when using ℛ_1 and the quality-optimal ℛ_2. However, as we show in the next paragraph, achieving such quality comes with penalties for inference time.
Finally, we can conclude that using a ranker that predicts whether the ink is recognizable is superior to using a baseline ranker <cit.> that predicts whether a given ink is real or synthetic. However, the latter ranker also helps in most cases compared to not using ranking at all.
Comparison under a time budget.
The inference time for the model consists of 3 separate parts: (i) generating a batch of B samples; (ii) ranking them with the ℛ_1 ranker (unless B=R, in which case we can use just ℛ_2); (iii) Re-ranking the top R candidates with ℛ_2 (unless B=1 in which case the generated sample can be returned directly). We show how these values scale with the input batch size for the model (that is, B for generative model and ℛ_1, and R for ℛ_2), in Table <ref>, and the trade-off between CER and inference time in Fig. <ref>.
Here we present the comparison of model quality vs inference time budget, by varying the values of B and R.
To connect the input sequence length to inference time, we fix the maximum number of decoding steps the model is allowed to make per input sequence symbol. In other words, our inference time is measured as time needed for one decoding step times the maximum allowed number of tokens per input symbol. The generation is always run until the maximum number of frames. In the models we used for this evaluation, 99% of the samples generated less than 5 frames per output character, which is the ratio that we fixed.
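Under these conventions, the worst-case per-character latency decomposes into the three stages listed above; the helper below illustrates the bookkeeping with placeholder timing callables rather than the measured values reported in the table, and the skipping rules for R=1, R=B, and B=1 reflect our reading of the setup.

def worst_case_ms_per_char(B, R, t_gen, t_r1, t_r2_per_candidate,
                           max_frames_per_char=5):
    """Assemble a worst-case per-character latency estimate.

    t_gen(B): ms per decoding step when generating a batch of B candidates
    t_r1(B): ms per character to score B candidates with the fast ranker R1
    t_r2_per_candidate: ms per character to recognize one candidate with R2
    All timing inputs are placeholders, not the paper's measurements."""
    total = max_frames_per_char * t_gen(B)        # (i) generate B candidates
    if B == 1:
        return total                              # single candidate: no ranking at all
    if R < B:
        total += t_r1(B)                          # (ii) fast ranker scores all B
    if R > 1:
        total += R * t_r2_per_candidate           # (iii) recognizer re-ranks the top R
    return total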
Table <ref> shows the inference time for synthesis model, ℛ_1, and ℛ_2, in ms per character as a function of the input batch size. Notice that both the autoregressive generative model and the convolution-based ranker are able to take advantage of vectorization and are 7.5 and 3.2 times faster for large batch sizes than if run individually. The recognizer, used as ℛ_2, however, does not parallelize well due to CTC <cit.> decoding and combination with language models, thus scaling linearly with the batch size.
Based on the data in Table <ref>, we plot the numbers for model quality and worst-case inference time for different values of B and R in Fig. <ref>. Points with (B=4,R=2), (B=8,R=4), and (B=16,R=8) are on the Pareto frontier, verifying our earlier statement that there are scenarios where the best performance can be achieved by combining the two rankers. Points (B=2,R=1) and (B=4,R=1) are also on the frontier, verifying our statement that there are cases where the best performance can be achieved without using the recognizer part of the ranking model at all.
Discussion and limitations. We note that the findings we present here are not universal, and the exact inference time depends on a multitude of factors such as the specific generative model type and size, hardware, length of the sequence to be generated (processor caching makes longer sequences faster on a per-character basis), and ranking model type and size (for the recognizer ranker, we rely on a model using CTC decoding, which is hard to vectorize, whereas Seq2Seq models may parallelize better, although they usually have worse accuracy). Furthermore, the average/median inference time might differ from the worst case significantly: the generative model produces an average of 3.7 output frames per input character, compared to the 5 we used for the worst-case analysis. Also, when using the recognizer as a ranker, we need not recognize all of the candidates, as we can stop at the first candidate that is perfectly recognizable, which may happen sooner or later depending on the exact sampling type and model quality. However, we believe that this does not invalidate our findings: depending on the time budget, better performance may be achieved by using a fast learned ranking model or combining it with a recognizer.
Ablation study.
In Table <ref> we evaluate our choice of the construction of the ranker training dataset, and tuning of the sampling parameters for every setup (generation model type and feature type).
Firstly, we compare our approach of generating training data for the ranker by using random sampling parameters for every label to two other baseline approaches: (i) using a fixed ancestral sampling when generating the training data; this intuitively makes sense, as sampling from the "widest" possible distribution should cover the whole diversity of the generated data; (ii) for each setup, using the sampling parameters that yield the lowest CER when ℛ_2 is used as the ranker; this makes sense as ℛ_1 tries to approximate ℛ_2, and it is reasonable to assume that their optimal sampling parameters should be similar. We observe that on average our proposed way of constructing a training dataset is optimal, never being more than one decimal point worse than the other approaches, but at times significantly outperforming them.
Secondly, we show that the optimal sampling parameters differ a lot between the setups, so it is important to tune them for each setup. The only reliable signals we observed were that for the representation, it is often preferable to sample more "greedily" (lower value of K in Top-K or P in Top-P sampling) than for the representation, and that the optimal samplings seem to be somewhat close between the two model types.
§.§ Qualitative analysis
In this section, we first attempt to confirm that: (i) the two types of errors, overconfidence and incoherence, actually happen when generating digital ink samples, and (ii) both the choice of sampling and ranking has effect on these errors. Results are presented with the Tacotron model on Deepwriting dataset with curve representation, but we have observed largely similar trends for other cases. Afterwards, we present examples of model output on various datasets.
Fig. <ref> shows examples of generated ink with various samplings, with both incoherence and overconfidence examples visible. As we can observe, overconfidence errors typically result in very long ink that cannot be recognized as the label, with a repeating pattern inside. Given this observation, we attempt to quantify the number of errors of each type by looking at samples that cannot be recognized (meaning the label returned by the recognizer differs from the input label to the generative model), and, within those samples, whether the generation process reached the maximum number of steps (implying overconfidence) or not (implying incoherence). Table <ref> shows the number of errors, estimated by this approach, as a function of the sampling parameters (value of p in Top-P sampling), and it confirms the intuition about how it should behave. We can see that as the sampling parameters go from greedy sampling closer to ancestral sampling, the number of overconfidence errors goes down, while the number of incoherence errors goes up. When we use the ranking model, we see that the number of incoherence errors first goes down, and then goes up. We attribute this to the fact that as sampling becomes more diverse, the ranking model is able to select better candidates, but as sampling becomes too diverse, all candidates start being less recognizable. Overall, using ranking seems to reduce the number of overconfidence errors by 50-90%, and the number of incoherence errors by up to 50%.
Fig. <ref> shows examples of the model outputs, sorted left-to-right according to the score provided by the ranker. As can be seen, the rightmost sample in every row is recognizable and matches the label, while the leftmost sample is mostly not recognizable. It is expected that in many cases at least one of the 5 samples is not recognizable: if that were not the case, it would mean that the selected sampling method is too conservative and should be relaxed to produce samples with higher diversity (which would trade off having all 5 candidates recognizable in "easy" cases for improved performance in "difficult" cases where all 5 samples were not recognizable).
§ CONCLUSION
In this paper, we investigated the effects of combining sampling and ranking strategies to improve digital ink generation.
These methods, used before in other domains such as NLG and TTS, proved to be highly useful, and complementary to each other in the case of digital ink. Until now, however, they were not explored in this domain, with most methods using ancestral or greedy sampling, and no candidate ranking.
We evaluate sampling and ranking techniques on four datasets: two containing writing in English, one in Vietnamese, and a fourth with mathematical formulas. We test the robustness of the findings using two model types (Tacotron and Transformer) and two common ink data representations (raw and curve). In all combinations, we report significant improvements in the recognizability of the synthetic inks: taken together, a well-chosen sampling method followed by fast ranking consistently improves recognizability, in many cases halving the character error rate.
An important factor in the perceived quality of ink synthesis is speed. Potential applications, such as handwriting autocompletion, spelling correction, and beautification usually process user inputs on-device, so ink generative models need to be fast. We thus report the findings with respect to a given computational budget.
|
http://arxiv.org/abs/2306.03375v1
|
20230606032947
|
Identifying Shared Decodable Concepts in the Human Brain Using Image-Language Foundation Models
|
[
"Cory Efird",
"Alex Murphy",
"Joel Zylberberg",
"Alona Fyshe"
] |
cs.AI
|
[
"cs.AI",
"cs.CV"
] |
Identifying Shared Decodable Concepts
in the Human Brain Using Image-Language Foundation Models
Cory Efird
Computing Science and Psychology
University of Alberta
Alex Murphy
Computing Science and Psychology
University of Alberta
Joel Zylberberg
Physics and Astronomy
York University
Alona Fyshe
Computer Science and Psychology
University of Alberta
==============================================================================================================================================================================================================================================================================================================================================================
We introduce a method that takes advantage of high-quality pretrained multimodal representations to explore fine-grained semantic networks in the human brain. Previous studies have documented evidence of functional localization in the brain, with different anatomical regions preferentially activating for different types of sensory input. Many such localized structures are known, including the fusiform face area and parahippocampal place area. This raises the question of whether additional brain regions (or conjunctions of brain regions) are also specialized for other important semantic concepts. To identify such brain regions, we developed a data-driven approach to uncover visual concepts that are decodable from a massive functional magnetic resonance imaging (fMRI) dataset. Our analysis is broadly split into three sections. First, a fully connected neural network is trained to map brain responses to the outputs of an image-language foundation model, CLIP <cit.>. Subsequently, a contrastive-learning dimensionality reduction method reveals the brain-decodable components of CLIP space. In the final section of our analysis, we localize shared decodable concepts in the brain using a voxel-masking optimization method to produce a shared decodable concept (SDC) space. The accuracy of our procedure is validated by comparing it to previous localization experiments that identify regions for faces, bodies, and places. In addition to these concepts, whose corresponding brain regions were already known, we localize novel concept representations which are shared across participants to other areas of the human brain. We also demonstrate how this method can be used to inspect fine-grained semantic networks for individual participants. We envisage that this extensible method can also be adapted to explore other questions at the intersection of AI and neuroscience.
§ INTRODUCTION
To navigate the world, individuals must learn to quickly interpret what they see. Evolution created pressure to quickly extract certain types of visual information. For example, recognizing and interpreting faces is core to many of our social interactions, and recognizing animate objects is key to avoiding a predator (or pursuing prey).
As a byproduct, the visual system identifies a core set of concepts necessary for a successful existence in our world. But what are these core concepts, and how does the brain represent them? Seeking the answer to this question has been central to decades of neuroscience research.
Some have argued that the brain has specific areas tuned to detecting specific concepts. There is significant evidence suggesting there are areas of the brain that preferentially activate for stimuli containing faces <cit.>, places <cit.>, and more recently, there have been reports of food-specific brain areas <cit.>. The controversy around these findings is driven largely by the observation that the brain areas are not “tuned” specifically for faces or places; they also respond to other visual stimuli, meaning they are not face- or place-specific <cit.>. Thus, the key question remains unanswered: Are there dimensions of meaning recoverable from the brain's responses to image stimuli that are consistent in 1) content, and 2) localization across participants?
In this work we take an entirely data-driven approach to uncovering dimensions of meaning within the human brain. We use the Natural Scenes Dataset <cit.>, one of the largest and most comprehensive visual stimulus functional Magnetic Resonance Imaging (fMRI) datasets to date, and CLIP, a shared text and image embedding space <cit.>. We present a new decoding model that produces high top-1 accuracy, predicting CLIP space from fMRI. We then use the predicted CLIP space to learn a new embedding space we call the Shared Decodable Concept (SDC) space. SDC-space:
* is trained across participants specifically to identify the dimensions of meaning that are decodable from fMRI
* has a small number of coherent concepts per dimension
* shows consistent cross-participant localization in brain-space
SDC-space allows for a data-driven mapping of concepts to brain areas, which allowed us to find several new concepts localized to specific brain areas.
The hunt for concept-specific areas of the brain has been a decades-long venture. Very early work focused on the tuning of neurons for very low level features <cit.>, followed by the discovery of brain areas preferentially activated for specific concepts and dimensions of semantic meaning <cit.>. These studies were largely hypothesis-driven, with stimuli chosen specifically to search for areas tuned to certain concepts.
Hypothesis-driven experimental design is a cornerstone of neuroscience research, and has contributed greatly to our current understanding of the brain. However, recently some have argued for a more data-driven naturalistic approach to neuroscience <cit.>. Our work differs from previous data-driven approaches in that we perform dimensionality reduction in CLIP embedding space. In contrast, other work does the dimensionality reduction directly in fMRI voxel space, often for a single ROI. By using voxels from multiple ROIs, our method makes use of all of the information decodable from cortex. In addition, our SDC space has dimensions that are highly interpretable. This is in stark contrast to other methods, like Principal Components Analysis (PCA), where the different dimensions often lack interpretability. Our SDC space also leverages connections between the true and predicted CLIP space, which is not possible with typical PCA-style analyses.
A data-driven approach has the potential to uncover brain areas tuned to concepts that we might not otherwise have considered, as well as to expand our understanding of the specificity of certain brain areas beyond narrow visual concept classes. What follows is a framework for uncovering such visual concepts that suggests multiple new avenues for future hypothesis-driven research.
§ DECODING CLIP-SPACE FROM BRAIN IMAGES
To identify shared decodable concepts in the brain, we require a mapping from brain space to a suitable representational space. In this section we describe the pieces necessary for creating this mapping: a multimodal image-language embedding model (CLIP), a brain imaging dataset (NSD), and our method to map from per-image brain responses to their associated multimodal embeddings. We consider two models in this section, and verify that our proposed neural network decoder outperforms a regression-based linear model.
§.§ Data
fMRI Data
The natural scenes dataset (NSD) is a massive fMRI dataset acquired to study the underpinnings of natural human vision. Eight participants were presented with 30,000 images (10,000 unique images over 3 repetitions) from the Common Objects in Context (COCO) naturalistic image dataset <cit.>. A set of 1,000 shared images was shown to all participants, while the other 9,000 images were unique to each participant. Single-trial beta weights were derived from the fMRI time series using the GLMSingle toolbox <cit.>. This method fits numerous haemodynamic response functions (HRFs) to each voxel and applies an optimised denoising technique and voxelwise fractional ridge regression, specifically designed for single-trial fMRI acquisition paradigms. Some participants did not complete all sessions, and three sessions were held out by the NSD team for the Algonauts challenge. Further details can be found in <cit.>.
Representational Space for Visual Stimuli
To generate representations for each stimulus image, we use a model trained on over 400 million text-image pairs with the contrastive language-image pretraining objective (CLIP <cit.>). The CLIP model consists of a text-encoder and image-encoder that are jointly trained to maximize the cosine similarity of corresponding text and image embeddings in a shared low-dimensional space. We use the 32-bit Transformer model (ViT-B/32) implementation of CLIP to create 512-dimensional representations for each of the stimulus images used in the NSD experiment. We train our decoder to map from fMRI responses during image viewing to the associated CLIP vector for that same image.
§.§ Data Preparation
Data Split
We randomly split the per-image brain responses X and CLIP embeddings Y_CLIP into training (X^Train, Y^Train_CLIP), validation (X^Val, Y^Val_CLIP), and test (X^Test, Y^Test_CLIP) folds. The validation and test folds were each chosen to have exactly 1,000 images. Some participants in the NSD did not complete all scanning sessions and only viewed certain images once or twice. We assign these images to the training set. Of the shared 1,000 images, 413 were shown three times to every participant across the sessions released by NSD. These 413 images appear in each participant's testing fold.
Voxel Selection
The noise ceiling is often used to identify the voxels that most reliably respond to visual stimuli. The NSD fMRI data comes with voxelwise noise ceiling estimates, but they are calculated using the full dataset. These estimates can be used to extract a subset of voxels for decoding analyses, but this takes into account voxel sensitivity to images we later want to tune and test on, and is a form of double-dipping <cit.>. We therefore re-calculated the per-voxel noise ceiling estimates specifically on the designated training data only. Voxels with noise ceiling estimates above 5% variance explainable by the stimulus were used as inputs to the decoding model, resulting in 10k-30k voxel subsets per participant (see Supplementary Info for exact per-participant voxel dimensions).
§.§ Decoding Methodology
Decoding Model
The decoding model g: ℝ^v→ℝ^512 is trained to map vectors of brain responses X = [x_1, … , x_n], x_i ∈ℝ^v, to the CLIP embeddings of the corresponding stimulus images Y_CLIP = [y_1, …, y_n], y_i ∈ℝ^512. Here n is the number of training instances, and v is the number of voxels. An illustration of the decoding procedure is given in <ref>. We define g to be a multi-layer perceptron (MLP) with a single hidden layer of size 5,000, followed by a leaky ReLU activation (slope=0.01). The model is trained for 12 epochs (approximately 1,500 iterations) with the Adam optimizer and a batch size of 128. The learning rate is initialized to 1e-4 and decreased by a factor of 10 after epochs 3, 6, and 9. We train the brain decoder using the InfoNCE loss function <cit.>, which is defined as:
ℒ_contrastive = -log [ exp(q·k_+ / τ) / ∑_{i=0}^{K} exp(q·k_i / τ) ]
In the original CLIP setting, q and k both represent image and language embeddings, and the loss is minimized when these embeddings have a high similarity for the same images (positive class) and low similarity otherwise. In our implementation, we replace the language embeddings with the brain responses to images, and we set τ = 1. We compare our model g to a baseline ridge regression model trained on the same data. We used grid search to select the best ridge regularization parameter λ∈{0.1, 1, 10, 100, 1000, 10000, 100000} using the validation data.
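A compact sketch of the decoder and its contrastive objective is shown below; we assume a PyTorch implementation (the paper does not state its framework), and the data pipeline is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainToCLIP(nn.Module):
    """MLP decoder g: voxel responses (v-dim) -> CLIP embedding (512-dim),
    with one hidden layer of 5,000 units and a leaky ReLU, as described above."""
    def __init__(self, n_voxels, hidden=5000, clip_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, hidden),
            nn.LeakyReLU(0.01),
            nn.Linear(hidden, clip_dim),
        )

    def forward(self, x):
        return self.net(x)

def info_nce(pred, target, tau=1.0):
    """In-batch InfoNCE: each predicted embedding should be most similar to its
    own CLIP target within the batch (positives on the diagonal of the logits)."""
    logits = pred @ target.T / tau                       # (batch, batch) similarities
    labels = torch.arange(pred.shape[0], device=pred.device)
    return F.cross_entropy(logits, labels)

# Illustrative training step (optimizer settings follow the text; data loading omitted):
# model = BrainToCLIP(n_voxels=20000)
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = info_nce(model(x_batch), y_clip_batch); loss.backward(); opt.step()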
Evaluation
We evaluate our models using top-k accuracy, which is computed by sorting in ascending order all true representations {y_1 … y_n } by their cosine distance to a predicted representation ŷ_i. Top-k Accuracy is the percentage of instances for which the true representation y_i is amongst the top-k items in the sorted list. Chance top-k accuracy is 100 · k/n% where n is the number of held-out data points used for evaluation.
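The metric can be computed directly from the two embedding matrices, as in the following sketch (ties are counted in the model's favor, which is our assumption).

import numpy as np

def top_k_accuracy(y_true, y_pred, k=1):
    """Fraction of items whose true embedding is among the k nearest (by cosine
    distance) to the predicted embedding, searched over all n held-out true
    embeddings. y_true, y_pred: (n, d) arrays with matching rows."""
    yt = y_true / np.linalg.norm(y_true, axis=1, keepdims=True)
    yp = y_pred / np.linalg.norm(y_pred, axis=1, keepdims=True)
    sims = yp @ yt.T                          # cosine similarity matrix, (n, n)
    diag = np.diag(sims)                      # similarity of each prediction to its match
    ranks = (sims > diag[:, None]).sum(axis=1)  # how many items beat the true match
    return float(np.mean(ranks < k))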
Figure <ref> shows the results of this evaluation. The MLP model outperforms ridge regression across all values of k, motivating the need for this more complex model.
Recall that our end goal is to identify shared decodable concepts (SDC) in the brain. Our methodology for this task relies on the predicted CLIP vectors, and so an accurate decoding model is of utmost importance.
§ OPTIMIZING FOR SHARED DECODABLE CONCEPTS BY TRANSFORMING CLIP-SPACE
Because CLIP was trained on images and text, it is an efficient embedding space for those modalities. However, we are interested in the dimensions of meaning available in images that are decodable from fMRI recordings. To explore the brain-decodable dimensions of meaning in CLIP space, we pursued two directions. First, we learned a mapping to transform CLIP into a pre-existing 49-dimensional model which was trained on human behavioral responses to naturalistic image data from the THINGS-initiative <cit.>. Second, to find dimensions of meaning specifically available in fMRI data, we explored several possible linear projections of the brain-decoded CLIP embeddings, Y_Brain. This method, which combines predicted CLIP embeddings across participants, produced dramatic increases in top-1 accuracy over the original CLIP space and the THINGS space transformation of CLIP (Figure <ref>).
§.§ THINGS Concepts
THINGS Experiment
The THINGS-Images database consists of 1854 object classes with 12 images per class <cit.>. These images were used to gather approximately 1.46 million responses to an odd-one-out task, during which MTurk workers were asked to select the one image (from a group of 3) that least belonged in the group. These responses were then used to learn THINGS object embeddings [t_1,..., t_1854] = T , t_i ∈ℝ^49. Initially, T is randomly initialized. Then, for each triplet of classes i, j, k, and the human-chosen odd one out (i), the model is trained to maximize the dot product of the embeddings for the non-odd-one-out concepts (t_j ·t_k), and minimize the other two dot products (t_i ·t_j and t_i ·t_k). The model is constrained to ensure non-negativity and encourage sparsity in T. After training, T contains 49 human-interpretable dimensions that are most important for performing the odd-one-out similarity judgments. These 49 dimensions were assigned semantic labels by hand.
Translation from CLIP to THINGS space
We trained a function h_THINGS: ℝ^512→ℝ^49 to map from CLIP space to THINGS space. To train this mapping, all 12 images for each of the 1,854 THINGS object classes are passed into the CLIP image encoder to obtain U∈ℝ^(12·1854) × 512. We then averaged CLIP embeddings for all images within an object class to obtain U_avg∈ℝ^ 1854 × 512. These averaged embeddings are used to fit a linear ridge regression model:
min_{W_THINGS} ‖ U_avg W_THINGS^T - T ‖_2^2 + α ‖ W_THINGS ‖_2^2
Because the THINGS-embeddings T are constrained to be non-negative, after training we introduced an additional ReLU operation, h_THINGS(Y) = ReLU(YW_THINGS^T), and further finetuned W_THINGS. Empirical results for this fine tuning can be seen in the Supplementary Material. The CLIP-to-THINGS mapping function h_THINGS allows us to map CLIP embeddings derived from NSD stimulus images to THINGS space. In addition, we can use brain responses to those images passed through the decoder g(X) to derive decoded THINGS embeddings directly from the NSD fMRI.
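A sketch of the ridge fit and the resulting mapping function is given below; the regularization strength and the omission of the subsequent finetuning step are simplifications on our part.

import numpy as np
from sklearn.linear_model import Ridge

def fit_clip_to_things(U_avg, T, alpha=1.0):
    """Fit W_THINGS so that U_avg @ W_THINGS.T approximates the 49-dim THINGS
    embeddings T. The ridge fit ignores the ReLU; the paper additionally
    finetunes the weights after adding it (not reproduced here).
    alpha is a placeholder regularization strength."""
    ridge = Ridge(alpha=alpha, fit_intercept=False)
    ridge.fit(U_avg, T)                     # U_avg: (1854, 512), T: (1854, 49)
    return ridge.coef_                      # W_THINGS: (49, 512)

def h_things(Y_clip, W_things):
    """Map CLIP embeddings (n, 512) into THINGS space (n, 49), with the
    non-negativity of THINGS enforced by a ReLU."""
    return np.maximum(Y_clip @ W_things.T, 0.0)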
Results for computing top-k accuracy in THINGS space appear in Figure <ref> (green bar). Decoding performance in THINGS space is lower than in the original CLIP space. This implies that the THINGS concepts, originally derived from behavioral data, do not sufficiently capture the visual concepts available in CLIP space that are decodable from fMRI. This motivates the search for a new embedding space tuned specifically for the decodable concepts shared amongst all participants in the NSD dataset.
§.§ Finding Shared Decodable Concepts (SDC)
In this section, we describe our method for deriving a transformation of CLIP space that reveals brain-decodable interpretable dimensions that are shared across participants. First, each participant's brain-decoded and ground-truth stimulus representations Y_CLIP^Val, Y_Brain^Val∈ℝ^3000 × 512 are concatenated into shared matrices Y_CLIP^ValAll, Y_Brain^ValAll∈ℝ^8·3000 × 512. The function that maps CLIP space to a shared decodable concept (SDC) space is h_SDC: ℝ^512→ℝ^c, where c is the chosen dimensionality of the SDC space. Similar to h_THINGS, the mapping function is defined as a multiplication by a weight matrix followed by a ReLU, i.e. h_SDC(Y) = ReLU(YW_SDC^T). The weight matrix W_SDC is found by optimizing
min_{W_SDC} ℒ_contrastive( LeakyReLU(Y_CLIP^Val W_SDC^T), LeakyReLU(Y_Brain^Val W_SDC^T) )
The weight matrix W_SDC is randomly initialized and trained for 10,000 iterations using the Adam optimizer, a batch size of 3,000, learning rate 2e-4. A leaky ReLU with a negative slope of 0.05 is used during optimization because it encourages convergence of the SDC space components. The SDC model is fit for different numbers of components c ∈{32, 64, 128, 256, 512, 1024}.
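The optimization of W_SDC can be sketched as follows, again assuming PyTorch and reusing the in-batch contrastive loss from the decoder; the temperature and exact batching scheme are our assumptions, and at inference the leaky ReLU is replaced by a plain ReLU as described above.

import torch
import torch.nn.functional as F

def fit_sdc(Y_clip_val, Y_brain_val, c=32, steps=10_000, lr=2e-4, batch=3000,
            neg_slope=0.05, tau=1.0):
    """Learn W_SDC (c x 512) so that leaky-ReLU projections of ground-truth and
    brain-decoded CLIP embeddings agree under the contrastive loss.
    Y_clip_val, Y_brain_val: concatenated validation embeddings, (N, 512) tensors."""
    W = torch.randn(c, 512, requires_grad=True)
    opt = torch.optim.Adam([W], lr=lr)
    N = Y_clip_val.shape[0]
    for _ in range(steps):
        idx = torch.randint(0, N, (batch,))
        p = F.leaky_relu(Y_clip_val[idx] @ W.T, neg_slope)   # projected ground truth
        q = F.leaky_relu(Y_brain_val[idx] @ W.T, neg_slope)  # projected brain decodings
        logits = q @ p.T / tau                               # in-batch contrastive pairing
        labels = torch.arange(batch)
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return W.detach()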
Results for computing top-k accuracy in SDC space appear in Figure <ref> (orange bars). Decoding performance in SDC space is higher than in the original CLIP space, and grows with increasing dimension. This implies that there are visual concepts available in CLIP space that are decodable from fMRI, but that CLIP contains information not available from the fMRI in our experimental setting. Figure <ref> depicts the top 10 nearest-neighbour images for each learned concept vector, namely, each row in W_SDC with c = 32. We then visualized a selection of concept vectors by projecting a larger selection of nearest-neighbours (N=250) down to 2-dimensional space using t-distributed stochastic neighbour embedding (t-SNE) projection. Figure <ref> outlines two dimensions we found to represent animal concepts. Further dimensions associated with other semantically-coherent concepts are given in the Appendix: food (Fig. <ref>), household rooms (Fig. <ref>), buildings (Fig. <ref>) and images associated with strong uniform backgrounds (skies, snow-covered mountains etc.) (Fig. <ref>). The next sections explore this new SDC space for consistency and specificity across participants.
§ BRAIN-DECODABLE CONCEPTS THAT CONSISTENTLY CORRESPOND TO SPECIFIC BRAIN AREAS
We localize each concept dimension from our learned W_SDC matrix to a small subset of voxels with a masking procedure that selects sparse voxel sub-groups. We investigate the spatial contiguity and sparsity of concept voxel sub-groups, and the cross-participant and cross-concept consistency via participant- and concept-specific voxel masks, m_i,s. To test consistency across participants, we define a new metric which takes into account the fractional overlap of voxels present in concept masks across the ROIs calculated using the Human Connectome Project Atlas (HCP-MMP1) for each participant. This allows us to determine broad spatial similarity patterns across participants in their native brain spaces (not aligned to an average brain template), allowing smaller voxel subgroups tuned to finer-grained semantic distinctions to be compared, while maintaining idiosyncratic participant-specific functional anatomy.
§.§ Finding Concept Brain Areas
For each CLIP concept vector w_i ∈ℝ^512 from W_THINGS and W_SDC, our objective is to find a sparse binary voxel mask m_i,s∈{0, 1}^v (v is the number of voxels) that defines a set of voxels that support the decodability of the concept w_i for participant s ∈{1, …, 8}. The masks are derived by fitting a LASSO regression model
min_{m_{i,s}^{lasso}} (1/2n) ‖ X^Val m_{i,s}^{lasso} - Y^Val_CLIP w_i ‖_2^2 + α ‖ m_{i,s}^{lasso} ‖_1
Here n is the number of data points and the regularization hyperparameter α = 1e-3 is used for all concept masks. All non-zero values in m_i,s^lasso are set to 1 to derive the binary mask m_i,s that is used for further analysis in this section.
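The per-concept, per-participant mask fit maps directly onto scikit-learn's Lasso objective, which uses the same 1/2n scaling as above; the sketch below (without an intercept, matching the formula) is illustrative rather than the authors' code.

import numpy as np
from sklearn.linear_model import Lasso

def fit_concept_mask(X_val, Y_clip_val, w_i, alpha=1e-3):
    """Fit a sparse voxel weighting that predicts the projection of the true
    CLIP embeddings onto concept vector w_i, then binarize it into a mask.
    X_val: (n, v) voxel responses; Y_clip_val: (n, 512); w_i: (512,)."""
    target = Y_clip_val @ w_i                        # scalar concept score per image
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
    lasso.fit(X_val, target)
    m_lasso = lasso.coef_                            # (v,) sparse voxel weights
    return (m_lasso != 0).astype(np.uint8)           # binary mask m_{i,s}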
§.§ Evaluation of SDC concept masks
Mask Concept Specificity An initial question is whether the brain regions represented by m_i,s are truly specific to their corresponding concept vectors w_i, or whether they equally support the decodability of other concepts w_j, where j ≠ i. To help answer this, we measure the relative change in brain decodability for a concept w_i after applying a mask m_j,s. The matrix of relative changes in brain decodability D^s for a participant s is constructed as follows:
D_{i,j}^s = pearsonr( Y^Test_CLIP · w_i, g(X^Test ⊙ m_{j,s}) · w_i ) / pearsonr( Y^Test_CLIP · w_i, g(X^Test) · w_i )
where ⊙ denotes element-wise multiplication and pearsonr(.) is the Pearson correlation coefficient. Then the participant-specific D^s are averaged across participants to obtain D which is displayed in Figure <ref>.
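A sketch of how D^s could be assembled for one participant is given below; the decoder is treated as a callable returning predicted CLIP embeddings, and masking is implemented by zeroing voxels outside m_{j,s}, which is our reading of the element-wise product.

import numpy as np
from scipy.stats import pearsonr

def specificity_matrix(X_test, Y_clip_test, decoder, W_sdc, masks):
    """Relative change in decodability of concept i when the test responses are
    restricted to the voxels of mask j. decoder(X) -> predicted CLIP embeddings;
    masks: (n_concepts, v) binary masks for one participant."""
    n_c = W_sdc.shape[0]
    full_pred = decoder(X_test)                                   # unmasked decodings
    masked_preds = [decoder(X_test * masks[j]) for j in range(n_c)]  # one pass per mask
    D = np.zeros((n_c, n_c))
    for i in range(n_c):
        true_score = Y_clip_test @ W_sdc[i]                       # ground-truth concept scores
        denom = pearsonr(true_score, full_pred @ W_sdc[i])[0]
        for j in range(n_c):
            D[i, j] = pearsonr(true_score, masked_preds[j] @ W_sdc[i])[0] / denom
    return D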
Cross Participant Consistency Next, we investigate whether the masks m_i, s exhibit consistency in their spatial locations within the brain across participants. The highly sparse and disjoint nature of the masks motivated the use of an ROI-based similarity measure. The HCP_MMP1 atlas parcellates the cortical surface into 180 distinct regions per hemisphere. We used this atlas to define ROI fraction vectors f_i, s∈ℝ^360 for each concept mask m_i, s. Each element in f_i, s represents the number of voxels in m_i, s that intersect a particular ROI in the atlas, divided by the total number of voxels in m_i, s. The similarity of mask regions can be compared across participants by taking the cosine similarity between ROI fraction vectors, i.e. cosine similarity(f_i, s, f_j, t) where i, j index concepts and s, t index participants. Figure <ref> outlines the method we use to identify shared decodable concepts.
Using our shared decodable concepts matrix W_SDC, (Fig. <ref>a), we use LASSO regularization to induce a participant-specific subset of voxels we call a voxel mask, such that each individual participant has a sparse voxel mask for each dimension of W_SDC (Fig. <ref>b). The dimensionalities of the masks per-participant and per-mask are given in <ref>. We demonstrate the procedure by selecting 3 synthetic example dimensions for purposes of illustration, m1, m5 and m7 for three participants in the fMRI dataset. In Fig-<ref>c we show an example of how these voxel masks for 3 participants might be spatially organised along the inflated ventral surface (images generated using Freesurfer's Freeview program <cit.>). Dimension m1 (orange voxels) is highly consistent across participants, meaning that it frequently appears in ROI-1 and ROI-3 of each participant. Dimension m5 (magenta voxels) is consistent between participants 2 and 3, both appearing in ROI-2 but not in ROI-2 in Participant 1. This represents partial consistency. Finally, dimension m7 (blue voxels) represents a situation where the spatial organization of the mask does not intersect any defined ROI consistently across participants. By taking the 360 automatically-generated ROIs (180 from each hemisphere) from the Human Connectome Project (HCP) Atlas <cit.>, calculated by Freesurfer's recon-all program, we count the number of voxels in each mask that intersect with each ROI and for each participant's set of masks. This yields a 360-dimensional vector for each participant and each mask (Figure <ref>d). We then calculate the cosine similarity between these vectors in order to determine if there is a consistent shared representation of mask voxels across the ROIs as shown in Fig. <ref>e. For each dimension of W_SDC, we calculate a list of consistent (high cosine similarity) and inconsistent (low cosine similarity) scores according to the algorithms given in Fig. <ref>f.
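The ROI fraction vectors and the cross-participant cosine similarities can be computed as in the sketch below; the per-voxel ROI labels are assumed to be available as integer indices from the HCP-MMP1 parcellation, and the "inconsistent" baseline (comparing mismatched concepts) follows the same pattern.

import numpy as np

def roi_fraction_vector(mask, voxel_roi_ids, n_rois=360):
    """Fraction of a concept mask's voxels falling inside each HCP-MMP1 ROI.
    mask: (v,) binary; voxel_roi_ids: (v,) integer ROI index per voxel."""
    f = np.bincount(voxel_roi_ids[mask.astype(bool)], minlength=n_rois).astype(float)
    return f / max(mask.sum(), 1)

def cross_participant_consistency(masks_by_participant, roi_ids_by_participant, i):
    """Average pairwise cosine similarity of the ROI fraction vectors of concept i
    across participants (the 'consistent' score)."""
    fs = [roi_fraction_vector(m[i], roi)
          for m, roi in zip(masks_by_participant, roi_ids_by_participant)]
    sims = []
    for a in range(len(fs)):
        for b in range(a + 1, len(fs)):
            denom = np.linalg.norm(fs[a]) * np.linalg.norm(fs[b])
            sims.append(fs[a] @ fs[b] / denom if denom > 0 else 0.0)
    return float(np.mean(sims))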
By comparing the distributions of both sets of results, we identify dimensions that share stable mask distributions across participants (Fig. <ref>g). For some dimensions of our optimized matrix, we find greatly increased consistency across participants. We selected the top 7 dimensions and plot the 10 most associated images for that dimension in Figure <ref>. These shared decodable concept dimensions have associated images that are superficially very distinct but have a clearly consistent semantic interpretation.
Voxel subselection
The mask optimization procedure does not explicitly encourage the discovery of contiguous voxel subgroups, yet this is exactly what we find when generating masks for each of the decodable semantic concepts.
This allows us to explore mask distributions across ROIs associated with specific semantically congruent stimuli, such as dimensions often used in fMRI localizer scans. We consistently find that semantic dimensions in our shared decoding space that contain prominent faces, bodies and places have contiguous mask locations in the expected ROIs. These voxel subgroups in our results occupy much smaller, more fine-grained areas of the ROIs found in localizer experiments, which may be related to processing the various fine-grained semantic distinctions. We expand on this analysis in Appendix <ref>. Using the dimensions we found to be most consistent across participants (Figure <ref>: right), we further explore the voxel mask locations to identify which brain areas support the semantic dimensions of W_SDC that are consistent across participants.
§.§ Identifying Brain Regions Underpinning Fine-Grained Shared Decodable Concepts
We visualized the spatial locations of voxel masks for key semantic dimensions which we found to be consistent across subjects in Figures <ref> and <ref>. We noted several consistencies and plot color-coded voxel masks for a sample of 4 NSD participants in Figure <ref>. For Participant 1, we found an area where the masks for dimensions 16 and 18 largely overlap. The top images in dimension 16 depict people performing a variety of sports activities, and dimension 18 mainly depicts people in bodily motion (i.e., jumping). The brain area where the two corresponding maps overlap is PFcm, which has been associated with the action observation network <cit.>. We highlight this region later in Figure <ref>. In the same participant, we also note that the masks for dimensions 9 and 20 overlap substantially. The top images in those dimensions contain horizon scenes and trucks/boats, respectively, and are conceptually linked by their outdoor settings. The brain area where these maps overlap is area TF of parahippocampal cortex: importantly, the parahippocampal place area (PPA) is known to represent places, and the identification of such voxel masks points to specialized subsections that could underlie more fine-grained representations relating to the broad semantic concept of place.
The t-SNE results in Section <ref> (and Appendix <ref>) show remarkably coherent semantic networks containing hierarchical representations that cluster together in human-interpretable ways. We further explore the spatial organization of some of these dimensions by inspecting the participant-specific voxel masks learned for these dimensions in order to derive hypotheses for future work out of our data-driven approach.
Figure <ref> showed two dimensions related to animals, in which one (dimension 17) contained clusters of cats, birds, bears, giraffes and elephants, while the other (dimension 21) contained clusters of zebras, farm animals and (again) elephants. Figure <ref> shows the 8 participants' voxel masks for these two dimensions. We first note that the masks are largely non-overlapping, but we do see bilateral sections of the EBA selected (red circles) consistently across participants, though often at different extrema of the EBA's boundaries, reflecting an individual's functional / anatomical individuality. Furthermore, for participants 1,2,3,5,6,8 we see much greater voxel selection in PPA for dimension 21 (yellow), which is the dimension that represents animals in the wild, compared to dimension 17, which is largely related to indoor cats or birds in the sky. We also find in some participants that voxel masks are learned that share adjoining yet non-overlapping continua (blue circles). We believe this is an interesting finding, given that the most selective images associated with these components were semantically linked to animals. We perform the same analysis on the food dimension that we identified (dimension 28) in Appendix <ref>.
§.§.§ Identifying Within-Participant Brain Regions Underpinning Shared Semantics
In Section <ref> it was suggested that Participant 1's voxel mask for dimension 18 ("bodies in motion") covered a large continuous patch of voxels in region PFcm, a region that had previously been linked to bilateral activation when processing action / activity <cit.>. This voxel mask largely overlapped with the voxel mask for dimension 16 ("outdoor sports"). We looked for other semantic concepts that implied movement / action and selected dimension 19, due to the presence of skiers, snowboarders and skateboarders in the t-SNE visualisation of the top images in that cluster (see <ref>). Encouragingly, we find that this semantic dimension has also induced a learned voxel mask that overlaps with the same patch of cortex in this area of interest. This appears to serve as a signature of observed action when processing images, given the varied surface-level features of the visual images in each dimension. Figure <ref> shows the flat map representation of Participant 1's cortex (middle), with the voxel mask locations for these two dimensions overlaid. We further examined the t-SNE clusters for these dimensions (top: dimension 19, bottom: dimension 18). When only plotting the top-10 nearest neighbours earlier in Figure <ref>, we only identified the concepts implying body motion indoors, but extending this to a larger number of images, we also see that this dimension captures bodies in motion during sports, beyond just jumping. We note that virtually all images connected to dimension 18 involve a form of action, while a portion of dimension 19 also overlaps in the same semantic regions (skiing, snowboarding, skateboarding, kite-flying). We find overlap in the left PFcm region, which has been previously associated with action observation (yellow box). The patch of cortex where both dimension masks overlap reveals that the mask for dimension 19 is smaller than that of dimension 18, which makes sense as only a subset of the images associated with dimension 19 are specific to action observation.
These t-SNE clusters reveal two sets of images that are superficially different but semantically overlapping; the sparse voxel masks link this overlap to adjoining or overlapping patches of cortex, which can then be related to prior literature reporting that the same region supports the shared semantics we observe in the image dimensions, namely action observation. We find for a few participants that the contrast of these two dimensions results in overlapping continuous strips in PPA (not shown). We envisage that the method we present to identify shared decodable concepts can also be used to map out within-participant fine-grained semantic networks, and we demonstrate an example of this on our initial results here.
§ DISCUSSION
We introduced a new data-driven method that uses the CLIP language-image foundation model <cit.> to identify sets of voxels in the human brain that are specialized for representing different concepts in images. Using this method, we identified several putative new concept-encoding networks that were consistent between individuals. Our method also identified concept-encoding regions that were previously known (e.g., regions specialized for faces, places, and food), which increased our confidence in the new regions uncovered by our method.
Importantly, our new method enables us to compare fine-grained participant-specific concept-encoding networks (defined as sets of voxels) without needing to apply any compression to the brain imaging data. These networks can then be mapped to 180 different anatomically-defined brain regions in each hemisphere, enabling us to relate our concept-encoding networks to known brain anatomy. At the same time, by virtue of its fine-grained resolution, our method can identify concept-encoding networks that span multiple anatomical brain regions, and those that occupy portions of known brain regions.
Among the new concept-encoding networks we identified are networks specialized for jumping bodies, transport, scenes with strong horizons, and indoor rooms (toilet/bathroom). These are clearly important concepts, but their representations are ones that would not be readily identified using standard methods that attempt to tie brain areas to semantic concepts. This is because the space of potential concepts to explore is so large that a hypothesis-driven "guess and check" approach is likely to miss many important concepts. For this reason, our data-driven approach represents an important advance, and highlights the utility of modern AI systems (e.g., the CLIP language-image foundation model) for advancing cognitive neuroscience. Importantly, the method we introduced is quite general: while we have applied it here to CLIP embeddings and fMRI data, future work could apply the same method to embeddings generated by other AI systems (not just CLIP), and/or to other brain recording modalities (not just fMRI).
All neural recordings are from a previously released public dataset <cit.>. The recordings were collected with the oversight of the institutional review boards at the institute where the experiments were performed. Our work has the potential to enable new methods for decoding human brain activity using non-invasive methods. Such methods could have substantial impacts on society. On the positive side, these methods could assist in diagnosing psychiatric disorders, or in helping individuals with locked-in syndrome or related disorders to better communicate. On the other hand, the same brain decoding methods could pose privacy concerns. As with all emerging technologies, responsible deployment is needed in order for society as a whole to obtain maximum benefit while mitigating risk.
There are several limitations of this work, and decoding experiments in general. Firstly, we use fMRI, and so can only recover the shared decodable concepts available in that fMRI space. That is, if concepts are not encoded by the BOLD signal as measurable by fMRI, we will not be able to recover them. On the other hand, the methods we develop here could be applied to other neural recording modalities that more directly measure neural activity, including electrocorticography. Consequently, this first limitation is one that could be overcome in future studies.
Secondly, we are also only able to uncover concepts that appear in the CLIP space spanned by the images in the stimulus set. This means that, while our method is more flexible than the hypothesis-driven ones used by previous studies <cit.>, some important concept representations could still be missed by our method. Future work could address this limitation by using even larger stimulus sets. Finally, we report on a few probable localizations of concepts. These provide clear hypotheses for future experiments that should be targeted at attempting to confirm these findings. This emphasizes that our new data-driven method is not a replacement for the hypothesis-driven approach to identifying concept-encoding regions in the human brain: rather, it serves as a data-driven hypothesis generator that can accelerate the task of understanding how our brains represent important information about the world around us.
§ APPENDIX
§.§ Consistency Check with Faces, Bodies and Place Images
In order to verify that our masking procedure captures known functional localization of various high-level visual concepts, such as faces, places, and bodies, we use the functional localizer scans present in NSD for these categories and calculate their overlap with our participant- and dimension-specific voxel masks. To do this, we use our derived W_SDC matrix of 32 concepts obtained via fMRI decoding into CLIP space, where each dimension represents a potentially shared decodable concept derived from brain responses and reveals the semantic tuning sensitivity to visual concepts found to be important. For each dimension of W_SDC, we extract the top-10 CLIP images that are nearest neighbours to the fMRI-decoded CLIP embeddings. We then identify the set of dimension indices that we expect to activate voxel locations in the participant-specific NSD functional localizer maps. We plot these flat maps for each participant (generated in PyCortex <cit.>) along with a histogram (averaged over participants) of voxel mask locations that overlap with areas of higher-level functional ventral visual cortex. Figure <ref> shows the top-10 associated images for each of the shared decodable concepts. Figure <ref> shows the participant-specific maps and averaged histogram.
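The nearest-neighbour step above amounts to ranking stimulus images by their similarity to each fMRI-decoded concept direction. A minimal sketch is given below; it assumes cosine similarity as the distance and uses hypothetical array names (decoded_clip for the rows of W_SDC, image_clip for the stimulus CLIP embeddings), neither of which is taken from the original implementation.

import numpy as np

def top_k_images(decoded_clip, image_clip, k=10):
    # decoded_clip: (n_concepts, 512) fMRI-decoded concept directions (rows of W_SDC)
    # image_clip:   (n_images, 512) CLIP embeddings of the candidate stimulus images
    A = decoded_clip / np.linalg.norm(decoded_clip, axis=1, keepdims=True)
    B = image_clip / np.linalg.norm(image_clip, axis=1, keepdims=True)
    sims = A @ B.T                              # cosine similarity, (n_concepts, n_images)
    return np.argsort(-sims, axis=1)[:, :k]     # per concept, indices of the k closest images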
From Figure <ref>, we associate the three high-level functional categories of interest to the following indices: face ∈{4, 5}, place ∈{8, 9, 10, 14, 32} and body ∈{2, 15, 16, 18, 24}.
We clearly see that the mask dimensions for the higher-level concepts of faces, bodies and places largely overlap with areas in which there is an a priori expectation of overlap. The images contained in NSD are largely confounded in that each image typically represents more than a single semantic concept. For example, images containing bodies will often contain faces, and most likely vice-versa (head-shot images being an exception). Furthermore, images of people are often in a place context, whether inside, outside or in the proximity of an area. We see this in our mask distributions, too. In place images, such as dimension nine in Figure <ref>, where we see strong horizons, there is less confounding with face and body areas, as place contexts can easily exist without humans, but the reverse is often not true. We see the effect of this in that the place ROIs typically show the closest association with the pooled masks over our identified place dimensions (the green bar is higher for OPA, PPA, RSC). These results show that even in a dataset of highly confounded images, the mask locations we derive in our procedure align well with results and expectations from prior literature, serving as a consistency check. Our procedure, however, reveals a vastly more fine-grained set of results across a wider range of cortex, linking (among other regions) functional ROIs together differentially, in order to map out a distributed coding of semantically complex visual inputs that can be used to explore semantic networks both within and across participants.
§.§ Specification of Participant-Specific Data Dimensions
In Section 2.2 we specified that, for each participant, the voxels that passed the 5% noise-ceiling threshold were used as inputs to the initial fMRI-decoding algorithm (converting fMRI data to brain-derived CLIP embeddings); the number of voxels passing this threshold was in the range of 15-30k. Later, in Section 4.1, we discuss per-participant, per-dimension masks (each participant has a specific mask for each of the shared decodable concepts derived in W_SDC). We specify the dimensionality of the voxel inputs and mask dimensions for each participant in Table <ref>.
We observe that the L1 regularization we apply in order to induce sparse masks results in similar voxel subset sizes for each shared decodable concept in W_SDC irrespective of the number of voxels that passed the noise ceiling threshold and were used as inputs into the fMRI-decoding algorithm. Under our assumptions that a shared latent code across ROIs exists for shared decodable concepts, recovering broadly similar participant-specific and concept-specific mask voxel subset sizes is expected, although this does not itself mean they are similar in spatial distribution across the cortex. A beneficial feature of these masks is that they aren't limited to exact spatial overlap within ROI regions. This allows for participant-specific anatomical and functional specialization to be identified by our analysis method. We assess this using an ROI-similarity metric that checks for cross-participant consistency.
§.§ Visualization of Shared Decodable Concept Clusters via t-SNE
In order to explore the semantic meanings of each of the 32 dimensions in the shared decodable concepts matrix presented in Section 3.2. of the main text, we transform the CLIP representations of the full set of 73,000 stimulus images Y^Full_CLIP∈ℝ^73000 × 512 into a shared decodable concept space, SDC = Y^Full_CLIPW_SDC^T. The top 10 highest-scoring images on each dimension are selected and plotted in Figure <ref>. To allow for a more thorough investigation of individual dimensions, we select the top 250 highest-scoring images and embed their CLIP vectors in a 2-dimensional space using the t-distributed stochastic neighbor embedding (t-SNE) method. A selection of t-SNE visualizations are plotted in Figures <ref>, <ref>, <ref>, <ref>, <ref>. We plan to release the entire set of generated images and t-SNE projections upon manuscript acceptance. The t-SNE results in Section <ref> show remarkably coherent semantic networks containing hierarchical representations that cluster together in human-interpretable ways. We further explore the spatial organization of some of these dimensions by inspecting the participant-specific masks learned for these dimensions in order to derive hypotheses for future work out of our data-driven approach.
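As an illustration of this projection and clustering step, the following sketch shows how the top-k images for one dimension could be selected and embedded in 2D. The array names (Y_full_clip for the 73,000 x 512 stimulus embeddings, W_sdc for the 32 x 512 concept matrix) and the t-SNE settings are assumptions made for illustration; they are not taken from the original implementation.

import numpy as np
from sklearn.manifold import TSNE

def top_image_tsne(Y_full_clip, W_sdc, dim, k=250, seed=0):
    # SDC scores for every stimulus image: (n_images, n_concepts)
    sdc = Y_full_clip @ W_sdc.T
    # indices of the k highest-scoring images on the chosen dimension
    top_idx = np.argsort(sdc[:, dim])[::-1][:k]
    # 2D t-SNE embedding of the corresponding CLIP vectors
    xy = TSNE(n_components=2, init="pca", perplexity=30,
              random_state=seed).fit_transform(Y_full_clip[top_idx])
    return top_idx, xy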
§.§.§ Food (Dimension 28)
Figure <ref> shows a 2D t-SNE projection of the top 250 images from this dimension into clusters.
§.§.§ Skies (Dimension 19)
Figure <ref> shows a 2D t-SNE projection of the top 250 images from this dimension into clusters.
§.§.§ Household Locations (Dimension 10)
Figure <ref> shows a 2D t-SNE projection of the top 250 images from this dimension into clusters.
§.§.§ Buildings (Dimension 26)
Figure <ref> shows a 2D t-SNE projection of the top 250 images from this dimension into clusters.
§.§ Food
Recent findings have posited that areas encompassing and surrounding the PPA and FFA, particularly the patch of cortex that separates them, are highly selective for abstract food representations <cit.>. After discovering that food was a shared decodable concept in our analysis, evidenced by the t-SNE clustering projection in Figure <ref>, we explored our voxel masks across the 8 NSD participants in order to assess whether we would find similar results.
Figure <ref> shows a varied set of results across participants. All participants have some level of voxel mask locations in the region between PPA and FFA (inclusive), while most do show a large presence in the regions previously identified as being specific for food. We don't restrict the area in which the voxel masks are learned and this could lead to some observed differences.
|
http://arxiv.org/abs/2306.08141v1
|
20230613211045
|
ArtWhisperer: A Dataset for Characterizing Human-AI Interactions in Artistic Creations
|
[
"Kailas Vodrahalli",
"James Zou"
] |
cs.AI
|
[
"cs.AI",
"cs.CV",
"cs.HC",
"cs.LG"
] |
ArtWhisperer: A Dataset for Characterizing Human-AI Interactions in Artistic Creations
Kailas Vodrahalli and James Zou
July 31, 2023
=======================================================================================
As generative AI becomes more prevalent, it is important to study how human users interact with such models.
In this work, we investigate how people use text-to-image models to generate desired target images. To study this interaction, we created ArtWhisperer, an online game where users are given a target image and are tasked with iteratively finding a prompt that creates a similar-looking image as the target.
Through this game, we recorded over 50,000 human-AI interactions; each interaction corresponds to one text prompt created by a user and the corresponding generated image.
The majority of these are repeated interactions where a user iterates to find the best prompt for their target image, making this a unique sequential dataset for studying human-AI collaborations.
In an initial analysis of this dataset, we identify several characteristics of prompt interactions and user strategies.
People submit diverse prompts and are able to discover a variety of text descriptions that generate similar images. Interestingly, prompt diversity does not decrease as users find better prompts.
We further propose a new metric to study the steerability of AI using our dataset. We define steerability as the expected number of interactions required to adequately complete a task. We estimate this value by fitting a Markov chain for each target task and calculating the expected time to reach an adequate score in the Markov chain.
We quantify and compare AI steerability across different types of target images and two different models, finding that images of cities and natural world images are more steerable than artistic and fantasy images.
These findings provide insights into human-AI interaction behavior, present a concrete method of assessing AI steerability, and demonstrate the general utility of the ArtWhisperer dataset.
§ INTRODUCTION
Direct human interaction with AI models has become widespread following the public release of text-to-text models like GPT-4 <cit.> and PaLM 2 <cit.> and text-to-image models like Stable Diffusion <cit.>. These models have seen rapid interest and adoption in diverse industries including engineering, creative writing, art, education, medicine, and law <cit.>.
A key reason this rapid adoption has been possible is that people, even with no understanding of how a model works or of its limitations, are able to interact with and steer the model to complete a wide variety of tasks.
One of the key challenges for developing these models is aligning their output to human-written input text. This is made especially challenging by the broad domain of use cases as well as the diverse prompting styles of different users. Many approaches can be categorized into the broad label of “prompt engineering” where specific strategies for prompting are used to steer a model <cit.>. Great success has also been found by fine-tuning models with relatively small datasets to better align a model to follow human instructions <cit.>, to condition the model to respond in a specific style <cit.>, or behave differently to specified prompts <cit.>.
In this work, we take special interest in the fact that human interaction with these models is often an iterative process, and we develop a dataset to investigate this interaction with the goal of improving human-AI interaction.
To study this interaction, we created an interactive game where players try to find an optimal prompt for a given task (see Figure <ref>). In particular, we focus on text-to-image models and ask the player to generate a similar image (AI Image) to a given target image. The player is allowed to iterate on their prompt, using the previously generated image(s) as feedback to help them adjust their prompt. A score is also provided as feedback to help the user calibrate how “close” they are to a similar image.
Using this setup, we collected interaction data on 51,026 interactions from 2,250 players across 191 unique target images. The target images were selected from a diverse set of AI-generated and natural images.
We also collected a separate dataset of 4,572 interactions, 140 users, and 51 unique target images in a more controlled setting with two different diffusion models to assess the robustness of our findings.
Based on this data, we find several interesting patterns in how people interact with AI models.
Players are able to find many different ways to achieve high scores; in particular, they find a diverse set of prompts that all result in a player-generated image similar to the target image.
We also find that players tend to make small, iterative updates to their prompts as they steer the AI, with each update improving their image with a moderate success rate (40-60% for most target images).
Based on these findings, we define and evaluate a metric for model steerability based on the stopping time of an empirical Markov model. We use this metric to assess steerability across different groups of images and across two AI models.
Ethical considerations
One of the main goals of this work is to help improve the quality of human-AI interaction. Our dataset and findings provide quantitative insights on how people interact with generative AI and can potentially be used to design AI systems that are easier for people to use. Our work does not address the broader concern that bad actors may abuse generative AI models.
Our contributions
We release a public dataset on human interaction with an AI model. To our knowledge, this is the first such dataset showing repeated interactions of people with a text-to-image model to accomplish specified tasks. We also provide an initial analysis of this data and propose a simple-to-calculate metric for assessing model steerability.
Our dataset and associated code
is made available at https://github.com/kailas-v/ArtWhispererhttps://github.com/kailas-v/ArtWhisperer.
Related Works
Datasets for human interaction with text-to-text or text-to-image models typically focus on single interactions (a single user prompt and the resulting model output) and generally do not provide users with a specific task.
Public text-to-image interaction datasets typically contain the generated AI images or prompts <cit.>, paired data with both images and prompts <cit.>, and data also paired with some form of human preference rating <cit.>. These datasets generally rely on scraping online repositories like Lexica <cit.> or Discord servers focused on AI Art generation. Though some of these datasets include metadata like timestamp and a user ID allowing reconstruction of prompt iteration, there is no guarantee the user has the same target image in mind or whether the user has changed their desired output over time.
Public text-to-text datasets are much more limited as the best-performing models are generally accessible only through APIs with no public repositories that aggregate user interactions. While some researchers have investigated how human-AI interaction for text-to-text can be improved through various tools <cit.>, the amount of data collected is limited and not publicly available. There are also repositories containing prompt strategies for various tasks <cit.>, but no human interaction component.
We seek to rectify two of the shortcomings of the existing datasets–namely, that they do not contain extended interactions as the user attempts to steer the AI, and they do not have a predefined goal. In our work, we create a controlled environment where we allow extended interactions and have a known goal for human users.
As shown by our initial analysis, our dataset may enable deeper understanding of user prompting strategies and assessing model steerability.
§ INTERACTION GAME
In the game, players are shown a target image.
A few example target images are provided in Figures <ref>, <ref>.
Players are also provided a limited interface to a text-to-image model; in particular, they are given access to the “positive prompt” and “negative prompt” inputs to the Stable Diffusion (SD) v2.1 model <cit.>. Here, the positive prompt should contain what the user would like to see in the generated image, while the negative prompt should contain what the user would like omitted from the generated image. Upon inputting a prompt (which often only includes a positive prompt and no negative prompt), the player is shown the image generated by the AI model, along with a score based on similarity between their generated image and the target image. This interface is shown in Figure <ref>.
Note that players only have access to the model through the positive and negative prompts. In particular, all other parameters are fixed, including the random seed. We opted to fix the random seed for each target image so that all players will generate the same image given the same prompt.
§.§ How Target Images are Selected
We randomly sample target images from two sources. The first is a collection of Wikipedia pages, and the second is a dataset of prompts AI artists have used with SD <cit.>.
In addition to sampling target images, we need to ensure the task is feasible to users. As we do not allow users to adjust the seed or other parameters of the model, we need to ensure the selected model parameters can generate reasonably similar images to the target image. We find that selecting an appropriate random seed is sufficient, and fix all other model parameters (see Section <ref> for details).
Below, we describe how we randomly sample target images and select the random seed.
Wikipedia images
We use a collection of 35 Wikipedia pages on various topics including art, nature, cities, and people; a full list of the pages sampled from is provided in the Appendix. From these pages, we scraped 670 figures licensed under the Creative Commons license. These figures were then filtered to keep those that had captions and that were stored as JPG or PNG images (i.e., not animated and not PDF files), resulting in 557 images.
For each of the 557 images, we first resize and crop the image to size 512 × 512. The Wikipedia caption is used as the ground truth “prompt”. Let the image-caption pair be denoted as (t_i, p_i^*). We sample the model on 50 random seeds, with p_i^* as the prompt input. This generates a set of 50 images: S_i = {(x_i, s_i): i=1,…,50} for generated image x_i and seed s_i. Let C(x) denote the CLIP image embedding <cit.> of an image x. Then we select the seed as s_i^*, where
i^* := arg min_i=1,…,50 ||C(x_i)-C(t_i)||_2 / ( ||C(x_i)||_2 · ||C(t_i)||_2 ).
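The seed-selection loop can be sketched as follows. The helper names generate(prompt, seed) (a wrapper around the Stable Diffusion pipeline) and clip_image_embed(image) are placeholders introduced for illustration, not functions from the paper's code.

import numpy as np

def select_seed(target_img, caption, generate, clip_image_embed, seeds=range(50)):
    # pick the seed whose generated image is closest to the target in CLIP
    # image space, using the normalized Euclidean distance defined above
    c_t = clip_image_embed(target_img)
    best_seed, best_dist = None, np.inf
    for s in seeds:
        c_x = clip_image_embed(generate(caption, seed=s))
        dist = np.linalg.norm(c_x - c_t) / (np.linalg.norm(c_x) * np.linalg.norm(c_t))
        if dist < best_dist:
            best_seed, best_dist = s, dist
    return best_seed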
AI-Generated Images
A collection of 2,000 AI-art prompts are randomly sampled from the Stable Diffusion Prompts dataset <cit.>. For each prompt, p_i^*, we generate two sets of images using 60 unique random seeds: the first set, S_i,1 = {(x_i, s_i): i=1,…,10} and S_i,2 = {(x_i, s_i): i=1,…,50}. We select the target image, t_i_1^*, from S_i,1:
i_1^* := arg min_i=1,…,10 median( {||C(x_i,1)-C(x_j,2)||_2 / ( ||C(x_i,1)||_2 · ||C(x_j,2)||_2 ): j=1,…,50} )
We select the random seed, s_i_2^*, using t_i_1^* and S_i,2, with
i_2^* := arg min_i=1,…,50 ||C(x_i,2)-C(t_i_1^*)||_2 / ( ||C(x_i,2)||_2 · ||C(t_i_1^*)||_2 ).
The intuition here is that t_i_1^* is more representative of the types of images we may expect given the fixed prompt, p_i^*.
§.§ Scoring Function
To provide feedback to players, we created a scoring function to assess the similarity of a player's generated image and the target image. We define the scoring function as
score(x_i, t_i) = max(0, min(100, α·⟨ CLIP(x_i), CLIP(t_i) ⟩/||CLIP(x_i)||_2 · ||CLIP(t_i)||_2 + β)),
for generated image x_i, target image t_i, and constants α, β. Note that the range of score(x_i, t_i) is the integers in the interval [0, 100]. Details on how the parameters α, β are selected are provided in the Appendix.
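A direct sketch of this scoring function on precomputed CLIP image embeddings is shown below; the integer rounding reflects the stated integer score range, and the embedding arguments are assumed to be 1D numpy vectors.

import numpy as np

def score(gen_emb, target_emb, alpha, beta):
    # cosine similarity between the generated and target CLIP image embeddings
    cos = float(np.dot(gen_emb, target_emb) /
                (np.linalg.norm(gen_emb) * np.linalg.norm(target_emb)))
    # affine rescaling followed by clipping to [0, 100]
    return int(round(max(0.0, min(100.0, alpha * cos + beta))))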
While this scoring function is often reasonable, it does not always align with the opinions of a human user. To assess how well score(x_i, t_i) follows a user's preferences, we acquire ratings from a subset of users (see ArtWhisperer-Validation in Section <ref>). We find score(x_i, t_i) has a Pearson correlation coefficient of 0.579 indicating reasonable agreement. Further assessment is performed in Section <ref>.
§.§ Additional Technical Details
For the generative model, we use SD v2.1 <cit.>. We use the DPM Multi-step Scheduler <cit.>, and run the model for 20 iterations. AI-generated target images use the same parameters, but we run the model for 50 iterations. The primary reason for this difference was to limit latency for players; running for 20 iterations resulted in a latency of 1-3 seconds depending on the player's internet connection. All images are generated at size 512 × 512.
§.§ Dataset Overview
We collected two datasets as part of this work, which we term ArtWhisperer and ArtWhisperer-Validation. We use ArtWhisperer for most analyses and results; for some of the results in Sections <ref>, <ref>, and <ref>, we also use ArtWhisperer-Validation.
Data was collected from March-May 2023. IRB approval was obtained.
ArtWhisperer: A public version of our game was released online at <https://artwhisperer.io/> and we collected data from consenting users playing the game. Three new images were released each day, and users could play on as many days as they liked. Users were anonymous, and to ensure their privacy we only collected data related to the submitted prompts. While we expect some users played the game across multiple days, we did not track them and so do not know the true number of unique users across days. A summary of the ArtWhisperer dataset is provided in Table <ref>. In total, we have 2,250 players corresponding to 51,026 interactions across 191 target images. We further break down the data across several common categories to which the images relate. In Figure <ref>, we show a density plot of how many queries are submitted by players across different target images.
ArtWhisperer-Validation: A second version of the game (with a near identical interface) was released to paid crowd workers on Prolific <cit.>. The crowd workers were compensated at a rate of $12.00 per hour for roughly 20 minutes of their time. Workers played the game across 5 randomly selected target images from a pre-selected subset of 51 target images chosen to have diverse content. Workers were also asked to rate each of their images on a scale of 1-10 (i.e., self-scoring their generated images). In total, we collected data on 4,572 interactions, corresponding to 140 users and 51 unique target images across two different diffusion models, SD v2.1 and SD v1.5.
Additional details on the ArtWhisperer-Validation dataset are provided in the Appendix.
§ PROMPT DIVERSITY
Across images, regardless of image content, people submit a diverse set of prompts. Notably, people are able to achieve high scores with a diverse set of prompts. While it is not necessarily surprising that there are multiple ways to achieve a good score (this indicates that the score in our game has multiple local maxima for any given target image), it is interesting that the high-scoring prompts submitted by users remain diverse. Some examples are shown in Figure <ref>.
We quantify prompt diversity by looking at the distribution of prompts in the text embedding space. In particular, we use the CLIP text embedding <cit.>, and measure the distance in the embedding space between a given prompt and the average submitted prompt for the corresponding target image. Though the results presented here are based on the CLIP embedding, we do find similar results for other standard text embeddings (see Supplement).
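Concretely, the diversity measure described above is the distribution of distances from each prompt's text embedding to the per-image mean embedding. The sketch below assumes the prompt embeddings for one target image are stacked in a single numpy array; it is illustrative rather than the exact implementation.

import numpy as np

def prompt_diversity(prompt_embs):
    # prompt_embs: (n_prompts, d) CLIP text embeddings for one target image
    mean_emb = prompt_embs.mean(axis=0)
    # distance of each prompt to the average submitted prompt
    return np.linalg.norm(prompt_embs - mean_emb, axis=1)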
§.§ Diverse prompts used for high scores
In the left of Figure <ref>, we plot the distributions of distances from the first prompt (in blue) and the last prompt (in orange) to the average prompt for the corresponding target image. Despite the average score improving substantially, from 51.9 to 70.3 (out of 100), prompt diversity does not significantly diminish. That is, users do not converge to similar prompts to achieve high scores. A similar analysis of the image embedding space suggests that image diversity does decrease (see Appendix for details).
This makes sense given that the score is linearly related to distance in the CLIP image embedding space.
This analysis suggests that, over the course of interaction with the AI model, the distribution of user prompts undergoes a transformation (to be closer to the target image), but does not change in diversity.
§.§ People submit similar prompts throughout their interaction
In the center of Figure <ref>, we plot the distribution of the standard deviation of prompts for users (blue) and for permuted users (orange). Permuted users are generated by sampling uniformly from all prompts for a given target image, using the same distribution of number of prompts as for real users. The gap between the two distributions shows that individuals do not randomly sample prompts at each interaction, but base new prompts on previously submitted prompts (p-value<10^-10, t-test for independent variables). An analysis of how scores change between adjacent prompts shows that this strategy has a moderate success rate and improves the score 40-60% of the time, with an average rate of 48.6% (note that score changes of less than 1 are counted as unchanged; this occurs 10.2% of the time).
While it is not a surprising result that users often do not make significant changes to their prompt, it is an important one for understanding how typical users interact with AI models. Moreover, it suggests that user initialization (i.e., the first prompt they submit) is critical.
§.§ People have similar prompt styles across images
For each target image, we calculate the average prompt embedding over all users as well as the average prompt embedding submitted by each given user. We take the difference of these two embeddings as a user-specific average prompt that represents the style of the user. We then compute the standard deviation of these user-specific embeddings across target images, as a measure of how much a user varies in prompting style across images.
In the right of Figure <ref>, we plot the distribution of the standard deviations for users (blue) and for permuted users (orange), where the permuted users are generated by sampling user specific average prompts and assigning them to the simulated players, allowing us to test whether real players have more similarity than random across target images.
While the gap between the two distributions is significant (p-value<10^-10, t-test for independent variables) and indicates that users do have specific styles of prompting that are captured by looking at the distribution of text embeddings, it is not large. This suggests that while user style may be a component of prompting, other factors related to the target image may be more important.
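The style analysis and its permutation baseline can be sketched as follows. The input format (a dictionary mapping each user to their per-image style vectors, i.e., the user's mean prompt embedding minus the image's mean embedding) is an assumption made for illustration and may differ from the original implementation.

import numpy as np

def style_spread(style_vecs_by_user, rng=None):
    # style_vecs_by_user: {user: list of style vectors, one per target image}
    if rng is None:
        rng = np.random.default_rng(0)
    # per-user spread: norm of the per-dimension standard deviation across images
    real = {u: np.linalg.norm(np.std(np.stack(v), axis=0))
            for u, v in style_vecs_by_user.items()}
    # permuted baseline: reassign the pooled style vectors to users at random
    pool = [s for v in style_vecs_by_user.values() for s in v]
    order = rng.permutation(len(pool))
    permuted, k = {}, 0
    for u, v in style_vecs_by_user.items():
        chunk = np.stack([pool[order[k + j]] for j in range(len(v))])
        permuted[u] = np.linalg.norm(np.std(chunk, axis=0))
        k += len(v)
    return real, permuted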
§ MODEL STEERABILITY
Model steerability refers to the ability of a user to steer a model towards a desired outcome. There is no current consensus on how to measure AI steerability. A common approach is to simply measure performance of a model on standardized dataset evaluations <cit.>. While this can enable comparisons between tasks and models, this approach does not allow for the feedback loop present when humans interact with a model. Steerability can also be measured qualitatively based on user assessment of their experience interacting with the AI <cit.>.
We create a simple yet informative measure of model steerability.
We then analyze this measure across different subgroups of images and across two different Stable Diffusion models–SDv2.1 and the older SDv1.5 <cit.>.
§.§ Measuring steerability
As discussed in Section <ref>, users typically engage with the model through clusters of similar prompts: they start with an initial base prompt and proceed to make multiple incremental modifications to it.
We use this observation as a basis for creating a steerability metric.
We define a Markov chain between scores. Each node is a score with edges connecting to the subsequent score. To make this tractable for empirical analysis, we bin scores into five groups: [0,20], [21,40], [41,60], [61,80], [81,100]. We use the expected time taken to reach the last score bin, [81,100], as our steerability score (i.e., the stopping time to reach an adequate score).
For each target image, we calculate the empirical transition probability matrix between binned scores using all the players' data for that image. We then calculate the steerability score for the given target image by running a Monte Carlo simulation to estimate stopping time, as defined above.
To assess steerability across a group of images, we average the steerability score over all images in the group.
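A compact sketch of this procedure is given below. The bin edges follow the five groups listed above; the initial-state distribution, the simulation cap, and the small regularizer added to the transition counts (the ϵ prior described in the Appendix) are modelling assumptions made here for illustration.

import numpy as np

BINS = [(0, 20), (21, 40), (41, 60), (61, 80), (81, 100)]

def bin_of(score):
    # integer scores only, as produced by the scoring function
    return next(i for i, (lo, hi) in enumerate(BINS) if lo <= score <= hi)

def steerability(trajectories, eps=0.05, n_sims=10000, max_steps=200, seed=0):
    # trajectories: list of per-player score sequences for one target image
    n = len(BINS)
    counts = np.full((n, n), eps)            # eps acts as a uniform prior over transitions
    starts = np.zeros(n)
    for traj in trajectories:
        b = [bin_of(s) for s in traj]
        starts[b[0]] += 1
        for i, j in zip(b[:-1], b[1:]):
            counts[i, j] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    start_p = starts / starts.sum()
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_sims):
        state, t = rng.choice(n, p=start_p), 0
        while state != n - 1 and t < max_steps:  # n - 1 is the [81, 100] bin
            state = rng.choice(n, p=P[state])
            t += 1
        times.append(t)
    return float(np.mean(times))             # expected interactions to reach the top bin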
§.§ Analysis
In Figure <ref>, we plot the steerability score across image groups. Error bars show the standard error. For examples of steerability scores for individual images, see the Supplement.
We find that images containing famous people or landmarks, real (not AI-generated) images, images of cities, and images of nature are the most steerable. AI-generated images, fantasy images, and images of human art are the least steerable. There are a few possible explanations. The model we are assessing here, SDv2.1, as well as its text encoder OpenCLIP, are trained on subsets of LAION5B <cit.>. The contents of LAION5B are predominantly real-world images, which may explain why these images are more steerable (i.e., text describing these types of images may have a better encoding). Moreover, the prompts for AI-generated images and fantasy images generally include specific internet artists and/or art styles which may not be known to most users, making it more difficult to achieve the desired target image. Another potential reason is the distribution of images chosen for each category. Clearly, there are “easier” and “more difficult” images in each category; part of the reason for a smaller stopping time may be the sample of images chosen rather than the actual image category.
We also compare steerability across the two models: SDv2.1 and SDv1.5. Across most image categories, we observe a similar steerability. Images of nature, sci-fi or space, and real images have the largest differences in steerability between the two models; SDv2.1 is more steerable in all three cases. This suggests that SDv2.1 may be more steerable for natural images as well as sci-fi images, and is similarly steerable for other kinds of images including AI-generated artwork. One explanation may be that most of our users were not aware of certain prompting strategies that help models generate more aesthetic images or certain art styles; it is possible that for experienced users, AI art images may be more steerable, and differences between models may be magnified if, for example, a user is experienced working with one particular model.
More discussion is provided in the Appendix.
§.§ Justification for automated score
One limitation in our steerability metric comes from the method of scoring user-submitted prompts.
Ideally, we would like to assess steerability based on a user's personal preferences.
As mentioned in Section <ref>, the scores and human ratings have a positive correlation.
Here, we use the human ratings instead of our score function to assess steerability. We compute the steerability score across both models and across image groups. Generally, the steerability scores change little. In all but two cases (SDv2.1 on sci-fi and space images; SDv1.5 on nature images), the human rating-based steerability score remains within a 95% confidence interval of the score-based steerability score. While our score function may not perfectly capture human preferences, the steerability score we generate appears to be robust to these issues. Further discussion is included in the Appendix.
§ DISCUSSION
As demonstrated in our initial analysis, the ArtWhisperer dataset can provide insights into user prompting strategies and enables us to assess model steerability for individual tasks and groups of tasks. What makes our dataset particularly useful is the controlled, interactive environment in which we capture data, where users work toward a fixed goal.
We expect the ArtWhisperer dataset will be useful beyond the initial analyses presented here. For example, some future work may involve fine-tuning models to better accommodate users based on their prompt trajectories or guide users towards better prompting strategies. There is also deeper analysis possible for understanding user prompting strategies.
plain
§ APPENDIX
§.§ Information on Wikipedia pages scraped
Table <ref> presents a list of the Wikipedia pages used to select real-world target images (see Section <ref>). We extracted images from each listed Wikipedia page. We then uniformly sampled a category and subsequently sampled an image from a page in that category. This ensures a diverse set of images, which is important given that some of the Wikipedia pages contain many more images than others (e.g., one page has 10 times more usable images than another).
§.§ Scoring function details
In Section <ref>, we defined the scoring function, score(x_i,t_i) as
score(x_i, t_i) = max(0, min(100, α·⟨ CLIP(x_i), CLIP(t_i) ⟩/||CLIP(x_i)||_2 · ||CLIP(t_i)||_2 + β)).
α and β are constants used to scale the embedding distance prior to clipping the score. To select α and β, we used a small dataset of interactions collected by the authors prior to the main dataset collection (this data is not included in the released dataset). This dataset contains groups of images paired with scores in the range [0,1]. For each target image in this small dataset (5 in total), we add the following images to the dataset:
* AI-generated images that use the target prompt but with a different seed. These images are assigned a score of 1.
* The images corresponding to the human-generated prompts. These images are assigned a score of 0.5.
* AI-generated images that use a different prompt than the target prompt. These images are assigned a score of 0.
The intuition here is that with the AI-generated images, we can assume using the same target prompt with a different seed should generate a similar image hence the highest score possible (1). Using a different prompt (from our prompt dataset <cit.>), however, should result in an entirely different image hence the lowest score possible (0). Images generated by people are assumed to be somewhere in between, hence the score of 0.5.
We then fit a linear regression model to this dataset (to predict score given the CLIP image embedding), using balanced sampling across the image groups. This linear model has parameters
α' = -1.503
β' = 1.791
Since our score range is [0,100] but cosine similarity has a range of [-1,1], we scale the cosine similarity by 100, resulting in
α” = -150.3
β” = 179.1
We also add a “score adjustment” term that attempts to normalize image difficulty. In particular, for each target image, we compute the un-clipped score for the target image t_i and the image generated using the target prompt with, x_i:
unclipped_score(x_i,t_i) = α”·⟨ CLIP(x_i), CLIP(t_i) ⟩/||CLIP(x_i)||_2 · ||CLIP(t_i)||_2 + β”.
This quantity corresponds to the score a user would receive if they entered exactly the target prompt. We fix this value at 100 (i.e., a perfect score prior to clipping), and set the score adjustment parameter, c_i, to appropriately normalize this score. In particular,
c_i = 100/unclipped_score(x_i,t_i),
and then we obtain target specific parameters,
α_i = -150.3 · c_i
β_i = 179.1 · c_i
We use the target specific parameters, α_i, β_i, for our scoring function parameters, α, β, (so each target image may have slightly different parameters).
§.§ Additional information on running the game
Game instructions
Game instructions are provided in Figure <ref>. Here we show the main instructions provided on how to play (top), as well as the tool-tips given for positive prompts (lower left) and negative prompts (lower right).
Crowd workers
Crowd workers are adults from the US. They were paid at a rate of $12.00 per hour for roughly 20 minutes of their time. Additionally, they were provided a bonus payment of between $0.10 and $0.50 per image on which they received a perfect score (depending on the image difficulty). In total, we paid about $600 to recruit the crowd workers.
§.§ Additional example images
We provide additional examples of image trajectories and diverse images in Figures <ref> and <ref>.
§.§ Algorithm for steerability
We describe the algorithm for assessing steerability in more detail in Algorithm <ref>. Here, we define three procedures: the first estimates the steerability of a target image, the second finds the empirical score transition probabilities for each target image, and the third uses Monte Carlo simulation to estimate the stopping time. We define a set of score bins; we chose 5 equally sized bins so that there was sufficient data to cover each bin. We also use a regularizer, ϵ, which essentially encodes a prior that, from any given score, the transition to a new score is uniformly random.
§.§ Steerability across models
In Figure <ref>, we plot the steerability across SDv2.1 and SDv1.5. As described in Section <ref>, images of nature, sci-fi or space, and real images have the largest differences in steerability between the two models. “Nature” is the only image group with a steerability difference greater than the standard deviation of the mean. Other image groups seem to have similar performance across both SDv2.1 and SDv1.5. This suggests that SDv2.1 makes only a minor improvement over SDv1.5 across most image categories.
§.§ Steerability scores for individual images
In Figure <ref>, we provide some example images along with their steerability scores. Note that simpler images with well-defined content that is likely well represented in the model's training data (e.g., the first two rows: a fly on a leaf; a drawing of Barack Obama, a well-known public figure) have smaller steerability values, indicating they are easier to steer. However, more complex content that is also more ambiguous to users (e.g., the last three rows) has larger steerability values, indicating greater difficulty in steering.
§.§ Additional discussion around human ratings
Here we provide plots for the analysis using human ratings. In Figure <ref>, we show a scatter plot of scores and human ratings, along with a best-fit line; the correlation is 0.597, indicating that our score function produces values that are indeed similar to the human ratings.
In Figure <ref>, we also provide a plot comparing the steerability values calculated using our score function with those calculated using the human ratings. Error bars indicate the standard error. Images depicting sci-fi or space have the greatest difference (humans seem to be harsher judges of their generated images' similarity to the target). However, for most image groups, the two steerability scores are quite close and generally remain within a 95% confidence interval of each other.
|
http://arxiv.org/abs/2306.06336v1
|
20230610030308
|
Distribution System Operation Amidst Wildfire-Prone Climate Conditions Under Decision-Dependent Line Availability Uncertainty
|
[
"Alexandre Moreira",
"Felipe Pianco",
"Bruno Fanzeres",
"Alexandre Street",
"Ruiwei Jiang",
"Chaoyue Zhao",
"Miguel Heleno"
] |
math.OC
|
[
"math.OC"
] |
Distribution System Operation Amidst Wildfire-Prone Climate Conditions Under Decision-Dependent Line Availability Uncertainty
Alexandre Moreira, Member, IEEE, Felipe Piancó, Student Member, IEEE, Bruno Fanzeres, Member, IEEE, Alexandre Street, Senior Member, IEEE, Ruiwei Jiang, Chaoyue Zhao, and Miguel Heleno, Member, IEEE
July 31, 2023
=================================================================================================================================================================================================================
Wildfires can severely damage electricity grids leading to long periods of power interruption. Climate change will exacerbate this threat by increasing the frequency of dry climate conditions. Under these climate conditions, human-related actions that initiate wildfires should be avoided, including those induced by power systems operation. In this paper, we propose a novel optimization model that is capable of determining appropriate network topology changes (via switching actions) to alleviate the levels of power flows through vulnerable parts of the grid so as to decrease the probability of wildfire ignition. Within this framework, the proposed model captures the relationship between failure probabilities and line-flow decisions by explicitly considering the former as a function of the latter. The resulting formulation is a two-stage model with endogenous decision-dependent probabilities, where the first stage determines the optimal switching actions and the second stage evaluates the worst-case expected operation cost. We propose an exact iterative method to deal with this intricate problem and the methodology is illustrated with a 54-bus and a 138-bus distribution system.
Decision-dependent uncertainty, wildfire in distribution systems, distribution system operation, ambiguity aversion, line switching.
§ NOMENCLATURE
§.§ Sets
ℒ Set of indexes of line segments.
ℒ^sw Set of indexes of switchable line segments.
𝒦^forbid Set of indexes of forbidden switching patterns.
𝒩 Set of indexes of buses.
𝒩^subs Set of indexes of buses with substation.
§.§ Parameters
β_l Sensitivity of failure probability to the scheduled active power flow of line l ∈ ℒ.
γ_l Estimated upper bound for the nominal probability of failure associated with line l ∈ ℒ.
C^ll Cost of loss of load.
C^sw_l Cost of switching line l ∈ ℒ^sw.
C^tr_b Cost of active power from main transmission grid for bus b ∈ 𝒩^subs.
D^p_b Active power demand at bus b ∈ 𝒩.
E_l Number of digits for binary expansion used in Master problem linearization for line l ∈ ℒ.
F_l Maximum power flow at line l ∈ ℒ.
P^tr_b Maximum active power injection at bus b ∈ 𝒩^subs.
PF_b Power factor at bus b ∈ 𝒩.
Q^tr_b Maximum reactive power at bus b ∈ 𝒩^subs.
Q^tr_b Minimum reactive power at bus b ∈ 𝒩^subs.
R_l Resistance of line l ∈ ℒ.
s Step for binary expansion used in Master problem linearization.
V_b Voltage lower bound at bus b ∈ 𝒩.
V_b Voltage upper bound at bus b ∈ 𝒩.
V^ref Voltage reference.
X_l Reactance of line l ∈ ℒ.
z_l^sw,0 Initial switching status of line l ∈ ℒ^sw.
§.§ Decision variables
α Worst expected value of lower-level problem.
Δ D^p-_b Amount of active power loss at bus b ∈ 𝒩.
Δ D^p+_b Amount of active power surplus at bus b ∈ 𝒩.
Δ D^q-_b Amount of reactive power loss at bus b ∈ 𝒩.
Δ D^q+_b Amount of reactive power surplus at bus b ∈ 𝒩.
f^p_l Active power flow at line l ∈ ℒ.
f^q_l Reactive power flow at line l ∈ ℒ.
p^tr_b Amount of active power injected at bus b ∈ 𝒩^subs.
q^tr_b Amount of reactive power at bus b ∈ 𝒩^subs.
v^†_b Squared voltage at bus b ∈ 𝒩.
y^sw_l Binary decision variable indicating a switching action of line l ∈ ℒ^sw (1 if switched, 0 otherwise).
z^sw_l Binary decision variable of switching status of line l ∈ ℒ^sw (1 if switched on, 0 otherwise).
δ_lh Auxiliary binary variable for binary expansion in Master problem.
η_1-31 Dual variables of lower-level problem.
ξ_l Auxiliary binary variable for linearization in Master problem.
ρ_le Auxiliary binary variable for linearization in Master problem.
φ Dual decision variable of the worst expected value of lower-level problem.
χ_l Auxiliary variable for linearization in Master problem.
ψ_l Dual decision variable of the worst expected value of lower-level problem for line l ∈ ℒ.
§ INTRODUCTION
Wildfire events are a real threat to power system operations at both the transmission and distribution levels. The damage caused by these events can cost society a significant amount of irrecoverable capital (e.g., an estimated more than $700 million in damage to transmission and distribution systems over 2000-2016 <cit.>) and be irreparable in cases where human lives are involved. Over the past two decades, California, for instance, has experienced a large rise in the frequency of small wildfires, while the total burned area from large ones has also substantially increased <cit.>. In this context, human-induced activities rank among the main causes of wildfire ignition, and power system operations are responsible for some of them, as, for instance, when sparks due to power flow through overhead lines, combined with dry weather conditions and strong winds, cause this natural disaster <cit.>. In extreme cases, this has been addressed by the electric sector with public safety power shut-offs (PSPS), which result in significant load shedding and economic impacts <cit.>. As a consequence, novel operative policies are of significant importance in order to establish efficient power system operations amidst wildfire-prone climate conditions, thus assuring high levels of sustainability and system resilience <cit.>.
Due to this critical prospect, various research efforts have been dedicated to addressing resilience in power systems under potential natural disasters and human-made attacks. At the transmission level, for example, the work developed in <cit.> proposes a two-stage stochastic Mixed-Integer NonLinear Programming (MINLP) model to define investment strategies to improve resilience, considering a range of earthquake events, and the methodology developed in <cit.> combines optimization and simulation techniques to determine a portfolio of investments to improve grid resilience while also considering the potential occurrence of earthquakes. At the distribution level, on the other hand, the work reported in <cit.> presents a storage siting and sizing model to increase resilience while facing seismic hazards and, in <cit.>, the authors designed a three-level system of optimization models to identify line hardening solutions to protect the distribution grid against intentional or unintentional attacks. Particularly regarding wildfires, an increasing amount of attention has emerged in the technical literature. Notably, in <cit.>, the authors propose a methodology to alleviate wildfire risks by optimizing the selection of components in the grid to be de-energized in a power shut-off scheme. In addition, in <cit.>, a stochastic programming model that aims at increasing the resiliency of a distribution system exposed to an approaching wildfire is devised under exogenous uncertainties such as solar radiation, wind speed, and wind direction. Notwithstanding the relevance of the recent technical literature, none of these works has taken into account the direct impact of the power flow dispatch on the likelihood of line failures in a decision-dependent uncertainty framework.
From a modeling perspective, it is important to emphasize that uncertainties in power system operations are typically exogenously induced into the decision-making process. In this framework, uncertainty sources are solely associated with external factors and are not endogenously affected by operational actions. However, in many realistic cases, such as under wildfire-prone climate conditions, the operation of electric grids is also associated with the origin of fire ignitions, which significantly increase line failure probabilities. Due to this double role of power grids, the nature of the uncertainty is thus more complex to characterize (dependent not only on meteorological conditions – exogenous factors, but also on the grid operation decisions – endogenous factors), challenging the standard exogenously-induced approach. Therefore, in order to design resilience-oriented operational strategies in high fire-threat areas, utility operators must be aware of the impact of their operational decisions on the likelihood of wildfire initiation and reduction in reliability levels[We refer to <cit.> and the references therein for a wider discussion on the impact of distribution system operations in the probabilistic characterization of wildfire ignition], i.e., the endogenous nature of the uncertainty.
Methodologically, we can divide decision-dependent uncertainty characterizations into two major types. The first one involves problems where the decision-making process filters the potential paths of uncertainty realization <cit.>. This filtering implies that a given decision might not only rule out possible scenarios from occurring but also open the possibility for a specific subset of future events to occur. The second type is associated with problems where the decisions directly impact the whole probabilistic characterization of the uncertainty factors <cit.>.
In this paper, we leverage the second modeling type to propose a new methodology for distribution system operations capable of endogenously taking into account the impact of power flows on failure probabilities in the context of a potential wildfire event. We design a decision-dependent uncertainty framework where the line failure probabilities are a function of (i.e., dependent on) the power flow levels. In this framework, we consider that during adverse climate conditions (dry weather and reasonably strong wind), switching actions can be made to reduce power flows in vulnerable areas of the grid, therefore decreasing the probability of wildfire ignition and consequent line failures, while seeking to maintain load supply. Thus, the proposed methodology allows distribution system operators to perform efficient switching actions to improve system reliability while accounting for decision-dependent line availability uncertainty. Structurally, the proposed methodology falls into the class of two-stage, distributionally robust optimization problems with decision-dependent uncertainty <cit.>. In the first stage, our model decides the network topology (switching lines) and power imports from the main grid with the main goal of minimizing the operational cost in the pre-contingency state plus the worst-case expected cost of operating the system under post-contingency states, considering probabilities adjusted according to the pre-contingency network topology and line power flows. Then, in the second stage, the power flow and energy not served are evaluated for each post-contingency state. To summarize, the contributions of this paper are twofold:
* To formulate the distribution grid operation under adverse climate conditions as a two-stage distributionally robust optimization problem where the probabilities of line failure are co-dependent on the weather conditions (exogenous) and system power flows (endogenous). In the first stage, the system operator decides switching actions (therefore determining grid topology) and power imports from the main grid aiming at co-optimizing the pre-contingency and the worst-case expected post-contingency operations, formulated as the second stage.
* To devise an effective decomposition-based solution methodology capable of solving the proposed optimization problem. The approach is able to circumvent the computational difficulties posed by the multi-level (non-convex) structure intrinsic to the decision-dependent uncertainty modeling frameworks.
§ OPTIMAL DISTRIBUTION SYSTEM OPERATION WITH DECISION-DEPENDENT UNCERTAINTY
The main objective of this work is to propose a methodology to determine the least-cost operation of a distribution system taking into account the impact of operative decisions in the probabilistic characterization of the line availability. We assume that the operator can perform switching actions in a set of line segments in the distribution system with the objective of minimizing the worst-case expected operation cost considering a decision-dependent uncertainty in line availability. In (<ref>)–(<ref>), the proposed distribution system operation model is formulated.
Minimize_{Δ D_b^p-, Δ D_b^p+, Δ D_b^q-, Δ D_b^q+, f^p_l, f^q_l, p^tr_b, q^tr_b, v^†_b, y^sw_l, z^sw_l}
∑_b ∈ 𝒩^subs C^tr_b p^tr_b + ∑_b ∈ 𝒩 C^ll ( Δ D_b^p+ + Δ D_b^p- + Δ D_b^q+ + Δ D_b^q- )
+ ∑_l ∈ ℒ^sw C^sw_l y^sw_l + sup_𝒬 ∈ 𝒫(f^p, β) 𝔼_𝒬[ H(z^sw, a^L) ]
subject to:
p^ tr _b + ∑_l ∈ℒ|to(l)=b f_l^p - ∑_l ∈ℒ|fr(l)=b f_l^p - D^p_b
- ΔD^p+_b + ΔD^p-_b = 0;
∀ b ∈𝒩^subs
q^ tr _b + ∑_l ∈ℒ|to(l)=b f^q_l - ∑_l ∈ℒ|fr(l)=b f^q_l
- tan(arccos(PF_b)) D^p_b - ΔD^q+_b + ΔD^q-_b = 0;
∀ b ∈𝒩^subs
∑_l ∈ℒ|to(l)=b f_l^p - ∑_l ∈ℒ|fr(l)=b f_l^p - D^p_b - ΔD^p+_b
+ ΔD^p-_b = 0;
∀ b ∈𝒩∖𝒩^subs
∑_l ∈ℒ|to(l)=b f^q_l - ∑_l ∈ℒ|fr(l)=b f^q_l - tan(arccos(PF_b)) D^p_b
- ΔD^q+_b + ΔD^q-_b = 0;
∀ b ∈𝒩∖𝒩^subs
- v^†_fr(l) + v^†_to(l) + 2(R_l f^p_l + X_l f^q_l) - (1 - z^sw_l)M ≤ 0;
∀ l ∈ℒ^sw
v^†_fr(l) - v^†_to(l) - 2(R_l f^p_l + X_l f^q_l) - (1 - z^sw_l)M ≤ 0;
∀ l ∈ℒ^sw
v^†_fr(l) - v^†_to(l) - 2(R_l f^p_l + X_l f^q_l) = 0;
∀ l ∈ℒ∖ℒ^sw
V_b^2 ≤ v^†_b≤V_b^2;
∀ b ∈𝒩
v^†_b = V^ref^2 ;
∀ b ∈𝒩^subs
- z^sw_l F_l ≤ f^p_l ≤ z^sw_l F_l;
∀ l ∈ℒ^sw
- z^sw_l F_l ≤ f^q_l ≤ z^sw_l F_l;
∀ l ∈ℒ^sw
- F_l ≤ f^p_l ≤F_l;
∀ l ∈ℒ
- F_l ≤ f^q_l ≤F_l;
∀ l ∈ℒ
f^q_l - ( ( 1/2 - e ) π/4 ) ( f^p_l - cos ( e π/4 ) F_l )
- sin ( e π/4 )F_l ≤ 0;
∀ l ∈ℒ, e ∈{1,…,4}
- f^q_l - ( ( 1/2 - e ) π/4 ) ( f^p_l - cos ( e π/4 ) F_l )
- sin ( e π/4 ) F_l ≤ 0;
∀ l ∈ℒ, e ∈{ 1,…,4 }
0 ≤ p^ tr _b ≤P_b^tr;
∀ b ∈𝒩^subs
Q_b^tr≤ q^ tr _b ≤Q_b^tr;
∀ b ∈𝒩^subs
ΔD^p+_b, ΔD^p-_b, ΔD^q+_b , ΔD^q-_b≥ 0; ∀ b ∈𝒩
Δ D_b^p^-≤ D^p_b;
∀ b ∈𝒩
Δ D_b^q^-≤tan(arccos(PF_b))(D^p_b);
∀ b ∈𝒩
y^sw_l ≥ z^sw_l - z^sw,0_l;
∀ l ∈ℒ^sw
y^sw_l ≥ z^sw,0_l - z^sw_l;
∀ l ∈ℒ^sw
∑_l ∈ℒ^forbid_k z^sw_l ≤ |ℒ^forbid_k | - 1;
∀ k ∈𝒦^forbid
z^sw_l ∈{0,1};
∀ l ∈ℒ^sw
where sets L, L^sw, K^forbid, N, and N^subs contain indices of all line segments, line segments that can be switched on/off, line segments that cannot be simultaneously switched on (due to radiality constraints), all buses of the distribution system, and buses with substations, respectively. In addition, parameters C^tr_b, C^ll, C^sw_l, z_l^sw,0, D^p_b, PF_b, V^ref, R_l, X_l, V_b, V_b, F_l, P^ tr _b, Q^ tr _b, Q^ tr _b represent cost of purchasing active power from the main transmission grid, cost of loss of load, cost of switching, initial switching status of switchable line segments (equal to 1 if switched on, 0 otherwise), active power demand, power factor, voltage reference, resistance, reactance, voltage lower bound, voltage upper bound, maximum power flow in each line segment, maximum active power injection at the substations, maximum reactive power injection at the substations, and minimum reactive power injection at the substations, respectively. Moreover, decision variables p^ tr _b, q^ tr _b, v^†_b, f^p_l, f^q_l, y^sw_l, z^sw_l, ΔD^p+_b, ΔD^p-_b, ΔD^q+_b, ΔD^q-_b represent active power injected at the substations, reactive power injected at the substations, squared voltage, active power flow, reactive power flow, an indication of a switching action (equal to 1 if a switching action is scheduled, 0 otherwise), switching status, active power surplus, active power loss, reactive power surplus, reactive power loss.
Problem (<ref>)–(<ref>) is a two-stage, mixed-integer, distributionally robust optimization problem with decision-dependent uncertainty (ambiguity set). The objective function (<ref>) aims at minimizing a combination of active power injection purchases at the nodes with substations, loss of load costs, and switching costs, as well as the decision-dependent expected second-stage operational cost. More specifically, the latter is represented by H(𝐳^sw,a^L), a function of the first-stage switching decision (𝐳^sw) and the random vector a^L associated with the availability of line segments of the feeder. Note that, in (<ref>), we formulate the ambiguity set P (which accounts for the collection of credible probability distributions of line availability uncertainty - this set will be better defined in Subsection <ref>) as a function of the scheduled power flow (f^p) and the (contextual) factors (β) (also better defined in Subsection <ref>) to characterize the endogenous and exogenous influences on line failure uncertainty, respectively.
Active and reactive power balance are modeled through constraints (<ref>) and (<ref>) for substations and via constraints (<ref>) and (<ref>) for the remaining buses. Constraints (<ref>) and (<ref>) model voltage difference between sending and receiving ends of switchable line segments, with M denoting a large number to relax these constraints when line l ∈ L^sw is switched off. Analogously, constraints (<ref>) represent voltage drop for non-switchable line segments. Constraints (<ref>) enforce voltage limits. Constraints (<ref>) set the voltage at the substations equal to the voltage reference. Active power flows limits are imposed by constraints (<ref>) for switchable line segments and by (<ref>) for the remaining ones. Likewise, constraints (<ref>) and (<ref>) impose limits to reactive power flows, which are also limited according to current active power flows by constraints (<ref>) and (<ref>) similarly to the linearized AC power flow presented in <cit.>. Constraints (<ref>) and (<ref>) enforce limits on active and reactive power injections at the substations, respectively. Constraints (<ref>) enforce non-negativity to power surplus and load shedding variables while constraints (<ref>) and (<ref>) impose upper limits on load shedding variables. Constraints (<ref>) and (<ref>) model the behavior of variable y^sw_l, which assumes value equal to 1 if the determined switching status z^sw_l of line segment l ∈ L^sw is different from its initial switching status z^sw,0_l. Constraints (<ref>) model the forbidden switching patterns with L^forbid_k indicating the lines segments that cannot be simultaneously switched on for each k ∈ K^forbid. In practice, this set of rules is usually defined a priori by the operator to impose radiality constraints. Finally, constraints (<ref>) impose the binary nature of the switching variables.
Following the decision-making process, the post-contingency operational problem is formulated in (<ref>)–(<ref>):
H(z^sw,a^L) = Minimize_{Δ D^p-^c_b, Δ D^p+^c_b, Δ D^q-^c_b, Δ D^q+^c_b, f^p^c_l, f^q^c_l, p^tr^c_b, q^tr^c_b, v^†^c_b}
∑_b∈𝒩^subsC^tr_b p^tr^c_b
+ C^ll∑_b∈𝒩 [ Δ D^p+^c_b + Δ D^p-^c_b + Δ D^q+^c_b + Δ D^q-^c_b ]
subject to:
p^tr^c_b + ∑_l∈ℒ|to(l)=bf^p^c_l - ∑_l∈ℒ|fr(l)=bf^p^c_l
- D^p_b - Δ D^p+^c_b + Δ D^p-^c_b = 0 :(η_b^1); ∀ b ∈𝒩^subs
q^tr^c_b + ∑_l∈ℒ|to(l)=bf^q^c_l - ∑_l∈ℒ|fr(l)=bf^q^c_l
- tan(arccos(PF_b))D^p_b - Δ D^q+^c_b + Δ D^q-^c_b = 0:
(η_b^2);
∀ b ∈𝒩^subs
∑_l∈ℒ|to(l)=bf^p^c_l - ∑_l∈ℒ|fr(l)=bf^p^c_l - D^p_b - Δ D^p+^c_b
+ Δ D^p-^c_b = 0 :(η_b^3); ∀ b ∈𝒩∖𝒩^subs
∑_l∈ℒ|to(l)=bf^q^c_l - ∑_l∈ℒ|fr(l)=bf^q^c_l
- tan(arccos(PF_b))D^p_b - Δ D^q+^c_b + Δ D^q-^c_b = 0:
(η_b^4); ∀ b ∈𝒩∖𝒩^subs
- v^† ^c_fr(l) + v^† ^c_to(l) + 2(R_l f^p^c_l + X_l f^q^c_l)
- (1 - a^L_l)M - (1 - z^sw_l)M ≤ 0:(η_l^5); ∀ l ∈ℒ^sw
v^† ^c_fr(l) - v^† ^c_to(l) - 2(R_l f^p^c_l + X_l f^q^c_l) - (1 - a^L_l)M
- (1 - z^sw_l)M ≤ 0:(η_l^6); ∀ l ∈ℒ^sw
- v^† ^c_fr(l) + v^† ^c_to(l) + 2(R_l f^p^c_l + X_l f^q^c_l)
- (1 - a^L_l)M ≤ 0:(η_l^7); ∀ l ∈ℒ∖ℒ^sw
v^† ^c_fr(l) - v^† ^c_to(l) - 2(R_l f^p^c_l + X_l f^q^c_l)
- (1 - a^L_l)M ≤ 0: (η_l^8); ∀ l ∈ℒ∖ℒ^sw
V_b^2 ≤ v^† ^c_b≤V_b^2:(η_b^9,η_b^10); ∀ b ∈𝒩
- z^sw_l F_l ≤ f^p^c_l ≤ z^sw_l F_l: (η_l^11,η_l^12); ∀ l ∈ℒ^sw
- z^sw_l F_l ≤ f^q^c_l ≤ z^sw_l F_l: (η_l^13,η_l^14); ∀ l ∈ℒ^sw
- a^L_l F_l ≤ f^p^c_l ≤ a^L_l F_l:(η_l^15,η_l^16); ∀ l ∈ℒ
- a^L_l F_l ≤ f^q^c_l ≤ a^L_l F_l:(η_l^17,η_l^18); ∀ l ∈ℒ
f^q^c_l - ( ( 1/2 - e ) π/4 ) ( f^p^c_l - cos ( e π/4 ) F_l )
- sin ( e π/4 )F_l ≤ 0:(η_l,e^19); ∀ l ∈ℒ, e ∈{1,…,4}
- f^q^c_l - ( ( 1/2 - e ) π/4 ) ( f^p^c_l - cos ( e π/4 ) F_l )
- sin ( e π/4 )F_l ≤ 0: (η_l,e^20); ∀ l ∈ℒ, e ∈{1,…,4}
0 ≤ p^tr^c _b ≤P_b^tr^c: (η_b^21,η_b^22); ∀ b ∈𝒩^subs
Q^tr^c_b ≤ q^tr^c _b ≤Q^tr^c_b:(η_b^23,η_b^24); ∀ b ∈𝒩^subs
v^†^c_b = V^ref^2:(η_b^25); ∀ b ∈𝒩^subs
Δ D_b^p^+^c,Δ D_b^p^-^c,Δ D_b^q^+^c,Δ D_b^q^-^c≥ 0:
(η_b^26,η_b^27,η_b^28,η_b^29); ∀ b ∈𝒩
Δ D_b^p^-^c≤ D^p_b: (η_b^30); ∀ b ∈𝒩
Δ D_b^q^-^c≤tan(arccos(PF_b)) D^p_b: (η_b^31); ∀ b ∈𝒩
where the symbols within parenthesis are the dual variables associated with the constraints. Problem (<ref>)–(<ref>) is a linear programming problem with (continuous) decision variables p^tr^c_b, q^tr^c_b, v^†^c_b, f^p^c_l, f^q^c_l, ΔD^p+^c_b, ΔD^p-^c_b, ΔD^q+^c_b, and ΔD^q-^c_b with essentially the same role as in (<ref>)–(<ref>). Analogously to (<ref>)–(<ref>), constraints (<ref>)–(<ref>) model active and reactive power balances. Constraints (<ref>)–(<ref>) express voltage differences in line segments under a given contingency state associated with vector a^L and a first-stage switching decision z^sw_l. Constraints (<ref>) impose voltage limits. Constraints (<ref>)–(<ref>) enforce limits to active and reactive flows.
Constraints (<ref>)–(<ref>) limit power injections and impose voltage reference at the substations as well as enforce non-negativity to power surplus and power loss variables.
§.§ Decision-(Line-Flows)-Dependent Ambiguity Set Modeling
Following the discussion of the previous section, the proposed methodology for distribution system operations seeks for least-cost pre- and post-contingency states operative decisions, the latter with respect to line segment availability. We argue, furthermore, that such line availability is mainly influenced by exogenous weather conditions, in particular during adverse climate circumstances, as well as endogenously impacted by the determined operative point and power flow in the network <cit.>. To jointly tackle these two critical uncertain factors in a unified framework, in this section, a pre-contingency line-flow-dependent ambiguity set of credible branch availability probabilities is constructed. More specifically, the uncertainty related to the underlying stochastic process associated with line failures is modeled via a tailored ambiguity set P∈ M_+ composed of a collection of probability distributions that characterize the limited knowledge of failure probabilities and the endogenous/exogenous uncertain impact factors. Formally, the proposed ambiguity set is expressed as:
P(f^p, β) = { Q∈ M_+ ( A) | 𝔼_ Q[Sâ^L] ≤μ(f^p, β) }.
In (<ref>), the function μ(·, ·) is a vector of means that captures the dependency on exogenous factors and on the decision variables. The term S is an auxiliary matrix of coefficients, and â^L = 1 - a^L denotes a random vector of line unavailability whose support is characterized by the set A. In this work, the support of the random vector a^L is defined as
A={a^L∈{0,1}^| L| | ∑_l ∈ L a^L_l ≥ | L| - K },
with K indicating the number of simultaneous unavailable system components (lines segments, in the context of this work) <cit.>. Following the ambiguity set definition (<ref>), fundamentally, a critical modeling element is the appropriate definition of the vector of means (μ(f^p, β)). In this work, we follow the main findings in <cit.> and consider the following functional representation:
μ(f^p, β) = γ + diag(β) |f^p|,
where diag(β) returns a diagonal matrix with elements of β. Structurally, vector γ represents an estimated upper bound for the nominal probability of failure associated with each line segment l ∈ L, extracted from the set of available information (e.g., failures per year), whereas vector β (exogenous-impact) characterizes the sensitivity in the probability of failure to the scheduled active power flow (endogenous-impact) in each line. Within the context of this paper, on the one hand, vector β provides instrumental information on how the probability of line failure increases as a function of the power flows. On the other hand, in particular, during adverse climate conditions (e.g., dry weather and high wind speed), the line failure can be caused by fire, started by the line itself if it is sufficiently close to vegetation. This condition can be adjusted by the system operator using the contextual (exogenous) vector β. Therefore, structurally, by setting S = [𝕀 | -𝕀]^T_2| L| × | L| and μ_l = (γ_l + β_l |f^p_l|), ∀ l ∈ L and μ_(l+| L|) = 0, ∀ l ∈ L in (<ref>), we have the resulting ambiguity set:
P(f^p, β) = { Q∈ M_+( A) | 0 ≤𝔼_ Q [â^L_l] ≤γ_l + β_l |f^p_l|;
∀ l ∈ L}.
Since â^L = 1 - a^L is a Bernoulli-type random vector, the structural specification of (<ref>) implies that a failure probability in each line l ∈ L is constrained by the factor γ_l + β_l |f^p_l|, thus dependent on the (endogenous) scheduled active power flow f^p_l and the contextual (exogenous) information β_l. It is worth highlighting that the proposed distribution system operation model (<ref>)–(<ref>) with (<ref>) has a decision process that follows a two-stage, distributionally robust optimization with decision-dependent ambiguity set rationale. This decision process is formulated as a three-level system of optimization problems, not suitable for direct implementation on commercial solvers nor standard mathematical programming algorithms. Therefore, in the next section, we leverage the problem structure to devise a decomposition-based solution approach to efficiently handle the proposed model.
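To make the decision-dependent bound concrete, the short sketch below (Python/NumPy) evaluates μ(f^p, β) = γ + diag(β)|f^p| for a few lines; all numerical values are illustrative placeholders and do not come from the case studies.

```python
import numpy as np

# Illustrative evaluation of the decision-dependent mean:
# mu_l = gamma_l + beta_l * |f^p_l| upper-bounds E_Q[a_hat_l], i.e. the
# probability that line l fails. Numbers below are placeholders only.
gamma = np.array([0.0011, 0.0011, 0.0011, 0.0011])   # nominal 24-h failure probabilities
beta  = np.array([0.03,   0.03,   1e-6,   1e-6  ])   # sensitivity to scheduled flow (per 0.01 p.u. scaled)
f_p   = np.array([0.35,  -0.10,   0.50,  -0.25  ])   # scheduled active power flows (p.u.)

# mu(f^p, beta) = gamma + diag(beta) |f^p|
mu = gamma + beta * np.abs(f_p)

for l, (g, b, f, m) in enumerate(zip(gamma, beta, f_p, mu)):
    print(f"line {l}: gamma={g:.4f}, beta*|f^p|={b*abs(f):.4f} -> failure prob. bound {m:.4f}")
```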
§ SOLUTION METHODOLOGY
The two-stage formulation (<ref>)–(<ref>) proposed in Section <ref> models the operation of a distribution system while performing switching actions to minimize the worst-case expected cost of post-contingency operations. In this section, we develop an iterative procedure based on outer approximation to solve this problem. We begin by replacing the last term in (<ref>) with the variable α, which is defined through (<ref>)–(<ref>). Thus, we equivalently rewrite model (<ref>)–(<ref>) as (<ref>)–(<ref>).
Minimize_{α, ΔD^p-_b, ΔD^p+_b, ΔD^q-_b, ΔD^q+_b, f^p_l, f^q_l, p^ tr _b, q^ tr _b, v^†_b, y^sw_l, z^sw_l}
∑_b ∈𝒩^subs C^tr_b p^ tr _b
+ ∑_b ∈𝒩 C^ll(ΔD^p+_b + ΔD^p-_b + ΔD^q+_b + ΔD^q-_b)
+ ∑_l ∈ L^sw C^sw_l y^sw_l + α
subject to:
Constraints (<ref>)–(<ref>)
α = { Maximize_{ Q∈ M_+( A)} ∑_a^L ∈ A H(z^sw,a^L) Q(a^L)
subject to:
∑_a^L ∈ A(Sâ^L) Q(a^L) ≤μ(f^p, β) : (ψ)
∑_a^L ∈ A Q(a^L) = 1 : (φ) }.
Resorting to duality theory, we can substitute α in (<ref>) by the dual objective function of the inner model (<ref>)–(<ref>) and replace it with the dual feasibility constraints. More precisely,
Minimize_{ΔD^p-_b, ΔD^p+_b, ΔD^q-_b, ΔD^q+_b, φ, ψ≥0, f^p_l, f^q_l, p^ tr _b, q^ tr_b, v^†_b, y^sw_l, z^sw_l}
∑_b ∈𝒩^subs C^tr_b p^ tr _b
+ ∑_b ∈𝒩 C^ll(ΔD^p+_b+ ΔD^p-_b + ΔD^q+_b + ΔD^q-_b)
+ ∑_l ∈ L^sw C^sw_l y^sw_l + ψ^⊤μ(f^p, β) + φ
subject to:
Constraints (<ref>)–(<ref>)
ψ^⊤Sâ^L + φ≥ H(z^sw,a^L); ∀ a^L ∈ A.
To withstand the intractability caused by the combinatorial nature of the support set A defined in (<ref>), we replace constraints in (<ref>) by:
φ≥max_a^L ∈ A{ H(z^sw,a^L) - ψ^⊤Sâ^L }.
Based on (<ref>), (<ref>), (<ref>), we propose in the next subsections an iterative procedure to address formulation (<ref>)–(<ref>).
§.§ Subproblem
The role of the subproblem is to provide an approximation to the right-hand side of (<ref>). Note that H(z^sw,a^L) is a minimization problem. Thus, to build the subproblem, we take the following steps: (i) write the dual problem of H(z^sw,a^L), (ii) subtract the dual objective function by ψ^⊤Sâ^L, and (iii) handle the bilinear products between dual and binary variables a^L in the dual objective function. It is worth mentioning that the recourse function associated with the resulting subproblem is convex with respect to the first-stage decision as it is a maximum of affine functions, therefore rendering the description of the right-hand side of (<ref>) suitable to cutting planes approximation.
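For intuition on what the subproblem searches for, the brute-force sketch below (Python) enumerates the support set 𝒜 with at most K simultaneous outages and returns the contingency maximizing H(z^sw,a^L) − ψ^⊤Sâ^L. The recourse evaluator passed in is a stand-in for the second-stage problem; the actual subproblem described above works with its linearized dual instead of enumeration.

```python
from itertools import combinations
import numpy as np

def worst_case_contingency(n_lines, K, psi, recourse_cost):
    """Enumerate a^L in A = {a in {0,1}^|L| : sum(a) >= |L| - K} and return the
    contingency maximizing recourse_cost(a) - psi^T S a_hat, where a_hat = 1 - a.
    `recourse_cost` is a placeholder for the second-stage value H(z^sw, a^L)."""
    best_val, best_a = -np.inf, None
    for k in range(K + 1):                       # number of simultaneous outages
        for out in combinations(range(n_lines), k):
            a = np.ones(n_lines)                 # 1 = line available
            a[list(out)] = 0.0
            a_hat = 1.0 - a                      # 1 = line failed
            # With S = [I | -I]^T, psi^T S a_hat = (psi[:L] - psi[L:]) . a_hat
            penalty = (psi[:n_lines] - psi[n_lines:]) @ a_hat
            val = recourse_cost(a) - penalty
            if val > best_val:
                best_val, best_a = val, a
    return best_a, best_val

# toy usage: 5 lines, at most K = 2 outages, a made-up recourse cost
toy_cost = lambda a: 100.0 * (len(a) - a.sum())  # placeholder: $100 per unavailable line
a_star, val = worst_case_contingency(5, 2, np.zeros(10), toy_cost)
print("worst-case availability vector:", a_star, "value:", val)
```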
§.§ Master problem
The master problem developed in this subsection is a relaxation of the original model (<ref>)–(<ref>), which is progressively tightened by the iterative inclusion of cutting planes. The master problem is formulated as follows.
Minimize_{ΔD^p-_b, ΔD^p+_b, ΔD^q-_b, ΔD^q+_b, δ_le, ξ_l, ρ_le, φ, χ_l, ψ≥0, f^p_l, f^p,-_l, f^p,+_l, f^q_l, p^ tr _b, q^ tr _b, v^†_b, y^sw_l, z^sw_l}
∑_b ∈𝒩^subs C^tr_b p^ tr _b
+ ∑_b ∈𝒩 C^ll(ΔD^p+_b+ ΔD^p-_b + ΔD^q+_b + ΔD^q-_b)
+ ∑_l ∈ L^sw C^sw_l y^sw_l + ∑_l ∈ L (γ_lψ_l + β_l χ_l) + φ
subject to:
Constraints (<ref>)–(<ref>)
f^p_l = f^p,+_l - f^p,-_l; ∀ l ∈ L
0 ≤ f^p,+_l≤F_l ξ_l; ∀ l ∈ L
0 ≤ f^p,-_l≤F_l(1-ξ_l); ∀ l ∈ L
ξ_l∈{0,1}; ∀ l ∈ L
f^p,+_l + f^p,-_l = s ∑_e=1^E_l 2^e-1δ_le; ∀ l ∈ L
δ_le∈{0,1}; ∀ l ∈ L, e ∈ 1,…, E_l
-M(1 - δ_le) ≤ψ_l - ρ_le≤ M(1 - δ_le); ∀ l ∈ L,
e = 1,…, E_l
- δ_le M ≤ρ_le≤δ_le M; ∀ l ∈ L, e = 1,…, E_l
χ_l = s ∑_e = 1^E_l 2^e-1ρ_le; ∀ l ∈ L
φ≥∑_b ∈𝒩^subs [ - D^p_b η_b^1^(j) -tan(arccos(PF_b)) D^p_b η_b^2^(j)
+V^2_bη_b^9^(j) - V^2_bη_b^10^(j) - P^tr^c_b η_b^22^(j) + Q^tr^c_b η_b^23^(j)
- Q^tr^c_b η_b^24^(j) - V^ref^2η_b^25^(j) - D^p_b η_b^30^(j)
- tan(arccos(PF_b)) D^p_b η_b^31^(j) ]
+ ∑_b ∈𝒩∖𝒩^subs [ - D^p_b η_b^3^(j) -tan(arccos(PF_b)) D^p_b η_b^4^(j)
+ V^2_bη_b^9^(j) - V^2_bη_b^10^(j) - D^p_b η_b^30^(j)
- tan(arccos(PF_b)) D^p_b η_b^31^(j) ]
+ ∑_l ∈ℒ∖ℒ^sw [ - (1 - a^L^(j)_l) M η_l^7^(j) - (1 - a^L^(j)_l) M η_l^8^(j)
- a^L^(j)_l F_l (η_l^15^(j) + η_l^16^(j) + η_l^17^(j) + η_l^18^(j))
+ ∑_e ∈{1,2,3,4} ( F_l ( (((1/2) - e) (π/4)) cos( e(π/4) )
- sin ( e(π/4) ) ) (η_l,e^19^(j) + η_l,e^20^(j)) ) ]
+ ∑_l ∈ℒ^sw [ - ( (1 - a^L^(j)_l) M + (1 - z^sw_l) M ) η_l^5^(j)
- ( (1 - a^L^(j)_l) M + (1 - z^sw_l) M ) η_l^6^(j) - z^sw_l F_l (η_l^11^(j)
+ η_l^12^(j) + η_l^13^(j) + η_l^14^(j)) - a^L^(j)_l F_l (η_l^15^(j) + η_l^16^(j)
+ η_l^17^(j) + η_l^18^(j))
+ ∑_e ∈{1,2,3,4} ( F_l ( (((1/2) - e) (π/4)) cos( e(π/4) )
- sin( e(π/4) )) (η_l,e^19^(j) + η_l,e^20^(j)) ) ] -
∑_l ∈ℒ [ (ψ_l - ψ_|ℒ|+l)(1 - a^L^(j)_l) ] ∀ j ∈ J,
where the product ψ^⊤μ(f^p, β) in the objective function is replaced by ∑_l ∈ L (γ_lψ_l + β_l χ_l) and χ_l represents the bilinear term ψ_l |f^p_l| as modeled in (<ref>)–(<ref>). Furthermore, expression (<ref>) represents cutting planes that are iteratively included to approximate the right-hand side of expression (<ref>).
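The sketch below (Python) illustrates this binary-expansion linearization: |f^p_l| is encoded with binaries δ_le on a grid of step s, and χ_l then recovers ψ_l|f^p_l| up to the discretization error. The step size and the numerical values are ours, and ρ_le = ψ_l δ_le is evaluated directly rather than through the big-M constraints used in the MILP.

```python
import numpy as np

def binary_expansion(abs_flow, s, E):
    """Encode |f^p| on a grid of step s with E binary digits:
    |f^p| ~ s * sum_e 2^e * delta_e, digits indexed from 0 here
    (the formulation above uses e = 1,...,E_l with 2^(e-1))."""
    level = int(round(abs_flow / s))
    return [(level >> e) & 1 for e in range(E)]

s, E = 0.01, 8                    # illustrative step size and number of digits
f_p, psi = 0.37, 2.5              # made-up flow magnitude and dual multiplier

delta = binary_expansion(abs(f_p), s, E)
abs_flow_disc = s * sum(2**e * d for e, d in enumerate(delta))

# In the MILP, rho_e = psi * delta_e is enforced with big-M constraints; here we
# just evaluate it directly to show that chi ~ psi * |f^p|.
rho = [psi * d for d in delta]
chi = s * sum(2**e * r for e, r in enumerate(rho))

print(f"|f^p| = {abs(f_p):.3f}, discretized = {abs_flow_disc:.3f}")
print(f"psi*|f^p| = {psi*abs(f_p):.4f}, linearized chi = {chi:.4f}")
```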
§.§ Solution Algorithm
In this section, we describe the outer approximation algorithm proposed in this work, following the Master and Subproblem descriptions. Structurally, it is an iterative process that is carried out until the approximation provided by the inclusion of the cutting planes (<ref>) is sufficient to make the solution of the relaxed Master problem close enough to optimality. This proposed outer approximation algorithm is summarized as follows.
* Initialization: set counter m ← 0 and set 𝒥←∅.
* Solve the optimization model (<ref>)–(<ref>), store z^sw (m), ψ^(m) and φ^(m), and set LB^(m) equal to the value of the objective function (<ref>).
* Identify the worst case contingency for z^sw (m) and ψ^(m) by running the linearized subproblem described in Subsection <ref>. Store values of its decision variables and calculate UB^(m) by subtracting φ^(m) from LB^(m) and adding the value of the objective function of the subproblem.
* If (UB^(m) - LB^(m))/UB^(m)≤ϵ, then STOP; else, CONTINUE.
* Include in (<ref>)–(<ref>) a new cutting plane of the format (<ref>) with decision variables stored in Step 3, set m ← m+1, J← J∪{m}, and go to Step 2.
It is interesting to note that the cuts generated when considering β = 0 (i.e., neglecting decision-dependent uncertainty) are still valid for the decision-dependent case (β≥0). This happens because the vectors of decision variables z^sw and ψ have the same feasible region regardless of the value of β and the cuts obtained by solving the maximization problem on the right-hand side of (<ref>) would be valid even if z^sw and ψ are not optimally decided by the Master problem. In the numerical experiments conducted in this work, we will leverage this property to accelerate the solution of the cases with decision-dependent uncertainty (β≥0) by reusing the cutting planes obtained for the case where decision-dependent uncertainty is not considered (β = 0). This reuse can be particularly advantageous since (i) it is usually much faster to solve the problem with β = 0 and (ii) warming up the problem for β≥0 with previously identified valid cutting planes can significantly improve computational efficiency as will be seen in the numerical experiments.
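A compact skeleton of this procedure is sketched below (Python). Here, solve_master and solve_subproblem are placeholders for the actual optimization models (implemented with JuMP/Gurobi in the experiments reported later), and the warm_start_cuts argument corresponds to reusing the cuts obtained with β = 0.

```python
import numpy as np

def outer_approximation(solve_master, solve_subproblem, eps=1e-3,
                        warm_start_cuts=None, max_iter=50):
    """Iterative master/subproblem scheme of Steps 1-5.
    solve_master(cuts) -> (LB, z_sw, psi, phi); solve_subproblem(z_sw, psi) -> (cut, sub_obj).
    Both callables are placeholders for the underlying MILP/LP models."""
    cuts = list(warm_start_cuts) if warm_start_cuts else []   # optional reuse of beta = 0 cuts
    LB = UB = z_sw = None
    for m in range(max_iter):
        LB, z_sw, psi, phi = solve_master(cuts)               # Step 2: relaxed master problem
        cut, sub_obj = solve_subproblem(z_sw, psi)            # Step 3: worst-case contingency
        UB = LB - phi + sub_obj
        if (UB - LB) / max(abs(UB), 1e-9) <= eps:             # Step 4: convergence check
            break
        cuts.append(cut)                                      # Step 5: add a new cutting plane
    return z_sw, LB, UB, cuts
```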
§ CASE STUDIES
The proposed methodology is illustrated in this section with two case studies. The first case study is based on a 54-bus distribution system, whereas the second one comprises a 138-bus distribution system. In both case studies, we consider that part of the grid is vulnerable to the ignition of a wildfire, which can be influenced by the levels of power flows passing through the line segments within the region. The solution algorithm described in Section <ref> has been implemented in Julia 1.6 and solved on a server with one Intel® Core® i7-10700K processor @ 3.80GHz and 64 GB of RAM, using Gurobi 9.0.3. under JuMP.
§.§ 54-bus system
In this case, we consider a 54-bus distribution system (depicted in Fig. <ref>) based on the data provided in <cit.>. In this system, there are 3 substations (buses 51, 53, and 54 in Fig. <ref>) and 57 lines. The total demand of the system is 5400 kW and the energy price is 0.01 $/kWh. In addition, we consider that each switching action costs $100, which can be performed in 11 out of the 57 lines. In Fig. <ref>, the switchable lines are represented by blue lines. Furthermore, the blue dashed lines are initially open lines whereas the blue solid lines are initially closed. To enforce radiality constraints, lines L9 and L37 cannot be switched on simultaneously. The same rule applies to the pairs of lines L17 and L52, L13 and L47, L5 and L34, L3 and L27, L13 and L19, L19 and L47, and L13 and L47.
These rules constitute the forbidden switching patterns in this case study. For replicability purposes, input data can be downloaded from <cit.>. In this case study, we consider an approaching event of adverse climate conditions that includes extremely dry weather and sustained wind speed. In addition, part of the grid, more specifically the southeast, is located close to vegetation, which renders this area particularly likely to initiate a wildfire. The southeast area of the grid includes lines L6, L7, L8, L9, L10, L12, L13, L36, L37, L43, L45, L46, L47, and L52. We consider that every line segment has a nominal rate of failure equal to 0.4 failures per year. Using the exponential probability distribution, this rate of failure translates into a failure probability of 0.11% for each line in the next 24 hours. In addition, due to the adverse climate conditions, each of the aforementioned lines in the southeast area has an increase of 3% in its probability of failure for each 0.01 pu (100 kW) of scheduled active power flow. The remaining lines have an increase of 10^-4% in their probabilities of failure per 0.01 pu of scheduled active power flow.
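These 24-hour probabilities follow from the exponential failure model, p = 1 − exp(−λ/365) with λ expressed in failures per year; the short check below reproduces the 0.11% figure, as well as the 0.0411% figure used later for the 138-bus system.

```python
import math

def failure_prob_24h(rate_per_year):
    """Probability of at least one failure in the next 24 h under an
    exponential (constant-hazard) failure model."""
    return 1.0 - math.exp(-rate_per_year / 365.0)

print(f"0.40 failures/year -> {100 * failure_prob_24h(0.40):.2f}% per 24 h")   # ~0.11%
print(f"0.15 failures/year -> {100 * failure_prob_24h(0.15):.4f}% per 24 h")   # ~0.0411%
```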
Within this context, we consider three possible modeling and algorithmic structures to determine the status of switchable lines. In the first one, hereinafter referred to as without DDU (Decision-Dependent Uncertainty), the operator ignores the decision-dependent influence of line flows on failure probabilities in the modeling and, therefore, only considers the nominal probabilities previously described. To do so, equation (<ref>) is modified to μ = γ. In the second one, hereinafter referred to as with DDU, the operator explicitly considers the aforementioned increase in failure probability corresponding to line usage according to (<ref>). In the third one, hereinafter referred to as with DDU and warm up, decision-dependent uncertainty is considered exactly as in the with DDU case, but the cutting planes of the without DDU case are included in the master problem from the beginning of the execution of the solution algorithm. This reuse of cutting planes can help the with DDU and warm up approach achieve the same solution as the with DDU method in less time. The respective switching statuses are depicted in Table <ref>, where 1 means closed line and 0 means open line. As expected, when DDU is ignored, there is no incentive to change the status of any line since the nominal probabilities of failure are relatively low. Nonetheless, when DDU is considered, six lines have their statuses changed. In this context, the solution without DDU costs $54, which is equivalent to the cost of feeding the loads without any switching, and the solution with DDU costs $654, which includes feeding loads and performing 6 switching actions. The with DDU and warm up solution results in exactly the same costs and switching decisions as the with DDU solution. In Fig. <ref>, it can be noted that the average flow per line, as well as the maximum flow among all branches, are significantly reduced when DDU is taken into account to decrease failure probabilities. The solutions without DDU, with DDU, and with DDU and warm up were obtained in 10.59s, 49.22s, and 22.50s, respectively.
§.§.§ Out-of-sample analysis
To compare the performance of both solutions provided in Table <ref>, we conduct the following out-of-sample analysis. First, we solved problem (<ref>)–(<ref>) while fixing each of the two obtained switching decisions (without considering the last term in the objective function). Given the obtained power flows, we calculated the probability of failure of each line under each switching decision. Then, we generated 2000 failure scenarios following a Bernoulli trial for the line states (1 in service; 0 failure) with the computed probabilities. Under these generated scenarios, we evaluated the performances of the two solutions. For this out-of-sample analysis, the average loss of load (% of total demand) for the solutions without DDU and with DDU is 44.15% and 0.53%, respectively. In addition, the CVaR_95% of loss of load (% of total demand) for the solutions without DDU and with DDU is 57.17% and 6.91%, respectively. Moreover, according to Fig. <ref>, the solution with DDU has an 85.25% probability of incurring no loss of load and a 96.85% probability of shedding at most 2% of load, whereas the solution without DDU has a 98.00% probability of incurring some loss of load and a probability greater than 90% of shedding more than 30% of load. Therefore, our proposed model can properly recognize the appropriate switching actions that are needed to significantly decrease the risk of loss of load within a decision-dependent uncertainty framework.
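This out-of-sample procedure can be summarized by the sketch below (Python/NumPy). The failure probabilities and the loss-of-load evaluator are placeholders; in the study, each sampled line state is evaluated by re-solving the second-stage operational problem.

```python
import numpy as np

def out_of_sample(fail_prob, evaluate_lol, n_scen=2000, alpha=0.95, seed=0):
    """Bernoulli-sample line states with the (flow-dependent) failure probabilities,
    evaluate the loss of load per scenario, and report the mean and CVaR_alpha."""
    rng = np.random.default_rng(seed)
    lol = np.empty(n_scen)
    for s in range(n_scen):
        failed = rng.random(fail_prob.size) < fail_prob      # True = line out of service
        a_line = 1.0 - failed.astype(float)
        lol[s] = evaluate_lol(a_line)                        # % of total demand shed
    tail = np.sort(lol)[int(np.ceil(alpha * n_scen)):]       # worst (1 - alpha) scenarios
    cvar = tail.mean() if tail.size else lol.max()
    return lol.mean(), cvar

# toy usage with made-up probabilities and a crude proxy for load shedding
probs = np.full(10, 0.02)
proxy = lambda a: 100.0 * (1.0 - a.mean())                   # placeholder evaluator
mean_lol, cvar_lol = out_of_sample(probs, proxy)
print(f"average LOL = {mean_lol:.2f}%, CVaR_95% = {cvar_lol:.2f}%")
```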
§.§ 138-bus system
We have also studied the benefits and effectiveness of the proposed methodology in the larger and more complex 138-bus distribution system (Fig. <ref>), based on the data provided in <cit.>. In this system, there are 3 substations, 138 buses, and 142 lines, from which 12 are switchable. The total demand of the system is 56,900 kW, the energy price is 0.2 $/kWh, the deficit cost is 2 $/kWh, and each switching action costs $200. To enforce radiality, we used a DFS (depth-first search) algorithm to identify 12 rules that avoid the simultaneous activation of line segments and result in the formation of cycles within the network. All branches have a nominal rate of failure equal to 0.15 per year and, analogously to Subsection <ref>, this rate of failure translates into a 0.0411% of failure probability for each line in the next 24 hours (γ) using the exponential probability distribution. Furthermore, the northwest part of the system is more likely to initiate a wildfire. This area of the grid includes lines L1–L5 and L17–L24.
In this numerical experiment, we conducted a sensitivity analysis of the impact of the β parameter in the solution by running the model with DDU 27 times, considering different values for β in the mentioned area. The range of the chosen values was defined considering the maximum failure probability (γ + βF). This probability indicates how likely a line failure is to happen if the power flow in the feeder is at its maximum capacity. Given that, we chose β values for the lines in the wildfire area considering β×F to range from 1% to 2% by 0.1%, from 2% to 10% by 1%, and from 10% to 90% by 10%. All the lines from outside the wildfire-prone area were assumed to have a β×F as 0.1% in all cases. The input data can also be downloaded from <cit.>.
The main results are depicted in Table <ref>, where, for expository purposes, only the results for 8 cases are shown. These levels of maximum failure probability are the ones at which the switching decision changes; for example, all cases between 1.1% and 1.8% result in the same switching decision, and so on. In addition, the values reported for the model with DDU refer to the with DDU and warm up setup, since the warm-up helps decrease the computational burden of handling the decision-dependent model.
As depicted in Table <ref>, as the value of β increases, the risk of line failure also increases; the solution therefore reconfigures the grid by switching some critical lines. By changing the grid topology, the model decreases the power flow in the critical lines (inside the wildfire-prone area), thereby decreasing the risk of failure associated with β. Moreover, as the level of β increases, the worst-case expected post-contingency operation cost grows until it becomes worthwhile to perform switching actions. For instance, in the cases where only 4 lines are switched, between 1.9% and 3%, the worst-case expected value increases up to $12,565. At 4%, similarly, it becomes economically viable to afford two further switching actions and obtain a lower worst-case expected value. In general, the solution time of each case also increases with the β levels, reaching a maximum elapsed time of roughly 20 minutes.
§.§.§ Out-of-sample analysis
Finally, we also performed an out-of-sample analysis using the same procedure presented for the 54-bus system. For the 138-bus system, we consider the results obtained for each value of the parameter β. First, in Fig. <ref>, we showcase the average load shedding of the out-of-sample analysis for each level of maximum failure probability (γ + βF). Note that, as the environmental conditions for a wildfire worsen, the average loss of load when disregarding the DDU increases significantly. For instance, for a maximum failure probability of 90%, the average loss of load would be roughly 22% of total demand if no actions were taken (without DDU), while it would be roughly 1% if the actions suggested by the DDU model were implemented. Furthermore, Fig. <ref> depicts a similar analysis, but highlighting the associated CVaR_95% level. Note that, for the setup without DDU, a loss of load of roughly 30% of total demand can be observed in the most critical scenarios. In contrast, the system topology prescribed by the with DDU setup significantly mitigates the occurrence of load shedding and, consequently, the system operation cost. For instance, consider the maximum failure probability of 90% once more. The average cost in the without DDU setup is $26,512 (Fig. <ref>), which is higher than the expected cost in the 5% worst-valued scenarios (CVaR_95%), given by $13,806, when prescribing the network topology based on the with DDU setup (Fig. <ref>).
§ CONCLUSION
This paper proposes a novel methodology to operate distribution systems amidst adverse climate conditions. We acknowledge that the likelihood of a line failure is dependent on its scheduled power flow and aggravated under a wildfire-prone environment. Therefore, in this work, we leverage a Decision-Dependent Uncertainty (DDU) framework to characterize the climate- and power-flow-dependent line availability probability function to devise a wildfire-aware distribution grid operation methodology and prescribe optimal switching actions to decrease the usage level of lines in peril locations, resulting in a more reliable operative condition. Two numerical experiments were conducted to illustrate the effectiveness of the proposed methodology. The results demonstrated that by properly considering DDU, our methodology can keep supplying loads when preventive switching actions are taken. This new configuration leads to a decrease in power flows near the areas where wildfire ignitions are more likely to occur. By doing that, the risk of failure and the risk of load loss are reduced. Although this method can be seen as a better alternative to PSPS, the model can also be adapted to determine where the shut-offs actions should be made.
§ ACKNOWLEDGMENT
Alexandre Moreira
(S'12) received the Electrical Engineering and Industrial Engineering degrees from the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, in 2011. He received his M.Sc. degree from the Electrical Engineering Department of PUC-Rio, in 2014. Currently, he is pursuing his Ph.D. degree at the Department of Electrical and Electronic Engineering of the Imperial College London, London, UK.
His current research interests include decision making under uncertainty as well as power system economics, operation, and planning.
|
http://arxiv.org/abs/2306.04919v4
|
20230608034332
|
Unsupervised Cross-Domain Soft Sensor Modelling via Deep Physics-Inspired Particle Flow Bayes
|
[
"Junn Yong Loo",
"Ze Yang Ding",
"Surya G. Nurzaman",
"Chee-Ming Ting",
"Vishnu Monn Baskaran",
"Chee Pin Tan"
] |
cs.LG
|
[
"cs.LG"
] |
Unsupervised Cross-Domain Soft Sensor Modelling via Deep Physics-Inspired Particle Flow Bayes
Junn Yong Loo^1, Ze Yang Ding^2, Surya G. Nurzaman^2, Chee-Ming Ting^1, Vishnu Monn Baskaran^1
and Chee Pin Tan^2,3
^1The authors are with the School of Information Technology, Monash University Malaysia, Jalan Lagoon Selatan, Bandar Sunway, 47500 Selangor, Malaysia (E-mail: loo.junnyong/ting.cheeming/[email protected]).
^2The authors are with the School of Engineering, Monash University Malaysia, Jalan Lagoon Selatan, Bandar Sunway, 47500 Selangor, Malaysia (E-mail: ding.zeyang/surya.nurzaman/[email protected]).
^3Corresponding author.
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Data-driven soft sensors are essential for achieving accurate perception through reliable state inference. However, developing representative soft sensor models is challenged by issues such as missing labels, domain adaptability, and temporal coherence in data.
To address these challenges, we propose a deep Particle Flow Bayes (DPFB) framework for cross-domain soft sensor modeling in the absence of target state labels. In particular, a sequential Bayes objective is first formulated to perform the maximum likelihood estimation underlying the cross-domain soft sensing problem. At the core of the framework, we incorporate a physics-inspired particle flow that optimizes the sequential Bayes objective to perform an exact Bayes update of the model extracted latent and hidden features. As a result, these contributions enable the proposed framework to learn a rich approximate posterior feature representation capable of characterizing complex cross-domain system dynamics and performing effective time series unsupervised domain adaptation (UDA). Finally, we validate the framework on a complex industrial multiphase flow process system with complex dynamics and multiple operating conditions. The results demonstrate that the DPFB framework achieves superior cross-domain soft sensing performance, outperforming state-of-the-art deep UDA and normalizing flow approaches.
Bayesian deep learning, soft sensor, domain adaptation, unsupervised learning, particle filtering.
§ INTRODUCTION
The rapid evolution of industrial instrumentation, computing, and communication in recent years has fueled a growing demand for complex systems with rich dynamics. Accurate estimation of both the representations of the dynamic system and its operating condition is often critical for monitoring and control.
These system representations often consist of a set of mutually exclusive state variables that encapsulate the information needed to describe the system's behavior over time.
In this context, inference models such as soft sensors and discrete filters play a crucial role in predicting these states (labels), given the time series measurements (data) from the integrated physical sensors.
Following the rise of deep learning, neural networks (NNs) have been widely adopted as soft sensors in both autonomous systems and industrial processes <cit.>. However, most deep learning-based soft sensor models assume that training and testing data are drawn from a single domain with a unified distribution <cit.>. This assumption is highly unrealistic in practice, as industrial systems and processes exhibit distinct dynamics and data distributions under different operating conditions <cit.>. Consequently, deep soft sensor models trained on data collected from preset conditions (source domains) cannot generalize well to target applications with unseen operating conditions (target domains).
Although the soft sensor models could be rebuilt using data and labels collected from the target domains, doing so is inefficient. Moreover, obtaining state labels in many industrial systems is not as straightforward as acquiring measurement data, which are readily provided by the physical sensors. In many circumstances, collecting representative labels under target operating conditions may be too expensive or unattainable <cit.>, ultimately leading to missing labels in the target domains.
To tackle the challenge of distributional mismatch between domains and the problem of missing labels, unsupervised domain adaptation (UDA) has shown immense potential in leveraging unlabelled target data to generalize deep learning models beyond the labelled source data <cit.>. A plethora of feature-based deep UDA models have been proposed for cross-domain time series data analysis in many important applications <cit.>.
These UDA models predominantly rely on deep domain adaptation techniques such as stacked autoencoders (AEs) <cit.>, domain adversarial training (DAT) <cit.>, variational Bayes (VB) <cit.> to extract nonlinear domain-invariant feature representations that can be adapted to both source and target domains. Recently, deep probabilistic UDA models <cit.> that combine the auto-encoding variational Bayes (AEVB) framework <cit.> with DAT have seen improved robustness to incomplete data via better predictive uncertainty quantification. Due to the intrinsic capacity of the AEVB framework in characterizing stochastic data, these models provide a probabilistic data modelling solution ideal for dynamic industrial systems <cit.>.
Despite the widespread success of existing deep UDA models, several major issues remain. To address covariate shift, these models rely on the domain invariance assumption, which assumes that the posterior distribution of features remains the same across the source and target domains. However, this assumption compromises the expressive power of the resulting feature representation <cit.>. Additionally, the AEVB framework implements the variational approximation, which employs a mean-field parametric posterior distribution with highly localized density and limited generalization capability <cit.>. These drawbacks, when combined, could impede effective domain adaptation and result in an inadequate characterization of the complex cross-domain time series data.
Secondly, deep UDA models are typically trained using exclusive source and target time series data, which is challenging to collect from specific domains in many industrial systems due to their perpetually varying operating conditions. Moreover, these domain-specific data overlook the complex dynamics that occur during transitions between domains. Therefore, to address the unsupervised cross-domain soft sensing problem, it is crucial to develop end-to-end models that can perform UDA on time series data consisting of switching domains, which has not been extensively studied in the literature <cit.>.
Finally, while RNNs are capable of capturing complex temporal dependencies and factors of variation underlying time series data <cit.>, existing RNN-based UDA models do not subject these hidden states to domain adaptation. As a consequence, this could preclude the extensive adaptation of temporal coherence across time series from distinct domains <cit.>.
In this paper, we propose a deep Particle Flow Bayes (DPFB) framework to address the unsupervised cross-domain soft sensing problem in dynamic systems with varying operating conditions. The DPFB framework learns a cohesive approximate posterior feature representation that accurately characterizes the complex cross-domain system dynamics, thereby facilitating effective unsupervised time series domain adaptation.
In addition, our proposed framework abstains from the restrictive domain invariance assumption and variational approximation in retaining the domain-adaptability and nonlinear data modelling capability of the learned feature representation.
Fig. <ref> provides an overview of the proposed framework. To the best of our knowledge, this is the first work that investigates deep particle flow for cross-domain soft sensor modelling.
Our contributions are highlighted as follows:
* A sequential Bayes objective is formulated to perform maximum likelihood estimation of the cross-domain time series data and source labels. This objective underpins the proposed framework and inherently facilitates a Bayes update of the model extracted latent and hidden features.
* A physics-inspired particle flow that resembles the advection of fluid in a potential-generated velocity field, is proposed to transport the model extracted feature samples. A flow objective is derived to enable the particle flow in performing a Bayes update to endow the transported features with a representative approximate posterior.
* The proposed DPFB framework is validated on a real industrial system with complex dynamics and multiple operating conditions. Benchmark results show that amidst the missing target labels, DPFB achieves superior cross-domain soft sensing performances in comparison to state-of-the-art deep UDA and normalizing flow approaches.
The rest of the paper is organized as follows: Section II outlines the backgrounds of time series UDA and sequential AEVB. Section III details the formulation of our proposed DPFB framework. A real industrial case study is presented and discussed in Section IV. Section V concludes the paper.
§ BACKGROUNDS
§.§ Unsupervised Time Series Domain Adaptation
UDA is a special case of transfer learning, where the source and target domains are from the same space and the labels are missing in the target domain <cit.>.
Consider a sequence of data and label subspace pairs {𝒳_n^𝒮×𝒴_n^𝒮}_n=0^L and {𝒳_n^𝒯×𝒴_n^𝒯}_n=0^L of length L, known respectively as the source and target domains with distinctive marginal data distributions p(𝒳_n^𝒮) ≠ p(𝒳_n^𝒯).
The objective of time series UDA is to predict the missing target labels y^𝒯_0:L≡{y_n | y_n∈𝒴_n^𝒯}_n=0^L, using the cross-domain data x_0:L≡{x_n | x_n∈𝒳_n^𝒮∪𝒳_n^𝒯}_n=0^L
and source labels y^𝒮_0:L≡{y_n | y_n∈𝒴_n^𝒮}_n=0^L <cit.>.
In general, deep UDA models achieve this by learning, at each sample index n, a domain-invariant latent feature space 𝒵_n that generalizes well across domains, so that it accurately predicts the labels in both the label spaces {𝒴_n^𝒮,𝒴_n^𝒯}.
On one hand, the DAT-based models <cit.> incorporate a feature-level domain discriminator, where its adversarial gradients are backpropagated to train the feature extractors in generating domain-invariant feature representations.
On the other hand, the VB-based models <cit.> implement variational inference to learn domain-invariant probabilistic latent feature representations that generalize the knowledge learned from the supervised source domain to the unsupervised target domain.
Nevertheless, these deep UDA approaches hinge on the assumption of domain invariance, which could impede the generalization and expressive abilities of feature models in characterizing complex cross-domain system dynamics <cit.>.
§.§ Sequential Auto-Encoding Variational Bayes
The AEVB is a class of probabilistic deep learning framework that aims to maximize the lower bound of the joint data log-likelihood log p(X). By introducing a latent state Z, a variational evidence lower bound (ELBO) of the data log-likelihood can be obtained via the importance decomposition
ℒ^ELBO(θ,ϑ) = 𝔼_q_ϑ(Z|X)[log p_θ(X|Z)] - 𝒟^KL[ q_ϑ(Z|X) ‖ p_θ(Z) ],
where 𝒟^KL denotes the (positive-valued) Kullback-Leibler divergence (KLD).
The data likelihood p_θ(X|Z), the latent approximate posterior distribution q_ϑ(Z|X), and the latent prior distribution p_θ(Z) are mean-field Gaussians parameterized respectively by the fully-connected neural network (FCNN) models: decoder, prior, encoder, known collectively as the variational AE (VAE).
The parameters θ and ϑ denote the set of NN parameters governing these models.
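For reference, a minimal sketch of this static ELBO with mean-field Gaussians is given below (PyTorch); the two-layer encoder/decoder, the standard-normal prior, and all dimensions are illustrative assumptions rather than part of the AEVB framework described above.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal mean-field Gaussian VAE used only to illustrate the ELBO above."""
    def __init__(self, n_x=8, n_z=2, n_h=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_x, n_h), nn.ReLU(), nn.Linear(n_h, 2 * n_z))
        self.dec = nn.Sequential(nn.Linear(n_z, n_h), nn.ReLU(), nn.Linear(n_h, 2 * n_x))

    def elbo(self, x):
        mu_z, log_var_z = self.enc(x).chunk(2, dim=-1)
        z = mu_z + torch.randn_like(mu_z) * (0.5 * log_var_z).exp()   # reparameterization trick
        mu_x, log_var_x = self.dec(z).chunk(2, dim=-1)
        # E_q[log p(x|z)] for a diagonal Gaussian likelihood (up to an additive constant)
        rec = -0.5 * (((x - mu_x) ** 2) / log_var_x.exp() + log_var_x).sum(-1)
        # KL[q(z|x) || p(z)] with a standard-normal prior (closed form)
        kl = 0.5 * (mu_z ** 2 + log_var_z.exp() - log_var_z - 1.0).sum(-1)
        return (rec - kl).mean()

x = torch.randn(16, 8)
print("ELBO:", TinyVAE().elbo(x).item())
```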
A significant drawback of VAE is the assumption that the time series data are independent and identically distributed (i.i.d.), which falls short of capturing the underlying temporal characterization.
To address this, the sequential AEVB framework <cit.> introduces a sequential ELBO as follows:
ℒ^ELBO(θ,ϑ) = ∑_n=0^L𝔼_q_ϑ(z_0:n|x_0:n)[log p_θ(x_n|z_0:n,x_0:n-1) ]
- 𝒟^KL[ q_ϑ(z_n|x_0:n,z_0:n-1) ‖ p_θ(z_n|z_0:n-1,x_0:n-1) ]
where a_0:k denotes a partial sequence up to the k^th sample.
Similarly, the conditional data likelihood p_θ(x_n|z_0:n,x_0:n-1), the latent prior distribution p_θ(z_n|z_0:n-1,x_0:n-1), and the approximate posterior q_ϑ(z_n|x_0:n,z_0:n-1) are mean-field Gaussians parameterized by RNN models, known collectively as the variational RNN (VRNN).
However, existing VRNN-based approaches rely on the variational approximation, which significantly compromises their generalization capability and hence their ability to perform effective domain adaptation. Moreover, the RNN hidden states are not explicitly subjected to a posterior update, which could result in poorly conditioned hidden states that generate latent features divergent from the actual system representation.
§.§ Particle Flow
Bayesian inference, which is a more general approach than variational inference <cit.>, constructs an exact approximation of the true posterior recursively via the Bayes rule p(z_n|x_0:n) ∝ p(z_n|x_0:n-1) p(x_n|z_n,x_0:n-1). Particle flow, first introduced in <cit.>, achieves this by subjecting prior samples to a series of infinitesimal transformations that follow an ordinary differential equation (ODE) dz/dτ = u in continuous pseudo-time τ∈ [0,1]. The function u, known as the velocity, is formulated such that the flow inherently performs the Bayes update, and the transformed samples represent an empirical (Monte Carlo) approximation of the true posterior. While particle flow is closely related to normalizing flows <cit.> for variational inference, the latter do not exhibit a Bayes update. Pal et al. <cit.> recently combined graph representation learning with particle flow for spatiotemporal time series forecasting, but relied on restrictive local linearization for nonlinear models. Nonetheless, existing flow-based methods do not consider the issues of domain shift and missing labels.
§ METHODS
In this section, we develop the deep Particle Flow Bayes (DPFB) framework. First, we consider the cross-domain time series maximum likelihood problem, which results in a sequential Bayes objective (SBO). Then, a RNN-based parameterization is introduced for the SBO to sample stochastic temporal features. Finally, we propose a physics-inspired particle flow that performs exact Bayes update of the extracted features.
§.§ Semi-supervised Sequential Bayes Objective
From a probabilistic perspective, the unsupervised cross-domain soft sensor modelling problem can be viewed as maximizing the joint likelihood p_θ(x_0:L,y^𝒮_0:L) of the cross-domain time series data and the source labels. The maximum likelihood problem can be formulated as follows:
max_θ log p_θ(x_0:L,y^𝒮_0:L)
= max_θ ∑_n=0^L log p_θ(x_n,y^𝒮_n|x_0:n-1,y^𝒮_0:n-1)
Taking into account the difficulty of directly optimizing (<ref>), we introduce a sequence of stochastic latent state space {𝒵_n}_n=0^L and consider an importance decomposition of the conditional data log-likelihood that holds for any choice of approximate posterior q_ϑ(z_0:n) as follows:
max_θ log p_θ(x_n,y^𝒮_n|x_0:n-1,y^𝒮_0:n-1)
= max_θ ℒ^ELBO(θ,ϑ;n) + min_ϑ 𝒟^KL[ q_ϑ(z_0:n) ‖ p^+(z_0:n) ]
where
ℒ^ELBO(θ,ϑ;n)
= 𝔼_q_ϑ(z_0:n)[logp_θ(x_n,y^𝒮_n,z_n|x_0:n-1,y^𝒮_0:n-1,z_0:n-1)/q_ϑ(z_n|z_0:n-1)]
Here, p^+(z_0:n) denotes the intractable true joint latent posterior p(z_0:n|x_0:n). Also, we assumed that p(z_0:n|x_0:n) = p(z_0:n|x_0:n,y^𝒮_0:n), i.e., the true posterior is self-sufficient and gains no further information from observing the labels.
Subsequently, we consider factorizing the conditional data likelihood in (<ref>) as follows:
p_θ(x_n,y^𝒮_n,z_n|x_0:n-1,y^𝒮_0:n-1,z_0:n-1)
= p_θ(x_n|z_0:n) p_θ(y^𝒮_n|z_0:n-1) p_θ(z_n|z_0:n-1)
where we assumed that p_θ(x_n,y^𝒮_n,z_n|z_0:n-1,x_0:n-1) = p_θ(x_n,y^𝒮_n,z_n|z_0:n-1), i.e., the approximate posterior latent sequence z_0:n-1 accurately characterizes the preceding time series x_0:n-1.
Finally, substituting (<ref>) into (<ref>), we obtain the sequential Bayes objective (SBO) as follows:
ℒ^SBO(θ,ϑ;n) = max_θ 𝔼_q_ϑ(z_0:n)[ log p_θ(x_n|z_0:n) ]
+ max_θ 𝔼_q(z_0:n)[ log p_θ(y^𝒮_n|z_0:n-1) ]
+ min_ϑ 𝒟^KL[ q_ϑ(z_n|z_0:n-1) ‖ p_θ(z_n|z_0:n-1) ]
+ min_ϑ 𝒟^KL[ q_ϑ(z_0:n) ‖ p^+(z_0:n) ]
As a result, we reformulated the cross-domain soft sensing problem as an end-to-end expectation-maximization algorithm (<ref>). In the maximization step, the first two objectives identify a pair of generative data likelihood p_θ(x_n|z_0:n) and label likelihood p_θ(y^𝒮_n|z_0:n-1) that represent the cross-domain data and source labels. In the expectation step, the last two objectives aim to find the optimal marginal approximate posterior q_ϑ(z_n|z_0:n-1) and joint approximate posterior q_ϑ(z_0:n) that resemble the conditional prior p_θ(z_n|z_0:n-1) and the intractable true posterior p^+(z_0:n), respectively.
In particular, the first three objectives in (<ref>) are the same as those in the VRNN's sequential ELBO (<ref>). However, we have added an additional (third) semi-supervised objective to account for the source labels. In addition to these objectives, our SBO formulation includes the final KLD objective, which inherently performs a Bayesian smoothing <cit.> of the partial state sequence z_0:n. This Bayesian smoothing encourages the latent state sequence to adopt an approximate posterior feature representation that facilitates a comprehensive characterization of the cross-domain time series data.
§.§ Recurrent Neural Network Parameterization
In this subsection, we introduce a RNN parameterization for the proposed SBO.
First, we parameterize the generative data and label likelihoods, and the latent prior in (<ref>) as Gaussian distributions, given by:
p_θ(x_n|z_0:n) = 𝒩 ( x_n | μ^dec_n , Σ^dec_n ) ,
p_θ(y^𝒮_n|z_0:n-1) = 𝒩 ( y^𝒮_n | μ^prior_n , Σ^prior_n ) ,
p_θ(z_n|z_0:n-1) = 𝒩 ( z_n | μ^prior_n , Σ^prior_n )
where Σ^dec_n = diag(σ_n^dec^2) and Σ^prior_n = diag(σ_n^prior^2) are isotropic covariances, and diag(·) denotes the diagonal function.
In particular, we allow the latent prior (<ref>) to coincide with the label likelihood (<ref>), allowing the models to learn to predict the source labels. This supervised knowledge in the source domain can then be adapted to the unsupervised target domain during the Bayes update of the proposed particle flow.
Subsequently, the mean and standard deviation pairs in (<ref>) are obtained using a stochastic RNN as follows:
h_n = φ_θ^rnn(φ_θ^z(z_n-1), h_n-1) ,
( μ^prior_n , σ^prior_n) = φ_θ^prior (h_n) ,
( μ^dec_n , σ^dec_n) = φ_θ^dec(φ_θ^z(z_n),h_n)
where φ_θ^rnn is the RNN model, and the prior model φ_θ^prior, decoder model φ_θ^dec and state encoding model φ_θ^z are FCNNs.
Here, the memory-encoding RNN hidden states h_n∈ℝ^n_h serve as an embedding of the preceding time series z_0:n-1 to retain the intrinsic temporal information; this allows us to work with a reduced-dimension state space ℝ^n_z + n_h instead of the exponentially growing ℝ^n_z× n for the entire latent sequence, thus avoiding curse of dimensionality <cit.>.
To sample from the latent prior p_θ(z_n|z_0:n-1) = p_θ(z_n|h_n), we compute its mean and standard deviation using equations (<ref>) to (<ref>), and then apply the reparameterization trick <cit.>. It is worth noting that although the hidden states have a degenerate conditional distribution δ_θ(h_n|z_n-1,h_n-1), where δ(·) is the Dirac delta function, they remain stochastic due to the recurrence and the variability of latent states after integrating out the conditioned variables. Additionally, unlike in VRNN <cit.>, data are not fed into the RNN model to minimize the inheritance of data noise. Fig. <ref> outlines the (features) prior and (data) decoder operations in DPFB.
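A sketch of this prior/decoder recursion is given below (PyTorch). The GRU cell, the layer widths, and the softplus parameterization of the standard deviations are assumptions of the sketch; the text above only specifies generic RNN/FCNN models φ_θ^rnn, φ_θ^prior, φ_θ^dec, and φ_θ^z.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianHead(nn.Module):
    """FCNN that outputs a mean and a positive standard deviation."""
    def __init__(self, n_in, n_out, n_h=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, n_h), nn.ReLU(), nn.Linear(n_h, 2 * n_out))
    def forward(self, v):
        mu, pre_sigma = self.net(v).chunk(2, dim=-1)
        return mu, F.softplus(pre_sigma) + 1e-4

class StochasticRNN(nn.Module):
    """Prior/decoder recursion: h_n = rnn(phi_z(z_{n-1}), h_{n-1}),
    z_n ~ N(mu_prior(h_n), sigma_prior(h_n)), x_n ~ N(mu_dec(phi_z(z_n), h_n), ...)."""
    def __init__(self, n_x=6, n_z=4, n_h=32):
        super().__init__()
        self.phi_z = nn.Sequential(nn.Linear(n_z, n_h), nn.ReLU())
        self.rnn = nn.GRUCell(n_h, n_h)
        self.prior = GaussianHead(n_h, n_z)
        self.dec = GaussianHead(2 * n_h, n_x)

    def step(self, z_prev, h_prev):
        h = self.rnn(self.phi_z(z_prev), h_prev)                    # hidden-state update
        mu_p, sig_p = self.prior(h)
        z = mu_p + sig_p * torch.randn_like(sig_p)                  # reparameterized prior sample
        mu_x, sig_x = self.dec(torch.cat([self.phi_z(z), h], -1))   # data-likelihood parameters
        return z, h, (mu_p, sig_p), (mu_x, sig_x)

model = StochasticRNN()
z, h = torch.zeros(10, 4), torch.zeros(10, 32)                      # 10 particles
z, h, prior_stats, dec_stats = model.step(z, h)
print(z.shape, h.shape)
```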
§.§ Particle Flow for Bayesian Inference
In this subsection, we introduce particle flow as a Bayesian inference technique to parameterize the approximate posterior. More precisely, a set of sampled prior latent and hidden state particles (z_n^i,h_n^i) is subjected to a series of infinitesimal transformations, collectively mapping them to (z̅_n^i,h̅_n^i). These FCNN-parameterized transformations, denoted individually as α_ϑ: ℝ^n_z+n_h→ℝ^n_z+n_h, are governed by an ODE d(z,h)/dτ = u on a continuous pseudo-time interval τ∈ [0,1].
The flow velocity u is designed to induce a Bayes update, such that the transformed particles empirically constitute an approximate posterior q^α_ϑ(z̅_n,h̅_n) that statistically resembles the true posterior p^+(z̅_n,h̅_n).
In particular, we sample the prior particles (z_n^i,h_n^i) ∼ p_θ(z_n|h_n) δ_θ(h_n|z̅_n-1^i,h̅_n-1^i) using the RNN and prior models (<ref>)-(<ref>), given transformed particles from the previous sample.
Incorporating the RNN parameterization (<ref>) followed by the particle flow into our SBO (<ref>), we obtain
ℒ^SBO(θ,ϑ;n) = max_θ 𝔼_q^α_ϑ(z̅_n,h̅_n)[log p_θ(x_n|z̅_n,h̅_n) ]
+ max_θ 𝔼_q(z_n,h_n)[ log p_θ(y^𝒮_n|h_n) ]
+ max_ϑ 𝔼_q(z_n,h_n)[ | Dα_ϑ(z_n,h_n) | ]
+ min_ϑ 𝒟^KL[ q^α_ϑ(z̅_n,h̅_n) ‖ p^+(z̅_n,h̅_n) ]
where we obtain the third objective using the density transformation q^α_ϑ(z̅_n|h_n) = p_θ(z_n|h_n) / |D_zα_ϑ|. Here, |·| denotes the determinant operator, and D α_ϑ denotes the Jacobian of α_ϑ with respect to its input variables.
Integrating particle flow into our framework provides two key advantages. First, it results in an end-to-end learning framework that eliminates the disjointed encoder model in VRNN. This prevents unstable KLD gradients that arise from divergent prior and encoder distributions <cit.>. Second and more importantly, it preserves the expressive and generalization powers of the approximate posterior q^α_ϑ(z̅_n|h̅_n) by overcoming the restrictive variational approximation.
Nevertheless, it is not possible to directly optimize the KLD in (<ref>) due to the intractable true posterior p^+(z̅_n,h̅_n). Taking this into account, we propose the particle flow in the following section.
§.§ Deep Physics-Inspired Particle Flow
In this subsection, we propose a physics-inspired particle flow that draws inspiration from the control-oriented approaches <cit.> to Bayesian inference. The proposed particle flow simulates fluid advection, where the flow (transport) of fluid is driven by an irrotational potential-generated velocity field. We provide a planar illustration of the particle flow in Fig. <ref>. Subsequently, we derive a tractable flow objective that solves the minimum KLD problem in the SBO (<ref>). Consequently, the particle flow learns to inherently perform a Bayes update on the RNN-extracted features towards constructing a representative approximate posterior.
Our proposed particle flow takes the following form:
d(z,h)/dτ = ∇ϕ_ϑ(x_n,z,h)
where ∇ denotes the Del operator (gradient) with respect to (z,h). Here, we parameterize the normalized (mean-subtracted) velocity potential ϕ_ϑ: ℝ^n_h+n_y→ℝ as an FCNN.
By discretizing (<ref>) using a forward Euler scheme with step size Δ_τ, we obtain the infinitesimal piecewise-linear transformation given by:
(z^i_τ+Δ_τ,h^i_τ+Δ_τ) = α_ϑ (z^i_τ,h^i_τ)
= (z^i_τ,h^i_τ) + Δ_τ ∇ϕ_ϑ(x_n,z^i_τ,h^i_τ)
Here, (z^i_τ,h^i_τ) represent the trajectories traced out by the feature particles, as they undergo the particle flow over the defined pseudo-time interval τ∈ [0,1].
Therefore, the particle trajectories begin at the initial coordinates (z_n^i,h_n^i) = (z_0^i,h_0^i) and end at final coordinates (z̅_n^i,h̅_n^i) = (z_1^i,h_1^i).
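The Euler-discretized flow step can be sketched as follows (PyTorch). The potential network architecture is an assumption, and torch.autograd.grad supplies ∇ϕ with respect to the particle coordinates; this is an illustrative sketch rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class VelocityPotential(nn.Module):
    """Scalar potential phi(x_enc, z, h); its gradient w.r.t. (z, h) drives the flow."""
    def __init__(self, n_xe=16, n_z=4, n_h=32, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_xe + n_z + n_h, width), nn.LeakyReLU(),
                                 nn.Linear(width, 1))
    def forward(self, x_enc, z, h):
        return self.net(torch.cat([x_enc, z, h], dim=-1)).squeeze(-1)

def flow_step(potential, x_enc, z, h, d_tau=1.0):
    """One Euler step (z, h) <- (z, h) + d_tau * grad phi, applied particle-wise."""
    z = z.detach().requires_grad_(True)
    h = h.detach().requires_grad_(True)
    phi = potential(x_enc, z, h).sum()            # sum over particles; gradients stay per-particle
    g_z, g_h = torch.autograd.grad(phi, (z, h), create_graph=True)
    return z + d_tau * g_z, h + d_tau * g_h

pot = VelocityPotential()
x_enc = torch.randn(10, 16)                       # encoded measurement, repeated per particle
z, h = torch.randn(10, 4), torch.randn(10, 32)
z_bar, h_bar = flow_step(pot, x_enc, z, h)
print(z_bar.shape, h_bar.shape)
```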
As a direct consequence, the proposed particle flow (<ref>) induces an evolution in the resulting empirical distribution q^α_ϑ(z_τ,h_τ), which can be described by a Kolmogorov forward equation as follows:
d q^α_ϑ/d τ = -∇·( q^α_ϑ(z,h) ∇ϕ_ϑ(x_n,z,h) )
where ∇· denotes the divergence operator.
Upon closer examination, it becomes apparent that this forward equation is in fact a continuity equation which governs the advection of fluid, uncovering its fundamental connection to physics. In particular, q^α_ϑ corresponds to the fluid density, and ϕ_ϑ represents the driving velocity potential. The resulting gradient ∇ϕ_ϑ acts as the irrotational (conservative) velocity field, as illustrated by the (coloured) arrows in Fig. <ref>.
Next, we derive a tractable flow objective for the proposed particle flow (<ref>)-(<ref>) using the calculus of variations in two stages, so that it induce a Bayes update of the RNN-extracted prior feature particles. In the first stage, Proposition <ref> shows that solving the minimum KLD problem is equivalent to solving a partial differential equation (PDE). In the second stage, Proposition <ref> yields a tractable particle flow objective that provides a weak solution to the aforementioned PDE.
Proposition 1:
Assuming the following hold:
* The prior distribution (before particle flow) q(z_n,h_n) has compact support on ℝ^n_z+n_h.
* The preceding approximate posterior q^α_ϑ (z̅_n-1,h̅_n-1) (after particle flow) is a good approximation to the true posterior p^+(z̅_n-1,h̅_n-1).
* The pseudo-time step size Δ_τ is sufficiently small.
Considering a particle flow of the form (<ref>), the problem of minimizing the KLD in (<ref>):
min_ϑ 𝒟^KL[ q^α_ϑ(z̅_n,h̅_n) ‖ p^+(z̅_n,h̅_n) ]
is then equivalent to finding the velocity potential ϕ_ϑ that satisfies the following PDE:
∇·( q(z,h) ∇ϕ_ϑ(x_n,z,h) )
= 1/2 q(z,h) ( Γ(z,h) - Γ̂(z,h) )
where Γ denotes the normalized innovation squared (NIS):
Γ(·) ≜( x_n - μ^dec_n(·) )^TΣ_n^dec(·)^-1( x_n - μ^dec_n(·) )
and Γ̂ denotes its mean 𝔼_q(z,h)[Γ].
Proof: Refer to Supplementary Materials.
Proposition 2: The following particle flow objective:
ℒ^PF(ϑ;n) = min_ϑ 1/2 𝔼_q(z,h)[ ‖∇ϕ_ϑ(x_n,z,h)‖^2]
+ 1/2 Cov_q(z,h)[ ϕ_ϑ(x_n,z,h) , Γ(z,h) ]
is equivalent to solving the weak formulation of PDE (<ref>), given that ϕ_ϑ(x_n, ·) ∈ℋ^1_0(ℝ^n_z+n_h;q). Here, ‖·‖ denotes the Euclidean norm, Cov_q(·,·) denotes the covariance, and ℋ^1_0 denotes the Sobolev space of square-integrable scalar functions whose first derivative is also square-integrable.
Proof: Refer to Supplementary Materials.
Remark: The second assumption of Proposition 1 becomes more accurate as training progresses.
The velocity potential ϕ_ϑ is modelled as an FCNN (with leaky ReLU activation), which has been shown to be a universal approximator in Sobolev spaces <cit.>.
To summarize, Propositions <ref> and <ref> have allowed us to reformulate the intractable minimum KLD problem (<ref>) as a tractable particle flow objective (<ref>). As a result, by optimizing the velocity potential ϕ_ϑ with respect to (<ref>) and transporting the prior samples through the corresponding infinitesimal transformations (<ref>), we are able to perform exact, non-variational Bayesian update of the RNN-extracted feature particles to obtain an empirical approximate posterior that accurately resembles the true posterior in a statistical sense.
The minimum covariance objective in (<ref>) plays a crucial role as it ensures that a decrease in the NIS (<ref>) corresponds to an increase in the velocity potential. The gradient ∇ϕ_ϑ consistently points in the direction of greatest potential ascent, and by minimizing the covariance, we guarantee that the potential-generated velocity field drives the flow of latent particles towards the high probability regions of true posterior distribution, as illustrated in Fig. <ref>. These regions, characterized by low NIS values, are where the true posterior is concentrated. As a result, the proposed particle flow inherently performs a Bayes update of the RNN-extracted features to facilitate effective learning and adaptation.
Given that the sampling rate is high enough to accurately represent the underlying system dynamics with negligible sampling error, we can set Δ_τ = 1 and perform the particle flow (<ref>)-(<ref>) in a single step.
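Under this single-step setting, the transport of the prior particles reduces to one move along the potential gradient; a minimal sketch (reusing the placeholder phi_net above and assuming an explicit Euler step with Δ_τ = 1) could read:

def flow_particles(x_n, z, h):
    # single-step particle flow: transport each prior particle along grad(phi)
    zh = torch.cat([z, h], dim=-1).requires_grad_(True)
    x_rep = x_n.unsqueeze(0).expand(zh.shape[0], -1)
    phi = phi_net(torch.cat([x_rep, zh], dim=-1)).sum()
    grad_phi, = torch.autograd.grad(phi, zh)
    zh_post = (zh + grad_phi).detach()            # Delta_tau = 1
    return zh_post[:, :z.shape[-1]], zh_post[:, z.shape[-1]:]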
Additionally, to improve data characterization in the velocity potential, we incorporate a measurement encoding FCNN model φ_ϑ^x(x_n) to extract useful features from the data.
After incorporating these considerations and replacing the minimum KLD problem in (<ref>) with the particle flow objective (<ref>), the final SBO for our DPFB framework is given by:
ℒ^SBO(θ,ϑ;n) = max_θ 𝔼_q^α_ϑ(z̅_n,h̅_n)[ log p_θ(x_n|z̅_n,h̅_n) ]
+ max_θ 𝔼_q(z_n,h_n)[ log p_θ(y^𝒮_n|h_n) ]
+ min_ϑ 1/2 𝔼_q(z_n,h_n)[ ‖∇ϕ_ϑ(φ_ϑ^x(x_n),z_n,h_n)‖^2]
+ min_ϑ 1/2 Cov_q(z_n,h_n)[ ϕ_ϑ(φ_ϑ^x(x_n),z_n,h_n) , Γ(z_n,h_n) ]
Here, we opted to exclude the third objective in (<ref>) for our implementation to avoid the computation of the expensive Jacobian determinant term.
This concludes the development of our proposed DPFB framework.
Overall, the framework intrinsically performs an end-to-end expectation-maximization algorithm. In the maximization step, the stochastic RNN learns to model a generative (features) prior and a (data) decoder likelihood that capture the distributional uncertainties of the cross-domain data and source labels.
Using this pair of feature prior and data likelihood, the expectation step constructs a cohesive approximate (features) posterior that is representative of the unsupervised data across domains.
Performed together, these steps allow the framework to learn a domain-adaptive probabilistic feature representation that accurately characterizes complex cross-domain system dynamics and effectively facilitates cross-domain soft sensor modelling.
Fig. <ref> outlines the particle flow operation in DPFB, where it bridges the (features) prior and the (data) decoder.
§ CASE STUDY
In this section, we present an industrial case study using a complex multivariate time series dataset collected from a real industrial-scale multiphase flow (MFP) system. The multiple operating conditions and complex dynamics of this MFP system make it a suitable candidate for evaluating the efficacy of deep UDA methods in cross-domain soft sensing.
§.§ Systems Description
The Cranfield Multiphase Flow Process (MFP) <cit.> is a real industrial process that employs advanced condition monitoring techniques based on heterogeneous sources. The Cranfield MFP is designed to provide a controlled and measured flow rate of water, oil, and air to a pressurized system. The process flow diagram is illustrated in Fig. <ref>, and a detailed description of the process can be found in <cit.>.
During the data collection process, the air and water flow rates were deliberately varied between the set points (operating conditions) listed in Table <ref> to obtain a good variety of process changes and capture a wide range of distinct dynamics.
For this case study, the variables that require specialized measurement sensors, including pressure, flow rate, and density, are identified as the states that need to be predicted using the proposed soft sensor model. The remaining variables are considered as measurement data, which the model utilizes for state prediction.
Table <ref> provides a list of all MFP variables. All variables are sampled at 1 Hz.
§.§ Benchmarking
To evaluate the cross-domain soft sensing performance of our proposed framework, we compare it against several state-of-the-art UDA approaches based on deep autoencoders (DLSN <cit.>), domain adversarial training (DARNN <cit.>), variational Bayes (MVI <cit.>), and combinations of these concepts (VRADA <cit.>, InfoVDANN <cit.> and DPTR <cit.>). Note that we have extended DLSN and MVI based on their respective sequential counterparts <cit.> to allow for time series regression.
The RNNs in these baseline approaches are designed to be single-layered Long Short-Term Memory (LSTM) networks for capturing complex temporal patterns.
Additionally, we include a stacked (three-layered) LSTM <cit.> trained on complete source and target labels (termed LSTM-C) to benchmark the overall UDA performances.
Furthermore, to highlight the importance of the physics-inspired particle flow in our proposed framework, we replace it with two widely recognized normalizing flows: IAF <cit.> and iResNet <cit.>.
To demonstrate the resilience of the proposed framework to particle degeneracy, we also compare it with a deep probabilistic model based on particle filtering: FIVO <cit.>.
All these baselines are trained with missing target labels, and their architectures are designed to match the number of model parameters in DPFB.
§.§ Experimental Setup
The MFP dataset contains three data sequences with 13200, 10372 and 9825 data points, respectively. The first two are used for training and the last one is used for testing.
The target (state) labels in the testing dataset serve only as ground-truths for result validation.
For the purpose of unsupervised domain adaptation, the data region in which the air flow rate variables range from 0.0278 to 0.0347 m^3/s is considered as the source domain, which constitutes 45% of the entire training data. The region where the air flow rate lies outside of this range is considered as the target domain, where the state labels are assumed to be missing. Under these operating conditions, the time series data (and labels) consist of time windows that switch between the two domains, as shown in Fig. <ref>.
In terms of model architecture, our proposed DPFB framework employs a single-layer Gated Recurrent Unit (GRU) with hidden state h_n∈ℝ^512 as the RNN model φ_θ^rnn.
The prior model φ_θ^prior and the decoder model φ_θ^dec are implemented as FCNNs with hidden layers {256, 128} and {512, 256, 128}, respectively. Here, each entry in the curly brackets is a hidden layer and the value denotes its width.
The measurement and state encoding models φ_θ^x and φ_θ^y are also implemented as FCNNs with hidden layers {256} and {128}, respectively.
The velocity potential ϕ_ϑ is modeled as a bottleneck FCNN with hidden layers {512, 256, 128}.
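For concreteness, the listed components could be laid out roughly as follows. This is only a sketch: the latent dimension, the data/label dimensions, the encoder output widths and the wiring of the GRU input are assumptions rather than the paper's exact implementation.

import torch.nn as nn

def fcnn(widths):
    # fully connected network with leaky-ReLU activations between layers
    layers = []
    for a, b in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(a, b), nn.LeakyReLU()]
    return nn.Sequential(*layers[:-1])            # no activation after the output layer

n_x, n_y, n_z, n_h = 17, 7, 32, 512               # assumed data/label/latent/hidden dimensions

rnn       = nn.GRU(input_size=256 + 128, hidden_size=n_h, num_layers=1, batch_first=True)  # fed with the two encodings (assumed wiring)
prior     = fcnn([n_h, 256, 128, 2 * n_z])        # hidden layers {256, 128}; Gaussian stats of z_n assumed as output
decoder   = fcnn([n_z + n_h, 512, 256, 128, 2 * n_x])    # hidden layers {512, 256, 128}
x_encoder = fcnn([n_x, 256, 256])                 # measurement encoder, hidden layer {256} (output width assumed)
y_encoder = fcnn([n_y, 128, 128])                 # state/label encoder, hidden layer {128} (output width assumed)
potential = fcnn([n_x + n_z + n_h, 512, 256, 128, 1])    # bottleneck velocity-potential FCNN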
These model hyperparameters are chosen based on training loss.
All proposed and baseline models are trained on batches of data and label sequences with a fixed length of 100. We optimize the models using the Adam optimizer with annealing and L2 regularization of 0.01, for 300 epochs. The batch sizes and initial learning rates are selected from {8, 16, 32} and {5×10^-4, 1×10^-4, 5×10^-5}, respectively. We use 8 sample particles during training, and a single particle without reparameterization during inference.
§.§ Results and Discussions
In this section, we evaluate the performance of DPFB and the baselines using the following metrics (a minimal computation sketch is given after the list):
* State Prediction Error: the normalized root-mean-square error (NRMSE) between the predicted states and the ground-truth labels, to assess the accuracy of the point state estimates based on the approximate posterior.
* Coefficient of Determination: the proportion of variation (R^2) between the predicted states and the ground truths, to assess the unbiasedness and sharpness of the approximate posterior.
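Both metrics admit a direct per-variable implementation; a minimal sketch (normalizing the RMSE by the ground-truth range, which is one common convention and an assumption here) is:

import numpy as np

def nrmse(y_true, y_pred):
    # RMSE per state variable, normalized by the ground-truth range (one common convention)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
    return rmse / (y_true.max(axis=0) - y_true.min(axis=0))

def r2(y_true, y_pred):
    # coefficient of determination per state variable
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot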
Table <ref> shows the comparison results.
Overall, our proposed DPFB outperforms the state-of-the-art VB and DAT methods, achieving the lowest state NRMSE and the highest R^2 among the baselines. Furthermore, DPFB performs better with the proposed particle flow than with the normalizing flows IAF and iResNet, owing to the Bayes update performed via the physics-inspired particle flow. Unlike FIVO, which attains low R^2 scores due to repeated importance weighting and resampling, DPFB is robust to the particle degeneracy issue as it does not involve these particle filtering operations.
Notably, the difference between the DPFB and the baselines grows larger on the unsupervised target domains consisting of unseen system dynamics. Moreover, the DPFB achieves scores that are closest to the fully-supervised LSTM-C benchmark. These results demonstrate the outstanding ability of DPFB in producing accurate state predictions amidst the missing target labels. Additionally, they show the DPFB's capability to perform effective UDA on time series data with switching domains. By virtue of this capability, our framework addresses the cross-domain soft sensing problem in dynamic systems with varying operating conditions.
Fig. <ref> shows the validation results of the state predictions against the ground truth labels. Notably, during the unsupervised time windows where the system operates in the target domains, the predictions of our proposed DPFB are able to track the ground truths more closely than the second-best performing VRADA. This is particularly evident in the air flow rate predictions in the third row of Fig. <ref>. Unlike VRADA, the air flow rate prediction of DPFB is not restricted within the predetermined source domain range of 0.0278 - 0.0347 m^3/s. This capability of DPFB to generalize predictions beyond the source domains is due to its competence in accurately performing soft sensing (state inference) using unsupervised data. This is achieved through the proposed particle flow, which circumvents the variational approximation of variational Bayes and facilitates exact (non-variational) Bayesian inference of the state labels across both supervised and unsupervised domains.
Fig. <ref> compares the PCA embeddings (using cosine kernel) of the extracted latent state features and the PCA embeddings of the ground truth labels. The latent state embeddings generated by our proposed DPFB model are found to be most consistent with the ground truth embeddings. Additionally, the PCA embeddings of the source and target domains exhibit less overlap in both the DPFB and ground truth embeddings compared to the other baseline methods. This can be attributed to the dispensing of domain-invariant enforcing techniques such as DAT in our DPFB framework, which helps in retaining the rich representation of the extracted posterior feature space. Consequently, the models can construct an inclusive joint (time) feature space capable of adopting complex cross-domain state representations of dynamic systems.
These results illustrate that DPFB is more effective at generating accurate state predictions and at constructing a cohesive probabilistic feature representation that better captures the underlying characteristics of the unsupervised data.
Fig. <ref> presents the t-SNE embeddings of the extracted hidden state features from the RNN-based models. The hidden state embeddings of our proposed DPFB exhibit a structured temporal arrangement that closely resembles that of the fully-supervised LSTM-C benchmark. This similarity demonstrates the ability of the proposed particle flow to endow the RNN hidden states with temporal characteristics that are inherent to the dynamic system. As a result, the endowed temporal coherence facilitates a smooth transition between the source and target time windows, enabling effective cross-domain soft sensor modelling. In contrast, baseline models that lack this temporal coherence struggle to transition between the domains, which leads to poor performance. These results highlight the importance of subjecting the hidden states to a Bayes update via the proposed particle flow.
§ CONCLUSION
In this paper, we propose a novel DPFB framework for unsupervised cross-domain soft sensor modelling in dynamic systems.
The framework consists of an SBO that facilitates Bayesian filtering of the hierarchical features, followed by a physics-inspired particle flow that inherently performs a Bayes update to acquire a representative approximate posterior. The proposed framework is validated on a multi-domain MFP system with complex process dynamics. The superior results of DPFB in comparison to state-of-the-art deep UDA and normalizing flow models provide strong evidence for the effectiveness of our proposed framework in addressing the cross-domain soft sensing problem in dynamic systems with varying operating conditions. As future work, incorporating a self-attention mechanism to obtain explicit dependencies on each time position of the feature sequence could be explored, as well as investigating graph representation learning to uncover spatiotemporal relationships in the time series data.
|
http://arxiv.org/abs/2306.05780v2
|
20230609094729
|
A space-time DG method for the Schrödinger equation with variable potential
|
[
"Sergio Gómez",
"Andrea Moiola"
] |
math.NA
|
[
"math.NA",
"cs.NA"
] |
A space-time DG method for the Schrödinger equation with variable potential
============================================================================
We present a space–time ultra-weak discontinuous Galerkin discretization of
the linear Schrödinger equation with variable potential. The proposed method is well-posed and quasi-optimal in mesh-dependent norms for very general discrete spaces.
Optimal h-convergence error estimates are derived for the method when test and trial spaces are chosen either as piecewise polynomials, or as a novel quasi-Trefftz polynomial space.
The latter allows for a substantial reduction of the number of degrees of freedom and admits piecewise-smooth potentials.
Several numerical experiments validate the accuracy and advantages of the proposed method.
Keywords:
Schrödinger equation, ultra-weak formulation,
discontinuous Galerkin method, smooth potential, quasi-Trefftz space.
§ INTRODUCTION
In this work we are interested in the approximation of the solution to the time-dependent Schrödinger equation on a space–time cylinder Q_T = Ω× I, where Ω⊂ℝ^d (d ∈ℕ) is an open,
bounded polytopic domain with Lipschitz boundary ∂Ω, and I = (0, T) for
some final time T > 0:
𝒮ψ := i ∂_tψ + 1/2Δψ - V ψ = 0 in Q_T,
ψ = g_D on Γ_D × I,
∂_𝐧ψ = g_N on Γ_N × I,
∂_𝐧ψ - i ϑψ = g_R on Γ_R × I,
ψ(𝐱, 0) = ψ_0(𝐱) on Ω.
Here i is the imaginary unit; ∂_𝐧(·) is the normal derivative-in-space operator; V: Q_T →ℝ is the potential energy function; ϑ∈ L^∞(Γ_R × I) is a positive “impedance" function; the Dirichlet (g_D), Neumann (g_N), Robin (g_R) and initial condition (ψ_0) data are given functions; Γ_D, Γ_N, Γ_R are a polytopic partition of ∂Ω.
The model problem (<ref>) has a wide range of applications.
In quantum physics <cit.>,
the solution ψ is a quantum-mechanical wave function determining the dynamics of one or multiple particles in a potential V.
In electromagnetic wave propagation <cit.>, it is called “paraxial wave equation" and ψ is a function associated with the field component in a two-dimensional electromagnetic problem where the energy propagates at small angles from a preferred direction. In such problems, the function V depends on the refractive index and the wave number.
In underwater sound propagation <cit.>, it is referred to as “parabolic equation" and ψ describes a time harmonic wave propagating primarily in one direction.
In molecular dynamics <cit.>, by neglecting the motion of the atomic nuclei, the Born-Oppenheimer approximation leads to a Schrödinger equation in the semi-classical regime.
Space–time Galerkin methods discretize all the variables in a time dependent PDE at once; this is in contrast with the method of lines, which combines a spatial discretization and a time-stepping scheme.
Space–time methods can achieve high convergence rates in space and time, and provide discrete solutions that are available on the whole space–time domain.
The literature on space–time Galerkin methods for the Schrödinger equation is very scarce.
In fact, the standard Petrov-Galerkin formulation for the Schrödinger equation, i.e., the analogous formulation to that proposed in <cit.> for the heat equation, is not inf-sup stable, see <cit.>.
In <cit.>, Karakashian and
Makridakis proposed a space–time method for the Schrödinger equation with nonlinear potential, combining a conforming Galerkin discretization in space and an upwind DG time-stepping. This method reduces to a Radau IIA Runge-Kutta time discretization in the case of constant potentials.
Moreover, under some restrictions on the mesh that are necessary to preserve the accuracy of the method, it allows for changing the spatial mesh on each time-slab, but not for local time-stepping.
A second version of the method, obtained by enforcing the transmission of information from the past through a projection, was proposed in <cit.>. This version reduces to a Legendre Runge-Kutta time discretization in the case of constant potentials.
Recently, some space–time methods based on ultra-weak formulations of the Schrödinger equation have been designed. The well-posedness of such formulations requires weaker assumptions on the mesh.
In <cit.>, Demkowicz et al. proposed a discontinuous Petrov–Galerkin (DPG) formulation for the linear Schrödinger equation. The method is a conforming discretization of an ultra-weak formulation of the Schrödinger equation in graph spaces. Well-posedness and quasi-optimality of the method follow directly from the inf-sup stability (in a graph norm) of the continuous Petrov–Galerkin formulation.
In <cit.>, Hain and Urban proposed a space–time ultra-weak variational formulation for the Schrödinger equation with optimal inf-sup constant.
The formulation in <cit.> is closely related to the DPG method in <cit.>, but differs in the choice of the test and trial spaces. While in the latter one first fixes a trial space and then construct a suitable test space, the former requires the choice of a conforming test space and then the trial space is defined accordingly.
We are not aware of publications proposing space–time DG methods for the Schrödinger equation other than <cit.> and <cit.>, which is described below.
Trefftz methods are Galerkin discretizations with test and trial spaces spanned by local solutions of the considered PDE.
Trefftz methods with lower-dimensional spaces than standard finite element spaces, but similar approximation properties,
have been designed for many problems, e.g.,
Laplace and solid-mechanics problems <cit.>; the Helmholtz equation <cit.>; the time-harmonic <cit.>, and time-dependent <cit.> Maxwell's equations;
the acoustic wave equation in second-order <cit.> and first-order <cit.> form; the Schrödinger equation <cit.>; among others. Nonetheless, pure Trefftz methods are essentially limited to problems with piecewise-constant coefficients, as for PDEs with varying coefficients the design of “rich enough" finite-dimensional Trefftz spaces
is in general not possible.
A way to overcome this limitation is the use of quasi-Trefftz methods, which are based on spaces containing functions that are just approximate local solutions to the PDE. In essence, the earliest quasi-Trefftz spaces are the generalized plane waves used in <cit.> for the discretization of the Helmholtz equation with smoothly varying coefficients. More recently, a quasi-Trefftz DG method
for the acoustic wave equation with piecewise-smooth material parameters was proposed in <cit.>, where some polynomial quasi-Trefftz spaces were introduced. As an alternative idea, the embedded Trefftz DG method proposed in <cit.> does not require the local basis functions to be known in advance, as they are simply taken as a basis for the kernel of the local discrete operators in a standard DG formulation. This corresponds to a Galerkin projection of a DG formulation with a predetermined discrete space onto a Trefftz-type subspace. In practice, it requires the computation of singular or eigenvalue decompositions of the local matrices.
In <cit.>, the authors proposed a space–time Trefftz-DG method for the Schrödinger equation with piecewise-constant potential, whose well-posedness and quasi-optimality in mesh-dependent norms were proven for general discrete Trefftz spaces. Optimal h-convergence estimates were shown for a Trefftz space consisting of complex-exponential wave functions.
In this work we propose a space–time DG method for the discretization of the Schrödinger equation with variable potentials, extending the formulation of <cit.> to more general problems and discrete spaces.
The main advantages of the proposed method
are the following:
* The proposed ultra-weak DG variational formulation of (<ref>) is well-posed, stable, and quasi-optimal in any space dimension for an almost arbitrary choice of piecewise-defined discrete spaces and variable potentials.
* A priori error estimates in a mesh-dependent norm can be obtained by simply analyzing the approximation properties of the local spaces.
* The method naturally allows for non-matching
space-like and time-like facets and all our theoretical results hold under standard assumptions on the space–time mesh, which make
the method suitable for adaptive versions and local time-stepping.
* Building on <cit.>, for elementwise smooth potentials, we design and analyze a quasi-Trefftz polynomial space with similar approximation properties of full polynomial spaces but with much smaller dimension, thus substantially reducing the total number of degrees of freedom required for a given accuracy.
Structure of the paper: In Section <ref> we introduce some notation on the space–time meshes to be used and the proposed ultra-weak DG variational formulation on abstract spaces. Section <ref> is devoted to the analysis of well-posedness, stability and quasi-optimality of the method. In Sections <ref> and <ref> we prove optimal h-convergence estimates for the method when the test and trial spaces are taken as the space of piecewise polynomials or a novel quasi-Trefftz space, respectively. In Section <ref> we present some numerical experiments that validate our theoretical results and illustrate the advantages of the proposed method. We end with some concluding remarks in Section <ref>.
§ ULTRA-WEAK DISCONTINUOUS GALERKIN FORMULATION
§.§ Space–time mesh and DG notation
Let 𝒯_h be a non-overlapping prismatic partition of Q_T, i.e., each element K ∈𝒯_h can be written as K = K_𝐱× I_K for a d-dimensional polytope K_𝐱⊂Ω and a time interval I_K ⊂ I.
We use the notation h_{K_𝐱} := diam(K_𝐱), h_{K_t} := |I_K| and h_K := diam(K) = (h_{K_𝐱}^2 + h_{K_t}^2)^{1/2}.
We call “mesh facet” any intersection F=_1∩_2 or F=_1∩∂ Q_T, for
K_1,K_2∈, that has positive d-dimensional measure and is contained in a d-dimensional hyperplane.
We denote by = (, ) ∈^d+1 one of the two unit normal vectors orthogonal to F with = 0 or = 1.
We assume that each internal mesh facet F is either
a space-like facet if = 0, or a time-like facet if = 0.
We further denote the mesh skeleton and its parts as
:= ⋃_K ∈, := Ω×{0}, := Ω×{T},
:= × (0, T), := × (0, T), := × (0, T),
:= ,
:= .
We employ the standard DG notation for the averages · and
space ·_ and time ·_t jumps for
piecewise complex scalar w and vector fields:
w : = 1/2(w|_K_1 + w|_K_2)
: = 1/2(|_K_1 + |_K_2)
_1∩_2⊂,
w_ : = w|_K_1K_1 + w|_K_2K_2
_ : = |_K_1·K_1 + |_K_2·K_2 _1∩_2⊂,
w_t : = w|_K_1K_1 + w|_K_2K_2 = w^- - w^+,
_1∩_2⊂,
where K∈^d and K∈ are the space and time components of the outward-pointing unit
normal vectors on ∩ and ∩, respectively. The superscripts “-" and “+" are used to denote the traces of a function on a space-like facet from the elements “before" (-) and “after" (+) the facet.
We denote space–time broken function spaces as H^s():={v∈ L^2(Q_T), v|_K∈ H^s(K) ∀ K∈}, s:={v:Q_T→, v|_K∈sK ∀ K∈}, for s∈_0.
§.§ Variational formulation of the DG method
For any finite-dimensional subspace () of the broken Bochner–Sobolev space
() := ∏_K ∈1; L^2()∩ L^2(;
2),
the proposed ultra-weak DG variational formulation for the Schrödinger equation (<ref>) is:
∈() =
ℓ() ∀∈(),
where
:= ∑_K ∈∫_K
+ i (∫__t + ∫_)
+ 1/2∫_(·_ + i α_·_
- _ + i β__)
+ 1/2∫_( + i α)
+ 1/2∫_(- +
iβ() () )
+ 1/2∫_(δ
+(1-δ)iϑ)(+i/ϑ)
+ i ∑_K ∈∫_K μ,
ℓ() := i ∫_ψ_0 + 1/2∫_( + i α)
+ 1/2∫_(- + i β) + 1/2∫_((δ - 1) +
iδ/ϑ),
for some mesh-dependent stabilization functions
α∈ L^∞(∪), _∪α>0,
β∈ L^∞(∪), _∪β>0,
δ∈ L^∞(), 0<δ≤1/2,
μ∈ L^∞(), _μ>0.
More conditions on these functions, in particular on their dependence on the local mesh size, will be specified in Section <ref>.
The variational formulation (<ref>) can be derived by integrating by parts twice in space and once in time in each element as in <cit.>, and treating the Neumann and the Robin boundary terms similarly to <cit.>.
However, as the current setting does not require the discrete space () to satisfy the Trefftz property (ψ_|_K = 0, ∀ K ∈), there are an additional volume term that is needed to ensure consistency (the first integral over K in ··), and a local Galerkin-least squares correction term (the second integral over K in ··) that were not present in the previous method.
Such additional terms vanish when () is a discrete Trefftz space, thus recovering the formulation in <cit.>.
The variational problem (<ref>) is a global problem involving all the degrees of freedom of the discrete solution for the whole space–time cylinder .
However, as upwind numerical fluxes are taken on the space-like facets, if the space–time prismatic mesh can be decomposed into time-slabs (i.e., if the mesh elements can be grouped in sets of the form Ω× [t_n-1, t_n] for a partition of the time interval of the form 0 = t_0 < t_1 < … < t_N = T), the global linear system stemming from (<ref>) can be solved as a sequence of N smaller systems of the form
𝐊_n Ψ_h^(n) = b_n 1 ≤ n ≤ N,
where b_n = 𝐑_n Ψ_h^(n - 1) for n = 2, …, N.
This is comparable to an implicit time-stepping, and it naturally allows for local mesh refinement in different regions of the space–time cylinder Q_T.
Moreover, when is a tensor-product space–time mesh, the potential V does not vary in time, and the partition of the time interval is uniform, the matrices 𝐊_n and 𝐑_n are the same for every time-slab.
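A sketch of this time-marching strategy is given below; assemble_slab is a hypothetical routine returning the slab matrices 𝐊_n, 𝐑_n and a load vector collecting the boundary data (and, for n = 1, the initial datum), and a sparse direct solver is used on each slab.

import scipy.sparse.linalg as spla

def solve_time_slabs(assemble_slab, N):
    # sequential solve of K_n Psi^(n) = b_n, where b_n gathers the data terms and,
    # for n >= 2, the upwind coupling R_n Psi^(n-1) with the trace from the previous slab
    history = []
    psi_prev = None
    for n in range(1, N + 1):
        K, R, g = assemble_slab(n)                # hypothetical per-slab assembly (sparse K, R)
        rhs = g if n == 1 else g + R @ psi_prev
        psi_prev = spla.spsolve(K.tocsc(), rhs)   # complex-valued sparse direct solve
        history.append(psi_prev)
    return history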
The well-posedness of the variational formulation (<ref>) strongly relies on the L^2(K)-self-adjointness of the Schrödinger operator (·) on each K∈ (in the sense that ∫_Kψ φ=∫_Kψ φ for all ψ∈(), φ∈^∞_0(K), thanks to the fact that the only odd derivative in is multiplied to the imaginary unit), which makes the local Galerkin-least squares correction term consistent.
On the one hand, such term is essential in the proof of coercivity of the sesquilinear form · · (see Proposition <ref> below).
On the other hand, numerical experiments suggest that it can be neglected without loosing accuracy and stability, see Section <ref> below.
This is also the case for the quasi-Trefftz DG method for the Helmholtz equation <cit.> and for the wave equation <cit.>, where a similar correction term was used.
Nonetheless, in the design of an ultra-weak DG discretization for a PDE with a non-self-adjoint differential operator ℒ(·) (e.g., the heat operator ℒ(·) = (∂_t - )(·)), the corresponding local least-squares correction term ∑_K ∈∫_K μℒℒ would not control the consistency term ∑_K ∈∫_Kℒ^* arising from the integration by parts.
The variational problem (<ref>) allows for time-dependent potentials V. This is an important feature as, in such a case, the method of separation of variables cannot be used to reduce the time-dependent problem (<ref>) to the time-independent Schrödinger equation.
§ WELL-POSEDNESS, STABILITY AND QUASI-OPTIMALITY OF THE DG METHOD
The theoretical results in this section are derived for any spatial dimension d, and are independent of the specific choice of the discrete space ().
Recalling that the volume penalty function μ, the stabilization functions α, β and the impedance function ϑ are positive, and that δ∈(0, 1/2),
we define the following mesh-dependent norms on ():[Observe that a factor 1/2 is missing in the first term of the DG norm in <cit.>.]
w^2 : = ∑_K ∈μ^1/2
wL^2(K)^2 + 1/2(w_tL^2()^2 +
wL^2(∪)^2)
+ 1/2( α^1/2w_L^2()^d^2
+ β^1/2 w_L^2()^2 + α^1/2wL^2()^2
+
β^1/2 w L^2()^2 +
(ϑ (1 - δ))^1/2 wL^2()^2 + (δϑ^-1)^1/2 w L^2()^2 ),
w^2 : = w^2 + ∑_K ∈μ^-1/2wL^2(K)^2 + 1/2w^-L^2()^2
+
1/2(α^-1/2 wL^2()^d^2 + α^-1/2
w L^2()
+
β^-1/2wL^2()^2 + β^-1/2 wL^2()^2
+ δ^-1/2ϑ^1/2 w L^2()^2).
The sum of the L^2(K)-type terms ensures that · is a norm.
That · is a norm on () follows from the following reasoning (see also <cit.>): if w∈() and w = 0, then w is the unique variational solution to the Schrödinger equation (<ref>) with homogeneous initial and boundary conditions.
Moreover, by the energy conservation (if = ∅) or dissipation (if ≠∅), then w(·, t)L^2(Ω)^2 ≤w(·, 0)L^2(Ω)^2 = 0, for all t ∈(0, T]; therefore, w = 0.
The DG norms in (<ref>)–(<ref>) are chosen in order to ensure the following properties of the sesquilinear form ·· and the antilinear functional ℓ(·), from which the well-posedness and quasi-optimality of the method (<ref>) follow.
For all w ∈() the following identity holds
(ww) = w^2.
The result follows from the following identities (see <cit.> for more details):
∫_(v^- _t- 1/2v^2_t) =
1/2 ∫_v_t^2 ∀ v ∈ H^1(),
∫_(v_ + ·v_) = ∫_v _ ∀ (v, ) ∈ H^1() × H^1()^d,
-0.25in
(∑_K ∈∫_K w
w) = -1/2(∫_w^2_t +
∫_w^2 - ∫_w^2 )
+ 1/2(∫_w w_
+ ∫_× I w w)
∀ w ∈().
The sesquilinear form ·· and the antilinear functional ℓ(·) are continuous in the following sense:
∀ v,w ∈()
vw≤ 2vw,
ℓ(v)≤ (2ψ_0L^2()^2 +
α^1/2L^2()^2 + β^1/2L^2()^2 + ϑ^-1/2L^2()^2 )^1/2w.
The terms on ,,, and are controlled as in <cit.>.
The remaining terms are bounded using Cauchy–Schwarz inequality and the inequality δ≤ 1 - δ < 1.
For any finite-dimensional subspace () of (), there exists
a unique solution ∈() satisfying the variational formulation
(<ref>). Additionally, the following
quasi-optimality bound holds:
ψ - ≤ 3 inf_∈()ψ -
.
Moreover, if = 0 and = 0 (or = ∅ and = ∅), then
≤(2ψ_0L^2()^2 + ϑ^-1/2L^2()^2 )^1/2.
Existence and uniqueness of the discrete solution ∈() of the variational formulation (<ref>), and the quasi-optimality bound (<ref>) follow directly from Propositions <ref>–<ref>, the consistency of the variational formulation (<ref>) and Lax–Milgram theorem.
The continuous dependence on the data (<ref>) follows from Proposition <ref>, and the fact that if = 0 and = 0 (or = ∅ and = ∅), the term w on the right-hand side of (<ref>) can be replaced by w.
Theorem <ref> implies that it is possible to obtain error estimates in the mesh-dependent norm · by studying the best approximation in () of the exact solution in the · norm. Moreover, according to Proposition <ref> below, a priori error estimates can be deduced from the local approximation properties of the space () only, as the · norm can be bounded in terms of volume Sobolev seminorms and norms.
So far, we have not imposed any restriction on the space–time mesh . Henceforth, in our analysis we assume:
* Uniform star-shapedness:
There exists 0 < ρ≤1/2 such that, each element K ∈ is star-shaped with respect to the ball B:= B_ρ(_K, s_K) centered at (_K, s_K) ∈ K and
with radius ρ.
* Local quasi-uniformity in space: there exists a number 𝗅𝗊𝗎()>0 such that h_^1≤ h_^2 𝗅𝗊𝗎() for all K^1 = ^1 ×^1 ,K^2 = ^2 ×^2∈ such that K^1∩ K^2
has positive d-dimensional measure.
The proof of Proposition <ref> is a direct consequence of a collection of trace inequalities
(see <cit.> and <cit.>), which
in our space–time setting can be written
for any element K = ×∈
as
φL^2(×∂)^2
≤
C_( ^-1φL^2(K)^2 + ∂_t φL^2(K) ^2) ∀φ∈1; L^2(),
φL^2(∂×)^2
≤ C_(^-1φL^2(K)^2 + φL^2(K)^d^2 )
∀φ∈ L^2(; 1),
φL^2(∂×)^d^2
≤ C_(^-1φL^2(K)^d^2 + D_^2 φL^2(K)^d× d^2 )
∀φ∈ L^2(; 2),
where D_^2 φ is the spatial Hessian of φ, and C_≥1
only depends on the star-shapedness parameter ρ.
Fix δ = min(ϑ, 1/2), and assume that V ∈ L^∞(K), ∀ K ∈. For all φ∈(),
the following bound holds
φ^2 ≤3/2 C_∑_K = ×∈[
^-1φL^2(K)^2 + ∂_t φL^2(K)^2 + a_K^2 ^-1φL^2(K)^2
+ ( a^2_K +b_K^2 ^-1)
φL^2(K)^d^2
+ b_K^2 D_^2 φL^2(K)^d× d^2 + μ^1/2∂_t φL^2(K)^2
+ μ^1/2φL^2(K)^2 + VL^∞(K)^2 μ^1/2φL^2(K)^2 + μ^-1/2φL^2(K)^2],
where
a_K^2 := max{∂ K ∩(∪)α, (∂ K ∩ (∪)β)^-1, ∂ K ∩ ϑ},
b_K^2 := max{(∂ K ∩(∪)α)^-1, ∂ K ∩ (∪)β, }.
The factor 3/2 C_ appearing in the bound of Proposition <ref> is due to the
integral terms with arguments
1/2αw_^2, 1/2β^-1w^2 on in the definition
(<ref>) of the · norm.
The volume term μ^1/2
w
L^2(K)^2 is controlled by the inequality _1 ≤√(n)_2, ∀∈^n.
It is well known that the Schrödinger equation (<ref>) with
homogeneous Dirichlet and/or Neumann boundary conditions and Γ_R = ∅
preserves the energy (or probability) functional ℰ(t;
ψ) := 1/2∫_Ω |ψ(𝐱, t)|^2 d𝐱, i.e. d/dt ℰ(t; ψ) = 0.
The proposed DG method is dissipative, but the energy loss can be quantified in terms of the local least-squares error, the initial condition error, the jumps of the solution on the mesh skeleton, and the error on ∪ due to the weak imposition of the boundary conditions. More precisely, for =0, = 0 and = ∅, the discrete solution to (<ref>) satisfies
(0;ψ_0) - (T; ) = _loss : = δ_ + 1/2ψ_0 -
^2,
where
δ_ : = ∑_K ∈μ^1/2
L^2(K)^2 + 1/2_tL^2()^2 +
1/2(α^1/2L^2()^2
+ β^1/2L^2()^2 + α^1/2_L^2()^d^2 + β^1/2_L^2()^2 ).
This follows from the definition of the · norm of the solution , the coercivity of the sesquilinear form · ·, the
definition of the antilinear functional ℓ(·) and simple algebraic manipulations; see <cit.>.
§ DISCRETE SPACES AND ERROR ESTIMATES
In this section we prove a priori h-convergence estimates on the · norm of the error for some discrete polynomial spaces.
In particular, for each element K ∈, we consider two different polynomial spaces: the space ^p(K) of polynomials of degree p on K, and a quasi-Trefftz subspace pK⊂^p(K) with much smaller dimension, i.e., (pK) ≪(^p(K)) (see Proposition <ref> below).
We denote the local dimensions n_d+1,p := (pK) and r_d+1,p := (^p(K)) in dependence of the space dimension d of the problem and the polynomial degree p, but independent of the element K.
For simplicity, we only describe the case where the same polynomial degree is chosen in every element; the general case can easily be studied.
§.§ Multi-index notation and preliminary results
We use the standard multi-index notation for partial derivatives and monomials, adapted to the space–time setting: for 𝐣 =
(𝐣_𝐱, j_t) = (j_x_1, …, j_x_d, j_t)∈ℕ_0^d+1,
𝐣! := j_x_1!⋯ j_x_d! j_t!,
|𝐣| := |𝐣_𝐱| + j_t := j_x_1 + ⋯ + j_x_d + j_t,
D^𝐣 f := ∂^j_x_1_x_1⋯∂^j_x_d_x_d∂^j_t_t f,
𝐱^𝐣_𝐱 t^j_t := x_1^j_x_1⋯ x_d^j_x_d t^j_t.
We also recall the definition and approximation properties of multivariate Taylor polynomials, which constitute the basis of our error analysis.
On an open and bounded set Υ⊂^d+1,
the Taylor polynomial of order m∈ (and degree m - 1), centered at (,s)∈Υ,
of a function φ∈m - 1Υ is defined as
(,s)mφ(, t) := ∑_ < m1/!φ(, s)( - )^ (t - s)^j_t.
If φ∈mΥ and the segment
[(,s),(,t)]⊂Υ,
the Lagrange's form of the Taylor remainder (see <cit.>) is bounded as follows:
φ(,t) - (,s)mφ(,t) ≤φ_mΥ∑_ = m1/!|( -
)^ (t - s)^j_t|≤(d + 1)^m/2/m!
h_Υ^mφ_mΥ,
where h_Υ is the diameter of Υ.
In particular, if Υ is star-shaped with respect to (, s),
then the following estimate is obtained
φ(,t) - (,s)mφ(,t)L^2(Υ)≤(d + 1)^m/2Υ^1/2/m!
h_Υ^mφ_mΥ,
which, together with the well-known identity (see <cit.>)
D^(,s)mφ =
(,s)m-||D^φ, ||<m,
gives the
estimate
φ - (, s)mφ_rΥ≤d + rd^1/2(d+1)^m - r/2Υ^1/2/(m - r)! h_Υ^m - rφ_mΥ r < m, ∀φ∈mΥ.
The Bramble–Hilbert lemma provides an estimate for the error of the
averaged Taylor polynomial, see <cit.>
and <cit.>.
Let Υ⊂^d + 1, 1 ≤
d ∈, be an open and bounded set with diameter h_Υ, star-shaped with
respect to the ball B:= B_ρ h_Υ(, s) centered at (, s) ∈Υ and
with radius ρ h_Υ, for some 0 < ρ≤1/2. If φ∈mΥ, the averaged Taylor polynomial of order m (and degree m
- 1) defined as
mφ(, t) :=
1/B∫_B(,s)mφ(, t) (,s),
satisfies the following error bound for all s < m
φ - mφ_H^s(Υ)≤
C_d,m, ρ
h_Υ^m-sφ_mΥ≤ 2 d + sd (d + 1)^m - s/(m - s-1)!h_Υ^m - s/ρ^d+1/2φ_mΥ.
A sharp bound on C_d,m,ρ>0 is given in <cit.> in dependence of d, s, m and ρ, and the second bound is proven in <cit.>.
§.§ Full polynomial space
In next theorem, we derive a priori error estimates for the DG formulation (<ref>) for the space of elementwise polynomials
() = ∏_K ∈^p(K).
Let p∈, fix δ as in Proposition <ref> and assume that V ∈ L^∞(). Let ψ∈()
∩p + 1 be the exact solution of (<ref>) and ∈() be the solution to the variational formulation (<ref>) with () given by (<ref>).
Set the volume penalty function and the stabilization functions as
min{^2, ^2}≤μ|_K ≤max{^2, ^2},
α|_F = 1/h_F_ ∀ F ⊂∪,
β|_F = h_F_ ∀ F ⊂∪,
where
h_F_=h_ if F⊂∩(∪),
min{h_^1,h_^2}≤ h_F_≤max{h_^1,h_^2} if F=K^1 ∩ K^2 ⊂,
then the following estimate holds
ψ - ≤ 3√(6 C_)ρ^-p + 1/2(d+1)^p + 1/p!∑_K=×∈[ ^-1/2^p + 1
+ p ^1/2^p + 𝗅𝗊𝗎() (
^-1^p+1 + 2 p ^p + (p - 1)p/2(d + 2/d + 1) ^p - 1)
+
p max{, }^p +
(p - 1)p/2(d + 2/d + 1)max{, }^p - 1
+ VL^∞(K)max{, }^p + 1 +
min{^-1, ^-1}^p + 1] ψ_p+1K.
Moreover, if ≃ for all K ∈, there exists a positive constant C independent of the element sizes ,,
but depending on the degree p, the L^∞() norm of V, the trace inequality constant C_ in (<ref>), the local quasi-uniformity parameter 𝗅𝗊𝗎() and the star-shapedness parameter ρ such that
ψ - ≤ C ∑_K ∈^p ψ_p+1K.
The proof follows from the choice of the volume penalty function μ and the stabilization functions α, β, the quasi-optimality bound (<ref>), Proposition <ref>, the inequality √(_1)≤∑_i = 1^N √(v_i) ∀∈^N, the fact that p+1ψ_|_K∈(K) for all elements K∈, and the Bramble-Hilbert lemma <ref>.
§.§ Quasi-Trefftz spaces
We now introduce a polynomial quasi-Trefftz space. Let p ∈ℕ and assume that V∈𝒞^p-2(K). For each K∈𝒯_h we define the following local polynomial quasi-Trefftz space:
pK := {q_p ∈^p(K) : D^𝐣(𝒮 q_p)(𝐱_K, t_K) = 0, ∀ |𝐣| ≤ p - 2
},
for some point (𝐱_K, t_K) in K. We consider the following global discrete space
() = ∏_K ∈pK.
For all 𝐣 = (𝐣_𝐱, j_t) ∈ℕ_0^d + 1, if V ∈𝒞^|𝐣|(K) and f ∈𝒞^|𝐣| + 2(K), then by the multi-index Leibniz product rule for multivariate functions we have
D^𝐣(𝒮 f)(𝐱_K, t_K) = i D^(𝐣_𝐱, j_t + 1) f(𝐱_K, t_K) + 1/2∑_ℓ = 1^d D^(𝐣_𝐱 + 2𝐞_ℓ, j_t) f(𝐱_K, t_K)
- ∑_𝐳≤𝐣𝐣!/(𝐳! (𝐣 - 𝐳)!) D^𝐣 - 𝐳 V(𝐱_K, t_K) D^𝐳 f(𝐱_K, t_K),
where {𝐞_ℓ}_ℓ = 1^d ⊂ℕ^d is the canonical basis,
and 𝐳≤𝐣⇔ z_x_i≤ j_x_i (1 ≤ i ≤ d) and z_t ≤ j_t.
The next proposition is the key ingredient to prove optimal convergence rates in Theorem <ref> for the DG method (<ref>) when () is chosen as the quasi-Trefftz polynomial space defined in (<ref>).
Let p ∈ℕ and K ∈𝒯_h. Assume that V ∈𝒞^max{p - 2,0}(K) and that ψ∈𝒞^p(K) satisfies 𝒮ψ = 0 in K; then the Taylor polynomial (_K, t_K)p+1ψ∈pK.
By the definition of the Taylor polynomial, (_K, t_K)p+1ψ∈^p(K).
Therefore, it only remains to show that D^𝐣(𝒮 (_K, t_K)p+1ψ)(𝐱_K, t_K) = 0 for all |𝐣| ≤ p - 2.
Taking f = (_K, t_K)p+1ψ in (<ref>), all the derivatives of (_K, t_K)p+1ψ at (𝐱_K, t_K) that appear in (<ref>) are at most of total order |𝐣| + 2 ≤ p, so
they coincide with the corresponding derivatives of ψ.
Furthermore, since 𝒮ψ = 0, then
D^𝐣(𝒮 (_K, t_K)p+1ψ)(𝐱_K, t_K) = D^𝐣(𝒮ψ)(𝐱_K, t_K) = 0,
which completes the proof.
Proposition <ref> allows for the use of the Taylor error bound (<ref>) in the analysis of the quasi-Trefftz DG scheme.
Let p∈, fix δ as in Proposition <ref> and assume that V ∈ L^∞()∩max{p - 2, 0}. Let ψ∈()
∩p + 1 be the exact solution of (<ref>) and ∈() be the solution to the variational formulation (<ref>) with () given by (<ref>).
Set the volume penalty function μ and the stabilization functions α,β as in Theorem <ref>.
Then, the following estimate holds
ψ - ≤3/2√(6C_)^1/2(d+1)^p + 1/2/(p + 1)!∑_K=×∈[^-1/2^p + 1 + (p + 1) ^1/2^p
+ 𝗅𝗊𝗎() (
^-1^p+1 + 2(p + 1) ^p + p(p+1) (d+2/2(d+1))^1/2^p - 1)
+
(p + 1) max{, }^p
+ p(p + 1) (d+2/2(d+1))^1/2max{, }^p - 1
+ V0Kmax{, }^p + 1 +
min{^-1, ^-1}^p + 1] ψ_p+1K.
Moreover, if ≃ for all K ∈, there exists a positive constant C independent of the mesh size h, but depending on the degree p, the L^∞() norm of V, the trace inequality constant C_ in (<ref>), the local quasi-uniformity parameter 𝗅𝗊𝗎() and the measure of the space–time domain such that
ψ - ≤ C ∑_K ∈^pψ_p+1K.
The proof follows from the choice of the volume penalty function μ and the stabilization functions α, β, the quasi-optimality bound (<ref>), bound (<ref>), the inequality √(_1)≤∑_i = 1^N √(v_i) ∀∈^N, Proposition <ref>, and the estimate (<ref>).
The a priori error estimate in Theorem <ref> requires stronger regularity assumptions on ψ than Theorem <ref> (namely ψ∈p+1 instead of ψ∈ H^p+1())
due to the fact that pK is tailored to contain the Taylor polynomial (_K, t_K)p + 1ψ, but in general it does not contain the averaged Taylor polynomial p + 1ψ.
Optimal h-convergence estimates can also be derived for non-polynomial spaces, by requiring the local space (K) to contain an element whose Taylor polynomial coincide with that of the exact solution. This is the approach in <cit.> for the Trefftz space of complex exponential wave functions for the Schrödinger equation with piecewise-constant potential.
§.§.§ Basis functions and dimension
So far, we have not specified the dimension and a basis for the space pK, which is the aim of this section.
Recalling that r_d, p = (^p(^d))=p+dd, let {m_α}_α = 1^r_d, p and {m_β}_β = 1^r_d, p- 1 be bases of _p(^d) and _p-1(^d), respectively. We define
n_d+1, p := r_d,p+r_d,p-1
= p + dd + p + d - 1d = (p + d - 1)! (2p + d)/d! p!,
and the following n_d+1, p elements of pK
{b_J ∈pK :
b_J(_K^(1), ·) = m_J and ∂_x_1 b_J(_K^(1), ·) = 0 if J ≤ r_d, p
b_J(_K^(1), ·) = 0 and ∂_x_1 b_J(_K^(1), ·) = m_J-r_d, p if r_d, p < J ≤ n_d+1,p
},
where g(_K^(1), ·) denotes the restriction of g: K → to x_1 = _K^(1), where _K^(1) is the first component of _K ∈^d.
Any element q_p ∈pK can be expressed in the scaled monomial basis as
q_p(, t) = ∑_≤ p C_( - _K/h_K)^(t - t_K/h_K)^j_t,
for some complex coefficients {C_}_≤ p.
By the conditions q_p (_K, t_K) = 0 for all ≤ p -2, in the definition of pK, we have the following relations between the coefficients
i/h_K (j_t + 1) C_, j_t + 1 + 1/2h_K^2∑_ℓ = 1^d ( + 1)( + 2) C_ + 2e_ℓ, j_t^J - ∑_≤h_K^ - /( - )! - V(_K, t_K) C_^J = 0,
which can be rewritten as
C_ + 2e_1, j_t = 1/( + 1)( + 2)( - 2ih_K (j_t + 1) C_, j_t + 1^J
- ∑_ℓ = 2^d ( + 1)( + 2) C_ + 2e_ℓ, j_t^J + 2∑_≤h_K^ - + 2/( - )! - V(_K, t_K) C_^J).
The conditions imposed in (<ref>) on the restriction of b_J to x_1 = _K^(1) fix the coefficients of their expansion for all with j_x_1∈{0, 1}.
In Figures <ref> and <ref>, we illustrate how the
coefficients that are not immediately determined by the conditions in (<ref>)
(i.e., those for _x_1≥ 2) are uniquely defined and can be computed for the (1+1)- and (2+1)-dimensional cases using the recurrence relation (<ref>).
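Equivalently, the quasi-Trefftz conditions can be imposed directly through a symbolic linear solve; the following (1+1)-dimensional SymPy sketch is only an illustration of the construction (it assumes the scaled-monomial expansion above and the operator 𝒮q = i ∂_t q + (1/2) ∂_x^2 q - Vq), not the implementation used for the experiments.

import sympy as sp

x, t = sp.symbols('x t')

def quasi_trefftz_coeffs(p, V, xK, tK, hK, given):
    # Complete the scaled-monomial coefficients C[(jx, jt)] of a degree-p polynomial so that
    # every derivative of S q = i*q_t + q_xx/2 - V*q of total order <= p-2 vanishes at (xK, tK).
    # `given` prescribes the coefficients with jx in {0, 1}; the remaining ones are solved for.
    idx = [(jx, jt) for jx in range(p + 1) for jt in range(p + 1 - jx)]
    C = {j: sp.sympify(given[j]) if j in given else sp.Symbol(f'C_{j[0]}_{j[1]}') for j in idx}
    q = sum(C[j] * ((x - xK) / hK) ** j[0] * ((t - tK) / hK) ** j[1] for j in idx)
    Sq = sp.I * sp.diff(q, t) + sp.diff(q, x, 2) / 2 - V * q
    def deriv(expr, jx, jt):
        for _ in range(jx):
            expr = sp.diff(expr, x)
        for _ in range(jt):
            expr = sp.diff(expr, t)
        return expr
    eqs = [deriv(Sq, jx, jt).subs({x: xK, t: tK})
           for jx in range(p - 1) for jt in range(p - 1 - jx)]
    unknowns = [C[j] for j in idx if j[0] >= 2]
    sol = sp.solve(eqs, unknowns, dict=True)[0]
    return {j: sp.expand(C[j].subs(sol)) for j in idx}

# example: quasi-Trefftz completion of the constant 1 for p = 3, V = x**2/2, (xK, tK) = (0, 0), hK = 1/10
given = {(jx, jt): (1 if (jx, jt) == (0, 0) else 0) for jx in range(2) for jt in range(3 + 1 - jx)}
coeffs = quasi_trefftz_coeffs(3, x ** 2 / 2, 0, 0, sp.Rational(1, 10), given)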
The set of functions {b_J}_J = 1^n_d + 1,p defined in (<ref>) are a basis for the space pK. Therefore,
(pK)
= n_d+1, p = (p + d - 1)! (2p + d)/d! p!
= 𝒪_p→∞(p^d) ≪(^p(K))
=d + 1+ pd + 1 = 𝒪_p→∞(p^d+1).
We first observe that the set of polynomials {b_J}_J = 1^n_d+1,p is linearly independent due to their restrictions to x_1 = _K^(1).
On the other hand, the relations (<ref>), imply that q_p is uniquely determined by its restriction q_p(_K^(1), ·) and the restriction of its derivative ∂_x_1 q_p(_K^(1), ·).
In addition, there exist some complex coefficients {λ_s}_s = 1^n_d + 1, p such that
q_p(_K^(1), ·)
= ∑_s = 1^r_d, pλ_s m_s(·)
= ∑_s = 1^ r_d, pλ_s b_s(_K^(1), ·),
∂_x_1 q_p(_K^(1), ·)
= ∑_s = r_d, p + 1^n_d + 1, pλ_sm_s - r_d, p(·)
= ∑_s = r_d,p+1^n_d + 1, pλ_s∂_x_1 b_s(_K^(1), ·),
whence
q_p = ∑_s = 1^n_d + 1, pλ_s b_s,
which completes the proof.
The definition of the basis functions b_J in (<ref>) can be modified by fixing the restriction of b_J and its partial derivative ∂_x_ℓ b_J to x_ℓ = _K^(ℓ) for any 1 ≤ℓ≤ d. However, it is not possible to assign the values for a given time t=t_K, as the order of the time derivative appearing in the Schrödinger equation is lower than the order of the space derivatives.
How this affects the basis construction is visible from Figure <ref>:
the coefficients (the colored dots) can be computed sequentially when all the other coefficients of a relation (the Y-shaped stencil) are known, so it is possible to reach all dots moving left to right, but not moving bottom to top.
Imposing the values at a given time is possible for the wave equation, as it is done in <cit.>, precisely because in that case time and space derivatives have the same order.
The space pK does not reduce to a Trefftz space for the case of constant potential V. Nonetheless, the pure Trefftz space p(K) defined as
p(K) = {q_p ∈^p(K) : 𝒮 q_p = 0},
does not possess strong enough approximation properties to guarantee optimal h-convergence. In particular, it does not contain the Taylor polynomial of all local solutions to the Schrödinger equation; for d = 1, p = 1 and V = 0, p(K) = span{1, x}; however, ψ(x, t) = exp(x + i/2 t) satisfies 𝒮ψ = 0, and (0, 0)p+1ψ = 1 + x + i/2 t ∉p(K).
As seen in Proposition <ref>, the quasi-Trefftz polynomial space has considerably lower dimension than the full polynomial space of the same degree.
This “dimension reduction” is common to all Trefftz and quasi-Trefftz schemes.
In particular, the dimension n_d + 1,p of pK is equal to the dimension of the space of harmonic polynomials of degree ≤ p in ℝ^d+1, the Trefftz space of complex exponential wave functions for the Schrödinger equation with piecewise-constant potential in <cit.>,
the Trefftz and quasi-Trefftz polynomial space for the wave equation in <cit.> and <cit.>.
§ NUMERICAL EXPERIMENTS
In this section we validate the theoretical results regarding the h-convergence of the proposed method, and numerically assess some additional features such as p-convergence and conditioning.
We list some aspects regarding our numerical experiments
* We use Cartesian-product space–time meshes with uniform partitions along each direction, which are a particular case of the situation described in Remark <ref>.
* We choose (_K, t_K) in the definition of the quasi-Trefftz space pK in (<ref>) as the center of the element K.
* In all the experiments we consider Dirichlet boundary conditions.
* The linear systems are solved using Matlab's backslash command.
* The quasi-Trefftz basis functions {b_J}_J = 1^n_d + 1,p are constructed by choosing m_J and m_J in (<ref>) as scaled monomials and by computing the remaining coefficients C_𝐣 with the relations (<ref>).
* In the h-convergence plots, the numbers in the yellow rectangles are the empirical algebraic convergence rates for the quasi-Trefftz version (continuous lines). The dashed lines correspond to the errors obtained for the full polynomial space.
§.§ (1+1)-dimensional test cases
We first focus on the (1+1)-dimensional case, for which families of explicit solutions are available for some well-known potentials V.
§.§.§ h-convergence
In order to validate the error estimates in Theorems <ref> and <ref>, we consider a series of problems with different potentials V.
No significant difference in terms of accuracy between the quasi-Trefftz and the full polynomial versions of the method with the same polynomial degree p (corresponding to different numbers of DOFs n_d + 1,p and r_d+1,p, respectively) is observed in all the experiments.
Harmonic oscillator potential (V(x) = ω^2 x^2/2)
For this potential, the Schrödinger equation (<ref>) models the situation of a quantum harmonic oscillator for an angular frequency ω > 0. On = (-3, 3) × (0, 1), we consider the following well-known family of solutions (see e.g., <cit.>)
ψ_n(x, t) = 1/√(2^n n!)(ω/π)^1/4ℋ_n(√(ω) x) exp(-1/2(ω x^2 + (2n + 1)iω t))
n∈ℕ,
where ℋ_n(·) denotes the n-th physicist's Hermite polynomials as defined in <cit.>.
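For reference, the closed form above can be evaluated directly, e.g. to generate the exact-solution samples used in the error computation; a small sketch (not tied to the paper's code) is:

import numpy as np
from math import factorial
from scipy.special import eval_hermite

def psi_exact(n, omega, x, t):
    # harmonic-oscillator eigenstate from the closed form above (physicists' Hermite polynomials)
    norm = (omega / np.pi) ** 0.25 / np.sqrt(2.0 ** n * factorial(n))
    return norm * eval_hermite(n, np.sqrt(omega) * x) \
                * np.exp(-0.5 * (omega * x ** 2 + (2 * n + 1) * 1j * omega * t))

x_grid = np.linspace(-3.0, 3.0, 201)
psi = psi_exact(2, 10.0, x_grid, 1.0)   # the case omega = 10, n = 2 of this experiment, at t = 1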
In Figure <ref>, we present the errors obtained for ω = 10, n = 2 and a sequence of Cartesian meshes with uniform partitions and h_x = h_t = 0.05 × 2^-i, i = 0, … 4. Rates of convergence of order h^p in the DG norm are observed, as predicted by the error estimate in Theorem <ref>.
A convergence of at least order h^p+1 is observed for the L^2-error at the final time, which is faster (by a factor h) than the order that can be deduced from the estimates in Theorems <ref> and <ref>.
Due to the fast decay of the exact solution close to the boundary (see Figure <ref>(panel a), the energy is expected to be preserved. In Figure <ref>, we show the evolution of the energy error, and the convergence of the energy loss ℰ_loss to zero for the quasi-Trefftz version.
In the latter, rates of order h^2p are observed, which follows from Remark <ref> and the error estimates in Theorems <ref> and <ref>.
Reflectionless potential (V(x) = -a^2sech^2(ax))
This potential was studied in <cit.> as an example of a reflectionless potential. On the space–time domain = (-5, 5) × (0, 1), we consider the Schrödinger equation with exact solution (see <cit.>)
ψ(x, t) = (√(2)i - a tanh(ax)/√(2)i + a) exp(i(√(2)x - t)).
In Figure <ref>, we show the errors obtained for a sequence of meshes with h_x = 2h_t = 0.2 × 2^-i, i = 0, …, 4, and a = 1. As in the previous experiment, rates of convergence of order h^p and h^p + 1 are observed in the DG norm and the L^2 norm at the final time, respectively. The real part of the exact solution is depicted in Figure <ref> (panel b).
Morse potential (V(x) = D (1 - e^-α x)^2)
This potential was introduced by Morse in <cit.> to obtain a quantum-mechanical energy level spectrum of a vibrating, non-rotating diatomic molecule. There, the following family of solutions was presented (see also <cit.>)
ψ_λ, n(x, t) = N(λ, n) ξ(x)^λ - n - 1/2𝕃_n^(2λ - 2n - 1)(ξ(x))
×exp(-ξ(x)/2 - it⌊(n + 1/2) - 1/2 λ (n + 1/2)^2 ⌋ω_o),
where ⌊·⌋ is the floor function, n = 0, …, ⌊λ - 1/2 ⌋, 𝕃_n^(α) denote the general associated Laguerre polynomials as defined in <cit.>
and
N(λ, n) = ⌊(2λ - 2n - 1) Γ(n + 1)/Γ(2λ - n)⌋^1/2, λ = √(2D)/α, ξ(x) = 2λexp(-α x), ω_o = √(2 D)α.
In Figure <ref>, we show the errors obtained for the Morse potential problem with D = 8, α = 4 and exact solution ψ_1, 1 on the space–time domain = (-0.5, 1.5) × (0, 1) for a sequence of meshes with h_x = h_t = 0.1 × 2^-i, i = 0, …, 4. The observed rates of convergence are in agreement with those obtained in the previous experiments. The real part of the exact solution is depicted in Figure <ref> (panel c).
Square-well potential
We now consider a problem taken from <cit.>, where the exact solution is not arbitrarily smooth.
On the space–time domain = (-√(2), √(2)) × (0, 1), we consider the Schrödinger equation with
homogeneous Dirichlet boundary conditions and the following square-well potential
V(x) = {0 x∈ (-1, 1),
V_* x ∈ (-√(2), √(2)) ∖ (-1, 1),.
for some fixed V_* > 0.
The initial condition is taken as an eigenfunction (bound state) of
-1/2∂_x^2+V on (-√(2), √(2)):
ψ_0(x) = {cos(k_*√(2) x) x ∈ (-1, 1),
cos(k_*)/sinh(√(V_* - k_*^2))sinh(√(V_* - k_*^2)(2 - √(2) |x|)) x ∈ (-√(2), √(2)) ∖ (-1, 1),
.
where k_* is a real root of the function f( k) := √(V_* - k^2) -
ktan( k) tanh(√(V_* - k^2) ).
The solution of the corresponding initial boundary value problem
(<ref>) is ψ(x, t) = ψ_0(x)exp(-ik^2 t) and belongs to the space ∞I;1Ω\∞I;2Ω.
Among the finite set of values k_* for a given V_*, in this experiment we take the largest one, corresponding to faster oscillations in space and time.
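The root k_* can be computed with any scalar root finder; a minimal check (the bracket [3.7, 3.8] is an assumption chosen around the reported value) is:

import numpy as np
from scipy.optimize import brentq

V_star = 20.0
f = lambda k: np.sqrt(V_star - k ** 2) - k * np.tan(k) * np.tanh(np.sqrt(V_star - k ** 2))
k_star = brentq(f, 3.7, 3.8)   # bracket around the largest root for V_* = 20
print(k_star)                  # approximately 3.73188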
In Figure <ref>, we show the errors obtained for V_* = 20 (k_* ≈ 3.73188) and a sequence of meshes with h_t = √(2) h_x = 0.1 × 2^-i, i = 0, …, 4. Optimal convergence is observed for the errors in both norms of the quasi-Trefftz version of the method.
§.§.§ Effect of stabilization and volume penalty terms
In this experiment we are interested in the effect of neglecting some of the terms in the variational formulation (<ref>). To do so, we consider the (1 + 1)-dimensional quantum harmonic oscillator problem with exact solution (<ref>).
In Tables <ref>–<ref> (quasi-Trefftz space) and <ref>–<ref> (full polynomial space) we present the errors in the DG-norm obtained for the same sequence of meshes and approximation degrees as in the previous section, for different combinations of the stabilization terms α, β and the volume penalty parameter μ.
Although the proof of well-posedness of the method (<ref>) relies on the assumption that α, β and μ are strictly positive, in our numerical experiments, the matrices of the arising linear systems are non-singular and optimal convergence rates are observed even when all these parameters are set to zero. Moreover, the errors obtained when α = 0 or β = 0 are smaller as some terms in the definition (<ref>) of · vanish, while the presence of μ seems to have just a mild effect in the results. Not shown here, similar effects were observed for the error in the L^2()-norm.
§.§.§ p-Convergence
We now study numerically the p-convergence of the method, i.e., for a fixed space–time mesh , we study the errors when increasing the polynomial degree p. We consider the (1+1)-dimensional problems above with the same parameters and the coarsest meshes for each case. In Figure <ref>, we compare the errors obtained for the method with the two choices for the discrete space () analyzed in the previous sections: the full polynomial space (<ref>) and the quasi-Trefftz polynomial space (<ref>). As expected, for the quasi-Trefftz version we observe exponential decay of the error of order e^-bN_dofs, where N_dofs denotes the total number of degrees of freedom. As for the full polynomial space, only root-exponential convergence e^-c√(N_dofs) is expected. The superiority of the quasi-Trefftz version is evident in all cases.
Exponential convergence of space–time Trefftz and quasi-Trefftz schemes has been observed in several cases <cit.> but no proof is available yet (differently from the stationary case, <cit.>).
§.§.§ Conditioning
We now assess the conditioning of the stiffness matrix.
In Figure <ref> we compare the 2-condition number κ_2(·) for the stiffness matrix 𝐊_n defined in Remark <ref>, for the free particle problem V = 0 on the space–time domain = (0, 1) × (0, 1).
We consider the proposed polynomial quasi-Trefftz space in (<ref>), the full-polynomial space in (<ref>) and the pure-Trefftz space of complex exponential wave functions p() proposed in <cit.>. A basis {ϕ_ℓ}_ℓ = 1^2p + 1⊂p() was defined in <cit.> as
ϕ_ℓ(x, t) = exp(i (κ_ℓ x - κ_ℓ^2/2 t)), ℓ = 1, …, 2p + 1.
We consider two choices for the parameters κ_ℓ: the arbitrary choice used in <cit.> κ_ℓ = -p, …, p, and the choice κ_ℓ = 2πℓ/h_x which makes the basis orthogonal in each element.
The condition number κ_2(𝐊) for the quasi-Trefftz space, the full polynomial space, and the Trefftz space with an orthogonal basis asymptotically grows as h^-1 for all p ∈ℕ, while for the Trefftz space with a non-orthogonal basis it asymptotically grows as h^-(2p + 1).
Unfortunately, with higher dimensions and non-Cartesian elements, choosing the parameters and directions defining the basis functions {ϕ_ℓ} so as to obtain an orthogonal basis is more challenging.
§.§ (2+1)-dimensional test cases
We now present some numerical test for space dimension d = 2. We recall that we use Cartesian space–time meshes with uniform partitions along each direction.
§.§.§ h-convergence
Singular time-independent potential (V(x, y) = 1 - 1/x^2 - 1/y^2)
We consider the (2 + 1)-dimensional problem on = (0, 1)^2 × (0, 1) with exact solution (see <cit.>)
ψ(x, y, t) = x^2 y^2 e^it.
In Figure <ref>, we show the errors obtained for a sequence of meshes with h_x = h_y = h_t = 0.1, 0.0667, 0.05, 0.04 and different degrees of approximation p. As in the numerical results for the (1+1)-dimensional problems, we obtain rates of convergence of order h^p in the DG norm, and h^p+1 in the L^2 norm at the final time.
Time-dependent potential (V(x, y, t) = 2tanh^2(√(2) x) - 4(t - 1/2)^3 + 2tanh^2(√(2)y) - 2)
We now consider a manufactured problem with a time-dependent potential (see <cit.>). On the space–time domain = (0, 1)^2 × (0, 1) the exact solution is
ψ(x, y, t) = i e^i(t - 1/2)^4sech(x) sech(y).
In Figure <ref> we show the errors obtained for the sequence of meshes from the previous experiment, and optimal convergence is observed in both norms.
§.§.§ p-convergence
In Figure <ref> we show the results obtained for the p-version of the method applied to the (2+1)-dimensional problems above, on the coarsest mesh.
As expected, for the (2+1)-dimensional case, the error of the quasi-Trefftz version decays root-exponentially as e^-b√(N_dofs).
§ CONCLUDING REMARKS
We have introduced a space–time ultra-weak discontinuous Galerkin discretization for the linear Schrödinger equation with variable potential.
The DG method is well-posed and quasi-optimal
in mesh-dependent norms
for any space dimension d∈, and for very general prismatic meshes and discrete spaces.
We proved optimal h-convergence of order h^p, in such a mesh-dependent norm, for two choices of the discrete spaces: the space of piecewise polynomials, and a novel quasi-Trefftz polynomial space with much smaller dimension.
When the space–time mesh has a time-slab structure, the method allows for the decomposition of the resulting global linear system into a sequence of smaller problems on each time-slab: this is equivalent to an implicit time-stepping, possibly with local refinement in space–time.
We present several numerical experiments that validate the accuracy of the method for different potentials and high-order approximations.
Banjai_Georgoulis_Lijoka_2017
L. Banjai, E. Georgoulis, and O. Lijoka.
A Trefftz polynomial space-time discontinuous Galerkin method for
the second order wave equation.
SIAM J. Num. Anal., 55(1):63–86, 2017.
Born_Oppenheimer_2000
M. Born and R. Oppenheimer.
On the quantum theory of molecules.
In Quantum Chemistry: Classic Scientific Papers, pages 1–24.
World Scientific, 2000.
Brenner_Scott_2007
S. Brenner and R. Scott.
The mathematical theory of finite element methods, volume 15.
Springer Science & Business Media, 2007.
Callahan_2010
J. Callahan.
Advanced calculus: a geometric view.
Springer Science & Business Media, 2010.
Crandall_Litt_1983
R.E. Crandall and B.R. Litt.
Reassembly and time advance in reflectionless scattering.
Annals of Physics, 146(2):458–469, 1983.
Dahl_Springborg_1988
J.P. Dahl and M. Springborg.
The Morse oscillator in position space, momentum space, and phase
space.
The Journal of chemical physics, 88(7):4535–4547, 1988.
Dehghan_Shokri_2007
M. Dehghan and A. Shokri.
A numerical method for two-dimensional Schrödinger equation
using collocation and radial basis functions.
Comp. & Math. with Appl., 54(1):136–146, 2007.
Demkowicz_ETAL_2017
L. Demkowicz, J. Gopalakrishnan, S. Nagaraj, and P. Sepulveda.
A spacetime DPG method for the Schrodinger equation.
SIAM J. Num. Anal., 55(4):1740–1759, 2017.
Duran_1983
R. Durán.
On polynomial approximation in Sobolev spaces.
SIAM J. Num. Anal., 20(5):985–988, 1983.
Egger_Kretzchmar_Schnepp_Weiland_2015
H. Egger, F. Kretzschmar, S.M. Schnepp, and T. Weiland.
A space-time discontinuous Galerkin Trefftz method for time
dependent Maxwell's equations.
SIAM J. Sci. Comput., 37(5):B689–B711, 2015.
Gomez_Moiola_2022
S. Gómez and A. Moiola.
A space-time Trefftz discontinuous Galerkin method for the linear
Schrödinger equation.
SIAM J. Num. Anal., 60(2):688–714, 2022.
Griffiths_1995
D.J. Griffiths.
Introduction to Quantum Mechanics.
Prentice-Hall, New York, 1995.
Hain_Urban_2022
S. Hain and K. Urban.
An ultra-weak space-time variational formulation for the
Schrödinger equation.
arXiv:2212.14398, 2022.
Hiptmair_Moiola_Perugia_2013
R. Hiptmair, A. Moiola, and I. Perugia.
Error analysis of Trefftz-discontinuous Galerkin methods for the
time-harmonic Maxwell equations.
Math. Comp., 82(281):247–268, 2013.
Hiptmair_Moiola_Perugia_2016
R. Hiptmair, A. Moiola, and I. Perugia.
A survey of Trefftz methods for the Helmholtz equation.
In Building bridges: connections and challenges in modern
approaches to numerical partial differential equations, pages 237–279.
Springer, 2016.
ImbertGerard_Desperes_2014
L.-M. Imbert-Gérard and B. Després.
A generalized plane-wave numerical method for smooth nonconstant
coefficients.
IMA J. Num. Anal., 34(3):1072–1103, 2014.
ImbertGerard_Moiola_Stocker
L.-M. Imbert-Gérard, A. Moiola, and P. Stocker.
A space–time quasi-Trefftz DG method for the wave equation with
piecewise-smooth coefficients.
Math. Comput., 92(341):1211–1249, 2023.
ImbertGerard_Monk_2017
L.-M. Imbert-Gérard and P. Monk.
Numerical simulation of wave propagation in inhomogeneous media using
generalized plane waves.
ESAIM Math. Model. Numer. Anal., 51(4):1387–1406, 2017.
Karakashian_Makridakis_1998
O. Karakashian and C. Makridakis.
A space-time finite element method for the nonlinear
Schrödinger equation: the discontinuous Galerkin method.
Math. Comp., 67(222):479–499, 1998.
Karakashian_Makridakis_1999
O. Karakashian and C. Makridakis.
A space-time finite element method for the nonlinear
Schrödinger equation: the continuous Galerkin method.
SIAM J. Num. Anal., 36(6):1779–1807, 1999.
Keller_Papadakis_1977
J. Keller and J. Papadakis.
Wave propagation and underwater acoustics.
Springer, 1977.
Lehrenfeld_Stocker_2022
C. Lehrenfeld and P. Stocker.
Embedded Trefftz discontinuous Galerkin methods.
International Journal for Numerical Methods in Engineering,
2023.
https://doi.org/10.1002/nme.7258.
Levy_2000
M. Levy.
Parabolic equation methods for electromagnetic wave
propagation.
Number 45. IET, 2000.
Lifshitz_Landau_1965
E. Lifshitz and L. Landau.
Quantum Mechanics; Non-relativistic Theory.
Pergamon Press, 1965.
Moiola_Perugia_2018
A. Moiola and I. Perugia.
A space–time Trefftz discontinuous Galerkin method for the
acoustic wave equation in first-order formulation.
Numer. Math., 138(2):389–435, 2018.
Morse_1929
P. Morse.
Diatomic molecules according to the wave mechanics. II.
Vibrational levels.
Physical review, 34(1):57, 1929.
DLFM_2010
F. Olver, Daniel W. Lozier, R. F Boisvert, and C. Clark.
NIST handbook of mathematical functions.
Cambridge university press, 2010.
Perugia_Schoeberl_Stocker_Wintersteiger_2020
I. Perugia, J. Schöberl, P. Stocker, and C. Wintersteiger.
Tent pitching and Trefftz-DG method for the acoustic wave
equation.
Comput. Math. Appl., 79(10):2987–3000, 2020.
Qin05
Q.-H. Qin.
Trefftz finite element method and its applications.
Appl. Mech. Rev., 58(5):316–337, 2005.
Steinbach:2015
O. Steinbach.
Space-time finite element methods for parabolic problems.
Comput. Methods Appl. Math., 15(4):551–566, 2015.
Subacsi_2002
M. Subaşi.
On the finite-differences schemes for the numerical solution of two
dimensional Schrödinger equation.
Numer. Meth. for PDE: An International Journal, 18(6):752–758,
2002.
| http://arxiv.org/abs/2306.07752v1 | 20230613130948 | The Haldane Model with Chiral Edge States using a Synthetic Dimension | ["Joel Priestley", "Gerard Valentí-Rojas", "Patrik Öhberg"] | cond-mat.quant-gas | ["cond-mat.quant-gas", "quant-ph"] | |
http://arxiv.org/abs/2306.04574v2 | 20230607162244 | The Effect of Length on Key Fingerprint Verification Security and Usability | ["Dan Turner", "Siamak F. Shahandashti", "Helen Petrie"] | cs.CR | ["cs.CR", "cs.HC", "68M25", "C.2.0; H.1.2"] |
The Effect of Length on Key Fingerprint Verification Security and Usability

Dan Turner, Siamak F. Shahandashti, and Helen Petrie
University of York, UK
============================================================================
In applications such as end-to-end encrypted instant messaging, secure email, and device pairing, users need to compare key fingerprints to detect impersonation and adversary-in-the-middle attacks.
Key fingerprints are usually computed as truncated hashes of each party's view of the channel keys, encoded as an alphanumeric or numeric string, and compared out-of-band, e.g. manually, to detect any inconsistencies.
Previous work has extensively studied the usability of various verification strategies and encoding formats; however, the exact effect of key fingerprint length on the security and usability of key fingerprint verification has not been rigorously investigated.
We present a 162-participant study on the effect of numeric key fingerprint length on comparison time and error rate.
While the results confirm some widely-held intuitions such as general comparison times and errors increasing significantly with length, a closer look reveals interesting nuances.
The significant rise in comparison time only occurs when highly similar fingerprints are compared, and comparison time remains relatively constant otherwise.
On errors, our results clearly distinguish between security non-critical errors that remain low irrespective of length and security critical errors that significantly rise, especially at higher fingerprint lengths.
A noteworthy implication of this latter result is that key fingerprints provide a considerably lower level of security than usually assumed.
§ INTRODUCTION
Authentic keys are required for secure communication. Devices negotiate these keys using a key exchange protocol, or use a public key that purportedly belongs to the other party.
These keys may be authenticated using authenticated key exchange protocols such as password-based authenticated key exchange, or by verifying public key certificates.
Such authentication is only possible when there is an existing shared security context between parties such as shared passwords or public key infrastructure (PKI).
In the absence of authentication, adversaries may carry out adversary-in-the-middle (AitM, traditionally known as man-in-the-middle) or impersonation attacks to compromise security.
Digital devices have become ubiquitous, and hence there is a growing need for establishing ad hoc secure communication channels between devices, i.e. securely pairing devices, without a shared security context.
Although impersonation and AitM attacks cannot be prevented, system designers can build in measures to restrict or detect such attacks.
As an example of the restriction approach, distance bounding protocols in contactless payment systems limit the distance between the payment card or device and the point of sale terminal to minimise the possibility of AitM attacks <cit.>, such as the so-called Mafia Fraud.
One of the most common methods to detect impersonation or AitM attacks is through an out-of-band channel.
System designers assume that users have access to a separate secure communication channel with low bandwidth.
The key observation is that the keys held by the communicating parties will differ when there is an impersonation or AitM attack, and will be identical in the absence of such attacks.
The out-of-band channel is used to detect differences between the keys the two sides hold after the key exchange.
Since the out-of-band channel is low bandwidth, devices usually apply a hash function to the keys and truncate the result to derive a short digest, which we call a key fingerprint.
Comparing the short fingerprints through an out-of-band channel would provide the confidence in keys being identical bar any hash collisions.
Various formats for key fingerprints have been considered.
OpenPGP, designed for email encryption, encodes public-key fingerprints as hexadecimal strings. The user then manually compares these against a trusted copy of the key fingerprint, e.g., on a business card.
The ZRTP protocol for secure VoIP uses a Short Authentication String (SAS), which is a fingerprint of the key negotiated using Diffie–Hellman key exchange.
The Silent Phone app shows the SAS as two words for users to verbally check.
Loud and Clear, a device pairing method, creates a short sentence from the key fingerprint and speaks it aloud using a text-to-speech engine.
The user checks it against a sentence shown on the other device <cit.>.
Alphanumeric fingerprints are one of the most widely used as they are generally considered comparatively more usable.
The most widely deployed text-based format is likely to be numeric, thanks to WhatsApp's 2016 rollout of the Signal protocol for end-to-end encryption.
Signal and WhatsApp use a string of 60 digits arranged in 12 chunks of 5 as shown in Figure <ref>.
This is called a safety number in Signal and a security code in WhatsApp.
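To make the format concrete, the following Python sketch derives one 30-digit fingerprint half from a public identity key by iterated hashing, truncation, and grouping into 5-digit chunks; the full 60-digit code would be the concatenation of both parties' halves. The choice of SHA-512, the input serialization, and the bytes-to-digits mapping here are illustrative assumptions rather than the normative Signal/WhatsApp derivation; only the overall pattern (hash, iterate, truncate, chunk) follows the description in this paper.

import hashlib

def fingerprint_half(identity_key: bytes, stable_id: bytes,
                     iterations: int = 5200, chunks: int = 6) -> str:
    """One 30-digit fingerprint half (illustrative sketch, not a normative spec)."""
    digest = identity_key + stable_id
    for _ in range(iterations):                  # iterated hashing (see the adversarial model section)
        digest = hashlib.sha512(digest + identity_key).digest()
    groups = []
    for i in range(chunks):
        block = digest[5 * i: 5 * (i + 1)]       # 5 bytes per displayed 5-digit chunk
        groups.append(f"{int.from_bytes(block, 'big') % 100000:05d}")
    return " ".join(groups)

print(fingerprint_half(b"\x05" + b"A" * 32, b"+441234567890"))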
The key fingerprint length is usually set based on the security level required for the application.
Signal and WhatsApp use 60-digit fingerprints since they need to provide long-term security against adversaries without any location restriction.
However, a 4-digit fingerprint may be sufficient to safely pair two smart home devices if keys are freshly generated for one-time use and the communication protocol is short range.
There have been multiple studies on the usability of various key fingerprint formats and their susceptibility to error in the literature.
However, apparently no study has investigated the effect of key fingerprint length.
Intuitively, one expects that users can compare shorter key fingerprints more quickly and with fewer errors, but the veracity of this intuition does not seem to have been empirically tested yet.
Such a rigorous study is also needed to clarify the parameters of the apparent trade-off between security and usability for a range of fingerprint lengths and provide crucial empirical evidence for designers when deciding on the specifications for key fingerprint verification methods.
In this work, we contribute to the understanding of the effect of key fingerprint length on the usability and security of manual key fingerprint verification.
We focus on numeric key fingerprints because of their comparative usability, and specifically consider the Signal/WhatsApp format, as it is widely deployed.
We present the results of a study in which participants were asked to compare Signal/WhatsApp-like key fingerprints of three different lengths.
We measured how the key fingerprint length affects comparison time and accuracy.
Analyses of our results provide evidence in support of a number of points that so far have been poorly understood in the literature.
Namely, the results show that comparison time only changes significantly when fingerprint pairs of high similarity are being compared, but otherwise stays relatively constant.
Furthermore, we present strong evidence showing that the security non-critical error rate remains fairly low even for long fingerprints, whereas the security critical error rate grows significantly at higher lengths.
One of the main implications of these results is that key fingerprints provide considerably lower levels of security than intended.
Paper outline:
Section <ref> summarises the related work, Section <ref> outlines our research questions and study design, Section <ref> discusses the results, and Section <ref> draws conclusions from the study.
§ RELATED WORK
There has been no previous study considering length as an independent variable.
Therefore, in this section we provide a brief overview of the main results on manual fingerprint verification to set out the context in which our work is conducted.
Various formats for key fingerprints have been proposed in the literature or deployed in practice for manual comparison.
Examples include
hexadecimal e.g. GnuPG <cit.>,
numeric e.g. Signal, WhatsApp, and SafeSlinger <cit.>,
words (and pseudo-words) e.g. Bubble Babble encoding <cit.>,
sentences e.g. pseudo-random poems <cit.>,
graphical e.g. abstract art <cit.>, ASCII art <cit.>, snowflakes <cit.>, and unicorns <cit.>, and
auditory e.g. Loud and Clear <cit.>.
Several teams investigated the comparative usability of various key representations including Kainda et al. <cit.>, Dechand et al. <cit.>, and Tan et al. <cit.>.
These studies broadly found that alphanumeric and numeric representations offer better perceived usability, comparison speed, and accuracy.
The considered numeric fingerprint lengths in these studies were 6, 34, and 48 digits, respectively.
The usability and security of key fingerprint verification for end-to-end encrypted instant messaging apps have been the subject of studies by Herzberg et al. <cit.>, Schröder et al. <cit.>, and Shirvanian et al. <cit.>.
Evidence presented in these works unanimously points towards high error rates and low perceived usability of manual verification.
More recently, Livsey et al. studied word-based manual fingerprint verification when compared visually or verbally and found that visual comparisons are more effective against security non-critical errors <cit.>.
Considering the entire authentication ceremony in these apps, Vaziripour et al. found low usability, including low completion rates <cit.>.
Follow-up studies showed rephrasing the task and redesigning the user interface is effective in helping users understand and perform the ceremony correctly <cit.>.
Evidence of low prevalence of manual verification has been reported in the literature.
For instance, in an attempt to study whether users verify SSH key fingerprints, Gutmann approached two large organisations with several thousand computer-literate users, and found that staff were unable to recall a single case, or locate any records, or any user ever verifying any SSH server key out-of-band <cit.>.
Device pairing methods are related to manual fingerprint verification and have been studied for their comparative usability and security, notably by Kobsa et al. <cit.> and Uzun et al. <cit.>.
Comparing numeric fingerprints has consistently been found to be perceived as more usable, provide better speed, and lead to fewer security critical errors compared to other methods in these studies.
A pertinent research question here concerns the most effective adversarial strategy in crafting a similar fingerprint that would pass less attentive human verification.
Cherubini et al. provide eye-tracking evidence that attention to compared strings is highest at the beginning of the string and decreases as progress is made towards the end <cit.>.
Furthermore, several works have hypothesised that human attention is heavily biased towards the beginning and end of the compared sequences <cit.>.
§ STUDY DESIGN
We consider the numeric key fingerprint format because of its comparatively higher usability and its wide deployment.
As shown in Figure <ref>, these fingerprints are represented in three lines, each containing four 5-digit chunks, in their full format.
To study the effect of length, we consider three length conditions:
* 1 Line (1L): a fingerprint includes four 5-digit chunks in 1 line, corresponding to 1 line out of 3 of the full format,
* 2 Lines (2L): a fingerprint includes eight 5-digit chunks in 2 lines, corresponding to 2 lines out of 3 of the full format, and
* 3 Lines (3L): a fingerprint includes twelve 5-digit chunks in 3 lines, corresponding to the full format.
To minimise the effect of inconsistent formats, we opted for a between-participants design with respect to length conditions, i.e. each participant will be randomly assigned to one condition and all the fingerprints they compare will be of the same length according to the condition they are assigned.
Compared key fingerprint pairs can be either matching or non-matching. An adversary may trade off attack success probability with computation and be happy with a nearly matching fingerprint that may fool a proportion of users. To be able to investigate the interplay of the effect of each of these possibilities with that of fingerprint length, we consider three comparison types:
* Safe: a comparison between a pair of fully matching (i.e. identical) fingerprints,
* Adversarial (Adv.): a comparison between a pair of nearly matching fingerprints with only 1 chunk being different, and
* Random (Rand.): a comparison between a pair of randomly selected (and hence highly dissimilar) fingerprints.
The above types represent scenarios where a user encounters an authentic key, an adversarially crafted one in case of an attack, or an erroneous key, respectively.
To closely follow what would happen in practice where the same user may compare safe, adversarial, or random fingerprints, we opted for a within-participants design with respect to comparison types, i.e. each participant will carry out comparisons of all types.
It is expected that in practice users will be comparing safe fingerprints most of the time and the occurrence of attack scenarios will be limited to rare occasions.
Hence, a realistic study should contain as few adversarial pairs as possible.
At the same time, gathering sufficient data to compute reliable security-critical error rates requires as many adversarial pairs as possible.
We decided to strike a balance between these two competing goals by designing the study to show 12 safe, 4 adversarial, and 4 random key fingerprint pairs to each participant.
Dechand et al. follow a similar principle <cit.>.
The 20 key pairs are shown to the participant in a random order different for each participant to counterbalance the possible effects of habituation and fatigue.
We emphasise that the scenario we consider is manual fingerprint verification carried out individually.
This is also the approach taken by Kainda et al. <cit.>, Dechand et al. <cit.>, and Tan et al. <cit.>.
Automated verification, such as scanning the QR code provided by the app using a smartphone camera, and collaborative verification, i.e. two users carrying out the comparison together, are both outside the scope of our study.
§.§ Adversarial Model
We consider adversaries that are able to intercept initial key exchange messages between user devices and replace them with adversarially chosen ones.
However, the adversary does not have the ability to modify messages on the out-of-band channel, i.e. the channel through which key fingerprints are compared and verified.
The goal of the adversary is to impersonate one or both of the entities, corresponding to impersonation or AitM attacks, respectively.
These capabilities allow the adversary to replace a user's authentic keys with their own which would result in key fingerprints being computed on different keys.
Specifically for the Signal/WhatsApp key fingerprint format, we allow adversaries to create key fingerprints that match all but one of the key fingerprint chunks.
This is to keep the level of similarity high between adversarial pairs.
The Signal/WhatsApp fingerprint is made of two halves, each a 30-digit fingerprint of the so-called identity key of one of the two parties <cit.>.
From each party's viewpoint, the adversary may only compromise one of these two halves since each party knows the authentic version of their own key.
Hence, we did not allow adversarial digits to cross the midpoint boundary and restricted the adversary to manipulating digits only in the second half of the fingerprint.
The chunk not targeted for collision by the adversary was designated to be the one just after the key fingerprint midpoint.
This is to maximise the likelihood that it would be overlooked since previous works suggest that users pay less attention to the middle sections of the compared fingerprints <cit.>.
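The construction of such an adversarial stimulus can be sketched mechanically: keep every chunk identical except the one immediately after the midpoint, and replace that chunk with a random non-matching one. The Python helper below is a hypothetical reconstruction of this stimulus generation, not the authors' actual code; in particular, it fabricates the differing chunk directly rather than performing any hash near-collision search.

import random

def random_chunks(n_chunks: int) -> list[str]:
    return [f"{random.randrange(100000):05d}" for _ in range(n_chunks)]

def adversarial_pair(n_chunks: int) -> tuple[list[str], list[str]]:
    """Pair of fingerprints differing only in the chunk just after the midpoint."""
    genuine = random_chunks(n_chunks)
    shown = list(genuine)
    target = n_chunks // 2                        # first chunk of the second half
    while shown[target] == genuine[target]:       # force a non-matching chunk
        shown[target] = f"{random.randrange(100000):05d}"
    return genuine, shown

def render(chunks: list[str], per_line: int = 4) -> str:
    return "\n".join(" ".join(chunks[i:i + per_line]) for i in range(0, len(chunks), per_line))

genuine, shown = adversarial_pair(n_chunks=8)     # 2-line (40-digit) condition
print(render(genuine), "---", render(shown), sep="\n")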
Requiring all but one of the chunks to be identical in adversarial fingerprints corresponds to adversarial powers outlined in Table <ref> under no iteration for each condition.
For instance, for our 2 Lines condition, there are eight 5-digit chunks, four of which are computed from the key provided by the adversary.
The adversary needs three out of these four chunks to be identical to those of the fingerprint half being impersonated, i.e. it needs a 3-chunk, i.e. 15-digit, collision.
This is equivalent to finding a second preimage for a hash function with an output length of approximately 49.8 bits, since 10^15≈ 2^49.8.
Testing every preimage can be seen as a Bernoulli trial and hence the success probability of such an attack with respect to number of computed hashes follows the cumulative distribution function of a geometric distribution.
It follows that the number of hashes the adversary needs to compute to reach a 50% success rate in the attack is approximately 0.69 × 2^49.8.
Despite this, the attack is said to require 2^49.8 adversarial power by convention.
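For reference, the success-probability curve behind these figures can be written out explicitly (this is merely a restatement of the Bernoulli-trial argument above): the probability of success within n computed hashes is 1 - (1 - p)^n with p = 10^-15 ≈ 2^-49.8, so the number of hashes needed to reach a success probability α is n_α = ln(1/(1-α)) / (-ln(1-p)) ≈ ln(1/(1-α))/p; in particular, n_0.5 ≈ ln 2 / p ≈ 0.69 × 2^49.8 and n_0.4 ≈ 0.51/p.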
Modern applications use iterated hashing for fingerprint calculation to increase the computational cost for adversaries while keeping the cost of hashing for legitimate users within affordable bounds.
For instance, WhatsApp and Signal iterate the hash 5200 times to compute each fingerprint half.
If such a design is used, the incurred computational cost of attacks will be about 5200 ≈ 2^12.3 times higher than the base case where no iteration is used.
The required adversarial powers, if 5200 iterations are used, are listed in Table <ref> under with iteration.
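The kind of figures listed in Table <ref> can be reproduced with a few lines of arithmetic. The Python sketch below follows the construction described above, in which the adversary must collide all but one chunk of the fingerprint half it controls (1, 3, or 5 chunks for the 1L, 2L, and 3L conditions); since the table itself is not reproduced here, the printed values should be read as an illustration of the calculation rather than a quotation of the table.

import math

ITERATIONS = 5200                       # hash iterations per fingerprint half
CHUNKS_PER_LINE = 4                     # 5-digit chunks per displayed line

for lines in (1, 2, 3):                 # the 1L, 2L and 3L conditions
    chunks_in_half = lines * CHUNKS_PER_LINE // 2
    colliding_chunks = chunks_in_half - 1                # all but one chunk must match
    digits = 5 * colliding_chunks
    bits_plain = digits * math.log2(10)                  # e.g. 15 digits -> ~2^49.8
    bits_iter = bits_plain + math.log2(ITERATIONS)       # 5200 =~ 2^12.3 extra
    print(f"{lines}L: {digits:2d}-digit collision -> "
          f"2^{bits_plain:.1f} (no iteration), 2^{bits_iter:.1f} (with iteration)")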
We have opted for variable adversarial power to mirror the fact that shorter fingerprints are only appropriate for safer environments, for instance use cases where adversaries are restricted in time or location. An adversary with high power would be able to easily compute keys that lead to full fingerprint collisions for shorter fingerprint lengths which would not allow us to see the effect of similar but not identical fingerprints on user performance.
§.§ Research Questions
The overall aim of our study is to investigate whether the length and similarity of key fingerprint have significant effects on a person's performance when comparing key fingerprints.
We focus on user performance in the comparison task, as measured by effectiveness and efficiency.
Perceived usability would be more appropriate for the overall confirmation ceremony and we do not consider it here.
Accordingly, we developed three sets of hypotheses as follows.
Considering the speed with which participants can compare pairs of key fingerprints as a measure of efficiency, we tested the following high-level hypothesis H_1^t ℓ on comparison time t with respect to fingerprint length ℓ, with the corresponding null hypothesis H_0^t ℓ defined as its negation:
H_1^t ℓ: Participants take longer time to compare longer numeric key fingerprints than shorter ones.
Since we are studying different comparison types, H_1^t ℓ gives rise to three type-specific hypotheses for safe, adversarial, and random comparisons.
Considering safe, adversarial, and random fingerprint pairs as pairs with maximum, high, and low similarity, we tested the following high-level hypothesis H_1^t s on comparison time t with respect to fingerprint similarity s, or equivalently comparison type, with the corresponding null hypothesis H_0^t s defined as its negation:
H_1^t s: Participants take longer time to compare numeric key fingerprint pairs with higher similarity.
Similarly, H_1^t s is tested at three different fingerprint lengths, giving rise to three length-specific hypotheses.
Considering the accuracy with which participants can compare pairs of key fingerprints as a measure of effectiveness, we tested the following high-level hypothesis H_1^e ℓ on error rate e with respect to fingerprint length ℓ, with the corresponding null hypothesis H_0^e ℓ defined as its negation:
H_1^e ℓ: Participants make more mistakes when comparing longer numeric key fingerprints than shorter ones.
Here, depending on the comparison type we consider, we have two types of errors:
* False Acceptance Errors occur when non-matching fingerprints are incorrectly accepted as matching, and
* False Rejection Errors occur when matching fingerprints are incorrectly rejected as non-matching.
Consequently, we test two error-type-specific hypotheses, i.e. H_1^e ℓ for false acceptance and false rejection errors.
Note that the security implications of the two types of error can be considerably different.
False acceptance errors, especially on adversarial fingerprints, would be security-critical as they would allow an attack to go unnoticed.
However, false rejection errors would only cause inconvenience.
It is clear that fingerprint length and comparison type are the independent variables, and comparison time, false acceptance and false rejection error rates are the dependent variables in this study.
§.§ Ethical Considerations
The ethical principles of avoidance of harm, informed consent, and data protection were followed throughout the design, data collection, and analysis phases of our study.
No actual communication channels were attacked.
Participants were asked for their consent after providing an information sheet at the start of the study.
The participants could withdraw at any time for any reason.
The information sheet explained the study and that participation was voluntary, and provided the contact details of the investigators.
No personally identifiable information were collected from participants.
Only general demographic data were collected to give contextual information. These were age range, gender, highest education level, and presence of a disability.
A pilot study was used to estimate the time taken to complete the study, based on which we calculated the amount to pay participants in the main study.
We used the living wage for London and New York to ensure that all participants got fair pay for their time.
Participants who withdrew were still paid for their time.
The University of York's Physical Sciences Ethics Committee approved this work before we collected any data.
§.§ Pilot Study
First, we ran a pilot study to find any issues in the study design.
We recruited participants locally by offering entry into a raffle for a £25 (GBP) Amazon gift card.
We advertised the pilot study to friends and family on Facebook.
Participants publicly discussed the pilot study on Facebook.
We did not intend this, but it gave us useful insights into how the participants were approaching the pilot study.
Although we did not aim for many piloting participants, we recruited 60 participants, of whom we excluded 17 for being inattentive as they indicated that at least one of the random fingerprint pairs matched.
We asked each participant to compare 20 pairs of fingerprints, some identical and some different.
We made several modifications to our study based on the pilot study feedback as explained below.
In each individual task, we asked each participant to compare a pair of fingerprints.
Our question caused confusion for some of our participants, so we reworded the question from "Are Alice and Bob's messages safe?" to "Do the numbers match?" to make it clearer.
While the original question works well for those familiar with the purpose of key fingerprint verification, it requires a level of knowledge not generally expected of non-experts.
Some participants were unsure how to proceed, so we added more guidance.
This was especially important as at least one pilot study participant commented that it "took me waaay [sic] too long to work out it was essentially a compare-these-numbers exercise".
We showed participants an extra screen before they started which explained the task and showed them where to find the key fingerprint on the screen.
Besides, we added a counter to each page, so that progress through the study was clear to the participants.
§.§ Main Study
In the main study, each participant was randomly assigned one of the three fingerprint lengths, i.e. 1, 2, or 3 lines, and asked to compare 20 different key fingerprint pairs of the same length, comprising of 12 safe, 4 adversarial, and 4 random pairs in a randomised order.
The browser window for each fingerprint pair comparison included two simulated phone screens side-by-side and asked participants "Do the numbers match?" with the response options "Yes, they match" and "No, they don't match", as in Figure <ref>.
Random key fingerprint pairs were used as attention checkers.
Participants who got any of the attention checkers wrong were excluded from our analysis, but were still compensated for their time.
Participants were recruited through MTurk.
We did not restrict which MTurk users could accept the task, other than stopping those who had already done the study.
Each participant was paid $2 (USD) for their time.
All of the guidance was written in English, so all participants needed a sufficient level of English reading comprehension to understand the tasks.
Since the included participants all passed the attention checkers we assume this to be the case.
Before starting the tasks, the participants read the information sheet and consented to take part in the study.
§.§ Technical Implementation
We built the experiment on Amazon Web Services (AWS) using Python and TypeScript.
We used AWS Lambda to host the back-end, stored the data encrypted in AWS DynamoDB, and fronted the site with a static site stored in AWS S3 and distributed through AWS CloudFront.
We exposed the Lambda API using AWS API Gateway, which offers TLS by default, so all the participants' data was encrypted in transit.
§.§ Study Participants
A total of 186 participants were recruited.
2 were excluded from our analysis for failing to complete the study and another 22 for failing the attention checkers.
In all the following analyses, we report the results for the remaining 162 participants.
Table <ref> shows self-reported participant demographics.
As the table shows, large proportions of our participants declared being male, young, educated, and not disabled.
However, we had an even split between conditions:
53, 55, and 54 participants were assigned to the 1L, 2L, and 3L conditions, respectively.
§ RESULTS
In this section we give an overview of the collected data and the results of testing the hypotheses stated in Section <ref>, using the common α=0.05 significance level throughout.
We first tested for any significant demographic difference between groups of users in the three conditions.
Fisher's exact test found no significant difference in the reported gender, educational level, disability, or age between the three groups. The p-values were 0.56, 0.75, 0.91, and 0.28 respectively.
§.§ Comparison Time
We calculated each participant's median comparison times for each of the three comparison types: safe, adversarial, and random comparisons.
The distribution parameters of participant median comparison times by comparison type and condition are detailed in Table <ref> and depicted in Figure <ref>.
As expected, median comparison times for all nine combinations (3 conditions × 3 types) have skewed distributions with long tails.
Shapiro-Wilk tests of normality were significant in all cases except for 1-line adversarial comparisons (1L safe p<0.001, adv. p=0.071, rand. p=0.004, 2L safe p=0.001, adv. p=0.007, rand. p<0.001, 3L safe p=0.016, adv. p=0.028, rand. p<0.001) indicating that 8 out of 9 of the median time distributions are significantly non-normal.
Hence, non-parametric tests were used for analysis.
For analysing change with fingerprint length, we have independent samples and hence Kruskal–Wallis test was used, whereas for analysing change with comparison type, we have related measures and hence Friedman test was appropriate.
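A minimal sketch of this test selection in Python (scipy) is given below; the variable names, the hand-rolled Holm step, and the use of the Mann-Whitney U test as the two-sample Wilcoxon rank-sum test are illustrative assumptions about the pipeline, not the authors' analysis code.

from scipy import stats

def holm(pvals):
    """Holm step-down correction for a list of raw p-values."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    adjusted, running = [0.0] * len(pvals), 0.0
    for rank, i in enumerate(order):
        running = max(running, (len(pvals) - rank) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted

def analyse_lengths(med_1l, med_2l, med_3l):
    """Independent groups (between-participants): Kruskal-Wallis plus pairwise rank-sum."""
    print("Shapiro-Wilk p:", [stats.shapiro(g).pvalue for g in (med_1l, med_2l, med_3l)])
    print("Kruskal-Wallis:", stats.kruskal(med_1l, med_2l, med_3l))
    pairs = [(med_1l, med_2l), (med_2l, med_3l), (med_1l, med_3l)]
    # Wilcoxon rank-sum test for independent samples == Mann-Whitney U
    print("pairwise, Holm-adjusted:", holm([stats.mannwhitneyu(a, b).pvalue for a, b in pairs]))

def analyse_types(safe, adv, rand):
    """Related measures (within-participants): Friedman test."""
    print("Friedman:", stats.friedmanchisquare(safe, adv, rand))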
§.§.§ Change with Fingerprint Length.
For safe fingerprint comparisons, Kruskal–Wallis test found statistically significant differences between median comparison times for fingerprints of various lengths (χ^2(2)=13.3, p=0.001).
The effect size was moderate (η^2[H]=0.071).
Pairwise Wilcoxon test between groups with Holm correction found significant differences between all conditions (1L–2L: W=1112, p=0.049, 2L–3L: W=1114, p=0.049, 1L–3L: W=905, p=0.003).
For adversarial comparisons, Kruskal–Wallis test found statistically significant differences between median comparison times for fingerprints of various lengths (χ^2(2)=22.3, p < 0.001).
The effect size was moderate (η^2[H]=0.128).
Pairwise Wilcoxon test between groups with Holm correction showed that only the differences between 3-line comparisons and the other two groups were significant (1L–2L: W=1277, p=0.269, 2L–3L: W=889, p<0.001, 1L–3L: W=728, p<0.001).
For random comparisons, Kruskal–Wallis test did not find statistically significant differences between median comparison times for fingerprints of various lengths (χ^2(2)=3.68, p=0.159).
The analysis above shows that although we can reject H_0^t ℓ for safe comparisons and for adversarial comparisons at higher fingerprint lengths, namely for 2L–3L and 1L–3L comparisons, the same cannot be done for random comparisons.
This means that our a priori expectation of median comparison time increasing with fingerprint length only holds when similarity between compared fingerprints is high (e.g. in the case of safe pairs that are identical), but as the differences between compared fingerprints grow larger (e.g. in random pairs) the differences between comparison times for various lengths become insignificant to the point that median comparison times stays approximately constant for 1-line, 2-line, and 3-line random fingerprints.
§.§.§ Change with Comparison Type.
For 1-line comparisons, Friedman test found statistically significant differences between the distributions of median times for safe, adversarial, and random comparisons (χ^2(2)=77.43, p < 0.001).
The effect size was large (Kendall W=0.73).
Nemenyi post hoc test indicated significant differences between median time distributions for all three pairs of comparison types (safe–adv.: p=0.001, adv.–rand.: p<0.001, safe–rand.: p<0.001).
For 2-line comparisons, Friedman test found statistically significant differences between the distributions of median times for safe, adversarial, and random comparisons (χ^2(2)=52.51, p < 0.001).
The effect size was moderate (Kendall W=0.48).
Nemenyi post hoc test indicated significant differences between median time distributions for all three pairs of comparison types (safe–adv.: p < 0.001, adv.–rand.: p < 0.001, safe–rand.: p < 0.001).
For 3-line comparisons, Friedman test found statistically significant differences between the distributions of median times for safe, adversarial, and random comparisons (χ^2(2)=62.11, p < 0.001).
The effect size was large (Kendall W=0.58).
Nemenyi post hoc test indicated significant differences between median time distributions for all three pairs of comparison types (safe–adv.: p = 0.011, adv.–rand.: p < 0.001, safe–rand.: p < 0.001).
The analysis above shows that for all three different lengths of fingerprints we considered, our participants compare random pairs of fingerprints significantly more quickly than adversarial pairs, and adversarial pairs significantly more quickly than safe pairs.
Therefore, we emphatically reject H_0^t s for all fingerprint lengths.
In other words, the more the differences between the compared fingerprints, the less amount of time it takes on average to compare them and decide whether they are identical or not.
This observation, coupled with the similar observations in Section <ref>, provide considerable evidence supporting the fact that users employ a short-circuit evaluation like strategy for comparing fingerprints, i.e. as soon as a difference is observed a decision is made and the rest of the comparison is abandoned.
§.§ Error Rates
In this section we bring the results and analyses of the effect of length on false acceptance and rejection errors. Note that participants who made any errors in comparing random fingerprints were excluded from our study as inattentive participants and hence all attentive participants we consider have correctly identified such fingerprints as non-matching. Consequently, we do not consider random fingerprints in our analysis in this section. We are testing for change with fingerprint length for both error types, hence Kruskal–Wallis was deemed appropriate.
§.§.§ False Acceptance Errors.
Each participant in our study carried out 4 adversarial comparisons.
Table <ref> lists the number and proportion of participants by number of false acceptance errors they made for different lengths of fingerprints.
The proportion of participants making no false acceptance error decreases from 72% for 1-line key fingerprints to 55% for 2-line fingerprints and eventually to the very low figure of 39% for the 3-line fingerprints used by Signal and WhatsApp.
On the other hand, while only 6% of the participants did not manage to spot any of the adversarial comparisons for 1-line fingerprints, this figure rose to 22% for 2-line fingerprints, and eventually to 31% for 3-line fingerprints.
Kruskal–Wallis test indicated significant differences between the number of false acceptance errors made by participants for different key fingerprint lengths (χ^2(2)=15.03, p<0.001).
The effect size was moderate (η^2[H]=0.082).
Pairwise comparisons using Wilcoxon rank sum test with Holm correction indicated significant differences only between 1-line and 3-line conditions (1L–2L: p=0.051, 2L–3L: p=0.102, 1L–3L: p<0.001).
Therefore we can reject H_0^e ℓ for false acceptance errors for larger differences between fingerprint lengths.
In other words, we find evidence that indicates false acceptance errors significantly increase when the length of the key fingerprint significantly increases.
To distill these figures, we can compute overall average false acceptance error rates by looking at the number of such errors made over all comparisons across all participants. Since all participants make the same number of adversarial comparisons, this would be equivalent to first computing an average error rate for each participant and then averaging over all participants. Number of false acceptance errors for all participants and their respective rates, including 95% confidence intervals, are listed in Table <ref>. As the figures suggest, an adversary mounting an attack against random users is expected to have an estimated 13.2% success rate for 1-line, 30.9% for 2-line, and 43.5% for 3-line fingerprints.
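Rates and intervals of this kind can be recomputed from raw counts. The Python sketch below uses a Wilson score interval as one common choice, since the paper does not state which interval method was used; the error counts are only inferred from the reported rates and group sizes and are therefore approximate.

from math import sqrt

def wilson_ci(errors: int, trials: int, z: float = 1.96):
    """Point estimate and Wilson score 95% confidence interval for a proportion."""
    p = errors / trials
    centre = (p + z * z / (2 * trials)) / (1 + z * z / trials)
    half = (z / (1 + z * z / trials)) * sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return p, centre - half, centre + half

# Each participant made 4 adversarial comparisons; the error counts below are
# inferred from the reported rates (13.2%, 30.9%, 43.5%) and group sizes (53, 55, 54).
for label, errors, participants in (("1L", 28, 53), ("2L", 68, 55), ("3L", 94, 54)):
    rate, low, high = wilson_ci(errors, 4 * participants)
    print(f"{label}: {rate:.1%} (95% CI {low:.1%} to {high:.1%})")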
§.§.§ False Rejection Errors.
In our study, each participant compared 12 safe (i.e. identical) key fingerprints.
The number and proportion of participants by number of false rejection errors they made for different fingerprint lengths are shown in Table <ref>. No participant made 7 or more errors, and for each of the categories of 2 to 6 errors there was at most 1 participant who made that number of errors. We therefore compressed the table for those categories.
As the table shows, the proportion of participants making no false rejection errors steadily decreases from 92% for 1-line fingerprints to 85% for 2-line fingerprints and eventually to 80% for 3-line fingerprints. However, the overwhelming majority of participants make no more than 1 error for all fingerprint lengths.
Kruskal–Wallis test did not find significant differences between the number of false rejection errors made by participants for different fingerprint lengths (χ^2(2)=3.39, p=0.184). This shows that although the number of false rejection errors increases with fingerprint length, this increase is not statistically significant for the range of fingerprint lengths we considered, and hence we cannot reject H_0^e ℓ for false rejection rates.
We can again look at the global false rejection error rates over all participants as indicators of the rates with which safe comparisons might be erroneously rejected in general for different fingerprint lengths. These rates are listed in Table <ref> and show that false rejection errors are rare, with upper confidence limits of less than 5% for all fingerprint lengths.
Besides, there does not seem to be a considerable change in error rates as fingerprints get longer, especially at higher lengths.
§.§ Comparison with Previous Work
To put our results in context, in this section we list the comparison times and error rates reported in previous studies on numeric fingerprint verification alongside our results.
These measurements are not directly comparable per se, since they are collected under different conditions.
Nevertheless, we believe this comparison helps situate our results in the wider context.
Results show a gradual increase of comparison time with fingerprint length as expected.
Kainda et al. reported a median of 5 seconds and a mean of 6 seconds for comparing 6-digit numeric fingerprints <cit.>.
Other notable results are a median of 9.5 seconds for 34-digit fingerprints reported by Dechand et al. <cit.> and a median of 8.1 seconds for 48-digit fingerprints by Tan et al. <cit.>.
These works all only report overall results and do not give a breakdown of the results by type, i.e. safe, random, and adversarial comparisons.
The overall medians in our study can be computed as 5.1, 6.3, and 8.0 seconds for 20, 40, and 60-digit fingerprints (i.e. for 1, 2, and 3-line fingerprints), and the respective means as 6.1, 7.4, and 10.0 seconds.
Overall median comparison times for our study and the previous studies are all shown in Figure <ref>.
We have also included median comparison times for the three comparison types in our study, but excluded Uzun et al.'s reported mean of 12.5 seconds as it was for a pair of users carrying out the comparison collaboratively <cit.>.
As the figure shows, our overall results and those of Kainda et al. and Tan et al. are more or less in line with each other, with Dechand et al.'s result seemingly being an outlier to some extent.
Another important point depicted by our results is that overall medians only give reliable estimates in environments where occasional attacks and random comparisons are expected. In safer environments, where the overwhelming majority of the comparisons are expected to be safe ones, timing estimates should be considered to be considerably higher, e.g., by about a third for 60-digit fingerprints.
Kainda et al. did not observe any false acceptance errors (called security failure there) in their 30-participant study for 6-digit fingerprints <cit.>.
Dechand et al. reported a 6.3% rate (called fail rate) for 34-digit fingerprints <cit.> and Tan et al. a 35% rate (called fraction [of attacks] missed) for 48-digit fingerprints <cit.>.
Our results of 13.2%, 30.9%, and 43.5% false acceptance error rates for 20, 40, and 60-digit fingerprints are broadly in line with the results above, except for that of Dechand et al.'s, as shown in Figure <ref>.
A possible explanation for the discrepancy between Dechand et al.'s result and the rest, both in terms of comparison time and error rates, is that Dechand et al.'s participants were particularly attentive and hence took longer time for carrying out the comparisons, ending up with much lower error rates.
As for false rejection rates, Kainda et al. report a rate of 3.3% (called non-security failure) for 6-digit fingerprints <cit.> and Dechand et al. 0.28% (called false positive) for 34-digit fingerprints <cit.>.
Tan et al. do not report the rate.
Our rates of 0.9%, 2.7%, and 2.0% for 20, 40, and 60-digit fingerprints are largely consistent with the results above.
As Figure <ref> shows, mean false rejection rate remains below 5% irrespective of the length of compared fingerprints.
§.§ Limitations
It is not immediately clear what the best method is to control the similarity between pairs of fingerprints, ensuring adversarial pairs of different lengths have comparable similarity.
For numeric fingerprints represented without chunking and in one line, one may keep the proportion of different digits constant for various fingerprint lengths.
However when chunking and multiple lines come into play, factors such as where in each line and between chunks the differences appear and how many chunks are affected need to be taken into account.
We aimed for a simple method of allowing one chunk of difference for all lengths, but this would mean that the proportion of different digits will not stay the same.
In our adversarial comparisons, we considered near-collision fingerprints differing only in one chunk immediately after the midpoint.
This means that the non-identical chunk appeared in different positions in different conditions: in the middle of the line for the 1 Line condition, in the beginning of the second line for the 2 Line condition, and in the middle of the middle line for the 3 Line condition.
This may have introduced a confound in comparing the 2 Line condition results with the other two conditions, but the comparisons between 1 Line and 3 Line conditions are not expected to have been affected.
We have simulated smartphone user interfaces within browsers.
In practice, comparisons are made on two real smartphones that are likely to be different makes or models.
However, we do not expect this issue to have had a considerable effect on our results in general.
Our participants were largely young (around 69% 21–35), male (around 63% male), and educated (around 69% with tertiary education).
This needs to be kept in mind when considering the results.
§ DISCUSSIONS AND CONCLUSIONS
We discuss the implications of our results and some possible directions for future work in this section.
§.§ Implications of the Results
Figure <ref> shows the summary of our results in terms of statistical significance for comparison time on the left and error rates on the right.
We list the main takeaways from our study based on the results in the following.
Although these are not mutually exclusive, it is instructive to look at the results from various perspectives.
Fingerprint length is a major determinant of efficiency.
As the analysis in Section <ref> shows, for safe comparisons, changes in comparison time are significant with respect to fingerprint length for all length differences.
In the most common use cases of numeric key fingerprint verification, the overwhelming majority of comparisons are expected to be safe comparisons.
Hence, our results provide strong evidence for the intuition that fingerprint length should be considered as a significant determinant of efficiency when designing numeric key fingerprint verification systems.
Overall time estimates can be misleading.
Analysis in Section <ref> demonstrates that time differences between comparison types are significant at all lengths, with timing estimates for safe comparisons being significantly higher than other types.
Given that in most common use cases we expect safe comparisons to dominate, median comparisons times in practice are going to be closer to median safe comparison times.
However, overall comparison times usually reported in the literature assume arbitrary and unrealistic proportions of safe, adversarial, and random comparisons.
Hence, when considering efficiency, design decisions for common use cases should be made based on safe comparison times, when available, rather than overall comparison times usually reported in the literature.
If safe comparison times are not available, our results show they can be estimated to be between a tenth and a third above overall times, depending on fingerprint length.
Users are neither efficient nor effective in comparing highly similar long fingerprints.
Focusing on adversarial fingerprints with high similarity, the results in Sections <ref> and <ref> show that although users take significantly longer time to perform the comparison, they make significantly higher false acceptance errors which can be security critical.
This underlines the crucial role of providing alternative or complementary means of key fingerprint verification for contexts where higher levels of security are required, as manual verification of long fingerprints suffers from low usability.
Manual key fingerprint verification provides a lower security level than usually assumed.
Fingerprint lengths are usually chosen to provide desired levels of security.
This level of security indicates the adversarial power required to achieve a (full) fingerprint collision (i.e. an adversarial fingerprint identical to an authentic one) and hence fool the user with a success probability of 1.
For instance, the Signal/WhatsApp fingerprint is designed to provide 112-bit security since the adversarial power required for finding a second preimage for 30-digit key fingerprints computed with 5200 hash iterations is 10^30× 5200 ≈ 2^99.7× 2^12.3≈ 2^112.
This means that with approximately 0.69 × 2^112 hash computations, an adversary is expected to achieve a 50% success rate.
Looking at another point of interest on the attack success probability curve (specified in Section <ref>), to achieve a 40% attack success rate, the adversary would be expected to perform approximately 0.51 × 2^112≈ 2^111 computations.
However, as our results in Section <ref> show, a near collision (i.e. an adversarial fingerprint sufficiently similar to an authentic one) is enough to achieve a considerable false acceptance error rate as high as 40%.
As Table <ref> shows, such a near collision would only require 2^95.4 adversarial power, i.e. approximately 0.69 × 2^95.4≈ 2^94.9 hash computations.
False acceptance error rate is strongly indicative of the success rate for attack campaigns targeting multiple victims repeatedly, which can be possible in many use cases of such fingerprints.
Hence, it is more realistic to think of the Signal/WhatsApp fingerprint length as providing approximately 96-bit security rather than 112-bit security,
and in general, longer fingerprint lengths for which high false acceptance rates are possible should be considered to provide considerably less security than usually assumed.
Users are quite efficient and effective in recognising dissimilar fingerprints.
Our results for random fingerprint comparisons in Sections <ref> and <ref> clearly show that not only users are pretty quick and accurate in recognising highly dissimilar fingerprint pairs, but also both comparison time and false rejection error rate stay low and roughly constant even with considerable changes in fingerprint length.
As discussed before, this points toward a short-circuit evaluation like behaviour exhibited by users in performing fingerprint comparison.
Consequently, in an environment where users may be expected to perform higher proportion of such comparisons, designers can be confident that users can handle a wide range of fingerprint lengths with similar effectiveness and efficiency.
False rejection errors are rare.
False rejection errors and rates stay quite low across a relatively wide range of fingerprint lengths as the results in Section <ref> show. Indeed, even the 95% confidence interval upper limits stay below 5% in all the measurements we carried out.
Therefore, when designing such mechanisms, decisions for fingerprint length can be made mainly based on efficiency and security (including false acceptance errors).
Similarity is a significant determinant of efficiency.
The most emphatic results were given by the analysis of comparison time with respect to comparison type in Section <ref>: the differences between comparison time for safe and adversarial, as well as between adversarial and random, and hence between safe and random fingerprint pairs are found to be significant at all lengths.
This shows that the effect of similarity between fingerprints is markedly significant on the efficiency of manual key fingerprint comparison.
§.§ Future Work
As with any other study, the scope of the parameters had to be limited in our investigation and further work is required to explore the parameter space more broadly.
Of particular interest would be investigating a higher granularity of lengths and a wider range of similarity between fingerprints.
To test whether our results can be generalised to wider contexts, it would be crucial to replicate the investigation for other verification modes, including verbal and collaborative comparisons, and other fingerprint representations, including word-based ones.
Our results can be seen as part of a series of related works collectively demonstrating the poor usability of currently recommended methods for manual verification of long key fingerprints, e.g. those used by Signal and WhatsApp, and underlining the importance of developing better manual and automated verification methods.
§.§ Acknowledgement
We sincerely thank the reviewers of ARES'23 for their valuable and constructive comments.
| http://arxiv.org/abs/2306.11718v2 | 20230620175254 | $\texttt{MultiHypExp}$: A Mathematica Package For Expanding Multivariate Hypergeometric Functions In Terms Of Multiple Polylogarithms | ["Souvik Bera"] | hep-th | ["hep-th", "hep-ph", "math-ph", "math.MP"] |
MultiHypExp: A Mathematica Package For Expanding Multivariate Hypergeometric Functions In Terms Of Multiple Polylogarithms
Souvik Bera
===========================================================================================================================
§ INTRODUCTION
Multivariate Hypergeometric Functions <cit.> (hereinafter MHFs), while being ubiquitous in mathematics and physics, are also of great importance today to elementary particle physics applications as they appear as solutions to the dimensionally regularized multi-scale Feynman integrals required for higher-order corrections to the scattering amplitude. The expansion of Feynman integrals, expressed in terms of MHFs, in the dimension parameter ε = (4-D)/2 necessitates the construction of robust algorithms and efficient computer programs. The purpose of the present work is to develop the considerations of the recent publication <cit.> further and present a Mathematica <cit.> realization in terms of the MultiHypExp package.
A long-standing relationship between the Feynman integrals and MHFs has existed since the conjecture of Regge <cit.>. Standard methods like Mellin-Barnes <cit.>, Negative DIMension <cit.>, Method Of Brackets <cit.>, Functional Equations <cit.> can yield hypergeometric function (HF) representation of Feynman integrals. The connection between these two is further revived by recent works on realizing Feynman integrals as GKZ hypergeometric systems
<cit.>.
In dimensional regularization, the dimension D of the Feynman integrals appears linearly in the Pochhammer parameters of its associated HF representations, which are further expanded in series in the parameter ε. Series expansions of one-variable HFs are well studied in the literature <cit.>. The ε-expansion of the double-variable Appell and Kampé de Fériet functions is discussed in <cit.>. Certain MHFs related to Feynman integrals are expanded in series using the differential equation method <cit.>. The series expansion of multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions is considered in <cit.>. Automated packages also exist to find the series coefficients of certain HFs analytically <cit.> and numerically <cit.>.
Recently, a method to expand any MHF in series around arbitrary values of its Pochhammer parameters has been prescribed in <cit.>. In this approach, starting with the series representation of a given MHF, the expansion can be obtained around any value of its parameters, and the series coefficients are expressed in terms of MHFs having the same domain of convergence. However, it is possible to express the coefficients in terms of multiple polylogarithms (MPLs) if the series expansion is carried out around integer values of the Pochhammer parameters for most of the MHFs (see Section <ref> below for the exceptional cases). It is advantageous to express the series expansion coefficients in terms of MPLs, whenever possible, rather than MHFs, as the former can be readily evaluated at any value of their arguments using well-established computer libraries, even though the series representation of the given MHF is valid only in certain domains of convergence.
In this work, we reshape the algorithm presented in <cit.> such that, whenever possible, the series coefficients of MHFs can be expressed in terms of much simpler MPLs rather than MHFs. The modified algorithm is further implemented in Mathematica as the package MultiHypExp. In its present version, it can obtain series expansions of one variable _pF_p-1, double variable Appell-Horn, certain Kampé de Fériet functions and certain triple variable Lauricella-Saran functions around integer values of their Pochhammer parameters, which typically arise in the Feynman integral calculus.
The article is organized as follows. We recapitulate the algorithm proposed in <cit.> and discuss the modifications made in Section <ref>, followed by illustrative examples of Gauss _2F_1 and Appell F_2 functions in Section <ref>. An application to Feynman integrals is provided in Section <ref>. We give a detailed description of the commands of the package and their usage in Section <ref> and explain how the methodology is implemented in Mathematica in Section <ref>. Finally, we present our conclusions in Section <ref>, which is followed by a number of appendices. In Appendices <ref> and <ref> we provide the definitions of MPLs and MHFs respectively. A list of expressions of series expansions of pure MHFs is given in Appendix <ref>. In Appendices <ref> and <ref>, we discuss the transformation theory and reduction formulae of MHFs.
§ METHODOLOGY
A method to obtain the series expansion of any MHF around arbitrary values of its Pochhammer parameters is prescribed in <cit.>, where the expansion parameter, say ε, appears linearly in the Pochhammer parameters. Throughout this method, the hypergeometric structure of a given MHF remains intact. In other words, starting with a series representation, one can find the series expansion of a given MHF, whose coefficients are expressed in terms of higher summation fold MHFs with the same domain of convergence as that of the given MHF. However, the higher fold MHFs appearing in the series coefficients contain the same number of independent variables as the given MHF. For instance, the one variable Gauss _2F_1(,-;-1;x) is expanded in series in <cit.> as
_2F_1(,-;-1;x) = 1 - ε^2 x _3F_2(1,1,1;2,2;x)
+ ε^3 (2/3 x _3F_2(1,1,1;2,2;x)
+ x/3 KdF^2:1;2_2:0;1[ [ 1,1 1 1,1; 2,2 2 ] | x,x ]
+ x^2/12 KdF^2:1;2_2:0;1[ [ 2,2 1 1,2; 3,3 3 ] | x,x ]
)
+ O(ε^4)
Here, the Kampé de Fériet functions (denoted as KdF) are double summation fold HFs with only one variable x.
It is well-known that the function _3F_2(1,1,1;2,2;x) can be written in terms of simpler function
_3F_2(1,1,1;2,2;x) = Li_2(x)/x
Nonetheless, such reduction formulae for the KdF functions in eqn:2f1exp are difficult to obtain. To the best of our knowledge, there do not exist reduction formulae for arbitrary MHFs.
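The reduction quoted above is easily confirmed numerically with built-in Mathematica functions alone; the sample point x = 3/10 below is an arbitrary choice inside |x|<1.

  (* Numerical confirmation of 3F2(1,1,1;2,2;x) = Li_2(x)/x at a sample point. *)
  x0 = 3/10;
  N[HypergeometricPFQ[{1, 1, 1}, {2, 2}, x0], 20]
  N[PolyLog[2, x0]/x0, 20]   (* the two values agree to the displayed precision *)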
There are some advantages in expressing the coefficients of the series expansion of MHFs (whenever possible) in terms of simpler functions such as MPLs <cit.>, whose properties are well studied. The numerical evaluation of the multiple polylogs for any values of their arguments can be easily performed using readily available computer libraries <cit.>, whereas the series representations of the MHFs are valid only in certain domains of convergence. For instance, the KdF functions in eqn:2f1exp are only valid for |x|<1.
This motivates us to modify the algorithm presented in <cit.> slightly such that, the series coefficients of a given MHF, whenever possible, can be expressed in terms of MPLs. In order to achieve this, we must perform the series expansion around integer values of Pochhammer parameters for most of the HFs, because the series expansion around rational values of parameters often leads to functions beyond MPLs.
We now recapitulate the algorithm proposed in <cit.>, which consists of five steps.
* Step 1 : Distinguish the type of series expansion by observing the Pochhammer parameters of the given MHF (say F(ε))
* Step 2 : Find the series expansion of F(ε), if it is of Taylor type
* Step 3 : If the series expansion is of Laurent type, find a secondary function, say G(ε), that can be related to F(ε) by a differential operator H(ε)
F(ε) = H(ε) ∙ G(ε)
and G(ε) can be expanded in a Taylor series following Step 2. Here, the symbol ∙ denotes the action of the differential operator H(ε) on the function G(ε).
* Step 4 : Find the corresponding differential operator H(ε)
* Step 5 : Perform the series expansion of the operator H(ε), apply it to the Taylor expansion of G(ε), and collect the different powers of ε
We discuss each of the steps in detail.
§.§ Step 1 : Determination of the type of the series expansion
By observing the structure of the Pochhammer parameters in the series representation of a given MHF, one may find the type of its series expansion. The series expansion of a MHF may be a Laurent series if any of the following situations appears
* When one or more lower Pochhammer parameters (i.e., Pochhammer parameters in the denominator) are of the form
(n + qε)_p
* When one or more upper Pochhammer parameters (i.e., Pochhammer parameters in the numerator) are of the form
(p + qε)_n
where n is a non-positive integer and p is a non-negative integer. We call a Pochhammer parameter singular if it satisfies any of the above two conditions. For instance, the series expansion of _2F_1 (1,1;-1+ε;x) is of Laurent type because of the presence of the lower Pochhammer parameter (-1+ε)_p, where p is the summation index running over non-negative integer values.
However, the above two criteria do not guarantee that the series expansion of the corresponding MHF must be of Laurent type. The expansion of _2F_1 (ε,-ε;-1+ε;x) (given in eqn:2f1exp) is of Taylor type even though the first condition is satisfied. This is due to the presence of ε-dependent upper Pochhammer parameters. Thus, the above two criteria are necessary but not sufficient conditions.
§.§ Step 2 : Taylor expansion of MHF
In <cit.>, the Taylor expansion of a MHF is found by taking successive derivatives of the function with respect to its Pochhammer parameters, which results in higher summation fold MHFs in the series coefficients. In order to express the series coefficients in terms of MPLs, we modify this step.
To proceed, we define a HF with n variables having Pochhammer parameters a and b
F:=F(a;b;x) = ∑_m∈ℕ_0^n [Γ(a+ μ· m )/ Γ(a)] [Γ(b)/Γ(b+ ν· m )] x^m/m! = ∑_m∈ℕ_0^n (a)_μ· m/(b)_ν· m x^m/m! = ∑_m∈ℕ_0^n A(m) x^m
We have used vector notation here; a, b and m are vectors of length p,q and n respectively. μ and ν are matrices of size p × n and q × n respectively with integers as their elements. ℕ_0 denotes natural numbers including zero.
* a := {a_1 , a_2, …, a_n}
* x^m := ∏_i=1^n x_i^m_i
* m! := ∏_i=1^n (m_i!)
* Γ(a) := ∏_i=1^n Γ(a_i)
* (a)_m :=∏_i=1^n (a_i)_m_i =∏_i=1^n Γ(a_i+m_i)/Γ(a_i)
* ∑_m∈ℕ_0^n := ∑_m_1 =0^∞…∑_m_n =0^∞
These MHFs are known to satisfy partial differential equations <cit.>. Let,
P_i = A(m+e_i)/A(m) = g_i(m)/h_i(m), i=1,…,n
where e_i is unit vector with 1 in its i-th entry. The annihilators L_i of F(a;b;x) (eqn:mhfdefinition) are given by
L_i = [ h_i(θ) 1/x_i - g_i(θ) ]
where θ = {θ_1,…,θ_n } is a vector containing Euler operators θ_i= x_i∂_x_i.
The set of PDEs associated with a MHF can be brought to Pfaffian form.
d g = Ωg
where Ω = ∑_i=1^n Ω_i dx_i and the vector g contains the function F and its derivatives.
g = (F , θ_i∙F , θ_i θ_j∙F, … )^T
The integrability condition reads d Ω + Ω∧Ω =0. The length of g is equal to the holonomic rank of the system of PDEs, which can be computed using a Gröbner basis calculation. The Pfaffian systems of the Appell and Lauricella functions are well studied in the mathematics literature <cit.>.
We find the Taylor series expansion of F by finding the solution of eqn:Pfaffian with an appropriate boundary condition. In the context of computing Feynman integrals using the differential equation method, it is observed in <cit.> that, by choosing a particular set of master integrals, the set of differential equations can be brought to a much simpler form known as the canonical form. We apply this technique to solve the system eqn:Pfaffian.
Starting with the Pfaffian system
d g = Ω() g
One can find a transformation T to bring the system into canonical form
d g' = Ω' g'
with
g = T g'
Ω' = T^-1Ω T - T^-1 dT
The system eqn:canonicalform can be solved order by order in with a boundary condition.
For any MHF, we note that
.F(a;b;x)|_x = 0 = 1
θ_i∙ .F(a;b;x)|_x = 0 = 0
θ_iθ_j∙ .F(a;b;x)|_x = 0 = 0
⋮
Therefore, the boundary condition can be easily obtained in terms of g as,
. g|_x=0 = (1,0,0,…)^T
The system eqn:canonicalform can now be solved with the boundary condition
. g'|_x=0 = T^-1. g|_x=0
Finally, the series expansion of F can be found by converting the solution of eqn:canonicalform with boundary condition eqn:BCg' to the g basis by using eqn:transformation.
§.§ Step 3 : Construction of the secondary function
The secondary function G(ε) related to F(ε) can be obtained by performing the following replacements of the Pochhammer parameters of F(ε)
* When one or more lower Pochhammer parameters of F(ε) are singular, then
(n + qε)_p → (1+ qε)_p
* When one or more upper Pochhammer parameters of F(ε) are singular, then
(p + qε)_n → (qε)_n
As an example, the function _2F_1 (ε,-ε;-1+ε;x) contains a lower Pochhammer parameter that is singular. Therefore, following replacement rule 1 above, we replace the lower Pochhammer parameter to find the associated secondary function as
_2F_1 (ε,-ε;1+ε;x)
The secondary function G(ε) obtained in this way can be expanded in a Taylor series following Step 2.
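For illustration, the two replacement rules can be mimicked with Mathematica pattern matching; poch and ep below are stand-in symbols for a Pochhammer factor and the expansion parameter, chosen here for the sketch and not objects defined by the package. The rules are of course only applied to the parameters identified as singular in Step 1.

  (* Stand-in notation: poch[b, idx] denotes the Pochhammer symbol (b)_idx, ep the expansion parameter. *)
  lowerRule = poch[n_Integer + q_. ep, idx_] :> poch[1 + q ep, idx] /; n <= 0;   (* rule 1 *)
  upperRule = poch[p_Integer + q_. ep, idx_] :> poch[q ep, idx] /; p >= 0;       (* rule 2 *)
  (* Example: the singular lower parameter of 2F1(ep,-ep;-1+ep;x) *)
  poch[-1 + ep, m] /. lowerRule    (* -> poch[1 + ep, m] *)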
§.§ Step 4 : The differential operator
There exist differential operators that relate two MHFs with Pochhammer parameters differed by integer values. In <cit.>, a general algorithm based on Gröbner basis techniques is provided by Takayama. These differential operators play crucial role in the differential reduction of MHFs <cit.>. For our purpose, we only need two types of differential operators; the step-down operator for the lower Pochhammer parameters and step-up operator for the upper Pochhammer parameters. These two types of operators can be easily obtained from the series representation of a MHF. For a MHF with n variables (eqn:mhfdefinition), the step-down operators for the lower Pochhammer parameters are given by
F(a;b;x) = 1/b_i(∑_j=1^nν_i jθ_x_j +b_i) ∙ F(a;b+𝐞_𝐢;x) = H_-(b_i) ∙ F(a;b+𝐞_𝐢;x)
and the step-up operators for the upper Pochhammer parameters can be found as
F(a,b,x) = 1/a_i-1(∑_j=1^nμ_i jθ_x_j +a_i-1) ∙ F(a-𝐞_𝐢,b,x) = H_+(a_i) ∙ F(a-𝐞_𝐢,b,x)
Here, as before, the bullet means the action of the operator H_± on a function. When required, the step-up and the step-down operators may be applied multiple times to increase or decrease any upper or lower Pochhammer parameter by a suitable integer. In such cases, the product of these unit-step operators modulo the ideal generated by the L_i's can be taken as the required differential operator.
H = [∏_i,j H_-(b_i) H_+(a_j) ]/ ⟨ L_1,…,L_n⟩
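As a consistency check of the step-down operator above for the Gauss _2F_1 (a one-variable case with ν = 1), the relation can be verified directly with built-in functions; the numeric parameter values below are arbitrary.

  (* Check 2F1(a,b;c;x) = (1/c)(theta_x + c) acting on 2F1(a,b;c+1;x). *)
  {a0, b0, c0} = {3/7, 5/3, 4/5};
  lhs = Hypergeometric2F1[a0, b0, c0, x];
  rhs = (1/c0) (x D[Hypergeometric2F1[a0, b0, c0 + 1, x], x] + c0 Hypergeometric2F1[a0, b0, c0 + 1, x]);
  Series[lhs - rhs, {x, 0, 6}]    (* all displayed coefficients vanish *)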
§.§ Step 5 : Action of the differential operator
In the final step, we apply the differential operator found in Step 4 to the Taylor expansion of G(ε) obtained in Step 3. Since the coefficients of the Taylor expansion of G(ε) are expressed in terms of MPLs, applying the differential operator amounts to taking ordinary derivatives of the MPLs with respect to their arguments.
Next, we present examples of series expansion of one and two-variable HFs to illustrate the methodology.
§ EXAMPLES
§.§ Gauss _2 F_1 function
Let us consider the following Gauss HF, whose series expansion we wish to find.
F(ε) := _2F_1(ε,-ε;-1+ε;x) = ∑_m=0^∞ (ε)_m (-ε)_m/ (-1+ε)_m x^m/m!
We notice that the lower Pochhammer parameter of F(ε) is singular, so the series expansion may be of Laurent type, and we therefore construct the secondary function.
The secondary function G(ε), which is related to F(ε) and has a Taylor series expansion, can be found by replacing the singular Pochhammer parameter,
G(ε) := _2F_1(ε,-ε;1+ε;x) = ∑_m=0^∞ (ε)_m (-ε)_m/ (1+ε)_m x^m/m!
Next, we go on to find the series expansion of G(ε). By constructing a vector g = ( G(ε), θ_x G(ε))^T and making use of the ODE of the Gauss _2F_1, we obtain the following Pfaffian system
d g = Ωg,Ω = (
[ 0 1/x; ϵ ^2/x-1 ϵ/x-1-ϵ/x; ])
Further, the Pfaffian system can be brought to canonical form by the transformation matrix
T = (
[ 1 0; 0 ϵ; ])
which reads
d g' = Ω' g' ,Ω' = ε (
[ 0 1/x; 1/(x-1) 1/(x-1)-1/x; ])
This system can now be solved order by order in ε with the boundary condition given by
g(x=0) = (1,0)^T
Thus, we find the Taylor series expansion of G(ε) as
G(ε) = 1+ε^2 G(0,1;x)+ε^3 (-G(0, 0, 1; x) + G(0, 1, 1;x))+O(ε^4)
Here the G(0,1;x), etc., are MPLs (see Appendix <ref> for their definition), not to be confused with the secondary function G(ε).
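The order-by-order solution can be reproduced with built-in integration alone. The sketch below integrates dg' = ε Ω̂ g', with Ω̂ the ε-independent matrix in Ω' = ε Ω̂ displayed above, and recovers the first non-trivial coefficient of the expansion of G(ε).

  (* epsilon-stripped canonical connection and the order-zero boundary value g'(0) = (1,0)^T *)
  OmegaHat = {{0, 1/x}, {1/(x - 1), 1/(x - 1) - 1/x}};
  g0 = {1, 0};
  nextOrder[g_] := Module[{integrand = (OmegaHat . g) /. x -> t},
     Integrate[#, {t, 0, x}, Assumptions -> 0 < x < 1] & /@ integrand];
  g1 = nextOrder[g0];     (* {0, Log[1 - x]} *)
  g2 = nextOrder[g1];     (* first entry: -PolyLog[2, x], i.e. G(0,1;x) *)
  (* Since T = diag(1, eps), the first component of g equals that of g', so
     G(eps) = 1 + eps^2 G(0,1;x) + O(eps^3), in agreement with the expansion above. *)
  g2[[1]]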
Next, we obtain the differential operator that relates the two Gauss HFs
F(ε) = H(ε) ∙ G(ε)
Following the prescription given in Step 4, we find
H (ε) = (ε (2 x-1)-x+1)/(ε(ε-1) (x-1)) θ_x + (ε (2 x-1)-x+1)/((ε-1) (x-1))
Finally, we apply the differential operator H(ε) to the Taylor expansion of G(ε) to find the series expansion of F(ε)
F(ε) = 1 + ε[G(1;x)-x/(x-1)] + ε^2 [-x/(x-1) G(1;x)+G(1,1;x)-x/(x-1)] + O(ε^3)
The result is consistent with the result obtained using the <cit.> package.
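Independently of the package, the expansion can be spot-checked with the built-in Hypergeometric2F1 at a small numerical value of ε; here ε = 10^-3 and x = 3/10 are arbitrary sample values, and the MPLs are rewritten via G(1;x) = log(1-x) and G(1,1;x) = log^2(1-x)/2.

  (* Numerical spot-check of the expansion of 2F1(eps,-eps;-1+eps;x) through order eps^2. *)
  eps0 = 10^-3; x0 = 3/10;
  exact  = N[Hypergeometric2F1[eps0, -eps0, -1 + eps0, x0], 20];
  approx = N[1 + eps0 (Log[1 - x0] - x0/(x0 - 1)) +
             eps0^2 (-x0/(x0 - 1) Log[1 - x0] + Log[1 - x0]^2/2 - x0/(x0 - 1)), 20];
  exact - approx    (* of order eps0^3, i.e. ~10^-9, as expected for a truncation at eps^2 *)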
§.§ Appell F_2 function
Let us now consider an example of double variable Appell F_2 function.
F := F_2 (1,1,ε;ε,-ε;x,y)
We observe that both the lower Pochhammer parameters of the above function are singular. Thus the series expansion of F_2 may be of Laurent type.
Thus, following Step 3, we find a secondary function by replacing the singular lower Pochhammer parameters by non-singular ones
F' = F_2 (1,1,ε;1+ε,1-ε;x,y)
The secondary function F' can now be expanded in the Taylor series. Note that, the series expansion of this particular Appell F_2 function (i.e., F') is readily available in the literature ( Eq. (81) of <cit.> and Section 3.5 of <cit.> ).
Following step 2, we find the series expansion of F'. To proceed, we obtain the PDE associated with the Appell F_2(a, b_1, b_2; c_1, c_2 ;x,y) function
L_1 = -a b_1+(c_1-x (a+b_1+1))∂_x-b_1 y ∂_y-x y ∂_x ∂_y-(x-1) x ∂_x^2
L_2 = -a b_2+(c_2-y (a+b_2+1))∂_y -b_2 x ∂_x-x y ∂_x ∂_y-(y-1) y ∂_y^2
The operators L_1 and L_2 annihilate the Appell F_2 function.
Next, using the vector
g = ( F', θ_x F', θ_y F', θ_x θ_y F' )^T
the above set of PDEs is brought to the Pfaff system
d g = Ω_1 dx + Ω_2 dy
where,
Ω_1 = (
[ 0 1/x 0 0; -1/x-1 ϵ -2/x-1-ϵ/x -1/x-1 -1/x-1; 0 0 0 1/x; ϵ/x-1-ϵ/x+y-1 (2-ϵ ) ϵ/x-1-(2-ϵ ) ϵ/x+y-1 ϵ/x-1-2 ϵ +1/x+y-1 -ϵ -2/x+y-1--ϵ x+(ϵ +1) x-x-ϵ/(x-1) x; ])
Ω_2 = (
[ 0 0 1/y 0; 0 0 0 1/y; -ϵ/y-1 -ϵ/y-1 ϵ/y+-2 ϵ -1/y-1 -1/y-1; ϵ/y-1-ϵ/x+y-1 ϵ/y-1+ϵ (ϵ +1)-3 ϵ/x+y-1 2 ϵ +1/y-1-2 ϵ +1/x+y-1 -ϵ -2/x+y-1+ϵ/y+1/y-1; ])
Here θ_x = x ∂_x and θ_y = y ∂_y are Euler operators. Further, the Pfaff system is brought to the canonical form by the transformation matrix T
T = (
[ 1/x-1 0 0 0; 2 ϵ -x/(x-1)^2 ϵ/x-1 -x ϵ/(x-1) (x+y-1) 0; -ϵ/x-1 0 ϵ/x+y-1 0; ϵ(x^2+(y-1) x-2 y ϵ)/(x-1)^2 (x+y-1) -ϵ ^2/x-1 x ϵ (-x+y ϵ +1)/(x-1) (x+y-1)^2 ϵ ^2/x+y-1; ])
Finally, the system is solved order by order in ϵ with the boundary condition
g (x = 0, y=0) = (1,0,0,0)^T
which is valid for all orders in ε. Thus we find the series expansion of F' to be
F' = -1/x-1+ [(G(1-y;x)-2 G(1;x)+G(1;y))/x-1]+ ^2 1/x-1[2 G(1;x) G(1;y)
-2 G(1;y) G(1-y;x)-G(0,1-y;x)+2 G(1,1-y;x)+2 G(1-y,1;x)
-G(1-y,1-y;x)+2 G(0,1;x)-4 G(1,1;x)+G(0,1;y)-2 G(1,1;y)] + O(^3)
Next in step 4, we find the differential operator that relates F with F'
F = H∙ F'
Using the <cit.> package, we obtain
H = 1 + 1/ϵθ_x -1/ϵθ_y -1/ϵ ^2θ_x θ_y
In the final step 5, we perform the action of the operator H on the series expansion of F' to find the Laurent series expansion of F,
F = x/ε (2/(x-1)^2-1/(x+y-1)^2) + ε^0[ x (4/(x-1)^2-2/(x+y-1)^2) G(1;x)
+2 x (1/(x+y-1)^2-1/(x-1)^2) G(1;y)+x (1/(x+y-1)^2-2/(x-1)^2) G(1-y;x)
-(x+2 y-1) (x (2 x+3 y-3)-y+1)/((x-1)^2 (x+y-1)^2)] + O(ε)
We cross-checked the above result numerically and found it to be consistent.
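One way to carry out such a cross-check, using nothing but the defining double series (Appendix <ref>) truncated at finite order, is sketched below; ε = 10^-4 and the point (x,y) = (1/5, 3/10) are arbitrary choices inside the domain of convergence |x|+|y|<1.

  (* Truncated double series of F2(a,b1,b2;c1,c2;x,y). *)
  f2series[a_, b1_, b2_, c1_, c2_, x_, y_, kmax_] :=
    Sum[Pochhammer[a, m + n] Pochhammer[b1, m] Pochhammer[b2, n]/
        (Pochhammer[c1, m] Pochhammer[c2, n]) x^m y^n/(m! n!), {m, 0, kmax}, {n, 0, kmax}];
  eps0 = 10^-4; x0 = 1/5; y0 = 3/10;
  (* eps * F should approach the eps^(-1) coefficient of the Laurent expansion as eps -> 0. *)
  N[eps0 f2series[1, 1, eps0, eps0, -eps0, x0, y0, 40], 15]
  N[x0 (2/(x0 - 1)^2 - 1/(x0 + y0 - 1)^2), 15]   (* the two agree up to O(eps) corrections *)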
§ APPLICATION TO THE FEYNMAN INTEGRALS
In <cit.>, the one-loop three-point function with two massive propagators and one off-shell leg is expressed in terms of double variable HFs using the NDIM method. The authors presented several HF representations (Eq. (3.23) - (3.27) and Eq. (3.18) of the abovementioned article) for the same Feynman integral, which are related by analytic continuations. Furthermore, starting with a HF representation valid in a certain kinematic region (namely, region IIb) and using the integral representations of MHFs, the authors found the series expansion of the Feynman integral in Eq. (3.34) of <cit.>.
In principle, any analytic continuation of the Feynman integral can be used to find the series expansion. In this Section, we start with the HF representation of the Feynman integral valid in region IIIb, which contains a Kampé de Fériet and a Horn function, and find the series expansion using the package. By doing so, besides validating their results, we also examine whether series expansions obtained from different analytic continuations agree.
In region IIIb, the Feynman integral can be expressed as
I_3^D(ν_1, ν_2, ν_3 ; Q_1^2, 0,0, M_1^2, M_2^2, 0)=I_3^{m_2, q_1}+I_3^{p_1, q_1}
where
I_3^{m_2, q_1}=(-1)^D/2(-M_1^2)^D/2-ν_1-ν_2-ν_3Γ(ν_1+ν_2+ν_3-D/2) Γ(D/2-ν_2-ν_3)/Γ(ν_1) Γ(D/2)
× S_1(ν_2, ν_1+ν_2+ν_3-D/2, ν_3, 1+ν_2+ν_3-D/2, D/2,-Q_1^2/M_1^2, M_2^2/M_1^2) ,
I_3^{p_1, q_1}=(-1)^D/2(-M_1^2)^-ν_1(-M_2^2)^D/2-ν_2-ν_3Γ(ν_2+ν_3-D/2) Γ(D/2-ν_3)/Γ(ν_2) Γ(D/2)
× H_2(ν_2+ν_3-D/2, ν_3, ν_1, D/2-ν_3, D/2, Q_1^2/M_2^2,-M_2^2/M_1^2)
Here, the ν_i's denote the powers of the propagators, S_1 is a KdF-type function, and H_2 is a Horn function. In terms of our notation for KdF functions (see Appendix <ref>),
S_1(a_1,a_2,b,c,d,x,y) = KdF^2:1;0_1:1;0[
[ a_1, a_2 b ; c d ] |
x,y
]
To proceed, we set the dimension D = 4-2ε and the powers of the propagators to unity (i.e. ν_i = 1, i = 1,2,3). The KdF function S_1 takes a simpler form
S_1(1,1+ε,1,1+ε,2-ε,x,y) = ∑_m,n=0^∞(1)_m (1)_m+n/ (2-ε)_m x^m y^n/m! n!
= -1/(y-1) _2F_1(1,1;2-ε;-x/(y-1))
The second identity is obtained by performing the summation of the index n.
On the other hand, the Horn H_2 function takes the form
H_2(ε,1,1,1-ε,2-ε,x,y)
We can easily find the series expansion of the Gauss _2F_1 (eqn:application2F1) and of the Horn H_2 function (eqn:applicationH2) in ε using the presented package (see Section <ref>), which reads
S_1(1,1+ε ,1,1+ε,2-ε,x,y)
= -1/x G(1;x/(1-y))
+ε/x[G(1;x/(1-y))-G(0,1;x/(1-y))+G(1,1;x/(1-y))]+O(ε^2)
H_2(ε,1, 1,1-ε,2-ε,x,y) = -G((y+1)/y;x)/(x y)
+ε/(x y)(G((y+1)/y;x)-G(0,(y+1)/y;x)
+G((y+1)/y,1;x)+G((y+1)/y,(y+1)/y;x))+O(ε^2)
Using the above two expressions, we find the series expansion of the Feynman integral in terms of MPLs as
I_3^D (1,1,1 ; Q_1^2, 0,0, M_1^2, M_2^2, 0) = 1/Q_1^2[G(0,1;-Q_1^2/M_1^2-M_2^2)-G(0,1-M_1^2/M_2^2;Q_1^2/M_2^2)
-G(1,1;-Q_1^2/M_1^2-M_2^2)+G(1-M_1^2/M_2^2,1;Q_1^2/M_2^2)+G(1-M_1^2/M_2^2,1-M_1^2/M_2^2;Q_1^2/M_2^2)
-(log(-M_1^2)+γ +i π) G(1;-Q_1^2/M_1^2-M_2^2)+(log(-M_2^2)+γ +i π) G(1-M_1^2/M_2^2;Q_1^2/M_2^2)]
which can further be written in terms of ordinary polylogarithms
as
I_3^D(1,1,1 ; Q_1^2, 0,0, M_1^2, M_2^2, 0) = 1/Q_1^2( Li_1,1( 1- M_1^2/M_2^2, - Q_1^2/M_1^2 - M_2^2)
+ Li_1(- Q_1^2/M_1^2 - M_2^2) [Li_1(1+ M_2^2)-Li_1(1+ M_1^2)])+ O()
The result obtained above in eqn:example_series_expansion matches Eq. (3.34) of <cit.> up to analytic continuations. Thus we conclude that the series expansion coefficients of different analytic continuations of a Feynman integral are related by analytic continuations. In other words, the process of finding an analytic continuation and the process of obtaining the series expansion of a MHF commute with each other.
§ DOCUMENTATION AND USAGE
In this Section, we discuss the documentation and usage of the package MultiHypExp, which can be downloaded from <https://github.com/souvik5151/MultiHypExp>. It is built in Mathematica v11.3 and also works in higher versions of Mathematica. The package depends on the following packages, whose usage is explained in Section <ref>.
* HolonomicFunctions : <http://www3.risc.jku.at/research/combinat/software/ergosum/RISC/HolonomicFunctions.html> <cit.>
* CANONICA : <https://github.com/christophmeyer/CANONICA> <cit.>
* HYPERDIRE : <https://sites.google.com/site/loopcalculations/> <cit.>
* PolyLogTools : <https://gitlab.com/pltteam/plt/-/tree/master> <cit.>
* HPL : <https://www.physik.uzh.ch/data/HPL/> <cit.>
Therefore, those packages must be loaded before loading MultiHypExp. To do so, the paths of the dependencies are stored in global path variables.
Some of the dependencies are loaded inside the package itself or called automatically when it is loaded; the user must load the remaining dependencies after setting their paths.
The performance of the package can be improved significantly by running it on multiple kernels. This can be achieved by distributing the dependency paths to the available kernels.
Finally, the package itself can be loaded after setting its path. An indicative sketch of such a session set-up is given below.
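Since the original input cells are not reproduced in this text, the following is only an indicative sketch; the path-variable names and file locations are placeholders chosen for illustration and are not necessarily those defined by the package.

  (* Placeholder global path variables; the actual variable names are defined by the package. *)
  $HolonomicFunctionsPath = "/path/to/HolonomicFunctions";
  $CANONICAPath           = "/path/to/CANONICA";
  $HYPERDIREPath          = "/path/to/HYPERDIRE";
  $PolyLogToolsPath       = "/path/to/PolyLogTools";
  $HPLPath                = "/path/to/HPL";
  (* Load whichever dependencies are not loaded automatically, then the package itself. *)
  Get["/path/to/MultiHypExp/MultiHypExp.m"];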
We now demonstrate the usage of the commands.
The package consists of two commands: a series expansion command and a reduction command.
§.§ The series expansion command
The series expansion command finds the series expansion of MHFs. It can take its input in two different forms, which we discuss below.
In either form, the command returns the first several coefficients of the series expansion of the specified function. The input arguments of the command are given below.
* The name of the HF. The following names are available for double- and triple-variable HFs.
Double variable series : F1, F2 ,F3, F4, G1, G2, G3, H1, H2, H3, H4,
H6 and H7
Three variable series : FA3, FB3, FD3, FK3, FM3, FN3 and FS3
* The list of Pochhammer parameters
* The list of variables
* The explicit series expression of the MHF
* The list of summation indices
Note that, for the expansion of _pF_p-1 functions, the function-name input is omitted and the remaining arguments are given directly; a sketch of the assumed form is given below.
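A sketch of the call syntax is given below; the command name SeriesExpand and the exact argument order are assumptions made for illustration only and should be checked against the package documentation.

  (* Assumed syntax, for illustration only. *)
  (* Form 1: series name, Pochhammer parameters, variables, expansion order. *)
  SeriesExpand[F2, {1, 1, ep, ep, -ep}, {x, y}, 3]
  (* Form 2: explicit series representation, variables, summation indices, expansion order. *)
  SeriesExpand[Pochhammer[1, m + n] Pochhammer[1, m] Pochhammer[ep, n]/
      (Pochhammer[ep, m] Pochhammer[-ep, n]) x^m y^n/(m! n!), {x, y}, {m, n}, 3]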
§.§ The reduction command
This command finds reduction formulae of MHFs in terms of MPLs. Its input consists of the following arguments, which have the same meaning as before:
* the list of Pochhammer parameters
* the list of variables
* the name of the HF. The available names are
Double variable series : F1, F2, F3 and F4
Triple variable series : FD3 and FS3
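For concreteness, an assumed form of the call is sketched below (the command name ReduceFunction is an assumption made for illustration); it corresponds to the reduction of F_2(1,1,1;2,2;x,y) discussed in Appendix <ref>.

  (* Assumed syntax, for illustration only. *)
  ReduceFunction[F2, {1, 1, 1, 2, 2}, {x, y}]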
We next show the usage of the commands by reproducing the results of the series expansion of the MHFs considered in Section <ref> and Section <ref>.
§.§ Usage of the commands
In this Section, we provide demonstrations of the two commands of our package.
§.§.§ The series expansion command
Let us consider the one variable Gauss _2F_1 function (eqn:examples2f1def) from Section <ref>,
F(ε) := _2F_1(ε,-ε;-1+ε;x) = ∑_m=0^∞ (ε)_m (-ε)_m/ (-1+ε)_m x^m/m!
Below, we find the series expansion of the above function using the series expansion command. The same result can be obtained by providing the explicit series representation of the Gauss _2F_1 function to the command; a sketch of both calls is given below.
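The two corresponding calls might look as follows, with the same caveat as above: the command name and argument order are assumed for illustration.

  (* Assumed syntax: expansion of 2F1(ep,-ep;-1+ep;x) to order ep^2. *)
  SeriesExpand[{ep, -ep}, {-1 + ep}, x, 3]
  (* Equivalent call via the explicit series representation. *)
  SeriesExpand[Pochhammer[ep, m] Pochhammer[-ep, m]/Pochhammer[-1 + ep, m] x^m/m!, x, {m}, 3]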
We now reproduce the series expansions of the bi-variate Appell F_2 functions considered in Section <ref>. Using the series expansion command on F' (defined in (<ref>)) reproduces eqn:Fpexpansion, and the analogous call for F (defined in eqn:examplesF2def1) reproduces (<ref>).
Likewise, the series expansion of the Horn H_2 function of Section <ref> (eqn:expansionH2) can be obtained in the same way.
Finally, we provide an example of a series expansion of a HF that uses the transformation theory. We take the Horn function G_2 with the following set of Pochhammer parameters,
G_2:= G_2 (ε,ε,ε,ε;x,y)
The series expansion of G_2 is calculated internally by the package using its transformation formula to the Appell F_2 (i.e., eqn:G2F2relation). Calling the series expansion command on G_2 therefore yields its expansion directly.
§.§.§ The reduction command
As discussed in Section <ref>, this command finds reduction formulae of MHFs. We demonstrate its usage with examples of double- and triple-variable HFs. Applying it to the Appell F_2 function with unit Pochhammer parameters reproduces eqn:F2reduction.
We conclude this section with the reduction formula of the triple variable Lauricella-Saran function F_S(1,1,1,1,1;2;x,y,z), which matches eqn:FSreduction.
§ MATHEMATICA IMPLEMENTATION
The method presented in Section <ref> is implemented in the form of the Mathematica <cit.> package MultiHypExp, which consists of two commands, one for series expansion and one for reduction: the former can be used to find the series expansion of a given MHF, and the latter to yield reduction formulae of MHFs. The usage of these commands is demonstrated in detail in Section <ref>. In this Section, we discuss how these commands are implemented in Mathematica.
§.§ The series expansion command
Let us start with the series expansion command. In Section <ref>, the procedure for finding the series expansion is divided into five steps.
* Step 1 and Step 3 : In the first step (Section <ref>), the type of the series expansion of the given MHF is determined from its Pochhammer structure, and the singular Pochhammer parameters (if any) are replaced by non-singular ones according to the prescription given in Step 3 (Section <ref>). This is achieved using Mathematica's pattern-matching and replacement-rule commands.
* Step 2 : The Taylor series of MHF is found in this step. At first, the PDE associated with the given MHF is obtained and brought to the Pfaffian form.
* When the command is called with the function-name form of the input (i.e., the first form in Section <ref>), the pre-stored Pfaff systems of the double variable Appell F_1, F_2, F_3 and Horn H_2 and of the triple variable Lauricella F_A, F_B, F_D, F_K, F_M, F_N, and F_S are used.
* When the command is called with an explicit series representation as input (i.e., the second form in Section <ref>), the PDEs associated with the MHF are calculated using the expressions given in eqn:annihilator1 and eqn:annihilator2. The system of PDEs is then brought to Pfaffian form using Gröbner basis calculations, which are carried out by the Mathematica package <cit.>.
Then the Pfaffian system is, wherever possible, further brought to the canonical form using the Mathematica package <cit.>, solved with the proper boundary condition, and expressed in terms of MPLs. Extensive use of the commands of <cit.> is made to integrate and simplify the expressions containing MPLs.
* Step 4 : The differential operator that relates the given MHF and the associated secondary function is found in this step. The <cit.> packages are used to find the necessary step-up/down operators for the Appell F_1, F_2, F_3 and Horn H_2 functions when the command is called with the function-name input. Otherwise, the operators are calculated from eqn:stepdownop and eqn:stepupop, and the reduction of the products of these step-up/down operators with respect to the Gröbner basis of the annihilators is performed using the Gröbner basis package mentioned above.
* Step 5 : Finally, the obtained differential operator is made to act on the Taylor series of the secondary function. As the Taylor expansion coefficients contain MPLs, the commands are used to calculate the derivatives and simplify those expressions.
The series expansion of other double variable Appell-Horn functions (i.e., F_4, H_1, H_3,H_4, H_6 and H_7) are calculated using their connection formulae to Appell F_1, F_2, F_3 or Horn H_2, which are given in Appendix <ref>.
§.§ The reduction command
It is possible to find the reduction formulae of MHFs as a byproduct of the methodology. For a given MHF, one can find a secondary MHF whose Taylor expansion can be readily calculated, and the differential operator associated with these two functions can be made to act on the series expansion of the secondary function to find the reduction of the given MHF. An example of such a reduction of the Appell F_2 function is given in Appendix <ref>. This procedure is implemented in the reduction command, which can find the reduction formulae of the double variable Appell F_1, F_2, F_3, F_4 and the triple variable Lauricella F_D and F_S functions. The required differential operators are procured using the <cit.> packages, and the Taylor expansions of the abovementioned MHFs from Appendix <ref> are stored in the package and are called accordingly.
We conclude this section with several comments.
* In principle, the procedure mentioned in <cit.> can be applied to find the series expansion of any MHF. However, the package is made with the objective of expressing the series coefficients in terms of well-known MPLs. This not only restricts the reach of the package to a small number of double- and triple-variable HFs, but also restricts the Pochhammer parameters to integers for most of the accessible HFs. Thus, the Appell F_1, F_2, F_3, and Horn H_2 and the triple variable Lauricella F_A, F_B, F_D, F_K, F_M, F_N, and F_S functions can be expanded in series around integer values of their Pochhammer parameters.
* Since the series expansions of the other double variable Appell-Horn functions (i.e., F_4, H_1, H_3, H_4, H_6 and H_7) are calculated using their connection formulae to Appell F_1, F_2, F_3 or Horn H_2, the parameter c in H_4 and d in H_7 must be of the form p/2 + qε, where p is an integer or half integer, so that the F_2 and H_2 functions appearing on the right-hand side of the corresponding connection formula (see Appendix <ref>) can be expanded in series around integer values of their parameters.
* To the best of our knowledge, there do not exist connection formulae of Appell F_4, Horn H_1 and H_5 with general parameters to other Appell-Horn functions. Thus, the package can find the series expansion of Appell F_4 and Horn H_1 with restricted Pochhammer parameters (see Appendix <ref>).
* The package can find, at most, the first six series coefficients. This limitation is inherited from the dependency used to handle the MPLs, which supports weights only up to five.
* In certain situations, a non-rational transformation of variables is required to bring the Pfaffian system to the canonical form. Such transformations are currently not supported, either by the package or by its dependencies.
* On an ordinary personal computer, the process of finding series expansion takes a few minutes for double variable HFs. However, the computational time significantly increases when triple variable HFs are considered, as most of the time is devoted to bringing the Pfaffian system into canonical form.
* The series expansion of a given MHF is not applicable on its singular locus.
* Since the series expansion of the Appell F_4 function with general values of Pochhammer parameters cannot be found using this package, the reduction command can only find the reduction formulae of F_4 for restricted parameters.
§ SUMMARY AND CONCLUSIONS
In this work, we have provided an implementation of an algorithm to perform series expansions of MHFs. The implemented algorithm is a slight modification of the one presented in <cit.>. The modifications are made for the requirement of expressing the series coefficients in terms of MPLs, rather than higher summation-fold MHFs. We have utilized publicly available packages as dependencies to build the package MultiHypExp, which, in its current version, is suitable for finding the series expansion of certain one-, two-, and three-variable HFs around integer values of their parameters. The restriction to (integer-valued) parameters arises because we choose to express the series coefficients in terms of MPLs, which offer immediate numerical evaluation using well-established computer programs. We have described the steps of the algorithm in detail and explained how the dependencies are employed to accomplish those steps.
We have provided examples of the series expansion of MHFs that typically appears in Feynman integral calculus.
The package also allows one to find reduction formulae, in terms of simpler functions, of certain MHFs with integer values of their Pochhammer parameters.
In some cases, MHFs need to be series-expanded around rational values of the parameters, which may require functions beyond MPLs. The current version of the package is not suitable for that. We plan to explore these possibilities in the near future.
We are indebted to Prof. Thomas Gehrmann and the Physik-Institut, Universität Zürich for supporting the present work and hospitality. We are also grateful to Prof. Daniel Wyler for his support throughout the course of this work. We thank Prof. Thomas Gehrmann, Dr. Robin Marzucca and Dr. Kay Schönwald for enlightening discussions and Prof. B. Ananthanarayan and Prof. Thomas Gehrmann for useful comments on the manuscript. This is a part of the author's doctoral work at CHEP, IISc.
§ DEFINITION OF MULTIPLE POLYLOGARITHMS (MPLS)
Multiple polylogarithms <cit.> are defined as
G(a_1,… , a_n;z) = ∫_0^z dt/t-a_1 G(a_2,…,a_n;t )
with G(z) = 1 and a_i and z are complex-valued variables. The vector a⃗ = (a_1, … ,a_n) is called the weight vector, and its length is called the weight.
These can also be defined as nested sums,
Li_m_1, …, m_k(z_1, …, z_k) =∑_0<n_1<n_2<⋯<n_kz_1^n_1 z_2^n_2⋯ z_k^n_k/n_1^m_1 n_2^m_2⋯ n_k^m_k
=∑_n_k=1^∞z_k^n_k/n_k^m_k∑_n_k-1=1^n_k-1…∑_n_1=1^n_2-1z_1^n_1/n_1^m_1,
which are related to each other as
G(0⃗_m_1-1, a_1, …, 0⃗_m_k-1, a_k ; z) =(-1)^k Li_m_k, …, m_1(a_k-1/a_k, …, a_1/a_2, z/a_1)
G(a⃗_n ; z) =1/n !log ^n(1-z/a),
G(0⃗_n-1, a ; z) =-Li_n(z/a),
where a_i ≠ 0. MPLs, GPLs, logarithms and polylogarithms are used interchangeably throughout the paper.
§ DEFINITIONS OF SOME MHFS
In this appendix, we list the definitions of some MHFs in two and three variables along with their domains of convergence. The standard references are <cit.>.
The four Appell functions are defined as
F_1 := F_1(a, b_1, b_2 ; c ; x, y)=∑_m, n=0^∞(a)_m+n(b_1)_m(b_2)_n/(c)_m+n x^m y^n/ m ! n !
F_2 := F_2(a, b_1, b_2 ; c_1, c_2 ; x, y)=∑_m, n=0^∞(a)_m+n(b_1)_m(b_2)_n/(c_1)_m(c_2)_n x^m y^n/ m ! n !
F_3 :=F_3(a_1, a_2, b_1, b_2 ; c ; x, y)=∑_m, n=0^∞(a_1)_m(a_2)_n(b_1)_m(b_2)_n/(c)_m+n x^m y^n/ m ! n !
F_4 :=F_4(a, b ; c_1, c_2 ; x, y)=∑_m, n=0^∞(a)_m+n(b)_m+n/(c_1)_m(c_2)_n x^m y^n/ m ! n !
their associated domains of convergence are
F_1 : |x|<1 ∧ |y|<1
F_2 : |x|+|y|<1
F_3 : |x|<1 ∧ |y|<1
F_4 : √(|x|)+√(|y|)<1
The ten double-variable Horn functions are defined below.
G_1 := G_1(a , b, b^' ; x, y)=∑_m, n=0^∞(a)_m+n(b)_n-m(b^')_m-nx^m y^n/m ! n !
G_2 := G_2(a, a^' , b, b^' ; x, y)=∑_m, n=0^∞(a)_m(a^')_n(b)_n-m(b^')_m-nx^m y^n/m ! n !
G_3 := G_3(a, a^' ; x, y)=∑_m, n=0^∞(a)_2 n-m(a^')_2 m-nx^m y^n/m ! n !
H_1 := H_1(a , b , c ; d ; x, y)=∑_m, n=0^∞(a)_m-n(b)_m+n(c)_n/(d)_mx^m y^n/m ! n !
H_2 := H_2(a , b , c , d ; e ; x, y)=∑_m, n=0^∞(a)_m-n(b)_m(c)_n(d)_n/(e)_mx^m y^n/m ! n !
H_3 := H_3(a , b ; c ; x, y)=∑_m, n=0^∞(a)_2 m+n(b)_n/(c)_m+nx^m y^n/m ! n !
H_4:= H_4(a , b ; c , d ; x, y)=∑_m, n=0^∞(a)_2 m+n(b)_n/(c)_m(d)_nx^m y^n/m ! n !
H_5:= H_5(a , b ; c ; x, y)=∑_m, n=0^∞(a)_2 m+n(b)_n-m/(c)_nx^m y^n/m ! n !
H_6:= H_6(a , b , c ; x, y)=∑_m, n=0^∞(a)_2 m-n(b)_n-m(c)_n x^m y^n/m ! n !
H_7:= H_7(a , b , c ; d ; x, y)=∑_m, n=0^∞(a)_2 m-n(b)_n(c)_n/(d)_mx^m y^n/m ! n !
The associated domains of convergence are
G_1 : |x|+|y|<1
G_2 : |x|<1 ∧ |y|<1
G_3 : Z_1 ∩ Z_2, Z_1 = |x|<Φ_1(|y|), Z_2 = |y|<Φ_1(|x|)
H_1 : | x| <1| y| <1 2 √(| x| | y| )+| y| <1
H_2 : | x| <1| y| <1|y|(1+|x|)<1
H_3 : | x| <1/4∧( |x|<1/4∧ |y|< 1/2 + 1/2√(1-4 |x|)) ∪(|y|≤1/2)
H_4 : 2√(|x|)+ |y|<1
H_5 : |x|<1/4∧ |y|< min{Ψ_1(|x|),Ψ_2(|x|)}
H_6 : | x| <1/4∧(|x| |y|^2 + |y|<1)
H_7 : | x| <1/4∧ |y|(1+ 2√(|x|))<1
where
Φ_1(x) = 2 √(3 x+1)+1/3 (√(3 x+1)+1)^2
Φ_2(x) = 2 √(1-3 x)-1/3 (√(1-3 x)-1)^2
Ψ_1(x) =2 (2-√(12 x+1))^2/9 (√(12 x+1)-1)
Ψ_2(x) = 2 (√(1-12 x)+2)^2/9 (√(1-12 x)+1)
These fourteen Appell-Horn functions form the set of complete, order two bivariate HFs.
The Kampé de Fériet functions are defined as
KdF^p:q;k_l:m;n[
[ (a_p) (b_q) (c_k); (α_l) (β_m) (γ_n) ] |
x,y
] := ∑_r=0^∞∑_s=0^∞∏_j_1=1^p(a_j_1)_r+s∏_j_2=1^q(b_j_2)_r ∏_j_3=1^k(c_j_3)_s/∏_j_4=1^l(α_j_4)_r+s∏_j_5=1^m(β_j_5)_r ∏_j_6=1^n(γ_j_6)_sx^r/r !y^s/s !
and
KdF^p:q;k_l:m;n[
[ (a_p) (b_q) (c_k); (α_l) (β_m) (γ_n) ] |
x,y
] := ∑_r=0^∞∑_s=0^∞∏_j_1=1^p(a_j_1)_r-s∏_j_2=1^q(b_j_2)_r ∏_j_3=1^k(c_j_3)_s/∏_j_4=1^l(α_j_4)_r-s∏_j_5=1^m(β_j_5)_r ∏_j_6=1^n(γ_j_6)_sx^r/r !y^s/s !
The domains of convergence of the KdF functions are,
* p+q< l+m+1, p+k<l+n+1, |x|<∞, |y|<∞
or,
* p+q=l+m+1, p+k=l+n+1 and
|x|^1/(p-l)+ |y|^1/(p-l) <1, if p>l,
max{|x|,|y|}<1, if p≤ l
The KdF functions of the second kind can be related to the KdF functions of the first kind by reshuffling the summation indices (see C.2. of <cit.>). Thus, we do not discuss the domains of convergence of these functions separately. However, the domain of convergence of a MHF can be derived using Horn's theorem, which is implemented in the <cit.> package for the bivariate HFs.
Next, we list the triple-variable HFs whose series expansion can be performed by the package.
F_A := F_A(a,b_1,b_2,b_3;c_1,c_2,c_3; x,y,z) = ∑_m,n,p = 0^∞(a)_m+n+p (b_1)_m (b_2)_n (b_3)_p /(c_1)_m (c_2)_n (c_3)_p x^m y^n z^p/m! n! p!
F_B := F_B(a_1,a_2,a_3,b_1,b_2,b_3;c; x,y,z) = ∑_m,n,p = 0^∞ (a_1)_m (a_2)_n (a_3)_p (b_1)_m (b_2)_n (b_3)_p /(c)_m+n+px^m y^n z^p/m! n! p!
F_D := F_D(a,b_1,b_2,b_3;c_1,c_2,c_3; x,y,z) = ∑_m,n,p = 0^∞(a)_m+n+p (b_1)_m (b_2)_n (b_3)_p /(c)_m+n+px^m y^n z^p/m! n! p!
F_K := F_K(a_1,a_2,b_1,b_2;c_1,c_2,c_3; x,y,z) = ∑_m,n,p = 0^∞(a_1)_m (a_2)_n+p(b_1)_m+p(b_2)_n /(c_1)_m (c_2)_n (c_3)_px^m y^n z^p/m! n! p!
F_M := F_M(a_1,a_2,b_1,b_2;c_1,c_2;x,y,z) = ∑_m,n,p = 0^∞(a_1)_m (a_2)_n+p(b_1)_m+p(b_2)_n/(c_1)_m (c_2)_n+px^m y^n z^p/m! n! p!
F_N := F_N(a_1,a_2,a_3,b_1,b_2;c_1,c_2;x,y,z) = ∑_m,n,p = 0^∞(a_1)_m (a_2)_n (a_3)_p (b_1)_m+p(b_2)_n/(c_1)_m (c_2)_n+px^m y^n z^p/m! n! p!
F_S := F_S(a_1,a_2,b_1,b_2,b_3;c;x,y,z) = ∑_m,n,p = 0^∞(a_1)_m (a_2)_n+p(b_1)_m(b_2)_n (b_2)_p/(c)_m+n+px^m y^n z^p/m! n! p!
The corresponding domains of convergence are
F_A : |x|+|y|+|z|<1
F_B : |x|<1 ∧|y|<1∧ |z|<1
F_D : |x|<1 ∧|y|<1∧|z|<1
F_K : |x|<1 ∧ |z|<1 ∧ |y|<(1-|x|)(1-|z|)
F_M : |x|<1 ∧ |y|+|z|<1
F_N : |x|+ |y|<1 ∧ |z|<1
F_S : |x|<1 ∧|y|<1∧ |z|<1
§ SOME RESULTS
We provide some series expansions of pure functions below. Note that each of the series coefficients is of weight zero if we assign the weight of ε^n to be -n. The weight of G(a_1,…,a_n;x) is n.
F_1(a, b_1 , b_2 ; 1+ c ; x, y) = 1+ ^2 [ -a b_1 G(0,1,x)-a b_2 G(0,1,y) ]
+^3 [a b_1 G(0,1,1,x) (a+b_1-c)
+a b_1 (c-b_2) G(0,0,1,x)+a b_2 c G(0,0,1,y)+a b_2 G(0,1,1,y) (a+b_2-c)
+a b_1 b_2 G(0,1,x) G(1,y)-a b_1 b_2 G(1,y) G(0,y,x)+a b_1 b_2 G(0,y,1,x)] + O(^4)
F_2(a, b_1 ,b_2 ;1+c_1 ,1+c_2;x,y) = 1+ ^2[ -a b_1 G(0,1,x)-a b_2 G(0,1,y) ]
+^3 [G(0,1,1,y) (a^2 b_2-a b_2 c_2+a b_2^2)+a b_1 (G(0,1,1,x) (a+b_1-b_2-c_1)
+b_2 G(0,1,x) G(1,y)
+b_2 G(0,1,1-y,x)+c_1 G(0,0,1,x))+a b_2 c_2 G(0,0,1,y)] + O(^4)
F_3(a_1 , a_2 , b_1 , b_2 ; 1+ c ; x , y) = 1 + ^2 [-a_1 b_1 G(0,1,x)-a_2 b_2 G(0,1,y)] + ^3 [a_1 b_1 c G(0,0,1,x)
+a_1 b_1 G(0,1,1,x) (a_1+b_1-c)+a_2 b_2 c G(0,0,1,y)+a_2 b_2 G(0,1,1,y) (a_2+b_2-c)]
+ O(^4)
F_A (a , b_1 , b_2 , b_3 , 1+ c_1 , 1+ c_2 , 1+ c_3 ,x,y,z)= 1+ ^2 [-a b_1 G(0,1,x)-a b_2 G(0,1,y)-a b_3 G(0,1,z)]
+ ^3 [a b_1 G(0,1,1,x) (a+b_1-b_2-b_3-c_1)+a b_1 c_1 G(0,0,1,x)+a b_2 G(0,1,1,y) (a+b_2-b_3-c_2)
+a b_2 c_2 G(0,0,1,y)+a b_3 G(0,1,1,z) (a+b_3-c_3)+a b_3 c_3 G(0,0,1,z)+a b_1 b_2 G(0,1,x) G(1,y)
+a b_1 b_2 G(0,1,1-y,x)+a b_1 b_3 G(0,1,x) G(1,z)+a b_1 b_3 G(0,1,1-z,x)+a b_3 b_2 G(0,1,y) G(1,z)
+a b_3 b_2 G(0,1,1-z,y)]+ O(^4)
F_B(a_1 , a_2 , a_3 , b_1 , b_2 , b_3 , 1+ c ; x , y, z) = 1+ ^2 [a_1 b_1 (-G(0,1,x))-a_2 b_2 G(0,1,y)-a_3 b_3 G(0,1,z)]
+^3[a_1 b_1 c G(0,0,1,x)+a_1 b_1 G(0,1,1,x) (a_1+b_1-c)+a_2 b_2 c G(0,0,1,y)
+a_2 b_2 G(0,1,1,y) (a_2+b_2-c)+a_3 b_3 c G(0,0,1,z)+a_3 b_3 G(0,1,1,z) (a_3+b_3-c)]
+ O(^4)
F_D(a , b_1 , b_2 , b_3 , 1+ c , x ,y ,z)=1+ ^2 [-a b_1 G(0,1,x)-a b_2 G(0,1,y)-a b_3 G(0,1,z)]
+ ^3[a b_1 G(0,1,1,x) (a+b_1-c)-a b_1 (b_2+b_3-c) G(0,0,1,x)+a b_2 G(0,1,1,y) (a+b_2-c)
+a b_2 (c-b_3) G(0,0,1,y)+a b_3 c G(0,0,1,z)+a b_3 G(0,1,1,z) (a+b_3-c)+a b_1 b_2 G(0,1,x) G(1,y)
-a b_1 b_2 G(1,y) G(0,y,x)+a b_1 b_2 G(0,y,1,x)+a b_1 b_3 G(0,1,x) G(1,z)-a b_1 b_3 G(1,z) G(0,z,x)
+a b_1 b_3 G(0,z,1,x)+a b_2 b_3 G(0,1,y) G(1,z)-a b_2 b_3 G(1,z) G(0,z,y)+a b_2 b_3 G(0,z,1,y)]
+ O(^4)
F_S( a_1 , a_2 , b_1 , b_2 , b_3 , 1+ c , x,y,z)= 1+ ^2 [a_1 b_1 (-G(0,1,x))-a_2 b_2 G(0,1,y)-a_2 b_3 G(0,1,z)]
+ ^3[a_1 b_1 c G(0,0,1,x)+a_1 b_1 G(0,1,1,x) (a_1+b_1-c)+a_2 b_2 G(0,1,1,y) (a_2+b_2-c)
+a_2 b_2 (c-b_3) G(0,0,1,y)+a_2 b_3 c G(0,0,1,z)+a_2 b_3 G(0,1,1,z) (a_2+b_3-c)
+a_2 b_2 b_3 G(0,1,y) G(1,z)-a_2 b_2 b_3 G(1,z) G(0,z,y)+a_2 b_2 b_3 G(0,z,1,y)] + O(^4)
F_K(a_1 , a_2 ,b_1 ,b_2 ;1+ c_1 ,1+ c_2 ,1+ c_3 ; x,y,z) = 1+ ^2 [a_1 b_1 (-G(0,1,x))-a_2 b_2 G(0,1,y)-a_2 b_1 G(0,1,z)]
+ ^3[a_1 b_1 G(0,1,1,x) (a_1-a_2+b_1-c_1)+a_1 b_1 c_1 G(0,0,1,x)+a_2 b_2 G(0,1,1,y) (a_2-b_1+b_2-c_2)
+a_2 b_2 c_2 G(0,0,1,y)+a_2 b_1 G(0,1,1,z) (a_2+b_1-c_3)+a_2 b_1 c_3 G(0,0,1,z)+a_1 a_2 b_1 G(0,1,x) G(1,z)
+a_1 a_2 b_1 G(0,1,1-z,x)+a_2 b_2 b_1 G(0,1,y) G(1,z)+a_2 b_2 b_1 G(0,1,1-z,y)]+ O(^4)
F_M(a_1 , a_2 ,b_1 ,b_2 ,1+ c_1 ,1+ c_2 ;x,y,z) = 1+ ^2[a_1 b_1 (-G(0,1,x))-a_2 b_2 G(0,1,y)-a_2 b_1 G(0,1,z)]
+ ^3[a_1 b_1 G(0,1,1,x) (a_1-a_2+b_1-c_1)+a_1 b_1 c_1 G(0,0,1,x)+a_2 b_2 G(0,1,1,y) (a_2+b_2-c_2)
+a_2 b_2 (c_2-b_1) G(0,0,1,y)+a_2 b_1 G(0,1,1,z) (a_2+b_1-c_2)+a_2 b_1 c_2 G(0,0,1,z)+a_1 a_2 b_1 G(0,1,x) G(1,z)
+a_1 a_2 b_1 G(0,1,1-z,x)+a_2 b_2 b_1 G(0,1,y) G(1,z)-a_2 b_2 b_1 G(1,z) G(0,z,y)+a_2 b_2 b_1 G(0,z,1,y)]
O(^4)
F_N(a_1 , a_2 ,a_3 ,b_1 ,b_2 ;1+ c_1 ,1 + c_2 ; x,y,z) = 1+ ^2[a_1 b_1 (-G(0,1,x))-a_2 b_2 G(0,1,y)-a_3 b_1 G(0,1,z)]
+^3[a_1 b_1 G(0,1,1,x) (a_1-a_3+b_1-c_1)+a_1 b_1 c_1 G(0,0,1,x)+a_2 b_2 G(0,1,1,y) (a_2+b_2-c_2)
+a_2 b_2 c_2 G(0,0,1,y)+a_3 b_1 G(0,1,1,z) (a_3+b_1-c_2)+a_3 b_1 c_2 G(0,0,1,z)+a_1 a_3 b_1 G(0,1,x) G(1,z)
+a_1 a_3 b_1 G(0,1,1-z,x)] + O(^4)
§ TRANSFORMATION THEORY OF ORDER TWO COMPLETE HFS
The connection formulae between the Horn's functions are well studied in the literature <cit.>. We present some of the connection formulae of Appell-Horn functions that are used to compute their series expansions.
G_1(a,b,c,x,y)=(1/x+y+1)^a
×
F_2 (-b-c+1,a,a,1-b,1-c,-√(1-4 x y)+2 x+1/2 (x+y+1),-√(1-4 x y)+2 y+1/2 (x+y+1))
G_2(a,b,c,d,x,y)= (x+1)^-a (y+1)^-bF_2 (-c-d+1,a,b,1-c,1-d,x/x+1,y/y+1)
G_3(a,b,x(1-y)/(1-x)^2,y(1-x)/(1-y)^2)=(1-y)^a(1-x)^b G_1(a+b,a,b,x,y)
H_3(a,b,c,x,y) =(1-4 x)^-a/2(-4 x+√(1-4 x)+1/2-8 x)^1-c
× F_3 (c-a,a,a-c+1,b,c,4 x+√(1-4 x)-1/8 x-2,y-√(1-4 x) y/2 x)
H_4(a,b,c,d,x,y) = (1-2 √(x)/2 √(x)+1)^a F_2(a,c-1/2,b,2 c-1,d,4 √(x)/2 √(x)+1,y/2 √(x)+1)
H_6(a,b,c,x,y)= (4 x+1)^-a/2(4 x+√(4 x+1)+1/8 x+2)^b
× H_2 (b,c,a+b,-a-b+1,1-a,-(4 x+√(4 x+1)+1) y/2 √(4 x+1),-4 x+√(4 x+1)-1/8 x+2)
H_7(a,b,c,d,x,y) = (2 √(x)+1)^-a H_2 ( a,d-1/2,b,c,2 d-1,4 √(x)/2 √(x)+1,2 √(x) y+y )
It is worth mentioning that the correct form of the connection formula between G_3 and G_1 is found in <cit.>. This expression is brought to the form G_3(a,b;x,y)= … G_1(…) for computation of the series expansion of the former series, which is not presented here due to its long length.
To the best of our knowledge, no connection formula exists for the functions H_1,H_5 and F_4 having general parameters to other Appell-Horn functions. However, the connection formula of H_1 and F_4 with some restriction of parameters can be found in <cit.> and <cit.> respectively, which we present below.
H_1(d-c,b,c,d,x,y)= 2^-b(-√(-4 x y+y^2+2 y+1)+y-1/(x-1) y)^b
× H_2 (d-c,b,b,c,d,-√(-4 x y+y^2+2 y+1)+y+1/2 y,-√(-4 x y+y^2+2 y+1)-y+1/2 (x-1))
H_1(a,b,c,1/2 (a+b+1),x,y)= (-2 w-2 √((w-1) w)+1)^b-a/2(-2 w+2 √((w-1) w)+1)^1/2 (-a-b)
× F_2(b,a+b/2,c,a+b,1-a,4 √((w-1) w)/2 √((w-1) w)+√((1-2 w)^2),-((-2 w-2 √((w-1) w)+1) z))
The relations to the Appell F_4 are
F_4(a,b;c,b,x,y) = (1-X)^a (1-Y)^a F_1 ( a,c-b,a-c+1;c, X ,XY )
where,
X = √((x+y-1)^2-4 x y)+x+y-1/2 y
Y = √((x+y-1)^2-4 x y)+x+y-1/2 x
F_4(a,b;c,a-c+1,x,y) = F_2 ( a,b,b;c,a-c+1, X ,Y )
where,
X = 1/2(-√((-x+y-1)^2-4 x)+x-y+1)
Y = 1/2(-√((-x+y-1)^2-4 x)-x+y+1)
F_4(a,b;c,a+b-c+1,x,y) = _2F_1(a,b;c;X) _2F_1(a,b;a+b-c+1;Y)
where,
X = 1/2(x-y-√((-x+y-1)^2-4 x)+1)
Y = 1/2(-x+y-√((-x+y-1)^2-4 x)+1)
§ REDUCTION FORMULAE OF MHFS
It is possible to write a MHF with positive integer values of the Pochhammer parameters in terms of simpler functions. These useful reduction formulae have applications in mathematics and physics.
As pointed out in <cit.>, some reduction formulae of Appell and Lauricella-Saran functions can be immediately derived using the results of their series expansions (Appendix <ref>) and the differential reduction formulae from <cit.> packages. For instance,
F_2(1+ε,1+ε,1+ε;2+ε,2+ε;x,y) =
[(ϵ +1)/(x ϵ ^2) θ_x + (ϵ +1)/(y ϵ ^2) θ_y + (ϵ +1) (x+y-1)/(x y ϵ ^3) θ_y θ_x] ∙ F_2(ε,ε,ε;1+ε,1+ε;x,y)
Using the series expansion of the F_2 from Appendix <ref>, we find the right-hand side to be
F_2(1+ε,1+ε,1+ε;2+ε,2+ε;x,y) =
((x+y-1) (G(1;x)-G(1-y;x))/x y-G(1;x)/x-G(1;y)/y)+O(ϵ ^1)
Finally, setting ε→ 0, the reduction formula of the Appell F_2 reads
F_2(1,1,1;2,2;x,y) =(x+y-1) (G(1,x)-G(1-y,x))/x y-G(1,x)/x-G(1,y)/y
= -(1-x) log (1-x)/x y-(x+y-1) log(1-x/1-y)/x y-log (1-y)/y
The above result differs from the expression obtained in Eq. (A.9) of <cit.>. However, we found that eqn:F2reduction is numerically consistent.
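The numerical consistency is easy to reproduce from the defining series alone; the truncation order and the sample point below are arbitrary choices inside |x|+|y|<1.

  (* Compare the truncated double series of F2(1,1,1;2,2;x,y) with the closed form above. *)
  f2unit[x_, y_, kmax_] := Sum[(m + n)!/((m + 1) (n + 1) m! n!) x^m y^n, {m, 0, kmax}, {n, 0, kmax}];
  x0 = 1/5; y0 = 3/10;
  closed = -(1 - x0) Log[1 - x0]/(x0 y0) - (x0 + y0 - 1) Log[(1 - x0)/(1 - y0)]/(x0 y0) - Log[1 - y0]/y0;
  {N[f2unit[x0, y0, 40], 15], N[closed, 15]}    (* the two values agree to the truncation accuracy *)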
Similarly, reduction formulae of the Appell F_2 with other sets of positive integer Pochhammer parameters can easily be obtained by utilizing the package and eqn:F2reduction. As an example,
F_2(3,2,1;3,2;x,y) = (-1/x-1-1/x-1θ_x-1/x-1θ_y -1/(x-1) xθ_y θ_x) ∙ F_2(1,1,1;2,2;x,y)
Thus,
F_2(3,2,1;3,2;x,y) = -log (1-x)/x^2 y+log(x+y-1/y-1)/x^2 y+1/(x-1) x (x+y-1)
which matches with the result from literature <cit.>.
We provide a short list of reduction formulae of Appell F_1, F_3, F_4, Lauricella - Saran F_D and F_S functions below in terms of MPLs.
F_1(1,1,1;2;x,y) = G(1;y)-G(1;x)/x-y
F_3(1,1,1,1;2;x,y) =G(1;x)+G(1;y)/x y-x-y
F_4(1,1;1,1;x,y) = 1/√(x^2-2 x y-2 x+y^2-2 y+1)
F_D(1,1,1,1;2;x,y,z) =-x G(1;x)/(x-y) (x-z)+y G(1;y)/(x-y) (y-z)+z G(1;z)/(x-z) (z-y)
F_S(1,1,1,1,1;2;x,y,z) =-x G(1;x)/(x (y-1)-y) (x (z-1)-z)+y G(1;y)/(x (y-1)-y) (y-z)
-z G(1;z)/(x (z-1)-z) (y-z)
F_1(1,1,1;2;x,y) = log (1-y)-log (1-x)/x-y
F_3(1,1,1,1;2;x,y) = -log (1-x)/- x y +x+y-log (1-y)/-x y+x+y
F_4(1,1;1,1;x,y) = 1/√(x^2-2 x y-2 x+y^2-2 y+1)
F_D(1,1,1,1;2;x,y,z) =-x log (1-x)/(x-y) (x-z)+y log (1-y)/(x-y) (y-z)+z log (1-z)/(x-z) (z-y)
F_S(1,1,1,1,1;2;x,y,z) =-x log (1-x)/(x (y-1)-y) (x (z-1)-z)+y log (1-y)/(x (y-1)-y) (y-z)
-z log (1-z)/(x (z-1)-z) (y-z)
These expressions can be written in terms of ordinary logarithms, since
G(1;x) = -Li_1(x) = log(1-x)
Making use of these packages and the expressions from Appendix <ref>, a large number of reduction formulae can easily be derived. This procedure of obtaining reduction formulae is encoded in the reduction command of the presented package.
|
http://arxiv.org/abs/2306.01891v1
|
20230602195213
|
DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
|
[
"Abanob Soliman",
"Fabien Bonardi",
"Désiré Sidibé",
"Samia Bouchafa"
] |
cs.CV
|
[
"cs.CV",
"cs.RO",
"eess.IV",
"eess.SP"
] |
DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
Abanob Soliman^0000-0003-4956-8580, Fabien Bonardi^0000-0002-3555-7306, Désiré Sidibé^0000-0002-5843-7139, and Samia Bouchafa^0000-0002-2860-8128
All authors are with Université Paris-Saclay, Univ Evry, IBISC Laboratory, 91020, Evry, France. Correspondence: [email protected]
July 31, 2023
==========================================================================================================================================================================================================================================================================================================
This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. We employ deep learning-based feature extraction and description for estimation to enhance robustness further. We also introduce an end-to-end parallel tracking and mapping optimization layer complemented by a simple loop-closure algorithm for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior performance compared to state-of-the-art methods in terms of robustness and accuracy in adverse conditions. Our implementation's research-based Python API is publicly available on GitHub for further research and development: <https://github.com/AbanobSoliman/DH-PTAM>.
Stereo, events, SuperPoint, R2D2, SLAM.
§ INTRODUCTION
Sensor fusion <cit.> combines data from multiple sensors to improve a system's accuracy, reliability, and robustness. It can also reduce computational costs by eliminating the need for redundant sensor data. Different types of sensors can be fused, such as cameras, lidars, radars, and ultrasonics. The algorithm used for fusion can vary, and it typically requires online calibration to ensure accurate and consistent data.
Visual Odometry (VO) is a method that utilizes sensor fusion to estimate the motion of a camera by analyzing the changes in visual features between consecutive frames. Still, it faces challenges, such as difficulties in feature matching when the scene has little texture, the need for a robust feature detector and descriptor, and the problems of relative scale ambiguity and drift <cit.>. Scale ambiguity refers to the problem of determining the actual scale of the scene. In contrast, drift refers to the accumulation of errors over time that causes the estimated positions to deviate from the true positions. These challenges and limitations must be considered when applying frame-based visual odometry in practical applications <cit.>.
An event camera <cit.>, known as an asynchronous or dynamic vision sensor (DVS), operates on a fundamentally different concept than traditional frame-based cameras. Instead of capturing frames at a constant rate, event cameras output a stream of "events" that indicate the brightness changes in each pixel. This allows event cameras to operate at high speed, in very low-light conditions, and be more resistant to motion blur <cit.>. The event-based nature of these cameras also makes them highly suitable for tasks that involve fast-moving objects or scenes with high dynamic range. These characteristics make them an excellent complementary sensor to frame-based visual odometry in adverse conditions such as fast motion, high dynamic range, and low-light environments, where traditional cameras may struggle.
Deep learning-based features are more robust than traditional methods <cit.>, as they can learn from large amounts of data and generalize well to unseen data. They are also more invariant to changes in viewpoint and lighting, making them suitable for real-world applications. Recently, pre-trained models have been widely adopted in computer vision and have achieved state-of-the-art performance in object detection, semantic segmentation, and image classification tasks.
In this paper, we propose a deep hybrid stereo events-frames parallel tracking and mapping system that significantly improves simultaneous localization and mapping accuracy and robustness in dynamic environments. This system combines the advantages of stereo RGB and event cameras, which can capture visual information at high temporal resolution. The use of deep learning techniques in this system allows for the extraction of robust features from the stereo hybrid image and event frames, which improves the accuracy of the feature-matching process and the estimation of the camera pose.
Our main contributions can be summarized as follows:
* We propose an end-to-end parallel tracking and mapping (PTAM) approach based on a novel spatio-temporal synchronization of stereo visual frames with event streams (see Fig. <ref>).
* We propose a simple mid-level feature loop-closure algorithm for prompt SLAM behavior based on a learning-based feature description method to maximize robustness.
* DH-PTAM's effectiveness is evaluated in both stereo event-aided and image-based visual SLAM modes, achieving improved accuracy when incorporating event information, as shown in an ablation study run on the CPU versus the GPU of a consumer-grade laptop.
This paper is organized as follows: Section <ref> gives a brief overview of state-of-the-art SLAM methods. Section <ref> provides a detailed overview of the proposed method and offers insights into its novel parts. Section <ref> comprehensively evaluates the algorithm on the recent VECtor <cit.> and TUM-VIE <cit.> benchmarks and discusses its limitations. Section <ref> summarizes the experiments' main observations, the proposed method's behavioral aspects, and the starting points for future work.
§ RELATED WORK
§.§ Conventional visual-SLAM
Simultaneous Localization and Mapping (SLAM) problem has been widely studied in the literature <cit.>, and various techniques have been proposed to solve it. In recent years, learning-based features extraction and description methods <cit.>, and deep learning based approaches <cit.> have been applied to improve SLAM robustness.
One of the most popular SLAM techniques is the filter-based SLAM using an extended Kalman filter (EKF) <cit.>, or a particle filter <cit.>. These filters use probabilistic frameworks to estimate the robot's pose and map. They can handle non-linearities and uncertainties in the system, making them useful for large-scale and highly dynamic environments. Filter-based SLAM has been widely used in applications <cit.> such as mobile robots, UAVs, and autonomous vehicles.
Another important class of SLAM is graph-based SLAM <cit.>, which uses a factor graph data structure to represent the robot's poses and the map. Graph-based SLAM requires Sparse Bundle Adjustment (SBA), which uses a non-linear least squares optimization to estimate the robot's poses and a graph to represent the map. These methods are robust to changes in lighting and viewpoint, making them well-suited for real-world applications. Some popular graph-based SLAM methods include ORB-SLAM <cit.>, Basalt <cit.>, and VINS-Fusion <cit.>.
Loop-closure detection is a fundamental approach to minimize drifts in visual-SLAM, as it allows a system to recognize a previously visited location. Two common approaches to loop-closure detection are mid-level features <cit.> and bag-of-words <cit.> representations. Mid-level features are more abstract than low-level features, such as edges and corners, but are not as high-level as object recognition. Deep learning descriptors <cit.> can be considered mid-level features as they can extract higher-level information from raw data compared to low-level features, such as pixel values, but are not as high-level as features directly related to the task at hand, such as object labels.
§.§ Event-aided visual-SLAM
Event-based VO is an emerging form of localization solution that uses event-based cameras to generate measurements of the environment. While the number of sampled frames limits traditional SLAM, event-based SLAM provides high temporal resolution by generating an abundance of measurements, allowing for improved 3D localization and 6D pose estimation. Indirect methods, such as frame-based approaches, extract keypoints from the input data in the front-end. This front-end stage typically involves detecting and matching salient features in the sensory data, such as images or event streams. These keypoints are then passed to the back-end, where state estimation algorithms are used to estimate the robot's pose and build a consistent map of the environment.
Conversely, direct methods attempt to process all available sensor data, such as individual pixel intensity changes in images (events) or all RGB frame pixels, without any intermediate filtering or feature extraction in the front-end, relying on the back-end to handle the entire data.
Event-aided systems leverage the high-quality representations that events can produce after processing, especially in dynamic and dimmed environments where RGB camera frames fail. Some of the well-known event representations are event image (EI) <cit.>, Time Surfaces (TS) <cit.>, Event Spike Tensor (EST) <cit.>, and recently Event 3-Channel Tensor (E3CT) <cit.>. Others <cit.> build the front-end on an Event Generation Model (EGM) <cit.> or construct motion-compensated event frames (MEF) <cit.> aided by a gyroscope. Towards a traditional frame reconstruction from events, <cit.> proposes a Log Intensity Reconstruction (LIR), a model-based method, and <cit.> proposes Spade-e2vid, a learning-based method.
Table <ref> compares the latest event-based and event-aided VO solutions concerning the sensor setup, events pre-processing layer (EPL), direct or indirect event processing, and the loop-closure capability to minimize visual drifts.
§ METHODOLOGY
§.§ System Overview
Fig. <ref> illustrates the main components and the process of DH-PTAM. The system establishes a global reference frame based on the camera position in the initial frame. A preliminary map is created by identifying and triangulating distinctive points in the first stereo pair of images. For subsequent frames, the tracking thread calculates the 6D pose of each stereo frame by minimizing the discrepancy between the projected map points and their matches. The system chooses a subset of keyframes used in another thread to update the map at a slower pace.
Map points are derived from the stereo matches of each keyframe and added to the map. The mapping thread continuously reduces the local re-projection discrepancy by adjusting all map points and stereo poses using Bundle Adjustment. A pose graph preserves the global consistency of the map, which is a shared resource among the tracking, mapping, and loop-closing threads. Point correspondences are actively searched between keyframes to strengthen the constraints of the pose graph optimization smoothing process.
Notations. The odometry state representation comprises the 3D points X_w^k and a 7-component increment vector μ∈𝔰𝔢(3), which encodes the current pose of the left fusion frame at time k:
μ^k=[ δ x δ y δ z δ q_x δ q_y δ q_z δ q_w]^⊤ ,
where [δ x δ y δ z]^⊤ is the incremental translation vector and [δ q_x δ q_y δ q_z δ q_w]^⊤ is the incremental quaternion vector.
§.§ Spatio-temporal Synchronization
Our spatio-temporal synchronization approach (see Fig. <ref>) considers the general case of global shutter cameras where the exposure time t_exp_0,1 is known. We adopt a constant-duration event accumulation window Δt^k_0,1 for each frame k in our spatio-temporal events-frames synchronization method.
As soon as stereo RGB camera frames are received at timestamps t_C_0,1, we calculate the fusion frames timestamps assuming the hardware synchronization of stereo RGB images and stereo event streams, using:
t_f_0,1=t_C_0,1+t_exp_0,1/2 , Δt^k_0,1=t_f_0,1^k-t_f_0,1^k-1 ,
where t_C_0 is the selected stereo keyframe timestamp.
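The following minimal sketch illustrates this timestamp alignment; the function and variable names are illustrative assumptions, and the exposure times and frame timestamps are taken to be available from the (hardware-synchronized) camera drivers.

```python
# Mid-exposure fusion timestamp for a stereo RGB frame, and the resulting
# constant-time accumulation window for the associated event slice.
def fusion_timestamp(t_cam, t_exposure):
    return t_cam + 0.5 * t_exposure

def accumulation_window(t_fusion_curr, t_fusion_prev):
    # events with timestamps in (t_fusion_prev, t_fusion_curr] are accumulated
    return t_fusion_curr - t_fusion_prev
```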
§.§ Events-Frames Hybridization Approach
One of the main advantages of our front-end fusion modeling is that it does not rely on any online probabilistic photo-metric matching or alignment approach using filters or cost functions, and that it considers both event polarities p∈{+1, -1}. Hence, the computational load of our method lies mainly in the PTAM modules of the optimization-based back-end. The E3CT events pre-processing layer is adopted and modeled as two consecutive filtering kernel convolutions on the event volume 𝒱_0(x,y,t) of temporal width Δt^k_0,1. The first kernel, which filters the time-decaying events in the volume, is an α-exponential time-decay kernel modeled as:
𝒱_1(x,y,t) ≐exp(-α(𝒱_0(x,y,t)-η/2/η/6)^2) ,
where α=0.5 and the decay rate η=30 ms for our model. A trilinear voting kernel then stacks the events into the three-channel tensor, so that each event contributes to two consecutive channels depending on its position relative to a vertex of the trilinear kernel. An event near the top contributes a higher weight to the current channel and a lower weight to the neighboring ones. These contribution weights of the three channels can be interpreted as percentages of an R-G-B color map; hence, the E3CT can be considered a synthetic RGB frame of events. The trilinear voting kernel can be modeled as follows:
𝒱_2(x,y,t_i) ≐max(0, 1-|𝒱_1(x,y,t_i)/δt|) ,
where δt is the size of temporal bin i, as discussed in <cit.>.
After applying the trilinear temporal voting kernel on the exponential-decay time surface, we stack the 3-channel tensor temporal bins together, resulting in a synthetically colored 2D frame called the Event 3-Channel Tensor (E3CT). In Fig. <ref>, we can observe that the constructed synthetic colors are always consistent, meaning that the stereo left and right constructed E3CTs have identical colors for the same scene.
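A simplified numpy sketch of this construction is given below. It is a loose reading of the two kernels above (decay weighting followed by trilinear temporal voting), not the reference E3CT implementation; the event tuple layout (x, y, t, p) and all names are assumptions.

```python
import numpy as np

def build_e3ct(events, height, width, alpha=0.5, eta=0.030):
    """events: (N, 4) array of (x, y, t, p) inside one accumulation window."""
    x, y, t = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 2]
    t = t - t.min()                                   # time relative to window start
    # alpha-exponential time-decay weighting of each event
    w_decay = np.exp(-alpha * ((t - eta / 2) / (eta / 6)) ** 2)
    # trilinear voting of each event into (at most) two of the three temporal channels
    dt_bin = max(t.max(), 1e-9) / 2                   # spacing of the 3 channel centres
    e3ct = np.zeros((height, width, 3), dtype=np.float32)
    for c in range(3):
        vote = np.maximum(0.0, 1.0 - np.abs(t - c * dt_bin) / dt_bin)
        np.add.at(e3ct[:, :, c], (y, x), w_decay * vote)
    return e3ct / max(e3ct.max(), 1e-9)               # normalise for display / fusion
```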
Conventional frame-based post-processing operations, such as adaptive thresholding, contrast stretching, color correction and balancing, and denoising, can be applied to the constructed E3CTs. We consider a fully calibrated stereo RGB and event camera stack, as represented in Fig. <ref>, so that the rigid-body transformations 𝒯_cd_0,1=[R_cd_0,1|t_cd_0,1]_3×4 and the camera intrinsic parameters 𝒦_c_0,1,𝒦_d_0,1 are known.
Given that the same post-processing operations are applied to the current stereo E3CT frames, the consecutive 2D-to-3D-to-2D inverse-forward projections of the pixels on the E3CT frames P^h_d_0,1 to the RGB camera frames P^h_d∈ c_0,1 can be performed as follows:
P^h_d∈ c_0,1≈𝒦_c_0,1 𝒯_cd_0,1 [(𝒦_d_0,1)^-1 P^h_d_0,1 1]^⊤ + δP^h_align ,
where (.)^h denotes the pixel location in homogeneous coordinates. The term δP^h_align denotes the pixel-location alignment correction factor between the RGB and event frames (see Fig. <ref>), such that the same 3D world point X^h_w_0,1 corresponds exactly to the pixel locations P^h_d∈ c_0,1 and P^h_c_0,1. This alignment term is observed to be constant for the same sensor rig with non-varying intrinsic and extrinsic parameters. δP^h_align can be estimated only once through an offline optimization over a selected number of frames with high-confidence feature matches (the more frames, the more accurate the estimate), and its value is given in Section <ref> for both the VECtor and TUM-VIE sequences.
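As a sketch of this inverse-forward projection, the snippet below back-projects an E3CT pixel, transforms it rigidly, and re-projects it into the RGB frame. The explicit depth value is an assumption made for illustration (the equation above absorbs it into the approximation sign); K_d, K_c, R_cd, t_cd come from the offline calibration and delta_align from the one-time alignment optimization.

```python
import numpy as np

def event_pixel_to_rgb(p_d, depth, K_d, K_c, R_cd, t_cd, delta_align):
    ray = np.linalg.inv(K_d) @ np.array([p_d[0], p_d[1], 1.0])  # inverse projection
    X_d = depth * ray                                           # 3D point, event camera frame
    X_c = R_cd @ X_d + t_cd                                     # rigid transform to RGB camera
    uvw = K_c @ X_c                                             # forward projection
    return uvw[:2] / uvw[2] + delta_align                       # constant alignment correction
```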
Finally, the fusion function (and frame) f(.) performs a temporal cross-dissolve (linear blending) between both the left (D_0, C_0) and right (D_1, C_1) E3CTs and RGB camera frames, respectively, and is formulated as:
f_0,1(C_0,1,D_0,1) = (1-β)*C_0,1 + β*D_0,1 ,
where β∈[0,1] is the E3CT contribution weight in the current fusion frame. The β value is dynamic and depends on the scene lighting and texture conditions. It is set to a high value, β=max(C_0,1/C_0,1^max, 1-C_0,1/C_0,1^max), when the RGB camera frame fails to detect features due to adverse conditions and low-textured scenes; this is the DVS-biased fusion mode. When the RGB camera frames can detect reliable scene features under good lighting and sufficient texture, β is set to a low value, β=min(C_0,1/C_0,1^max, 1-C_0,1/C_0,1^max), to reduce the number of extracted features and keep the back-end processing complexity and latency in reasonable ranges; this is the APS-biased fusion mode.
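A minimal sketch of this blending and mode selection is given below; interpreting C as the mean frame intensity and switching on a boolean feature-availability test are simplifying assumptions.

```python
import numpy as np

def fuse(rgb_frame, e3ct_frame, rgb_features_ok, c_max=255.0):
    c = rgb_frame.mean() / c_max                 # normalised brightness statistic
    if rgb_features_ok:
        beta = min(c, 1.0 - c)                   # APS-biased fusion mode
    else:
        beta = max(c, 1.0 - c)                   # DVS-biased fusion mode
    return (1.0 - beta) * rgb_frame + beta * e3ct_frame, beta
```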
Dynamic scenes with challenging and adverse conditions can easily trigger rapid switching between these two fusion modes during long-term navigation. This causes a critical problem for feature tracking with conventional low-level feature detectors such as ORB, SIFT, SURF, BRIEF, and FAST. Accordingly, applying mid-level feature detectors based on learning-based architectures can solve this fusion-mode alternation problem. We employ the learning-based feature extractors and descriptors <cit.> for their high robustness and feature-detection speed. In Fig. <ref>, we notice the stable tracking of the learning-based features on the hybrid fusion frames in this high dynamic range scenario.
§.§ Optimization-based State Estimation
As our work is based on the original S-PTAM system, all the optimization Jacobians mentioned in this section can be found, with detailed proofs, in <cit.>. All objective functions are minimized with the Levenberg-Marquardt algorithm implemented in the g^2o optimization library. We employ the Huber loss ρ(.) for outlier rejection.
System bootstrapping. The first stereo fusion frame pair is considered a keyframe. A triangulation of the collected feature matches on the left and right fusion frames is then performed to initialize the map.
Pose tracking thread. Each map point is projected into the viewing frustum of the anticipated stereo position, and we then search nearby for its match; such a projection requires a valid prediction of the current pose. Map points and features are matched by comparing their descriptors: the L_2 norm is computed between the SuperPoint and R2D2 descriptors. A match is valid if the distance falls below a certain threshold; otherwise, it is ignored. Pose refinement is then applied to recover the current pose from the previous one using the following objective function:
L^refine = argmin_μ∑_i∈ Nρ(|| J_i^k μ_k - Δz_i(μ_k-1, X_w^i) ||^2) ,
where N={z_1 , … , z_M} and M is the number of matched measurements. The measurement z=[u,v]^⊤ is the 2D pixel location of the forward projection of a 3D map point X_w using the pinhole projection function π(X_w^i) = 𝒦_c𝒯_i^f_0 w X_w^i. J_i^k=∂Δz_i(μ)/∂μ_k is the re-projection error's Jacobian with respect to the current odometry state vector. Δz is the re-projection error of a matched set of measurements on the current stereo fusion frames k and is defined as:
Δz_i(μ, X_w) = z_i - π(exp(μ)𝒯_k-1^f_0 w X_w^i) ,
where the 3D point cloud X_w is considered a constant optimization parameter and is not updated in the tracking thread, and 𝒯_k-1^f_0w=exp(μ) ∈ SE(3), with exp(.) the exponential map of the Lie group applied to the previous increment state vector. A frame is selected as a keyframe, after its pose has been estimated, if the number of observed points is less than 90% of the points recorded in the previous keyframe. New map points are then created by triangulating the stereo pair's remaining unmatched features, and the keyframe is placed in the local mapping thread for processing.
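The sketch below illustrates this kind of Huber-robust pose refinement in Python. It is not the g^2o/Levenberg-Marquardt implementation used by DH-PTAM: the 6-vector rotation-vector parameterization, scipy's 'huber' loss, and all variable names (K, T_prev, pts_w, obs_px) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, T, X_w):
    """Pinhole projection of Nx3 world points with a 4x4 world-to-camera pose T."""
    X_c = (T[:3, :3] @ X_w.T).T + T[:3, 3]
    uvw = (K @ X_c.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def refine_pose(K, T_prev, pts_w, obs_px):
    """Refine a local increment mu = [t, rotvec] so that the re-projections of
    the matched map points pts_w agree with their 2D measurements obs_px."""
    def residuals(mu):
        T_inc = np.eye(4)
        T_inc[:3, :3] = Rotation.from_rotvec(mu[3:]).as_matrix()
        T_inc[:3, 3] = mu[:3]
        T_cur = T_inc @ T_prev               # exp(mu) composed with the previous pose
        return (obs_px - project(K, T_cur, pts_w)).ravel()

    sol = least_squares(residuals, np.zeros(6), loss="huber", f_scale=1.0)
    T_inc = np.eye(4)
    T_inc[:3, :3] = Rotation.from_rotvec(sol.x[3:]).as_matrix()
    T_inc[:3, 3] = sol.x[:3]
    return T_inc @ T_prev
```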
Mapping thread. We apply Bundle Adjustment (BA) to fine-tune the camera poses (keyframe map) and the 3D points (point cloud map). Local Bundle Adjustment minimizes the re-projection error of every point in every keyframe f^k_0. Given an initial set of N keyframe poses {𝒯^f_0w_1, … , 𝒯^f_0w_N}, an initial set of M 3D points X_w^i, and measurement sets S∈{S_1, … , S_N}, where each set comprises the measurements z_i^k of the i^th point in the k^th keyframe, the local BA is performed using the following objective function on all keyframes within a pre-defined sliding window of size N:
L^BA = argmin_μ, X_w∑_k=1^N ∑_i∈ S_kρ(|| J_i^k [ μ_k; X_w^i ] - Δz_i(μ_k, X_w^i) ||^2) ,
where the 3D point cloud X_w is considered a variable optimization parameter and is updated in the mapping thread. Hence, J_i^k= [ ∂Δz_i(μ_k, X_w^i)/∂μ_k, ∂Δz_i(μ_k, X_w^i)/∂ X_w^i ] is the re-projection error's Jacobian with respect to both the current odometry state vector and the 3D point.
Loop-closure thread. Instead of the conventional keyframe embedding assignment using a bag-of-words, we adopt a simple loop-closure detection method based on the mean of the mid-level learning-based feature descriptors (SuperPoint and R2D2) of each keyframe, and we assign this mean value as the embedding identity of the keyframe. Once a potential loop closure is detected, the system performs geometric verification through RANSAC-based pose estimation to validate the candidate. If the verification is successful, a loop-closure constraint is added to the pose graph, and a graph optimization is performed to distribute the error and update the global map, thus correcting the accumulated drift.
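A sketch of this embedding and candidate retrieval is shown below; the cosine-similarity threshold, the minimum temporal keyframe gap, and the RANSAC verification step it feeds into are assumed tuning details, not values taken from the paper.

```python
import numpy as np

def keyframe_embedding(descriptors):
    """Mean of a keyframe's SuperPoint/R2D2 descriptors (shape D x N -> D)."""
    return descriptors.mean(axis=1)

def find_loop_candidate(query_emb, past_embs, query_idx, min_gap=50, thr=0.95):
    best_sim, best_idx = -1.0, None
    for idx, emb in enumerate(past_embs):
        if query_idx - idx < min_gap:        # skip temporally adjacent keyframes
            continue
        sim = np.dot(query_emb, emb) / (np.linalg.norm(query_emb) * np.linalg.norm(emb))
        if sim > best_sim:
            best_sim, best_idx = sim, idx
    # the returned candidate is only accepted after RANSAC geometric verification
    return best_idx if best_sim > thr else None
```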
§ EVALUATION
We perform a thorough evaluation during navigation in real-world large-scale and small-scale areas under challenging settings. In subsection <ref>, we compare DH-PTAM with other RGB image-based and event-based/-aided methods on the HDR large-scale sequences of the publicly available VECtor dataset <cit.>, chosen for its high-quality ground truth and sensor calibration parameters. In subsection <ref>, we evaluate the small-scale (mocap-) sequences of TUM-VIE <cit.> to test the quality of the DH-PTAM spatio-temporal synchronization method with degraded event camera calibration parameters. Moreover, the first 45 frames of the TUM-VIE sequences suffer a strong over-/under-exposure global-shutter alternation, which tests the stability of DH-PTAM's pose estimation. We perform a comparative quantitative analysis to evaluate the accuracy of our system in Table <ref> and a qualitative/quantitative analysis in Fig. <ref>. The accuracy of DH-PTAM is measured with the absolute trajectory error (ATE) and relative pose error (RPE) metrics calculated using the baseline SLAM evaluation tool <cit.>.
To highlight the advantages of complementing the sensor stack with event information, we compare our event-aided stereo visual odometry solution (DH-PTAM) with the latest best-performing open-source visual-inertial systems in the literature in Table <ref>. Table <ref> gives the system parameter configuration for large-scale and small-scale sequences. We keep these parameters constant for all sequences of the same scale group, without any online fine-tuning.
All experiments are performed on the CPU and the GPU of a 16 GB RAM laptop computer running 64-bit Ubuntu 20.04.3 LTS with an AMD Ryzen 7 4800H 16-core 2.9 GHz processor and a Radeon RTX NV166 Renoir graphics card. Table <ref> reports a detailed computational complexity analysis for our DH-PTAM system with minimal and maximal system requirements. The high CPU load observed when detecting SuperPoint and R2D2 features can be attributed to the algorithms' design, which prioritizes feature quality and robustness over computational efficiency. This trade-off is often necessary in computer vision research, where high-quality results are crucial for many applications but come at the cost of increased computational complexity. The back-end runs in real time, and it is recommended to run the front-end on a GPU to achieve a more memory-efficient, faster, and more stable performance.
No event streams (β=0). In Table <ref>, we show an ablation study in which we run DH-PTAM on stereo images only. We observe estimation failure with all the conventional and learning-based feature detectors except R2D2. Although the ATE metric shows slightly better results without using events, the RPE metric shows much more accurate values when using events. The better ATE values are due to the high performance of the GPU in loop closures using R2D2 features (see Fig. <ref>).
§.§ VECtor large-scale experiments
We notice a prominent estimation failure in Table <ref> when evaluating the event-based methods EVO, ESVO, and Ultimate SLAM on the large-scale sequences. Numerous factors may contribute to the failure of these systems, including stringent initialization requirements. For instance, EVO requires running in a sensor-planar scene for several seconds to bootstrap the system. Additionally, these systems are susceptible to parameter tuning, as demonstrated by the use of different parameters for different sequences of the same scenarios, even within their open-source projects.
Table <ref> shows a good performance for DH-PTAM compared to the competing VI-SLAM systems. Although Fig. <ref> shows high visual drifts for our vision-only system in the case of the units sequences, DH-PTAM could outperform the VI-SLAM systems based on the ATE metric. Fig. <ref> gives an overview of the high-quality loop detection of DH-PTAM in the case of the corridors sequences. Loop-detection failure is noticed only when the RAM overflows while running the system with enormous point clouds, as in the case of the units sequences. We provide a trajectory smoothing and post-processing script with our open-source implementation to join estimated trajectory increments in case of RAM-overflow failures.
§.§ TUM-VIE small-scale experiments
As noticed in <cit.>, the calibrationA (mocap-desk, mocap-desk2) sequences have more accurate depth estimation results than the calibrationB (rest of mocap and TUM-VIE large-scale) sequences due to the significant calibration errors in the latter. Hence, we perform our comparative evaluation on the TUM-VIE small-scale (mocap-) sequences using the calibrationA parameters. Although the same high-quality calibrationA parameters apply to both the desk2 and desk sequences with the same spiral motion, DH-PTAM performs best on desk2 but worst on desk. This occurs because the scene of the desk sequence is bounded by a nearby white wall that restricts the depth range, and hence the DH-PTAM front-end detects fewer and lower-quality features for desk than for desk2. Table <ref> shows that the more DoFs are excited (6dof, desk2) and the more consistent the loop detections (1d-trans), the better the pose estimation quality.
§ CONCLUSION
In this paper, we presented the DH-PTAM system for robust parallel tracking and mapping in dynamic environments using stereo images and event streams. The proposed system builds upon the principles of S-PTAM and extends it with a learning-based approach to handle the sparse and noisy nature of event-based sensors while leveraging the rich information provided by fusion frames. Our experiments demonstrate that DH-PTAM outperforms state-of-the-art visual-inertial SLAM methods, particularly in challenging scenarios such as fast motion, HDR, and occlusions. The proposed system can achieve better performance on a GPU and provides a scalable and accurate solution for 3D reconstruction and pose estimation. Future work includes investigating the potential of integrating inertial navigation sensors, such as IMUs, and exploring the integration of additional deep learning components for improving loop-closure robustness and accuracy. DH-PTAM has the potential to provide robust and accurate 3D mapping and localization, which are crucial for the successful operation of long-term navigation systems.
IEEEtran
|
http://arxiv.org/abs/2306.04052v2
|
20230606225330
|
Nuclear Spin-Depleted, Isotopically Enriched 70Ge/28Si70Ge Quantum Wells
|
[
"O. Moutanabbir",
"S. Assali",
"A. Attiaoui",
"G. Daligou",
"P. Daoust",
"P. Del Vecchio",
"S. Koelling",
"L. Luo",
"N. Rotaru"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.mtrl-sci",
"physics.app-ph",
"quant-ph"
] |
[email protected]
Department of Engineering Physics, École Polytechnique de Montréal, Montréal, C.P. 6079, Succ. Centre-Ville, Montréal, Québec, Canada H3C 3A7
The p-symmetry of the hole wavefunction is associated with a weaker hyperfine interaction as compared to electrons, thus making hole spin qubits attractive candidates to implement long coherence quantum processors. However, recent studies demonstrated that hole qubits in planar germanium (Ge) heterostructures are still very sensitive to nuclear spin bath. These observations highlight the need to develop nuclear spin-free Ge qubits to suppress this decoherence channel and evaluate its impact. With this perspective, this work demonstrates the epitaxial growth of ^73Ge-depleted isotopically enriched ^70Ge/SiGe quantum wells. The growth was achieved by reduced pressure chemical vapor deposition using isotopically purified monogermane ^70GeH_4 and monosilane ^28SiH_4 with an isotopic purity higher than 99.9 % and 99.99 %, respectively. The quantum wells consist of a series of ^70Ge/SiGe heterostructures grown on Si wafers using a Ge virtual substrate and a graded SiGe buffer layer. The isotopic purity is investigated using atom probe tomography following an analytical procedure addressing the discrepancies in the isotopic content caused by the overlap of isotope peaks in mass spectra. The nuclear spin background in the quantum wells was found to be sensitive to the growth conditions. The lowest concentration of nuclear spin-full isotopes ^73Ge and ^29Si in the heterostructure was established at 0.01 % in the Ge quantum well and SiGe barriers. The measured average distance between nuclear spins reaches 3-4 nm in ^70Ge/^28Si^70Ge, which is an order of magnitude larger than in natural Ge/SiGe heterostructures.
Nuclear Spin-Depleted, Isotopically Enriched ^70Ge/^28Si^70Ge Quantum Wells
O. Moutanabbir, S. Assali, A. Attiaoui, G. Daligou, P. Daoust, P. Del Vecchio, S. Koelling, L. Luo, N. Rotaru
July 31, 2023
===========================================================================
§ INTRODUCTION
Although it was quickly relegated behind silicon (Si) because of its relatively low bandgap energy, its lack of a stable oxide, and its large surface state densities, germanium (Ge) is inarguably the material that catalyzed the transition from what W. Pauli and I. Rabi called the “Physics of Dirt” <cit.> to modern-day semiconductor physics and technology <cit.>. Indeed, the ease with which Ge could be purified and processed led to the demonstration of point contact diode mixers for radar reception <cit.> and of the point contact and junction transistors <cit.>. These inventions contributed to laying the groundwork for what was later coined the first quantum revolution. In recent years, there has been a revived interest in Ge-based materials for integrated photonic circuits <cit.>, sensing <cit.>, high-mobility electronics <cit.>, and solid-state quantum computing <cit.>. The latter, for instance, aims at capitalizing on the advantageous quantum environment of holes in Ge, their inherently large and tunable spin-orbit interaction (SOI), and their reduced hyperfine coupling with nuclear spins to implement increasingly robust and reliable spin qubits <cit.>. Indeed, these quantum devices are now considered forefront candidates for scalable quantum processors <cit.>. This recent surge in developing Ge qubits suggests that Ge may also be a key material in shaping the anticipated second quantum revolution.
From a fundamental standpoint, the hyperfine interaction is expected to be weaker for holes than for electrons due to the p-symmetry of the hole wavefunction. However, theoretical investigations suggested a hyperfine coupling that is only one order of magnitude smaller than that of electrons <cit.>, or of a strength equal to that in Si <cit.>. Moreover, the p-symmetry and d-orbital hybridization of the hole wavefunction lead to an anisotropic hyperfine coupling that is non-existent for electron spins <cit.>. Interestingly, recent experimental studies hint at the sensitivity of hole spin qubits in planar Ge/SiGe heterostructures to the nuclear spin bath, reporting an amplitude of the fluctuating Overhauser field of 34.4 kHz, which is suggested to limit spin dephasing times <cit.>. Although charge noise is believed to be the dominant decohering process, these observations call for the development of nuclear spin-free Ge qubits to elucidate their sensitivity to hyperfine coupling. Undertaking this research direction requires Ge-based quantum devices that are depleted of ^73Ge, which is the only nuclear spin-full stable Ge isotope. This work addresses this very issue and provides a demonstration of the epitaxial growth of isotopically purified ^70Ge quantum wells (QWs). Note that enriched ^70Ge, ^74Ge, and ^76Ge isotopes were employed in the past to grow superlattices and self-assembled quantum dots by solid-source molecular beam epitaxy <cit.>. Herein, the growth of ^73Ge-depleted QWs is achieved from hydride precursors using the chemical vapor deposition (CVD) method, which is broadly adopted in Ge device research besides being compatible with the processing standards of the semiconductor industry <cit.>.
§ EXPERIMENTAL
The epitaxial growth of isotopically engineered Ge/SiGe QW heterostructures was carried out on hydrogen-passivated 4-inch (001)-oriented Si wafers in a reduced-pressure CVD reactor using isotopically purified monogermane ^70GeH_4 (isotopic purity >99.9 %) and monosilane ^28SiH_4 (isotopic purity >99.99 %). The precursors were enriched in a centrifugal setup using natural monogermane (^natGeH_4) and SiF_4 as starting gases <cit.>. After purification, ^70GeH_4 contains traces (<0.006 at.%) of the other Ge isotopes: ^72Ge, ^73Ge, ^74Ge, and ^76Ge. Moreover, chemical contaminants, including other hydrides, are also negligible, with an average content <0.06 µmol/mol. Reference Ge/SiGe QW heterostructures were also prepared following the same growth protocol using conventional precursors with natural isotopic abundance (^natGeH_4 and disilane ^natSi_2H_6). After annealing in hydrogen, a 3 µm-thick Ge interlayer, commonly known as a Ge virtual substrate (Ge-VS), was grown on Si using ^natGeH_4 and a two-step growth process in the 450-600 °C temperature range. Then follows a thermal cyclic annealing step (725-875 °C) to improve the Ge-VS quality. A reverse-graded 1 µm-thick Si_1-xGe_x layer was then grown at 600 °C using ^natGeH_4 and ^natSi_2H_6 until a uniform Si content of 18 at.% was reached. Without interrupting the growth, the ^natGeH_4 supply was switched to the purified ^70GeH_4 to grow the first Si_1-xGe_x barrier layer (BR1), while keeping all the other growth parameters unchanged. The thickness and composition of BR1 were varied in the 0.3-1 µm and x = 0.15-0.18 ranges to investigate the effect of the growth time on the isotopic purity of the epilayers. Once the growth of BR1 was completed, the reactor was purged in hydrogen for 90 s before growing the ^70Ge QW layer using the ^70GeH_4 supply for a variable growth time of up to 40 s. Next, the reactor was purged in hydrogen for 90 s prior to the growth of the Si_1-xGe_x BR2 layer under growth conditions identical to BR1. Lastly, a Si capping layer with a thickness of a few nm was grown. Fig. 1a illustrates the grown stacks. Ge/Si_0.18Ge_0.82 (A), ^70Ge/Si_0.18Ge_0.82 (B), and ^70Ge/Si_0.15Ge_0.85 (C) QWs were grown using this protocol. The ^70Ge/^28Si_0.15^70Ge_0.85 (D) QW was grown following a similar protocol, except that the growth of BR1-2 was performed by changing from ^natSi_2H_6 to ^28SiH_4 and adjusting the growth conditions to accommodate the change in precursor decomposition.
Several characterization techniques were employed to elucidate the basic properties of the as-grown heterostructures and investigate their isotopic content. Lattice strain and average content in Ge/SiGe heterostructures were evaluated from X-ray diffraction (XRD) measurements including reciprocal space map (RSM) analysis. The microstructure of the grown materials was investigated by transmission electron microscopy (TEM) and scanning TEM (STEM). The quality of interfaces, the atomic-level composition, and the isotopic purity were investigated using atom probe tomography (APT). Additional insights into the chemical and isotopic compositions are also obtained using secondary ion mass spectrometry (SIMS). Raman scattering spectroscopy was employed to evaluate the effects of the isotopic content on phonon scattering in Ge QWs. Additionally, the uniformity of the growth thickness as well as the optical signature of quantum confinement were investigated using spectroscopic ellipsometry (SE).
§ RESULTS AND DISCUSSION
A cross-sectional STEM image of a representative isotopically-engineered Ge QW heterostructures is shown in Fig. 1b, while the enlarged view of the ^70Ge/Si_0.15^70Ge_0.85 QW region is displayed in Fig. 1c. The figure shows an 18 nm-thick ^70Ge QW together with BR1 and BR2 layers with thicknesses of 290 nm and 28 nm, respectively. The transition between SiGe barrier layers and ^70Ge QW is of the order of 1-2 nm. To evaluate the structural quality of the heterostructures, cross-sectional TEM images were acquired (Fig. 1d). The extended defects are confined to the Si/Ge-VS and Ge-VS/Si_1-xGe_x interfaces, with no defects being detected in the QW region at the TEM imaging scale. XRD-RSM (224) analysis of the as-grown heterostructures demonstrates sharp peaks for the SiGe/Ge substrate, barriers as well as the signature of the strained 18 nm-thick ^70Ge layer, thus suggesting an excellent degree of crystallinity across the structure (Fig. 1(e)). Here, the variation in composition between natural and purified SiGe layers (Si_0.18Ge_0.82 vs. Si_0.15Ge_0.85 as determined by APT) is related to the difference in composition between the germane precursor supplies.
A first glimpse into the isotopic content of the as-grown QWs was obtained from Raman spectroscopy studies. Fig. 2(a) shows Raman spectra around the Ge-Ge LO mode recorded for a set of QWs grown at a variable growth time between 4 and 40 s corresponding to a 3-30 nm thickness range. The spectra indicate the presence of two distinct modes. The first is centered around 293.7 cm^-1 corresponding to Ge-Ge LO mode in the SiGe barrier, whereas the second peak at 305.3 cm^-1 is attributed to the same mode but in the ^70Ge QW. This assessment is consistent with the observed increase in the second peak intensity as the QW thickness increases. Note that the Ge-Ge mode in ^natGe QW is detected at 300.1 cm^-1, as demonstrated in Fig. 2(b) comparing two identical ^natGe and ^70Ge QW samples. The observed shift between the two samples is analyzed based on the quasi-harmonic approximation, which is a valid approximation for semiconductors at room temperature <cit.>. According to the virtual crystal approximation, a simple harmonic analysis predicts that the energy of a phonon mode is inversely proportional to the square root of the average isotopic mass. The average isotopic mass is given by ⟨m⟩ = Σ_ic_im_i, with c_i being the fractional composition of an isotope of mass m_i. Knowing that the atomic mass of ^natGe is 72.63 amu, the measured wavenumbers of Ge-Ge LO mode in the sets of QWs yield an average atomic mass in ^70Ge QW lattice of 70.17 amu corresponding to at least 99.6% enrichment in ^70Ge isotopes. As discussed below, the growth protocol has a strong effect on the isotopic purity of the QW.
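As a quick check of the quoted number, the quasi-harmonic scaling ω ∝ 1/√⟨m⟩ can be evaluated directly with the wavenumbers given above; this worked example only reproduces the stated ⟨m⟩ ≈ 70.17 amu and makes no further assumptions.

```python
m_nat = 72.63          # amu, average atomic mass of natural Ge
omega_nat = 300.1      # cm^-1, Ge-Ge LO mode in the natGe QW
omega_70 = 305.3       # cm^-1, Ge-Ge LO mode in the 70Ge QW

# omega ~ 1/sqrt(<m>)  =>  <m>_70 = <m>_nat * (omega_nat / omega_70)**2
m_70 = m_nat * (omega_nat / omega_70) ** 2
print(f"average isotopic mass in the 70Ge QW: {m_70:.2f} amu")  # ~70.17 amu
```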
It is important to mention that the limited spectral resolution (∼1 cm^-1) of the Raman setup used does not allow addressing the effect of isotopic purification on lattice disorder <cit.>. Nevertheless, it is reasonable to conclude that the similarity observed in the full width at half maximum of the Ge-Ge peaks in ^natGe and ^70Ge is indicative of a similar crystalline quality, which is consistent with the XRD and TEM studies. To further assess the quality of the grown QWs, SE studies were carried out on ^70Ge QW samples. For these studies, reference samples consisting of the same grown layers but without BR2 were also prepared and investigated. Fig. 2(c) displays the measured spectra for the 18 nm ^70Ge QW and the associated reference material. The figure shows the imaginary dielectric function (left) and the critical point (CP) analysis of the measured dielectric function. The lineshape of the dielectric function of both heterostructures carries insights into the quantum confinement in the ^70Ge QW. Note that the penetration depth of the incident excitation near the E_1 CP is around 20-35 nm for bulk Ge. If one considers a limited spectral range between 1.5-3 eV, the effect of the underlying materials (SiGe buffer, Ge-VS, and Si substrate) can be neglected as the incident light will not reach and excite them. Consequently, only the top three layers (^70Ge QW, BR2, and Si cap) should in principle contribute to the measured dielectric function (Fig. 2(c)). Moreover, the contribution of the 3-5 nm-thick Si cap should be excluded from the analysis as the E_1 CP of Si is located around 3.4 eV <cit.>, which is outside the measured spectral range.
The second derivative of the dielectric function of the two samples (with and without the top barrier BR2) is displayed in Fig. 2(d). To unravel the electronic structure of the analyzed heterostructure, the measured data were fitted using a generic critical point parabolic band model <cit.>. The CP energy of the ^70Ge layer without a barrier is evaluated at 2.156 eV, which is close to the Ge bulk CP of 2.134 eV <cit.>, whereas for ^70Ge QW sample a blueshift is noted yielding a CP energy of 2.233 eV. More importantly, the qualitative difference between both dielectric functions at 2.17 eV is clear. Indeed, the CP lineshape changes drastically from 2D Van Hove singularities in the reference structure (green dots) to a discrete excitonic lineshape in ^70Ge QW (blue dots). This observed change in CP lineshape and energy is indicative of quantum confinement and its associated narrowing of the optical transition in Ge <cit.>.
In the following, the isotopic content of the grown QWs is discussed based on APT studies. Fig. 3(a) shows a representative 3D 30 × 30 × 30 nm^3 atom-by-atom APT map of a ^70Ge QW. The map indicates that the QW region contains mainly the ^70Ge isotope, but traces of other isotopes can also be seen. Before quantifying and discussing the level of these contaminants, the recorded mass spectra are described first, as shown in Fig. 3(b,c). The figures exhibit the mass spectra recorded for a set of four QW samples labeled A, B, C, and D, as illustrated in Fig. 1(a). These samples were grown under different conditions. In sample A, the QW was grown using ^70GeH_4, whereas the SiGe barriers were grown using ^natGeH_4. In the other three samples, the growth of both barriers and QWs was conducted using ^70GeH_4. However, the change from ^natGeH_4 to ^70GeH_4 occurred during the growth of the underlying SiGe layer at a variable thickness from the interface with the QW: 290 nm (B), 1000 nm (C), and 1890 nm (D). This means that the changes from ^natGeH_4 to ^70GeH_4 took place at different times during the growth of SiGe buffer layer prior to the QW growth in these samples (B: 8 min, C: 24 min, and D: 29 min). In the case of sample D, the growth of SiGe barriers was conducted using the isotopically purified precursor ^28SiH_4 instead of ^natSi_2H_6. The growth rate was higher for this sample due to a higher GeH_4 supply required for the growth optimization using ^28SiH_4 precursor. The obtained APT mass spectra are compared in Fig. 3(b,c) showing the spectra of doubly charged Ge ions (Fig. 3(b)) and doubly charged Si ions (Fig. 3(c)). Each spectrum contains 10 million atoms from the selected region which includes most of the top barrier, the full QW and its interfaces, and a part of the bottom barrier. Note that this includes the QW interfaces and the local fluctuations in the isotopic purity observed near these interfaces, as shown in Fig. 4.
The mass spectrum of sample A shows peaks associated with all five Ge isotopes at intensities close to the natural abundance of each isotope, as most of the signal originates from the barriers grown with ^natGeH_4 (Fig. 3(b)). However, in samples B, C, and D, the APT spectra clearly show enrichment in the ^70Ge isotope, as the peaks related to the other isotopes have significantly diminished. Interestingly, the level of this contamination from other isotopes is intimately related to the growth protocol. Indeed, the level of Ge isotope cross-contamination becomes lower the longer the time, relative to the moment of the QW growth, of the transition from ^natGeH_4 to ^70GeH_4. This indicates that the detection of ^72Ge^++, ^73Ge^++, ^74Ge^++, and ^76Ge^++ peaks is a manifestation of the reservoir effect, meaning that the ^natGeH_4 used to grow the much thicker Ge-VS and SiGe-VS still resides in the growth reactor for an extended period of time. This leads to the undesired incorporation of the nuclear spin-full ^73Ge isotope into the growing QW structure. Herein, it is shown that an early transition to ^70GeH_4 can eliminate this contamination to a great extent. Ideally, the growth of the entire Ge-VS/SiGe-VS/BR1/Ge/BR2 stack should be done using ^70GeH_4, but the process can be costly. Similarly, Fig. 3(c) shows that the use of ^28SiH_4 to grow the SiGe barriers leads to a significant reduction, more than 30-fold, of the amount of the ^29Si isotope in the heterostructure. Since the hole wavefunction in the Ge QW is expected to leak into the SiGe barriers, it is also important to suppress the hyperfine interactions that may result from the presence of the ^29Si isotope.
The local isotopic purity and 3D distribution of isotopes can be obtained from APT. However, since the peaks of heavier isotopes are embedded in the tails of the lighter isotopes (Fig. 3(b)), it is important to carefully analyze and model the mass spectra to separate the tails and the peaks and accurately quantify the isotopic content in the heterostructures. Herein, SIMS analyses were carried out to validate the APT isotope mapping method. Since all non-^70Ge isotopes originate from a natural Ge source, one can use the content measured for each isotope to estimate the ^70Ge purity by making a projection of the overall contamination based on the natural distribution of isotopes. As shown in Fig. 4(a), the SIMS estimates derived from the ^72Ge, ^74Ge, and ^76Ge signals coincide almost perfectly with each other and with the estimate of the ^70Ge purity obtained from considering the signal from all Ge isotopes. For APT, however, a difference was observed (data not shown) when estimating based on doubly charged ^72Ge, ^74Ge, or all of the isotopes. This discrepancy is caused by the aforementioned overlap of isotope peaks in the mass spectra (Fig. 3(b)). To address this issue, a Monte Carlo approach is implemented in which the tails are fitted locally around the peak region, and the peak and tail are decomposed several hundred or thousand times to find the average content of the peak and the error created by the decomposition. The resulting estimates using the tail-corrected data for doubly charged ^72Ge, ^74Ge, and ^76Ge match the SIMS data, as shown in Fig. 4a (solid line).
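The projection itself reduces to a simple scaling by the natural isotopic distribution. The sketch below illustrates it; the natural-abundance values are approximate literature numbers, not values given in the text, so they should be treated as assumptions.

```python
# Approximate natural abundances of the stable Ge isotopes (assumed values).
NAT_ABUNDANCE = {"70Ge": 0.2057, "72Ge": 0.2745, "73Ge": 0.0775,
                 "74Ge": 0.3650, "76Ge": 0.0773}

def ge70_purity_from_isotope(measured_fraction, isotope="74Ge"):
    """Estimate the 70Ge purity from the measured atomic fraction of one
    non-70Ge isotope, assuming all contamination comes from natural Ge."""
    nat_total = measured_fraction / NAT_ABUNDANCE[isotope]   # total natural-Ge fraction
    non_70 = nat_total * (1.0 - NAT_ABUNDANCE["70Ge"])       # its non-70Ge share
    return 1.0 - non_70
```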
Using the same Monte Carlo approach, we can quantify the ^70Ge purity in all samples. The result is shown in Fig. 4(b), highlighting once more the differences between the samples in terms of isotopic purity near the QW, caused by the difference in the time passed between the onset of ^70GeH_4 growth and the QW growth. Furthermore, both SIMS and APT data consistently show that the 90 s growth interruption at the QW interfaces, introduced to promote the growth of sharper interfaces, leads to an accumulation of ^natGe at the interface. For the growth of sample D, the top barrier was grown without interruption, thus suppressing the isotopic cross-contamination at the interface. Maintaining the ^70Ge purity is important to achieve a nuclear spin-depleted interface and BR1.
A more accurate evaluation of the nuclear spin background is obtained from the APT analyses displayed in Fig. 4(c). The figure outlines the total concentration profiles of the nuclear spin-full isotopes ^73Ge and ^29Si across the investigated heterostructures. It is noticeable that in ^natSi^natGe/^70Ge/^natSi^natGe (sample A) the nuclear spin concentration drops from 6 at.% in the SiGe barriers down to 0.1 at.% in the QW. This background is further reduced to 0.02 at.% in the QWs of samples B and C, and even below 0.01 at.% in sample D, consisting of ^28Si^70Ge/^70Ge/^28Si^70Ge. Besides providing the isotopic composition profiles, APT also allows extracting the atomic-level spatial distribution of the individual nuclear spin-full species ^73Ge and ^29Si, as displayed in Fig. 4(d). The figure shows the depth evolution of the average distance between neighboring nuclear spins across the investigated heterostructures. To obtain these profiles, a model of the SiGe lattice was generated from the APT maps <cit.> on which the distribution of each isotope was imprinted, thus allowing the calculation of the distance between nuclear spins in a lattice plane-by-lattice plane fashion. The uncertainty in these calculations was assessed by sampling 10 different models. The obtained result demonstrates that the average distance between nuclear spins is the lowest in the QW for all samples, but it remains sensitive to the growth conditions. For instance, in the ^natSi^natGe barriers (A) the obtained average distance is 0.3-0.4 nm, whereas it increases by one order of magnitude to 3-4 nm in the isotopically pure ^28Si^70Ge/^70Ge/^28Si^70Ge heterostructure (D).
§ CONCLUSION
In summary, this work demonstrates the epitaxial growth of nuclear spin-depleted, isotopically enriched ^70Ge QWs. The growth was achieved on Si wafers using enriched precursors ^70GeH_4 and ^28SiH_4 in a reduced-pressure CVD system. The crystalline quality of the grown heterostructures was confirmed by XRD and electron microscopy studies. The critical point of the grown QWs exhibits a discrete excitonic lineshape at 2.233 eV indicative of quantum confinement. The isotopic purity and the distribution of the nuclear spin background were investigated using APT. In this regard, a Monte Carlo approach was introduced to solve the discrepancies in APT analyses caused by the overlap of isotope peaks in the recorded mass spectra. These analyses demonstrate that the isotopic content is very sensitive to the growth conditions including any growth interruption. The latter was found to induce an accumulation of natural Ge isotopes at the growth interface leading to lower ^70Ge content. To evaluate the distribution of the residual nuclear spin background, a lattice model was constructed to map the average distance between the two nuclear spin-full isotopes ^73Ge and ^29Si. These studies showed that the distance between nuclear spins reaches 3-4 nm in ^70Ge/^28Si^70Ge, which is an order of magnitude higher than in natural Ge/SiGe heterostructure. Additionally, the lowest concentration of ^73Ge and ^29Si contaminants in the heterostructure was established at 0.01% in both QW and barriers of ^70Ge/^28Si^70Ge heterostructure. These insights constitute a valuable input to improve the design and theoretical modeling of spin qubits by providing quantitative, atomic-level details on nuclear spin distribution.
METHODS.
X-ray diffraction (XRD) measurements were performed using a Bruker Discover D8. A three-bounce Ge(220) two-crystal analyzer was placed in front of the XRD detector during the XRD (004) and (224) reciprocal space map (RSM) analysis. The microstructure of the grown materials was investigated by transmission electron microscopy (TEM). TEM specimens were prepared in a Thermo Fisher Helios Nanolab 660 dual-beam scanning electron microscope using a gallium focused ion beam (FIB) at 30, 16, and 5 kV. Electron beam-induced carbon and platinum were locally deposited on the sample to protect the imaged region from being damaged by the ion-beam milling during the thinning of the TEM lamella. TEM and scanning TEM (STEM) analyses were carried out on a Thermo Scientific Talos F200X S/TEM system with an acceleration voltage of 200 kV.
Insights into the quality of interfaces, the atomic-level composition, and the isotopic purity were obtained using atom probe tomography (APT). APT specimens were prepared in a FEI Helios Nanolab 660 dual-beam scanning electron microscope using a gallium-focused ion beam (FIB) at 30, 16, and 5 kV. A 120-150 nm-thick chromium capping layer was deposited on the samples before FIB irradiation to minimize the implantation of gallium ions into the imaged region. APT studies were performed in a LEAP 5000XS tool. The LEAP 5000XS utilizes a picosecond laser to generate pulses at a wavelength of 355 nm. For the analysis, all samples were cooled to a temperature of 25 K. The experimental data were collected at laser powers of 3-6 pJ. Additional insights into the chemical and isotopic compositions are also obtained using secondary ion mass spectrometry (SIMS).
Raman scattering analyses were performed at room temperature using a 633 nm excitation laser. Additionally, the uniformity of the growth thickness as well as the optical signature of quantum confinement were investigated using spectroscopic ellipsometry (SE). SE measurements were carried out at room temperature, using a variable angle spectroscopic RC2-XI ellipsometer manufactured by J. A. Woollam Co. The variable angle spectroscopic ellipsometer system covers the 0.5–6 eV range. All heterostructures were measured between 70° and 80° angles of incidence with a 1° step. A noticeable increase in the sensitivity of the SE parameters (Ψ and Δ) was observed around 76-77°, which is very close to the Brewster angle for Si and Ge. Thus, during the optical modeling, special care was accorded to the modeling near this angle.
ACKNOWLEDGEMENTS.
The authors thank J. Bouchard for the technical support with the CVD system. O.M. acknowledges support from NSERC Canada (Discovery Grants, Alliance International Quantum, and CQS2Q Consortium), Canada Research Chairs, Canada Foundation for Innovation, Mitacs, PRIMA Québec, and Defense Canada (Innovation for Defense Excellence and Security, IDEaS), the European Union's Horizon Europe research and innovation programme under grant agreement No 101070700 (MIRAQLS), and the US Army Research Office Grant No. W911NF-22-1-0277.
|
http://arxiv.org/abs/2306.05748v1
|
20230609082726
|
Shape-based clustering of synthetic Stokes profiles using k-means and k-Shape
|
[
"Thore Espedal Moe",
"Tiago M. D. Pereira",
"Flavio Calvo",
"Jorrit Leenaarts"
] |
astro-ph.SR
|
[
"astro-ph.SR"
] |
Rosseland Centre for Solar Physics, University of Oslo, P.O. Box 1029 Blindern, NO–0315 Oslo, Norway
Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, NO–0315 Oslo, Norway
Institute for Solar Physics, Dept. of Astronomy, Stockholm University, AlbaNova University Centre, 10691 Stockholm, Sweden
The shapes of Stokes profiles contain much information about the atmospheric conditions that produced them. However, a variety of different atmospheric structures can produce very similar profiles. Thus, it is important for proper interpretation of observations to have a good understanding of how the shapes of Stokes profiles depend on the underlying atmosphere. An excellent tool in this regard is forward modeling, i.e. computing and studying synthetic spectra from realistic simulations of the solar atmosphere. Modern simulations routinely produce several hundred thousand spectral profiles per snapshot. With such numbers, it becomes necessary to use automated procedures in order to organize the profiles according to their shape. Here we illustrate the use of two complementary methods, k-means and k-Shape, to cluster similarly shaped profiles, and demonstrate how the resulting clusters can be combined with knowledge of the simulation's atmosphere to interpret spectral shapes.
We aim to showcase the use of clustering analysis for forward modeling. In particular we wish to introduce the k-Shape clustering method to the solar physics community as a complement to the well-known k-means method.
We generate synthetic Stokes profiles for the CaII 854.2 nm line using the Multi3D code from a Bifrost simulation snapshot. We then apply the k-means and k-Shape clustering techniques to group the profiles together according to their shape, and investigate the within-group correlations of temperature, line-of-sight velocity and line-of-sight magnetic field strengths.
We show and compare the classes of profile shapes we retrieve from applying both k-means and k-Shape to our synthetic intensity spectra. We then show the structure of the underlying atmosphere for two particular classes of profile shapes retrieved by the clustering, and demonstrate how this leads to an interpretation for the formation of those profile shapes. Furthermore, we apply both methods to the subset of our profiles containing the strongest Stokes V signals, and demonstrate how k-Shape can be qualitatively better than k-means at retrieving complex profile shapes when using a small number of clusters.
Shape-based clustering of synthetic Stokes profiles using k-means and k-Shape
Thore E. Moe
1,2
Tiago M.D. Pereira
1,2
Flavio Calvo
3
Jorrit Leenaarts
3
======================================================================================
§ INTRODUCTION
Forward modeling of the solar atmosphere is a very useful tool for understanding the relative importance of atmospheric components in the formation of polarized spectra, thereby guiding interpretations of observations. By computing synthetic Stokes profiles from realistic 3D radiative magnetohydrodynamic (rMHD) simulations, one can directly compare a particular spectral signature with the full state of the atmosphere that produced it <cit.>. Modern simulations routinely contain several hundred thousand pixels, with each pixel giving rise to a set of Stokes profiles. Depending on the spatial resolution of the numerical model, and the spectral resolution considered for the synthesis, these profiles can be quite complex; often exhibiting more complicated behavior than what is typically resolved in real observations. It is obviously not feasible to analyze the formation of so many profiles one by one, nor is it practical to manually sort them into groups according to their features. Rather, some automated procedure must be used to organize the profiles in a meaningful manner for further human analysis.
One way of reducing the number of individual profiles into more manageable collections is the use of clustering techniques like k-means <cit.>. k-means has seen extensive use in solar and stellar physics, for examples see <cit.>.
Apart from k-means, other clustering methods have also been used on solar spectra, for instance the t-distributed Stochastic Neighbor Embedding employed by <cit.>.
The purposes of the clustering vary from identifying and studying the observational signatures of particular physical processes and features, to reducing the spatial dimensionality of data-sets for inversions, to statistical characterizations of observations. Relatively little explored, however, is the application of clustering techniques in a forward modeling context, one notable exception being <cit.>. In this paper we aim to address that issue, applying the k-means method to CaII 854.2 nm Stokes I and Stokes V profiles generated from a Bifrost <cit.> snapshot using the Multi3D radiative transfer code <cit.>, which has been extended (Calvo & Leenaarts (in prep.)) to include polarization, accounting for the Zeeman effect. We focus on the shapes of the Stokes profiles, aiming to illustrate what different classes of shapes do, or do not, tell us about the underlying atmospheric conditions.
While k-means is a fast and robust clustering technique, it does not directly cluster profiles based on their shapes. It works by minimizing the sum of within-cluster Euclidean distances between profiles, which can lead to distinctly different shapes appearing in the same cluster, as demonstrated in Fig. <ref>, while two Doppler-shifted spectral profiles with otherwise exactly the same shape can be put into separate clusters. Furthermore, the centroid, or `representative profile' (RP), of a cluster is given as the mean of the profiles belonging to the cluster, which in some cases gives a poor representation of the typical profile shapes in the cluster. Increasing the number of clusters can of course mitigate this problem, but at the cost of the interpretability, which is the main point of the kind of forward modeling we seek to undertake in this paper.
A relatively fast clustering method that is inherently shape-based is the k-Shape method of <cit.>. Though originally developed for use on time series, the method is quite general, and we apply it here to Stokes profiles with the obvious substitution of the time axis for a wavelength axis. A feature of k-Shape is that the clustering is largely independent of Doppler shifts, which can be beneficial or detrimental depending on the intended use case. By ignoring Doppler shifts and using a different measure of similarity than k-means, the profiles are matched more directly according to their similarity in actual shape, rather than according to a combination of shape and wavelength position. Furthermore, as the centroid computation is rather different from the one in k-means, the RPs are much more prototypical of the clustered profiles. The cost, of course, is that absolute velocity information is not considered in the clustering.
§ METHODS
§.§ Generating synthetic profiles
We generated our synthetic spectra from the 23 km resolution atmospheric model described in <cit.>. This is a Bifrost model <cit.> with a magnetic field configuration constructed to resemble a coronal hole. The model has 512×512×512 grid points, spanning roughly 12 Mm in the horizontal directions and going from z=-2.5 Mm below up to z=8 Mm above the solar surface. The horizontal spacing of the grid points is uniform, resulting in a horizontal resolution of 23 km pix^-1.
We used an extension (Calvo & Leenaarts (in prep.)) of the Multi3D code <cit.> with polarimetric capabilities to produce 3D full Stokes profiles of the CaII 854.2 nm line accounting for the Zeeman effect. As 3D computations are immensely expensive we cut the bottom 112 grid points, corresponding to below -0.4 Mm beneath the surface, under the assumption that these are too deep to affect the formation of our line of interest. Furthermore, we neglected to include the effects of partial frequency redistribution (PRD) and isotopic splitting. The obtained synthetic profiles were normalized by the nearby continuum, meaning each profile was divided by the Stokes I value of the reddest wavelength in the synthesis at approximately λ_0 + 0.95 nm, and interpolated to 100 equidistant wavelength points in the range λ_0 ± 0.05 nm, where λ_0 denotes the central wavelength of the line. We performed this interpolation in order to give equal weight to all parts of the profile when clustering since the original wavelength grid used in the synthesis is non-equidistant.
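A minimal sketch of this normalization and re-gridding step is shown below; the array names and the assumption that the reddest (continuum) wavelength is simply the maximum of the wavelength array are illustrative, not taken from the Multi3D output format.

```python
import numpy as np

def normalise_and_regrid(wav, stokes_i, lambda0, half_range=0.05, n_points=100):
    profile = stokes_i / stokes_i[np.argmax(wav)]        # divide by the nearby continuum
    new_wav = np.linspace(lambda0 - half_range, lambda0 + half_range, n_points)
    return new_wav, np.interp(new_wav, wav, profile)     # equidistant wavelength grid
```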
§.§ k-means clustering
The most common clustering technique for spectral profiles is k-means clustering. The full set of profiles is divided into k clusters of similarly shaped profiles, where the number k must be chosen at the outset. The measure of similarity is the Euclidean distance between profiles; that is, the distance between two profiles is the sum over wavelengths of the squared difference in their amplitudes:
distance = ∑_i (I_1(λ_i) - I_2(λ_i))^2,
where I(λ_i) denotes the amplitude of the profile at each wavelength point λ_i.
Each cluster has a centroid, and the goal is to assign the profiles to the k clusters in such a way that the sum of distances between all profiles and their nearest centroid (often called the inertia) is minimized. Algorithmically, k-means performs the following steps:
* Initialize k centroids, one for each cluster.
* Assign each profile to the cluster with the closest centroid.
* Recompute the centroids as the mean (for each wavelength) of the profiles belonging to the cluster.
* Repeat 2. and 3. until no profile changes cluster, a fixed number of iterations has been performed, or until the total inertia no longer changes above a set tolerance.
It should be noted that the convergence of the k-means algorithm does not guarantee that a global minimum has been found. Therefore it is common to re-initialize the clustering a predefined number of times, keeping the result with lowest inertia.
In this paper, we have used the k-means implementation of scikit-learn <cit.>, employing the k-Means++ initialization <cit.> for selecting better initial cluster centroids.
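For reference, a call along the following lines reproduces this setup; the k=100 and the per-profile z-normalization mirror the choices described in Sect. 3, while the file name and the array layout (spatial pixels along the first axes, wavelength along the last) are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

profiles = np.load("stokes_I.npy")                    # assumed file name
X = profiles.reshape(-1, profiles.shape[-1])
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)  # z-normalise

km = KMeans(n_clusters=100, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(X)                            # cluster index per pixel
centroids = km.cluster_centers_                       # representative profiles (RPs)
```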
§.§ k-Shape clustering
As the name implies, k-Shape <cit.> is designed to perform a clustering into k clusters of distinct shape. While the general idea is similar to k-means, it uses a different metric for the distance between profiles, as well as another method for computing the cluster centroids. The distance metric is based on shifting the profiles across each other and computing the cross-correlation for each possible shift. Consider two profiles I_1 and I_2, defined on m wavelength points, written in the form of vectors:
I⃗_1 = (I_1(λ_1), I_1(λ_2), ..., I_1(λ_m)), I⃗_2 = (I_2(λ_1), I_2(λ_2), ..., I_2(λ_m)).
The cross-correlation sequence between these two profiles, CC_w(I⃗_1, I⃗_2), is defined as:
CC_w(I⃗_1, I⃗_2) = R_w-m(I⃗_1, I⃗_2), w ∈{1,2,…,2m-1},
where
R_k(I⃗_1, I⃗_2) = ∑_l=1^m-k I_1(λ_l+k) · I_2(λ_l) for k ≥ 0, and R_k(I⃗_1, I⃗_2) = R_-k(I⃗_2, I⃗_1) for k < 0.
Thus, the sequence CC_w(I⃗_⃗1⃗,I⃗_⃗2⃗) contains the cross-correlation value for each of the 2m-1 possible shifts of the profiles relative to each other; essentially a sequence of the vector dot products between zero-padded I⃗_⃗1⃗ and I⃗_⃗2⃗ for each possible overlapping shift of the profiles. Normalizing the cross-correlation sequence (corresponding to dividing by the Euclidean norm of both profiles):
NCC_c = CC_w(I⃗_⃗1⃗,I⃗_⃗2⃗)/√(R_0(I⃗_⃗1⃗,I⃗_⃗1⃗) · R_0(I⃗_⃗2⃗,I⃗_⃗2⃗)),
results in a number between -1 and 1 for each entry in the sequence, where -1 signifies perfect anti-correlation and 1 signifies perfect correlation between the profiles. Selecting the entry with the largest cross-correlation value then gives the shape-based distance between two profiles as:
distance = 1 - max_w (CC_w(I⃗_⃗1⃗,I⃗_⃗2⃗)/√(R_0(I⃗_⃗1⃗,I⃗_⃗1⃗) · R_0(I⃗_⃗2⃗,I⃗_⃗2⃗))),
which is bounded between 0 and 2.
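Written out in numpy, the shape-based distance between two equally long profiles could be sketched as follows; this is an illustrative stand-alone implementation, not the code used by the tslearn library.

import numpy as np

def shape_based_distance(i1, i2):
    """Shape-based distance between two 1D profiles of equal length m.

    Computes the cross-correlation for all 2m-1 shifts, normalizes by the
    Euclidean norms of both profiles, and returns 1 minus the maximum
    normalized cross-correlation (a value between 0 and 2).
    """
    cc = np.correlate(i1, i2, mode="full")                 # all possible shifts
    ncc = cc / (np.linalg.norm(i1) * np.linalg.norm(i2))   # coefficient normalization
    return 1.0 - ncc.max()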
As in k-means, each profile is assigned to the closest centroid in terms of distance, and the cluster centroid is recomputed. In k-Shape, however, the refinement of the cluster centroids is done by reformulating the minimization of within-cluster distances as a maximization of a Rayleigh quotient calculation; for details see the original paper <cit.>. It should, however, be remarked that the k-Shape method assumes that the profiles have been z-normalized, meaning each profile has zero mean and unit standard deviation:
I⃗_1^' = (I⃗_1 - μ_1)/σ_1,
where μ_1 and σ_1 are, respectively, the mean and the standard deviation of the profile over the m wavelengths considered. This assumption is not strictly necessary, as the method can be modified to work with other data normalizations. However, the original authors found the z-normalization to work best in their tests, and it is beyond the scope of our current work to re-implement and evaluate the method for other normalizations.
We used the k-Shape implementation from the tslearn library <cit.>, with some simple modifications to make it run in parallel. Even so, the k-Shape method is significantly slower than the k-means implementation of scikit-learn. In one example case, using k=100 clusters for 512 × 512 profiles with 100 wavelength points, one run of k-Shape without re-initializations took roughly 2.7 hours, while a k-means run with 10 re-initializations took about 5 minutes, both on the same 32-core workstation. It should be noted that in the tslearn implementation of k-Shape, k single profiles are randomly chosen as the initial cluster centroids. In the original paper <cit.>, the initialization is done by randomly distributing all profiles among k clusters.
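A corresponding minimal sketch with tslearn is given below; the input array name and the single-initialization choice mirror the description above, while the parallelization modifications are omitted.

import numpy as np
from tslearn.clustering import KShape
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

profiles = np.load("profiles.npy")                   # (n_profiles, n_wavelengths), placeholder

# z-normalize each profile to zero mean and unit standard deviation
scaled = TimeSeriesScalerMeanVariance(mu=0.0, std=1.0).fit_transform(profiles)

kshape = KShape(n_clusters=100, n_init=1, random_state=0)
labels = kshape.fit_predict(scaled)                  # cluster index per profile
centroids = kshape.cluster_centers_                  # z-normalized cluster centroids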
§ RESULTS
§.§ Overview
Our intention was to illustrate and compare the use of both k-Shape and k-means for clustering synthetic profiles according to their shape, and subsequently how the resulting clusters can reveal correlations between the typical profile shapes in a cluster and the particular structure of the underlying atmosphere these profiles emerge from. We therefore begin by presenting and discussing the clustering of the intensity profiles in Sec. <ref>, before we perform a detailed examination of two particular profile shapes retrieved by the clustering in Sec. <ref> and Sec. <ref>.
As the k-Shape method assumes that its input profiles are z-normalized, we used the same normalization for the k-means method in order to do a fair comparison. This turned out to be a reasonable approach for the synthetic intensity profiles, as they have signal values in the same general range. However, the polarized components of the Stokes vector can vary vastly in amplitude, so the z-normalization can cause tiny signals to appear misleadingly large compared to stronger signals as the amplitude is given in units of the per-profile standard deviation. We have therefore focused mostly on the intensity profiles, though we did perform a clustering of the very strongest Stokes V signals (those with a signal exceeding 0.5% of the nearby continuum intensity), which we will discuss in Sec. <ref>.
§.§ Clustering the intensity profiles
We clustered the synthetic intensity profiles into k=100 clusters using both k-means and k-Shape; the resulting clusters are shown in Fig. <ref> and Fig. <ref>, respectively. The choice of 100 clusters was made after some experimentation, as a reasonable trade-off between the two opposing considerations of accuracy and human interpretability. The k-means method was run with 10 re-initializations, while the k-Shape method was run with a single initialization due to being around two orders of magnitude slower. We have tested k-Shape with 10 re-initializations, which yielded qualitatively very similar results to the single initialization run. We therefore elected to use the single initialization run in order to compare the methods at somewhat more similar runtimes.
The first observation we can make is that both clustering techniques seem to recover a similar variety of different profile shapes. These range from typical absorption profiles (e.g. #35 in Fig. <ref>, #52 in Fig. <ref>), through increasingly strongly skewed absorption profiles (e.g. #9 and #37 in Fig. <ref>, #30 and #44 in Fig. <ref>), to more complicated profiles, including double-peaked profiles (e.g. #98 and #100 in Fig. <ref>, #97 and #100 in Fig. <ref>), asymmetric emission profiles (e.g. #73 in Fig. <ref>, #64 in Fig. <ref>) and multi-lobed profiles (e.g. #81 in Fig. <ref> or #84 in Fig. <ref>).
The clustering appears to be reasonably tight, and in both methods there are several clusters showing very similar shapes, i.e. there is more than one cluster per `family' of shapes. Encouragingly, both clustering methods seem to recover all the same types of cluster `families', e.g. several clusters with similar asymmetric emission peaks or double peaks show up in both clusterings, though there is obviously not a one-to-one correspondence between individual clusters across the methods. Conversely, at first glance there do not seem to be clusters with very distinct shapes found only with one method compared to the other. The most unique-looking clusters are perhaps #56 and #88 in Fig. <ref>, but even these find quite similar counterparts in #97 and #47 in Fig. <ref>. This gives us some confidence that our choice of 100 clusters reasonably covers the range of typical profile shapes.
A second observation we can make is how the retrieved clusters do differ between the methods. The k-Shape groupings demonstrate the method's insensitivity to Doppler shifts; especially the clusters containing the asymmetric emission peaks (e.g. #63, #64, #65 in Fig. <ref>) show the same shape at different shifts grouped together. Conversely, k-means splits these into different clusters (e.g. #72, #73, #74 in Fig. <ref>) according to their Doppler shifts. The fact that both methods retrieve the same `families', but differently distributed over the clusters, can be beneficial for analysis, as we will see in Sect. <ref>. With such a stereoscopic view of the underlying atmospheres it becomes easier to discern by inspection which atmospheric parameters are important and which are incidental for the formation of the particular profile shapes. In particular, k-Shape's insensitivity to Doppler shifts, contrasted with k-means' sensitivity to them, allows one to better discern which atmospheric behaviors are correlated solely with the shape of the profile, as opposed to being correlated with the combination of shape and Doppler shift.
A third observation relates to how and where the methods perform poorly, in terms of profiles not being a good fit for their assigned clusters. As mentioned, cluster #56 in k-means does not seem to be well captured by k-Shape. It turns out that most of the profiles from this cluster are assigned to #68 and #73 in Fig. <ref>. These profiles are on the whole quite different from their assigned k-Shape centroids, but when the profiles and the centroids are shifted drastically across each other, the overlapping parts agree sufficiently for them to be grouped together. As k-Shape computes all possible shifts, it may occasionally find large shifts (and thereby a large clipping of the signal) to be the least bad option, leading to such apparently poor assignments. That type of signal clipping does not happen with k-means.
On the other hand, the k-means clusters appear to have issues distinguishing profiles where there is a large difference in signal strength over a narrow wavelength region. For instance, the k-means cluster #79 in Fig. <ref> turns out to be a mix of profiles with enhanced shoulders on either the right side or on both sides of the line core, as well as some with only a weakly enhanced right shoulder followed by a second absorption feature to the right. In the k-Shape clustering, the vast majority of these profiles are assigned to #77, #78, #94 and #95 in Fig. <ref>.
To summarize, neither method performs ideally, in the sense that both have clusters where some members are rather poorly represented by the centroids. The obvious way to improve the fidelity of the clusters is to increase the number of clusters, or possibly to do more re-initializations. However, the methods seem to complement each other, each to an extent balancing out the other's weaknesses, and they are useful as starting points for human analysis.
§.§ CBG-like profiles
As an example of the sort of analysis facilitated by these kinds of clustering techniques, we decided to perform an in-depth examination of the family of asymmetric blue-lobed single-peaked Stokes I profiles found in Fig. <ref> (exemplified by cluster number #70 and #72) and Fig. <ref> (exemplified by cluster number #64 and #65). These profiles are reminiscent of the chromospheric bright grains (CBGs) seen in the CaII H and K lines, see for instance <cit.> and references therein, so we call them CBG-like.
Fig. <ref> shows the Stokes I and Stokes V signals (with each profile normalized to its nearby continuum Stokes I value), as well as the stratification of temperature, line-of-sight velocity, and line-of-sight magnetic field strength for all the profiles belonging to k-means cluster #70 and #72. The atmospheric quantities are plotted as a function of the logarithm of optical depth for radiation at wavelength 500 nm (5000 Å), log(τ_5000). Throughout this paper we use the convention that positive heights, velocities and vertical magnetic field components point outwards from the solar surface. Each row of Fig. <ref> corresponds to one cluster, and the profiles are stacked along the vertical axis for each panel. The k-Shape clusters #64 and #65 are shown in a similar fashion in Fig. <ref>.
Looking at the intensities we see that the clusters are indeed well constrained for the most part. The k-means method produces clusters where the emission peak is at approximately the same wavelength throughout each cluster, but with some variance in the other features of the profile shapes. The k-Shape method, on the other hand, retrieves clusters where the location of the emission peak varies considerably in wavelength, but the profiles in each cluster seem more consistent in shape. For instance, the wavelength distance of the slope from peak to bottom seems to be more regular, and the red-side absorption features show less variance.
As for the Stokes V profiles, with both methods the wavelength positions of the strongest Stokes V signals seem to coincide with the sharpest changes in the intensity, as one might expect from the Weak Field Approximation. There do not, however, seem to be any other universal tendencies in Stokes V across all the CBG-like clusters. Similarly for the stratification of the line-of-sight magnetic field strengths, there do not appear to be clear tendencies either within or across the clusters. This suggests that the structure of the vertical magnetic field component does not play a direct role in the formation of these CBG-like Stokes I profiles.
What does seem to be common to all the clusters, and therefore important for the formation of these profile shapes, is the depth-stratification of temperature and line-of-sight velocities. In most cases we see a temperature increase in the atmosphere, followed by a large velocity gradient slightly higher up. This typically manifests as upflowing material from below meeting downflowing material from above, but not exclusively, as there are some instances of faster downflows from above meeting slower downflows; i.e. there is not necessarily a sign change in the vertical velocity, but there is a significant change in speed.
That the temperature increase occurs deeper in the atmosphere than the velocity gradient, as well as the fact that the absolute values of the velocity are less important for the formation of these shapes than the presence of a strong gradient, is more easily seen with the k-Shape clusters as each of them contains the CBG-like profile shapes at a range of Doppler shifts. In any case, the correlation between the temperature increase, the velocity-gradient and the profile shape is certainly made clearer when comparing the results of both clustering methods.
In terms of explaining the formation of these profiles, we are reminded of the interpretation of CaII K and H bright grains provided in <cit.> as signatures of acoustic shocks propagating upwards through the chromosphere, with the asymmetry being caused by Doppler shifts of the opacity across the shock front. The increased temperature enhances the local source function, which produces enhanced emission. The velocity gradient to more rapidly downflowing material above the heating event causes an opacity shift as the absorbing material is shifted to redder wavelengths, letting the bluer part of the profile escape while attenuating the redder part.
A point of note is that the correlation between the atmospheric structure and the CBG-like profile shapes is apparent straight from the clustering when we have access to the underlying atmosphere. This allowed a qualitative interpretation of the profiles' formation without having to resort to using response functions or contribution functions, which are ill-defined for the case of 3D radiative transfer.
§.§ Double peaked profiles
As another example, we now consider the double peaked profiles seen in k-means clusters #98 and #100, and in k-Shape clusters #97 and #100. Similar to Figs. <ref> and <ref>, the continuum-normalized intensity and Stokes V signals, as well as the height-stratified temperature, line-of-sight velocity, and line-of-sight magnetic field strength for all the individual profiles in each cluster is shown in Figs. <ref> and <ref> for the k-means and k-Shape clusters, respectively.
Once again the clusters, on the whole, seem fairly well constrained regarding the shape of the intensity profiles. Here, there seems to be a larger variation in the absolute values of the intensities compared to the previous example. This sort of variation is not unexpected; since the z-normalization scales each profile independently to have a standard deviation equal to one, our clusters are relatively insensitive to amplitudes, focusing instead on the shapes. Comparing the methods, we see they mostly recover the same profiles. An exception is that the k-means cluster #98 in the top row of Fig. <ref> has some unique profiles around profile number 300 which appear to have either a very weak left peak or only a single peak on the right, followed by a prominent absorption feature to the right of the rightmost peak. Looking at the temperature and velocity structure for these atypical profiles with suppressed left peaks, it appears they have a temperature enhancement coinciding in height with a moderate downflow. This temperature enhancement persists upwards through a velocity gradient to a region of strong upflow, before it hits a very strong downflow. Their formation can potentially be explained in the same manner as the CBG-like profiles, but with an oppositely signed velocity gradient, and with the strong downflow above the upflow causing the additional strongly redshifted absorption feature.
Returning to the general behavior of the clusters, we find that the Stokes V profiles seem to behave as expected from the weak-field approximation, in that they follow the behaviour of the intensity profiles. There is, however, a rather interesting region between profile number 200 to 300 in the bottom row of Fig. <ref>, where the rightmost Stokes V signal is very low despite a gradient in the intensity, and the vertical magnetic field component has a sign change around logτ_5000 = -4.
The temperature structure of the atmosphere is more varied for the double peaked profiles, compared to the CBG-like profiles. There are both regions of temperature enhancements with little variation spanning decades in logτ_5000, and hot regions bounded by colder plasma above and below. The common feature for all these double peaked profiles is enhanced temperatures in the range of -5 < logτ_5000 < -3. That was also the case for the CBG-like profiles, though the CBG-like profiles seldom showed these colder layers above the first strong temperature increase.
The vertical velocities are also rather varied in their structure, but three general features stand out compared to the CBG-like profiles from before. Firstly, the shift from upflows (or weak downflows) to strong downflows at the top tends to occur at a higher point in the atmosphere. Secondly, the starting points for the temperature enhancements coincide with slower plasma velocities and weaker velocity gradients, as opposed to the CBG-like profiles where the temperature increase starts slightly below strong velocity gradients. Thirdly, we note that the second velocity layer from the top, roughly -5.5 < logτ_5000 < -4.5, typically shows low to moderate velocities and fairly modest gradients. As such, the effect of opacity shifting in this layer is less, and both intensity peaks due to the temperature enhancements survive.
Another noteworthy point, is that when these double peaked profiles do have downflows from the top extending deeper (to logτ_5000≈ -5.5), the downflows are very strong and there is a corresponding absorption feature on the red side of the reddest peak. A possible interpretation is that the previously discussed opacity shifting is so red-shifted in those cases, that it overshoots the red peaks from the slower flowing regions and therefore does not suppress them.
Interestingly, and contrasting with the CBG-like profiles, the vertical component of the magnetic field does in many of these double peaked profiles display some correlations with the vertical velocities and temperature stratifications. To wit, there are areas of Figs. <ref> and <ref> where the velocities change signs coinciding with an appreciable gradient in vertical field strength to more negative (downward) values. Furthermore, the starting heights of the temperature increases coincide with the appearance of the stronger vertical magnetic field components; particularly obvious examples are profiles number 100 through 200 in the bottom row of Fig. <ref>, and profiles number 300 through 500 in the top row of Fig. <ref>.
In summary, these double peaked profiles seem to arise from a range of different atmospheric conditions. The common features are increased temperatures in the low chromosphere/upper photosphere, coinciding with low or modest velocities and weak velocity gradients. This, combined with cospatial enhanced vertical magnetic field strengths, suggests that these profiles are not all caused solely by acoustic shocks, in contrast with the CBG-like profiles. Whether the cause of the heating is due to a magnetic phenomenon, or if we simply see already hot plasma being transported, is unclear from this analysis.
§.§ The strongest Stokes V profiles
We have so far focused on the clustering of intensity profiles, since the z-normalization scaled Stokes V signals of very different amplitudes to a misleadingly similar range. Many of our Stokes V profiles contained only very weak signals, and clustering according to the shapes of such weak signals should not be expected to provide much diagnostic information. However, by restricting ourselves to look only at the Stokes V profiles containing an (unsigned) amplitude larger than 0.5% of the nearby continuum intensity we could perform a clustering on profiles with similar strengths. Out of our 512 × 512 synthetic profiles, only 7054 (≈ 2.7%) matched that selection criterion. The results of k-means and k-Shape clustering with k=20 clusters on this subset of Stokes V profiles are shown in Fig. <ref> and Fig. <ref> respectively.
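The selection itself is a simple amplitude cut; a short sketch follows, assuming continuum-normalized Stokes V profiles stored in an array whose name is a placeholder.

import numpy as np

stokes_v = np.load("stokes_v_normalized.npy")        # (ny, nx, n_wavelengths), continuum-normalized
flat_v = stokes_v.reshape(-1, stokes_v.shape[-1])

# keep profiles whose unsigned amplitude exceeds 0.5% of the nearby continuum
mask = np.abs(flat_v).max(axis=1) > 0.005
strong_v = flat_v[mask]
print(f"selected {strong_v.shape[0]} of {flat_v.shape[0]} profiles "
      f"({100 * strong_v.shape[0] / flat_v.shape[0]:.1f}%)")

The selected profiles can then be z-normalized and clustered with k=20 in the same way as the intensity profiles.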
In this case, we deliberately selected a rather low number of clusters. This was partly done to avoid having clusters with very few members considering our reduced dataset, and partly to compare the performance of the two methods when using a very limited, and possibly too low, number of clusters. It is obvious from looking at Figs. <ref> and <ref> that 20 clusters is not sufficient to capture all the complexities present in the profiles with either method, though the clusterings do reproduce the primary features of the profiles.
Comparing these two clustering results reveals some interesting differences. Most noticeably, not all shapes are common to both methods. The double peaked Stokes V profiles of cluster number #8 and #10 in the k-Shape result are not retrieved as a separate class by the k-means method; instead they are mixed into most of the k-means clusters, though primarily into #1, #7, #10, #12 and #17. On the other hand, the valley-peak-valley shape apparent in cluster number #16 from the k-means method does not appear in the k-Shape case. Looking in more detail at the individual profiles comprising that cluster, we find almost no profiles with a shape similar to that of the cluster mean. The triple-lobed shape of the cluster mean (marked in red) is instead mostly a mix of valley-peak and peak-valley shapes. In this case, the k-Shape centroids are more faithful representations of the shapes picked up by each cluster.
In general, the clusters found by k-means contain one dominant feature, like a peak, a dip, or both, at a certain wavelength position with considerable variation in the rest of the signal. Furthermore, looking at cluster #13 or #16 in Fig. <ref> we see that when the dominant feature in the cluster is multi-lobed, it might actually be a mix of single-lobed and multi-lobed signals grouped together, so long as their lobes occur at the same wavelength. This type of shape-mixing does not happen as readily with k-Shape; contrast k-means cluster #13 with k-Shape clusters #15 and #17. Also, k-Shape seems to retrieve profiles with more commonality also at the weaker parts of the signal; compare for instance k-means clusters #5, #10 and #19 with k-Shape clusters #1, #5 and #13. k-Shape does, however, occasionally struggle when excessive shifts of the signal cause clipping of the features at the edges, which can be most easily seen in cluster #1, #19 or #20 of Fig. <ref>. While it is by no means perfect, we find, in conclusion, that k-Shape performs markedly better than k-means at identifying shapes with this particular combination of complex signals and low number of clusters. How well that observation generalizes to other datasets, or cluster numbers, or both, is not clear, and is beyond the scope of the current work. It does, however, indicate the type of problems where k-Shape can potentially provide an advantage over k-means. As a note, we have also performed this clustering experiment with k-means on the continuum-normalized Stokes V profiles and found that their behavior is very similar to the z-normalized case discussed above.
§ DISCUSSION AND CONCLUSIONS
We have used the k-means and k-Shape clustering techniques to group synthetic CaII intensity and Stokes V profiles, generated by 3D radiative transfer calculations from a 3D MHD simulation, according to their shape.
Using k=100 clusters for the intensities resulted in both methods retrieving qualitatively similar `families' of clusters. While the k-means method produced clusters whose features were strongly coherent with regard to wavelength, the k-Shape method, being insensitive to Doppler shifts, produced clusters where the same shape appeared over a range of wavelength shifts. Regarding the methods' shortcomings, we found that k-Shape occasionally would mislabel some profiles by clipping the signals at the edges when comparing across Doppler shifts, while k-means at times would lump rather differently shaped profiles together so long as their strongest feature occurred at the same wavelength.
Armed with full knowledge of the simulation's atmospheric parameters, we took an in-depth look at a particular set of profile shapes and arrived at an explanation of their formation by looking at the correlations in the underlying atmospheric structure. We remark that the most interesting aspect of this exercise was not the description itself of how those profile shapes are formed, but rather how we arrived at it. In that use case, there did not appear to be much benefit in using one method over the other in terms of the results; though k-means was significantly quicker computationally. However, we do note that using both methods gave a stereoscopic view of the data, making it easier to determine which atmospheric quantities were important.
Doing a clustering analysis of the Stokes V profiles, based on their shapes, proved difficult due to the large variations in signal strength being masked by the z-normalization required by k-Shape, causing strong and weak signals to appear deceivingly similar. Restricting ourselves to a subset of the strongest Stokes V profiles, we performed a clustering with k=20 clusters using both methods. We found that the methods showed the same tendencies as with the intensity, but more strongly pronounced due to the lower number of clusters and more complex shapes. In this setting we found that k-means clearly performed qualitatively worse than k-Shape at creating clusters with coherent shapes; though it is difficult to quantitatively compare the methods since they use very different metrics.
In conclusion, k-Shape seems interesting for use cases where one wants human interpretation and small numbers of clusters. Another interesting possibility is to use the k-Shape distance metric to search an observation or simulation for the profiles with shape most similar to a certain prototype, for example when trying to detect Ellerman bombs. We want to stress that k-Shape is, however, not at all suited to usage cases like <cit.>, where the purpose of clustering is to speed up inversions, as the centroids found by k-Shape do not correspond to a definite Doppler-shift nor to an absolute intensity. In those cases, k-means is the better option, and one can easily increase the number of clusters beyond what a human can reasonably process. For a qualitative clustering, aimed towards human interpretation and with a comparatively small number of clusters, we find that k-Shape can be a useful complement to, and sometimes better than, the more well-known k-means method.
The authors wish to thank Mats Carlsson for providing the Bifrost atmosphere used in this paper. We also wish to thank the anonymous referee for comments and suggestions that improved the clarity of this manuscript. This work has been supported by the Research Council of Norway through its Centers of Excellence scheme, project number 262622. Computational resources have been provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway. The computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at the PDC Centre for High Performance Computing (PDC-HPC) at the Royal Institute of Technology partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
|
http://arxiv.org/abs/2306.03035v1
|
20230605165312
|
Double summation addition theorems for Jacobi functions of the first and second kind
|
[
"Howard S. Cohl",
"Roberto S. Costas-Santos",
"Loyal Durand",
"Camilo Montoya",
"Gestur Olafsson"
] |
math.CA
|
[
"math.CA"
] |
Double summation addition theorems for Jacobi functions of the first and second kind
Howard S. Cohl ^∗, Roberto S. Costas-Santos ^, Loyal Durand ^†, Camilo Montoya ^∗ and Gestur Ólafsson ^

^∗ Applied and Computational Mathematics Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8910, USA
http://www.nist.gov/itl/math/msg/howard-s-cohl.cfm, [email protected], [email protected]

^ Department of Quantitative Methods, Universidad Loyola Andalucía, Sevilla, Spain
http://www.rscosan.com, [email protected]

^† Department of Physics, University of Wisconsin, Madison, WI 53706, USA
[email protected]

^ Department of Mathematics, Louisiana State University, Baton Rouge, LA 70803-4918, USA
http://www.math.lsu.edu/~olafsson, [email protected]

Received ???, in final form ????; Published online ????

In this paper we review and derive hyperbolic and trigonometric double summation addition theorems for Jacobi functions of the first and second kind. In connection with these addition theorems, we perform a full analysis of the relation between symmetric, antisymmetric and odd-half-integer parameter values for the Jacobi functions with certain Gauss hypergeometric functions which satisfy a quadratic transformation, including associated Legendre, Gegenbauer and Ferrers functions of the first and second kind.
We also introduce Olver normalizations of the Jacobi functions which are particularly useful in the derivation of expansion formulas when the parameters are integers. We introduce an application of the addition theorems for the Jacobi functions of the second kind to separated eigenfunction expansions of a fundamental solution of the Laplace-Beltrami operator on the compact and noncompact rank one symmetric spaces.
Addition theorems;
Jacobi function of the first kind;
Jacobi function of the second kind;
Jacobi polynomials;
ultraspherical polynomials
33C05, 33C45, 53C22, 53C35
Dedicated to Dick Askey, whose favorite function was the Jacobi polynomial.
§ INTRODUCTION
Jacobi polynomials (hypergeometric polynomials)
were introduced by the German mathematician Carl Gustav
Jacob Jacobi (1804–1851).
These polynomials first appear in an article by Jacobi which was published posthumously in 1859 by Heinrich Eduard Heine <cit.>. Jacobi polynomials, P_n^(α,β)(x),
are polynomials which for α, β>-1 are
orthogonal on the real segment [-1,1]<cit.>
and can be defined in terms of a terminating sum as follows:
P_n^{(α,β)}(cos θ) := \frac{Γ(α+1+n)}{Γ(n+1)\, Γ(α+β+1+n)} ∑_{k=0}^{n} (-1)^k \binom{n}{k} \frac{Γ(α+β+1+n+k)}{Γ(α+1+k)} sin^{2k}(θ/2),
where Γ is the gamma function <cit.>, and \binom{n}{k} the binomial coefficient <cit.>.
The above definition of the Jacobi polynomial is equivalent to
the following Gauss hypergeometric representation <cit.>:
P_n^{(α,β)}(x) = \frac{(α+1)_n}{n!}\, {}_2F_1\!\left(-n, n+α+β+1; α+1; \frac{1-x}{2}\right),
where x=cosθ.
We will return to the notations used in (<ref>)
in the following section.
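As a quick numerical sanity check of this representation (not part of the original derivation), one can compare both sides with mpmath for arbitrary test values:

from mpmath import mp, jacobi, hyp2f1, rf, factorial

mp.dps = 30                             # working precision in decimal digits
n, alpha, beta, x = 4, mp.mpf("0.3"), mp.mpf("-0.25"), mp.mpf("0.7")

lhs = jacobi(n, alpha, beta, x)         # Jacobi polynomial P_n^(alpha,beta)(x)
rhs = rf(alpha + 1, n) / factorial(n) \
      * hyp2f1(-n, n + alpha + beta + 1, alpha + 1, (1 - x) / 2)

print(lhs - rhs)                        # agrees to the working precision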
Ultraspherical polynomials, traditionally defined by <cit.>
C_n^λ(cosθ):=Γ(λ+1/2)
Γ(2λ+n)/Γ(2λ)Γ(λ+1/2+n)
P_n^(λ-1/2,λ-1/2)(cosθ),
are symmetric α=β Jacobi polynomials.
These polynomials are
commonly referred
to as Gegenbauer polynomials after Austrian mathematician
Leopold Gegenbauer
(1849–1903). However the Czech (Austrian) astronomer and mathematician Moriz Allé discovered and used many of their fundamental properties including their generating function and addition theorem <cit.> almost a decade prior to Gegenbauer <cit.>, and Heine <cit.>.
See the nice discussion of the history of the addition theorem for ultraspherical polynomials by Koornwinder in <cit.>; see also <cit.>.
The addition theorem for ultraspherical polynomials is given by
C_n^λ(cosθ_1cosθ_2
±sinθ_1sinθ_2cosϕ)
=n!/(2λ)_n∑_k=0^n (∓ 1)^k
(λ)_k(2λ)_2k/(-n)_k(λ-1/2)_k(2λ+n)_k
(sinθ_1sinθ_2)^kC_n-k^λ+k(cosθ_1)
C_n-k^λ+k(cosθ_2)C_k^λ-1/2(cosϕ).
This result is quite important all by itself.
In the special case λ=1/2, it becomes one way of writing the addition
theorem for spherical harmonics on the two-dimensional sphere:
P_n(cosθ_1cosθ_2±sinθ_1sinθ_2cosϕ)=
∑_k=-n^n(± 1)^k
(n-k)!/(n+k)! P_n^k(cosθ_1)
P_n^k(cosθ_2)cos(kϕ),
where the P_n^k are Ferrers functions of the first
kind <cit.>.
See the foreword of Willard Miller (1977) <cit.> written by Richard Askey for a
beautiful discussion (on pp. xix–xx) of addition theorems.
These addition theorems are intimately related to separated
eigenfunction expansions of spherical functions (reproducing kernels)
on highly symmetric (isotropic) manifolds.
In fact, on a d-dimensional hypersphere, the special argument
of Gegenbauer's addition theorem is easily expressible
in terms of the geodesic distance between two arbitrary points.
Given the addition theorem for ultraspherical polynomials (<ref>), it was a natural problem to extend this to Jacobi polynomials for αβ. It was a good match when Richard Askey, on sabbatical at the Mathematical Centre in Amsterdam during 1969–1970, met Tom Koornwinder there, who had some experience with group theoretical methods and was looking for a good subject for a Ph.D. thesis.
Askey suggested to Koornwinder the problem of finding an addition theorem for Jacobi polynomials, and he also arranged that Koornwinder could attend a special year at the Mittag-Leffler Institute in Sweden. There Koornwinder obtained the desired result <cit.>.
He later found that his group theoretic method and the resulting addition theorem in a special case were anticipated by two papers in Russian: Vilenkin and Šapiro <cit.>
realized that disk polynomials
<cit.>
and in particular the Jacobi polynomials
P_n^(α,0), α an integer, can be interpreted as spherical functions on the complex projective space SU(α+2)/U(α+1) <cit.> or as spherical functions on
the complex unit sphere U(α+2)/U(α+1) in
ℂ^α+2 as a homogeneous space of the unitary group U(α+2)
(see references by Ikeda, Kayama and Seto in <cit.>).
Šapiro
obtained from that observation, the addition theorem for Jacobi polynomials in the β=0 case
<cit.>.
Koornwinder initially presented his addition theorem
for Jacobi polynomials in a series of three papers in 1972
<cit.>.
Koornwinder gave four different proofs of the addition formula for Jacobi polynomials. His first proof focused on the spherical functions of the Lie group U(d)/U(d-1), d≥ 2 an integer, and appeared in <cit.>; his second proof, which used ordinary spherical harmonics, appeared in <cit.>; his third proof was an analytic one and appeared in <cit.>; and his fourth was a short proof using orthogonal polynomials in three variables, which appeared in <cit.>.
Let us consider
the trigonometric context of Koornwinder's addition theorem for
the Jacobi polynomials.
Let n∈ℕ_0, α>β>-1/2,
cos θ_1 = \tfrac{1}{2}(e^{iθ_1}+e^{-iθ_1}),
cos θ_2 = \tfrac{1}{2}(e^{iθ_2}+e^{-iθ_2}),
w∈(-1,1),
ϕ∈[0,π].
Then Koornwinder's addition theorem for Jacobi polynomials is given by
P_n^{(α,β)}(2|cos θ_1 cos θ_2 ± e^{iϕ} w sin θ_1 sin θ_2|^2-1)
=n!Γ(α+1)/Γ(α+n+1)∑_k=0^n
(α+1)_k(α+β+n+1)_k/(α+k)(β+1)_k(-n)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(-β-n)_l/(α+n+1)_l
(cosθ_1cosθ_2)^k-l(sinθ_1sinθ_2)^k+l
×P_n-k^(α+k+l,β+k-l)(cos(2θ_1))
P_n-k^(α+k+l,β+k-l)(cos(2θ_2))
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/βC_k-l^β(cosϕ) .
As we will see in Section <ref> below, this addition theorem and its various counterparts for Jacobi functions of the first and second kind are deeply connected to a 2-variable orthogonal polynomial system sometimes referred to as parabolic biangle polynomials 𝒫_k,l^(α,β)(w,ϕ).
Furthermore, this addition theorem for Jacobi polynomials represents a separated eigenfunction
expansion of the spherical functions on highly symmetric
(isotropic) manifolds (so-called compact symmetric spaces of rank one) and the special argument is again
given by the geodesic distance between two points on these manifolds.
We shall return to this later.
In the case of ultraspherical and Jacobi polynomials, the sum is terminating,
as one would expect since the object of study is a polynomial.
However, as Flensted-Jensen and Koornwinder realized <cit.>, the addition theorem for Jacobi polynomials can
be extended to Jacobi functions of the first kind by formally
taking the outer sum limit to infinity.
While the Jacobi polynomials P_n^(α,β) have a discrete degree n, the Jacobi functions φ_λ^(α,β) have a continuous parameter λ
(see (<ref>) below).
When one starts to consider Jacobi functions, then many
new questions arise which must be understood for a full
theory of the separated eigenfunctions expansions of Jacobi functions.
First of all, one must consider two separate contexts,
the trigonometric context
where the arguments of the functions are analytically
continued from the segment (-1,1) and also the hyperbolic
context, where the arguments of the functions are analytically
continued from the segment (1,∞).
On top of that, one must also consider the particular expansions
of Jacobi functions of the second kind.
Gegenbauer and Jacobi functions are solutions to
second-order ordinary differential equations.
Therefore there are two linearly independent solutions,
namely the functions of the first kind and the functions
of the second kind. The separated eigenfunction expansions
of the Gegenbauer functions of the first and second
kind were treated quite extensively in a paper by
Durand, Fishbane and Simmons (1976) <cit.>.
Durand extended Koornwinder's addition theorem to Jacobi (and other) functions of the second kind in <cit.>.
The study of multi-summation addition theorems for Jacobi
functions of the first and second kind seems not to have
moved forward since
the advances by
Durand and by Flensted-Jensen and Koornwinder.
In the remainder of this paper, we give the full
multi-summation expansions of Jacobi functions of the
second kind and bring the full theory of the expansions
of Jacobi functions to a circle. However, there still
remain open questions in the study, and we will return
to these questions later.
§ PRELIMINARIES
Throughout this paper we adopt the following set notations:
ℕ_0:={0}∪ℕ={0, 1, 2, 3, …}, and
we use the set ℂ which represents the complex
numbers.
Jacobi functions (and their special cases such as Gegenbauer, associated Legendre and Ferrers functions) have representations given in terms of Gauss hypergeometric functions which can be defined in terms of an infinite series over ratios of
shifted factorials (Pochhammer symbols). The shifted factorial
can be defined for a∈ℂ,
n∈ℕ_0
by <cit.>(a)_n:=(a)(a+1)⋯(a+n-1).
The following ratio of two gamma functions <cit.> are related to
the shifted factorial, namely
for a∈ℂ∖-ℕ_0, one
has
(a)_n=Γ(a+n)/Γ(a),
which allows one to extend the definition to non-positive
integer values of n.
Some other properties of shifted factorial which we will use are
(n,k∈ℕ_0, n≥ k)
Γ(a-n)=(-1)^nΓ(a)/(1-a)_n,
(-n)_k=(-1)^kn!/(n-k)!.
One also has the following
expression for the generalized
binomial coefficient
for z∈ℂ, n∈ℕ_0 <cit.>
\binom{z}{n} = (-1)^n \frac{(-z)_n}{n!}.
Define the multisets a:={a_1,…,a_r}, b:={b_1,…,b_s}.
We will also use the common notational product convention,
a_l∈ℂ, l∈ℕ, r∈ℕ_0, e.g.,
( a)_k:=(a_1,…,a_r)_k:=(a_1)_k(a_2)_k⋯(a_r)_k,
Γ( a):=Γ(a_1,…,a_r):=Γ(a_1)⋯Γ(a_r).
Also define the multiset notation a+t:={a_1+t,…,a_r+t}.
For any expression of the form
(z^2-1)^α, we fix the branch of the power functions such that
(z^2-1)^α:=(z+1)^α(z-1)^α,
for any fixed α∈ℂ and
z∈ℂ∖{-1,1}.
The generalized hypergeometric
function <cit.> is defined by the
infinite series <cit.>
{}_rF_s(a; b; z) := ∑_{k=0}^{∞} \frac{(a)_k}{(b)_k} \frac{z^k}{k!},
where |z|<1, b_j∉-ℕ_0, for
j∈{1, …, s};
and elsewhere by analytic continuation.
Further define the Olver normalized (scaled or regularized) generalized hypergeometric series {}_r𝐅_s(a; b; z), given by
{}_r𝐅_s(a; b; z) := \frac{1}{Γ(b)}\, {}_rF_s(a; b; z) = ∑_{k=0}^{∞} \frac{(a_1,…,a_r)_k}{Γ(b+k)} \frac{z^k}{k!},
which is entire for all a_l,b_j∈ℂ, l∈{1,…,r}, j∈{1,…,s}.
Both the generalized and Olver normalized generalized hypergeometric series, if
nonterminating, are entire if r≤ s, convergent for |z|<1 if r=s+1
and divergent if r≥ s+1.
The special case of the generalized hypergeometric function with r=2, s=1 is referred to as the Gauss hypergeometric function <cit.>, or simply the hypergeometric function. It has many interesting properties, including linear transformations which were discovered by Euler and Pfaff.
Euler's linear transformation is <cit.>
{}_2F_1\!\left(a, b; c; z\right) = (1-z)^{c-a-b}\, {}_2F_1\!\left(c-a, c-b; c; z\right)
and Pfaff's linear transformation is <cit.>
{}_2F_1\!\left(a, b; c; z\right) = (1-z)^{-a}\, {}_2F_1\!\left(a, c-b; c; \frac{z}{z-1}\right) = (1-z)^{-b}\, {}_2F_1\!\left(b, c-a; c; \frac{z}{z-1}\right).
§.§ The Gegenbauer and associated Legendre functions
The functions which satisfy quadratic transformations of the Gauss
hypergeometric function are given by Gegenbauer and associated Legendre functions of the
first and second kind. As we will see, these
functions correspond to Jacobi functions of the first and second kind when
their parameters satisfy certain relations. We now describe some of the
properties of these functions, which have a deep and long history.
Let n∈_0. The Gegenbauer (ultraspherical) polynomial which is an important
specialization of the Jacobi polynomial for
symmetric parameters values, is given
in terms of a terminating Gauss hypergeometric series
<cit.>
C_n^μ(z) = \frac{(2μ)_n}{(μ+1/2)_n}\, P_n^{(μ-1/2,μ-1/2)}(z) = \frac{(2μ)_n}{n!}\, {}_2F_1\!\left(-n, 2μ+n; μ+\tfrac{1}{2}; \frac{1-z}{2}\right).
Note that the ultraspherical polynomials
satisfy the following parity relation
<cit.>
C_n^μ(-z)=(-1)^n C_n^μ(z).
Gegenbauer functions which generalize ultraspherical polynomials
with arbitrary degrees n=λ∈ are solutions w=w(z)=w_λ^μ(z) to the
Gegenbauer differential equation
<cit.>
(z^2-1)\, \frac{d^2 w(z)}{dz^2} + (2μ+1)\, z\, \frac{dw(z)}{dz} - λ(λ+2μ)\, w(z) = 0.
There are two linearly independent solutions to this second order ordinary differential equation which are referred to as Gegenbauer functions of the first and second kind C_λ^μ(z), D_λ^μ(z).
A closely connected differential equation to the Gegenbauer differential equation (<ref>) is the associated Legendre differential equation which is given by <cit.>
(1-z^2)\, \frac{d^2 w(z)}{dz^2} - 2z\, \frac{dw(z)}{dz} + \left(ν(ν+1) - \frac{μ^2}{1-z^2}\right) w(z) = 0.
Two linearly independent solutions to this equation are referred to as associated Legendre functions of the first and second kind P_ν^μ(z), Q_ν^μ(z). In the following subsection we will present the definitions of these important functions which are Gauss hypergeometric functions which satisfy a quadratic transformation.
§.§.§ Hypergeometric representations of the Gegenbauer and associated Legendre functions
The Gegenbauer function of the first kind is defined by <cit.>
C_λ^μ(z) := \frac{\sqrt{π}\, Γ(λ+2μ)}{2^{2μ-1}\, Γ(μ)\, Γ(λ+1)}\, {}_2𝐅_1\!\left(-λ, 2μ+λ; μ+\tfrac{1}{2}; \frac{1-z}{2}\right),
where λ+2μ∉-_0.
It is a clear extension of the Gegenbauer
polynomial when the index is allowed to be
a complex number as well as a non-negative integer.
Two representations
which will be useful for us in comparing to
the Jacobi function of the second kind are referred to as
Gegenbauer functions of the second kind which have hypergeometric representations given
with λ+2μ∉-_0, <cit.>
D_λ^μ(z)
:=^iπμΓ(λ+2μ)/Γ(μ)(2z)^λ+2μ211/2λ+μ,
1/2λ+μ+1/2λ+μ+11/z^2
=
^iπμ2^λΓ(λ+μ+1/2)
Γ(λ+2μ)/√(π) Γ(μ)(z-1)^λ+μ+1/2
(z+1)^μ-1/221λ+1,λ+μ+1/22λ+2μ+12/1-z,
and in the second representation
λ+μ+1/2∉-_0.
The equality of these two representations of the Gegenbauer function of the second kind follow from a quadratic transformation of the Gauss hypergeometric function from Group 3 to Group 1 in <cit.>.
The associated Legendre function
of the first kind is defined as <cit.>
P_ν^μ(z) := \left(\frac{z+1}{z-1}\right)^{μ/2} {}_2𝐅_1\!\left(-ν, ν+1; 1-μ; \frac{1-z}{2}\right),
where |1-z|<2, and elsewhere in z by analytic continuation.
The associated Legendre function of the second kind
Q_ν^μ:ℂ∖(-∞,1]→ℂ,ν+μ∉-ℕ, has the following two single Gauss hypergeometric function representations
<cit.>, <cit.>,
Q_ν^μ(z):=√(π) ^iπμΓ(ν+μ+1)
(z^2-1)^1/2μ/2^ν+1
z^ν+μ+121ν+μ+1/2,
ν+μ+2/2ν+3/21/z^2
=
2^ν^iπμΓ(ν+1)Γ(ν+μ+1) (z+1)^1/2μ/(z-1)^1/2μ+ν+121ν+1,ν+μ+1
2ν+22/1-z,
and for the second representation, ν∉-.
The first and second single Gauss hypergeometric representations are convergent as a Gauss hypergeometric series for
|z|>1, respectively |z-1|>2, and elsewhere in z∈∖(-∞,1] by analytic continuation
of the Gauss hypergeometric function.
The relations between the
associated Legendre functions of the first and second kind
are related
to the Gegenbauer functions of the first and second kind by <cit.>
P_ν^μ(z)=Γ(1/2-μ)Γ(ν+μ+1)/2^μ√(π) Γ(ν-μ+1)(z^2-1)^1/2μ
C_ν+μ^1/2-μ(z),
Q_ν^μ(z)=^2π i(μ-1/4)√(π) Γ(1/2-μ)Γ(ν+μ+1)/2^μΓ(ν-μ+1)(z^2-1)^1/2μ
D_ν+μ^1/2-μ(z),
which are valid for μ∈∖{1/2,3/2,…}, ν+μ∈∖-.
Equivalently, the inverse relationships are given by
C_λ^μ(z)=√(π) Γ(λ+2μ)2^μ-1/2Γ(μ)Γ(λ+1)(z^2-1)^μ/2-1/4
P_λ+μ-1/2^1/2-μ(z),
D_λ^μ(z)=^2π i(μ-1/4)Γ(λ+2μ)/√(π) 2^μ-1/2Γ(μ)Γ(λ+1)
(z^2-1)^1/2μ-1/4Q_λ+μ-1/2^1/2-μ(z),
which are valid for all λ+2μ∈∖-_0.
By comparing Gauss hypergeometric representations of the various functions, one may express _2F_1(a,a+1/2;c;z) in terms of associated Legendre functions of the first and second kind P_ν^μ, Q_ν^μ and the Gegenbauer functions of the first and second kind C_ν^μ, D_ν^μ using the following very useful formulas. Let
z∈ℂ∖[1,∞). Then
21a,a+1/2cz=
2^c-1z^1/2(1-c)(1-z)^1/2c-a-1/2P_2a-c^1-c(1/√(1-z))
=2^2c-2Γ(c-1/2)Γ(2 (a-c+1))/√(π) Γ(2a)(1-z)^aC_2a-2c+1^c-1/2(1/√(1-z)),
where 2c∉{1,-1,-3,…}, 2a-2c∉{-2,-3,…}, and
21a,a+1/2cz=
^iπ(c-2a-1/2)2^c-1/2(1-z)^1/2c-a-1/4/√(π) Γ(2a)z^1/2c-1/4Q_c-3/2^2a-c+1/2(1/√(z))
=^iπ(2a-c)2^2c-2a-1Γ(c-2a)(1-z)^c-2a-1/2/Γ(2c-2a-1)z^c-a-1/2
D_2a-1^c-2a(1/√(z)),
where c,c-2a∉-ℕ_0.
§.§.§ The Gegenbauer functions on-the-cut (-1,1) and the Ferrers Functions
We will consider Jacobi functions of the second kind on-the-cut
in Section <ref>.
As we will see, for certain combinations
of the parameters which we will describe below, the Jacobi functions
of the first and second kind on the cut are related to the
the Gegenbauer functions of the first and second kind on-the-cut and the associated Legendre
functions of the first and second kind on-the-cut (Ferrers functions).
The Gegenbauer functions of the first and second kind
on-the-cut are defined in terms of the Gegenbauer functions immediately above and below the segment (-1,1) in the complex plane. These definition are given by
<cit.>
C_λ^μ(x):=D_λ^μ(x+i0)+^-2π iμD_λ^μ(x-i0)=C_λ^μ(x± i0), x∈(-1,1]
D_λ^μ(x):=-iD_λ^μ(x+i0)+i^-2π iμD_λ^μ(x-i0), x∈(-1,1).
Note that C_λ^μ(x) and D_λ^μ are real for real values of λ and μ.
The Ferrers functions of the first and second kind are defined
as <cit.>
P_ν^μ(x):=^± iπμP_ν^μ(x± i0)=i^-iπμ/π(^-1/2 iπμQ_ν^μ(x+i0)-^1/2 iπμQ_ν^μ(x-i0)),
Q_ν^μ(x):=^-iπμ/2(^-1/2 iπμQ_ν^μ(x+i0)+^1/2 iπμQ_ν^μ(x-i0)).
Using the above definition one can readily obtain a single hypergeometric representation of the Gegenbauer function of the first kind on-the-cut, namely
C_λ^μ(x)=
√(π) Γ(2μ+λ)/2^2μ-1Γ(μ)Γ(λ+1)21-λ,2μ+λμ+1/21-x/2,
which is identical to the Gegenbauer function of the first kind (<ref>) because this function analytically continues to the segment (-1,1), see (<ref>).
For the Gegenbauer function of the second kind on-the-cut, one can readily obtain a double hypergeometric representation of by using the definition (<ref>) and then using the interrelation between the Gegenbauer function of the second kind and the Legendre function of the second kind and then comparing to the Ferrers function of the second kind through its definition
(<ref>).
However, first we will give hypergeometric representations of the Ferrers function
of the first and second kind which are easily found in the literature. The first author recently co-authored a paper with Park and Volkmer where all double hypergeometric representations of the Ferrers function of the second kind were computed <cit.>.
Using (<ref>) one can derive hypergeometric representations of the Ferrers function of the first kind
(associated Legendre function of the first kind on-the-cut)
𝖯_ν^μ:(-1,1)→ℂ. For instance,
one has a single hypergeometric representation given by
<cit.>
P_ν^μ(x) = \left(\frac{1+x}{1-x}\right)^{μ/2} {}_2𝐅_1\!\left(-ν, ν+1; 1-μ; \frac{1-x}{2}\right).
Let ν∈, μ∈∖, ν+μ∉-, then a double hypergeometric representation of the Ferrers function of the second kind is given by
<cit.>
Q_ν^μ (x) = π/2 sin(πμ)( cos(πμ)
( 1+x/1-x)^1/2μ21-ν, ν+11 - μ1-x/2
- Γ(ν+μ+1)/Γ(ν-μ+1)( 1-x/1+x)^1/2μ21-ν, ν+11+μ1-x/2).
Let x∈∖((-∞,1]∪(1,∞)), λ,ν,μ∈. Then
D_λ^μ(x)=
Γ(λ+2μ)/2^μ-3/2√(π) Γ(μ)Γ(λ+1)(1-x^2)^1/2μ-1/4 Q_λ+μ
-1/2^1/2-μ(x),,
such that λ+2μ∉-_0 and
Q_ν^μ(x)=√(π) Γ(1/2-μ)Γ(ν+μ+1)/2^μ+1Γ(ν-μ+1)(1-x^2)^1/2μ D_ν+μ^1/2-μ(x),
such that μ∉{1/2,3/2,…} and ν+μ∉-.
Start with the definition (<ref>) and use the interrelation between the Gegenbauer function of the second kind on-the-cut and the Ferrers function of the second kind
(<ref>). Then applying this relation to the double hypergeometric representation
completes the proof.
Let x∈∖((-∞,1]∪(1,∞)), λ,μ∈, such that λ+2μ∉-_0. Then
D_λ^μ(x)=√(π)/cos(πμ)2^μ-1/2Γ(μ)(sin(πμ)Γ(λ+2μ)/Γ(λ+1)(1+x)^μ-1/221λ+μ+1/2,1/2-λ-μ1/2+μ1-x/2
-1/(1-x)^μ-1/221λ+μ+1/2,1/2-λ-μ3/2-μ1-x/2).
Start with the definition (<ref>) and use the interrelation between the Gegenbauer function of the second kind and the Legendre function of the second kind
(<ref>). Then comparing with the double hypergeometric representation given by (<ref>) completes the proof.
Note that we also have
interrelation between
the Ferrers function of the first kind
and the Gegenbauer function of the first kind on-the-cut
<cit.>
P_ν^μ(x)=Γ(1/2-μ)Γ(ν+μ+1)/2^μ√(π) Γ(ν-μ+1)(1-x^2)^1/2μ C_ν+μ^1/2-μ(x),
where μ∉{1/2,3/2,…}, ν+μ∉- ℕ,
or
equivalently
C_λ^μ(x)=
√(π) Γ(λ+2μ)2^μ-1/2Γ(μ)Γ(λ+1)(1-x^2)^1/2μ-1/4 P_λ+μ-1/2^1/2-μ(x).
Finally we should add that the Legendre polynomial (the associated Legendre
function of the first kind P_ν^μ and the Ferrers
function of the first kind P_ν^μ with μ=0
and ν=n∈ℤ) is given by <cit.>
P_n(x):=P_n^0(x)= P_n^0(x)=C_n^1/2(x)=P_n^(0,0)(x),
which vanishes for n negative.
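These coincidences are easy to confirm numerically; a brief illustrative check with mpmath, using arbitrary test values:

from mpmath import mp, legendre, gegenbauer, jacobi

mp.dps = 30
n, x = 5, mp.mpf("0.42")

print(legendre(n, x))                    # P_n(x)
print(gegenbauer(n, mp.mpf("0.5"), x))   # C_n^{1/2}(x)
print(jacobi(n, 0, 0, x))                # P_n^{(0,0)}(x); all three agree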
§.§ Brief introduction to Jacobi functions of the first and second kind
Now we will discuss fundamental properties and special
values and limits for the Jacobi functions.
Jacobi functions are
complex solutions w=w(z)=w_γ^(α,β)(z)
to the Jacobi differential equation
<cit.>
(1-z^2)\, \frac{d^2 w}{dz^2} + \bigl(β-α-(α+β+2)z\bigr)\, \frac{dw}{dz} + γ(α+β+γ+1)\, w = 0,
which is a second order linear homogeneous differential equation.
Solutions to this differential equation satisfy the following three-term recurrence relation <cit.>
B_γ^(α,β)w_γ^(α,β)(z)+A_γ^(α,β) (z)w_γ+1^(α,β)(z)+w_γ+2^(α,β)(z)=0,
where
A_γ^(α,β)(z)=-(α+β+2γ+3)(α^2-β^2+
(α+β+2γ+2)
(α+β+2γ+4)
z)/2(γ+2)(α+β+γ+2)(α+β+2γ+2),
B_γ^(α,β)=(α+γ+1)(β+γ+1)(α+β+2γ+4)/(γ+2)(α+β+γ+2)(α+β+2γ+2).
This three-term recurrence relation is very useful for deriving various solutions to (<ref>) when solutions are known for values which have integer separations.
§.§.§ The Jacobi function of the first kind
The Jacobi function of the first kind
is a generalization of the Jacobi polynomial (as given by (<ref>)) where
the degree is no longer restricted to be an integer.
In the following material we derive properties
for the Jacobi function of the first kind.In the following result we present the four
single Gauss hypergeometric function representations
of the Jacobi function of the first kind.
Let
α,β,γ∈ℂ such that
α+γ∉-ℕ. Then,
the Jacobi function of the first kind
P_γ^(α,β):ℂ∖(-∞,-1]→ℂ
can be defined by
P_γ^{(α,β)}(z) = \frac{Γ(α+γ+1)}{Γ(γ+1)}\, {}_2𝐅_1\!\left(-γ, α+β+γ+1; α+1; \frac{1-z}{2}\right)
= \frac{Γ(α+γ+1)}{Γ(γ+1)} \left(\frac{2}{z+1}\right)^{β} {}_2𝐅_1\!\left(-β-γ, α+γ+1; α+1; \frac{1-z}{2}\right)
= \frac{Γ(α+γ+1)}{Γ(γ+1)} \left(\frac{z+1}{2}\right)^{γ} {}_2𝐅_1\!\left(-γ, -β-γ; α+1; \frac{z-1}{z+1}\right)
= \frac{Γ(α+γ+1)}{Γ(γ+1)} \left(\frac{2}{z+1}\right)^{α+β+γ+1} {}_2𝐅_1\!\left(α+γ+1, α+β+γ+1; α+1; \frac{z-1}{z+1}\right).
Start with (<ref>) and replace
the shifted factorial by a ratio of gamma
functions using (<ref>),
the factorial n!=Γ(n+1) and substitute
n↦γ∈ℂ,
x↦ z. Application of Euler's transformation (<ref>) and Pfaff's transformation (<ref>) provides the
other three single hypergeometric representations. This completes the proof.
There exist double Gauss hypergeometric
representations of the Jacobi function of the
first kind which can be obtained by using
the linear transformation formulas for the Gauss hypergeometric function
z↦ z^-1, z↦ (1-z)^-1, z↦ 1-z, z↦ 1-z^-1<cit.>,
respectively.
However, these in general
will be given in terms of a sum of two Gauss
hypergeometric functions.
We will will not present the double hypergeometric representations of
the Jacobi function of the first kind here.
One has the following connection relation for the Jacobi function of the first kind.
Let γ,α,β∈, z∈∖(-∞,1], γ∉-, β+γ∉_0. Then
P_-γ-α-β-1^(α,β)(z)=
Γ(-β-γ)Γ(γ+1)/Γ(-γ-α-β)
Γ(α+γ+1)P_γ^(α,β)(z).
This connection relation
can be derived by using (<ref>) and making the replacement
γ↦-γ-α-β-1
which
leaves the parameters and argument of the hypergeometric function unchanged. Comparing the prefactors completes the proof.
One of the consequences of the definition of the
Jacobi function of the first kind is the following
special value:
P_γ^(α,β)(1)
=Γ(α+γ+1)/Γ(α+1)Γ(γ+1),
where α+γ∉-ℕ.
For γ=n∈ℤ one has
P_n^(α,β)(1)=(α+1)_n/n!,
P_n^(α,β)(-1)=(-1)^n(β+1)_n/n!,
which is consistent with (<ref>)
and the parity relation for Jacobi polynomials
(see <cit.>).
From (<ref>) we have
P_0^(α,β)(z)=1,
and P_k^(α,β)(z)=0 for all k∈-ℕ.
§.§.§ The Jacobi function of the second kind
The Jacobi function of the second kind Q_γ^(α,β)(z), γ∈
is a generalization of the Jacobi function
of the second kind Q_n^(α,β)(z), n∈ℕ_0 (as given by <cit.>), where the degree is no longer restricted to be an integer.
In the following material we
derive properties for the Jacobi function of the second kind.
Below we give the four single Gauss hypergeometric function representations
of the Jacobi function of the second kind.
Let γ,α,β,z∈ℂ such that
z∈ℂ∖[-1,1],
α+γ,β+γ∉-ℕ.
Then, the Jacobi function of the second kind
has the following Gauss hypergeometric representations
Q_γ^{(α,β)}(z) := \frac{2^{α+β+γ}\, Γ(α+γ+1)\, Γ(β+γ+1)}{(z-1)^{α+γ+1}(z+1)^{β}}\, {}_2𝐅_1\!\left(γ+1, α+γ+1; α+β+2γ+2; \frac{2}{1-z}\right)
= \frac{2^{α+β+γ}\, Γ(α+γ+1)\, Γ(β+γ+1)}{(z-1)^{α+β+γ+1}}\, {}_2𝐅_1\!\left(β+γ+1, α+β+γ+1; α+β+2γ+2; \frac{2}{1-z}\right)
= \frac{2^{α+β+γ}\, Γ(α+γ+1)\, Γ(β+γ+1)}{(z-1)^{α}(z+1)^{β+γ+1}}\, {}_2𝐅_1\!\left(γ+1, β+γ+1; α+β+2γ+2; \frac{2}{1+z}\right)
= \frac{2^{α+β+γ}\, Γ(α+γ+1)\, Γ(β+γ+1)}{(z+1)^{α+β+γ+1}}\, {}_2𝐅_1\!\left(α+γ+1, α+β+γ+1; α+β+2γ+2; \frac{2}{1+z}\right).
Start with <cit.> and
let n↦γ∈ℂ and x↦ z.
Application of Pfaff's (z↦ z/(z-1)) and
Euler's (z↦ z) transformations
<cit.> provides the other
three representations. This completes the proof.
One has the following
connection relation between Jacobi functions of the first kind and Jacobi functions of the second kind.
Let γ,α,β∈, z∈∖(-∞,1], α+γ,β+γ∉-, α+β+2γ∉. Then
P_γ^(α,β)(z)=
-2sin(π(β+γ))/πsin(π(α+β+2γ+1))
×(
sin(πγ)
Q_γ^(α,β)(z)
-sin(π(α+γ))Γ(α+γ+1)Γ(β+γ+1)/Γ(γ+1)Γ(α+β+γ+1)Q_-α-β-γ-1^(α,β)(z)).
This can be derived by starting with (<ref>), applying the linear transformation <cit.>z↦ z^-1 and then comparing twice with Theorem <ref>.
Using (<ref>) one can see that
for γ=n∈_0, that Q_-α-β-γ-1^(α,β)(z) is a Jacobi polynomial, namely
Q_-α-β-1-n^(α,β)(z)=
Γ(-α)Γ(-β)/2Γ(-α-β)n!(α+β+1)_n/(α+1)_n(β+1)_n
P_n^(α,β)(z)
=-π/2sin(π(α+β))/sin(πα)sin(πβ)n! Γ(α+β+1+n)/Γ(α+1+n)Γ(β+1+n)
P_n^(α,β)(z).
From Theorem
<ref> one can derive the following special values for Q_-1^(α,β)(z)and Q_0^(α,β)(z), namely
Q_{-1}^{(α,β)}(z) = \frac{2^{α+β-1}\, Γ(α)\, Γ(β)}{Γ(α+β)\, (z-1)^{α}(z+1)^{β}},
Q_{0}^{(α,β)}(z) = \frac{2^{α+β}\, Γ(α+1)\, Γ(β+1)}{(z+1)^{α+β+1}}\, {}_2𝐅_1\!\left(α+1, α+β+1; α+β+2; \frac{2}{1+z}\right).
Using the three-term recurrence relation (<ref>) one can derive values of the Jacobi function of the second kind at all negative integer values. For instance, one can derive
Q_-2^(α,β)(z) =
2^α+β-2Γ(α-1)Γ(β-1)/Γ(α+β-1)(z-1)^α(z+1)^β(α-β+(α+β-2)z),
and also expressions for Jacobi functions of the second kind with further negative integer values of γ.
If one examines the Gauss hypergeometric representations presented in Theorem <ref> one can see that they are not defined for certain values of γ, α, β since we must avoid α+γ and β+γ being a negative integer. In fact, these singularities are removable and one is able to compute the values of these Jacobi functions. One can evaluate the Jacobi function of the second kind when the parameters α, β, and degree γ is a non-negative integer in the following result, which was inspired by the work in <cit.>.
Let n,a,b∈_0, z∈∖[-1,1]. Then
Q_n^{(a,b)}(z) = \frac{(-1)^{a+n}}{2^{n+1}} ∑_{k=0, k≠n}^{a+b+2n} \frac{(-2)^k}{n-k} \left((z+1)^{n-k}-(z-1)^{n-k}\right) P_k^{(a+n-k,b+n-k)}(z)
+ \frac{(-1)^{a}}{2} \log\!\left(\frac{z+1}{z-1}\right) P_n^{(a,b)}(z).
Start with the integral
representation for the Jacobi function of the second kind
<cit.>
Q_γ^{(α,β)}(z) = \frac{1}{2^{γ+1}(z-1)^{α}(z+1)^{β}} ∫_{-1}^{1} \frac{(1-t)^{α+γ}(1+t)^{β+γ}}{(z-t)^{γ+1}}\, dt,
provided (α+γ), (β+γ)>-1<cit.>
and
identify (γ,α,β)=(n,a,b)∈_0^3.
Then consider
μ_{n,k}^{(a,b)}(z) := \frac{d^k}{dz^k}\left[(1-z)^{n+a}(1+z)^{n+b}\right] = (-1)^k 2^k k!\, (1-z)^{a+n-k}(1+z)^{b+n-k}\, P_k^{(a+n-k,b+n-k)}(z),
where we have used the Rodrigues-type formula for Jacobi polynomials
<cit.>.
It is easy to show that
(1-t)^n+a(1+t)^n+b=∑_k=0^2n+a+bμ_n,k^(a,b)(z)(t-z)^k/k!,
and the right-hand side is valid for all z∈.
Now start with (<ref>) and insert (<ref>) into the integrand and perform the integration over t∈(-1,1) using
∫_{-1}^{1} (z-t)^{k-n-1}\, dt = \begin{cases} \dfrac{(z+1)^{k-n}-(z-1)^{k-n}}{k-n}, & k ≠ n,\\[1ex] \log\!\left(\dfrac{z+1}{z-1}\right), & k = n, \end{cases}
which completes the proof.
By using (<ref>) we find that
if z=1+ϵ then as ϵ→0^+ one has the following behavior of the Jacobi function of the second kind near the singularity at z=1, namely
Q_γ^(α,β)(1+ϵ)∼2^α-1Γ(α)Γ(β+γ+1)/Γ(α+β+γ+1)ϵ^-α,
where ℜα>0, β+γ∉-ℕ.
By using (<ref>) we
see that as |z|→∞ one has
Q_γ^(α,β)(z)∼2^α+β+γΓ(α+γ+1)Γ(β+γ+1)/Γ(α+β+2γ+2)z^α+β+γ+1,
where α+γ+1,β+γ∉-ℕ.
§.§.§ Jacobi functions of the first and second kind on-the-cut
We now refer to the real segment (-1,1) as the cut and the Jacobi functions
of the first and second kind on-the-cut as P_γ^(α,β),
Q_γ^(α,β).
The natural definitions of these Jacobi functions
are due to Durand and can be found in
<cit.> (see also
<cit.>). These are given as follows:
P_γ^(α,β)(x)
:=i/π(^iπα
Q_γ^(α,β)(x+i0)
-^-iπαQ_γ^(α,β)
(x-i0))
=P_γ^(α,β)(x± i0),
Q_γ^(α,β)(x)
:=1/2(^iπα
Q_γ^(α,β)(x+i0)
+^-iπαQ_γ^(α,β)(x-i0)
).
Note that the Jacobi function of the first kind
on-the-cut (<ref>) is simply an analytic
continuation of the Jacobi function of the first
kind (see Theorem <ref>) since
the complex-valued function is continuous across
the real interval (-1,1]. On the other hand,
the Jacobi function of the second kind is not an
analytic continuation of the Jacobi function of
the second kind
(see Theorem <ref>). This is because
Q_γ^(α,β) is not continuous
across the real interval (-1,1).
Hence, an `average' (<ref>) must be taken of the
function values with infinitesimal positive and negative
arguments in order to define it. Originally,
in Szegő's book <cit.> (see also
<cit.>) a definition for
the Jacobi function of the second kind on-the-cut was
given by Q_γ^(α,β)(x):=1/2( Q_γ^(α,β)(x+i0)
+Q_γ^(α,β)(x-i0)), but as is pointed
out by Durand <cit.>, Szegő's definition
destroys the analogy between
P_γ^(α,β)(cosθ),
Q_γ^(α,β)(cosθ) and the
trigonometric functions. Hence with the updated Durand
definitions for the Jacobi functions of the first and
second kind on-the-cut (<ref>), (<ref>),
one has the following asymptotics as n→∞, namely <cit.>
Q_n^(α,β)(cosθ± i0)∼1/2(π/n)^1/2(sin(12θ))^-α-1/2(cos(12θ))^-β-1/2^∓ iNθ∓ iπ/2(α+1/2),
where N:=n+1/2α+1/2β+1/2.
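The Durand definitions can also be probed numerically by approximating the boundary values x±i0. In the sketch below (not from the paper) we take α=β=0, in which case the Jacobi function of the second kind on-the-cut should reduce to the Ferrers function of the second kind; we assume that mpmath's legenq with its default type gives that Ferrers function, and approximate x±i0 by x±iδ for a small δ.
\begin{verbatim}
# Numerical sketch: Durand's on-the-cut definition versus the Ferrers function
# of the second kind for alpha = beta = 0.
from mpmath import mp, mpf, mpc, gamma, hyp2f1, legenq, exp, pi

mp.dps = 30

def jacobiQ(g, a, b, z):
    pre = 2**(a + b + g) * gamma(a + g + 1) * gamma(b + g + 1) / gamma(a + b + 2*g + 2)
    return pre / ((z - 1)**(a + g + 1) * (z + 1)**b) \
           * hyp2f1(g + 1, a + g + 1, a + b + 2*g + 2, 2/(1 - z))

def jacobiQ_cut(g, a, b, x, delta=mpf('1e-12')):
    zp, zm = mpc(x, delta), mpc(x, -delta)
    return (exp(1j*pi*a) * jacobiQ(g, a, b, zp)
            + exp(-1j*pi*a) * jacobiQ(g, a, b, zm)) / 2

g, x = mpf('0.3'), mpf('0.4')
print(jacobiQ_cut(g, 0, 0, x))
print(legenq(g, 0, x))                 # Ferrers function of the second kind
\end{verbatim}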
There are many double hypergeometric
representations of the Jacobi function
of the second kind on-the-cut
Q_γ^(α,β):ℂ∖((-∞,-1]
∪[1,∞))→ℂ.
These hypergeometric representations follow
by applying the definition (<ref>) to
Theorem <ref> which provides the Gauss hypergeometric
representations for the Jacobi function of the second kind.
The application of (<ref>) takes the argument of the
Gauss hypergeometric functions just above and below the ray
(1,∞)
in which it is known that the Gauss hypergeometric
function is discontinuous.
The values of the Gauss hypergeometric function with argument just
above and below this ray may then be transformed
into a region where the Gauss hypergeometric function
is continuous in a complex neighborhood of that argument
by utilizing the transformations which one can find in
<cit.>.
These transformations map Gauss hypergeometric
functions with argument x± i0 to sums of Gauss
hypergeometric functions with arguments given by
1/x, 1-x, 1-x^-1 and (1-x)^-1.
Eight Gauss hypergeometric function representations of
the Jacobi function of the second kind on-the-cut can
be obtained by starting with (<ref>)-(<ref>),
applying the transformation <cit.>z↦ z^-1 and by either utilizing
the Euler (<ref>) or Pfaff (<ref>)
transformations as needed.
There are certainly more Gauss hypergeometric
representations that can be obtained for the Jacobi
function of the second kind on-the-cut by applying
<cit.>, but the derivation
of these representations must be left to a later publication.
We will give two of these here for
γ,α,β∈ℂ such that
α, β∉ℤ, α+γ,β
+γ∉-ℕ, namely
Q_γ^(α,β)(x)=
π/2sin(πα)(-
cos(πα)Γ(α+γ+1)/Γ(γ+1)21-γ,α+β+γ+11+α1-x/2
+Γ(β+γ+1)/Γ(α+β+γ+1)(2/1-x)^α(2/1+x)^β21-α-β-γ,γ+11-α1-x/2)
=
π/2^γ+1sin(πα)
(-cos(πα)Γ(α+γ+1)/Γ(γ+1)
(1+x)^γ21-γ,-β-γ1+αx-1/x+1
+Γ(β+γ+1)/Γ(α+β+γ+1)(1+x)^α+γ/(1-x)^α
21-α-β-γ,-α-γ1-αx-1/x+1).
Just as we were able to compute the values of the Jacobi function of the second kind with non-negative integer parameters and degree, the same evaluation can be accomplished for the Jacobi function of the second kind on-the-cut which we present now.
Let n,a,b∈ℕ_0, x∈(-1,1). Then
Q_n^(a,b)(x)=(-1)^n/2^n+1∑_k=0
k n^a+b+2n(-2)^k/(n-k)((1+x)^n-k-(x-1)^n-k)P_k^(a+n-k,b+n-k)(x)
+1/2log(1+x/1-x)P_n^(a,b)(x).
Start with Theorem <ref> and use the definition (<ref>)
which completes the proof.
Note that by setting a=b in the above result we can
obtain an interesting finite sum expression for the Ferrers functions of the second kind with non-negative integer degree
and order given as a sum over ultraspherical polynomials.
Let n,a∈ℕ_0, x∈(-1,1). Then
Q_n^a(x)=(-1)^a(1-x^2)^1/2a/2√(π)((-1)^n+a2^n(n+a)!
×∑_k=0
k n-a^2n(-1)^kΓ(n-k+1/2)/2^k(2n-k)!(n-a-k)((1+x)^n-a-k-(x-1)^n-a-k)C_k^n-k+1/2(x)
+2^aΓ(a+12)log(1+x/1-x)C_n-a^a+1/2(x)).
Start with (<ref>) and
set a=b. Then utilizing
(<ref>) below with
(<ref>) completes the proof.
By using (<ref>) we
see that as x=1-ϵ one has
as ϵ→ 0^+,
Q_γ^(α,β)(1-ϵ)∼2^α-1Γ(α)Γ(β+γ+1)/Γ(α+β+γ+1)ϵ^-α,
where β+γ+1∉-ℕ_0 and
ℜα>0.
§.§ Specializations to Gegenbauer, associated Legendre and Ferrers functions
Here we discuss some limiting cases where the Jacobi functions
reduce to more elementary functions such as Gegenbauer,
associated Legendre, and Ferrers functions.
These identities involve
symmetric and antisymmetric Jacobi functions of
the first kind.
The relation between the symmetric Jacobi function
of the first kind and the Gegenbauer function
of the first kind for z∈ℂ∖(-∞,-1]
is given by
P_γ^(α,α)(z)=
Γ(2α+1)Γ(α+γ+1)/Γ(α+1)Γ(2α+γ+1)
C_γ^α+1/2(z).
This follows by starting with (<ref>) and then
comparing it to the Gauss hypergeometric representation
of the Gegenbauer function of the first kind on the
right-hand side using (<ref>).
The relation between the symmetric Jacobi function
of the first kind and the Ferrers function of the
first kind is
P_γ^(α,α)(x)=
2^αΓ(α+γ+1)/Γ(γ+1)
(1-x^2)^-1/2α P_α+γ^-α(x),
where x∈ℂ∖((-∞,-1]∪[1,∞)) and the relation between
the symmetric Jacobi function
of the first kind
and the associated Legendre function
of the first kind is
P_γ^(α,α)(z)=
2^αΓ(α+γ+1)/Γ(γ+1)(z^2-1)^-1/2α
P_α+γ^-α(z)
where z∈ℂ∖(-∞,1].
These are easily obtained
through
<cit.>
and
<cit.>.
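The symmetric specialization can be illustrated numerically. The minimal sketch below (not from the paper) assumes the standard single-₂F₁ representation P_γ^{(α,β)}(z)=Γ(α+γ+1)/(Γ(γ+1)Γ(α+1)) ₂F₁(-γ,α+β+γ+1;α+1;(1-z)/2) and that mpmath's legenp with type=3 is the associated Legendre function of the first kind off the cut.
\begin{verbatim}
# Numerical sketch: symmetric Jacobi function of the first kind versus the
# associated Legendre function of the first kind.
from mpmath import mp, mpf, gamma, hyp2f1, legenp

mp.dps = 30

def jacobiP1(g, a, b, z):
    return gamma(a + g + 1) / (gamma(g + 1) * gamma(a + 1)) \
           * hyp2f1(-g, a + b + g + 1, a + 1, (1 - z)/2)

g, a, z = mpf('0.4'), mpf('0.6'), mpf('1.7')
lhs = jacobiP1(g, a, a, z)
rhs = 2**a * gamma(a + g + 1) / gamma(g + 1) * (z*z - 1)**(-a/2) \
      * legenp(a + g, -a, z, type=3)
print(lhs, rhs)    # should agree
\end{verbatim}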
The relation between the antisymmetric Jacobi function of the first kind on-the-cut and
the Ferrers function of the first kind and the Gegenbauer function of the first kind on-the-cut is
P_γ^(α,-α)(x)
=Γ(α+γ+1)/Γ(γ+1)(1+x/1-x)^1/2α P_γ^-α(x)
=
Γ(2α+1)Γ(γ-α+1)/2^αΓ(γ+1)Γ(α+1)
(1+x)^α C_γ-α^α+1/2(x)
,
where x∈ℂ∖((-∞,-1]∪[1,∞)) and the relation between
the antisymmetric Jacobi function
of the first kind
and the associated Legendre and
Gegenbauer function
of the first kinds is
P_γ^(α,-α)(z)=Γ(α+γ+1)/Γ(γ+1)(z+1/z-1)^1/2αP_γ^-α(z)
=
Γ(2α+1)
Γ(γ-α+1)/2^αΓ(γ+1)Γ(α+1)
(z+1)^α
C_γ-α^α+1/2(z),
where z∈ℂ∖(-∞,1].
These are obtained by comparing (<ref>) with (<ref>) and
(<ref>).
One has the following quadratic transformations for the symmetric Jacobi functions of the first kind
which can be found in <cit.>. Let z∈ℂ∖(-∞,1], γ,α∈ℂ, α+γ∉-ℕ. Then
P_2γ^(α,α)(z)=
√(π) Γ(α+2γ+1)/2^2γΓ(γ+1/2)Γ(α+γ+1)P_γ^(α,-1/2)(2z^2-1),
where α+2γ∉-ℕ,
γ∉-ℕ+1/2, and
P_2γ+1^(α,α)(z)=√(π) Γ(α+2γ+2)z/2^2γ+1Γ(γ+3/2)Γ(α+γ+1)P_γ^(α,1/2)(2z^2-1),
where
α+2γ+1∉-ℕ,
γ∉-ℕ-1/2.
The restrictions on the parameters
come directly by applying the restrictions on the parameters in Theorem <ref> to the Jacobi functions of the first kind on both sides of the relations.
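The first of these quadratic transformations is easy to check numerically; the sketch below (not from the paper) again assumes the standard single-₂F₁ representation of the Jacobi function of the first kind.
\begin{verbatim}
# Numerical sketch: quadratic transformation for the symmetric Jacobi function
# of the first kind.
from mpmath import mp, mpf, gamma, hyp2f1, sqrt, pi

mp.dps = 30

def jacobiP1(g, a, b, z):
    return gamma(a + g + 1) / (gamma(g + 1) * gamma(a + 1)) \
           * hyp2f1(-g, a + b + g + 1, a + 1, (1 - z)/2)

g, a, z = mpf('0.35'), mpf('0.8'), mpf('1.4')
lhs = jacobiP1(2*g, a, a, z)
rhs = sqrt(pi) * gamma(a + 2*g + 1) / (2**(2*g) * gamma(g + mpf('0.5'))
      * gamma(a + g + 1)) * jacobiP1(g, a, mpf('-0.5'), 2*z*z - 1)
print(lhs, rhs)    # should agree
\end{verbatim}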
Below we present some identities which involve
symmetric and antisymmetric Jacobi functions of
the second kind.
Two equivalent relations between the symmetric
Jacobi function of the
second kind and the associated Legendre function of the second kind are given by
Q_γ^(α,α)(z)
=2^α^iπαΓ(α+γ+1)/Γ(γ+1)(z^2-1)^-1/2α
Q_α+γ^-α(z),
Q_γ^(α,α)(z)=2^α^-iπαΓ(α+γ+1)/Γ(2α+γ+1)(z^2-1)^-1/2α
Q_α+γ^α(z),
where α+γ∉-ℕ. Also,
two equivalent
relations between antisymmetric Jacobi functions of the second kind and the associated Legendre function of the second kind are given by
Q_γ^(α,-α)(z)
=^-iπαΓ(γ-α+1)/Γ(γ+1)(z+1/z-1)^1/2α
Q_γ^α(z),
Q_γ^(-α,α)(z)=^-iπαΓ(γ-α+1)/Γ(γ+1)(z-1/z+1)^1/2αQ_γ^α(z),
where γ-α∉-ℕ.
By comparing (<ref>) and (<ref>) with (<ref>)
and by using the Legendre duplication formula <cit.>
one can obtain all these formulas in a straightforward way.
See <cit.> for an interesting
application of the symmetric relation for associated Legendre functions of the second kind.
Observe that by identifying (<ref>) and (<ref>) and
for z∈ℂ∖ [-1,1] one has
Q_γ^(α,α)(z)=
2^2α/(z^2-1)^α{[ Q_γ+2α^(-α,-α)(z), if z∈ℂ∖[-1,1] s.t. ℜz≥ 0,; ^2π iα Q_γ+2α^(-α,-α)(z), if z∈ℂ∖[-1,1] s.t. ℜz<0 and ℑz<0,; ^-2π iα Q_γ+2α^(-α,-α)(z), if z∈ℂ∖[-1,1] s.t. ℜz<0 and ℑz≥ 0, ].
where the principal branches of complex powers are taken.
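This observation can be checked numerically in the phase-free case of a real argument z>1; the sketch below (not from the paper) uses the Gauss hypergeometric representation with argument 2/(1+z), with the Γ(α+β+2γ+2) normalization made explicit.
\begin{verbatim}
# Numerical sketch: Q_gamma^{(a,a)} versus Q_{gamma+2a}^{(-a,-a)} for real z > 1.
from mpmath import mp, mpf, gamma, hyp2f1

mp.dps = 30

def jacobiQ(g, a, b, z):
    pre = 2**(a + b + g) * gamma(a + g + 1) * gamma(b + g + 1) / gamma(a + b + 2*g + 2)
    return pre / (z + 1)**(a + b + g + 1) \
           * hyp2f1(a + g + 1, a + b + g + 1, a + b + 2*g + 2, 2/(1 + z))

g, a, z = mpf('0.25'), mpf('0.6'), mpf('1.9')
lhs = jacobiQ(g, a, a, z)
rhs = 2**(2*a) / (z*z - 1)**a * jacobiQ(g + 2*a, -a, -a, z)
print(lhs, rhs)    # should agree
\end{verbatim}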
Let α,γ∈ℂ, z∈ℂ∖[-1,1],
α+γ∉-ℕ.
Then the relations between the symmetric and antisymmetric Jacobi
functions of the second kind to the Gegenbauer function
of the second kind is given by
Q_γ^(α,α)(z)=
^-iπ(α+1/2)√(π) 2^2αΓ(α+1/2)Γ(α+γ+1)/Γ(2α+γ+1)D_γ^α+1/2(z),
where
α∈ℂ∖{-1/2,-3/2,-5/2,…}
and
Q_γ^(α,-α)(z)=^iπ(α-1/2)2^2γ-α+1Γ(α+γ+1)Γ(1/2-α)Γ(γ+3/2)/Γ(2γ+2)(z-1)^αD_α+γ^1/2-α(z),
where α∈ℂ∖{1/2,3/2,5/2,…},
γ∈ℂ∖{-3/2,-5/2,-7/2,…}.
Start with the definition of the Jacobi function
of the second kind (<ref>) and take
β=α. Then comparing (<ref>) using
Euler's (z↦ z) transformation
<cit.> produces
(<ref>).
In order to produce (<ref>), start
with (<ref>) and take
β=-α. Then compare (<ref>) using
Euler's (z↦ z) transformation
<cit.>. This completes the
proof.
One has the following quadratic transformations for symmetric Jacobi functions of the second kind.
Let z∈ℂ∖[-1,1], γ,α∈ℂ, α+γ∉-ℕ. Then
Q_2γ^(α,α)(z)=
√(π) Γ(α+2γ+1)/2^2γΓ(γ+1/2)Γ(α+γ+1)Q_γ^(α,-1/2)(2z^2-1),
where α+2γ∉-ℕ,
γ∉-ℕ+1/2, and
Q_2γ+1^(α,α)(z)=√(π) Γ(α+2γ+2)z/2^2γ+1Γ(γ+3/2)Γ(α+γ+1)Q_γ^(α,1/2)(2z^2-1),
where
α+2γ+1∉-ℕ,
γ∉-ℕ-1/2.
Starting with the left-hand sides of
(<ref>), (<ref>) using the Gauss hypergeometric definition (<ref>),
the _2F_1's become of a form
where c=2b. Then for both equations we use the quadratic
transformation of the Gauss hypergeometric function <cit.>. This transforms the _2F_1 to a form which is recognizable with the right-hand sides through (<ref>), (<ref>), respectively. This completes the proof.
The restrictions on the parameters
come directly by applying the restrictions on the parameters in Theorem <ref> to the Jacobi functions of the second kind on both sides of the relations.
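The first of these quadratic transformations can be checked numerically; the sketch below (not from the paper) uses the Gauss hypergeometric representation of the Jacobi function of the second kind with argument 2/(1-z) and the explicit Γ(α+β+2γ+2) normalization.
\begin{verbatim}
# Numerical sketch: quadratic transformation for the symmetric Jacobi function
# of the second kind.
from mpmath import mp, mpf, gamma, hyp2f1, sqrt, pi

mp.dps = 30

def jacobiQ(g, a, b, z):
    pre = 2**(a + b + g) * gamma(a + g + 1) * gamma(b + g + 1) / gamma(a + b + 2*g + 2)
    return pre / ((z - 1)**(a + g + 1) * (z + 1)**b) \
           * hyp2f1(g + 1, a + g + 1, a + b + 2*g + 2, 2/(1 - z))

g, a, z = mpf('0.3'), mpf('0.7'), mpf('1.6')
lhs = jacobiQ(2*g, a, a, z)
rhs = sqrt(pi) * gamma(a + 2*g + 1) / (2**(2*g) * gamma(g + mpf('0.5'))
      * gamma(a + g + 1)) * jacobiQ(g, a, mpf('-0.5'), 2*z*z - 1)
print(lhs, rhs)    # should agree
\end{verbatim}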
There is also an interesting alternative additional
quadratic transformation for the Jacobi function
of the second kind with α=±1/2. Note that
there does not seem to be a corresponding formula
for the Jacobi function of the first kind since in
this case the functions which would appear
on the left-hand side would be a sum of two Gauss hypergeometric
functions.
Let z∈ℂ such that |z|<1, β,γ∈ℂ such that
β+γ+1/2∉-ℕ_0. Then
C_2γ+1^β(z)=2^2γ+2Γ(β+γ+1/2)/Γ(-γ-1/2)Γ(2γ+2)Γ(β)(1-z^2)^β+γ+1/2
Q_-γ-1^(-1/2,β+2γ+1)(1+z^2/1-z^2),
C_2γ^β(z)=2^2γ+1Γ(β+γ+1/2)z/Γ(-γ+1/2)Γ(2γ+1)Γ(β)(1-z^2)^β+γ+1/2
Q_-γ-1^(1/2,β+2γ)(1+z^2/1-z^2).
The results are easily verified by starting with
(<ref>), (<ref>), substituting the related
values in the Jacobi function of the second kind and
comparing with associated Legendre functions of the
first kind with argument √((z-1)/(z+1)) and
utilizing a quadratic transformation of the Gauss
hypergeometric function which relates the two
completes the proof.
Note that in Theorem <ref>, if the argument
of the Jacobi function of the second kind has modulus greater than unity then the argument of the Gegenbauer function of the first kind has modulus less than unity.
Let z,β,γ∈ℂ such that z∈ℂ∖[-1,1]. Then
Q_γ^(1/2,β)(z)=
2^β+3γ+5/2Γ(-2γ-1)Γ(γ+3/2)Γ(β+2γ+2)/Γ(β+γ+3/2)(z-1)^1/2(z+1)^β+γ+1
C_-2γ-2^β+2γ+2(√(z-1/z+1)),
where -2γ-1,γ+3/2,β+2γ+2∉-ℕ_0, and
Q_γ^(-1/2,β)(z)=
2^β+3γ+1/2Γ(-2γ)Γ(γ+1/2)Γ(β+2γ+1)/Γ(β+γ+1/2)(z+1)^β+γ+1/2
C_-2γ-1^β+2γ+1(√(z-1/z+1)),
where -2γ,γ+1/2,β+2γ+1∉-ℕ_0.
Inverting Theorem <ref> completes the proof.
Note that the above results imply the following corollary.
Let z,β,γ∈ℂ such that z∈ℂ∖[-1,1],
γ+3/2,β+γ+1∉-ℕ_0. Then
Q_γ^(1/2,β)(z)=Γ(γ+3/2)Γ(β+γ+1)/Γ(γ+1)Γ(β+γ+3/2)(2/z-1)^1/2
Q_γ+1/2^(-1/2,β)(z).
Equating the two relations in Theorem <ref>
completes the proof.
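This corollary is straightforward to confirm numerically; the short sketch below (not from the paper) uses the same hypergeometric representation of the Jacobi function of the second kind as above.
\begin{verbatim}
# Numerical sketch: Q with alpha = 1/2 versus Q with alpha = -1/2 and shifted degree.
from mpmath import mp, mpf, gamma, hyp2f1, sqrt

mp.dps = 30

def jacobiQ(g, a, b, z):
    pre = 2**(a + b + g) * gamma(a + g + 1) * gamma(b + g + 1) / gamma(a + b + 2*g + 2)
    return pre / ((z - 1)**(a + g + 1) * (z + 1)**b) \
           * hyp2f1(g + 1, a + g + 1, a + b + 2*g + 2, 2/(1 - z))

g, b, z = mpf('0.3'), mpf('0.9'), mpf('1.8')
lhs = jacobiQ(g, mpf('0.5'), b, z)
rhs = gamma(g + mpf('1.5')) * gamma(b + g + 1) / (gamma(g + 1) * gamma(b + g + mpf('1.5'))) \
      * sqrt(2/(z - 1)) * jacobiQ(g + mpf('0.5'), mpf('-0.5'), b, z)
print(lhs, rhs)    # should agree
\end{verbatim}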
Let x∈ℂ∖((-∞,-1]∪[1,∞)). Then
the relation between the symmetric and antisymmetric
Jacobi functions of the second kind on-the-cut and
the Ferrers function of the second kind are given by
Q_γ^(α,α)(x)
=2^αΓ(α+γ+1)/Γ(γ+1)
(1-x^2)^-1/2α Q_γ+α^-α(x),
Q_γ^(α,-α)(x)
=Γ(α+γ+1)/Γ(γ+1)(1+x/1-x)^1/2α Q_γ^-α(x),
where α+γ∉-ℕ,
Q_γ^(-α,α)(x)
=Γ(γ-α+1)/Γ(γ+1)(1-x/1+x)^1/2α Q_γ^α(x),
where γ-α∉-ℕ.
The result follows by taking into account (<ref>) and
cf. <cit.>
Q_ν^μ (x) = π/2sin(πμ)[
cos(π(ν + μ))
Γ(ν+μ+1)/
Γ(ν-μ+1)( 1+x/1-x)^1/2μ21-ν, ν+11+μ1+x/2
-
cos(πν)
( 1-x/1+x)^1/2μ21-ν, ν+11 - μ1+x/2]
,
where ν∈ℂ, μ∈ℂ∖ℤ, such
that ν + μ∉ -ℕ.
The formula (<ref>) is obtained by taking β=α, then comparing (<ref>) with (<ref>).
The other identities follow by applying an analogous method taking β=-α.
This completes the proof.
§ ADDITION THEOREMS FOR THE JACOBI FUNCTION OF THE FIRST KIND
The Flensted-Jensen–Koornwinder addition theorem for Jacobi functions of the first kind is the extension of the Koornwinder addition theorem for Jacobi polynomials when the degree is allowed to be a complex number. This addition theorem has two separate contexts
and some interesting special cases. We will refer to the two separate
contexts as the hyperbolic and trigonometric contexts.
The hyperbolic context
arises when the Jacobi function is analytically continued in
the complex plane from the ray [1,∞).
The trigonometric context
arises when the argument of the Jacobi function
is analytically continued from the
real segment (-1,1).
First we will present the addition theorem for the Jacobi function of the first kind in the hyperbolic context. As we will see, the Jacobi function in the trigonometric context can be obtained from the Jacobi functions in the hyperbolic context (and vice versa).
We now present the most general form of the addition theorem for Jacobi functions of the first kind
in the hyperbolic and trigonometric contexts.
Let γ,α,β∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
x,w∈ℂ,
Z^±:=Z^±(z_1,z_2,w,x)=2z_1^2z_2^2+2w^2
(z_1^2-1)(z_2^2-1)± 4z_1z_2wx(z_1^2-1)^1/2(z_2^2-1)^1/2-1,
X^±:= X^±(x_1,x_2,w,x)=2x_1^2x_2^2+2w^2
(1-x_1^2)(1-x_2^2)
± 4x_1x_2wx(1-x_1^2)^1/2
(1-x_2^2)^1/2-1,
such that the complex variables γ,α,β,z_1,z_2,x_1,x_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
P_γ^(α,β)(Z^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k(-γ)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(-β-γ)_l/(α+γ+1)_l
(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×
P_γ-k^(α+k+l,β+k-l)(2z_1^2-1)
P_γ-k^(α+k+l,β+k-l)(2z_2^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/βC_k-l^β(x),
P_γ^(α,β)( X^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k(-γ)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(-β-γ)_l/(α+γ+1)_l
(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
× P_γ-k^(α+k+l,β+k-l)(2x_1^2-1)
P_γ-k^(α+k+l,β+k-l)(2x_2^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/βC_k-l^β(x).
Start with the form of the Flensted-Jensen–Koornwinder addition theorem in <cit.> (see also <cit.>). Define the Flensted-Jensen–Koornwinder–Jacobi function of the first kind <cit.> (Flensted-Jensen–Koornwinder refer to this function as the Jacobi function of the first kind)
φ_λ^(α,β)(t):=211/2(α+β+1+iλ),1/2(α+β+1-iλ)α+1-sinh^2t,
and express it in terms of
the Jacobi function of the first kind using
φ_λ^(α,β)(t)=Γ(α+1)
Γ(-12(α+β-1+iλ))/Γ(12(α-β+1-iλ))
P_-12(α+β+1+iλ)^(α,β)(cosh(2t)),
which follows by comparing the Gauss hypergeometric
representations of the functions. Replacing
λ=i(α+β+2γ+1) and setting
z_1=cosh t_1, z_2=cosh t_2 and w=cosψ
produces the form of the addition theorem (<ref>). Then analytically continuing
(<ref>) to X^±∈(-1,1)
using
(<ref>)
produces (<ref>). This
completes the proof.
It is worth mentioning that in the definitions of Z^±(<ref>) and X^±(<ref>), the
influence of the ± 1 factor on the addition theorems in
Theorem <ref> and elsewhere in this paper is simply due
to the influence of the parity relation for ultraspherical polynomials (<ref>) upon the
reflection map x↦ -x.
Note that there are various ways of expressing the variables Z^±(<ref>) and X^±(<ref>),
which are useful in different applications. For instance, we may also write
Z^±=
2z_1^2z_2^2(1-x^2)-1+2(z_1^2-1)(z_2^2-1)(w±xz_1z_2/√((z_1^2-1)(z_2^2-1)))^2
=2(z_1^2-1)(z_2^2-1)(
2z_1^2z_2^2(1-x^2)-1/2(z_1^2-1)(z_2^2-1)+(w±xz_1z_2/√((z_1^2-1)(z_2^2-1)))^2),
X^±=
2x_1^2x_2^2(1-x^2)-1+2(1-x_1^2)(1-x_2^2)(w±xx_1x_2/√((1-x_1^2)(1-x_2^2)))^2
=2(1-x_1^2)(1-x_2^2)(
2x_1^2x_2^2(1-x^2)-1/2(1-x_1^2)(1-x_2^2)+(w±xx_1x_2/√((1-x_1^2)(1-x_2^2)))^2).
First we will develop some tools which will help us prove the correct form of the double summation addition theorem for the Jacobi function of the second kind. Consider the orthogonality of the ultraspherical polynomials and the Jacobi polynomials with the argument
2w^2-1.
Let m,n,p∈ℕ_0, μ∈(-1/2,∞), α,β∈(-1,∞), α>β.
Then the ultraspherical and Jacobi polynomials satisfy the following orthogonality relations
∫_0^π C_m^μ(cosϕ)C_n^μ(cosϕ)(sinϕ)^2μ ϕ=π Γ(2μ+n)/2^2μ-1(μ+n)n! Γ(μ)^2δ_m,n,
∫_0^1 P_m^(α-β-1,β+p)(2w^2-1)
P_n^(α-β-1,β+p)(2w^2-1)w^2β+2p+1(1-w^2)^α-β-1 w
=Γ(α-β+n)Γ(β+1+p+n)/2(α+p+2n)Γ(α+p+n)n!δ_m,n.
These orthogonality relations follow easily from <cit.> upon making the straightforward substitutions.
§.§ The parabolic biangle orthogonal polynomial system
Define the 2-variable orthogonal polynomial system
which are sometimes referred to as parabolic biangle polynomials <cit.>
𝒫_k,l^(α,β)(w,ϕ):=
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)C_k-l^β(cosϕ),
where k,l∈_0 such that l≤ k.
These 2-variable polynomials are orthogonal over (w,ϕ)∈(0,1)×(0,π) with
orthogonality measure m^(α,β)(w,ϕ) defined by
m^(α,β)(w,ϕ):=(1-w^2)^α-β-1w^2β+1(sinϕ)^2β w ϕ.
The orthogonal polynomial system 𝒫_k,l^(α,β)(w,ϕ) is deeply connected to the addition theorem for Jacobi functions of the first and second kind. Using
the orthogonality relations in Lemma <ref> we can derive the orthogonality relation for the 2-variable parabolic biangle polynomials.
Let k,l,k',l'∈_0 such that l≤ k, l'≤ k', α,β∈(-1,∞), α>β. Then the 2-variable parabolic biangle polynomials
satisfy the following orthogonality relation
∫_0^1∫_0^π𝒫_k,l^(α,β)(w,ϕ)
𝒫_k',l'^(α,β)(w,ϕ) m^(α,β)(w,ϕ)=π Γ(β+1+k)Γ(2β+k-l)Γ(α-β+l)/2^2βΓ(β)^2(α+k+l)(β+k-l)Γ(α+k)(k-l)!l!δ_k,k'δ_l,l'.
Starting with the definition of the 2-variable
parabolic biangle polynomials (<ref>) and integrating over
(w,ϕ)∈(0,1)×(0,π) with
measure (<ref>) and using
the orthogonality relations in Lemma <ref> completes the proof.
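The orthogonality relation for the parabolic biangle polynomials can be tested directly by two-dimensional quadrature; the sketch below (not from the paper) uses mpmath's jacobi() and gegenbauer() for the one-variable polynomials.
\begin{verbatim}
# Numerical sketch: orthogonality of the parabolic biangle polynomials.
from mpmath import mp, mpf, pi, gamma, quad, jacobi, gegenbauer, sin, cos, factorial

mp.dps = 25

a, b = mpf('2.3'), mpf('0.7')          # alpha, beta

def biangle(k, l, w, phi):
    return w**(k - l) * jacobi(l, a - b - 1, b + k - l, 2*w*w - 1) \
           * gegenbauer(k - l, b, cos(phi))

def measure(w, phi):
    return (1 - w*w)**(a - b - 1) * w**(2*b + 1) * sin(phi)**(2*b)

def inner(k, l, kp, lp):
    return quad(lambda w, phi: biangle(k, l, w, phi) * biangle(kp, lp, w, phi)
                * measure(w, phi), [0, 1], [0, pi])

def norm(k, l):
    return pi * gamma(b + 1 + k) * gamma(2*b + k - l) * gamma(a - b + l) \
           / (2**(2*b) * gamma(b)**2 * (a + k + l) * (b + k - l) * gamma(a + k)
              * factorial(k - l) * factorial(l))

print(inner(2, 1, 2, 1), norm(2, 1))   # should agree
print(inner(2, 1, 3, 1))               # should be ~ 0
\end{verbatim}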
The following result is a Jacobi function of the first kind generalization
of <cit.> for Jacobi polynomials.
Let k,l∈ℕ_0 with l≤ k, γ,α,β∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
Z^±
defined in (<ref>),
such that x=cosϕ and the complex variables γ,α,β,z_1,z_2 are in some
yet to be determined neighborhood of the real line.
Then
∫_0^1 ∫_0^π P_γ^(α,β)(Z^±) w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)C_k-l^β(cosϕ)
m^(α,β)(w,ϕ)
=(∓ 1)^k+l A^(α,β,γ)_k,l
(z_1z_2)^k-l ((z_1^2-1)(z_2^2-1))^1/2(k+l) P_γ-k^(α+k+l,β+k-l)(2z_1^2-1)
P_γ-k^(α+k+l,β+k-l)(2z_2^2-1),
where
A^(α,β,γ)_k,l:=πΓ(γ+1)(α+β+γ+1)_kΓ(2β+k-l)Γ(α-β+l)(-β-γ)_l/2^2βΓ(β)(-γ)_k (k-l)! l! Γ(α+γ+1+l).
Start with the addition theorem
for the Jacobi function of the first kind
(<ref>) and consider the (k,l)-th term in the double series. It involves
a product of two Jacobi functions of the first
kind with degree γ-k and parameters (α+k+l,β+k-l).
Replace in (<ref>) the summation indices k,l by k',l', multiply both sides of (<ref>) by
𝒫_k,l^(α,β)(w,ϕ) m^(α,β)(w,ϕ),
and integrate both sides over (w,ϕ)∈(0,1)×(0,π)
using (<ref>),
(<ref>). This completes the proof.
We will return to the parabolic biangle polynomials in Section <ref>.
§.§ Special cases of the addition theorem for the Jacobi function of the first kind
In the case when z_1,z_2,x_1,x_2,w,x=cosϕ are real numbers, the argument of the Jacobi
function of the first kind in the addition theorem takes a simpler, more convenient form, and this case was proved in Flensted-Jensen–Koornwinder <cit.>.
In the case where the variables z_1,z_2,x_1,x_2,x,w are real, one may
write Z^± and X^± as follows
Z^±=2|z_1z_2±^iϕw√(z_1^2-1)√(z_2^2-1)|^2-1,
X^±=2|x_1x_2±^iϕw√(1-x_1^2)√(1-x_2^2)|^2-1.
We now give a result which appears to be identical to Theorem <ref>, but it must be emphasized that it is only in the real case that we are able to write
Z^±, X^± using
(<ref>), (<ref>).
Otherwise one must use (<ref>), (<ref>).
Let γ,α,β∈ℂ, z_1,z_2∈(1,∞),
x_1,x_2∈(-1,1),
w∈ℝ, ϕ∈[0,π],
and Z^±, X^± is defined as
in (<ref>), (<ref>) respectively.
Then
P_γ^(α,β)(Z^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k(-γ)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(-β-γ)_l/(α+γ+1)_l
(z_1z_2)^k-l((z_1^2-1) (z_2^2-1))^k+l/2
×
P_γ-k^(α+k+l,β+k-l)(2z_1^2-1)
P_γ-k^(α+k+l,β+k-l)(2z_2^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)
β+k-l/βC_k-l^β(cosϕ),
P_γ^(α,β)( X^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k(-γ)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(-β-γ)_l/(α+γ+1)_l
(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
× P_γ-k^(α+k+l,β+k-l)(2x_1^2-1)
P_γ-k^(α+k+l,β+k-l)(2x_2^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/βC_k-l^β(cosϕ).
Starting with Theorem <ref>
and restricting such that the variables
z_1,z_2,x_1,x_2,w,x=cosϕ are real
completes the proof.
Next we have a specialization of Theorem <ref>
when w=1.
Let γ,α,β∈ℂ, r_1,r_2∈[0,∞), θ_1,θ_2∈[0,π/2],
ϕ∈[0,π],
Z^±:=2|cosh r_1cosh r_2±^iϕsinh r_1sinh r_2|^2-1
=cosh(2 r_1)cosh(2 r_2)±sinh(2 r_1)sinh(2 r_2)cosϕ,
X^±:=2|cosθ_1cosθ_2±^iϕsinθ_1sinθ_2|^2-1
=cos(2θ_1)cos(2θ_2)±sin(2θ_1)sin(2θ_2)cosϕ.
Then
P_γ^(α,β)(Z^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α)_k(α/2+1)_k(-β-γ)_k(α+β+γ+1)_k/(α/2)_k(β+1)_k(-γ)_k(α+γ+1)_k
(sinh r_1sinh r_2)^2k
×∑_l=0^k
(∓ 1)^l(α-β)_k-l(-α-γ-k)_l(-α-2k+1)_l/(k-l)!(-α-2k)_l(β+γ+1)_l
( r_1 r_2
)^l
×
P_γ-k^(α+2k-l,β+l)(cosh(2r_1))
P_γ-k^(α+2k-l,β+l)(cosh(2r_2))β+l/βC_l^β(cosϕ)
.
P_γ^(α,β)( X^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α)_k(α/2+1)_k(-β-γ)_k(α+β+γ+1)_k/(α/2)_k(β+1)_k(-γ)_k(α+γ+1)_k
(sinθ_1sinθ_2)^2k
×∑_l=0^k
(∓ 1)^l(α-β)_k-l(-α-γ-k)_l(-α-2k+1)_l/(k-l)!(-α-2k)_l(β+γ+1)_l
(θ_1θ_2
)^l
×
P_γ-k^(α+2k-l,β+l)(cos(2θ_1))
P_γ-k^(α+2k-l,β+l)(cos(2θ_2))β+l/βC_l^β(cosϕ)
.
Start with Theorem <ref> and let w=1 using
(<ref>)
and substituting l↦ l'=k-l followed by relabeling l'↦ l completes the proof.
By letting α=β in Corollary <ref> we
can relate the above result to associated Legendre and Gegenbauer functions of the first kind.
This is mentioned in <cit.>,
namely that Koornwinder's addition theorem for
Jacobi polynomials generalizes
Gegenbauer's addition theorem
(<ref>). Similarly, the extension to the
Flensted-Jensen–Koornwinder addition theorem for Jacobi functions of the first kind generalizes the addition theorem for Gegenbauer functions of the first kind. First we define
the variables
𝒵^±:=𝒵^±(r_1,r_2,ϕ):=cosh r_1cosh r_2±sinh r_1sinh r_2cosϕ,
𝒳^±:=𝒳^±(θ_1,θ_2,ϕ):=cosθ_1cosθ_2±sinθ_1sinθ_2cosϕ.
Let γ,α∈ℂ,
r_1,r_2∈[0,∞),
θ_1,θ_2∈[0,π/2],
ϕ∈[0,π],
and 𝒵^±, 𝒳^± as defined in (<ref>), (<ref>), respectively.
Then
C_γ^α(𝒵^±)=Γ(2α)Γ(γ+1)/Γ(2α+γ)∑_k=0^∞(∓ 1)^k 2^2k(α)_k(α)_k/(-γ)_k(2α+γ)_k(sinh(2r_1)sinh(2r_2))^k
× C_γ-k^α+k(cosh(2r_1))C_γ-k^α+k(cosh(2r_2))α-12+k/α-12C_k^α-1/2(cosϕ),
C_γ^α(𝒳^±)=Γ(2α)Γ(γ+1)/Γ(2α+γ)∑_k=0^∞(∓ 1)^k 2^2k(α)_k(α)_k/(-γ)_k(2α+γ)_k(sin(2θ_1)sin(2θ_2))^k
× C_γ-k^α+k(cos(2θ_1))C_γ-k^α+k(cos(2θ_2))α-12+k/α-12C_k^α-1/2(cosϕ),
or equivalently
1/(1-𝒵^±^2)^1/2α P_γ^-α(𝒵^±)=2^αΓ(α+1)/(sinh(2r_1)sinh(2r_2))^α
×∑_k=0^∞(± 1)^k(α-γ)_k(α+γ+1)_kP_γ^-α-k(cosh(2r_1))P_γ^-α-k(cosh(2r_2))α+k/αC_k^α(cosϕ),
1/(1-𝒳^±^2)^1/2α P_γ^-α(𝒳^±)=2^αΓ(α+1)/(sin(2θ_1)sin(2θ_2))^α
×∑_k=0^∞(± 1)^k(α-γ)_k(α+γ+1)_k P_γ^-α-k(cos(2θ_1)) P_γ^-α-k(cos(2θ_2))α+k/αC_k^α(cosϕ).
Start with Corollary <ref> and let α=β using
(<ref>),
(<ref>) respectively for the Jacobi functions of the first kind on the left-hand side and on the right-hand side. Then mapping
(2z_1^2-1,2z_2^2-1)↦(z_1,z_2),
(2x_1^2-1,2x_2^2-1)↦(x_1,x_2),
where z_1=cosh r_1, z_2=cosh r_2, x_1=cosθ_1, x_2=cosθ_2,
and simplifying using (<ref>)–(<ref>)
completes the proof.
Another way to prove this result is to take
β=-1/2, w=cosψ=1, γ→2γ in (<ref>) and use the quadratic transformation (<ref>). After using (<ref>), this produces the left-hand side of (<ref>) with degree 4γ
and order given by α+1/2. Because we set w=1, the sum over l only survives for l=0,1. By taking 4γ↦γ and expressing the contribution due to each of these terms one can identify Gegenbauer's addition theorem through repeated application of (<ref>) on the right-hand side and that
∑_k=0^∞(f_2k+f_2k+1)=∑_k=0^∞ f_k,
for some sequence {f_k}_k∈ℕ_0, one
arrives at (<ref>).
This other proof is similar for (<ref>).
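For integer degree the sum terminates, and the classical terminating form of Gegenbauer's addition theorem can be checked directly. The sketch below (not from the paper) evaluates both sides with the same angle variables θ_1, θ_2 (a labelling assumption on our part), using mpmath's gegenbauer() and rf() (the Pochhammer symbol); the upper sign in 𝒳^± corresponds to the factor (-1)^k in the sum.
\begin{verbatim}
# Numerical sketch: terminating Gegenbauer addition theorem in the trigonometric
# context for integer degree n.
from mpmath import mp, mpf, gamma, gegenbauer, rf, cos, sin

mp.dps = 30

def addition_rhs(n, lam, t1, t2, phi, sign=+1):
    total = mpf(0)
    for k in range(n + 1):
        coeff = (-sign)**k * 2**(2*k) * rf(lam, k)**2 / (rf(-n, k) * rf(2*lam + n, k))
        total += coeff * (sin(t1)*sin(t2))**k \
                 * gegenbauer(n - k, lam + k, cos(t1)) \
                 * gegenbauer(n - k, lam + k, cos(t2)) \
                 * (lam - mpf('0.5') + k)/(lam - mpf('0.5')) \
                 * gegenbauer(k, lam - mpf('0.5'), cos(phi))
    return gamma(2*lam) * gamma(n + 1) / gamma(2*lam + n) * total

n, lam = 4, mpf('0.8')
t1, t2, phi = mpf('0.7'), mpf('1.1'), mpf('0.5')
X = cos(t1)*cos(t2) + sin(t1)*sin(t2)*cos(phi)
print(gegenbauer(n, lam, X), addition_rhs(n, lam, t1, t2, phi, +1))  # should agree
\end{verbatim}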
§ ADDITION THEOREMS FOR THE JACOBI FUNCTION OF THE SECOND KIND
Now we present double summation addition theorems for the Jacobi functions of the second kind in the hyperbolic and trigonometric contexts.
§.§ The hyperbolic context for the addition theorem for the Jacobi function of the second kind
Now, we present the double summation addition theorem for the Jacobi function of the second kind in the hyperbolic context.
Define
z_≶:=[ min; max ]{z_1,z_2},
where z_1,z_2∈(1,∞),
and in the case where z_1,z_2∈, then if one
takes without loss of generality z_1=z_> to lie on
an ellipse with foci at ± 1, then z_2=z_< must
be chosen to be in the interior of that ellipse.
Let γ,α,β∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
x,w∈ℂ, Z^±
defined in (<ref>),
such that the complex variables γ,α,β,z_1,z_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
Q_γ^(α,β)(Z^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(γ+1)_k/(α+k)(β+1)_k(1-γ)_k
×∑_l=0^k
(± 1)^k-l
(α+k+l)
(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×
P_γ-k^(α+k+l,β+k-l)(2z_<^2-1)
Q_γ-k^(α+k+l,β+k-l)(2z_>^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/βC_k-l^β(x).
Start with Theorem <ref> which is equivalent
to (<ref>).
Now use the connection relation which relates the Jacobi function of the first kind with two Jacobi functions of the second kind (<ref>) once in the integrand of the double integral and again on the Jacobi function of the first kind
with argument 2z_2^2-1, assuming without
loss of generality that z_2=z_>.
This results in the following equation
B_γ^(α,β)∫_0^1∫_0^πQ_γ^(α,β)(Z^±) w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)C_k-l^β(cosϕ)
m^(α,β)(w,ϕ)
+ C_γ,k,l^(α,β)(z_1z_2)^k-l ((z_1^2-1)(z_2^2-1))^1/2(k+l)
P_γ-k^(α+k+l,α+k-l)(2z_<^2-1) Q_γ-k^(α+k+l,α+k-l)(2z_>^2-1)
= D_γ^(α,β)∫_0^1∫_0^πQ_-α-β-γ-1^(α,β)(Z^±)w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)C_k-l^β(cosϕ)
m^(α,β)(w,ϕ)
+ E_γ,k,l^(α,β)(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^1/2(k+l)
P_-α-β-γ-k-1^(α+k+l,α+k-l)(2z_<^2-1)
Q_-α-β-γ-k-1^(α+k+l,α+k-l)(2z_>^2-1),
where
B_γ^(α,β):=-2sin(πγ)sin(π(β+γ))/πsin(π(α+β+2γ+1)),
C_γ,k,l^(α,β):
=sin(πγ)sin(π(β+γ))Γ(γ+1)(α+β+γ+1)_kΓ(2β+k-l)Γ(α-β+l)(-β-γ)_l/2^2β-1sin(π(α+β+2γ+1))Γ(β)(-γ)_k(k-l)! l! Γ(α+γ+1+l),
D_γ^(α,β):
=
-2sin(π(α+γ))sin(π(β+γ))Γ(α+γ+1)Γ(β+γ+1)/πsin(π(α+β+2γ+1))Γ(γ+1)Γ(α+β+γ+1),
E_γ,k,l^(α,β):=sin(π(α+γ))sin(π(β+γ))Γ(β+γ+1)Γ(2β+k-l)Γ(α-β+l)/2^2β-1sin(π(α+β+2γ+1))Γ(β)Γ(α+β+γ+1)(k-l)! l!.
Now consider the asymptotics of all four terms
as z_2→∞.
The asymptotic behavior of Z^± as z_2→∞ is Z^±∼ z_2^2. The behavior
of the Jacobi function of the second kind
as the argument |z|→∞ is (<ref>)
Q_γ^(α,β)(z)∼1/z^α+β+γ+1.
Therefore, one has
the following asymptotic behavior considered as functions of
ζ=z_> with z_< fixed,
Q_γ^(α,β)(Z^±)
∼(Z^±)^-γ-α-β-1∼ζ^-2γ-2α-2β-2,
Q_-α-β-γ-1^(α,β)(Z^±)∼ (Z^±)^γ∼ζ^2γ,
Q_γ-k^(α+k+l,β+k-l)(ζ)
∼ζ^-γ-α-β-k-1,
Q_-α-β-γ-1-k^(α+k+l,β+k-l)(ζ)∼ζ^γ-k.
The above relation (<ref>) with leading order asymptotic contribution as z_2→∞ taken out
can be written as a function of two analytic functions
f(z_2^-1):= f_γ,k,l^(α,β)(z_1,z_2^-1),
g(z_2^-1):= g_γ,k,l^(α,β)(z_1,z_2^-1),
as
z_2^-2(γ+α+β+1) f(z_2^-1)=z_2^2γ g(z_2^-1).
For 4γ∉-2(α+β+1)+ℤ,
the only way the equation
can hold is if f and g
both vanish identically. The case of general γ then follows by analytic continuation in γ.
Therefore we have now verified separately all terms in the double series expansion of the Jacobi function of the second kind given in (<ref>).
Given this, and proof of overall
convergence by Koornwinder and Flensted-Jensen <cit.>,
this completes the proof.
If one applies (<ref>) to the Jacobi functions of the second kind on the left-hand side and right-hand side of (<ref>), then it becomes the hyperbolic context of the addition theorem for Jacobi polynomials.
Let k,l∈ℕ_0 with l≤ k, γ,α,β∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
Z^±
defined in (<ref>),
such that x=cosϕ and the complex variables γ,α,β,z_1,z_2 are in some
yet to be determined neighborhood of the real line.
Then
∫_0^1 ∫_0^π Q_γ^(α,β)(Z^±) w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)C_k-l^β(cosϕ)
m^(α,β)(w,ϕ),
=(± 1)^k+l A^(α,β,γ)_k,l
(z_1z_2)^k-l ((z_1^2-1)(z_2^2-1))^1/2(k+l) P_γ-k^(α+k+l,β+k-l)(2z_<^2-1)
Q_γ-k^(α+k+l,β+k-l)(2z_>^2-1),
where A_k,l^(α,β,γ) is defined in
(<ref>).
This follows directly from (<ref>) since in the proof of Theorem <ref>, we showed that f=0. The result g=0 is equivalent to this result under the transformation γ↦-γ-α-β-1.
The above integral representations Theorem <ref> and Corollary <ref> are equivalent to the double summation addition theorems for the Jacobi function of the first kind (<ref>) and second kind, Theorem <ref>.
One has the following well-known product representations which are
the k=l=0 contribution
of the integral representations Theorem <ref> and Corollary <ref>.
Let x=cosϕ, γ,α,β∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
Z^± defined in (<ref>),
such that the complex variables
γ,α,β,z_1,z_2 are in some
yet to be determined neighborhood of the real line.
Then
P_γ^(α,β)(2z_1^2-1)
P_γ^(α,β)(2z_2^2-1)=2Γ(α+γ+1)/√(π) Γ(γ+1)Γ(β+1/2)Γ(α-β)∫_0^1
∫_0^πP_γ^(α,β)(Z^±) m^(α,β)(w,ϕ),
P_γ^(α,β)(2z_<^2-1)
Q_γ^(α,β)(2z_>^2-1)=
2Γ(α+γ+1)/√(π) Γ(γ+1)Γ(β+1/2)Γ(α-β)∫_0^1
∫_0^π
Q_γ^(α,β)(Z^±) m^(α,β)(w,ϕ).
For independent verification of these product representations, see <cit.>
for the product formula (<ref>)
and
<cit.>
for the product formula (<ref>).
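The first product formula can be verified by direct numerical integration over (w,ϕ)∈(0,1)×(0,π). The sketch below (not part of the paper) assumes the standard single-₂F₁ representation of the Jacobi function of the first kind and uses mpmath's two-dimensional quadrature.
\begin{verbatim}
# Numerical sketch: k = l = 0 product formula for the Jacobi function of the
# first kind in the hyperbolic context.
from mpmath import mp, mpf, pi, sqrt, gamma, hyp2f1, quad, sin, cos

mp.dps = 20

def jacobiP1(g, a, b, z):
    return gamma(a + g + 1) / (gamma(g + 1) * gamma(a + 1)) \
           * hyp2f1(-g, a + b + g + 1, a + 1, (1 - z)/2)

g, a, b = mpf('0.4'), mpf('1.9'), mpf('0.6')
z1, z2 = mpf('1.3'), mpf('1.5')

def Zplus(w, phi):
    return 2*z1**2*z2**2 + 2*w**2*(z1**2 - 1)*(z2**2 - 1) \
           + 4*z1*z2*w*cos(phi)*sqrt((z1**2 - 1)*(z2**2 - 1)) - 1

integral = quad(lambda w, phi: jacobiP1(g, a, b, Zplus(w, phi))
                * (1 - w*w)**(a - b - 1) * w**(2*b + 1) * sin(phi)**(2*b),
                [0, 1], [0, pi])
lhs = jacobiP1(g, a, b, 2*z1**2 - 1) * jacobiP1(g, a, b, 2*z2**2 - 1)
rhs = 2 * gamma(a + g + 1) / (sqrt(pi) * gamma(g + 1) * gamma(b + mpf('0.5'))
                              * gamma(a - b)) * integral
print(lhs, rhs)    # should agree
\end{verbatim}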
In the case when z_1,z_2,w,x=cosϕ are real numbers, then the argument of the Jacobi
function of the second kind in the addition theorem for the Jacobi function of the second kind takes a simpler and more convenient form.
This is analogous to the Flensted-Jensen–Koornwinder addition theorem of the first kind (<ref>).
We present this result now.
Let γ,α,β,w∈ℝ, α+γ,β+γ∉-ℕ,
ϕ∈[0,π],
z_1,z_2∈(1,∞),
and Z^±, z_≶, as defined as in (<ref>), (<ref>), respectively.
Then
Q_γ^(α,β)(Z^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k(-γ)_k
×∑_l=0^k
(± 1)^k+l(α+k+l)(-β-γ)_l/(α+γ+1)_l
(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×
P_γ-k^(α+k+l,β+k-l)(2z_<^2-1)
Q_γ-k^(α+k+l,β+k-l)(2z_>^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)
β+k-l/βC_k-l^β(cosϕ).
This follows from
Theorem <ref>
by setting the variables
γ,α,β,z_1,z_2,w,x=cosϕ to real numbers.
§.§ The trigonometric context of the addition theorem for the Jacobi function of the second kind
In the trigonometric context for the addition theorem for Jacobi functions of the second kind, one must then use the Jacobi function of the second kind on-the-cut Q_γ^(α,β)(x)(<ref>), which are
defined in Section <ref> and
have a hypergeometric representation
given by (<ref>). Note that this representation is not unique and there are many other double Gauss hypergeometric representations of this function. For more about this see the discussion immediately above (<ref>).
Define
x_≶:=[ min; max ]{x_1,x_2},
where x_1,x_2∈(-1,1),
and in the case where x_1,x_2∈, then if one takes without loss of generality x_1=x_> to lie on an ellipse with foci at ± 1, then x_2=x_< must be chosen to be in the interior of that ellipse.
Let k,l∈ℕ_0, l≤ k, γ,α,β∈ℂ,
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
α∉ℤ, α+γ,β+γ∉-ℕ,
X^±, x_≶ as defined
in (<ref>), (<ref>) respectively,
such that the complex variables γ,α,β,x_1,x_2 are in some
yet to be determined neighborhood of the real line.
Then
∫_0^1 ∫_0^π Q_γ^(α,β)(X^±) w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)C_k-l^β(cosϕ)
m^(α,β)(w,ϕ)
=(∓ 1)^k+l A^(α,β,γ)_k,l
(x_1x_2)^k-l ((1-x_1^2)(1-x_2^2))^1/2(k+l) Q_γ-k^(α+k+l,β+k-l)(2x_<^2-1)
P_γ-k^(α+k+l,β+k-l)(2x_>^2-1),
where A_k,l^(α,β,γ) is defined in
(<ref>).
Starting with
Corollary <ref> and directly applying
(<ref>)
completes the proof.
Let γ,α,β∈ℂ,
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
α∉ℤ, α+γ,β+γ∉-ℕ,
X^±, x_≶ as defined
in (<ref>), (<ref>) respectively,
such that the complex variables γ,α,β,x_1,x_2 are in some
yet to be determined neighborhood of the real line.
Then
∫_0^1 ∫_0^π Q_γ^(α,β)(X^±)
m^(α,β)(w,ϕ)
=√(π) Γ(γ+1)Γ(β+1/2)Γ(α-β)/2Γ(α+γ+1) Q_γ^(α,β)(2x_<^2-1)
P_γ^(α,β)(2x_>^2-1).
Starting with Theorem <ref> and setting k=l=0 completes the proof.
Let γ,α,β∈ℂ,
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
α∉ℤ, α+γ,β+γ∉-ℕ,
x,w∈ℂ with
X^±, x_≶ as defined
in (<ref>), (<ref>) respectively,
such that the complex variables γ,α,β,x_1,x_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
Q_γ^(α,β)( X^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k(-γ)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(-β-γ)_l/(α+γ+1)_l
(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
× Q_γ-k^(α+k+l,β+k-l)(2x_<^2-1)
P_γ-k^(α+k+l,β+k-l)(2x_>^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/βC_k-l^β(x) .
The result follows by starting with the addition theorem
for Jacobi functions of
the second kind (<ref>) and applying the definition
(<ref>)
completes the proof.
In the case when z_1,z_2,w,x=cosϕ are real numbers, then the argument of the Jacobi
function of the second kind on-the-cut in the addition theorem for the Jacobi function of the second kind takes a simpler and more convenient form.
This is analogous to the addition theorem (<ref>).
We present this result now.
Let γ,α,β,w∈ℝ,
α∉ℤ, α+γ,β+γ∉-ℕ,
x_1,x_2∈(-1,1), ϕ∈[0,π],
with X^±, x_≶ as defined
in (<ref>), (<ref>) respectively.
Then
Q_γ^(α,β)( X^±)
=Γ(α+1)Γ(γ+1)/Γ(α+γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k(-γ)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(-β-γ)_l/(α+γ+1)_l
(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
× Q_γ-k^(α+k+l,β+k-l)(2x_<^2-1)
P_γ-k^(α+k+l,β+k-l)(2x_>^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/βC_k-l^β(cosϕ).
This result follows from Theorem <ref> by setting the complex variables to be real.
§ OLVER NORMALIZED JACOBI FUNCTIONS AND THEIR ADDITION THEOREMS
Koornwinder's addition theorem
for Jacobi polynomials with degree n∈ℕ_0(<ref>) is terminating with the k sum being over k∈{0,…,n}. One can see this by examination of (<ref>), (<ref>)
by recognizing that both Jacobi polynomials P_γ-k^(α+k+l,β+k-l) vanish for γ=n∈ℕ_0 and k≥ n+1. However, considering the limit as γ→ n for all values of k∈ℕ_0 in Koornwinder's addition theorem, the factor 1/(-γ)_k blows up for k≥ n+1. On the other hand, when this factor is multiplied in the limit by the
Jacobi function of the first kind prefactor containing 1/Γ(γ-k+1), then, taking into account the residues of the gamma function, the product remains
finite, namely
lim_γ→ n1/(-γ)_kΓ(γ-k+1)=
lim_γ→ nΓ(-γ)/Γ(-γ+k)Γ(γ-k+1)=(-1)^k/n!,
for k≥ n+1.
But when this finite factor is multiplied by the
second Jacobi polynomial P_γ-k^(α+k+l,β+k-l) which vanishes for k≥ n+1, the
resulting expression vanishes for all these k values
which results in a terminating sum over k∈{0,…,n}.
Unlike Koornwinder's addition theorem for
Jacobi polynomials, the addition theorem for the
Jacobi functions of the second kind (see
<ref>) is not a terminating sum.
One can see this by examination of (<ref>),
by recognizing that the Jacobi polynomials
P_γ-k^(α+k+l,β+k-l) vanish for
γ=n∈ℕ_0 and k≥ n+1. However,
considering the limit as γ→ n for all values
of k∈ℕ_0 in Koornwinder's addition theorem,
the factor 1/(-γ)_k blows up for k≥ n+1.
On the other hand, when this factor is multiplied in the limit by the
Jacobi function of the first kind prefactor containing
1/Γ(γ-k+1), then, taking into account the residues
of the gamma function, the product remains
finite, namely (<ref>),
for k≥ n+1.
This finite factor is then multiplied by the Jacobi function
of the second kind
Q_γ-k^(α+k+l,β+k-l)(2z_>^2-1)
which does not vanish for k≥ n+1 for
α,β∉ℤ, unlike the case for
Jacobi polynomials.
§.§ Olver normalized Jacobi functions
We previously introduced Olver's normalization of the Gauss <cit.> and generalized
hypergeometric function (<ref>)
(see also <cit.>)
which results in these functions being entire in all of the parameters which appear, including those occurring in denominator factors.
Olver applied this concept of special normalization previously to the associated Legendre function of
the second kind
<cit.> (see also <cit.>).
We now demonstrate how to apply this concept to the Jacobi functions of the first and second kind.
In the above description, instead of carefully determining the limits of the
relevant functions when there are removable singularities due to the appearance of various gamma function prefactors,
an alternative option is to use appropriately defined Olver normalized Jacobi
functions and recast the addition theorems correspondingly.
The benefit of using Olver normalized definitions of the Jacobi functions is that
one avoids complications due to gamma functions with
removable singularities.
Typical examples of these benefits occur in the frequently
encountered cases where the degree γ and
the parameters α,β are given by integers.
In these cases, using the standard definitions such as
those which appear in Theorems <ref>, <ref>,
the appearing functions are not defined and careful limits must be taken. However, if one adopts carefully chosen Olver normalized definitions where
only the Olver normalized Gauss hypergeometric
functions are used, then these functions will be
entire for all values of the parameters.
As we will see, by using these definitions, we arrive
at formulas for the addition theorems which are elegant
and highly useful! First we give our new choice of
the Olver normalization and then give the relations of the
Olver normalized definitions in terms of the usual definitions.
Our definitions of Olver normalized Jacobi functions of
the first and second kind in the hyperbolic and trigonometric contexts are given by
P_γ^(α,β)(z):=21-γ,α+β+γ+1α+11-z/2,
Q_γ^(α,β)(z):=2^α+β+γ/(z-1)^α+γ+1(z+1)^β21γ+1,α+γ+1α+β+2γ+22/1-z,
𝖯_γ^(α,β)(x):=21-γ,α+β+γ+1α+11-x/2,
𝖰_γ^(α,β)(x):=
12Γ(α+1)(1+x/2)^γ(
cos(πα)Γ(α+γ+1)/Γ(γ+1)21-γ,-β-γ1+αx-1/x+1
-Γ(β+γ+1)/Γ(α+β+γ+1)(1+x/1-x)^α21-α-γ,-α-β-γ1-αx-1/x+1).
Therefore one has the following connection relations between the Jacobi functions of the first and second kinds and their Olver normalized counterparts, namely
P_γ^(α,β)(z)=Γ(α+γ+1)/Γ(γ+1)P_γ^(α,β)(z),
Q_γ^(α,β)(z)=Γ(α+γ+1)Γ(β+γ+1) Q_γ^(α,β)(z),
P_γ^(α,β)(x)=Γ(α+γ+1)/Γ(γ+1)𝖯_γ^(α,β)(x).
Note that
𝖯_γ^(α,β)(x)
=P_γ^(α,β)(x± i0),
as in (<ref>).
Furthermore in the special case γ=0 one has
Q_0^(α,β)(z):=2^α+β/(z-1)^α+1(z+1)^β211,α+1α+β+22/1-z.
As of the date of publication of this manuscript, we have been unable to find an Olver normalized version of the Jacobi function of the second kind on-the-cut
𝖰_γ^(α,β)(x). However, we did find
a special normalization of this
function which works well
when γ=0 and the β parameter
takes integer values; this case is of
particular importance because it appears in a very important application
(see Section <ref> below).
Let b∈ℕ_0. Define
𝒬_-k^(α+k+l,b+k-l)(x):=lim_γ→0,β→ b
(-β-γ)_l Q_γ-k^(α+k+l,β+k-l)(x),
which is a well-defined function for all α,b,x,k,l in its domain.
§.§ Addition theorems for the Olver normalized Jacobi functions
Now that we've introduced the Olver normalized Jacobi functions of the first and second kind in the hyperbolic and trigonometric contexts, we are in a position to perform the straightforward derivation of the corresponding addition theorems for these functions.
Let γ,α,β∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
x,w∈ℂ, and
Z^±, X^± as defined in
(<ref>), (<ref>), respectively,
such that the complex variables γ,α,β,z_1,z_2,x_1,x_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
P_γ^(α,β)(Z^±)=
Γ(α+1)Γ(α+γ+1)/Γ(γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k(-γ)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(α+γ+1)_l(-β-γ)_l(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×P_γ-k^(α+k+l,β+k-l)(2z_1^2-1)
P_γ-k^(α+k+l,β+k-l)(2z_2^2-1)w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/β C_k-l^β(x),
Q_γ^(α,β)(Z^±)=
Γ(α+1)Γ(α+γ+1)
Γ(β+γ+1)
∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(α+γ+1)_l(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×Q_γ-k^(α+k+l,β+k-l)(2z_>^2-1)
P_γ-k^(α+k+l,β+k-l)(2z_<^2-1)w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/β C_k-l^β(x),
P_γ^(α,β)( X^±)=
Γ(α+1)Γ(α+γ+1)/Γ(γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k(-γ)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(α+γ+1)_l(-β-γ)_l(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
×𝖯_γ-k^(α+k+l,β+k-l)(2x_1^2-1)
𝖯_γ-k^(α+k+l,β+k-l)(2x_2^2-1)w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/β C_k-l^β(x),
𝖰_γ^(α,β)( X^±)=
Γ(α+1)
∑_k=0^∞(-1)^k(α+β+γ+1)_k(α+1)_k/(α+k)(β+1)_k
×∑_l=0^k(∓ 1)^k-l
(α+k+l)(-β-γ)_l(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
×𝖰_γ-k^(α+k+l,β+k-l)(2x_<^2-1)
𝖯_γ-k^(α+k+l,β+k-l)(2x_>^2-1)w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/β C_k-l^β(x).
Substituting
(<ref>)–(<ref>)
into
(<ref>),
(<ref>),
(<ref>),
(<ref>) as necessary
completes the proof.
There are also the corresponding expansions that are
sometimes useful with the l sum reversed, i.e.,
making the replacement l'=k-l and then replacing
l'↦ l in Theorem <ref>.
These are given as follows.
Let γ,α,β∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
x,w∈ℂ, with
Z^±, X^± as defined in
(<ref>), (<ref>), respectively,
and
the complex variables γ,α,β,z_1,z_2,x_1,x_2,x,w
are in some yet to be determined neighborhood of the real line.
Then
P_γ^(α,β)(Z^±)=
Γ(α+1)Γ(α+γ+1)/Γ(γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k(-γ)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^l(α+2k-l)(α+γ+1)_k-l(-β-γ)_k-l(z_1z_2)^l((z_1^2-1)(z_2^2-1))^2k-l/2
×P_γ-k^(α+2k-l,β+l)(2z_1^2-1)
P_γ-k^(α+2k-l,β+l)(2z_2^2-1)
w^lP_k-l^(α-β-1,β+l)(2w^2-1)β+l/β C_l^β(x),
Q_γ^(α,β)(Z^±)=
Γ(α+1)Γ(α+γ+1)
Γ(β+γ+1)
∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^l(α+2k-l)(α+γ+1)_k-l(z_1z_2)^l((z_1^2-1)(z_2^2-1))^2k-l/2
×Q_γ-k^(α+2k-l,β+l)(2z_>^2-1)
P_γ-k^(α+2k-l,β+l)(2z_<^2-1)w^lP_k-l^(α-β-1,β+l)(2w^2-1)β+l/β C_l^β(x),
P_γ^(α,β)( X^±)=
Γ(α+1)Γ(α+γ+1)/Γ(γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k(-γ)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^l(α+2k-l)(α+γ+1)_k-l(-β-γ)_k-l(x_1x_2)^l((x_1^2-1)(x_2^2-1))^2k-l/2
×𝖯_γ-k^(α+2k-l,β+l)(2x_1^2-1)
𝖯_γ-k^(α+2k-l,β+l)(2x_2^2-1)
w^lP_k-l^(α-β-1,β+l)(2w^2-1)β+l/β C_l^β(x),
𝖰_γ^(α,β)( X^±)=
Γ(α+1)
∑_k=0^∞(-1)^k
(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^l(α+2k-l)(-β-γ)_k-l(x_1x_2)^l((x_1^2-1)(x_2^2-1))^2k-l/2
×𝖰_γ-k^(α+2k-l,β+l)(2x_<^2-1)
𝖯_γ-k^(α+2k-l,β+l)(2x_>^2-1)
w^lP_k-l^(α-β-1,β+l)(2w^2-1)β+l/β C_l^β(x).
Making the replacement l↦ k-l in
Theorem <ref> completes the proof.
By examining the expansion of the Jacobi function of
the first kind, one can see that the (-γ)_k
shifted factorial in this alternative expansion is
moved from the denominator to the numerator, so it is
more natural for Jacobi polynomials where the sum is
terminating.
One can see the benefit is that all the functions
involved in the expansions are well-defined for all
values of the parameters, including integer values.
These expansions are extremely useful for expansions
of fundamental solutions of rank-one symmetric spaces
where all the degrees and parameters are given by integers.
One no longer has any difficulties with the various
functions not being defined for certain parameter values.
This is completely resolved. One example is that in
the integer context for the Jacobi function of
the second kind, the functions appear with degree equal
to γ-k for all k∈ℕ_0.
These quickly become undefined for negative values
of the degree. However, since the Olver normalized Jacobi
functions are entire functions, there is no longer
any problem here. These alternative expansions are
highly desirable!
Now we consider some special cases which are potentially
useful in applications. The first application we treat
is for the substitution w=1
in Corollary <ref> using the special
value (<ref>) which we present now.
Let γ,α,β∈ℂ,
z_1,z_2∈ℂ∖(-∞,1],
x∈ℂ, with Z^± as defined in
(<ref>) and
the complex variables γ,α,β,z_1,z_2,x_1,x_2,x are in some
yet to be determined neighborhood of the real line.
Then
P_γ^(α,β)(Z^±)=
Γ(α+1)Γ(α+γ+1)/Γ(γ+1)∑_k=0^∞(α+1)_k(α+β+γ+1)_k(-γ)_k/(α+k)(β+1)_k
×∑_l=0^k
(-1)^l(α+2k-l)(α+γ+1)_k-l
(-β-γ)_k-l(α-β)_l/l!(z_1z_2)^l((z_1^2-1)(z_2^2-1))^2k-l/2
×β+l/β C_l^β(x)
P_γ-k^(α+2k-l,β+l)(2z_1^2-1)
P_γ-k^(α+2k-l,β+l)(2z_2^2-1),
Q_γ^(α,β)(Z^±)=
Γ(α+1)Γ(α+γ+1)
Γ(β+γ+1)
∑_k=0^∞(α+1)_k(α+β+γ+1)_k/(α+k)(β+1)_k
×∑_l=0^k
(-1)^l(α+2k-l)(α+γ+1)_k-l(α-β)_l/l!(z_1z_2)^l
((z_1^2-1)(z_2^2-1))^2k-l/2
×β+l/β C_l^β(x)
Q_γ-k^(α+2k-l,β+l)(2z_>^2-1)
P_γ-k^(α+2k-l,β+l)(2z_<^2-1).
Making the replacement w=1
in Theorem <ref> using the special
value (<ref>) completes the proof.
One nice application of this special case of the
addition theorems for Jacobi functions is the special
value α=β which corresponds
to Gegenbauer functions. Due to the occurrence of
the shifted factorial (α-β)_l only the l=0
term in the above sum survives, and we derive
the corresponding addition theorems.
Let γ,α,β∈ℂ,
z_1,z_2∈ℂ∖(-∞,1],
x∈ℂ, with Z^± as defined in
(<ref>) and
the complex variables γ,α,β,z_1,z_2,x_1,x_2,x
are in some yet to be determined neighborhood of the real line.
Then
P_γ^(α,α)(Z^±)=
Γ(α+1)Γ(α+γ+1)/Γ(γ+1)∑_k=0^∞(α+1)_k(α+α+γ+1)_k(-γ)_k/(α+k)(α+1)_k
×
(α+2k)(α+γ+1)_k(-α-γ)_k
((z_1^2-1)(z_2^2-1))^2k/2
×α+l/α C_0^α(x)
P_γ-k^(α+2k,α)(2z_1^2-1)
P_γ-k^(α+2k,α)(2z_2^2-1),
Q_γ^(α,α)(Z^±)=
Γ(α+1)Γ(α+γ+1)
Γ(α+γ+1)
∑_k=0^∞(α+1)_k(α+α+γ+1)_k/(α+k)(α+1)_k
×
(α+2k)(α+γ+1)_k
((z_1^2-1)(z_2^2-1))^2k/2
×α+l/α C_0^α(x)
Q_γ-k^(α+2k,α)(2z_>^2-1)
P_γ-k^(α+2k,α)(2z_<^2-1).
§.§ Application to the non-compact and compact symmetric spaces of rank one
Let d=dim_ℝ𝕂, where 𝕂 is equal to either the real numbers ℝ, the complex numbers ℂ, the quaternions ℍ or the octonions 𝕆.
For d=1, namely the real case, these are Riemannian manifolds of constant curvature which include Euclidean ℝ^n space, real hyperbolic geometry ℝ_R^n (noncompact) and real hyperspherical (compact) geometry ℝ_R^n, in various models. For d∈{2,4,8},
it is well known that there exists isotropic Riemannian manifolds of both noncompact and compact type which are referred to as the rank one symmetric spaces, see for instance <cit.>.
These include the symmetric spaces given by the complex hyperbolic ^n_R, quaternionic hyperbolic ^n_R, and the octonionic hyperbolic plane ^2_R and the
complex projective space ^n_R, quaternionic projective space ^n_R, and the octonionic projective (Cayley) plane ^2_R,
where R>0 is their corresponding radii of curvatures.
The complex, quaternionic and octonionic rank one symmetric spaces have real dimension given by
2n,
4n
and
16.
For a description of the Riemannian manifolds given by the rank one symmetric spaces,
see for instance <cit.> and the references therein.
Riemannian symmetric spaces, compact and non-compact, come in infinite series (4 corresponding to simple complex groups and 7 corresponding to real simple groups) and a finite class of exceptional spaces, see <cit.>. Each of
these comes with a commutative algebra of invariant differential operators and, correspondingly, a class of eigenfunctions, the spherical functions, which in the rank one examples discussed above are hypergeometric functions. This has been used as
a motivation for several generalizations of the classical Gauss hypergeometric functions. In particular the Heckman-Opdam
hypergeometric functions <cit.>. Another direction for generalization is the work by Macdonald
<cit.> and related publications.
Due to the isotropy of the symmetric spaces of rank one, a fundamental solution of the Laplace-Beltrami operator on these manifolds can be obtained by solving a one-dimensional ordinary differential equation given in terms of the geodesic distance. Laplace's equation is satisfied on these manifolds when the Laplace-Beltrami operator acts on an unknown function and the result is zero.
In geodesic polar coordinates the Laplace-Beltrami operator is given in the rank one noncompact (hyperbolic) symmetric spaces by
Δ= 1/R^2{∂^2/∂ r^2+[d(n-1)coth r+2(d-1)coth(2 r)]
∂/∂ r+1/sinh^2rΔ_K/M}
=
1/R^2{∂^2/∂ r^2+[(dn-1)coth r+(d-1)tanh r]
∂/∂ r+1/sinh^2rΔ_K/M}
=:1/R^2(Δ_r+1/sinh^2rΔ_K/M),
and on the rank one compact (projective) spaces it is given by
Δ= 1/R^2{∂^2/∂θ^2+[d(n-1)cotθ+2(d-1)cot(2 θ)]
∂/∂θ+1/sin^2θΔ_K/M}
=
1/R^2{∂^2/∂θ^2+[(dn-1)cotθ-(d-1)tanθ]
∂/∂θ+1/sin^2θΔ_K/M}
=:1/R^2(Δ_θ+1/sin^2θΔ_K/M),
where r and θ are the geodesic distance on the noncompact and compact rank one symmetric spaces respectively (see <cit.>).
For a spherically symmetric solution such as a fundamental solution, the contribution from Δ_K/M vanishes and one needs to solve Laplace's equation for radial solutions, namely
Δ_r u(r)=0, Δ_θ v(θ)=0.
For the solution to these equations, the homogeneous solutions to the second order ordinary differential equation which appears are given by Jacobi/hypergeometric functions (see <cit.>).
It can be easily verified that a basis for radial solutions can be given by
u(r)=a P_0^(α,β)(cosh(2r))+b Q_0^(α,β)(cosh(2r)),
v(θ)=c P_0^(α,β)(cos(2θ))+d Q_0^(α,β)(cos(2θ)),
where on the complex, quaternionic and octonionic rank one symmetric spaces one has α∈{n-1,2n-1,7}
and β∈{0,1,3} respectively <cit.>.
Furthermore, for a fundamental solution, the solutions need to be singular at the origin and match up to a Euclidean fundamental solutions locally. This requires that the solutions should be irregular at the origin (r=0 and θ=0).
Therefore fundamental solutions must correspond to the solutions which are the functions of the second kind. Hence for a fundamental solution of Laplace's equation a=c=0 and we must determine b and d which will be a function of d, n and R.
Note that the general homogeneous solution as a function of the geodesic coordinate includes contribution from both the function of the first kind and function of the second kind. However, the function of the first kind with γ=0 is simply the constants a and c since P_0^(α,β)(z)=1(<ref>) (same for the functions on-the-cut).
On the other hand, in the case of non-spherically symmetric homogeneous solutions there will be contributions due to the function of the first kind because then the contribution to the Δ_K/M term will be non-zero.
Let ,∈^s, then a Euclidean fundamental solution of Laplace's equation is given by (see for instance <cit.>)
^s( x, x^')=
{[ Γ(s/2)/2π^s/2(s-2) x- x^'^2-s if s=1 or s≥ 3,; 1/2πlog x- x^'^-1 if s=2. ].
For a description of opposite antipodal fundamental solutions on the real hypersphere see <cit.>.
The above analysis leads us to the following theorem.
A fundamental solution and an opposite antipodal fundamental solution of the
Laplace-Beltrami operator on
the rank one noncompact and compact
symmetric spaces respectively given in terms of the geodesic radii r∈[0,∞), θ∈[0,π/2] on these manifolds are given by
(r)
=
(n-1)!/2π^nR^2n-2Q_0^(n-1,0)(cosh(2r)),
(r)=
(2n)!/2π^2nR^4n-2Q_0^(2n-1,1)(cosh(2r)),
(r)=302 400/π^8 R^14Q_0^(7,3)(cosh(2r)),
(θ)=
(n-1)!/2π^nR^2n-2𝖰_0^(n-1,0)(cos(2θ)),
(θ)=
(2n)!/2π^2nR^4n-2𝖰_0^(2n-1,1)(cos(2θ)),
(θ)=302 400/π^8 R^14𝖰_0^(7,3)(cos(2θ)).
The complex, quaternionic and octonionic rank one symmetric spaces all have even dimensions, namely s∈{2n,4n,16}, respectively. It is easy to verify that the homogeneous spherically symmetric solutions of Laplace's equation on the complex, quaternionic and octonionic rank one symmetric spaces are given by Jacobi functions of the first and second kind for the noncompact manifolds and are given by Jacobi functions of the first and second kind on-the-cut for the compact manifolds, both having γ=0, α∈{n-1,2n-1,7} and β∈{0,1,3} respectively. Furthermore, one requires that locally these fundamental solutions match up to a Euclidean fundamental solution.
Using (<ref>), (<ref>), assuming γ=0, α=a, β=b, a∈ℕ, b∈ℕ_0, one has
the following behaviors near the singularity at unity for the Jacobi function of the second kind and the Jacobi function of the second kind on-the-cut, for ϵ→0^+,
Q_0^(a,b)(1+ϵ)∼ Q_0^(a,b)(1-ϵ)
∼2^a-1(a-1)!b!/(a+b)! ϵ^-a.
Referring to the geodesic distance on the hyperbolic manifolds as r∈[0,∞) and on the compact manifolds as θ∈[0,π/2],
one has
cosh(2r)∼cosh(2ρ/R)
∼ 1+2ρ^2/R^2 ,
cos(2θ)∼cos(2ρ/R)∼1-2ρ^2/R^2,
where ρ is the Euclidean geodesic distance.
Matching locally to a Euclidean fundamental solution
(<ref>)
using the
flat-space limit (see for instance <cit.>), one is able to determine the constants of proportionality which are multiplied by the Jacobi functions of the second kind. This completes the proof.
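The flat-space matching can be illustrated numerically. The sketch below (not from the paper) treats the complex hyperbolic space with n=2 (so s=4), writes the geodesic argument as cosh(2ρ/R) with the Euclidean distance ρ held fixed (our reading of the limit above), and compares against the Euclidean fundamental solution of ℝ^4; the printed values approach the flat value as R grows.
\begin{verbatim}
# Numerical sketch: flat-space limit of the candidate fundamental solution on CH^2.
from mpmath import mp, mpf, pi, gamma, hyp2f1, cosh

mp.dps = 25

def jacobiQ(g, a, b, z):
    pre = 2**(a + b + g) * gamma(a + g + 1) * gamma(b + g + 1) / gamma(a + b + 2*g + 2)
    return pre / ((z - 1)**(a + g + 1) * (z + 1)**b) \
           * hyp2f1(g + 1, a + g + 1, a + b + 2*g + 2, 2/(1 - z))

n = 2                                   # CH^2, real dimension s = 2n = 4
rho = mpf('0.5')
for R in (mpf(1), mpf(10), mpf(100)):
    hyperbolic = gamma(n) / (2 * pi**n * R**(2*n - 2)) \
                 * jacobiQ(0, n - 1, 0, cosh(2*rho/R))
    print(R, hyperbolic)
euclidean = gamma(n) / (2 * pi**n * (2*n - 2) * rho**(2*n - 2))
print('flat', euclidean)                # the values above approach this as R grows
\end{verbatim}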
Since fundamental solutions on the rank one symmetric spaces all have γ=0, we first present the expansions in these cases. For the Jacobi functions of the first kind the γ=0 case just corresponds with unity. However, for the Jacobi functions of the second kind, these functions are quite rich, and the expansions are quite useful in that they allow one to produce separated eigenfunction expansions of a fundamental solution of Laplace's equation on these isotropic spaces.
The reader should be aware that
the addition theorems presented below for the Jacobi functions of the second kind with γ=0 are well-defined except in the case where the α and β parameters on the left-hand sides are non-negative integers. In that case, special care must be taken (refer to Theorems <ref>, <ref>),
even though the functions at these parameter values may be obtained by taking the appropriate limit.
Let α,β∈ℂ,
β∉, z_1,z_2∈ℂ∖(-∞,1],
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
x,w∈ℂ,
with
Z^±, X^± as defined in
(<ref>), (<ref>), respectively,
such that the complex variables α,β,z_1,z_2,x_1,x_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
Q_0^(α,β)(Z^±)=
Γ(α+1)Γ(α+1)
Γ(β+1)
∑_k=0^∞(α+1)_k(α+β+1)_k/(α+k)(β+1)_k
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(α+1)_l(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×Q_-k^(α+k+l,β+k-l)(2z_>^2-1)
P_-k^(α+k+l,β+k-l)(2z_<^2-1)
w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/β C_k-l^β(x),
𝖰_0^(α,β)( X^±)=
Γ(α+1)
∑_k=0^∞(-1)^k
(α+1)_k(α+β+1)_k/(α+k)(β+1)_k
×∑_l=0^k(∓ 1)^k-l
(α+k+l)(-β)_l(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
×𝖰_-k^(α+k+l,β+k-l)(2x_<^2-1)
𝖯_-k^(α+k+l,β+k-l)(2x_>^2-1)w^k-lP_l^(α-β-1,β+k-l)(2w^2-1)β+k-l/β C_k-l^β(x).
Substituting
γ=0 in Theorem
<ref> for the
Jacobi functions of the
second kind
completes the proof.
Next we give examples of the expansions for the complex, quaternionic and octonionic cases, where β∈{0,1,3} respectively.
First we treat the complex case which
corresponds to complex hyperbolic space and complex projective space.
In order to do this we start with
Corollary <ref> and
take the limit as β→ 0 using
<cit.>
lim_μ→0n+μ/μC_n^μ(x)=ϵ_n T_n(x),
where ϵ_n:=2-δ_n,0 is the Neumann factor commonly
appearing in Fourier series.
Let α∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
x,w∈ℂ,
with
Z^±, X^± as defined in
(<ref>), (<ref>), respectively,
such that the complex variables α,z_1,z_2,x_1,x_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
Q_0^(α,0)(Z^±)=
Γ(α+1)Γ(α+1)
∑_k=0^∞(α+1)_k(α+1)_k/(α+k)k!
×∑_l=0^k
(∓ 1)^k-l(α+k+l)(α+1)_l(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×Q_-k^(α+k+l,k-l)(2z_>^2-1)
P_-k^(α+k+l,k-l)(2z_<^2-1)w^k-lP_l^(α-1,k-l)(2w^2-1)ϵ_k-lT_k-l(x),
𝖰_0^(α,0)( X^±)=
Γ(α+1)
∑_k=0^∞(-1)^k
(α+1)_k(α+1)_k/(α+k)k!
×∑_l=0^k(∓ 1)^k-l
(α+k+l)(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
×𝒬_-k^(α+k+l,k-l)(2x_<^2-1)
𝖯_-k^(α+k+l,k-l)(2x_>^2-1)w^k-lP_l^(α-1,k-l)(2w^2-1)ϵ_k-lT_k-l(x).
Take the limit as β→ 0
in Corollary <ref>
using (<ref>) completes the proof.
Now we treat the case corresponding to the quaternionic hyperbolic and projective spaces which correspond to β=1.
Let α∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
x,w∈ℂ,
with
Z^±, X^± as defined in
(<ref>), (<ref>), respectively,
such that the complex variables α,z_1,z_2,x_1,x_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
Q_0^(α,1)(Z^±)=
Γ(α+1)Γ(α+1)
∑_k=0^∞(α+1)_k(α+2)_k/(α+k)(2)_k
×∑_l=0^k
(∓ 1)^k-l(1+k-l)(α+k+l)(α+1)_l(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×Q_-k^(α+k+l,1+k-l)(2z_>^2-1)
P_-k^(α+k+l,1+k-l)(2z_<^2-1)w^k-lP_l^(α-2,1+k-l)(2w^2-1)U_k-l(x),
𝖰_0^(α,1)( X^±)=
Γ(α+1)
∑_k=0^∞(-1)^k
(α+1)_k(α+2)_k/(α+k)(2)_k
×∑_l=0^k(∓ 1)^k-l
(1+k-l)(α+k+l)(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
×𝒬_-k^(α+k+l,1+k-l)(2x_<^2-1)
𝖯_-k^(α+k+l,1+k-l)(2x_>^2-1)w^k-lP_l^(α-2,1+k-l)(2w^2-1)U_k-l(x).
Take the limit as β→ 1
in Corollary <ref>
using <cit.>
which connects the Chebyshev polynomial of the second kind to the Gegenbauer polynomial with parameter equal to unity, namely C_n^1(x)=U_n(x). This completes the proof.
Now we treat the case corresponding to the octonionic hyperbolic space and octonionic projective space. This corresponds to β=3.
Let α∈ℂ, z_1,z_2∈ℂ∖(-∞,1],
x_1,x_2∈ℂ∖((-∞,-1]∪[1,∞)),
x,w∈ℂ,
with
Z^±, X^± as defined in
(<ref>), (<ref>), respectively,
such that the complex variables α,z_1,z_2,x_1,x_2,x,w are in some
yet to be determined neighborhood of the real line.
Then
Q_0^(α,3)(Z^±)=
2Γ(α+1)Γ(α+1)
∑_k=0^∞(α+1)_k(α+4)_k/(α+k)(4)_k
×∑_l=0^k
(∓ 1)^k-l(3+k-l)(α+k+l)(α+1)_l(z_1z_2)^k-l((z_1^2-1)(z_2^2-1))^k+l/2
×Q_-k^(α+k+l,3+k-l)(2z_>^2-1)
P_-k^(α+k+l,3+k-l)(2z_<^2-1)w^k-lP_l^(α-4,3+k-l)(2w^2-1)C_k-l^3(x),
𝖰_0^(α,3)( X^±)=
1/3Γ(α+1)
∑_k=0^∞(-1)^k
(α+1)_k(α+4)_k/(α+k)(4)_k
×∑_l=0^k(∓ 1)^k-l
(3+k-l)(α+k+l)(x_1x_2)^k-l((1-x_1^2)(1-x_2^2))^k+l/2
×𝒬_-k^(α+k+l,3+k-l)(2x_<^2-1)
𝖯_-k^(α+k+l,3+k-l)(2x_>^2-1)w^k-lP_l^(α-4,3+k-l)(2w^2-1)C^3_k-l(x).
Setting β=3
in Corollary <ref>
completes the proof.
The above calculations look almost trivial in that they are simply substitutions of the values of β∈{0,1,3} and γ=0 in the addition theorems given by Theorem <ref>. However, it should be understood that ordinarily these computations would be extremely difficult, particularly if one was to use the standard normalizations of the Jacobi functions. With standard normalizations of Jacobi functions these particular values, and in fact for values of integer parameters (α,β) and degrees γ, the Jacobi functions are not even defined. It is only because of the strategic choice of the particular normalization that we have chosen that the evaluation of these particular values becomes quite easy. We will further take advantage of these expansions in later publications.
§.§ Acknowledgements
We would like to thank Tom Koornwinder for so many things: first for being such a great source of ideas, inspiration, insight and experience over the years; for very useful conversations which significantly improved this manuscript; for his essential help in describing and editing for accuracy his story regarding the addition theorem for Jacobi polynomials and his interactions with Dick Askey; for informing us about Moriz Allé and his pioneering work on the addition theorem for ultraspherical polynomials; and for his assistance and instruction in constructing a rigorous proof of Theorem <ref>. Thanks also to Jan Dereziński for valuable discussions, in particular about Olver normalization.
Alle1865
M. Allé.
Über die Eigenschaften derjenigen Gattung von Functionen, welche
in der Entwicklung von (1-2qx+q^2)^-m/2 nach aufsteigenden
Potenzen von q auftreten, und über die Entwicklung des Ausdruckes
{1-2q[cosθcosθ'+sinθsinθ'cos(ψ-ψ')]+q^2}^-m/2.
Sitzungsberichte der mathematisch-naturwissenschaftlichen Classe
der kaiserlichen Akademie der Wissenschaften Wien, 51:429–458, 1865.
AAR
G. E. Andrews, R. Askey, and R. Roy.
Special functions, volume 71 of Encyclopedia of
Mathematics and its Applications.
Cambridge University Press, Cambridge, 1999.
MR385197
R. Askey.
Jacobi polynomials. I. New proofs of Koornwinder's Laplace
type integral representation and Bateman's bilinear sum.
SIAM Journal on Mathematical Analysis, 5:119–124, 1974.
Askeyetal86
R. Askey, T. H. Koornwinder, and M. Rahman.
An integral of products of ultraspherical functions and a
q-extension.
Journal of the London Mathematical Society. Second Series,
33(1):133–148, 1986.
Cartan1929
E. Cartan.
Sur la détermination d'un système orthogonal complet dans un
espace de Riemann symétrique clos.
Rendiconti del Circolo Matematico di Palermo, 53:217–252,
1929.
Cartan1931
E. Cartan.
Lecons sur la géométrie projective complexe.
Paris: Gauthier-Villars. VII. 325 S. (1931)., 1931.
Cohlhypersphere
H. S. Cohl.
Opposite antipodal fundamental solution of Laplace's equation in
hyperspherical geometry.
Symmetry, Integrability and Geometry: Methods and Applications,
7(108):14, 2011.
Cohl12pow
H. S. Cohl.
Fourier, Gegenbauer and Jacobi expansions for a power-law
fundamental solution of the polyharmonic equation and polyspherical addition
theorems.
Symmetry, Integrability and Geometry: Methods and Applications,
9(042):26, 2013.
CohlPalmer
H. S. Cohl and R. M. Palmer.
Fourier and Gegenbauer expansions for a fundamental solution of
Laplace's equation in hyperspherical geometry.
Symmetry, Integrability and Geometry: Methods and Applications,
Special Issue on Exact Solvability and Symmetry Avatars in honour of Luc
Vinet, 11:Paper 015, 23, 2015.
Cohletal2021
H. S. Cohl, J. Park, and H. Volkmer.
Gauss hypergeometric representations of the Ferrers function of the
second kind.
SIGMA. Symmetry, Integrability and Geometry. Methods and
Applications, 17:Paper No. 053, 33, 2021.
Durand78
L. Durand.
Product formulas and Nicholson-type integrals for Jacobi
functions. I. Summary of results.
SIAM Journal on Mathematical Analysis, 9(1):76–86, 1978.
Durand79
L. Durand.
Addition formulas for Jacobi, Gegenbauer, Laguerre, and
hyperbolic Bessel functions of the second kind.
SIAM Journal on Mathematical Analysis, 10(2):425–437, 1979.
DurandFishSim
L. Durand, P. M. Fishbane, and L. M. Simmons, Jr.
Expansion formulas and addition theorems for Gegenbauer functions.
Journal of Mathematical Physics, 17(11):1933–1948, 1976.
ErdelyiHTFII
A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi.
Higher Transcendental Functions. Vol. II.
Robert E. Krieger Publishing Co. Inc., Melbourne, Fla., 1981.
FlenstedJensenKoornwinder73
M. Flensted-Jensen and T. Koornwinder.
The convolution structure for Jacobi function expansions.
Arkiv för Matematik, 11:245–262, 1973.
FlenstedJensenKoorn79
M. Flensted-Jensen and T. H. Koornwinder.
Jacobi functions: the addition formula and the positivity of the dual
convolution structure.
Arkiv för Matematik, 17(1):139–151, 1979.
Koornwinder79
M. Flensted-Jensen and T. H. Koornwinder.
Positive definite spherical functions on a noncompact, rank one
symmetric space.
In Analyse harmonique sur les groupes de Lie (Sém.,
Nancy-Strasbourg 1976–1978), II, volume 739 of Lecture Notes in
Math., pages 249–282. Springer, Berlin, 1979.
Gegenbauer1874
L. Gegenbauer.
Über einige bestimmte Integrale.
Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften.
Mathematische-Naturwissenschaftliche Classe., 70:433–443, 1874.
Gegenbauer1893
L. Gegenbauer.
Das Additionstheorem der Functionen C_n^ν(x).
Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften.
Mathematische-Naturwissenschaftliche Classe., 102:942–950, 1893.
GelfandShilov
I. M. Gel'fand and G. E. Shilov.
Generalized functions. Vol. 1.
Academic Press [Harcourt Brace Jovanovich Publishers], New York, 1964
[1977].
Properties and operations, Translated from the Russian by Eugene
Saletan.
HeckmanSchlichtkrull94
G. Heckman and H. Schlichtkrull.
Harmonic analysis and special functions on symmetric spaces,
volume 16 of Perspectives in Mathematics.
Academic Press, Inc., San Diego, CA, 1994.
HeckmanOpdam87
G. J. Heckman and E. M. Opdam.
Root systems and hypergeometric functions. I.
Compositio Mathematica, 64(3):329–352, 1987.
HeckmanOpdam2021
G. J. Heckman and E. M. Opdam.
Jacobi polynomials and hypergeometric functions associated with root
systems.
In Encyclopedia of special functions: the Askey-Bateman
project. Vol. 2. Multivariable special functions, pages 217–257.
Cambridge Univ. Press, Cambridge, 2021.
Heine1878
E. Heine.
Handbuch der Kugelfunctionen, Theorie und Anwendungen
(volume 1).
Druck und Verlag von G. Reimer, Berlin, 1878.
Helgason59
S. Helgason.
Differential operators on homogenous spaces.
Acta Mathematica, 102:239–299, 1959.
Helgason78
S. Helgason.
Differential geometry, Lie groups, and symmetric spaces,
volume 80 of Pure and Applied Mathematics.
Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York,
1978.
Helgason84
S. Helgason.
Groups and geometric analysis: Integral geometry, invariant
differential operators, and spherical functions, volume 113 of Pure and
Applied Mathematics.
Academic Press Inc., Orlando, FL, 1984.
Jacobi1859
C. G. J. Jacobi.
Untersuchungen über die Differentialgleichung der
hypergeometrischen Reihe.
J. Reine Angew. Math., 56:149–165, 1859.
Koekoeketal
R. Koekoek, P. A. Lesky, and R. F. Swarttouw.
Hypergeometric orthogonal polynomials and their
q-analogues.
Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2010.
With a foreword by Tom H. Koornwinder.
Koornwinder73
T. Koornwinder.
The addition formula for Jacobi polynomials and spherical
harmonics.
SIAM Journal on Applied Mathematics, 25:236–246, 1973.
Koornwinder74
T. Koornwinder.
Jacobi polynomials. II. An analytic proof of the product formula.
SIAM Journal on Mathematical Analysis, 5:125–137, 1974.
Koornwinder75
T. Koornwinder.
Jacobi polynomials. III. An analytic proof of the addition
formula.
SIAM Journal on Mathematical Analysis, 6:533–543, 1975.
Koornwinder77
T. Koornwinder.
Yet another proof of the addition formula for Jacobi polynomials.
Journal of Mathematical Analysis and Applications,
61(1):136–141, 1977.
Koornwinder1972AI
T. H. Koornwinder.
The addition formula for Jacobi polynomials. I. Summary of
results.
Nederl. Akad. Wetensch. Proc. Ser. A 75=Indag. Math.,
34:188–191, 1972.
Koornwinder72A
T. H. Koornwinder.
The addition formula for Jacobi polynomials. I. Summary of results.
Stichting Mathematisch Centrum, Afdeling Toegepaste Wiskunde.
TW: 131/71, November 1972.
Koornwinder72B
T. H. Koornwinder.
The addition formula for Jacobi polynomials, II : the Laplace type
integral representation and the product formula.
Stichting Mathematisch Centrum, Afdeling Toegepaste Wiskunde.
TW: 133/72,
http://persistent-identifier.org/?identifier=urn:nbn:nl:ui:18-7722,
April 1972.
Koornwinder72C
T. H. Koornwinder.
The addition formula for Jacobi polynomials, III : Completion of the
proof.
Stichting Mathematisch Centrum, Afdeling Toegepaste Wiskunde.
TW: 135/72,
http://persistent-identifier.org/?identifier=urn:nbn:nl:ui:18-12598,
December 1972.
Koornwinder18
T. H. Koornwinder.
Dual addition formulas associated with dual product formulas.
In Frontiers in Orthogonal Polynomials and q-Series,
chapter 19, pages 373–392. World Scientific Publishing, Hackensack, NJ,
2018.
Zuhair Nashed and Xin Li, editors,
arXiv:1607.06053v4.
KoornwinderSchwartz97
T. H. Koornwinder and A. L. Schwartz.
Product formulas and associated hypergroups for orthogonal
polynomials on the simplex and on a parabolic biangle.
Constructive Approximation. An International Journal for
Approximations and Expansions, 13(4):537–567, 1997.
LiPeng2007
Z. Li and L. Peng.
Some representations of translations of the product of two functions
for Hankel transforms and Jacobi transforms.
Constructive Approximation. An International Journal for
Approximations and Expansions, 26(1):115–125, 2007.
Macdonald2013
I. G. Macdonald.
Hypergeometric Functions, I. arXiv:1309.4568, 2013.
MOS
W. Magnus, F. Oberhettinger, and R. P. Soni.
Formulas and theorems for the special functions of mathematical
physics.
Third enlarged edition. Die Grundlehren der mathematischen
Wissenschaften, Band 52. Springer-Verlag New York, Inc., New York, 1966.
Miller
W. Miller, Jr.
Symmetry and separation of variables.
Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1977.
With a foreword by Richard Askey, Encyclopedia of Mathematics and its
Applications, Vol. 4.
Olver:1997:ASF
F. W. J. Olver.
Asymptotics and Special Functions.
AKP Classics. A K Peters Ltd., Wellesley, MA, 1997.
Reprint of the 1974 original [Academic Press, New York].
Opdam1988
E. Opdam.
Generalized Hypergeometric Functions Associated with Root
Systems.
dissertation, Leiden University, Leiden, The Netherlands, 1988.
NIST:DLMF
NIST Digital Library of Mathematical Functions.
https://dlmf.nist.gov/, Release 1.1.9 of 2023-03-15.
F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider,
R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and
M. A. McClain, eds.
Szego
G. Szegő.
Orthogonal polynomials.
American Mathematical Society Colloquium Publications, Vol. 23.
Revised ed. American Mathematical Society, Providence, R.I., fourth edition,
1975.
VilenkinSapiro1967
N. Ja. Vilenkin and R. L. Šapiro.
Irreducible representations of the group SU(n) of class I
relative to SU(n-1).
Izvestija Vysših Učebnyh Zavedeniĭ Matematika,
1967(7 (62)):9–20, 1967.
Sapiro1968
R. L. Šapiro.
Special functions related to representations of the group SU(n), of class I with respect to SU(n-1)(n≧ 3).
Izvestija Vysših Učebnyh Zavedeniĭ Matematika,
1968(4 (71)):97–107, 1968.
WimpMcCabeConnor97
J. Wimp, P. McCabe, and J. N. L. Connor.
Computation of Jacobi functions of the second kind for use in
nearside-farside scattering theory.
Journal of Computational and Applied Mathematics,
82(1-2):447–464, 1997.
Seventh 96 International Congress on Computational and Applied
Mathematics (Leuven).
|
http://arxiv.org/abs/2306.04232v1
|
20230607081448
|
Non-minimaxity of debiased shrinkage estimators
|
[
"Yuzo Maruyama",
"Akimichi Takemura"
] |
math.ST
|
[
"math.ST",
"stat.TH",
"62C20"
] |
Non-minimaxity of debiased estimators
Y. Maruyama & A. Takemura
Kobe University & Shiga University
e1,e2
We consider the estimation of the p-variate normal mean of X∼ N_p(θ,I)
under the quadratic loss function.
We investigate the decision-theoretic properties of the debiased shrinkage estimator,
the estimator which shrinks towards the origin for smaller x^2 and
which is exactly equal to the unbiased estimator X for larger x^2.
Such a debiased shrinkage estimator seems superior to
the unbiased estimator X, which would imply minimaxity.
However, we show that it is not minimax under mild conditions.
[class=MSC]
[Primary ]62C20
[; secondary ]62J07
minimaxity
debiased shrinkage estimator
James-Stein estimator
§ INTRODUCTION
Let X have a p-variate normal distribution
𝒩_p (θ, I_p).
We consider the problem of estimating the mean vector θ under
the loss function
L(θ,θ̂)=‖θ̂ - θ‖^2
=∑_i=1^p(θ̂_i - θ_i)^2.
The risk function of an estimator θ̂(X) is
R(θ,θ̂)=E[ ‖θ̂(X) - θ‖^2]
=∫_ℝ^p‖θ̂(x) - θ‖^2/(2π)^p/2exp(- ‖x - θ‖^2/2) dx.
The usual unbiased estimator X has the constant risk p and is
minimax for p∈ℕ.
<cit.> showed that there are orthogonally equivariant
estimators of the form
θ̂_ϕ(X)= ( 1- ϕ(‖X‖^2)/‖X‖^2)X
which dominate X when p ≥ 3.
<cit.>
gave an explicit dominating procedure
θ̂_JS(X)=( 1- (p-2)/‖X‖^2)X,
called the James-Stein estimator.
Further, as shown in <cit.>, the James-Stein estimator is inadmissible since
the positive-part estimator
θ̂_JS^+(X)=max(0, 1- (p-2)/‖X‖^2)X
dominates θ̂_JS.
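As a concrete illustration (a minimal Python sketch of ours, not from the paper; the dimension p=10 and the random draw are arbitrary), the two estimators can be computed as follows:

import numpy as np

def james_stein(x):
    # theta_hat_JS(x) = (1 - (p-2)/||x||^2) x
    p = x.shape[0]
    return (1.0 - (p - 2) / np.sum(x**2)) * x

def james_stein_plus(x):
    # positive-part version: the shrinkage factor is truncated at zero
    p = x.shape[0]
    return max(0.0, 1.0 - (p - 2) / np.sum(x**2)) * x

rng = np.random.default_rng(0)
theta = np.zeros(10)                    # p = 10, true mean at the origin
x = theta + rng.standard_normal(10)     # X ~ N_p(theta, I_p)
print(np.sum((james_stein(x) - theta)**2),
      np.sum((james_stein_plus(x) - theta)**2))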
For a class of general shrinkage estimators θ̂_ϕ(X) given by (<ref>),
<cit.> proposed a sufficient condition for minimaxity, {<ref> and <ref>}
where
* 0≤ϕ(w)≤ 2(p-2) for all w≥ 0,
* ϕ'(w)≥ 0 for all w≥ 0.
Further
<cit.> expressed the risk of
θ̂_ϕ(X) as
E[‖θ̂_ϕ-θ‖^2]=
p+E[r_ϕ(‖X‖^2)],
where
r_ϕ(w)=ϕ(w)/w{ϕ(w)-2(p-2)}-4ϕ'(w).
Hence a shrinkage factor ϕ(w) satisfying r_ϕ(w)≤ 0 for all w≥ 0
implies minimaxity of θ̂_ϕ.
We see that {<ref> and <ref>} is a tractable sufficient condition
for r_ϕ(w)≤ 0 for all w≥ 0.
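For example, for the James-Stein choice ϕ(w)=p-2, conditions <ref> and <ref> hold trivially and (<ref>) gives
r_ϕ(w)=(p-2)/w{(p-2)-2(p-2)}=-(p-2)^2/w≤ 0,
so that the risk is p-(p-2)^2E[1/‖X‖^2], in agreement with the formula below.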
A series of papers, <cit.>, showed that
the James-Stein estimator can be interpreted as an empirical Bayes estimator under
θ∼𝒩_p(0,τ I_p).
Hence shrinkage estimators, including the James-Stein estimator, utilize
the prior information that θ^2 is relatively small.
In fact, the risk function of the James-Stein estimator
is
p-(p-2)^2E[1/‖X‖^2]
which is increasing in θ^2.
On the other hand,
the larger x^2 suggests that the prior information (θ^2 is relatively small)
is incorrect.
Although the James-Stein estimator uniformly dominates X under the quadratic risk,
for larger x^2,
the unbiased estimator X seems superior to
the shrinkage estimators, whose bias is given by
E[( 1- ϕ(‖X‖^2)/‖X‖^2)X]-θ
=-E[ϕ(‖X‖^2)/‖X‖^2 X],
which is of order O(1/‖θ‖) provided ϕ(w) is bounded.
Note that many popular shrinkage estimators have ϕ with
lim inf_w→∞ϕ(w)≥ p-2.
See a sufficient condition for admissibility by <cit.>.
In this paper, we define debiased shrinkage estimator by
* ϕ(w) is weakly differentiable with bounded ϕ'(w),
* For some a>0, 0<ϕ(w)≤ w on (0,a) and ϕ(w)=0 on [a,∞).
Hence the debiased shrinkage estimator shrinks
towards the origin for smaller x^2 and
is exactly equal to the unbiased estimator X for larger x^2.
Such debiased shrinkage estimators seem superior to
the unbiased estimator X, which would imply minimaxity.
In this paper, we are interested in whether
the debiased shrinkage estimators are minimax or not.
In the literature, there are some debiased estimators including
SCAD (Smoothly Clipped Absolute Deviation) by <cit.> and
nearly unbiased estimators by MCP (Minimax Concave Penalty) by <cit.>,
which have not necessarily aimed at enjoying the conventional minimaxity.
The organization of this paper is as follows.
By (<ref>),
the risk difference between θ̂ and the minimax estimator X
is given by
E[‖θ̂_ϕ-θ‖^2]-p=E[r_ϕ(‖X‖^2)]
=
E[r_ϕ(‖X‖^2)I_[0,a](‖X‖^2)],
where r_ϕ(w) is given by (<ref>) and
the second equality follows from <ref>.
In Section <ref>, we give a useful result, Theorem <ref>,
on the asymptotic behavior of this type of
an expected value when θ^2→∞.
In Section <ref>,
we review SCAD and MCP as a solution of penalized least squares and
investigate how the corresponding ϕ(w) approaches 0 as w↗ a.
In Section <ref>, using Theorem <ref>,
we show that the debiased shrinkage estimators satisfying <ref> and <ref>,
together with mild conditions on the way in which ϕ(w) approaches 0 as w↗ a,
are not minimax, which is perhaps unexpected.
§ ASYMPTOTIC BEHAVIOR OF AN EXPECTED VALUE
For fixed a>0, we investigate the asymptotic behavior of the expected value
G(‖θ‖^2;a)=E[g(‖X‖^2)I_[0,a](‖X‖^2)] as ‖θ‖^2→∞
where g(w) satisfies <ref> and <ref>:
* w^(p-1)/2|g(w)| is bounded on [0,a].
* There exists a nonnegative real b such that
lim_w↗ ag(w)/(a-w)^b=1.
Notice that, on <ref>, we do not lose the generality even if
we assume the limit of g(w)/(a-w)^b is 1. If the limit is equal to g_*(≠ 0),
we have only to consider
g_* [g(X^2)/g_*I_[0,a](X^2)].
Then we have the following result.
Assume p≥ 2 and that g(w) satisfies <ref> and <ref>.
Let ν=‖θ‖^2 and
c(a,b,p)= a^(p-1)/4+b/2 2^bΓ(b+1)/{√(2π)exp(a/2)}.
Then
lim_ν→∞ν^(p+1)/4+b/2(e^ν/2/e^√(aν)) G(ν;a)
=c(a,b,p).
We first prove the theorem under the proper subset of <ref>;
* |g(w)| is bounded on [0,a].
Note that X^2 can be decomposed as U^2+V where U∼ N(√(ν),1), V∼χ^2_p-1
and U and V are mutually independent.
Then we have
G(ν;a) =[g(U^2+V)I_[0,a](U^2+V)]
=_U[_V[g(U^2+V)I_[0,a](U^2+V) U=u]I_[-√(a),√(a)](U)]
=_U[_V[g(V+a-{a-U^2})I_[0,a-U^2](V) U=u]I_[-√(a),√(a)](U)]
=_U[H(a-U^2)(a-U^2)^b+qI_[-√(a),√(a)](U)] ,
where q=(p-1)/2, H(·) is given by
H(y)=1/y^b+q∫_0^y g(v+a-y)f_p-1(v) v
and f_p-1(v) is the pdf of χ^2_p-1.
Hence G(ν;a) is rewritten as
G(ν;a)= ∫_-^ H(a-u^2)
(a-u^2)^b+q1/√(2π) e^-(u-)^2/2 u.
Since the asymptotic behavior of G(ν;a) as ν→∞ is of interest,
ν>a is assumed in the following.
For G(ν;a), apply the change of variables,
z=(-)(-u+)
which implies
u= -z/-, u=-1/- z.
Then we have
G(ν;a)
=1/√(2π)(-)∫_0^2(-)(a-{ -z/-}^2)^b+q
×
H(a-{ -z/-}^2)
exp(-1/2{ -z/--}^2) z.
Further we rewrite it as
G(ν;a)
=exp(-{-}^2/2)/√(2π)(-)∫_0^2(-)
H(a-{ -z/-}^2)
×(2 z/--z^2/(-)^2)^b+qexp(-z-z^2/2(-)^2) z
=exp(-{-}^2/2)2^b+qa^(b+q)/2/√(2π)(-)^b+q+1∫_0^∞ z^b+qe^-zH_1(z;ν) z,
where
H_1(z;ν) =H(a-{ -z/-}^2)
(1-z/2 (-))^b+q
×exp(-z^2/2(-)^2)I_(0,2(-)) (z).
From Part <ref> of Lemma <ref> below, H(y) on [0,a] is bounded under <ref>.
Hence, for any ν, we have
H_1(z;ν)
≤max_y∈[0,a]|H(y)| 0≤ z≤ 2 (-)
=0 z>2 (-).
Further, by (<ref>) and Part <ref> of Lemma <ref>, we have
lim_ν→∞H_1(z;ν)=lim_y→ 0H(y)=Γ(b+1)/Γ(b+(p+1)/2)2^(p-1)/2.
By (<ref>) and (<ref>), the dominated convergence theorem,
gives
lim_ν→∞∫_0^∞ z^b+qe^-zH_1(z;ν) z
= ∫_0^∞ z^b+qe^-zlim_ν→∞H_1(z;ν) z
= Γ(b+1)/2^(p-1)/2,
which completes the proof under <ref>.
Now we assume <ref>, that is, w^(p-1)/2|g(w)| is bounded on [0,a] as
w^(p-1)/2|g(w)|<M.
Let f_p(w,ν) be the density of W=X^2.
Note that f_p(w,ν)/f_p(w,0) for any fixed ν>0 is increasing in w and that
w^-(p-1)/2 is decreasing in w.
By the correlation inequality, we have
| ∫_0^a/2 g(w) f_p(w,ν) w |
< ∫_0^a/2 |g(w)| f_p(w,ν) w
<M ∫_0^a/2 w^-(p-1)/2 f_p(w,ν) w
=M ∫_0^a/2 w^-(p-1)/2f_p(w,ν)/f_p(w,0)f_p(w,0) w
≤ M ∫_0^a/2 w^-(p-1)/2f_p(w,0) w/∫_0^a/2 f_p(w,0) w∫_0^a/2 f_p(w,ν) w
≤ M_1 ∫_0^a/2 f_p(w,ν) w
where
M_1=M ∫_0^a/2 w^-(p-1)/2f_p(w,0) w/∫_0^a/2 f_p(w,0) w.
Let
g_L(w)=
-M_1 0≤ w≤ a/2
g(w) a/2<w≤ a,
g_U(w)=
M_1 0≤ w≤ a/2
g(w) a/2<w≤ a,
which are both bounded.
Then we have
[g_L(W)I_[0,a](W)]<G(ν;a)<[g_U(W)I_[0,a](W)]
and, by the result under <ref>,
lim_ν→∞ν^(p+1)/4+b/2(e^ν/2/e^√(aν))E[g_L(W)I_[0,a](W)]
=
lim_ν→∞ν^(p+1)/4+b/2(e^ν/2/e^√(aν))E[g_U(W)I_[0,a](W)]
=c(a,b,p),
where c(a,b,p) is given by (<ref>).
Hence Theorem <ref> is valid for the case where
w^(p-1)/2|g(w)| is bounded.
The following lemma gives some properties on H(y) given by (<ref>),
needed in the proof of Theorem <ref>.
We assume that |g(w)| is bounded on [0,a] as in <ref>.
Then we have the following results.
*
lim_y→ 0H(y)=Γ(b+1)/Γ(b+(p+1)/2)2^(p-1)/2.
*
H(y) on [0,a] is bounded.
By (<ref>), for any ϵ>0, there exists δ_1(ϵ)>0 such that
(1-ϵ)(a-w)^b ≤ g(w)≤ (1+ϵ)(a-w)^b
for all a-δ_1≤ w < a and hence
(1-ϵ)(y-v)^b ≤ g(v+a-y)≤ (1+ϵ)(y-v)^b
for all 0<v<y≤δ_1(ϵ).
Further, for any 0<ϵ<1, we have
1-ϵ≤ e^-v/2 for all 0<v≤δ_2(ϵ) where
δ_2(ϵ)=-2log(1-ϵ) and hence
(1-ϵ)v^q-1/Γ(q)2^q≤ f_p-1(v) ≤v^q-1/Γ(q)2^q
with q=(p-1)/2 and for all v≤δ_2(ϵ).
Then for any ϵ>0 and all 0<y≤min(δ_1(ϵ),δ_2(ϵ)),
we have
(1-ϵ)^2
∫_0^y (y-v)^b/y^b+qv^q-1/Γ(q)2^q v≤ H(y) ≤
(1+ϵ)∫_0^y (y-v)^b/y^b+qv^q-1/Γ(q)2^q v,
where
∫_0^y (y-v)^b/y^b+qv^q-1 v=B(q,b+1)=B((p-1)/2,b+1).
Hence, for 0<y≤min(δ_1(ϵ),δ_2(ϵ)), we have
(1-ϵ)^2Γ(b+1)/Γ(b+(p+1)/2)2^(p-1)/2≤ H(y)≤(1+ϵ)Γ(b+1)/Γ(b+(p+1)/2)2^(p-1)/2.
and the part <ref> follows.
By (<ref>), we have
H(a)=1/a^b+(p-1)/2∫_0^a g(v)f_p-1(v) v,
which is bounded under <ref>.
By the continuity of H(y) and Part <ref> of this lemma,
the part <ref> follows.
§ REVIEW OF EXISTING DEBIASED SHRINKAGE ESTIMATORS
As we mentioned in Section <ref>,
in the literature, there are some “debiased shrinkage” estimators including
SCAD (Smoothly Clipped Absolute Deviation) by <cit.> and
nearly unbiased estimators by MCP (Minimax Concave Penalty) by <cit.>, although
they do not necessarily aim at enjoying the conventional minimaxity.
In this section, we assume p=1 and
review existing estimators as solutions of the penalized least squares problem;
θ̂(P;λ)=argmin_θ{ (θ-x)^2+P(|θ|;λ)}.
Table <ref> summarizes three popular penalty functions P(|θ|;λ),
and the corresponding minimizers “ridge”, “soft thresholding” and “hard thresholding”.
For the three estimators,
the corresponding shrinkage factors, ϕ(x^2),
from the form
θ̂=(1-ϕ(x^2)/x^2)x
are
ϕ_R(w)=w/(λ+1),
ϕ_ST(w)=
w for w ≤λ^2,
λ w^1/2 for w>λ^2,
and ϕ_HT(w)=
w for w ≤λ^2,
0 for w>λ^2.
We see that
<ref>, <ref> and <ref> are not satisfied by
ϕ_R(w), ϕ_ST(w) and ϕ_HT(w), respectively.
SCAD (Smoothly Clipped Absolute Deviation) by <cit.> is the minimizer,
(<ref>), with the continuous differentiable penalty function defined by
P'(|θ|;λ,α)
=λ for |θ|<λ,
(αλ - |θ|)/(α-1) for λ≤ |θ|<αλ,
0 for |θ|≥αλ,
where α>2.
The resulting solution is
θ̂_SCAD(λ;α)
=
0 for 0<x^2< λ^2,
(1-λ/|x|)x for λ^2 ≤ x^2 ≤ 4λ^2,
(1-(-x^2+αλ |x|)/{(α-2)x^2})x for 4λ^2 ≤ x^2 ≤α^2 λ^2,
x for x^2≥α^2λ^2,
where the corresponding shrinkage factor is
ϕ_SCAD(w)
=
w for 0<w< λ^2,
λ w^1/2 for λ^2 ≤ w ≤ 4λ^2,
(-w+αλ w^1/2)/(α-2) for 4λ^2 ≤ w ≤α^2 λ^2,
0 for w≥α^2λ^2.
We see that ϕ_SCAD(w) satisfies both <ref> and <ref>.
Further, by (<ref>), the derivative at w=α^2λ^2 is
(d/dw)ϕ_SCAD(w)|_w=α^2λ^2=-1/{2(α-2)}<0.
As pointed in <cit.>,
the nearly unbiased estimator by MCP (Minimax Concave Penalty) considered in
<cit.> is equivalent to
the minimizer of (<ref>) with the continuous differentiable penalty function defined by
P'(|θ|;λ,α)
=
2{λ-|θ|/α} for |θ|<αλ,
0 for |θ|≥αλ,
where α>1.
Then the resulting solution is given by
θ̂_MCP(λ;α)
=
0 for 0<x^2< λ^2,
(1-(-x^2+αλ |x|)/{(α-1)x^2})x for λ^2 ≤ x^2 ≤α^2 λ^2,
x for x^2≥α^2λ^2,
where the corresponding shrinkage factor is
ϕ_MCP(w)
=
w for 0<w< λ^2,
(-w+αλ w^1/2)/(α-1) for λ^2 ≤ w ≤α^2 λ^2,
0 for w≥α^2λ^2.
We see that ϕ_MCP(w) satisfies both <ref> and <ref>.
Further, by (<ref>), the derivative at w=α^2λ^2 is
(d/dw)ϕ_MCP(w)|_w=α^2λ^2=-1/{2(α-1)}<0.
By (<ref>) and (<ref>),
both ϕ_SCAD(w) and ϕ_MCP(w)
approach 0 as w↗α^2λ^2 with the negative slope.
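To make the two shrinkage factors concrete, the following sketch (illustrative Python of ours; the values λ=1 and α=3.7 are arbitrary) implements ϕ_SCAD and ϕ_MCP and checks numerically that both vanish at w=α^2λ^2 with the one-sided slopes -1/{2(α-2)} and -1/{2(α-1)} computed above:

import numpy as np

def phi_scad(w, lam, alpha):
    # shrinkage factor of theta_hat_SCAD written as (1 - phi(w)/w) x, with w = x^2
    if w < lam**2:
        return w
    if w <= 4.0 * lam**2:
        return lam * np.sqrt(w)
    if w <= (alpha * lam)**2:
        return (-w + alpha * lam * np.sqrt(w)) / (alpha - 2.0)
    return 0.0

def phi_mcp(w, lam, alpha):
    # shrinkage factor of theta_hat_MCP
    if w < lam**2:
        return w
    if w <= (alpha * lam)**2:
        return (-w + alpha * lam * np.sqrt(w)) / (alpha - 1.0)
    return 0.0

lam, alpha = 1.0, 3.7
a = (alpha * lam)**2          # both factors vanish for w >= a
eps = 1e-6
print((phi_scad(a, lam, alpha) - phi_scad(a - eps, lam, alpha)) / eps,
      -1.0 / (2.0 * (alpha - 2.0)))   # approximately -0.294
print((phi_mcp(a, lam, alpha) - phi_mcp(a - eps, lam, alpha)) / eps,
      -1.0 / (2.0 * (alpha - 1.0)))   # approximately -0.185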
Aside from the justification as a solution of the penalized least squares problem (<ref>),
let us consider
ϕ_Q(w)=
w for 0≤ w<{2a+1-√(4a+1)}/2,
(a-w)^2 for {2a+1-√(4a+1)}/2≤ w ≤ a,
0 for w>a.
We see that ϕ_Q(w) satisfies
both <ref> and <ref> and
lim_w→ a(d/dw)ϕ_Q(w)=0
and lim_w↗ a (a-w)(d/dw) ϕ_Q(w)/ϕ_Q(w)=-2.
When ϕ(w) of debiased shrinkage estimator
approaches 0 from above as w→ a,
it seems that both {(<ref>) and (<ref>)}
and (<ref>) are typical behaviors characterized by ϕ'(w).
§ MAIN RESULT
In this section, we investigate the minimaxity of the shrinkage debiased estimators
with <ref> and <ref>.
Recall, as in (<ref>),
the risk difference between θ̂ and the minimax estimator X
is
E[‖θ̂_ϕ-θ‖^2]-p
=E[r_ϕ(‖X‖^2)I_[0,a](‖X‖^2)],
where r_ϕ(w) is given by (<ref>).
Under the assumptions on ϕ(w), <ref> and <ref>,
r_ϕ(w) given by (<ref>) is bounded, that is, there exists an M
such that
|r_ϕ(w)|<M on [0,a].
For ϕ(w) with lim_w↗ aϕ(w)=0 as wells as
ϕ(w)>0 for w<a, we consider two cases
as a generalization of {(<ref>) and (<ref>)}
and (<ref>):
* lim sup_w↗ aϕ'(w)<0.
* lim_w↗ aϕ'(w)=0 and there exist 0<ϵ<1 and γ>1 such that
-γ < (a-w)ϕ'(w)/ϕ(w)< -1/γ,
for all w∈(ϵ a,a).
Under <ref>, there exist δ_1>0 and 0<δ_2<1 such that
ϕ'(w) < - δ_1 and ϕ(w)/w{ϕ(w)-2(p-2)} > - δ_1,
for all w∈(δ_2 a, a).
Then, by (<ref>) and (<ref>), we have
r_ϕ(w) =ϕ(w)/w{ϕ(w)-2(p-2)}-4ϕ'(w)
≥ 3δ_1>0,
for all w∈(δ_2 a, a).
Hence, by Theorem <ref> with (<ref>), (<ref>) and (<ref>),
we have
lim inf_ν→∞ν^(p+1)/4(e^ν/2/e^√(aν)){E[‖θ̂_ϕ-θ‖^2]-p}
≥ 3δ_1 c(a,0,p) >0,
which implies that the debiased shrinkage estimator is not minimax under <ref>.
Under <ref>, the inequality
-γ < (a-w)ϕ'(w)/ϕ(w)
for w∈(ϵ a,a) implies
∫_ϵ a^w ϕ'(t)/ϕ(t) t>∫_ϵ a^w-γ/a-t t
which is equivalent to
ϕ(w)>ϕ_*(a-w)^γ, where ϕ_*=ϕ(ϵ a)/(a-ϵ a)^γ,
for all w∈(ϵ a, a).
Further let
ϵ'=1/1+1/{(p-2)γ}.
Then, for w∈(ϵ'a,a),
we have
-2(p-2)(a-w)/w≥ - 2/γ.
Hence, for w∈(max(ϵ,ϵ')a,a), we have
r_ϕ(w)
={ϕ(w)}^2/w-2(p-2)ϕ(w)/w-4ϕ'(w)
≥ -2(p-2)ϕ(w)/w-4ϕ'(w)
= ϕ(w)/a-w{ -2(p-2)(a-w)/w-4(a-w)ϕ'(w)/ϕ(w)}
≥2/γϕ(w)/a-w,
where the second inequality follows from (<ref>) and (<ref>).
Further, by (<ref>) and (<ref>), we have
r_ϕ(w)≥2ϕ_*/γ(a-w)^γ-1
for w∈(max(ϵ,ϵ')a,a).
Hence, by Theorem <ref> with (<ref>), (<ref>) and (<ref>), we have
lim inf_ν→∞ν^(p+1)/4+(γ-1)/2(e^ν/2/e^√(aν)){E[‖θ̂_ϕ-θ‖^2]-p}
≥(2ϕ_*/γ)c(a,γ-1,p) >0,
which implies that the debiased shrinkage estimator is not minimax under <ref>.
In summary, we have the following theorem.
The debiased shrinkage estimator with <ref> and <ref> is not minimax
under either <ref> or <ref>.
Yet another application of Theorem <ref> is also related to Stein estimation,
the gain of the positive-part estimator θ̂_JS^+ given by (<ref>)
over
the naive James-Stein estimator θ̂_JS given by (<ref>).
For these estimators, the corresponding ϕ(w) are given by
ϕ_JS^+(w)=min(w,p-2), ϕ_JS(w)=p-2.
By the general expression of the risk,
(<ref>) and (<ref>) with (<ref>),
we have
R(θ,θ̂_JS)-R(θ,θ̂_JS^+)
=E[{-(p-2)^2/‖X‖^2 + 2p - ‖X‖^2}I_[0,p-2](‖X‖^2)].
Let
f_k(v) =v^k/2-1e^-v/2/Γ(k/2)2^k/2,
f_k(v;ν)=∑_i=0^∞(ν/2)^iexp(-ν/2)/i!f_k+2i(v),
F_k(v) =∫_0^v f_k(w) w, F_k(v;ν)=∫_0^v f_k(w;ν) w.
<cit.>, in Theorem 15.7,
expressed the risk difference (<ref>) through
F_k(v) and F_k(v;ν),
the distribution functions of the central chi-square with and non-central chi-square, as
R(θ,θ̂_JS)-R(θ,θ̂_JS^+)
=2pF_p(p-2;ν)-pF_p+2(p-2;ν)-ν F_p+4(p-2;ν)
-(p-2)^2∑_i=0^∞(ν/2)^iexp(-ν/2)/i!F_p+2i-2(p-2)/p+2i-2.
<cit.> expressed the risk difference (<ref>) through
the Dawson integral given by
D(λ)=e^-λ^2∫_0^λ e^t^2 t.
The results by <cit.> and <cit.>,
do not seem to directly
provide the exact asymptotic order of the major term of (<ref>)
with the exact coefficient.
Using Theorem <ref>, we can get it as follows.
Since
. {-(p-2)^2/w + 2p - w}|_w=p-2=4,
and
w^(p-1)/2|-(p-2)^2/w + 2p - w|
=w^(p-3)/2|-(p-2)^2 + 2pw - w^2|
is bounded for w∈(0,p-2) and for p≥ 3,
Theorem <ref> gives
lim_ν→∞ν^(p+1)/4(e^ν/2/e^√((p-2)ν)){R(θ,θ̂_JS)-R(θ,θ̂_JS^+)}
=4c(p-2,0,p)=4(p-2)^(p-1)/4/{√(2π)exp(p/2-1)}.
15
[Baranchik1964]Baranchik-1964
[author]
Baranchik, A. J.A. J.
(1964).
Multiple regression and estimation of the mean of a multivariate normal
distribution
Technical Report No. 51,
Department of Statistics, Stanford University.
[Baranchik1970]Baranchik-1970
[author]
Baranchik, A. J.A. J.
(1970).
A family of minimax estimators of the mean of a multivariate normal
distribution.
Ann. Math. Statist.
41
642–645.
0253461
[Brown1971]Brown-1971
[author]
Brown, L. D.L. D.
(1971).
Admissible estimators, recurrent diffusions, and insoluble boundary
value problems.
Ann. Math. Statist.
42
855–903.
0286209
[Efron and Morris1971]Efron-Morris-1971
[author]
Efron, BradleyB. Morris, CarlC.
(1971).
Limiting the risk of Bayes and empirical Bayes estimators. I.
The Bayes case.
J. Amer. Statist. Assoc.
66
807–815.
[Efron and
Morris1972a]Efron-Morris-1972-jasa
[author]
Efron, BradleyB. Morris, CarlC.
(1972a).
Limiting the risk of Bayes and empirical Bayes estimators. II.
The empirical Bayes case.
J. Amer. Statist. Assoc.
67
130–139.
[Efron and
Morris1972b]Efron-Morris-1972-biometrika
[author]
Efron, BradleyB. Morris, CarlC.
(1972b).
Empirical Bayes on vector observations: an extension of Stein's
method.
Biometrika
59
335–347.
[Efron and
Morris1973]Efron-Morris-1973-jasa
[author]
Efron, BradleyB. Morris, CarlC.
(1973).
Stein's estimation rule and its competitors—an empirical Bayes
approach.
J. Amer. Statist. Assoc.
68
117–130.
[Fan and Li2001]Fan-Li-2001
[author]
Fan, JianqingJ. Li, RunzeR.
(2001).
Variable selection via nonconcave penalized likelihood and its oracle
properties.
J. Amer. Statist. Assoc.
96
1348–1360.
1946581
[Hansen2022]Hansen-2022b
[author]
Hansen, Bruce E.B. E.
(2022).
Probability and Statistics for Economists.
Princeton Univ Press, Princeton, NJ.
[James and Stein1961]James-Stein-1961
[author]
James, W.W. Stein, CharlesC.
(1961).
Estimation with quadratic loss.
In Proc. 4th Berkeley Sympos. Math. Statist. and Prob.,
Vol. I
361–379.
Univ. California Press, Berkeley, Calif.
0133191
[Robert1988]Robert-1988
[author]
Robert, ChristianC.
(1988).
An explicit formula for the risk of the positive-part James-Stein
estimator.
Canad. J. Statist.
16
161–168.
963730
[Stein1956]Stein-1956
[author]
Stein, CharlesC.
(1956).
Inadmissibility of the usual estimator for the mean of a multivariate
normal distribution.
In Proceedings of the Third Berkeley Symposium on
Mathematical Statistics and Probability, 1954–1955, vol. I
197–206.
University of California Press, Berkeley and Los
Angeles.
0084922
[Stein1974]Stein-1974
[author]
Stein, CharlesC.
(1974).
Estimation of the mean of a multivariate normal distribution.
In Proceedings of the Prague Symposium on Asymptotic
Statistics (Charles Univ., Prague, 1973), Vol. II
345–381.
Charles Univ., Prague.
0381062
[Strawderman and
Wells2012]Strawderman-Wells-2012
[author]
Strawderman, Robert L.R. L. Wells, Martin T.M. T.
(2012).
On hierarchical prior specifications and penalized likelihood.
In Contemporary developments in Bayesian analysis and statistical
decision theory: a Festschrift for William E. Strawderman.
Inst. Math. Stat. (IMS) Collect.
8
154–180.
Inst. Math. Statist., Beachwood, OH.
3202509
[Zhang2010]Zhang-2010
[author]
Zhang, Cun-HuiC.-H.
(2010).
Nearly unbiased variable selection under minimax concave penalty.
Ann. Statist.
38
894–942.
10.1214/09-AOS729
2604701
|
http://arxiv.org/abs/2306.03058v2
|
20230605172933
|
Shoal: Improving DAG-BFT Latency And Robustness
|
[
"Alexander Spiegelman",
"Balaji Arun",
"Rati Gelashvili",
"Zekun Li"
] |
cs.DC
|
[
"cs.DC"
] |
The Narwhal system is a state-of-the-art Byzantine fault-tolerant scalable architecture that involves constructing a directed acyclic graph (DAG) of messages among a set of validators in a Blockchain network.
Bullshark is a zero-overhead consensus protocol on top of the Narwhal's DAG that can order over 100k transactions per second.
Unfortunately, the high throughput of Bullshark comes with a latency price due to the DAG construction, increasing the latency compared to the state-of-the-art leader-based BFT consensus protocols.
We introduce Shoal, a protocol-agnostic framework for enhancing Narwhal-based consensus.
By incorporating leader reputation and pipelining support for the first time, Shoal significantly reduces latency.
Moreover, the combination of properties of the DAG construction and the leader reputation mechanism enables the elimination of timeouts in all but extremely uncommon scenarios in practice, a property we name “prevalent responsiveness" (it strictly subsumes the established and often desired “optimistic responsiveness" property for BFT protocols).
We integrated Shoal instantiated with Bullshark, the fastest existing Narwhal-based consensus protocol, in an open-source Blockchain project and provide experimental evaluations demonstrating up to 40% latency reduction in failure-free executions, and up to 80% reduction in executions with failures, against the vanilla Bullshark implementation.
Shoal: Improving DAG-BFT Latency And Robustness
================================================
§ INTRODUCTION
Byzantine fault tolerant (BFT) systems, including consensus protocols <cit.> and state machine replication <cit.>, have been a topic of research for over four decades as a means of constructing reliable distributed systems.
Recently, the advent of Blockchains has underscored the significance of high performance.
While Bitcoin handles approximately 10 transactions per second (TPS), the proof-of-stake committee-based blockchains <cit.> are now engaged in a race to deliver a scalable BFT system with the utmost throughput and minimal latency.
Historically, the prevailing belief has been that reducing communication complexity was the key to unlocking high performance, leading to the pursuit of protocols with linear communication. However, this did not result in drastic enough improvements in the throughput, falling significantly short of the current blockchain network targets. For example, the state-of-the-art Hotstuff <cit.> protocol in this line of work only achieves a throughput of 3500 TPS <cit.>.
A recent breakthrough, however, stemmed from the realization that data dissemination is the primary bottleneck for leader-based protocols, and it can benefit from parallelization <cit.>. The Narwhal system <cit.> separated data dissemination from the core consensus logic and proposed an architecture where all validators simultaneously disseminate data, while the consensus component orders a smaller amount of metadata. A notable advantage of this architecture is that not only it delivers impressive throughput on a single machine, but also naturally supports scaling out each blockchain validator by adding more machines. The Narwhal paper <cit.> evaluated the system in a geo-replicated environment with 50 validators and reported a throughput of 160,000 TPS with one machine per validator, which further increased to 600,000 TPS with 10 machines per validator.
These numbers are more in line with the ambitions of modern blockchain systems. Consequently, Narwhal has garnered significant traction within the community, resulting in its deployment in Sui <cit.> and ongoing development in Aptos <cit.> and Celo <cit.>.
Developing a production-ready reliable distributed system is challenging, and integrating intricate consensus protocols only adds to the difficulty.
Narwhal addresses this issue by abstracting away networking from the consensus protocol. It constructs a non-equivocating round-based directed acyclic graph (DAG), a concept initially introduced by Aleph <cit.>. In this design, each validator contributes one vertex per round, and each vertex links to n-f vertices in the preceding round.
Each vertex is disseminated via an efficient reliable broadcast implementation, ensuring that malicious validators cannot distribute different vertices to different validators within the same round.
With networking abstraction separated from the details of consensus, the DAG can be constructed without contending with complex mechanisms like view-change or view-synchronization.
During periods of network asynchrony, each validator may observe a slightly different portion of the DAG at any given time. However, the structure facilitates a simpler ordering mechanism compared to monolithic BFT protocols. In DAG-based consensus protocols, vertices represent proposals, edges represent votes, and the concept of quorum intersection guarantees that validators can consistently order all DAG vertices. This provides efficient consensus because ordering is done via local computation only, without any additional communication cost.
Narwhal-based consensus protocols
As discussed, the idea shared by Narwhal-based consensus protocols is to interpret the DAG structure as the consensus logic <cit.>, but they differ in the networking assumptions and the number of rounds required for vertex ordering.
However, all three protocols share a common structure.
Prior to the protocol initiation, there is an a-priori mapping from specific rounds to leaders shared among all validators.
In the asynchronous protocols (DAG-Rider and Tusk), this mapping to the sequence of leaders is hidden behind threshold cryptography and revealed throughout the protocol.
We use the term anchor to refer to the vertex associated with the round leader in each relevant round.
The DAG local ordering process by each validator is divided into two phases. First, each validator determines which anchors to order (the rest are skipped). Then, the validators sequentially traverse the ordered anchors, deterministically ordering all DAG vertices contained within the causal histories of the respective anchors. The primary considerations that affect the protocol latency are as follows
* Bad leaders. When a validator is malicious or not fast enough, its vertex may not be included in the DAG.
In the case of leaders, the absence of anchors affects the ordering latency of all vertices in previous rounds that are not already ordered.
These vertices can only be ordered as a part of a causal history of a future anchor, directly impacting their latency.
* Sparse anchors. In Narwhal-based consensus protocols, not every round includes an anchor. Consequently, vertices located farther from the next anchor must wait for additional rounds before they can be ordered.
framework
This paper presents Shoal: a framework addressing the aforementioned challenges by incorporating leader reputation and pipelining mechanisms into all Narwhal-based consensus protocols.
So far, all available open-source implementations of Narwhal and Bullshark, including Meta [https://github.com/facebookresearch/narwhal/blob/main/consensus/src/lib.rs], and the production deployment on Sui [https://github.com/MystenLabs/sui/blob/main/narwhal/consensus/src/bullshark] lack these features, while our evaluations demonstrate they can provide significant performance improvements.
Leader reputation is an often overlooked concept in theoretical research, yet it holds crucial importance for practical performance.
In practice, Byzantine failures are rare due to robust protection and economic incentives for validators to adhere to the protocol. (Moreover, Narwhal-based DAG constructions, which provide non-equivocation, significantly reduce the range of potential Byzantine behavior).
Thus, the most common failure scenarios in Blockchain (esp. in Narwhal-based) systems involve validators who struggle to keep up, which can occur due to temporary crashes, slower hardware, or geographical distance. If unresponsive validators repeatedly become leaders, progress is inevitably impeded and degrades system performance. The leader reputation schemes select leaders based on the history of their recent activity, as introduced in Diem <cit.> and later formalized in <cit.>.
In the context of Narwhal-based consensus, pipelining means having an anchor in every round, which would result in improved latency for non-anchor vertices.
The main challenge
While the ability to order the DAG locally, without extra communication contributes to the scalability of Narwhal-based consensus, it poses a significant challenge to supporting leader reputation and pipelining.
The leader reputation problem is simpler to solve for monolithic BFT consensus protocols. While the validators may disagree on the history that determines the next leader's identity, the worst that can happen is a temporary loss of liveness until view synchronization, i.e. the quorum of validators can eventually recover by agreeing on a fall-back leader.
This exact method was utilized in <cit.>, electing the fall-back leaders by a simple round-robin.
In contrast, when all communication is done upfront for building the DAG, the safety of a consensus protocol relies on a key property of the local computation that all validators will decide to order the same set of anchors.
This must hold despite the local views of the DAG possibly differing among the validators across multiple rounds.
Hence, selecting the round leaders dynamically based on reputation (as opposed to the a-priori mapping) seems impossible due to a circular dependency: we need to agree on mapping to solve consensus, but we need consensus to agree on a new mapping.
For pipelining, even if all validators agree on the mapping, they also must agree on whether to order or skip each anchor.
Our attempts to solve the problem by delving into the inner workings of the protocol and exploring complex quorum intersection ordering rules have not been fruitful.
Intuitively, this is because consensus requires a voting round after each anchor proposal and the next anchor should link to the decisions (votes) on the previous one.
Our solution.
In Shoal, we lean into the power of performing computations on the DAG, in particular the ability to preserve and re-interpret information from previous rounds. For leader reputation, this allows bootstrapping the seemingly circular dependency on consensus, while for pipelining, it allows combining multiple instances of the protocol in a suitable manner.
In fact, Shoal runs multiple instances of the protocol one after the other, where the trick is to agree on the switching point based on the following observation:
For any Narwhal-based consensus protocol, since all validators agree on which anchors to order vs skip, they in particular agree on the first ordered anchor.
With this observation in mind,
each validator can start locally interpreting its view of the DAG by running an instance of its favorite protocol until it determines the first ordered anchor.
Since validators agree on this anchor, they can all deterministically start a new protocol instance in the following round.
Note that this too, happens locally, from a validator's perspective, as a part of re-interpreting the DAG.
As a result, Shoal ensures the following
* Leader reputation: validators select new anchors for future rounds based on the information available in the causal history of the ordered anchors.
* Pipelining: allocate an anchor in the first round of the new instance. That way, if the first anchor in every instance is ordered, we get an anchor in every round, providing the pipelining effect.
Our system and prevalent responsiveness
We implemented Shoal in the open-source codebase of one of the live Blockchain networks and instantiated it with the partially synchronous version of Bullshark (a shoal of bull sharks).
In this setting, we also discovered a way to eliminate timeouts in all except extremely rare scenarios, a property we refer to as prevalent responsiveness.
The design with prevalent responsiveness demonstrates further performance improvements in our evaluations.
Added motivation to avoid timeouts in as many situations as possible comes from a purely practical point of view, as (1) when timeouts are common, the duration affects the system performance, but in a way that is non-trivial to configure in an optimal way as it is highly environmentally (network) dependent; and (2) timeout handling is known to add significant complexity to the implementation logic for managing potential state space of validators.
Monolithic leader-based BFT protocols use timeouts to trigger protocol progress every time a leader is faulty or slow, while optimistic responsiveness property, popularized by the HotStuff <cit.> protocol, effectively eliminates timeout implications in ideal scenarios when the network is synchronous and there are no failures.
However, when failures do occur, all validators must still wait until the timeout expires before transitioning to the next leader.
Utilizing the inherent properties of the DAG construction and the leader reputation mechanism, we ensure that Shoal makes progress at network speed under a much larger set of scenarios than optimistically responsive protocols would, which makes Shoal with partially synchronous Bullshark prevalently responsive. In Shoal, validators do wait for timeouts when a few leaders crash and the corresponding anchors are not ordered.
While the FLP <cit.> impossibility result dictates that there has to be a scenario that requires a timeout, Shoal's design aligns this FLP scenario to be extremely improbable in practice (multiple, e.g., 10, consecutive skipped anchors). Conceptually, this is similar to how randomized protocols align FLP scenarios to have 0 probability in solving asynchronous consensus with probability 1 <cit.>.
All available Bullshark implementations use timeouts to ensure honest validators wait for slow anchors even if 2f+1 other vertices were already delivered.
By eliminating timeouts, Shoal immediately reduces latency when a leader is faulty, as the corresponding anchors would never be delivered and it is best to advance to the next round as fast as possible.
If the leader is not crashed and just slower, validators may skip anchors that they could order if they waited a little bit longer.
This is, however, where the leader reputation mechanism of Shoal shines, filtering out slow validators that constantly delay new rounds and allowing the DAG to proceed at network speed while ordering most anchors.
Our experimental evaluation demonstrates up to 40% reduction in latency against vanilla Bullshark protocol implementation when there are no failures in the system, and up to 80% reduction in latency when there are failures. We provide experiments specifically designed to give insights into the impact of the improvements separately, i.e. pipelining, leader reputation and eliminating the timeouts (prevalent responsiveness).
In summary, the paper focuses on improving latency and robustness in DAG-Based protocols.
It provides Shoal, a framework to enhance any Narwhal-based consensus protocol with (1) a leader reputation mechanism that prevents slow, isolated, or crashed validators from becoming leaders, (2) pipelining support that ensures every round on the DAG has an anchor, and (3) elimination of timeouts in many cases, further reducing latency.
The remaining sections of the paper are organized as follows:
Section <ref> provides background information on DAG-BFT and highlights the main property utilized in this paper.
Section <ref> introduces our pipelining approach, while Section <ref> presents the leader reputation solution in .
In Section <ref>, we prove correctness of the proposed framework.
Section <ref> describes the implementation details and discusses timeouts. Section <ref> presents the results of our evaluation.
Section <ref> discusses related work, and finally, Section <ref> concludes the paper.
§ DAG BFT
We start by providing the necessary background on Narwhal-based BFT consensus (Section <ref>) and define a common property (Section <ref>) satisfied by such consensus protocols. We rely on this property while designing to enhance a given baseline protocol with pipelining and leader reputation, thereby reducing latency.
§.§ Background
The concept of DAG-based BFT consensus, initially introduced by HashGraph <cit.>, aims to decouple the network communication layer from the consensus logic. In this approach, each message consists of a collection of transactions and references to previous messages. These messages collectively form an ever-growing DAG, with messaging serving as vertices and references between messages serving as edges.
In Narwhal, the DAG is round-based, similar to Aleph <cit.>. In this approach, each vertex within the DAG is associated with a round number.
In order to progress to round r, a validator must first obtain n-f vertices (from distinct validators) belonging to round r-1.
Every validator can broadcast one vertex per round, with each vertex referencing a minimum of n-f vertices from the previous round.
The causal history of a vertex 𝗏 refers to the sub-graph that starts from 𝗏. Figure <ref> illustrates a validator's local view of a round-based DAG.
To disseminate messages, Narwhal uses an efficient reliable broadcast implementation that guarantees:
* Validity: if an honest validator has a vertex 𝗏 in its local view of the DAG, then it also has all the causal history of 𝗏.
* Eventual delivery: if an honest validator has a vertex in round 𝗋 by validator 𝗉 in its local view of the DAG, then eventually all honest validators have a vertex in round 𝗋 by validator 𝗉 in their local views of the DAG.
* Non-equivocation: if two honest validators have a vertex in round 𝗋 by validator 𝗉 in their local views of the DAG, then the vertices are identical.
Inductively applying Validity and Non-equivocation, we get:
* Completeness: if two honest validators have a vertex 𝗏 in round 𝗋 by validator 𝗉 in their local views of the DAG, then 𝗏's causal histories are identical in both validators' local view of the DAG.
In simple words, Narwhal construction guarantees that
* All validators eventually see the same DAG; and
* Any two validators that have the same vertex v locally also agree on the whole causal history of v (the contents of vertices and edges between them).
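To make the structure concrete, here is a minimal sketch of a validator's local view of such a round-based DAG (illustrative Python of ours; the class and field names are not Narwhal's actual API):

from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    round: int
    source: str          # validator that broadcast this vertex
    parents: tuple       # (round, source) references to >= n-f vertices of round-1
    payload: tuple = ()  # batch of transactions (or their digests)

class LocalDag:
    def __init__(self):
        self.by_key = {}  # (round, source) -> Vertex

    def add(self, v: Vertex):
        # Validity: the reliable broadcast delivers a vertex only after its causal history
        assert all(p in self.by_key for p in v.parents)
        self.by_key[(v.round, v.source)] = v

    def causal_history(self, v: Vertex):
        # all vertices reachable from v (including v itself) in this local view
        seen, stack = set(), [(v.round, v.source)]
        while stack:
            key = stack.pop()
            if key in seen or key not in self.by_key:
                continue
            seen.add(key)
            stack.extend(self.by_key[key].parents)
        return {self.by_key[k] for k in seen}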
DAG-Rider / Tusk / Bullshark
DAG-Rider, Tusk, and Bullshark are all algorithms to agree on the total order of all vertices in the DAG with no additional communication overhead.
Each validator independently looks at its local view of the DAG and orders the vertices without sending a single message. This is done by interpreting the structure of the DAG as a consensus protocol, where a vertex represents a proposal and an edge represents a vote.
DAG-Rider <cit.> and Tusk <cit.> are randomized protocols designed to tolerate full asynchrony, which necessitates a larger number of rounds and consequently, a higher latency. Bullshark <cit.> also provides a deterministic protocol variant with a faster ordering rule, relying on partial synchrony for liveness.
While the specific details are not required to understand this paper, next we explain the high-level structure of these protocols and define a property they all share.
§.§ Common framework
Narwhal-based consensus protocols have the following common abstract structure:
* Pre-determined anchors. Every few rounds (the number depends on the protocol) there is a round with a pre-determined leader. The vertex of the leader is called an anchor. In the partially synchronous version of Bullshark, the leaders are a-priori known. In the asynchronous protocols (DAG-Rider, Tusk, asynchronous Bullshark) the leaders are hidden and revealed during the DAG construction.
* Order the anchors. All validators independently decide which anchors to skip and which to order. The details differ among the protocols, although they all rely on quorum intersection in the DAG structure. The key aspect is that each honest validator locally decides on a list of anchors, and all lists share the same prefix.
* Order causal histories. Validators process their list of ordered anchors one by one, and for each anchor order all previously unordered vertices in their causal history by some deterministic rule. By Completeness, all validators see the same causal history for any anchor, so all validators agree on the total order.
An illustration of the ordering logic appears in Figure <ref>.
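The two phases can be summarized by the following sketch (illustrative Python pseudocode of ours, reusing the LocalDag sketch above; decide_anchor stands for the protocol-specific commit/skip rule, which is the only part that differs between DAG-Rider, Tusk and Bullshark):

def order_dag(dag, anchor_candidates, decide_anchor):
    # Phase 1: decide, per anchor round, whether the anchor is ordered or skipped.
    ordered_anchors = [a for a in anchor_candidates
                       if a is not None and decide_anchor(dag, a)]
    # Phase 2: flush each ordered anchor's not-yet-delivered causal history
    # by a deterministic rule, yielding the same total order at every validator.
    delivered, total_order = set(), []
    for anchor in ordered_anchors:
        history = sorted(dag.causal_history(anchor) - delivered,
                         key=lambda v: (v.round, v.source))
        total_order.extend(history)
        delivered.update(history)
    return total_order

By Completeness, causal_history(anchor) is identical at every honest validator, so the deterministic flush in the second phase yields the same total order everywhere.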
The key correctness argument for all the above mention consensus protocols relies on the fact that all validators agree on which anchors to order and which to skip. In particular, they will all agree on the first anchor that no validator skips. More formally, the abstract property of the Narwhal-based consensus protocols that our framework relies on is the following:
Given a Narwhal-based protocol 𝒫, if all honest validators agree on the mapping from rounds to leaders before the beginning of an instance of 𝒫, then they will agree on the first anchor each of them orders during the execution of 𝒫.
The proof follows immediately from Proposition 2 in DAG-Rider <cit.> and Corollary C. in Bullshark <cit.>.
§ SHOAL
Shoal is protocol agnostic and can be directly applied to all Narwhal-based consensus protocols, i.e., DAG-Rider, Tusk, and Bullshark.
It makes no changes to the protocols but rather combines their instances in essentially a “black-box" manner.
The entire correctness argument can be derived solely from Property <ref>.
§.§ Pipelining
A natural progression after the high throughput scalability of BFT consensus achieved by Narwhal is to reduce latency as much as possible.
To this end, Bullshark already halved DAG-rider's latency for ordering anchors from 4 rounds to 2 by adding an optimistic path under the partially synchronous network communication assumption.
Intuitively, it is hard to imagine latency lower than 2 rounds as in the interpretation of the DAG structure as a consensus protocol, one round is needed to "propose" the anchor, while another is needed for "voting".
However, only anchors can be ordered in 2 rounds.
The rest of the vertices are ordered as part of the causal history of some anchor and require a minimum latency of 3 or 4 rounds.
This is because the vertices in a "voting" round require (minimum) 3 rounds, while vertices that share a round with an anchor have to wait for at least the next anchor to be ordered, thus requiring (minimum) 4 rounds.
An illustration of the ordering latency for different vertices appears in Figure <ref>.
Ideally, to reduce the latency of ordering vertices we would like to have an anchor in every round.
This would allow for non-anchor vertices to be ordered as a part of some anchor's causal history in each and every round, making latency and throughput of the protocol less spiky.
In Bullshark, it would become possible for every non-anchor vertex to be ordered in 3 rounds (see Figure <ref>), while in DAG-Rider the latency may be reduced from 10 rounds to 7 in expectation.
Solution
Let 𝒫 be any Narwhal-based consensus protocol.
On a high level, the core technique in is to execute 𝒫 until it, as a consensus protocol, guarantees agreement on some part of the DAG for all validators.
Starting from the round following the agreed part of the DAG, all validators can switch over and start executing a new instance of 𝒫 (or a different Narwhal-based consensus protocol, if desired) from scratch.
While the instances are not executing concurrently, this scheme effectively pipelines the “proposing" and “voting" rounds. As a result, in Shoal, in the good case an anchor is ordered in every round.
The pseudocode appears in Algorithm <ref>.
In the beginning of the protocol, all validators interpret the DAG from round 0, and the function F is some pre-defined deterministic mapping from rounds to leaders.
Each validator locally runs 𝒫, using F to determine the anchors, until it orders the first anchor, denoted by A in round r.
The key is that, by the correctness of 𝒫 as stated in Property <ref>, all validators agree that A is the first ordered anchor (previous anchors are skipped by all validators).
Consequently, each validator can re-interpret the DAG from the next round (round r+1) according to a new instance of the protocol 𝒫 (or another Narwhal-based protocol) executing from scratch from round r+1.
To order the DAG, much like in the original 𝒫, the validators deterministically order A's causal history, and by the Completeness property, arrive at the same total order over the same vertices.
Note that without re-interpreting the DAG according to a new instance of 𝒫 starting from round r+1, the next anchor according to the previously executing instance of the protocol would appear in a strictly later round (e.g. r+4 for DagRider and r+2 for Bullshark).
The above process can continue for as long as needed.
An illustration appears in Figure <ref>.
Note that in Algorithm <ref>, function F is fixed and used by each instance of protocol 𝒫. In a true "black-box" implementation, the round numbers could be different from the perspective of the executing protocol instance (i.e. start from 0 for each new instance). However, F is fixed and always assigns the same anchor to any given round r in regardless of the protocol instance used for this round.
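For concreteness, the following minimal Python sketch mirrors this re-interpretation loop; the `protocol_factory`, `try_order_first_anchor`, and DAG helpers are illustrative names, not the actual implementation.

```python
def pipelined_ordering(dag, protocol_factory, leader_fn):
    """Sketch of the pipelining loop: run an instance of the underlying
    Narwhal-based protocol P on the DAG until it orders its first anchor,
    emit that anchor's (not yet ordered) causal history followed by the
    anchor, then restart a fresh instance from the next round."""
    ordered = []      # total order produced so far
    start_round = 0   # round from which the current instance interprets the DAG
    while True:
        instance = protocol_factory()  # new instance of P (or of another Narwhal-based protocol)
        anchor = instance.try_order_first_anchor(dag, start_round, leader_fn)
        if anchor is None:
            # The DAG has not advanced far enough for this instance to decide yet.
            return ordered
        ordered.extend(dag.causal_history(anchor, exclude_already_ordered=True))
        ordered.append(anchor)
        start_round = anchor.round + 1  # re-interpret the DAG from the following round
```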
Note that with Shoal, ordering an anchor vertex requires 2 rounds, while all other vertices require 3.
In Section <ref> we discuss a potential direction to reduce the latency for non-anchor vertices by treating all vertices as anchors.
Intuitively, we can use Property <ref> to instantiate a binary agreement to decide whether to commit each vertex individually.
§.§ Leader Reputation
BFT systems are designed to tolerate Byzantine failures in order to provide as strong as possible worst-case reliability guarantees.
However, actual Byzantine failures rarely occur in practice.
This is because validators are highly secured and have strong economic incentives to follow the protocol.
Slow or crashed leaders are a much more frequent occurrence which can significantly degrade the system performance.
In Narwhal-based BFT, if the leader of round r crashes, no validator will have the anchor of round r in its local view of the DAG.
Thus, the anchor will be skipped, and the vertices of the previous round cannot be ordered until some anchor in a future round is ordered.
The way to deal with missing anchors is to somehow ensure that the corresponding leaders are less likely to be elected in the future.
A natural approach to this end is to maintain a reputation mechanism, assigning each validator a score based on the history of its recent activity. A validator that has been participating in the protocol and has been responsive would be assigned a high score. Otherwise, the validator is either crashed, slow, or malicious and a low score is assigned.
The idea is then to deterministically re-compute the pre-defined mapping from rounds to leaders every time the scores are updated, biasing towards leaders with higher scores. In order for validators to agree on the new mapping, they should agree on the scores, and thus on the history used to derive the scores.
Such a mechanism was previously proposed in <cit.> and implemented in the Diem Blockchain <cit.> to enhance the performance of Jolteon <cit.>, a leader-based consensus protocol.
One important property of Jolteon is that safety is preserved even if validators disagree on the identity of the leader, while liveness is guaranteed as long as they eventually converge.
Hence, validators could re-assign the reputation scores every time a new block was committed, even though during asynchronous periods it was possible for different validators to commit the same block in different rounds.
Unfortunately, this is not the case for Narwhal-based BFT. If validators disagree on the anchor vertices, they will order the DAG differently and thus violate safety.
This makes the leader reputation problem strictly harder in Narwhal-based BFT.
Solution
constructs a protocol identical to a given Narwhal-based consensus protocol 𝒫, except that, to support leader reputation, anchors are selected according to a function F that takes into account validators' recent activity, e.g., the number of vertices they have successfully added to the DAG.
The function F should be updated as frequently as possible and aim to select validators with a better reputation as leaders more often than their counterparts with a lower reputation.
In , pipelining and leader reputation can be naturally combined as they both utilize the same core technique of re-interpreting the DAG after agreeing on the first ordered anchor.
In fact, the pseudocode for , which appears in Algorithm <ref>, differs from Algorithm <ref> only by the addition of line <ref>.
The idea is that the validators simply need to compute a new mapping, starting from round r+1, based on the causal history of ordered anchor A in round r (which they are guaranteed to agree on by Property <ref>). Then, the validators start executing a new instance of 𝒫 from round r+1 with the updated anchor selection function F.
Our solution is protocol agnostic and can be directly applied to all Narwhal-based consensus protocols, i.e., DAG-Rider, Tusk, and Bullshark. An illustration can be found in Figure <ref>.
makes no changes to the protocols but rather combines their instances, and the entire correctness argument can be derived solely from Property <ref>.
§ CORRECTNESS
To prove the correctness of (Algorithm <ref>) we assume that the underlying protocol satisfies Property <ref>, which we will use inductively.
Let P be a Narwhal-based DAG-BFT protocol that satisfies Property <ref>.
Let D be a round-based DAG, and assume a known to all function F that maps rounds to anchors.
Then all the locally ordered lists of anchors by honest validators executing with P according to F share the same prefix.
Proof is by induction on the ordered anchors.
Base: We need to show that all honest validators agree on the first anchor.
Since starts by running P until the first anchor is ordered, the base case follows immediately from Property <ref>.
Step: Assume all honest validators agree on the first k ordered anchors; we need to prove that they agree on anchor k+1.
First, we show that all honest validators agree on the new function F (Line <ref> in Algorithm <ref>).
This holds because the new function F is deterministically computed from the information in anchor k's causal history, and by the Completeness property of the DAG, all honest validators have the same causal history of anchor k in their local view.
Next, let r be the round of anchor k.
By the inductive assumption, all honest validators agree on anchor k, and hence on r.
Thus, all honest validators start the next instance of P in the same round r+1.
Now consider a DAG D' that is identical to D except it does not have the first r rounds.
By Property <ref>, all validators that run P with the new function F on D' agree on the first ordered anchor in D'.
Therefore, all validators agree on anchor k+1 in D.
Let P be a Narwhal-based DAG-BFT protocol that satisfies Property <ref>.
with P satisfies total order.
By Lemma <ref>, all validators order the same anchors. The theorem follows from the DAG Completeness property as all validators follow the same deterministic rule to order the respective causal histories of the ordered anchors.
§ IMPLEMENTATION AND PREVALENT RESPONSIVENESS
We have implemented Narwhal and the partially synchronous version of Bullshark as part of a publicly available open-source blockchain project[In order to uphold the anonymity requirement of the submission, we do not disclose the name of the blockchain project.]. This blockchain is live and the process of productionizing our implementation is underway.
The code is written in Rust, utilizing Tokio[<https://tokio.rs>] for asynchronous networking, BLS <cit.> implemented over BLS12-381 curves for signatures, RocksDB[<https://rocksdb.org>] for persistent data storage, and the Noise[<https://github.com/noiseprotocol/noise_spec>] protocol for authenticated messages.
§.§ Vanilla Bullshark
We implemented Bullshark according to <cit.>, but additionally incorporated weak links per <cit.> in our DAG construction.
Observing n-f vertices in a round is sufficient for progressing to the next round.
Therefore, without weak links, slow validators may consistently lag behind others in broadcasting their vertices and thus may consistently fail to add their vertices to the DAG.
This will incur significant latency for their client transactions.
Weak links from a vertex can reference vertices from earlier rounds in addition to the normal (strong) links to n-f vertices from the previous round.
These weak links are used when establishing the causal history of ordered anchors and thus facilitate the inclusion of transactions contributed by the slow validators into the total order.
We refer to this implementation as Vanilla Bullshark.
It is important to note that adding the support for weak links increases the average latency compared to the figures presented in <cit.>, which did not employ the weak links.
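As a rough illustration of the bookkeeping involved, the sketch below models a vertex with strong links to the previous round and optional weak links to older rounds, and computes a causal history that follows both kinds of links; the field and function names are illustrative Python stand-ins, not the project's Rust types.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Vertex:
    digest: str
    author: str                 # validator that created and reliably broadcast the vertex
    round: int                  # DAG round of the vertex
    strong_links: List[str]     # digests of >= n - f vertices from round - 1
    weak_links: List[str] = field(default_factory=list)  # digests of older, otherwise unlinked vertices

def causal_history(vertex: Vertex, dag: Dict[str, Vertex]) -> Set[str]:
    """Digests reachable through strong *and* weak links; the weak links are what
    pull a slow validator's vertices (and their transactions) into the total order."""
    seen, stack = set(), [vertex]
    while stack:
        v = stack.pop()
        for digest in v.strong_links + v.weak_links:
            if digest not in seen and digest in dag:
                seen.add(digest)
                stack.append(dag[digest])
    return seen
```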
§.§ Eliminating Timeouts
The short paper for the stand-alone partially synchronous version of Bullshark <cit.> assumes the DAG is given and focuses on the ordering of its vertices. On the other hand, full Bullshark is an asynchronous protocol with a fast path under partial synchrony. The full Bullshark paper <cit.> describes how to build the DAG and in particular, the incorporation of timeouts to support the fast path.
Validators in Bullshark must observe n-f vertices in a round to advance to the next round.
Even rounds have anchors, while vertices in odd rounds determine the “voting” pattern.
Full Bullshark uses the following timeouts for every validator to support the fast path:
* Even-round: wait until the anchor of the round is delivered (or the timeout expires).
* Odd-round: wait until 2f+1 vertices that link to the anchor in the previous round are delivered (or the timeout expires).
The rationale for the above logic is to help order the anchor within 2 rounds.
However, part of the contribution of this paper is to eliminate these timeouts in such a way that actually significantly improves latency, according to our evaluation. Having fewer cases where timeouts can occur also inherently simplifies the potential state space and thus, the implementation of the protocol.
In Section <ref>, we refer to even-rounds as anchor rounds and to odd-rounds as vote rounds.
Vanilla Bullshark w/o Vote Timeout
In full Bullshark, 2f+1 votes are required to order anchors.
Without timeouts in odd rounds, a Byzantine adversary can prevent the fast path from making progress even during synchrony.
As long as the Byzantine validators deliberately do not link to the anchor, if even one of their vertices is delivered among the first 2f+1 vertices an honest validator receives in an odd round, that validator will not be able to order the anchor.
However, we discovered that we can completely eliminate timeouts in odd rounds in the partially synchronous variant of Bullshark.
The anchor ordering rule in this case is f+1 votes <cit.>.
As a result, even if f out of the first 2f+1 vertices delivered to a validator in a round are from Byzantine validators (and do not link to the anchor), the remaining f+1 vertices will link to the anchor due to the even-round timeout and be sufficient to order it.
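A small sketch of the resulting check, assuming each voting-round vertex exposes the set of previous-round vertices it links to:

```python
def anchor_is_ordered(anchor_digest, voting_round_vertices, f):
    """Partially synchronous Bullshark ordering rule (no vote-round timeout):
    the anchor of round r is ordered once f + 1 vertices of round r + 1 link to it,
    so the f Byzantine vertices possibly among the first 2f + 1 delivered cannot block it."""
    votes = sum(1 for v in voting_round_vertices if anchor_digest in v.strong_links)
    return votes >= f + 1
```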
Baseline Bullshark
The FLP impossibility result <cit.> dictates that any deterministic protocol providing liveness under partial synchrony must use timeouts.
In Bullshark, without timeouts in the even rounds, an honest leader that is even slightly slower than the fastest 2f+1 validators will struggle to get its anchor linked by other vertices.
As a result, the anchor is unlikely to be ordered.
The timeout, therefore, ensures that all honest validators link to anchors during periods of synchrony (as long as the leader has not crashed and actually broadcasts the anchor vertex).
Even though timeouts are unavoidable in the worst case, we observe that the DAG construction combined with the leader reputation mechanism allows avoiding them in the vast majority of cases in practice.
This is in contrast to leader-based monolithic consensus protocols, where timeouts are the only tool to bypass the rounds with bad leaders.
Without timeouts, a monolithic protocol could stall forever as there is no other mechanism to stop waiting for a crashed leader.
It is also hard to set the timeouts appropriately: conservative timeouts lead to excessive waiting for crashed leaders, while aggressive timeouts lead to bypassing slower validators (and hence unnecessarily failed rounds).
In contrast, the DAG construction provides a “clock" that estimates the network speed.
Even without timeouts, the rounds keep advancing as long as 2f+1 honest validators continue to add their vertices to the DAG.
As a result, the DAG can evolve despite some leaders being faulty.
Eventually, when a non-faulty leader is fast enough to broadcast the anchor, the ordering will also make progress.
Recall that to be ordered, in partially synchronous Bullshark, an anchor needs f+1 votes (links) out of the 3f+1 vertices.
Therefore, as our evaluation demonstrates, in the failure-free case, most of the anchors are ordered in the next round.
The benefits are even more pronounced when there are failures.
This is because a crashed validator causes a timeout to expire, stalling the protocol for the entire timeout duration.
Without a timer, however, the DAG will advance rounds at network speed and the Bullshark protocol is able to immediately move to the next anchor.
Timeouts as a fallback
By the FLP impossibility result <cit.>, there exists an adversarial schedule of events that can prevent all anchors from getting enough votes to be ordered.
This scenario is extremely unlikely to occur in practice, but to be on the safe side, the protocol can deal with it by falling back to using timeouts after a certain amount of consecutive skipped anchors.
§.§ of Bullsharks
A realistic case in which timeouts can help the performance of a Narwhal-based consensus protocol is when the leader is slower than other validators.
Then, as discussed earlier, waiting for an anchor to be delivered even after 2f+1 other vertices have arrived can allow the anchor to be committed in the next round.
While we eliminated timeouts from partially synchronous Bullshark, note that, due to the leader reputation mechanism, instantiated with Bullshark does better than repeatedly waiting for the slow leaders.
Instead, the leader reputation mechanism excludes (or at least significantly reduces the chances of) slow validators from being selected as leaders.
This way, the system takes advantage of the fast validators to operate at network speed.
Prevalent Responsiveness
provides network speed responsiveness under all realistic failure and network scenarios, a property we name Prevalent Responsiveness.
Specifically, compared to optimistic responsiveness, continues to operate at network speed even during asynchronous periods or if leaders fail for a configurable number of consecutive rounds.
We implemented leader reputation and pipelining on top of the Baseline Bullshark and compared it to the baseline (no timeouts) implementation.
Leader reputation logic
As explained in Section <ref>, ensures all validators agree on the information used to evaluate the recent activity and to bias the leader selection process accordingly towards healthier validators.
Any deterministic rule to determine the mapping from rounds to leaders (i.e. the logic in pseudocode Line <ref> in Algorithm <ref>) based on this shared and agreed upon information would satisfy the correctness requirements.
Next, we discuss the specific logic used in our implementation.
At any time each validator is assigned either a high or a low score, and all validators start with a high score.
After ordering an anchor v, each validator examines v's causal history H.
Every skipped anchor in H is (re-)assigned a low score, and every ordered anchor in H is (re-)assigned a high score.
Then, the new sequence of anchors is pseudo-randomly chosen based on the scores, with a validator with a high score more likely to be a leader in any given round.
Note that while the validators use the same pseudo-randomness (so that they agree on the anchors), the computation is performed locally without extra communication.
Assigning higher scores to validators whose anchors get ordered ensures that future anchors correspond to faster validators, thus increasing their probability to be ordered.
However, we ensure that the low score is non-zero, and thus underperforming validators also get a chance to be leaders. This crucially gives a temporarily crashed or underperforming validator a chance to recover its reputation.
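The sketch below captures this scoring rule: a shared seed together with the agreed-upon scores yields the same biased pseudo-random anchor schedule at every validator, with no extra communication. The concrete score values (10 and 1) are illustrative assumptions, not values from the implementation.

```python
import random

HIGH_SCORE, LOW_SCORE = 10, 1   # illustrative weights; the low score is non-zero on purpose

def update_scores(scores, ordered_anchor_authors, skipped_anchor_authors):
    """Re-assign scores based on the causal history of the last ordered anchor."""
    for validator in ordered_anchor_authors:
        scores[validator] = HIGH_SCORE
    for validator in skipped_anchor_authors:
        scores[validator] = LOW_SCORE
    return scores

def leader_schedule(scores, start_round, num_rounds, shared_seed):
    """Deterministic, score-biased anchor selection: with the same seed and the
    same (agreed-upon) scores, every validator computes the same schedule locally."""
    rng = random.Random(shared_seed * 1_000_003 + start_round)
    validators = sorted(scores)                     # fixed iteration order
    weights = [scores[v] for v in validators]
    return {start_round + i: rng.choices(validators, weights=weights, k=1)[0]
            for i in range(num_rounds)}
```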
§ EVALUATION
We evaluated the performance of the aforementioned variants of Bullshark and on a geo-replicated environment in Google Cloud.
In order to show the improvements from pipelining and leader reputation independently, we also evaluate PL, which is a instantiation with only pipelining enabled, and LR, which is a instantiation with only Leader Reputation enabled.
With our evaluation, we aim to show that
(i) maintains the same throughput guarantees as Bullshark.
(ii) can provide significantly lower latency than Bullshark and its variants.
(iii) is more robust to failures and can improve latency with the help of Leader Reputation.
For completeness, we also compare against Jolteon <cit.>, which is the current consensus protocol of the production system we use.
Jolteon combines the linear fast path of Tendermint/Hotstuff with a PBFT style view-change, and as a result, reduces Hotstuff latency by 33%.
The implementation extends the original Jolteon protocol with a leader reputation mechanism, which prioritizes well-behaved leaders from previous rounds for future rounds.
In addition, to mitigate the leader bottleneck and support high throughput, the implementation uses the Narwhal technique to decouple data dissemination via a pre-step component (called Quorum Store <cit.>).
We evaluate prevalent responsiveness by presenting experiments that compare variants of Bullshark w.o. timeout in different rounds versus as discussed in Section <ref>.
Experimental Setup.
Our experimental setup consists of type virtual machines spread equally across three different Google Cloud regions: us-west1, europe-west4, asia-east1.
Each virtual machine has 32 vCPUs, 128GB of memory, and can provide up to 10Gbps of network bandwidth.
The round-trip latencies are:
118ms between us-west1 and asia-east1,
251ms between europe-west4 and asia-east1,
and 133ms between us-west1 and europe-west4.
The experiments involve three different values of N (the number of validators): 10, 20, and 50, tolerating up to 3, 6, and 16 failures, respectively.
We only measure the consensus performance to avoid introducing noise from other parts of the production system, such as execution and storage.
The transactions are approximately 270B in size. We set a maximum batch size of 5000 transactions.
In our experiments, we measure latency as the time elapsed from when a vertex is created from a batch of client transactions to when it is ordered by a validator. The timeouts for moving to the next round, when applicable, are set to 1s, which is less than the 1.5s timeout used by the production Blockchain system we use.
§.§ Baseline Performance
First, we evaluate the performance of the Bullshark variants, namely Vanilla Bullshark, Vanilla Bullshark w/ Anchor Timeouts, and Baseline Bullshark, to align on a baseline performance to evaluate in the rest of the experiments. The results are in Figures <ref> and <ref>.
Figure <ref> shows the throughput and average latencies of the three Bullshark variants as the system size increases. The presence of timeouts in Vanilla Bullshark forces it to build the DAG slowly, which, combined with the fact that fewer validators contribute vertices to the DAG when N=10, results in lower throughput than the other variants, which have fewer or no timeouts. The latencies for Vanilla Bullshark are up to 88% higher due to the timeouts. Interestingly, the latencies are similar for Baseline Bullshark and Vanilla Bullshark w/o Vote Timeout in the normal case because there is a trade-off between building the DAG at network speed while skipping an anchor and waiting slightly longer for the anchor to be linked by the votes.
We also evaluated the vanilla variants and the baseline for N=50 and with varying the number of failures, in Figure <ref>. We observe that Baseline Bullshark provides lower latency than other variants by virtue of being able to build the DAG at network speed skipping failed anchors and ordering using the alive ones. Therefore, in the rest of the section, we use Baseline Bullshark as the baseline to evaluate .
§.§ Performance of under fault-free case
We now evaluate the variants against the baseline under the normal case where there are no failures. The results are in Figure <ref>. As expected, the throughput of the variants is similar as the number of validators increases. It can be observed that each variant of decreases the latency, leading up to the full protocol. In summary, we observe that the 's average latency decreases by up to 20% compared to Baseline Bullshark.
On the other hand, Jolteon <cit.>, despite its use of Narwhal's data dissemination decoupling, is only able to achieve a peak throughput of less than 60k, about 40% lower than .
This is because under high load the leaders become the bottleneck again, as they are not able to handle the required network bandwidth and, as a result, are unable to drive progress before timeouts expire.
Furthermore, in terms of latency, Jolteon is ≈50% better than Vanilla Bullshark, but only ≈20% better than .
Note that the latencies presented do not include the pre-step Quorum Store's latencies, because all the compared protocols include this optimization. However, in the case of , this latency can be avoided by merging Quorum Store into the DAG construction, as done in Narwhal, which will further close the latency gap from Jolteon.
In Figures <ref> and <ref>, we distinguish the latencies of transactions in the vote-round vertices from that in anchor-round vertices, in order to show the effect of the pipelining approach.
The vote and anchor round latencies for PL, as well as , are similar, which helps provide predictable and smooth latency for transactions in real production systems. In contrast, the vote and anchor round latencies for Baseline Bullshark and LR differ by 5-20% depending on the number of failures.
§.§ Performance of under faults
Figure <ref> shows the behavior of the baseline and variants under faults. For this experiment, N=50 and the failures are increased from 4 to 16 (maximum tolerated). This is the case where the Leader Reputation mechanism helps to improve the latency significantly by reducing the likelihood of failed validators being selected as anchors.
Notice that without Leader Reputation, the latencies of Baseline Bullshark and PL increase significantly as the number of failures increases. provides up to 65% lower latencies than Baseline Bullshark under failures.
Figure <ref> shows the impact of skipping leaders on the latency by comparing Vanilla Bullshark with on a timeline plot under failures. We have a system of 50 validators, 8 of which have failed. The x-axis represents a part of the experiment time window and the y-axis shows the latency.
The presence of timeouts and the need to skip anchors cause Vanilla Bullshark's latency to fluctuate. In our experiment, we observed latency jitter of approximately one second, which makes it impossible to provide predictable latency in production systems. In contrast, maintains consistently low latency without any jitter.
§.§ Summary
In contrast to Vanilla Bullshark, provides up to 40% lower latency in the fault-free case and up to 80% lower latency under failures. Furthermore, we show that provides predictable latency and is able to commit at network speed in most cases and without waiting for timeouts.
§ RELATED WORK
§.§ BFT systems for Blockchains
Byzantine fault tolerance (BFT) has been an active area of research for over four decades, with a significant body of literature in both theory <cit.> and systems <cit.>. With the advent of Blockchain systems in recent years, the focus on performance and scalability has notably increased.
Initial efforts to enhance throughput and scalability attempted to reduce the communication complexity of leader-based eventually synchronous protocols. This resulted in a considerable body of work aiming to achieve communication complexity linear in the number of validators <cit.>.
Despite the sound theoretical premise, the practical implications arguably fell short of expectations.
An independent evaluation and comparison conducted by <cit.> revealed that the well-known HotStuff <cit.> protocol achieved a throughput of only 3,500 TPS on a geo-replicated network.
The practical breakthrough occurred a few years later with the realization that the main bottleneck in BFT systems, particularly those relying on leaders, is data dissemination. Mir-BFT <cit.> introduced an innovative approach by running multiple PBFT <cit.> instances in parallel.
Independently, Narwhal <cit.> and later Dispersedledger <cit.> decoupled data dissemination from the consensus logic. These advancements showcased impressive results, with Narwhal achieving a peak throughput of 160,000 TPS.
There has been both systems <cit.> and theoretical <cit.> research on asynchronous BFT protocols. However, to the best of our knowledge, no asynchronous protocol is deployed in production in an industrial system.
Another appealing property of Narwhal is the support of partially synchronous <cit.> as well as asynchronous <cit.> (as long as randomness is available) protocols, and the ability to easily switch among them.
§.§ Timeouts and responsiveness
The FLP <cit.> impossibility result states that there is no deterministic consensus protocol that can tolerate a fully asynchronous network.
The proof relies on the fact that it is impossible to distinguish between crashed and slow validators during asynchronous periods.
The immediate implication for partially synchronous networks, therefore, is that all deterministic protocols must rely on timeouts in some way to guarantee liveness against a worst-case adversary.
Indeed, to the best of our knowledge, all previous deterministic BFT protocols, including the partially synchronous version of Bullshark <cit.>, relied on timeouts to implement a simple version of a failure detector <cit.>.
This mechanism monitors the leaders and triggers view-changes when timeouts expire, i.e. when faults are suspected.
The optimistic responsiveness property, popularized by HotStuff <cit.>, avoids timeouts in the best-case failure-free scenario.
However, when failures do occur, all validators wait until the timeout expires before view-changing to the next leader, introducing a significant slowdown in the protocol execution.
Moreover, as discussed in Section <ref>, setting a proper timeout duration is a non-trivial problem in its own right.
provides prevalent responsiveness, which is a strictly better property than optimistic responsiveness as it guarantees network speed progress in case of healthy leaders and zero delays in case of failures.
achieves this by relying on the network speed “clock" inherent in the DAG construction itself <cit.>, combined with the leader reputation mechanism.
While due to the FLP result, the worst case in which a timeout would be required for maintaining the liveness of the protocol cannot completely be eliminated, successfully relegates such cases to occur in specific extremely uncommon scenarios from a practical point of view (multiple consecutive unordered anchors).
§.§ DAG-based BFT
DAG-based consensus in the context of BFT was first proposed by HashGraph <cit.>. The idea is to separate the network communication layer, i.e. efficiently constructing a system that forms a DAG of messages, and the consensus logic that can involve complex pieces such as view-change and view-synchronization.
The consensus logic is performed locally, whereby a validator examines its local view of the DAG and orders the vertices without sending any messages.
The challenge arises from the asynchronous nature of the network, which may cause different validators to observe slightly different portions of the DAG. To address this, the DAG structure is interpreted as a consensus protocol, wherein a vertex represents a proposal and an edge represents a vote.
Aleph <cit.> introduced a round-based DAG structure. Such a structure simplifies support for garbage collection and non-equivocation, which in turn simplifies the consensus logic to order the vertices.
Narwhal implements round-based DAG, and three Narwhal-based consensus protocols have been previously proposed. The first is DAG-Rider <cit.>, which introduced a quantum-safe asynchronous protocol with optimal amortized communication complexity and O(1) latency. Tusk <cit.> improved latency in the best case. An asynchronous version of Bullshark <cit.> includes a fast path <cit.>, while a stand-alone partially synchronous protocol <cit.> also exists and is currently deployed in production in Sui <cit.>. presents a framework that applies to all Narwhal-based protocols, enhancing their latency through a more efficient ordering rule and a leader reputation mechanism.
An orthogonal theoretical effort <cit.> trades off the non-equivocation property of the DAG construction (which typically requires reliable broadcast), as well as the separation from the consensus logic, in order to reduce latency.
§.§ Pipelining
To the best of our knowledge, pipelining in the BFT context was first proposed by Tendermint <cit.>, and later utilized in HotStuff <cit.> and Diem <cit.>.
State machine replication (SMR) systems can be constructed from multiple instances of single-shot consensus <cit.>, e.g. one approach to build Byzantine SMR is by running a PBFT instance <cit.> for each slot.
Tendermint introduced the elegant idea of chaining proposals or piggybacking single-shot instances such that a value for a new slot could be proposed before the value for the previous slot was committed. In this approach, a message in the i^th round of the k^th instance can be interpreted as a message in round i-1 of instance k+1. While the latency for each instance remains unchanged, clients experience improved latency as their transactions can be proposed earlier.
In DAG-based consensus, the concept of piggybacking proposals is inherent in the design, as each vertex in the DAG links to vertices in previous rounds. However, previous protocols did not allow having an anchor in every round.
framework supports having an anchor in each round in a good case for any Narwhal-based protocol, providing a "pipelining effect".
§.§ Leader reputation
Leader reputation is often overlooked in theory, yet it plays a crucial role in performance in practice.
While Byzantine failures are rare, as validators are highly protected, isolated, and economically incentivized to follow the protocol, unresponsive validators are much more common.
This may be because they have temporarily crashed, are running slow hardware, or are simply located farther away.
If a leader/anchor election is done naively, unresponsive validators will unavoidably stall progress and lead to significant performance impact.
A practical approach, implemented in Diem <cit.> and formalized in <cit.>, is to exclude underperforming validators from leader election. This is achieved by updating the set of candidates after every committed block based on the recent activity of validators. In a chained protocol, if all validators observe the same committed block, they can deterministically elect future leaders based on the information in the chain. However, in some cases, certain validators may see a commit certificate for a block earlier than others. This can lead to disagreements among validators regarding the list of next leaders, causing a temporary loss of liveness.
For DAG-based protocols, disagreements on the identity of round leaders can lead the validators to order the DAG completely differently. This poses a challenge for implementing leader reputation on the DAG. As evidence, a Narwhal and Bullshark implementation currently deployed in production in Sui blockchain does not support such a feature [github.com/MystenLabs/sui/blob/main/narwhal/consensus/src/bullshark.rs]. enables leader reputation in Narwhal-based BFT protocols without any additional overhead.
§ DISCUSSION
can be instantiated with any Narwhal-based consensus protocol, and can even switch between protocols during the DAG retrospective re-interpretation step.
uniformizes the latency and throughput across the validators and eliminates the use of timeouts except in very rare cases, which contributes to the robustness and performance of the system.
Predictable and smooth latency and throughput patterns have major practical benefits for real systems. It facilitates setting up effective monitoring and alerts for anomaly detection. This is crucial for ensuring security and quality of service by enabling timely response and any intervention necessary, be it manual or automated. Predictable consensus throughput also facilitates pipelining the ordering of transactions with other components of the Blockchain, e.g. transaction execution and commit.
satisfies the property we name prevalent responsiveness, ensuring the worst-case executions that must use timeouts due to the FLP impossibility result are aligned with the improbable (and worst-case) scenarios from the practical standpoint. Moreover, the design without timeouts plays into the strengths of the leader reputation mechanism of , and as a result, provides further latency improvements.
ACM-Reference-Format
§ MULTIPLE ANCHORS PER ROUND
With pipelining, introduces an anchor in every round.
As a result, in the best case, each anchor requires 2 rounds to commit, while non-anchor vertices require 3 rounds.
Next, we present an approach to further optimize the latency for non-anchor vertices, which relies on retrospectively re-interpreting the DAG structure.
We could envision a protocol in which we iterate over more than one vertex in each round in a deterministic order and treat each vertex as an anchor.
More specifically, for a vertex v in round r, we consider executing an instance of the underlying Narwhal-based consensus protocol 𝒫 (i.e., DAG-Rider, Tusk, and Bullshark) starting from round r with v being the first anchor.
This involves re-interpreting the existing DAG structure, and potentially letting it evolve, until a decision of whether v is ordered or skipped is locally made.
If v is ordered by 𝒫, then the causal history of v, followed by v itself, is added to the ordering determined by the new protocol. Otherwise, v is skipped and the protocol proceeds to consider a new instantiation of 𝒫 from the next potential anchor (which may be in the same round).
A pseudocode in which all vertices are considered as anchors appears in Algorithm <ref>.
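A minimal sketch of this per-vertex interpretation, assuming a helper that runs a fresh instance of the underlying protocol with a given vertex as its first anchor and reports whether it is ordered or skipped (all names are illustrative):

```python
def order_all_vertices_as_anchors(dag, protocol_factory):
    """Treat every vertex, in a deterministic per-round order, as the first anchor
    of a fresh protocol instance; commit it together with its causal history if that
    instance orders it, otherwise skip it and move on to the next potential anchor."""
    ordered = []
    for r in dag.rounds():                                       # rounds in increasing order
        for vertex in sorted(dag.vertices(r), key=lambda v: v.author):
            instance = protocol_factory()
            decision = instance.decide(dag, first_anchor=vertex)  # "ordered" or "skipped"
            if decision == "ordered":
                ordered.extend(dag.causal_history(vertex, exclude_already_ordered=True))
                ordered.append(vertex)
    return ordered
```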
In the good case, each vertex that is considered as an anchor can be ordered in 2 rounds.
However, the drawback of this approach is that if some validators are slow and a potential anchor takes many rounds to decide whether to skip or order, the progress of the whole protocol will be stalled.
This happens because potential anchor vertices must be considered in an agreed-upon and deterministic order. As a result, a vertex that necessitates more rounds incurs a latency penalty for the subsequent vertices.
The above issue can potentially be mitigated by combining it with a leader reputation mechanism to select the vertices that are considered as potential anchors, making the bad case delays less likely.
The other vertices can be ordered based on causal history as previously.
|
http://arxiv.org/abs/2306.01658v1
|
20230602162734
|
An Adaptive Method for Weak Supervision with Drifting Data
|
[
"Alessio Mazzetto",
"Reza Esfandiarpoor",
"Eli Upfal",
"Stephen H. Bach"
] |
cs.LG
|
[
"cs.LG"
] |
We introduce an adaptive method with formal quality guarantees for weak supervision in a non-stationary setting. Our goal is to infer the unknown labels of a sequence of data by using weak supervision sources that provide independent noisy signals of the correct classification for each data point. This setting includes crowdsourcing and programmatic weak supervision. We focus on the non-stationary case, where the accuracy of the weak supervision sources can drift over time, e.g., because of changes in the underlying data distribution. Due to the drift, older data could provide misleading information to infer the label of the current data point. Previous work relied on a priori assumptions on the magnitude of the drift to decide how much data to use from the past. Comparatively, our algorithm does not require any assumptions on the drift, and it adapts based on the input. In particular, at each step, our algorithm guarantees an estimation of the current accuracies of the weak supervision sources over a window of past observations that minimizes a trade-off between the error due to the variance of the estimation and the error due to the drift. Experiments on synthetic and real-world labelers show that our approach indeed adapts to the drift.
Unlike fixed-window-size strategies, it dynamically chooses a window size that allows it to consistently maintain good performance.
§ INTRODUCTION
In order to efficiently create training data for machine learning, programmatic weak supervision <cit.> estimates the accuracy of multiple noisy sources of labels without access to ground truth.
Given a set of labeling functions that vote on the true label for each unlabeled example, the goal is to infer the latent ground truth.
Once inferred, these labels can be used as training data.
In this paper, we study the non-stationary setting, in which the accuracy of each labeling function can drift over time because of changes in the underlying data.
For example, in an image classification task, latent subclasses that make up each class might shift over time.
If the task is to classify animals into categories like “mammal” and “bird,” the accuracy of a weak labeler that looks for attributes like wings might change in accuracy if animals like bats become more or less prevalent.
We ask the question, “Under what conditions can we detect changes in the accuracies of weak labelers over time and bound their error without access to ground truth?”
Programmatic weak supervision is important for creating training data sets when resources are limited.
It can be used for natural language processing <cit.>, computer vision <cit.>, tabular data <cit.>, and other modalities <cit.>.
It has also enabled machine learning applications in industry <cit.> and academia <cit.>.
Even when prompting or fine-tuning large pre-trained models, weak supervision can unlock improved quality and enable adaptation to new tasks <cit.>.
The central modeling challenge in programmatic weak supervision is estimating the probabilistic relationships among the votes of the weak labelers and the latent ground truth.
It is hard because, without access to ground truth labels, the observed votes can be explained in many different ways.
Perhaps the votes tend to agree because they are all accurate labelers.
Or perhaps they are all inaccurate.
Perhaps there are correlations among the votes caused by relying on similar decision processes.
If one assumes that the votes are conditionally independent given the true label and that the examples are independent and identically distributed (i.i.d.), this is equivalent to
the Dawid-Skene model <cit.> that is the basis for many related works in crowdsourcing <cit.>.
Many works on crowdsourcing and weak supervision have relaxed the conditional independence assumption in various ways to account for a wide range of weak labelers <cit.>.
With two exceptions discussed below, all these aforementioned works assume that the examples are i.i.d.. This is a restrictive assumption when data is collected over time, and it is natural to observe a change, or drift, in the distribution of the examples. In our work, we relax the identically distributed assumption, and assume only that the examples are independent. This introduces a trade-off: if we want to obtain a good estimate at the current time, using more past examples provides more data, which might result in a better estimate if that data is similarly distributed, but might harm the estimate if the window includes a significant distribution drift.
Much prior work has addressed the problem of drifting data in the supervised learning setting <cit.>.
These methods generally rely on labeled data that is unavailable in the weakly supervised setting.
Another broad line of work has viewed drift detection as an unsupervised problem, looking for non-stationarity in arbitrary distributions <cit.>.
These methods generally assume a prior distribution on the locations in time of drift.
That prior can be either defined explicitly in a Bayesian framework or implicitly via a heuristic cost function that penalizes the trade off between better fitting the data and finding more drift points.
In a similar vein, previous works on relaxing the i.i.d. assumption with multiple noisy labelers have placed assumptions on how much their accuracies can drift <cit.>.
In contrast, our goal is to estimate the labelers' accuracies without prior assumptions on the drift and without explicitly quantifying it from the input.
This is a very challenging problem: the drift is unknown, and we cannot estimate it from the data since we have access to only a single sample from each distribution.
Our Contributions.
We introduce the first adaptive algorithm for programmatic weak supervision in the presence of drift with formal guarantees on the quality of its parameter estimates.
The advantage of an adaptive algorithm is that it can react in a rigorously principled way to changes in the accuracies of the weak labelers as they occur (as opposed to having to make an assumption on how much drift will occur).
When the underlying process is stationary, it can accumulate as much data as possible in a large window of time in order to best estimate the accuracies of the labelers.
When drift does occur, it can react by using only the most recent (and most relevant) data to estimate the accuracies.
Our method selects the amount of data to use based on differences in the rates of agreement among the labelers.
We derive a principled decision rule for this selection and provide a rigorous analysis that bounds the resulting error of the estimated accuracies of the labelers.
Our novel bound separates the statistical error of estimating the parameters from the error caused by possible drift.
This analysis enables the algorithm to select a close-to-optimal trade-off to minimize the worst-case error.
The conceptual difference between our approach and all previous work is that we do not rely on prior information about the drift, or try to learn the drift from the data (both unrealistic in many applications). Instead, at each time step, our algorithm compares its estimation obtained using different window sizes and uses this information to detect drift and adjust the window size for the decision at that step. We analytically prove that this information is sufficient to allow the algorithm to efficiently adapt to drift in distribution, without explicitly estimating the magnitude of the drift.
In our experimental evaluation, we show that on synthetic data and a drifting dataset constructed from Animals with Attributes2 <cit.>, our algorithm adapts to the drift as it occurs, dynamically selecting the amount of data to use in an effective way.
Unlike fixed-window-size strategies, we find this approach consistently maintains good performance as drift occurs.
As the better window sizes change over time, the algorithm adapts by shifting to the better ones.
§ PROBLEM STATEMENT
Given a vector v∈ℝ^q, let ‖v‖_∞ = sup_1 ≤ i ≤ q|v_i|
be the largest component of v in absolute value. Similarly, given a matrix C∈ℝ^q× q, we define ‖C‖_∞ = sup_i,j|C_ij|.
Let 𝒳 = ℝ^d be our classification domain. A binary classification task is specified by a function y:𝒳→𝒴, where 𝒴 = { -1,1} is the label space. Given x, we would like to infer its label y(x).
We assume access to n weak labeling functions ℓ_1,…,ℓ_n, where each ℓ_i : 𝒳→𝒴 provides a tentative labeling of the item x. For example, each weak classifier can be a classifier that was trained for a related task, or a decision rule based on a simple programmatic criterion. The weak labeling functions ℓ_1(x), …, ℓ_n(x) are the only information sources for our classification task.
We receive a sequence of examples X_1, X_2, … over time.
For any given time t, our goal is to obtain an accurate estimate of the correct label y(X_t) of X_t given the weak labelling functions ℓ_1,…,ℓ_n and the input sequence up to time t, X_1,…,X_t.
We adapt the standard assumptions used in analyzing weak supervision, in particular crowdsourcing, with no drift <cit.>.
We first assume that the input sequence ( X_t)_t ∈ℕ is an independent, but not identically distributed stochastic process. Any finite subset of its random variables are mutually independent, and each X_t is sampled from a distribution D_t over 𝒳 that can drift over time. Formally, this is stated with the following assumption.
For any finite t ≥ 1, the input vector
(X_1,…,X_t) is distributed as ∏_i=1^t D_i.
The second standard assumption is that the weak labelers have independent errors.
For any t ≥ 1 and i≠ j,
for X_t ∼𝒟_t, we have that the events
{ℓ_i(X_t) ≠ y(X_t) } and {ℓ_j(X_t) ≠ y(X_t) } are independent given y(X_t).
We define the accuracy of the weak labeler i at time t as
p_i(t) ≐ ℙ_X ∼ D_t( ℓ_i(X) = y(X) )
The value p_i(t) ∈ [0,1] represents the probability that the weak labeler ℓ_i is correct with a sample X_t ∼ D_t. The accuracy probability p_i(t) is a function of the input distribution D_t and therefore may drift in time.
Example. Assume that the classification task is to distinguish whether an input image contains a cat or a dog. Assume that there is a weak labeler ℓ_tail that detects whether an animal has a tail or not. This weak labeler provides no signal if we only observe images of cats and dogs that both have tails, however the relevance of this classifier can change over time: if the probability of observing a dog without a tail (e.g., a bulldog) grows over time, this weak labeler can provide a stronger signal towards the right classification.
Our goal is to adapt dynamically to the change in accuracy of the weak labelers.
Remark. For concreteness, we analyze our algorithm with respect to a drift in the input distribution over 𝒳. However, our analysis applies to a more general case, where in addition to drift in the input distribution there can be drifts over time in the accuracy of individual labelling functions. Such drift can be the results of a change in the functionality of the labeling functions.
For example, a human labeler can get tired and make more mistakes, or a sensor's accuracy can be affected by a change of light or temperature.
Formally, instead of a labelling function ℓ_i(X) we have a family of labelling functions
{ℓ_i,t(X) | t≥ 1}. Equation (<ref>) is replaced with
p_i(t) ≐ ℙ_X ∼ D_t( ℓ_i,t(X) = y(X) ),
and the algorithm and analysis are the same.
§ RELATED WORK
To our knowledge only two works have considered relaxing the i.i.d. assumption in any way when learning from multiple noisy sources of labels.
They both require assumptions on how much the accuracies of the labelers can change over time.
The first <cit.> assumes that the accuracy of the weak labelers can change at most by a constant at each step. In particular, their assumptions imply that there exists a value Δ > 0, known a priori, that upper bounds the magnitude of the drift at each step, i.e. ‖p(t) - p(t+1) ‖_∞≤Δ for all t ≥ 1.
The second <cit.> assumes that the KL divergence between two consecutive distributions is upper bounded by a constant Δ.
These are essentially similar assumptions: an assumed upper bound on the magnitude of the drift allows these methods to determine before execution how much information they should use from the past.
These algorithms are impractical, as the value Δ is unknown in practice, and they cannot adapt if the rate of drift changes over time.
If one assumes too much drift, then the algorithm will use too small an amount of data and have greater statistical error in its estimates of the labelers' accuracies.
If one assumes too little drift, then the algorithm will use too large an amount of data and have errors from using data from too different a distribution.
A priori, there is usually no way to know how to choose.
In contrast, in this work, our goal is to dynamically choose the window size as a function of the observed votes without requiring any assumption on the amount of drift.
In other words, we want to adapt to drift as it occurs.
The challenge of coping with non-stationary drift has been studied in a number of other settings. A sequence of works <cit.>) considered supervised learning settings with distribution drift in the training set,
assuming some known upper bound on the drift. An alternative assumption <cit.>, is that the training set is drifting between a small number of distributions, and the algorithm has multiple samples from each distribution to learn the drift error. A minimax error for density estimation with distribution drift was studied in <cit.>, again with some a priori assumption on the drift rate. We also note that the non-stationary setting was also extensively studied in reinforcement learning (e.g., <cit.>).
That setting significantly differs from ours, as the goal is to minimize the regret, and the distribution of the samples is also affected by the decisions taken by a policy on the environment.
Our work is the first to provide an adaptive algorithm for weak supervision and crowdsourcing in a non-stationary setting and without any prior assumptions on the drift.
§ PRELIMINARY RESULTS
Our work builds on the following results that study the problem in settings where the accuracy probabilities are known, or there is no drift in the input distribution.
Assume first that the accuracy probabilities p(t) = (p_1(t),…,p_n(t)) of the weak labelers at any time t≥ 1 are known. With <Ref>, it is known that the optimal aggregation rule for classifying X_t is a weighted majority vote, where the weights are functions
of the accuracies of the weak labelers at time t <cit.>. In that case, the optimal classification of X_t does not use information about the preceding inputs. In particular, consider the family of weighted majority classifiers f_w : 𝒳↦𝒴 with weights w = (w_1,…,w_n)^T, i.e.
f_w(x) = sign( ∑_i=1^n w_i ℓ_i(x) ).
It is shown in <cit.> that the optimal aggregation of ℓ_1(X_t), …, ℓ_n(X_t) is given by f_w^*(t) where
w^*(t) = ( ln(p_1(t)/(1-p_1(t))), …, ln(p_n(t)/(1-p_n(t))) )
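For concreteness, a small numpy sketch of this aggregation rule, with votes in {-1, +1} and a clipping safeguard (our addition) against accuracies of exactly 0 or 1:

```python
import numpy as np

def weighted_majority(votes, accuracies, eps=1e-6):
    """Aggregate votes in {-1, +1} with the log-odds weights w_i = ln(p_i / (1 - p_i))."""
    p = np.clip(np.asarray(accuracies, dtype=float), eps, 1 - eps)  # avoid infinite weights
    w = np.log(p / (1 - p))
    return int(np.sign(w @ np.asarray(votes, dtype=float)) or 1)    # ties broken towards +1

# Two accurate labelers (0.9) disagree; the third (0.6) tips the balance: prints -1.
print(weighted_majority([+1, -1, -1], [0.9, 0.9, 0.6]))
```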
In weak supervision and crowdsourcing applications, the accuracy probabilities of the weak labelers are unknown.
Several methods for estimating p(t) using previous samples have been proposed
in the literature in a setting without distribution drift <cit.>. It is known that under mild assumptions, if we have access to enough identically distributed samples, it is possible to accurately estimate the accuracies of the weak labelers, and different minimax methods have been proposed in this setting <cit.>. Our contribution is an adaptive method that allows for this estimation in a non-stationary setting without any prior assumption on the drift.
Our estimation method is based on the technique developed by <cit.> that uses the weak labelers' correlation matrix to estimate the expertise of each weak labeler in a no-drift setting. In particular, for each t ≥ 1, we let the correlation matrix C(t) ∈ [-1,1]^n × n be defined as
C_ij(t) = 𝔼_X ∼ D_t[ ℓ_i(X) ℓ_j(X) ] ∀ (i,j) ∈{1,…,n}^2 .
When there is no distribution drift and under mild assumptions on the bias of the estimates of the weak supervision sources, it is possible to show that a good estimation of the correlation matrix C(t) implies a good estimation of the accuracies p(t). The assumption on the bias is formalized as follows.
There exists τ > 0 such that p_i(t) ≥1/2 + τ for all t ≥ 1 and i ∈{1,…,n}.
With this assumption, the following result holds for the non-drift setting:
Let C∈ [-1,1]^n × n be a matrix such that ‖C - C(t)‖_∞≤ϵ, and assume n ≥ 3. Let Assumptions <ref>, <ref> and <ref> hold. Then, there exists an estimation procedure that, given C as input, outputs p̂ = (p̂_1,…,p̂_n) such that ‖p(t) - p̂‖_∞≤ (5/2)ϵ/τ^2.
Note that the algorithm for the non-drift case presented in <cit.> and the algorithm presented here for the drift case are oblivious to the value of τ.
§ ALGORITHM
As explained in the previous section, our method revolves around the estimation of the correlation matrix C(t) at the current time t in order to use <Ref> and obtain an estimate of p(t). In particular, we generalize the method in <cit.> to handle drift.
All proofs are deferred to the supplementary material.
We define the following quantity:
Ĉ^[r](t) ≐1/r∑_k=t-r+1^t ( ℓ_1(X_k), …, ℓ_n(X_k))^T( ℓ_1(X_k), …, ℓ_n(X_k)) .
The matrix Ĉ^[r](t) ∈ [-1,1]^n × n is the empirical correlation matrix computed using the latest r samples X_t-r+1,…,X_t. This matrix provides the following guarantee on the estimation of C(t).
Let t ≥ 1, let δ∈ (0,1), and let <Ref> hold. The following inequality holds with probability at least 1-δ:
‖C(t) - Ĉ^[r](t) ‖_∞≤√(2 ln(n(n-1)/δ)/r)+ 12∑_k=t-r+1^t-1‖p(k) - p(k+1)‖_∞
<Ref> shows that the error of estimating C(t) by using the previous r samples can be upper bounded with the sum of two error terms: a statistical error and a drift error. The statistical error is related to the variance of the estimator Ĉ^[r](t), and it decays with rate O(1/√(r)). The drift error is unknown and it quantifies the error introduced due to the distribution shift, and it is measured as sum of the maximum variation of the accuracy of the weak labelers at each step. The drift error is non-decreasing with respect to r. There is a trade-off: we want to choose a value of r that minimizes the sum of the statistical error and the drift error.
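For concreteness, the estimator Ĉ^[r](t) can be computed directly from the votes in the current window; a minimal numpy sketch with votes in {-1, +1}:

```python
import numpy as np

def empirical_correlation(votes_window):
    """votes_window: (r, n) array with one row of votes in {-1, +1} per sample
    X_{t-r+1}, ..., X_t. Returns the n x n matrix (1/r) * sum_k l(X_k) l(X_k)^T."""
    L = np.asarray(votes_window, dtype=float)
    return (L.T @ L) / L.shape[0]
```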
Our main contribution is an algorithm that without any assumption on the drift can provide a close-to-optimal solution of the above trade-off (<ref>).
This is a challenging problem, as it is not possible to estimate the drift error, since we only have a single sample from each distribution. Specifically, our algorithm is parameterized by a sequence ℛ = { r_1, …, r_m } such that r_1 ≤…≤ r_m. Our algorithm guarantees an estimation error of the matrix C(t) that is essentially up to constant as tight as the value of r ∈ [r_1,r_m] that minimizes the right-hand side of (<ref>). This yields a guarantee on the estimation of p(t) by using <Ref>. The next theorem formalizes this result.
Let Assumptions <ref>, <ref> and <ref> hold. Let δ∈ (0,1) and β >0. Let ℛ = { r_1, …, r_m }. Assume n ≥ 3. If we run Algorithm <ref> at time t ≥ r_1, then with probability at least 1-δ it provides an estimate p̂ = (p̂_1,…,p̂_n) such that
‖p(t) - p̂‖_∞ ≤ (5Φ_ℛ,β/(2τ^2)) min_r ∈ [r_1,min(t,r_m)]( A_δ,n,m/√(r)
+ 12∑_k=t-r+1^t-1‖p(k) - p(k+1) ‖_∞)
where A_δ,n,m≐√( 2 ln[(2m-1)· n(n-1)/δ]), and Φ_ℛ,β = 1+ max{2β + 2/γ_m(1-γ_M), 2β + 2/β(1-γ_M)}, with γ_M = max√(r_k/r_k+1) and γ_m = min√(r_k/r_k+1).
The pseudocode of our method is reported in Algorithm <ref>. The goal of the algorithm is to increase the window size ending at time t as long as it does not include significant drift in distribution. As a reference for making this decision we observe that
if the samples are identically distributed, we have that for any pair i,j, it holds (see <Ref> in the Appendix):
D_1 = … = D_t ⟹| Ĉ^[r_k+1]_ij - Ĉ^[r_k]_ij| ≤√(1/r_k - 1/r_k+1)
The strategy of our algorithm is the following. Starting with k=1, we iteratively compare the empirical correlation matrices computed with r_k and r_k+1 samples, respectively. If there is minimal drift, the empirical quantity | Ĉ^[r_k+1]_i,j - Ĉ^[r_k]_i,j| should be smaller than or comparable to the right-hand side of (<ref>) for any entry i,j. If that is the case, we increase the value of k. If this empirical quantity is larger, then a significant drift must have occurred. In this case, we can stop and show that using r_k samples is provably close to optimal.
In the algorithm, this strategy is implemented in lines 2–10. The threshold used as a terminating condition for the iteration of the algorithm is the right-hand side of line 4. The lines 10–15 implement the method that maps a correlation matrix to the accuracies of the weak supervision sources and attains the guarantees of <Ref> <cit.>.
Our algorithm has the following parameters:
* The value δ∈ (0,1) represents the failure probability of the algorithm.
* The sequence ℛ = { r_1, …, r_m } represents the possible window sizes that the algorithm considers. In order to obtain better guarantees in <Ref>, we look for a sequence ℛ such that: (i) the minimum ratio between consecutive elements γ_m is large, as this avoids comparing window sizes that are very similar to one another and for which it is very hard to detect if drift occurred; (ii) the maximum ratio between consecutive elements γ_M is small, as this prevents a situation in which ℛ is sparse, and there is no value in ℛ that is close to the optimal window size. With our analysis, the best guarantees of the algorithm are achieved by using a sequence of powers of 1/(√(2)-1)^2 as ℛ.
* The value of β affects the threshold used in our algorithm. Intuitively, the value of β is proportional to how much drift the algorithm must observe before stopping, and it determines the sensitivity of our algorithm to detecting drift. The optimal value of β that minimizes the upper bound of our algorithm is β=√(2)-1.
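For concreteness, the following is a minimal numpy sketch of the window-selection loop described above (lines 2–10 of Algorithm <ref>); the function names, the array layout of the votes, and the default parameters are illustrative choices rather than the exact released implementation, and the stopping threshold is the quantity 𝒯(k) used in the analysis in the Appendix.

```python
import numpy as np

def empirical_corr(votes):
    # Empirical correlation matrix of {-1,+1} votes over the given window.
    r = votes.shape[0]
    return votes.T @ votes / r

def select_window(votes, R, beta=0.1, delta=0.1):
    """Adaptive window-size selection (sketch).

    votes: (t, n) array of weak-labeler votes in {-1, +1}, most recent row last (t >= R[0]).
    R:     increasing candidate window sizes r_1 <= ... <= r_m.
    Returns the selected window size and the corresponding empirical matrix.
    """
    t, n = votes.shape
    m = len(R)
    A = np.sqrt(2 * np.log((2 * m - 1) * n * (n - 1) / delta))
    k = 0
    C_k = empirical_corr(votes[-R[0]:])
    while k + 1 < m and R[k + 1] <= t:
        r_k, r_next = R[k], R[k + 1]
        C_next = empirical_corr(votes[-r_next:])
        # threshold T(k) = 2*beta*A/sqrt(r_k) + A*sqrt((1 - r_k/r_next)/r_k)
        thr = 2 * beta * A / np.sqrt(r_k) + A * np.sqrt((1 - r_k / r_next) / r_k)
        if np.max(np.abs(C_next - C_k)) > thr:
            break  # significant drift detected: keep the current window r_k
        C_k, k = C_next, k + 1
    return R[k], C_k
```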
§ EMPIRICAL EVALUATION
We demonstrate the functionality and the advantage of our algorithm over fixed-window-size strategies in two experimental settings. More experiments are reported in the supplemental material.
Setup
At each time step, we receive an unlabeled example which must be labeled based on the available weak labelers.
We use Algorithm <ref> to estimate the accuracies of the weak labelers.
We then make a prediction for the current time step's example by weighting the vote of each labeler proportionally to its estimated accuracy using the weighting w^* described in Equation (<ref>).
For all experiments, we run our algorithm using the first 20 powers of two as ℛ, and β = δ = 0.1.
As baselines, we consider majority vote and fixed-window-size strategies with sizes in ℛ.
These baseline algorithms are the same as Algorithm <ref>, except that we use a window size specified a priori to estimate Ĉ (line 10).
Since the triplet method for estimating accuracies (lines 11-19) is not constrained to return a probability between 0 and 1, we clip the estimated accuracies to the interval [0.1,0.9].
All code for reproducing the results is included in the supplementary material.
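The following sketch illustrates one step of this pipeline. It assumes a log-odds form for the weighting w^* and a fixed choice of triplet; both are our simplifications, not necessarily the exact implementation (the triplet identity follows from the relation C_ij = (2p_i-1)(2p_j-1) derived in the Appendix, and we clip the estimated accuracies to [0.1, 0.9] as described above).

```python
import numpy as np

def accuracies_from_corr(C_hat, clip=(0.1, 0.9)):
    """Triplet-style recovery of accuracies from an estimated correlation matrix.

    Uses C_ij = (2 p_i - 1)(2 p_j - 1) under conditional independence, so that
    (2 p_i - 1)^2 = C_ij * C_ik / C_jk for distinct j, k (labelers assumed better
    than random, and the chosen denominators assumed bounded away from zero).
    """
    n = C_hat.shape[0]
    p_hat = np.empty(n)
    for i in range(n):
        j, k = [u for u in range(n) if u != i][:2]   # arbitrary triplet choice
        a_i = np.sqrt(np.abs(C_hat[i, j] * C_hat[i, k] / C_hat[j, k]))
        p_hat[i] = (1 + a_i) / 2
    return np.clip(p_hat, *clip)

def weighted_vote(votes_t, p_hat):
    # Predict the current label from labeler votes in {-1, +1}; assumed log-odds weights.
    w = np.log(p_hat / (1 - p_hat))
    return 1 if w @ votes_t >= 0 else -1
```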
§.§ Synthetic Data
We first show how our algorithm adapts to changing input distributions with a toy experiment on synthetic data that satisfies all of our assumptions. The algorithm receives input from three weak labelers (n=3), and the input stream has 4 · T data points with T=5000. The data is partitioned into three contiguous blocks of size T, 2T and T. The accuracies of the weak labelers do not change within the same block, but do change between blocks. In particular, for each block, two weak labelers have high accuracy equal to 0.9, and the other one has low accuracy equal to 0.6. The weak labeler with low accuracy is different in each block. We remark that our algorithm is oblivious to this partitioning of the data.
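A minimal generator for this synthetic stream is sketched below; which labeler has low accuracy in each block is not specified above, so the assignment in `blocks` is an arbitrary illustrative choice, as are the function name and the seed.

```python
import numpy as np

def generate_synthetic_stream(T=5000, seed=0):
    """Three labelers; the 0.6-accuracy labeler changes across blocks of size T, 2T, T."""
    rng = np.random.default_rng(seed)
    blocks = [(T, 0), (2 * T, 1), (T, 2)]   # (block length, index of the low-accuracy labeler)
    votes = []
    for length, low in blocks:
        acc = np.full(3, 0.9)
        acc[low] = 0.6
        y = rng.choice([-1, 1], size=length)              # latent true labels
        correct = rng.random((length, 3)) < acc            # independent correctness events
        votes.append(np.where(correct, y[:, None], -y[:, None]))
    return np.vstack(votes)
```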
In Figure <ref>, we plot the window size used by the adaptive algorithm and its estimates of the accuracies of each weak labeler at each time t, 1 ≤ t ≤ 4T. The reported results are an average over 10 independent generations of this synthetic data. The main observation is that our algorithm correctly identifies a change in distribution, and reduces the window size whenever it transitions to the next block. This allows for a very good estimation of the weak labeler accuracies, as the algorithm uses data mostly from the current block.
Clearly, there is a delay before our algorithm can correctly identify the distribution change, since it needs to collect enough data from the new block to assess that a significant change happened. As a result, the estimation of the weak labelers' accuracies is worse for the data right after a block change.
The variation in the accuracy estimates in the middle of a block is due to the window size selection strategy of the algorithm: whenever the algorithm increases the window size, the larger window includes, for the following few steps, a small number of samples from the previous block, resulting in a small additional error in the estimation.
§.§ Image Classification
The Animals with Attributes2 (AwA2) dataset <cit.> consists of images of animals from 50 disjoint classes, that are split into 40 seen classes, used for training, and 10 unseen classes, used for testing.
The dataset also provides the relations among 85 attributes (e.g., “patches”) and classes through a binary class-attribute matrix, where each entry indicates whether animals from a certain class exhibit an attribute or not.
Following previous work <cit.>, we obtain weak supervision sources by fine-tuning ResNet-18 models <cit.> on the seen classes to detect each of the attributes.
We use this dataset to construct a binary classification task with drift. We define two target classes over the unseen test classes. The first target class contains images from the classes “horse” and “sheep”; the second target class contains images from classes “giraffe” and “bobcat.”
We use the class-attribute matrix to identify attributes that are helpful to distinguish between those two target classes.
An attribute is helpful if 1) it appears only in one of the target classes and 2) it consistently appears or does not appear in both classes of each target class.
Using these criteria, we choose the attribute detectors for “black”, “white”, “orange”, “yellow”, “spots”, and “domestic” attributes as weak supervision sources.
To create a dataset with drift, we sample 5T images with replacement from the selected classes, with T=4000.
We partition the data into five contiguous blocks of size T.
In the first block, we sample from “sheep” and “bobcat” classes with a probability of 0.1 and from “horse” and “giraffe” classes with a probability of 0.9. To create drift, we alternate the probability of sampling from each of the subclasses between 0.1 and 0.9 for consecutive blocks.
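The following sketch illustrates this sampling scheme; the function name, the uniform choice between the two target classes within a block, and the seed are our assumptions for illustration.

```python
import numpy as np

def build_drifting_stream(indices_by_class, T=4000, num_blocks=5, seed=0):
    """indices_by_class: dict of image-index arrays for 'horse', 'sheep', 'giraffe', 'bobcat'."""
    rng = np.random.default_rng(seed)
    stream = []
    for b in range(num_blocks):
        p_minor = 0.1 if b % 2 == 0 else 0.9          # alternates between consecutive blocks
        for _ in range(T):
            label = rng.integers(2)                    # 0: {horse, sheep}, 1: {giraffe, bobcat}
            if label == 0:
                cls = "sheep" if rng.random() < p_minor else "horse"
            else:
                cls = "bobcat" if rng.random() < p_minor else "giraffe"
            stream.append((rng.choice(indices_by_class[cls]), label))
    return stream
```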
In Table <ref>, we report the average accuracy over three random executions for three different methods: majority vote, a window size with all the previous examples (All Past), and our adaptive algorithm (Adaptive).
Our algorithm outperforms the other two methods by 1.87 and 7.02 percentage points, respectively.
These results emphasize the importance of properly selecting the window size for achieving good performance.
In Figure <ref>, we visualize the accuracy of the adaptively selected window sizes and multiple fixed window sizes over time.
As expected, the accuracy of fixed window sizes changes over time.
For example, small window sizes achieve better accuracy shortly after a distribution shift occurs by limiting the number of out-of-distribution samples.
Conversely, large window sizes achieve better accuracy toward the end of each block by using more samples from the same distribution.
On the other hand, our algorithm successfully detects the drift and selects the best window size for each time step accordingly.
As a result, our algorithm maintains a close-to-optimal performance for most of the time steps.
These results emphasize that the optimal window size itself can change over time.
We report the window sizes selected by our algorithm at each time step in Figure <ref>.
Consistent with previous results on synthetic data, our algorithm successfully detects the drift and selects small window sizes to limit out-of-distribution samples.
At the same time, for stationary periods, our algorithm selects large window sizes to include more samples from the same distribution.
§ CONCLUSION
This paper presents the first method with rigorous guarantees for learning from multiple noisy sources of labels in non-stationary settings without any prior assumptions on the nature of the changes over time.
Instead of calculating a fixed strategy based on an assumed amount of drift (as in prior work <cit.>), our method adapts to drift as it occurs based on the observed votes of the labelers.
The major difference between our approach and all previous work is that we do not use prior information on the drift.
Instead, at each time step, our algorithm compares its estimation obtained using different window sizes, and uses this information to adjust the window size for the decision at that step.
Although the algorithm cannot explicitly quantify the drift, because it does not have any training data, we prove that the information observed is sufficient for efficiently adapting the window size to changes in the input distribution.
Our experimental evaluation shows that our algorithm can dynamically adapt to drift as it occurs, adjusting its window size for best performance over time.
As creating models with programmatic weak supervision becomes more common, our work offers practitioners a practical way to cope with drift in long-running and non-stationary deployments.
Limitations and Future Work
A critical assumption in our work—like many in programmatic weak supervision and crowdsourcing—is that the errors of the labelers are conditionally independent (Assumption <ref>).
In practice, this is often not a major limitation.
For example, in our experiments with real-world data, the labelers are not conditionally independent, yet our algorithm still performs well.
However, if the labelers' errors are strongly dependent, this can cause large errors in the estimates of their accuracies.
Other work has looked at learning without any assumptions on the joint distribution of the errors of the labelers and the true label <cit.>, but they all assume that the examples are i.i.d.
An important but highly challenging direction for future work is to learn from multiple noisy labelers with neither assumptions on the dependencies among labeler errors nor the i.i.d. assumption.
Also, the method presented here does not extend to multi-class classification. One can follow the heuristic in <cit.>, and execute multiple one-versus-all classifiers. However, this heuristic does not provide any formal analysis on the obtained result, and the outcome of different one-versus-all classifiers may not be consistent. Thus, a provable multi-class classification under drift is another interesting open problem.
§.§ Acknowledgements
We gratefully acknowledge support from Cisco. This material is based on research sponsored by Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) under agreement number FA8750-19-2-1006 and by the National Science Foundation (NSF) under award IIS-1813444. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) or the U.S. Government. Disclosure: Stephen Bach is an advisor to Snorkel AI, a company that provides software and services for data-centric artificial intelligence.
§ DEFERRED PROOFS
In this section, we report the proofs of the results stated in the main paper. We first outline the proof.
* We show that the error of estimating the correlation matrix at time t by using the previous r samples can be decomposed into the sum of two error components: a statistical error and a drift error (<Ref>).
* In order to bound the statistical error, we use standard concentration inequalities to show that the estimate of the correlation matrix obtained by using the r previous samples is close to its expected value with error O(1/√(r)) with high probability (<Ref>).
* In order to upper bound the drift error, we show an inequality that relates the drift in correlation matrices over time with the drift of the accuracies of the weak labelers (<Ref>).
* We use the previous results to show the trade-off between statistical error and drift error depicted in <Ref> (<Ref>).
* We prove <Ref>: we show how to dynamically select the window size in order to optimize the above trade-off (<Ref>).
§.§ Error Decomposition
We define the average correlation matrix over the previous r samples as
C^[r](t) = 1/r∑_k=t-r+1^t C(k) .
We show the following error decomposition in the upper bound of the error of estimating the matrix C(t) by using the empirical matrix Ĉ^[r](t) induced by the previous r samples.
For any 1 ≤ r ≤ t, we have that
‖C(t) - Ĉ^[r](t)‖_∞≤‖C^[r](t) - Ĉ^[r](t)‖_∞_statistical error + ∑_i=t-r+1^t-1‖C(i) - C(i+1)‖_∞_drift error .
We use the triangle inequality, and obtain that:
‖C(t) - Ĉ^[r](t)‖_∞ = ‖C(t) - C^[r](t)+ C^[r](t) - Ĉ^[r](t)‖_∞
≤‖C^[r](t) - Ĉ^[r](t)‖_∞+ ‖C(t) - C^[r](t)‖_∞ .
Again, by using the triangle inequality, we obtain the following chain of inequalities
‖C(t) - C^[r](t)‖_∞≤1/r∑_i=t-r+1^t ‖C(t) - C(i)‖_∞ ≤sup_t-r+1 ≤ i ≤ t‖C(t) - C(i)‖_∞
≤∑_i=t-r+1^t-1‖C(i+1) - C(i) ‖_∞ .
By plugging the above inequality into (<ref>), we obtain the statement.
Observe that by definition, we have the following relation 𝔼[Ĉ^[r](t)] = C^[r](t). The statistical error term describes how much the empirical estimate deviates from its expectation, i.e., it is equal to
‖Ĉ^[r](t) - 𝔼[Ĉ^[r](t)]‖_∞ .
This error is related to the variance of Ĉ^[r], and we will use a concentration inequality to provide an upper bound to this term (<Ref>).
The drift error term describes the estimation error due to a change in the accuracy of the weak labelers, and indeed it is equal to 0 if no change occurs. In <Ref>, we will show how to analytically relate this error to the drift of the weak labelers' accuracies.
§.§ Upper Bound to the Statistical Error
In this subsection, our main goal is to provide an upper bound to the statistical error term
‖Ĉ^[r](t) - 𝔼[Ĉ^[r](t)]‖_∞ = ‖C^[r](t) - Ĉ^[r](t)‖_∞
by using a concentration inequality. The following result immediately follows by using McDiarmid's inequality.
Consider a pair of indexes (i,j) ∈{1,…,n}^2. Let δ > 0. With probability at least 1-δ, it holds
| C_ij^[r](t) - Ĉ_ij^[r](t)| ≤√(2ln(2/δ)/r) .
Let f(X_t-r+1, …, X_t) = Ĉ^[r]_ij(t). By definition of C(·), it is easy to verify that
𝔼[f(X_t-r+1, …, X_t)] = 1/r∑_k=t-r+1^t C_ij(k) = C_ij^[r](t) .
Since each change of a single variable can change the value of f by at most 2/r, we can use McDiarmid's inequality, and obtain that with probability at least 1-δ, it holds that
| f - 𝔼[f] | = | C^[r]_ij(t) - Ĉ_ij^[r](t)| ≤√(2ln(2/δ)/r) .
An upper bound to the statistical error term (<ref>) immediately follows by using the above proposition and taking a union bound over all possible indexes i,j. Since the matrices are symmetric, and the diagonal is always equal to 1, it is sufficient to take a union bound over only n(n-1)/2 choices of those indexes. Thus, with probability at least 1-δ, it holds that
‖C^[r](t) - Ĉ^[r](t)‖_∞≤√(2ln(n(n-1)/δ)/r) .
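The bound above can be checked numerically in the i.i.d. case, where the average matrix C^[r](t) coincides with C(t); the following sketch estimates the violation frequency by simulation (the specific accuracies, window size, and seed are arbitrary illustrative choices).

```python
import numpy as np

def check_statistical_bound(p=(0.9, 0.8, 0.7, 0.65), r=500, delta=0.1, trials=2000, seed=0):
    """Monte-Carlo check that ||C - C_hat||_inf exceeds sqrt(2 ln(n(n-1)/delta)/r)
    with frequency at most delta when the samples are identically distributed."""
    rng = np.random.default_rng(seed)
    p = np.array(p)
    n = len(p)
    a = 2 * p - 1
    C_true = np.outer(a, a)
    np.fill_diagonal(C_true, 1.0)
    bound = np.sqrt(2 * np.log(n * (n - 1) / delta) / r)
    violations = 0
    for _ in range(trials):
        y = rng.choice([-1, 1], size=r)
        correct = rng.random((r, n)) < p
        votes = np.where(correct, y[:, None], -y[:, None])
        C_hat = votes.T @ votes / r
        violations += np.max(np.abs(C_hat - C_true)) > bound
    return violations / trials   # empirically well below delta
```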
For our algorithm, we will also need to show an upper bound to the error of estimating the difference between two correlation matrices with different window sizes (see the intuition in (<ref>)). Since this result follows by an argument similar to that of <Ref>, we report it here.
Consider a pair of indexes (i,j) ∈{1,…,n}^2. Let δ > 0, and let r,r' be two integers such that 1 ≤ r < r' ≤ t. With probability at least 1-δ, it holds
| Ĉ_ij^[r](t) - Ĉ_ij^[r'](t) - C_ij^[r] + C_ij^[r']| ≤√(2ln(2/δ)(1-r/r')/r) .
Let f(X_t-r'+1, …, X_t) = Ĉ_ij^[r](t) - Ĉ_ij^[r'](t), and observe that
|f - 𝔼[f]| =| C_ij^[r](t) - C_ij^[r'](t) - Ĉ_ij^[r](t) + Ĉ_ij^[r'](t)| .
The function f is equivalent to
f(X_t-r'+1, …, X_t) = Ĉ_ij^[r](t) - Ĉ_ij^[r'](t)
= ∑_u=t-r+1^t( 1/r - 1/r') ℓ_i(X_u)ℓ_j(X_u) - ∑_u=t-r'+1^t-r1/r'ℓ_i(X_u)ℓ_j(X_u) .
Thus, if we change the variable X_u with t-r+1 ≤ u ≤ t, the value of f can change by at most 2( 1/r - 1/r'), and if we change the variable X_u with t-r'+1 ≤ u ≤ t-r, the value of f can change by at most 2/r'. We can use McDiarmid's inequality, and obtain that with probability at least 1-δ, it holds that:
| C_ij^[r](t) - C_ij^[r'](t) - Ĉ_ij^[r](t) + Ĉ_ij^[r'](t) |
≤√(ln(2/δ)/2)√(∑_u=t-r+1^t4( 1/r - 1/r')^2 + ∑_u=t-r'+1^t-r4/r'^2)
= √(2ln(2/δ))√(r( 1/r - 1/r')^2 + (r'-r) 1/r'^2)
=√(2ln(2/δ))√((r'-r)^2 + r(r'-r)/rr'^2)
= √(2ln(2/δ))√(r'(r'-r)/r r'^2)
= √( 2ln(2/δ)(1-r/r')/r)
We end this subsection by providing a result that we quote during the explanation of the algorithm. While this result is not necessary to prove <Ref>, it has a similar flavour to the previous proposition, and we report its proof here.
Let 1 ≤ r ≤ r' ≤ t. If D_1 = … = D_t, then for any pair of indexes i,j ∈{1,…,n}, it holds
| Ĉ_ij^[r](t) - Ĉ_ij^[r'](t)| ≤√(1/r - 1/r') .
For i=j the statement is trivially true as the difference is 0. Let i ≠ j. Consider the random variables Z_k = ℓ_i(X_t-k+1)ℓ_j(X_t-k+1) for 1 ≤ k ≤ r'. By assumption, the random variables are independent, and Z_k ∈ [-1,1]. By using the definition of Ĉ^[r], we have:
| Ĉ_ij^[r](t) - Ĉ_ij^[r'](t)| = | 1/r∑_k=1^r Z_k - 1/r'∑_k=1^r' Z_k |
≤√(𝕍(1/r∑_k=1^r Z_k - 1/r'∑_k=1^r' Z_k)) ,
where in the last step we used Jensen's inequality. Now, we have that:
𝕍(1/r∑_k=1^r Z_k - 1/r'∑_k=1^r' Z_k) = 𝕍((1/r - 1/r')∑_k=1^r Z_k - 1/r'∑_k=r+1^r' Z_k)
= [( 1/r - 1/r')^2r + 1/r'^2(r'-r) ] 𝕍(Z_1)
= r' - r/rr'𝕍(Z_1) .
Since Z_1 ∈ [-1,1], by Popoviciu's inequality we have that 𝕍(Z_1) ≤ 1. Hence, we can conclude that
| Ĉ_ij^[r](t) - Ĉ_ij^[r'](t)| ≤√(1/r - 1/r') .
§.§ Upper Bound to the Drift Error
In this subsection, we show how to provide an upper bound to the drift error term
∑_i=t-r+1^t-1‖C(i) - C(i+1)‖_∞
as a function of the variation in the weak labelers' accuracies p(t-r+1), …, p(t). Intuitively, the correlation matrix does not change if the weak labelers' accuracies are the same, and a bounded drift in those accuracies also implies a small variation in the correlation matrix. This is formalized in the following proposition.
For any 1 ≤ k ≤ t-1, the following inequality holds
‖C(k) - C(k+1)‖_∞≤ 12 ‖p(k) - p(k+1)‖_∞
Consider coordinates i,j such that i ≠ j.
By definition of C_i,j(k), we have that
C_i,j(k) = 𝔼_X ∼ D_k[ ℓ_i(X) ·ℓ_j(X) ] .
We have that ℓ_i(X) ·ℓ_j(X) is equal to 1 if and only if ℓ_i and ℓ_j are both either correct or incorrect, and is equal to -1 otherwise. By using the definition of p_i(k) and <Ref>, we have that
C_i,j(k)= 𝔼_X ∼ D_k[ ℓ_i(X) ·ℓ_j(X) ] = p_i(k) p_j(k) + (1-p_i(k))(1-p_j(k))
- p_i(k)(1-p_j(k)) - p_j(k)(1-p_i(k))
= 4p_i(k)p_j(k) - 2p_i(k) - 2p_j(k) + 1 .
Hence, we have that
| C_i,j(k) - C_i,j(k+1)| = | 4p_i(k)p_j(k) - 4p_i(k+1)p_j(k+1)
+ 2p_i(k+1) + 2p_j(k+1)- 2p_i(k) - 2p_j(k)|
≤ 4| p_i(k)p_j(k) - p_i(k+1)p_j(k+1) |
+2| p_i(k+1) - p_i(k) | + 2| p_j(k+1) - p_j(k) | .
where the first inequality follows from the triangle inequality. For ease of notation, let ρ_k = ‖p(k+1) - p(k)‖_∞. We have that
p_i(k)p_j(k) = p_i(k)(p_j(k) + p_j(k+1) - p_j(k+1))
≤ p_i(k)p_j(k+1) + p_i(k)|p_j(k) - p_j(k+1)|
≤ p_i(k)p_j(k+1) + ρ_k
= (p_i(k)-p_i(k+1)+p_i(k+1))p_j(k+1) + ρ_k
≤ p_i(k+1)p_j(k+1) + 2 ρ_k .
which implies that p_i(k)p_j(k)-p_i(k+1)p_j(k+1) ≤ 2ρ_k. Similarly, we can show that p_i(k+1)p_j(k+1)-p_i(k)p_j(k) ≤ 2ρ_k, hence we have that |p_i(k+1)p_j(k+1)-p_i(k)p_j(k)| ≤ 2ρ_k.
By using this inequality in (<ref>), we obtain | C_i,j(k) - C_i,j(k+1)|≤ 12 ρ_k. The statement follows by substituting the definition of ρ_k.
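The proposition can also be verified numerically using the identity C_i,j(k) = (2p_i(k)-1)(2p_j(k)-1) established in the computation above; the following sketch does so on random accuracy vectors (the constant 12 is not tight: for this parametrization the ratio never exceeds 4). The function name and parameters are illustrative.

```python
import numpy as np

def check_drift_bound(n=5, trials=10000, seed=0):
    """Check ||C(k) - C(k+1)||_inf <= 12 ||p(k) - p(k+1)||_inf on random accuracies."""
    rng = np.random.default_rng(seed)

    def corr(p):
        a = 2 * p - 1
        C = np.outer(a, a)
        np.fill_diagonal(C, 1.0)
        return C

    worst_ratio = 0.0
    for _ in range(trials):
        p1, p2 = rng.random(n), rng.random(n)
        lhs = np.max(np.abs(corr(p1) - corr(p2)))
        rhs = np.max(np.abs(p1 - p2))
        if rhs > 0:
            worst_ratio = max(worst_ratio, lhs / rhs)
    return worst_ratio   # stays below 4, hence below the constant 12 in the proposition
```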
§.§ Proof of <Ref>
With <Ref>, we have that:
‖C(t) - Ĉ^[r](t)‖_∞≤‖C^[r](t) - Ĉ^[r](t)‖_∞ + ∑_i=t-r+1^t-1‖C(i) - C(i+1)‖_∞ .
In order to conclude the result, we upper bound each term of the right-hand side of the above inequality individually. We use <Ref> and take a union bound over n(n-1)/2 coordinates (see also (<ref>)), and we obtain that with probability at least 1-δ, it holds
‖C^[r](t) - Ĉ^[r](t)‖_∞≤√(2ln(n(n-1)/δ)/r)
<Ref> yields the following upper bound:
∑_i=t-r+1^t-1‖C(i) - C(i+1)‖_∞≤ 12∑_i=t-r+1^t-1‖p(i) - p(i+1)‖_∞
§.§ Dynamic Selection of the Window Size (<Ref>)
In this subsection, we show how to adaptively choose the number of past samples that minimizes a trade-off between the statistical error and the drift error.
As a prerequisite, our algorithm requires that the empirical quantities it uses provide a good approximation of their expectations. If this is not the case, we simply assume that our algorithm fails, and this happens with probability ≤δ. The next corollary formalizes this required guarantee on the estimation.
We recall the definition of the value A_δ,n,m from the statement of <Ref>:
A_δ,n,m = √(2 ln[(2m-1)· n(n-1)/δ])
Let δ > 0 and let ℛ = {r_1,…,r_m}.
With probability at least 1-δ, it holds:
‖C(t) - Ĉ^[r_k](t) ‖_∞≤A_δ,n,m/√(r_k) + ‖C(t) - C^[r_k](t)‖_∞ ∀ k ≤ m
‖C^[r_k](t) - C^[r_k+1](t) - Ĉ^[r_k](t) + Ĉ^[r_k+1](t) ‖_∞≤ A_δ,n,m√(1-r_k/r_k+1/r_k) ∀ k ≤ m-1
By using the triangle inequality, we have that
‖C(t) - Ĉ^[r_k](t) ‖_∞≤‖C^[r_k](t) - Ĉ^[r_k](t)‖_∞ + ‖C(t) - C^[r_k](t)‖_∞
We use <Ref> and <Ref> to upper bound respectively ‖C(t) - Ĉ^[r_k](t) ‖_∞ and ‖C^[r_k](t) - C^[r_k+1](t) - Ĉ^[r_k](t) + Ĉ^[r_k+1](t) ‖_∞. Those propositions provide a guarantee for a single choice of window sizes and coordinates: we take a union bound over n(n-1)/2 choices of coordinates and m+(m-1) different choices of window sizes, hence we take a union bound over (2m-1)n(n-1)/2 events. The statement immediately follows by an inspection of the value A_δ,n,m.
Throughout this subsection, we assume that the event of <Ref> holds, otherwise our algorithm fails (with probability ≤δ). Let β, γ_m and γ_M be defined as in <Ref>.
We define the following function:
ℬ(r) = A_δ,n,m/√(r)2β+2/1-γ_M + ‖C(t) - C^[r](t) ‖_∞
The value ℬ(r) is the upper bound that our algorithm guarantees on ‖ C(t) - Ĉ^[r](t) ‖_∞ for any value r ∈ℛ. In fact, we have that
ℬ(r) = A_δ,n,m/√(r)2β+2/1-γ_M + ‖C(t) - C^[r](t) ‖_∞
≥A_δ,n,m/√(r) + ‖C(t) - C^[r](t) ‖_∞
≥‖C(t) - Ĉ^[r](t) ‖_∞∀ r ∈ℛ ,
where the last inequality follows from <Ref>. For any value k ≤ m-1, also let
𝒯(k) ≐ 2 β A_δ,n,m√(1/r_k) + A_δ,n,m√(1- r_k/r_k+1/r_k) ,
and observe that this is the quantity used as a threshold in Line 4 of the algorithm at iteration k.
The proof of <Ref> revolves around the following two Propositions <ref> and <ref>.
* We guarantee that if ‖Ĉ^[r_k+1](t) - Ĉ^[r_k](t) ‖_∞ is smaller than the threshold 𝒯(k), then a negligible drift occurred, and the upper bound ℬ(r_k+1) is no larger than ℬ(r_k) (<Ref>). In this case, we can keep iterating.
* On the other hand, if ‖Ĉ^[r_k+1](t) - Ĉ^[r_k](t) ‖_∞ is greater than the threshold 𝒯(k), a sizeable drift occurred, and we can provide a lower bound on the drift error (<Ref>). In this case, we can stop iterating and return the current window size r_k.
We prove those two propositions.
Let the event of <Ref> hold. Then, for any 1 ≤ k ≤ m-1
‖Ĉ^[r_k] - Ĉ^[r_k+1]‖_∞≤𝒯(k) ⟹ℬ(r_k+1) ≤ℬ(r_k)
We have that
ℬ(r_k+1) - ℬ(r_k) = A_δ,n,m2β+2/1-γ_M[ √(1/r_k+1) - √(1/r_k)] + ‖C(t) - C^[r_k+1](t) ‖_∞ - ‖C(t) - C^[r_k](t) ‖_∞
We can obtain the following upper bound:
‖C(t) - C^[r_k+1](t) ‖_∞ - ‖C(t) - C^[r_k](t) ‖_∞
≤ ‖C^[r_k+1](t) - C^[r_k](t) ‖_∞
= ‖Ĉ^[r_k](t) - Ĉ^[r_k+1](t) - Ĉ^[r_k](t) + Ĉ^[r_k+1](t) + C^[r_k](t) - C^[r_k+1](t)‖_∞
≤ ‖Ĉ^[r_k](t) - Ĉ^[r_k+1](t) ‖_∞ + ‖ - Ĉ^[r_k](t) + Ĉ^[r_k+1](t) + C^[r_k](t) - C^[r_k+1](t)‖_∞
≤ 𝒯(k) + A_δ,n,m√(1-r_k/r_k+1/r_k)
where the first two inequalities are due to the triangle inequality, and the last inequality is due to the assumption of the proposition statement and <Ref>. By plugging the above inequality in (<ref>) and using the definition of 𝒯(k), we obtain
ℬ(r_k+1) - ℬ(r_k) ≤A_δ,n,m/√(r_k)[2β+2/1-γ_M√(r_k/r_k+1) - 2β+2/1-γ_M + 2β +2√(1-r_k/r_k+1)]
We have that
[2β+2/1-γ_M√(r_k/r_k+1) - 2β+2/1-γ_M + 2β +2√(1-r_k/r_k+1)]
≤ [ 2β+2/1-γ_Mγ_M - 2β+2/1-γ_M + 2β +2 ]
= 0
By using this inequality in (<ref>), we finally obtain that ℬ(r_k+1) - ℬ(r_k) ≤ 0.
This proposition guarantees that every time the If of Line 4 of the algorithm is true, then ℬ(r_k+1) is an upper bound at least as good as ℬ(r_k). Conversely, the next proposition shows that when the If of Line 4 is false, a drift must have occurred.
Let the event of <Ref> hold. Then, for any 1 ≤ k ≤ m-1
‖Ĉ^[r_k] - Ĉ^[r_k+1]‖_∞ >𝒯(k) ⟹∑_u=t-r_k+1+1^t-1‖C(u) - C(u+1) ‖_∞ > A_δ,n,m·β/√(r_k)
We have that:
‖Ĉ^[r_k] - Ĉ^[r_k+1]‖_∞ ≤ A_δ,n,m√(1-r_k/r_k+1/r_k) + ‖C^[r_k](t) - C^[r_k+1](t) ‖_∞
≤ A_δ,n,m√(1-r_k/r_k+1/r_k) + ‖ C(t) - C^[r_k+1](t) ‖_∞ + ‖ C^[r_k](t) - C(t) ‖_∞
where the first inequality is due to <Ref>, and the second inequality is due to the triangle inequality. By using (<ref>), we can show that
‖C(t) - C^[r_k+1](t) ‖_∞ + ‖C^[r_k](t) - C(t) ‖_∞≤ 2∑_u=t-r_k+1+1^t-1‖C(u) - C(u+1) ‖_∞ .
Hence, by using the assumption of the proposition and the definition of 𝒯(k), we obtain the following inequality:
2∑_u=t-r_k+1+1^t-1‖C(u) - C(u+1) ‖_∞ > 2 β A_δ,n,m/√(r_k) ,
and the statement immediately follows.
In the following lemma, we use Propositions <ref> and <ref> to show that the matrix Ĉ of Line 10 of the algorithm provides a good approximation of C(t). Theorem <ref> immediately follows from this result by using <Ref>.
Consider the setting of <Ref>. Let Ĉ be the matrix defined at Line 10 of the algorithm. With probability at least 1-δ, it holds that
‖Ĉ - C(t)‖_∞≤Φ_ℛ,βmin_r ∈ [r_1,min(t,r_m)]( A_δ,n,m/√(r) + ∑_u=t-r+1^t-1‖ C(u) - C(u+1)‖_∞)
Assume that the event of <Ref> holds (otherwise we say that our algorithm fails, which happens with probability ≤δ). For ease of notation, let ν = (2β+2)/(1-γ_M). Let k̂≤ m be the value such that Ĉ = Ĉ^[r_k̂](t). Recall that the algorithm guarantees an upper bound ℬ(r_k̂) on the estimation error ‖Ĉ - C(t)‖_∞. Let r^* be the integer that minimizes
r^* = argmin_r ∈ [r_1, min(t,r_m)]( A_δ,n,m/√(r) + ∑_u=t-r+1^t-1‖ C(u) - C(u+1)‖_∞),
and let ℬ^* be the minimum value of the above expression, i.e.
ℬ^* = A_δ,n,m/√(r^*) + ∑_u=t-r^*+1^t-1‖ C(u) - C(u+1)‖_∞
In order to prove the lemma, it is sufficient to show that ℬ(r_k̂)/ℬ^* ≤Φ_ℛ,β.
We distinguish two cases: (a) k̂ = m or r^* < r_k̂+1 and (b) r^* ≥ r_k̂+1.
Consider case (a). Let k̃ be the largest integer such that r_k̃≤ r^*. By construction, we can observe that k̃≤k̂. Since the algorithm did not stop during iterations 1,…,k̃, …, k̂, we can use <Ref> to show that ℬ(r_k̂) ≤ℬ(r_k̃). Hence, we have that:
ℬ(r_k̂)/ℬ^*≤ℬ(r_k̃)/ℬ^* = ν· A_δ,n,m/√(r_k̃) + ‖C(t) - C^[r_k̃](t)‖_∞/A_δ,n,m/√(r^*) + ∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞
≤ν· A_δ,n,m/√(r_k̃) + ∑_u=t-r_k̃+1^t-1‖C(u) - C(u+1)‖_∞/A_δ,n,m/√(r^*) + ∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞
≤ν√(r^*/r_k̃) + ∑_u=t-r_k̃+1^t-1‖C(u) - C(u+1)‖_∞/∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞
≤ν/γ_m + 1
where the second inequality is due to (<ref>), and the last inequality is due to the definition of r_k̃. We can observe that 1+ν/γ_m = 1 + 2β+2/γ_m(1-γ_M)≤Φ_ℛ,β and this concludes the first part of the proof.
We consider case (b). Since the algorithm stopped at iteration k̂ < m, the If condition of Line 4 is false during this iteration, and due to <Ref>, we have that:
∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞≥∑_u=t-r_k̂+1+1^t-1‖C(u) - C(u+1)‖_∞≥ A_δ,n,mβ/√(r_k̂) .
We obtain:
ℬ(r_k̂)/ℬ^* =ν· A_δ,n,m/√(r_k̂) + ‖C(t) - C^[r_k̂](t) ‖_∞/A_δ,n,m/√(r^*) + ∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞
≤ν· A_δ,n,m/√(r_k̂) + ∑_u=t-r_k̂+1^t-1‖C(u) - C(u+1)‖_∞/A_δ,n,m/√(r^*) + ∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞
≤ν· A_δ,n,m/√(r_k̂)/∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞ + ∑_u=t-r_k̂+1^t-1‖C(u) - C(u+1)‖_∞/∑_u=t-r^*+1^t-1‖C(u) - C(u+1)‖_∞
≤ν/β+1 .
where in the last inequality we used (<ref>) and the fact that r_k̂≤ r^*. We finally observe that ν/β+1= 1+2β+2/(1-γ_M)β≤Φ_ℛ,β, and this concludes the proof.
We can finally prove <Ref> as a simple corollary of <Ref>.
Let ϵ be the guarantee of <Ref>. If we let Ĉ be the matrix of Line 10 of the algorithm, we have that with probability at least 1-δ, it holds:
‖Ĉ - C(t)‖_∞≤ϵ
Lines 11-14 of the algorithm implement the procedure of <cit.> that attains the guarantee of <Ref>. Hence, we have that:
‖p̂ - p(t)‖_∞ ≤5ϵ/2τ^2
≤5Φ_ℛ,β/2τ^2min_r ∈ [r_1,min(t,r_m)]( A_δ,n,m/√(r) + ∑_u=t-r+1^t-1‖C(u) - C(u+1)‖_∞) .
The statement immediately follows by using <Ref>.
§ ADDITIONAL EXPERIMENTS
In Figure <ref> and Table <ref>, we plot the average accuracy of the dynamically selected window sizes and multiple fixed window sizes for the AwA2 dataset.
Using a proper window size is crucial for achieving good performance.
Using too large or too small window sizes decreases the accuracy by up to 7 percentage points.
On the other hand, our algorithm adapts to each time step and selects a close-to-optimal window size without any prior knowledge about the drift.
As a result, adaptive window sizes achieve better or comparable accuracy to any fixed strategy.
As discussed in Section <ref>, although some fixed strategies have a competitive accuracy on average, their accuracy changes significantly for different time steps.
|
http://arxiv.org/abs/2307.04816v1
|
20230701035032
|
Q-YOLO: Efficient Inference for Real-time Object Detection
|
[
"Mingze Wang",
"Huixin Sun",
"Jun Shi",
"Xuhui Liu",
"Baochang Zhang",
"Xianbin Cao"
] |
cs.CV
|
[
"cs.CV"
] |
Beihang University, Beijing, China
{wmz20000729,sunhuixin,ShiJun2020,1332671326,bczhang,xbcao}@buaa.edu.cn
Q-YOLO: Efficient Inference for Real-time Object Detection † Equal contribution. ⋆ Corresponding author.
Mingze Wang ^† Huixin Sun ^†Jun Shi ^† Xuhui Liu Baochang Zhang Xianbin Cao^⋆
========================================================================================================
Real-time object detection plays a vital role in various computer vision applications. However, deploying real-time object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper describes a low-bit quantization method to build a highly efficient one-stage detector, dubbed as Q-YOLO, which can effectively address the performance degradation problem caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO introduces a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme, which determines the maximum truncation values through histogram analysis by minimizing the Mean Squared Error (MSE) quantization errors. Extensive experiments on the COCO dataset demonstrate the effectiveness of Q-YOLO, outperforming other PTQ methods while achieving a more favorable balance between accuracy and computational cost. This research contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead.
§ INTRODUCTION
Real-time object detection is a crucial component in various computer vision applications, such as multi-object tracking <cit.>, autonomous driving <cit.>, and robotics <cit.>. The development of real-time object detectors, particularly YOLO-based detectors, has yielded remarkable performance in terms of accuracy and speed. For example, YOLOv7-E6 <cit.> object detector achieves 55.9% mAP on COCO 2017, outperforming both transformer-based detector SWINL Cascade-Mask R-CNN <cit.> and convolutional based detector ConvNeXt-XL Cascade-Mask R-CNN <cit.> in both speed and accuracy. Despite their success, the computational cost during inference remains a challenge for real-time object detectors on resource-limited edge devices, such as mobile CPUs or GPUs, limiting their practical usage.
Substantial efforts on network compression have been made towards efficient online inference <cit.>. Methods include enhancing network designs <cit.>, conducting network search <cit.>, network pruning <cit.>, and network quantization <cit.>. Quantization, in particular, has gained significant popularity for deployment on AI chips by representing a network using low-bit formats. There are two prevailing quantization methods, Quantization-Aware Training (QAT) <cit.> and Post-Training Quantization (PTQ) <cit.>. Although QAT generally achieves better results than PTQ, it requires training and optimization of all model parameters during the quantization process. The need for pretraining data and significant GPU resources makes QAT challenging to execute. On the other hand, PTQ is a more efficient approach for quantizing real-time object detectors.
To examine low-bit quantization for real-time object detection, we first establish a PTQ baseline using YOLOv5 <cit.>, a state-of-the-art object detector. Through empirical analysis on the COCO 2017 dataset, we observe notable performance degradation after quantization, as indicated in Table <ref>. For example, a 4-bit quantized YOLOv5s employing Percentile achieves only 7.0% mAP, resulting in a performance gap of 30.4% compared to the original real-valued model. We find that the performance drop of quantized YOLOs can be attributed to activation distribution imbalance. As shown in Fig. <ref>, we observe a high concentration of values close to the lower bound and a significant decrease in occurrences above zero. When employing fixed truncation values such as MinMax, representing activation values with extremely low probabilities would consume a considerable number of bits within the limited integer bit width, resulting in further loss of information.
In light of the above issue, we introduce Q-YOLO, a fully end-to-end PTQ quantization architecture for real-time object detection, as depicted in Fig. <ref>. Q-YOLO quantizes the backbone, neck, and head modules of YOLO models, while employing standard MinMax quantization for weights. To tackle the problem of activation distribution imbalance, we introduce a novel approach called Unilateral Histogram-based (UH) activation quantization. UH iteratively determines the maximum truncation value that minimizes the quantization error through histograms. This technique significantly reduces calibration time and effectively addresses the discrepancy caused by quantization, optimizing the quantization process to maintain stable activation quantization. By mitigating information loss in activation quantization, our method ensures accurate object detection results, thereby enabling precise and reliable low-bit real-time object detection performance. Our contributions can be summarized as follows:
* We introduce a fully end-to-end PTQ quantization architecture specifically designed for real-time object detection, dubbed as Q-YOLO.
* A Unilateral Histogram-based (UH) activation quantization method is proposed to leverage histogram analysis to find the maximum truncation values, which can effectively minimize the MSE quantization error.
* Through extensive experiments on various object detectors, we demonstrate that Q-YOLO outperforms baseline PTQ models by a significant margin. The 8-bit Q-YOLO model applied on YOLOv7 achieves a 3× acceleration while maintaining performance comparable to its full-precision counterpart on COCO, highlighting its potential as a general solution for quantizing real-time object detectors.
§ RELATED WORK
§.§ Quantization
Quantized neural networks are based on low-bit weights and activations to accelerate model inference and save memory. The commonly used model quantization methods include quantization-aware training (QAT) and post-training quantization (PTQ). In QAT, Zhang et al. <cit.> build a binarized convolutional neural network based on a projection function and a new update rule during backpropagation. Li et al. <cit.> proposed an information rectification module and distribution-guided distillation to push the bit-width in a quantized vision transformer. TTQ <cit.> uses two real-valued scaling coefficients to quantize the weights to ternary values. Zhuang et al. <cit.> present a low-bit (2-4 bit) quantization scheme using a two-stage approach to alternately quantize the weights and activations, providing an optimal trade-off among memory, efficiency, and performance. In <cit.>, the quantization intervals are parameterized, and optimal values are obtained by directly minimizing the task loss of the network. ZeroQ <cit.> supports uniform and mixed-precision quantization by optimizing for a distilled dataset which is engineered to match the statistics of the batch normalization across different network layers. <cit.> enabled accurate approximation for tensor values that have bell-shaped distributions with long tails and found the entire range by minimizing the quantization error. However, QAT often requires high-level expert knowledge and huge GPU resources for training or fine-tuning, especially for large-scale pre-trained models. To reduce these costs of quantization, PTQ, which is training-free, has received more widespread attention and many excellent works have arisen. MinMax and EMA <cit.> methods are commonly used to compress or reduce the weights of the PTQ model. MinMax normalizes the weights and bias values in the model to a predefined range, such as [-1, 1], to reduce the storage space and increase the inference speed. MSE quantization involves evaluating and adjusting the quantized activation values to minimize the impact of quantization on model performance.
§.§ Real-time Object Detection
Deep Learning based object detectors can be generally classified into two categories: two-stage and single-stage object detectors. Two-stage detectors, such as Faster R-CNN <cit.>, RPN <cit.>, and Cascade R-CNN <cit.>, first generate region proposals and then refine them in a second stage. On the other hand, single-stage object detectors have gained significant popularity in real-time object detection due to their efficiency and effectiveness. These detectors aim to predict object bounding boxes and class labels in a single pass of the neural network, eliminating the need for time-consuming region proposal generation. One of the pioneering single-shot detectors is YOLO <cit.>, which divides the input image into a grid and assigns bounding boxes and class probabilities to predefined anchor boxes. The subsequent versions, YOLOv2 <cit.> and YOLOv3 <cit.>, introduced improvements in terms of network architecture and feature extraction, achieving better accuracy without compromising real-time performance. Another influential single-shot detector is SSD <cit.>, which employs a series of convolutional layers at different scales to detect objects of various sizes. By using feature maps at multiple resolutions, SSD achieves high accuracy while maintaining real-time performance. Variants of SSD, such as MobileNet-SSD <cit.> and Pelee <cit.>, further optimize the architecture to achieve faster inference on resource-constrained devices.
Efficiency is a critical aspect of real-time object detection, especially for deployment on computationally limited platforms. MobileNet<cit.> and its subsequent variants, such as MobileNetV2<cit.> and MobileNetV3 <cit.>, have received significant attention for their lightweight architectures. These networks utilize depth-wise separable convolutions and other techniques to reduce the number of parameters and operations without significant accuracy degradation. ShuffleNet<cit.> introduces channel shuffling operations to exploit group convolutions, enabling a trade-off between model size and computational cost. ShuffleNetV2<cit.> further improves the efficiency by introducing a more efficient block design and exploring different network scales.
§ METHODOLOGY
§.§ Preliminaries
§.§.§ Network Quantization Process.
We first review the main steps of the Post-Training Quantization (PTQ) process and supply the details. Firstly, the network is either trained or provided as a pre-trained model using full precision and floating-point arithmetic for weights and activations. Subsequently, numerical representations of weights and activations are suitably transformed for quantization. Finally, the fully-quantized network is deployed either on integer arithmetic hardware or simulated on GPUs, enabling efficient inference with reduced memory storage and computational requirements while maintaining reasonable accuracy levels.
§.§.§ Uniform Quantization.
Assuming the quantization bit-width is b, the quantizer Q(𝐱|b) can be formulated as a function that maps a floating-point number 𝐱∈ℝ to the nearest quantization bin:
Q(𝐱|b): ℝ→𝐱̂,   where
𝐱̂∈{-2^b-1,⋯ ,2^b-1-1}  (Signed),   𝐱̂∈{0,⋯ ,2^b-1}  (Unsigned).
There are various quantizer Q(𝐱|b), where uniform <cit.> are typically used. Uniform quantization is well supported on most hardware platforms. Its unsigned quantizer Q(𝐱|b) can be defined as:
Q(𝐱|b)=clip(⌊𝐱/s_𝐱⌉+zp_𝐱, 0, 2^b-1),
where s_𝐱 (scale) and zp_𝐱 (zero-point) are quantization parameters. In Eq. <ref>, u (upper) and l (lower) define the quantization grid limits.
s_𝐱= u-l/2^b-1,zp_𝐱=clip(⌊-l/s⌉, 0, 2^b-1).
The dequantization process can be formulated as:
𝐱=(𝐱̂-zp_𝐱) × s_𝐱.
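As an illustration, the following numpy sketch implements the unsigned uniform quantizer, the scale/zero-point computation, and the dequantization step described above; the function names are ours, and np.round stands in for the round-to-nearest operator ⌊·⌉ (tie-breaking may differ on actual hardware).

```python
import numpy as np

def qparams(lower, upper, bits=8):
    # scale and zero-point from the clipping thresholds l and u
    scale = (upper - lower) / (2 ** bits - 1)
    zero_point = np.clip(np.round(-lower / scale), 0, 2 ** bits - 1)
    return scale, zero_point

def quantize(x, scale, zero_point, bits=8):
    # unsigned uniform quantizer: clip(round(x / s) + zp, 0, 2^b - 1)
    return np.clip(np.round(x / scale) + zero_point, 0, 2 ** bits - 1)

def dequantize(q, scale, zero_point):
    # dequantization: x_hat = (q - zp) * s
    return (q - zero_point) * scale
```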
§.§ Quantization Range Setting
Quantization range setting is the process of establishing the upper and lower clipping thresholds, denoted as u and l respectively, of the quantization grid. The crucial trade-off in range setting lies in the balance between two types of errors: clipping error and rounding error. Clipping error arises when data is truncated to fit within the predefined grid limits, as described in Eq.<ref>. Such truncation leads to information loss and a decrease in precision in the resulting quantized representation. On the other hand, rounding error occurs due to the imprecision introduced during the rounding operation, as described in Eq.<ref>. This error can accumulate over time and have an impact on the overall accuracy of the quantized representation. The following methods provide different trade-offs between the two quantities.
§.§.§ MinMax.
In the experiments, we use the MinMax method for weight quantization, where clipping thresholds l_𝐱 and u_𝐱 are formulated as:
l_𝐱= min(𝐱), u_𝐱=max(𝐱).
This leads to no clipping error. However, this approach is sensitive to outliers as strong outliers may cause excessive rounding errors.
§.§.§ Mean Squared Error (MSE).
One way to mitigate the problem of large outliers is by employing MSE-based range setting. In this method, we determine l_𝐱 and u_𝐱 that minimize the mean squared error (MSE) between the original and quantized tensor:
(l_𝐱, u_𝐱) = arg min_l_𝐱, u_𝐱 MSE(𝐱, 𝐐_l_𝐱, u_𝐱),
where 𝐱 represents the original tensor and 𝐐_l_𝐱, u_𝐱 denotes the quantized tensor produced using the determined clipping thresholds l_𝐱 and u_𝐱. The optimization problem is commonly solved using grid search, golden section method or analytical approximations with closed-form solution.
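A simple way to realize this range setting is a grid search, as sketched below. Shrinking the MinMax range by a common ratio is our simplification of the search space (not necessarily the search used in practice), and the helpers quantize, dequantize, and qparams are those from the previous sketch.

```python
import numpy as np

def mse_range(x, bits=8, num_steps=100):
    """Grid search for clipping thresholds (l, u) minimizing the quantization MSE."""
    l0, u0 = float(x.min()), float(x.max())
    best_l, best_u, best_err = l0, u0, np.inf
    for step in range(1, num_steps + 1):
        ratio = step / num_steps
        l, u = l0 * ratio, u0 * ratio
        if u == l:
            continue  # degenerate range
        s, zp = qparams(l, u, bits)
        x_hat = dequantize(quantize(x, s, zp, bits), s, zp)
        err = float(np.mean((x - x_hat) ** 2))
        if err < best_err:
            best_l, best_u, best_err = l, u, err
    return best_l, best_u
```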
§.§ Unilateral Histogram-based (UH) Activation Quantization
To address the issue of activation value imbalance, we propose a new approach called Unilateral Histogram-based (UH) activation quantization. We first provide an empirical study of the activation values after forward propagation through the calibration dataset. As depicted in Figure <ref>, we observe a concentrated distribution of values near the lower bound, accompanied by a noticeable decrease in occurrences above zero. Further analysis of the activation values reveals that the empirical value of -0.2785 serves as the lower bound. This phenomenon can be attributed to the frequent utilization of the Swish (SILU) activation function in the YOLO series.
Based on the empirical evidence, we introduce an asymmetric quantization approach called Unilateral Histogram-based (UH) activation quantization. In UH, we iteratively determine the maximum truncation value that minimizes the quantization error, while keeping the minimum truncation value fixed at -0.2785, as illustrated in the following:
u_𝐱 = arg min_u_𝐱 MSE(𝐱, 𝐐_l_𝐱, u_𝐱),   l_𝐱=-0.2785.
To evaluate the quantization error during the search for the maximum truncation value, we utilize the fp32 floating-point numbers derived from the center values of the gathered 2048 bins, as introduced in Algorithm <ref>. These numbers are successively quantized using the candidate maximum truncation value under consideration. Through this iterative process, we identify the optimal truncation range. The UH activation quantization method offers two key advantages. Firstly, it significantly reduces calibration time. Secondly, it ensures stable activation quantization by allowing a larger set of integers to represent the frequently occurring activation values between 0 and -0.2785, thereby improving quantization accuracy.
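The following sketch illustrates the UH search described above, reusing the quantize, dequantize, and qparams helpers from the uniform-quantization sketch; the number of candidate truncation values and the count-weighted MSE over bin centers are our reading of the procedure rather than the exact implementation of Algorithm <ref>.

```python
import numpy as np

def uh_activation_range(acts, bits=8, num_bins=2048, num_candidates=128):
    """UH range search: lower bound fixed at -0.2785 (the minimum of SiLU/Swish),
    maximum truncation value chosen to minimize the MSE on histogram bin centers."""
    lower = -0.2785
    hist, edges = np.histogram(acts, bins=num_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_u, best_err = None, np.inf
    for u in np.linspace(edges[-1] / num_candidates, edges[-1], num_candidates):
        if u <= lower:
            continue  # candidate must exceed the fixed lower bound
        s, zp = qparams(lower, u, bits)
        c_hat = dequantize(quantize(centers, s, zp, bits), s, zp)
        err = float(np.sum(hist * (centers - c_hat) ** 2) / hist.sum())
        if err < best_err:
            best_u, best_err = u, err
    return lower, best_u
```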
§ EXPERIMENTS
In order to assess the performance of the proposed Q-YOLO detectors, we conducted a comprehensive series of experiments on the widely recognized COCO 2017 <cit.> detection benchmark. As one of the most popular object detection datasets, COCO 2017 <cit.> has become instrumental in benchmarking state-of-the-art object detectors, thanks to its rich annotations and challenging scenarios. Throughout our experimental analysis, we employed standard COCO metrics on the bounding box detection task to evaluate the efficacy of our approach.
§.§ Implementation Details
We randomly selected 1500 training images from the COCO train2017 dataset <cit.> as the calibration data, which served as the foundation for optimizing the model parameters. Additionally, the performance evaluation took place on the COCO val2017 dataset <cit.>, comprising 5000 images. The image size is set to 640x640.
In our experiments, unless otherwise noted, we employed symmetric channel-wise quantization for weights and asymmetric layer-wise quantization for activations. To ensure a fair and unbiased comparison, we consistently applied the MinMax approach for quantizing weights. The input and output layers of the model are more sensitive to quantization error, so, in order to maintain overall performance, these layers are usually kept at their original precision; we also follow this practice.
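For reference, the settings above can be summarized as the following configuration sketch; the dictionary keys are ours and are not tied to any particular quantization toolkit.

```python
# Descriptive summary of the quantization setup used in our experiments (illustrative only).
Q_CONFIG = {
    "calibration": {"dataset": "COCO train2017", "num_images": 1500, "image_size": (640, 640)},
    "weights": {"scheme": "symmetric", "granularity": "per-channel", "range": "MinMax"},
    "activations": {"scheme": "asymmetric", "granularity": "per-layer", "range": "UH"},
    "keep_full_precision": ["input layer", "output layer"],
    "bit_widths": [8, 4],
}
```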
§.§ Main results
We apply our proposed Q-YOLO to quantize YOLOv5s <cit.>, YOLOv5m <cit.>, YOLOv7 <cit.> and YOLOv7x <cit.>, which have an increasing number of parameters. The results of the full-precision model, as well as the 8-bit and 4-bit quantized models using MinMax, Percentile, and Q-YOLO methods, are all presented in Table <ref>.
Table <ref> compares several quantization approaches and detection methods in terms of computational complexity and storage cost. Our Q-YOLO significantly accelerates computation and reduces storage requirements for various YOLO detectors. Similarly, in terms of detection accuracy, when using Q-YOLO to quantize the YOLOv5 series models to 8 bits, there is virtually no decline in the average precision (AP) value compared to the full-precision model. Even as the number of model parameters increases dramatically, quantizing the YOLOv7 series models to 8 bits results in only an extremely slight decrease in accuracy. When quantizing models to 4 bits, the accuracy experiences a significant loss due to the reduced expressiveness of the 4-bit integer representation. In particular, when using the MinMax quantization method, the model loses all its accuracy, whereas the Percentile method, which clips values beyond the 99.99th percentile, fails to bring notable improvement. Differently, Q-YOLO successfully identifies a more appropriate scale for quantization, resulting in a considerable enhancement compared to conventional Post-Training Quantization (PTQ) methods.
§.§ Ablation Study
§.§.§ Symmetry in Activation Quantization.
Nowadays, quantization schemes are often subject to hardware limitations; for instance, NVIDIA <cit.> only supports symmetric quantization, as it is more inference-speed friendly. Therefore, discussing the symmetry of activation quantization is meaningful. Table <ref> presents a comparison of results using Q-YOLO for symmetric and asymmetric quantization, with the latter exhibiting higher accuracy. The range of negative activation values lies between 0 and -0.2785, while the range of positive activation values exceeds that of the negative ones. If we force the positive and negative sides to share the integer range equally, the accuracy naturally decreases. Moreover, this decline becomes more pronounced as the quantization bit width decreases.
§.§.§ Quantization Type.
In Table <ref>, we analyze the impact of different quantization types on the performance of the YOLOv5s and YOLOv5m models, considering three cases: quantizing only the weights (only weights), quantizing only the activation values (only activation), and quantizing both weights and activation values (weights+activation). The results demonstrate that, compared to quantizing the activation values, quantizing the weights consistently induces larger performance degradation. Additionally, the lower the number of bits, the greater the loss incurred by quantization. In YOLO, the weights learned by the network essentially represent the knowledge acquired during training, making the precision of the weights crucial for model performance. In contrast, activation values serve as intermediate representations of the input data propagating through the network, and can tolerate a certain degree of quantization error.
§.§ Inference speed
To practically verify the acceleration benefits brought about by our quantization scheme, we conducted inference speed tests on both GPU and CPU platforms. For the GPU, we selected the commonly used desktop GPU NVIDIA RTX 4090 <cit.> and the NVIDIA Tesla T4 <cit.> , often used in computing centers for inference tasks. Due to our limited CPU resources, we only tested Intel products, the i7-12700H and i9-10900, both of which have x86 architecture. For deployment tools, we chose TensorRT <cit.> and OpenVINO <cit.>. The entire process involved converting the weights from the torch framework into an ONNX model with QDQ nodes and then deploying them onto specific inference frameworks. The inference mode was set to single-image serial inference, with an image size of 640x640. As most current inference frameworks only support symmetric quantization and 8-bit quantization, we had to choose a symmetric 8-bit quantization scheme, which resulted in an extremely small decrease in accuracy compared to asymmetric schemes. As shown in Table. <ref>, the acceleration is extremely significant, especially for the larger YOLOv7 model, wherein the speedup ratio when using a GPU even exceeded 3× compared to the full-precision model. This demonstrates that applying quantization in real-time detectors can bring about a remarkable acceleration.
§ CONCLUSIONS
Real-time object detection is crucial in various computer vision applications. However, deploying object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper introduces Q-YOLO, a highly efficient one-stage detector built using a low-bit quantization method to address the performance degradation caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO employs a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme. Extensive experiments conducted on the COCO dataset demonstrate the effectiveness of Q-YOLO. It outperforms other PTQ methods while achieving a favorable balance between accuracy and computational cost. This research significantly contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory requirements.
|
http://arxiv.org/abs/2306.06100v1
|
20230609175926
|
Rational $p$-adic Hodge theory for rigid-analytic varieties
|
[
"Guido Bosco"
] |
math.AG
|
[
"math.AG",
"math.NT"
] |
Rational p-adic Hodge theory for rigid-analytic varieties
Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany
[email protected]
We study a cohomology theory for rigid-analytic varieties over _p, without properness or smoothness assumptions, taking values in filtered quasi-coherent complexes over the Fargues–Fontaine curve, which compares to other rational p-adic cohomology theories for rigid-analytic varieties — namely, the rational p-adic pro-étale cohomology, the Hyodo–Kato cohomology, and the infinitesimal cohomology over the positive de Rham period ring. In particular, this proves a conjecture of Le Bras. Such comparison results are made possible thanks to the systematic use of the condensed and solid formalisms developed by Clausen–Scholze. As applications, we deduce some general comparison theorems that describe the rational p-adic pro-étale cohomology in terms of de Rham data, thereby recovering and extending results of Colmez–Nizioł.
Guido Bosco
July 31, 2023
=================
§ INTRODUCTION
In this introduction, we fix a prime number p. We denote by K a complete discretely valued non-archimedean extension of _p, with perfect residue field k, and ring of integers O_K. We fix an algebraic closure K of K. We denote by C:=K the completion of K, and by O_C its ring of integers. We let 𝒢_K:=(K/K) denote the absolute Galois group of K. We fix a compatible system (1, ε_p, ε_p^2, …) of p-th power roots of unity in O_C, which defines an element ε∈ O_C^♭ with Teichmüller lift [ε]∈ A_inf=W( O_C^♭).
§.§ Background and motivation
In the last decade, the field of p-adic Hodge theory has witnessed dramatic advances, starting with Scholze's development of perfectoid geometry, and Fargues–Fontaine's discovery of the fundamental curve. In particular, Scholze initiated the study of the p-adic Hodge theory for rigid-analytic varieties in <cit.>, proving the finiteness of the geometric p-adic étale cohomology of proper smooth rigid-analytic varieties, as well as the de Rham comparison theorem for such varieties. The latter was known before only for algebraic varieties, and we refer the reader to <cit.> for a historical account on the de Rham, crystalline, and semistable conjectures for algebraic varieties. After Scholze's work, there were efforts by a number of people to prove a version of the crystalline/semistable conjecture for proper rigid-analytic varieties having good/semistable reduction, culminating in the following theorem.
Let X be a proper p-adic formal scheme over O_K with semistable reduction. We write X_C for the geometric rigid-analytic generic fiber of X. Let i≥ 0. There is a natural isomorphism
H_^i( X_C, _p)⊗__pB_≅ H_^i( X_k/W(k)^0)⊗_W(k)B_
compatible with the Galois 𝒢_K, Frobenius φ and monodromy N actions, and filtrations.[Here, we write H^i_ for the log-crystalline cohomology, W(k)^0 denotes the log structure on W(k) associated to (→ W(k), 1↦ 0), and X_k is endowed with the pullback of the canonical log structure on X.]
In particular, there is natural 𝒢_K-equivariant isomorphism
H_^i( X_C, _p)≅ (H_^i( X_k/W(k)^0)⊗_W(k)B_)^N=0, φ=1∩^0(H_^i( X_K)⊗_K B_).
We remark that Colmez–Nizioł's strategy in <cit.> relies on a generalization of the syntomic method initiated by Fontaine–Messing, and later refined by Hyodo, Kato and Tsuji. Instead, Bhatt–Morrow–Scholze's strategy, in their epoch-making work <cit.>, is based on the construction of a cohomology theory for smooth p-adic formal schemes Z over O_C (generalized to the semistable case by Česnavičius–Koshikawa in <cit.>), called A_inf-cohomology RΓ_A_inf( Z), which in the proper case specializes to the (log-)crystalline cohomology of the special fiber and the étale cohomology of the generic fiber, thus allowing to compare the latter two as in Theorem <ref>. More recently, in a series of papers, Colmez–Nizioł, partially in joint work with Dospinescu, further generalized the syntomic method to study the rational p-adic Hodge theory of smooth rigid-analytic varieties, which are neither assumed to be proper, nor having semistable reduction, <cit.>, <cit.>, <cit.>, <cit.>. These works are motivated in part by the desire of finding a geometric incarnation of the p-adic Langlands correspondence in the p-adic cohomology of local Shimura varieties, as partially indicated by <cit.>. In order to state the goals of this paper, we will denote by
Y_:=(A_inf, A_inf)∖ V(p[p^♭])
the mixed characteristic punctured open unit disk,
:=Y_/φ^
the adic Fargues–Fontaine curve (relative to C^♭ and _p), and we fix ∞ the (C, O_C)-point of corresponding to Fontaine's map θ: A_inf→ O_C.
As observed by Fargues, Theorem <ref> can be reformulated as a natural isomorphism of 𝒢_K-equivariant vector bundles on
H^i_( X_C, _p)⊗__p O_≅ E(H_^i( X_k/W(k)^0)__p, φ, N, )
where the right-hand side of (<ref>) denotes the vector bundle on associated to the filtered (φ, N)-module H_^i( X_k/W(k)^0)__p.
Since the left-hand side of (<ref>) depends only on the geometric generic fiber of X, it is natural to ask whether one can give a more direct cohomological construction of the right-hand side that also depends only on the generic fiber, that interpolates between H^i_( X_C, _p) and the filtered (φ, N)-module H_^i( X_k/W(k)^0)__p, and that allows to prove extensions of the comparison (<ref>) to any rigid-analytic variety over C. Our first goal in this article will be to give a positive answer to the latter question by extending Bhatt–Morrow–Scholze's strategy and building crucially upon Le Bras' work <cit.>. We will study a cohomology theory for rigid-analytic varieties X over C, taking values in filtered quasi-coherent complexes over the Fargues–Fontaine curve , which compares to other rational p-adic cohomology theories for rigid-analytic varieties over C, without properness or smoothness assumptions — namely, the rational p-adic pro-étale cohomology, the Hyodo–Kato cohomology (<cit.>),[As we will see, the Hyodo–Kato cohomology is a cohomology theory for rigid-analytic varieties of C which refines the de Rham cohomology and, in the case of Theorem <ref>, compares to the rational log-crystalline cohomology of the special fiber.] and the infinitesimal cohomology over B_^+ (<cit.>, <cit.>).
In particular, this will allow us to obtain a general comparison theorem for rigid-analytic varieties defined over a p-adic field, describing the geometric rational p-adic pro-étale cohomology in terms of de Rham data, extending (<ref>), and recovering and generalizing the above-mentioned results of Colmez–Nizioł.
§.§ B-cohomology
In the following, we denote by B the ring of analytic functions on Y_. To pursue the goals stated in the previous section, we begin by defining the B-cohomology theory for rigid-analytic varieties over C. Then, we shall explain how this cohomology theory interpolates several other rational p-adic cohomology theories, and how to interpret our main comparison theorems in terms of the Fargues–Fontaine curve . For the reader willing to assume that X is smooth in Definition <ref> below, we note that, in this case, the -site of X (<cit.>, <ref>) can be replaced by the étale site of X (Proposition <ref>). Our main results on the B-cohomology theory are already new in the smooth case.
[cf. Definition <ref>]
Let X be a rigid-analytic variety over C. We denote by α:X_v→ X_ the natural morphism from the v-site to the -site of X.
* We define the B-cohomology of X as
RΓ_B(X):=RΓ_(X, Lη_tRα_*)
where denotes the v-site sheaf theoretic version of the ring B, and we write Lη_t(-) for the décalage functor with respect to t=log([ε])∈ B, i.e. Fontaine's 2π i.
* We define the B_^+-cohomology of X as
RΓ_B_^+(X):=RΓ_(X, Lη_tRα_*_^+)
where _^+ is the v-site sheaf theoretic version of the ring B_^+.
We endow both RΓ_B(X) and RΓ_B_^+(X) with the filtration décalée, coming from Bhatt–Morrow–Scholze's interpretation of the décalage functor in terms of the connective cover functor for the Beilinson t-structure (Definition <ref>).
The Frobenius automorphism of induces a φ_B-semilinear automorphism
φ: RΓ_B(X)→ RΓ_B(X)
which preserves the filtration décalée.
We recall that, in the paper <cit.>, Le Bras introduced and studied (an overconvergent version of) the B-cohomology theory for smooth rigid-analytic varieties over C. In particular, building upon results of <cit.>, for Z a smooth proper p-adic formal scheme over O_C, he compared the B-cohomology of the rigid-analytic generic fiber of Z with the crystalline cohomology of the special fiber of Z, <cit.>.
In the following, we denote by F the fraction field of the ring of Witt vectors W(k), we write F̆ for the completion of the maximal unramified extension of F in K and we denote by O_F̆ its ring of integers. To motivate our first main result on the B-cohomology theory, we recall that in <cit.>, for smooth rigid-analytic varieties X over C, Colmez–Nizioł (adapting a construction of Beilinson in the case of algebraic varieties <cit.>), via the alterations of Hartl and Temkin, defined a Hyodo–Kato cohomology theory
RΓ_(X)
taking values in the derived category of (φ, N)-modules over F̆, which refines the de Rham cohomology RΓ_(X), and in the case X has a semistable formal model X over O_C it is given by the rational log-crystalline cohomology RΓ_( X_ O_C/p^0/ O_F̆^0)__p (see also <ref>, and in particular Theorem <ref>).
At this point, based on Le Bras' work (Remark <ref>), it was natural to ask how the B-cohomology compares to the Hyodo–Kato cohomology, and whether, at least in the proper case, the latter cohomology theory (which is defined using log-geometry) can be recovered from the former (which is defined directly in terms of the generic fiber). To answer this question, the difficulty is twofold: the first issue comes from the very definition of the Hyodo–Kato cohomology, which forces us to construct a comparison morphism with the B-cohomology locally, and in a functorial way, using log-geometry;[Likewise, as explained to us by Česnavičius and Le Bras, it is a priori not clear whether the absolute crystalline comparison isomorphism for the A_inf-cohomology in the semistable case, constructed in <cit.>, is functorial.] the second issue is of topological nature, since, locally, RΓ_(X) is in general not a perfect complex over F̆. To avoid the topological issues, one could instead study an overconvergent version of the desired comparison (cf. <cit.>), however this makes the first mentioned difficulty even more challenging. As in our previous work <cit.>, we overcome the topological issues via the condensed mathematics recently developed by Clausen–Scholze, and we refer the reader to the introduction of loc. cit. for a more exhaustive explanation of the relevance of the condensed and solid formalism in the study of the p-adic Hodge theory for rigid-analytic varieties. Thus, given a condensed ring A,[All condensed rings will be assumed to be commutative and unital. Moreover, we refer the reader to <ref> for the set-theoretic conventions we adopt.] we denote by _A^ the category of A-modules in condensed abelian groups, and, for A a solid ring, we denote by _A^ the symmetric monoidal subcategory of A-modules in solid abelian groups, endowed with the solid tensor product _A. We denote by D(_A^) and D(_A^) the respective derived ∞-categories.
Our first main result is the following.
Let X be a connected, paracompact, rigid-analytic variety defined over C.
* We have a natural isomorphism in D(^_B)
RΓ_B(X)≃ (RΓ_(X)_F̆B_log)^N=0
compatible with the action of Frobenius φ, and the action of Galois 𝒢_K in the case when X is the base change to C of a rigid-analytic variety defined over K.
Here, B_log denotes the log-crystalline condensed period ring (see <ref>), and RΓ_(X) denotes the Hyodo–Kato cohomology of X (Definition <ref>).[Forgetting the condensed structure, the Hyodo–Kato cohomology of X agrees with the one defined by Colmez–Nizioł (<cit.>) in the case when X is smooth.]
* We have natural isomorphisms in D(^_B_^+)
RΓ_B(X)_BB_^+≃ RΓ_B_^+(X)≃ RΓ_inf(X/B_^+)
compatible with the isomorphism (<ref>). Here, RΓ_inf(X/B_^+) denotes the infinitesimal cohomology over B_^+ (<cit.>, <ref>).
If X is the base change to C of a rigid-analytic variety X_0 defined over K, then we have a natural isomorphism in D(^_B_^+)
RΓ_B_^+(X)≃ RΓ_(X_0)_K B_^+
compatible with the action of 𝒢_K, and with filtrations. Here, RΓ_(X_0) denotes the de Rham cohomology of X_0 (Definition <ref>).
The proof of Theorem <ref> proceeds by constructing functorial local isomorphisms, which are then globalized using some magical properties of the solid tensor product proved by Clausen–Scholze, which rely on the theory of nuclear modules.
Thanks to the properties of the solid tensor product, one can also easily deduce a version of Theorem <ref> for X a dagger variety over C. In particular, reinterpreting the latter result in terms of the Fargues–Fontaine curve (see <ref>), one can deduce a generalization of <cit.>, as shown in Theorem <ref>: for i≥ 0, given X a qcqs dagger variety over C, the cohomology group H^i_B(X) is a finite projective φ-module over B; then, denoting by H^i_(X) the associated vector bundle on , we have a natural isomorphism
H_^i(X)≅ E(H_^i(X))
where H_^i(X) is a finite (φ, N)-module over F̆, and the right-hand side denotes the associated vector bundle on ; moreover, the completion at ∞ of (<ref>) gives a natural isomorphism
H_^i(X)^∧_∞≅ H_inf^i(X/B_^+).
We note that (<ref>) implies in particular that the vector bundle H_^i(X) determines, up to isomorphisms, the φ-module structure on H_^i(X), and, while the latter is defined via log-geometry, the former is defined directly on the generic fiber. In addition, one can also recover from H_^i(X) the (φ, N)-module structure on H_^i(X) (see Remark <ref>).[
We also remark that, recently, Binda–Kato–Vezzani, via a motivic approach, proposed a definition of the overconvergent Hyodo–Kato cohomology theory without using log-geometry, <cit.>.]
As applications, using Theorem <ref>, via the relative fundamental exact sequence of p-adic Hodge theory, we show the following result.
Let X be a qcqs rigid-analytic variety defined over K. We have a 𝒢_K-equivariant pullback square in D(__p^)
RΓ_proét(X_C, 𝐐_p) ⟶ (RΓ_HK(X_C)⊗_F̆B_log[1/t])^N=0, φ=1
        ↓                                              ↓
Fil^0(RΓ_dR(X)⊗_K B_dR) ⟶ RΓ_dR(X)⊗_K B_dR.
We note that Theorem <ref> can be regarded as a derived generalization of (<ref>): it tells us that the rational p-adic (pro-)étale cohomology of X_C can be recovered from the Hyodo–Kato cohomology of X_C and the de Rham cohomology of X together with its Hodge filtration.
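As an informal sanity check (granting the expected identifications for the point, which we do not spell out here), take X=Spa(K) the point. Then RΓ_HK(X_C)≃F̆ with φ=σ and N=0, and RΓ_dR(X)≃ K with Fil^0=K and Fil^1=0; using B_log^N=0=B, the square above then has top row 𝐐_p→ B_e:=B[1/t]^φ=1 and bottom row B_dR^+→ B_dR, and the assertion that it is a pullback square amounts to the classical fundamental exact sequence
0→𝐐_p→ B_e⊕ B_dR^+→ B_dR→ 0.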
§.§ Syntomic Fargues–Fontaine cohomology
The search for a theorem describing the rational p-adic pro-étale cohomology of any rigid-analytic variety over C in terms of the B-cohomology and its filtration led us to define the following cohomology theory.
Let X be a rigid-analytic variety over C. Let i≥ 0 be an integer. We define the syntomic Fargues–Fontaine cohomology of X with coefficients in _p(i) as the complex of D(__p^)
RΓ_, (X, _p(i)):=^iRΓ_B(X)^φ=p^i
where RΓ_B(X) is endowed with the filtration décalée.
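As a first, informal example (granting that, for the point, the B-cohomology reduces to Lη_t applied to the v-cohomology of the period sheaf, as follows from the local description of RΓ_B proved in the body of the paper), take X=Spa(C, O_C). The higher v-cohomology of the period sheaf vanishes on X, so RΓ_B(X)≃ B sits in degree 0 and its filtration décalée reduces to Fil^i=t^iB for i≥ 0. Hence the syntomic Fargues–Fontaine cohomology of X with coefficients in 𝐐_p(i) is
(t^iB)^φ=p^i≃ t^iB^φ=1=𝐐_p(i)
concentrated in degree 0 (there is no higher term, since φ-1 is surjective on B), consistently with the comparison with the rational p-adic pro-étale cohomology of the point stated below.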
The first main result on the syntomic Fargues–Fontaine cohomology is the following.
Let X be a rigid-analytic variety over C. Let i≥ 0.
* We have a natural isomorphism in D(__p^)
τ^≤ iRΓ_, (X, _p(i))∼⟶τ^≤ iRΓ_(X, _p(i)).
* We have a natural isomorphism in D(__p^)
RΓ_, (X, _p(i))≃(RΓ_B(X)^φ=p^i→ RΓ_B_^+(X)/^i).
We remark that the construction of the comparison morphisms in Theorem <ref> is global in nature and it can be extended to coefficients (see <ref>). Combining Theorem <ref> with Theorem <ref>, we obtain the following result. Cf. <cit.> and <cit.> for some related results in the proper good reduction case, and <cit.> for smooth rigid-analytic varieties.
Let X be a connected, paracompact, rigid-analytic variety defined over K. For any i≥ 0, we have a 𝒢_K-equivariant isomorphism in D(__p^)
τ^≤ iRΓ_(X_C, _p(i))≃τ^≤ i((RΓ_(X_C)_F̆B_log)^N=0, φ=p^i→ (RΓ_(X)_K B_^+)/^i).
Another interesting fact about the syntomic Fargues–Fontaine cohomology concerns its close relationship with the curve , as can be guessed from its very definition.
As explained in <ref>, for any X qcqs rigid-analytic variety over C, the φ-equivariant filtered complex ^⋆ RΓ_B(X) descends to a filtered object
^⋆ H_(X)
of the ∞-category of quasi-coherent complexes (), in the sense of Clausen–Scholze (see <ref>). Then, relying in particular on results of Andreychev on the analytic descent for nuclear complexes on analytic adic spaces, <cit.>, we show the following theorem.
Let X be a qcqs rigid-analytic variety over C. Let i≥ 0. Consider the quasi-coherent complex on defined by
H_(X)(i):=^i H_(X)⊗ O(i).
We have
RΓ(, H_(X)(i))=RΓ_, (X, _p(i)).
If X is proper, the complex H_(X)(i) is perfect, in particular the complex RΓ_, (X, _p(i)) identifies with the C-points of a bounded complex of Banach–Colmez spaces.
Fix i≥ 0. In <cit.>, via the alterations of Hartl and Temkin, Colmez–Nizioł, starting from the syntomic cohomology of Fontaine–Messing, defined a syntomic cohomology theory for any smooth rigid-analytic variety X over C, that here we will denote by RΓ_, (X, _p(i)).
We observe that H^j_, (X, _p(i)) is isomorphic to H^j_, (X, _p(i)) for any integer j≤ i, as in this case the two cohomology groups are both isomorphic to H^j_(X, _p(i)); however, in general, for j>i, the two cohomology groups are not isomorphic (see Example <ref>). This difference is reflected in the fact that, for X proper, the complex RΓ_, (X, _p(i)) canonically lifts to a complex of vector bundles on , as shown by Theorem <ref>, while the complex RΓ_, (X, _p(i)) canonically lifts to a complex of φ-modules jaugés over B^+, in the sense of Fargues (<cit.>), cf. <cit.>.[We recall that the category of φ-modules jaugés over B^+ is equivalent to the category of modifications of vector bundles on , <cit.>. However, this is not an equivalence of exact categories, in the sense of Quillen.]
§.§ Semistable conjectures
From the general derived comparison results we have stated above, in particular from Theorem <ref> and Theorem <ref>, one can deduce in some special cases a refined description of the individual rational p-adic (pro-)étale cohomology groups in terms of de Rham data. For X a proper (possibly singular) rigid-analytic variety over C, we prove in Theorem <ref> a version of the semistable conjecture for X, generalizing Theorem <ref>. In the case when X is the base change to C of a rigid-analytic variety X_0 defined over K, this result relies on the degeneration of the Hodge-de Rham spectral sequence associated to X_0 (<cit.>, <cit.>). In general, we reduce to the previous case via a combination of Conrad–Gabber's spreading out for proper rigid-analytic varieties and a generic smoothness result recently proved by Bhatt–Hansen, <cit.>. Another case in which the Hodge-de Rham spectral sequence simplifies is for smooth Stein spaces, thanks to Kiehl's acyclicity theorem. In this case, we show the following theorem, which reproves results of Colmez–Dospinescu–Nizioł <cit.> (in the semistable reduction case) and Colmez–Nizioł <cit.>.
Let X be a smooth Stein space over C. For any i≥ 0, we have a short exact sequence in __p^
0→Ω^i-1(X)/ker d → H^i_proét(X, 𝐐_p(i))→ (H^i_HK(X)⊗_F̆B_log)^N=0, φ=p^i→ 0.
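For example (a quick consistency check, granting the expected vanishing of the higher de Rham and Hyodo–Kato cohomology of the affine line), let X=𝐀^1_C be the rigid-analytic affine line, a smooth Stein space of dimension 1. Then the term (H^1_HK(X)⊗_F̆B_log)^N=0, φ=p vanishes, and the sequence for i=1 gives
H^1_proét(𝐀^1_C, 𝐐_p(1))≅ O(𝐀^1_C)/C,
in agreement with the computation of the pro-étale cohomology of the affine space due to Colmez–Nizioł; for i≥ 2 both outer terms vanish, in accordance with the remark below.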
A recent conjecture of Hansen, <cit.>, suggests that any local Shimura variety is a Stein space; therefore, Theorem <ref> potentially applies to any such variety.
Let X be a smooth Stein space over C. As a corollary of Theorem <ref>, we have that
H^i_proét(X, 𝐐_p)=0
for all i> dim X (Corollary <ref>).
For smooth affinoid rigid spaces, the Hodge-de Rham spectral sequence simplifies similarly to smooth Stein spaces, thanks to Tate's acyclicity theorem (see also Proposition <ref>). Therefore, in view of Theorem <ref>, we are led to state the following conjecture.
Let X be a smooth affinoid rigid space over C. For any i≥ 0, we have a short exact sequence in __p^
0→Ω^i-1(X)/ker d → H^i_proét(X, 𝐐_p(i))→ (H^i_HK(X)⊗_F̆B_log)^N=0, φ=p^i→ 0.
We remark that before the advent of condensed mathematics one couldn't even dare to formulate a conjecture in the spirit of the one above; in fact, for a smooth affinoid rigid space X over C, the de Rham H_^i(X) and Hyodo–Kato H_^i(X) cohomology groups are in general pathological if regarded as topological vector spaces (see <cit.>).[There were previously some partial and ad hoc solutions to this issue. For example, in <cit.>, for X a smooth affinoid over C of dimension 1, Colmez–Dospinescu–Nizioł considered the maximal Hausdorff quotient of the topological F̆-vector space H_^1(X).] Instead, regarded as condensed vector spaces, these objects are perfectly well-behaved, even though they are non-quasi-separated. Then, exploiting the possibility (provided by the solid formalism) of doing functional analysis with such new objects, we prove the following result.
Conjecture <ref> holds true for X a smooth affinoid rigid space over C of dimension 1.
We discuss in <ref> the obstruction to proving Conjecture <ref> in dimension higher than 1. We note however that an analogue of Remark <ref> for smooth affinoid spaces over C is known in any dimension, thanks to a result of Bhatt–Mathew (see Lemma <ref>).
§.§ Link to prismatic cohomology: toward an integral theory
We conclude this introduction by conjecturing the existence of an integral variant of the B-cohomology theory for rigid-analytic varieties, which in particular better explains the relation between the results of this paper and the work of Bhatt–Morrow–Scholze and Česnavičius–Koshikawa. In the following, we write y_C for the (C, O_C)-point of Y_ corresponding to Fontaine's map θ: A_inf→ O_C (and projecting to the point ∞ of ).
Let us recall the following result of Fargues.
The following two categories are equivalent:
* Shtukas (of vector bundles) over C^♭ relative to _p with one leg at φ^-1(y_C), i.e. vector bundles E on C^♭.×_p:=Y_[To motivate the notation and the terminology, we recall that Y_^≅( C)^× (_p)^ as diamonds.] together with an isomorphism
φ_ E: (φ^* E)|_Y_∖φ^-1(y_C)≅ E|_Y_∖φ^-1(y_C)
which is meromorphic at φ^-1(y_C).
* Admissible modifications of vector bundles on at ∞, i.e. triples ( F_1, F_2, α) where F_1 and F_2 are vector bundles on with F_1 semistable of slope 0, and α: F_1|_∖{∞}≅ F_2|_∖{∞} is an isomorphism.
Now, given X a proper rigid-analytic variety over C, for i≥ 0 we consider the modification of the vector bundle H^i_(X) at ∞ given by the B_^+-lattice
^0(H_B_^+^i(X)⊗_B_^+B_)⊂ H^i_(X)^∧_∞⊗_B_^+B_≅ H^i_(X, _p)⊗__pB_
which gives an admissible modification of vector bundles on at ∞
(H^i_(X, _p)⊗__p O_, H^i_(X), α)
(see Theorem <ref>). Then, inspired by Bhatt–Morrow–Scholze's work, it is natural to wonder whether one can give a direct geometric cohomological construction of the shtuka corresponding to (<ref>) via the above recalled Fargues' equivalence, and how the latter compares to the A_inf-cohomology theory in the semistable reduction case. More precisely, we formulate the following conjecture (for simplicity, we restrict ourselves to proper rigid-analytic varieties). We will consider the analytic adic space
Y_:=(A_inf, A_inf)∖ V([p^♭])
and denote by A the ring of analytic functions on Y_. We note that Y_⊂ Y_ is the open subset defined by the locus where p≠ 0.
There exists a cohomology theory
RΓ_(X/ Y_)
for proper rigid-analytic varieties X over C, taking values in shtukas of perfect complexes over C^♭ relative to _p with one leg at φ^-1(y_C) (i.e. perfect complexes E on C^♭.×_p:= Y_ together with an isomorphism φ_ E: (φ^* E)|_ Y_∖φ^-1(y_C)≅ E|_ Y_∖φ^-1(y_C)
which is meromorphic at φ^-1(y_C)), and satisfying the following properties.
* If X is the generic fiber of a proper p-adic formal scheme X over O_C with semistable reduction, there is a natural isomorphism between RΓ_(X/ Y_) and the shtuka of perfect complexes over C^♭ relative to _p with one leg at φ^-1(y_C) associated to RΓ_A_inf( X)⊗_A_inf A.
* Denoting by RΓ_(X/Y_) the restriction of RΓ_(X/ Y_) to Y_, the cohomology groups H_^i(X/Y_) are shtukas of vector bundles over C^♭ relative to _p with one leg at φ^-1(y_C), and the admissible modification of vector bundles on at ∞ defined in (<ref>) corresponds to H^i_(X/Y_) via Fargues' equivalence (Proposition <ref>).
In the notation of Definition <ref>, a natural candidate for the cohomology theory conjectured above is given by a lift of the complex RΓ_(X, Lη_μ Rα_*) to Y_, where is the v-site sheaf theoretic version of the ring A, and μ=[ε]-1∈ A_inf. However, it would be more interesting (especially for questions related to cohomological coefficients) to give a definition of RΓ_(X/ Y_) in the spirit of prismatic cohomology (<cit.>, <cit.>, <cit.>).[More precisely, such definition would give a Frobenius descent of RΓ_(X/ Y_).] We hope to come back on these questions in a future work.
§.§ Leitfaden of the paper
We have organized the paper as follows. We begin by defining the B-cohomology and the B_^+-cohomology, together with their filtration décalée, in <ref>. In <ref>, we revisit, in the condensed setting, the Hyodo–Kato cohomology of Colmez–Nizioł, and we extend it to singular rigid-analytic varieties over C. In <ref> and <ref>, we prove the first main result of the paper: Theorem <ref>. We then proceed in <ref> by introducing the syntomic Fargues–Fontaine cohomology and proving Theorem <ref> and Theorem <ref>; on the way, we also study nuclear complexes on the Fargues–Fontaine curve. In <ref>, we give applications of the main results proven in the previous sections, showing in particular Theorem <ref> and Theorem <ref>. We end with Appendix <ref> in which we collect some complements on condensed mathematics used in the main body of the paper.
§.§ Notation and conventions
Ground fields Fix a prime number p. We denote by K a complete discretely valued non-archimedean extension of _p, with perfect residue field k, and ring of integers O_K. We choose a uniformizer ϖ of O_K.
We fix an algebraic closure K of K. We denote by C:=K the completion of K, and by O_C its ring of integers. We denote by F the fraction field of the ring of Witt vectors W(k), we write F̆ for the completion of the maximal unramified extension of F in K, and we denote by O_F̆ its ring of integers.
Moreover, we let 𝒢_K:=(K/K) denote the absolute Galois group of K.
∞-categories We will adopt the term ∞-category to indicate an (∞, 1)-category, i.e. a higher category in which all n-morphisms for n>1 are invertible. We will use the language of ∞-categories, <cit.>, and higher algebra, <cit.>. We denote by Δ the simplicial category and, for every integer m≥ 0, we write Δ_≤ m for the full subcategory of Δ having as objects [n] for 0≤ n≤ m. We denote by :=() the ∞-category of anima, i.e. the ∞-category of animated sets, <cit.>.
Condensed mathematics We fix an uncountable cardinal κ as in <cit.>.
Unless explicitly stated otherwise, all condensed sets will be κ-condensed sets (and often the prefix “κ” is tacit). We will denote by the category of κ-condensed abelian groups, and by ⊂ the full subcategory of κ-solid abelian groups. All condensed rings will be κ-condensed commutative unital rings. Given a (κ-)condensed ring A, we denote by _A^ the category of A-modules in , and, for A a solid ring, we denote by _A^ the symmetric monoidal subcategory of A-modules in , endowed with the solid tensor product _A. We denote by D(_A^) and D(_A^) the respective derived ∞-categories; sometimes we abbreviate D(A)=D(_A^). Moreover, we write _A(-, -) for the internal Hom in the category _A^ (and in the case A=, we often omit the subscript ). Throughout the paper, we use Clausen–Scholze's non-archimedean condensed function analysis, for which we refer the reader to <cit.>.
Condensed group cohomology Given a condensed group G, and a G-module M in , the condensed group cohomology of G with coefficients in M will be denoted by
RΓ_(G, M):=R_[G](, M)∈ D()
where is endowed with the trivial G-action (see e.g. <cit.>).
Adic spaces We say that an analytic adic space X is κ-small if the cardinality of the underlying topological space |X| is less than κ, and for all open affinoid subspaces (R, R^+)⊂ X, the ring R has cardinality less than κ. In this paper, all the analytic adic spaces will be assumed to be κ-small.
Throughout the article, all Huber rings will be assumed to be complete, and will be regarded as condensed rings.
Pro-étale topology
We recall that there is a natural functor X↦ X^ from the category of analytic adic spaces defined over (_p, _p) to the category of locally spatial diamonds, satisfying |X|=|X^| and X_≅ X_^, <cit.>. For X an analytic adic space defined over (_p, _p), we denote by
X_:=X^_
its (κ-bounded) pro-étale site, <cit.>. Given f: X→(C, O_C) an analytic adic space over C, and F a sheaf of abelian groups on X_, we define the complex of D()
RΓ_(X, F):=Rf_ * F
(see also <cit.>).
Fargues–Fontaine curves For S=(R, R^+) an affinoid perfectoid space over _p, we let
Y_, S:=(W(R^+), W(R^+))∖ V(p[p^♭]).
We recall that Y_, S defines an analytic adic space over _p, <cit.>. The p-th power Frobenius on R^+ induces an automorphism φ of Y_, S whose action is free and totally discontinuous, <cit.>. The Fargues–Fontaine curve relative to S (and _p) will be denoted by
_S:=Y_, S/φ^.
For I=[s, r]⊂ (0, ∞) an interval with rational endpoints, we define the open subset
Y_, S, I:={|p|^r≤ |[p^♭]|≤ |p|^s}⊂ Y_, S.
We note that Y_, S, I is an affinoid space, as it is a rational open subset of (W(R^+), W(R^+)).
We denote by (_S) the category of vector bundles on _S, and by __p the category of isocrystals over _p (also called finite φ-modules over _p), i.e. the category of pairs (V, φ) with V a finite-dimensional _p-vector space and φ a σ-semilinear automorphism of V, where σ is the automorphism of _p=W(_p)[1/p] induced by the p-th power Frobenius on _p.
Recall that we have a natural exact ⊗-functor
__p→(_S), (V, φ)↦ E(V, φ).
For λ∈, we denote by (D_λ, φ_λ) the simple isocrystal over _p of slope λ in the Dieudonné–Manin classification, and we let
O__S(-λ):= E(D_λ, φ_λ).
In particular, for n∈, we have
O__S(n)= E(_p, p^-nσ).
In the case S=(C^♭, O_C^♭), we omit the subscript S from (<ref>) and (<ref>). We will often use the classification of vector bundles on (see <cit.>, <cit.>): the functor __p→() induces a bijection on isomorphism classes; in particular, any vector bundle on is isomorphic to a direct sum of vector bundles of the form O_(λ) with λ∈. We will denote by ∞ the (C, O_C)-point of the curve corresponding to Fontaine's map θ:W( O_C^♭)→ O_C, and
ι_∞: (C, O_C)→
the inclusion map.
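We will also freely use the following standard facts about the cohomology of the line bundles O(λ) on the curve, due to Fargues–Fontaine and recalled here only for the reader's convenience: writing H^i(O(λ)) for the cohomology of the curve with coefficients in O(λ), one has H^0(O(λ))=0 for λ<0 and H^1(O(λ))=0 for λ≥ 0; moreover, for n≥ 0, H^0(O(n))=B^φ=p^n (in particular H^0(O)=𝐐_p), while, for instance, H^1(O(-1))≅ C/𝐐_p.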
Rigid-analytic varieties All rigid-analytic varieties, and all dagger varieties (<cit.>), occurring in this work will be assumed to be quasi-separated, and of finite dimension. We say that a rigid-analytic/dagger variety X is paracompact if it admits an admissible locally finite affinoid covering, i.e. there exists an admissible covering {U_i}_i∈ I of X by affinoid subspaces such that for each index i∈ I the intersection U_i∩ U_j is non-empty for at most finitely many indices j∈ I.
We recall that a paracompact rigid-analytic variety is taut, <cit.>, and it is the admissible disjoint union of connected paracompact rigid-analytic varieties of countable type, i.e. having a countable admissible affinoid covering, <cit.>. We refer the reader to <cit.> for further recollections on paracompact rigid-analytic varieties.
Formal schemes Unless explicitly stated otherwise, all formal schemes will be assumed to be p-adic and locally of finite type.
I am very grateful to Grigory Andreychev, Ko Aoki, Kȩstutis Česnavičius, Dustin Clausen, Pierre Colmez, Gabriel Dospinescu, Haoyang Guo, David Hansen, Hiroki Kato, Teruhisa Koshikawa, Arthur-César Le Bras, Lucas Mann, Matthew Morrow, Wiesława Nizioł, Peter Scholze, and Alberto Vezzani for helpful conversations, or for their comments on a draft version of this paper. Special thanks go to Teruhisa Koshikawa for a decisive suggestion, to Arthur-César Le Bras for what he taught me about the Fargues–Fontaine curve and for introducing me to <cit.>, to Wiesława Nizioł for reading several drafts of this manuscript, and to Peter Scholze for his crucial remarks and corrections as well as for discussions related to Conjecture <ref>.
I would like to thank the organizers of the RAMpAGE seminar for their invitation in June 2021, on which occasion the main results of this paper were first announced.
This project was carried out while the author was a Ph.D. student at Sorbonne Université, within the Institut de Mathématiques de Jussieu–Paris Rive Gauche. Moreover, parts of this manuscript were written while visiting the Banach Center, at the invitation of Piotr Achinger, and the Max-Planck-Institut für Mathematik. I thank all these institutions for their hospitality and support.
§ PRELIMINARIES
In this first section, our goal is to define the B-cohomology together with its filtration décalée. Along the way, we will establish several preliminary technical results, which will be used in the rest of the paper.
§.§ Décalage functors and Beilinson t-structure
In this subsection, we will recall an interpretation of the décalage functor in terms of the connective cover functor for the Beilinson t-structure.
§.§.§ Décalage functors
We shall use the following notation.
Let (T, O_T) be a ringed topos and let (f)⊂ O_T be an invertible ideal sheaf. We will write D( O_T) for the derived category of O_T-modules.
The following slight generalization of <cit.>, which goes back to Berthelot–Ogus, will be used in particular in <ref>.
[cf. <cit.>]
Let δ: → be a function. Let M^∙ be an f-torsion-free complex of O_T-modules. We denote by η_δ, f(M^∙) the subcomplex of M^∙[1/f] defined by
η_δ, f(M^∙)^i:={x∈ f^δ(i)M^i: dx ∈ f^δ(i+1)M^i+1}.
In the case δ=𝕀, we put η_f(-):=η_𝕀, f(-).
We note that the definition of η_δ, f(-) depends on the ideal sheaf (f)⊂ O_T and it is independent on the chosen generator of the latter.
Let δ: → be a non-decreasing function. The functor η_δ,f from f-torsion-free complexes of O_T-modules to D( O_T) factors canonically over the décalage functor (relative to (f) and δ)
Lη_δ, f:D( O_T)→ D( O_T).
First, recall that every complex of O_T-modules is quasi-isomorphic to an f-torsion-free complex of O_T-modules, <cit.>. We want to show that the endo-functor η_δ,f on the category of f-torsion-free complexes of O_T-modules preserves quasi-isomorphisms. The latter assertion is implied by the following claim (cf. <cit.>): given M^∙ an f-torsion-free complex of O_T-modules, for all i∈, the multiplication by f^δ(i) map, i.e. tensoring by -⊗_ O_T(f^δ(i)), induces an isomorphism
H^i(M^∙)/H^i(M^∙)[f^δ(i)-δ(i-1)]∼→ H^i(η_δ, fM^∙)
(note that δ(i)-δ(i-1)≥ 0 by assumption on the function δ: →). For this, let Z^i(M^∙)⊂ M^i and Z^i(η_δ, fM^∙)⊂ (η_δ, fM^∙)^i denote the cocycles. By f-torsion-freeness of the terms of the complex M^∙, the multiplication by f^δ(i) map induces an isomorphism
Z^i(M^∙)≅ Z^i(η_δ, fM^∙)
which in turn induces a surjection
H^i(M^∙)↠ H^i(η_δ, fM^∙).
Moreover, given a cocycle z∈ Z^i(M^∙) mapping to the zero class of H^i(η_δ, fM^∙) via multiplication by f^δ(i), we have that f^δ(i)z=d(f^δ(i-1)y) for some y∈ M^i-1, i.e. f^δ(i)-δ(i-1)z=dy, which means that the image of z in H^i(M^∙) is f^δ(i)-δ(i-1)-torsion, as claimed.
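To illustrate the isomorphism just established in the simplest case δ=𝕀 (a standard example, recorded here only for the reader's convenience), take (T, O_T) to be the punctual topos with O_T=A a ring, f∈ A a non-zero-divisor, and M^∙=[A f→ A] concentrated in degrees 0 and 1. Then η_f(M^∙)^0=A and η_f(M^∙)^1=fA, with differential still given by multiplication by f, so that η_f(M^∙) is acyclic; accordingly, H^0(M^∙)=0 and H^1(M^∙)=A/fA is killed by f. More generally, Lη_f kills every complex whose cohomology groups are killed by f.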
§.§.§ Beilinson t-structure
Next, as promised, we want to recall the ∞-categorical interpretation of the décalage functor Lη_f(-) as the connective cover functor for the Beilinson t-structure.
Let A be a commutative unital ring. Let D(A) denote the derived ∞-category of A-modules. We write
DF(A):=(^, D(A))
for the filtered derived ∞-category of A-modules. Given F∈ DF(A), for i∈, we define the i-th graded piece of F as the cofiber
^i(F):=F(i)/F(i+1).
We refer the reader to <cit.> for recollections on filtered derived ∞-categories.
[<cit.>]
Let DF^≤ 0(A)⊂ DF(A) be the full ∞-subcategory spanned by those F such that ^i(F)∈ D^≤ i(A) for all i∈, and let DF^≥ 0(A)⊂ DF(A) be the full ∞-subcategory spanned by those F such that F(i)∈ D^≥ i(A) for all i∈. The pair
(DF^≤ 0(A), DF^≥ 0(A))
is called the Beilinson t-structure on DF(A).
Note that the t-structure depends only on the triangulated category underlying the derived ∞-category D(A). The definition above is justified by the following result.
Fix notation as in Definition <ref>.
* The Beilinson t-structure (DF^≤ 0(A), DF^≥ 0(A)) is a t-structure on DF(A).
* Denoting by
τ_^≤ 0: DF(A)→ DF^≤ 0(A)
the connective cover functor for the Beilinson t-structure on DF(A), there is a natural isomorphism
^i∘τ_^≤ 0(-)≃τ^≤ i∘^i(-).
* Denote by
H^0_:DF(A)→ DF(A)^:=DF^≤ 0(A)∩ DF^≥ 0(A)
the 0-th cohomology functor for the Beilinson t-structure. The heart DF(A)^ is equivalent to the abelian category (A) of chain complexes of A-modules in abelian groups, via sending, for varying F∈ DF(A), the 0-th cohomology H^0_(F)∈ DF(A)^ to the chain complex (H^∙(^∙(F)), d) with differential d induced by the boundary map for the exact triangle
^i+1(F)→ F(i)/F(i+2)→^i(F).
In the following, let f be a non-zero-divisor in A.
Let M∈ D(A). Define ^⋆ Lη_f M∈ DF(A) to be the filtration on Lη_f M whose i-th level is given by Lη_ε_i, fM, where ε_i:ℤ→ℤ, j↦max(i, j). Denote by f^⋆⊗ M∈ DF(A) the filtration on M whose i-th level is given by f^i⊗_A M. Then, ^⋆ Lη_f M identifies with τ_^≤ 0(f^⋆⊗ M) in DF(A).
First, note that the function ε_i is non-decreasing, hence it satisfies the assumptions of Proposition <ref>. Then, the statement is contained in the proof of <cit.>.
Given M∈ D(A), we call the filtration ^⋆ Lη_f M defined in Proposition <ref> the filtration décalée on Lη_f M.
The definitions and the results above extend to any ringed topos (or site). In particular, they extend to the case of the ringed site (*_κ-, A), for κ a cut-off cardinal as in <ref>, where *_κ- is the site of κ-small profinite sets, with coverings given by finite families of jointly surjective maps, and A is a κ-condensed ring.
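Combining the two propositions above, one obtains the following description of the graded pieces of the filtration décalée, which we record here for later convenience (it will be used implicitly when arguing on graded pieces): for M∈ D(A) and i∈ℤ, the i-th graded piece of f^⋆⊗ M identifies with M⊗_A^L(f^i)/(f^i+1)≅ (M⊗_A^L A/f)⊗_A(f^i), and hence the i-th graded piece of ^⋆ Lη_f M identifies with
τ^≤ i(M⊗_A^L A/f)⊗_A(f^i).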
§.§ The éh-topology
In this subsection, we recall the definition of the -site for rigid-analytic varieties, introduced by Guo, and we consider its variant for dagger varieties. This site will be used crucially in the definition of the B-cohomology theory for arbitrary (possibly singular) rigid-analytic/dagger varieties.
§.§.§ Definition of the éh-site
We will use the following notation and conventions.
We denote by L a characteristic 0 complete valued field with a non-archimedean valuation of rank 1 and residue characteristic p. We write _L (resp. _L^†) for the category of rigid-analytic (resp. dagger) varieties over L, and we denote by _L (resp. _L^†) the category of smooth rigid-analytic (resp. dagger) varieties over L. We refer the reader to <cit.> for the foundations of dagger varieties (also called overconvergent rigid varieties), and to <cit.> for a quick recollection of the definitions and the main results on the subject, which we will freely use in the following. Given a dagger variety X=(X, O^†) over L with underlying rigid-analytic variety X and overconvergent structure sheaf O^†, we say that X is the limit of X, and vice versa that X is a dagger structure on X, <cit.>; moreover, we regard O^† as a sheaf with values in _L^.
Before defining the -site for rigid-analytic and dagger varieties, we need to introduce the notion of blowing-up. The construction of the blow-up of a rigid-analytic variety along a closed analytic subset, as well as the verification of its universal property, is due to Conrad, <cit.>. In turn, such construction relies on the definition of the relative analytified , <cit.>, denoted ^. We note that the latter definition translates verbatim to dagger varieties (replacing the structure sheaf with the overconvergent structure sheaf). We can then give the following definition (see <cit.>).
Let X be a rigid-analytic (resp. dagger) variety over L, and let Z=V( I) be the Zariski closed subset defined by a coherent ideal sheaf I over X. The blow-up of X along Z is the rigid-analytic (resp. dagger) variety over X defined by
_Z(X):=^(⊕_n≥ 0 I^n).
Keeping the notation above, the blow-up of X along Z has the following universal property (see the discussion after <cit.>): _Z(X)→ X is the final object in the category of morphisms f:Y→ X in _L (resp. _L^†) such that the coherent pullback f^* I is invertible.
[<cit.>]
The big -site _L, (resp. _L, ^†) is the Grothendieck topology on the category _L (resp. _L^†), such that the covering families are generated by étale coverings, universal homeomorphisms, and morphisms
_Z(Y)⊔ Z→ Y
with Z a closed analytic subset of Y.
Given X a rigid-analytic (resp. dagger) variety over L, we define the small -site X_ as the localization of the site _L, (resp. _L, ^†) at the object X.
The definition above is designed to make the following result hold true.
Let X be a quasi-compact, reduced, rigid-analytic (resp. dagger) variety over L. Then, there exists a proper -covering f:Y→ X with Y a smooth rigid-analytic (resp. dagger) variety over L.
We will check that the proof of <cit.> also holds for dagger varieties.
Since X is quasi-compact and reduced, by Temkin's non-embedded desingularization theorem, <cit.>, there exists a finite sequence of blowups
X_n→ X_n-1→⋯→ X_0=X
such that X_n is smooth, with X_j=_Z_j-1(X_j-1) the blowup of X_j-1 along a smooth Zariski closed subset Z_j-1 of X_j-1.[We observe that in loc. cit. the blow-ups considered are analytifications of scheme-theoretic blow-ups. However, by the universal property in Remark <ref>, we have natural comparison morphisms between the blow-up in the sense of Definition <ref> and the analytification of the scheme-theoretic blow-up, which are isomorphisms.] In fact, we note that loc. cit. also applies in the case when X is a dagger variety, as any dagger L-algebra is an excellent ring: this follows from a criterion of Matsumura <cit.>, using that Washnitzer algebras are regular, <cit.>, and L has characteristic 0.
In conclusion, the morphism
Y:=X_n⊔(⊔_i=0^n-1Z_i)→ X
is a proper -covering with Y smooth.
Let X be a rigid-analytic (resp. dagger) variety over L. By Proposition <ref>, the Y∈ X_, with Y a smooth rigid-analytic (resp. dagger) variety over L, form a basis of X_. In fact, for any rigid-analytic (resp. dagger) variety Z over L, denoting by Z_ the reduced subspace of Z, the natural map Z_→ Z is a universal homeomorphism, hence it is an -covering.
§.§.§ Differential forms and de Rham cohomology of singular varieties
Next, we want to state a condensed version of Guo's descent result for the -differentials (Proposition <ref>), which will be useful in the following sections. For this, we refer the reader to <cit.> for a discussion on how to translate classical results on coherent cohomology of rigid-analytic varieties into the condensed setting. The following definition is based on Proposition <ref> (and Remark <ref>).
Let X be a rigid-analytic variety over L. Denote by B^_ the basis of the site X_ consisting of all smooth Y∈ X_. For i≥ 0, we define Ω^i_X_ as the sheaf on X_, with values in _L^, associated to the presheaf
( B^_)^→_L^: Y↦Ω^i_Y(Y).
Denote by Ω_X_^∙ the de Rham complex of X, given by
Ω_X_^∙:=[ O_X_d→Ω^1_X_d→Ω^2_X_d→⋯].
We define the de Rham cohomology of X (over L) as
RΓ_(X):=RΓ(X, Ω_X_^∙)∈ D(_L^)
and endow it with the ^-indexed filtration ^⋆RΓ_(X):=RΓ(X, Ω_X_^≥⋆), called Hodge filtration.
The next result shows in particular that in the smooth case the de Rham cohomology defined above agrees with the usual de Rham cohomology.
Let X be a smooth rigid-analytic variety over L. Let π:X_→ X_ be the natural morphism of sites. Then, for each i≥ 0, we have
Rπ_*Ω_X_^i=Ω_X_^i[0]
as complexes of sheaves with values in _L^.
The following boundedness result will also be useful in the sequel.
Let X be a qcqs rigid-analytic variety over L of dimension d. Then, H^i(X, Ω_X_^j) vanishes if i>d or j>d.
From the proposition above we deduce the following corollary.
Let X be a qcqs rigid-analytic variety over L of dimension d. Then, the de Rham cohomology complex RΓ_(X) lies in D^≤ 2d(_L^).
§.§ Period sheaves
In this subsection, we first recall the definitions of the pro-étale sheaf-theoretic version of the classical period rings of Fontaine, and we introduce a log-variant of the pro-étale sheaf-theoretic version of the ring B of analytic functions on Y_, i.e. the log-crystalline pro-étale period sheaf _log. Then, after some preliminary complementary results on the pro-étale period sheaves (and condensed period rings), we recall that, thanks to results of Scholze <cit.>, the pro-étale period sheaves satisfy v-descent.
§.§.§ Pro-étale period sheaves
Let X be an analytic adic space over (_p, _p). We define the integral -structure sheaf O_X^+ and the -structure sheaf O_X as the sheaves on X_ satisfying respectively
O_X^+(Y):= O^+_Y^♯(Y^♯), O_X(Y):= O_Y^♯(Y^♯)
for all perfectoid spaces Y∈ X_.
We recall that, thanks to <cit.>, O_X^+ and O_X are indeed sheaves.
Let X be an analytic adic space over (_p, _p). The following are defined to be sheaves on X_.
* The tilted integral -structure sheaf O_X^♭ +=_φ O_X^+/p, where the inverse limit is taken along the Frobenius map φ.
* The sheaves _inf=W(O_X^♭ +) and _inf=_inf[1/p]. We have a morphism of pro-étale sheaves θ: _inf→O^+_X that extends to θ: _inf→O_X.
* We define the positive de Rham sheaf _^+=_n∈_inf/(ker θ)^n, with filtration given by ^r _^+=(ker θ)^r _^+.
* Let t be a generator of ^1 _^+.[Such a generator exists locally on X_, it is a non-zero-divisor and unique up to unit, by <cit.>.] We define the de Rham sheaf _=_^+[1/t], with filtration ^r _=∑_j∈ t^-j^r+j_^+.
In the following, we denote by v(-) the valuation on O_C^♭ defined as follows: for x∈ O_C^♭, we define v(x) as the p-adic valuation of x^♯∈ O_C.
Let X an analytic adic space over (C, O_C). Let I=[s, r] be an interval of (0, ∞) with rational endpoints, and let α, β∈ O_C^♭ with valuation v(α)=1/r and v(β)=1/s. We define the following sheaves on X_
_inf, I=_inf[p/[α], [β]/p], _I=_n_inf, I/p^n, _I=_I[1/p].
Moreover, we define the sheaf on X_
= _I⊂ (0, ∞)_I
where I runs over all the compact intervals of (0, ∞) with rational endpoints.
We recall the following interpretation of the latter period sheaves defined above in terms of the curves Y_, S (see <ref> for the notation).
Let S^♯ be an affinoid perfectoid space over (C, O_C), and let S=(S^♯)^.
Let I=[s, r]⊂ (0, ∞) be an interval with rational endpoints. Then, we have
_I(S^♯)= O(Y_, S, I), (S^♯)= O(Y_, S).
The following fundamental exact sequences of p-adic Hodge theory summarize the relevant relations between the various rational period sheaves.
Let X an analytic adic space over (C, O_C). Let i≥ 0 be an integer.
We have the following exact sequences of sheaves on X_
0→^φ=p^i→→ 0
0→_p(i)→^φ=p^i→_^+/^i_^+→ 0.
See e.g. <cit.>.
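For orientation, we note (a standard consequence, recorded here only as a reminder) that, evaluating the second of these sequences on the geometric point Spa(C, O_C), where the first pro-étale cohomology group with coefficients in 𝐐_p(i) vanishes, and granting the identification of the sections of the remaining sheaves with the corresponding period rings, one recovers the classical fundamental exact sequence of p-adic Hodge theory
0→𝐐_p(i)→ B^φ=p^i→ B_dR^+/Fil^i B_dR^+→ 0.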
Let X be an analytic adic space over (C, O_C). We have the following exact sequences of sheaves on X_
0→_e→[1/t]φ-1→[1/t]→ 0
0→_p→_e→_/_^+→ 0
where _e:=[1/t]^φ=1.
See <cit.> and <cit.>.
§.§.§ Log-crystalline period sheaves
We recall that Fargues–Fontaine defined in <cit.> the log-crystalline period ring
B_log:=B⊗__𝒪_C^♭^×_(C^♭)^×
where _(-) denotes the symmetric algebra over . The ring B_log is endowed with an action of the Galois group 𝒢_K, a Frobenius φ, and a monodromy operator N for which B_log^N=0=B. Moreover, there is a (non-canonical) isomorphism of rings
B[U]∼→ B_log, U↦log[p^♭]
where B[U] denotes the ring of polynomials over B in the variable U.
Now, keeping the notation of Definition <ref>, we introduce a pro-étale sheaf-theoretic version of the ring B_log.
Let X be an analytic adic space over (C, O_C). Let I=[s, r] be an interval of (0, ∞) with rational endpoints, and let α, β∈ O_C^♭ with valuation v(α)=1/r and v(β)=1/s. We define the following sheaves on X_
_log:=[U], _log, I:=_I[U].
We endow _log (resp. _log, I) with a Frobenius φ and a Galois action extending the ones on (resp. _I) by setting φ(U):=pU, and, for g ∈𝒢_K,
g(U):= U+ log[g(p^♭)/p^♭].
Moreover, we equip _log and _log, I with a monodromy operator N:=-d/dU.
Let us list some useful basic properties of _log.
The action of 𝒢_K on _log defined above commutes with φ and N, and we have Nφ =pφ N.
We have the following exact sequence of sheaves on X_
0→→_log_log→ 0.
For I=[s, r] an interval of (0, ∞) with rational endpoints such that s≤ 1≤ r, we have a natural inclusion
_I↪_^+.
The induced inclusion ↪_^+ extends to a 𝒢_K-equivariant injection _log↪_^+, via sending U to log([p^♭]/p) (see the proof of <cit.>).
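Concerning the commutation relations stated above, the following direct check on monomials may be useful (a small verification, not spelled out above): for b a local section of the coefficient period sheaf (which is killed by N) and n≥ 0, one has
Nφ(bU^n)=N(φ(b)p^nU^n)=-nφ(b)p^nU^n-1=pφ(-nbU^n-1)=pφ N(bU^n),
and N commutes with the 𝒢_K-action since, for g∈𝒢_K, the element log[g(p^♭)/p^♭] is a ℤ_p-multiple of t and hence is killed by N.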
In this article, we will adopt the following notation and conventions.
Condensed period rings We denote by A_inf, B_^+, B_, B, B_log, the condensed rings given respectively by the sheaves _inf, _^+, _, , _log on the site (C, O_C)_ and, for any compact interval I⊂(0, ∞) with rational endpoints, we similarly define A_inf, I, A_I, B_I, B_log, I.[See <cit.> for the relation to the classical topological period rings.]
In addition, we denote by A_, B_^+, B_^+ the condensed version[We take Fontaine's definitions in condensed sets.] of the crystalline and semistable period rings of Fontaine (relative to O_C), <cit.>.
Orientation We fix a compatible system (1, ε_p, ε_p^2, …) of p-th power roots of unity in O_C, which defines an element ε∈ O_C^♭. We denote by [ε]∈ A_inf its Teichmüller lift and μ=[ε]-1∈ A_inf. Furthermore, we let ξ=μ/φ^-1(μ)∈ A_inf and t=log[ε]∈ B.
Let us collect some useful facts on the above-defined (condensed) period rings, that we will repeatedly use in the following.
Let us recall that, for a compact interval I⊂[1/(p-1), ∞) with rational endpoints, we have that A_⊂ A_I (see e.g. <cit.>). In particular, for any such interval I, we also have B_^+ ⊂ B_log, I, via the (non-canonical) identification
B_^+[U]∼→ B_^+, U↦log[p^♭]
where we endow B_^+[U] with a Frobenius φ extending the one on B_^+ by setting φ(U):=pU, a Galois action extending the one on B_^+ as in (<ref>), and we equip it with a monodromy operator N:=-d/dU.
We will also need the following result.
Let I⊂ (0, ∞) be a compact interval with rational endpoints. The system of ideals of the ring A_I defined by (p^n A_I)_n≥ 1 and ({x∈ A_I: μ x∈ p^n A_I})_n≥ 1 are intertwined.
We will proceed by noetherian approximation, adapting the proof of <cit.>. We define Λ:=_p T_1, T_2, and we regard A_inf as a Λ-module via the _p-linear map
Λ→ A_inf, T_1↦ [ε], T_2↦ [p^♭].
First, note that μ is the image of T_1-1 under the map (<ref>). By setting
Λ_inf,I:=Λ[p/T_2^1/r, T_2^1/s/p]
we have that A_inf, I=A_inf⊗_ΛΛ_inf,I. In particular, denoting by Λ_I the p-adic completion of Λ_inf, I, we have A_I=A_inf⊗_ΛΛ_I, where the latter completion is p-adic. Then, it suffices to prove the statement with the ring Λ_I in place of A_I. For this, observing that Λ_I is noetherian, we conclude by the Artin-Rees lemma (<cit.>) for the system of ideals (p^n Λ_I)_n≥ 1 of Λ_I, and (T_1-1)Λ_I⊂Λ_I.
§.§.§ v-descent
As announced, our next goal is to state a consequence of Scholze's v-descent results in <cit.>, which will serve as a tool to prove the main comparison results of this paper for singular rigid-analytic varieties. To state the desired result we need some preliminary definitions.
Let X be an analytic adic space defined over (_p, _p). We denote by
X_v:=X^_v
its v-site.
We recall from <cit.> that the presheaves O^+: Y↦ O_Y^+(Y) and O: Y↦ O_Y(Y) on the v-site of all (κ-small) perfectoid spaces are sheaves. Then, similarly to Definition <ref>, we can give the following definition.
Let X be an analytic adic space defined over (_p, _p). We define the integral v-structure sheaf O_X^+ and the v-structure sheaf O_X on X_v by setting respectively
O_X^+(Y):= O^+_Y^♯(Y^♯), O_X(Y):= O_Y^♯(Y^♯)
for all perfectoid spaces Y∈ X_v.
Then, we can introduce the following notation.
For X an analytic adic space over (C, O_C), starting from the integral v-structure sheaf O_X^+, we define an analogue of the pro-étale period sheaves in <ref> on the v-site X_v.
By a slight abuse of notation, we denote such v-sheaves with the same symbol as the respective pro-étale period sheaves, adding a subscript (-)_v, resp. (-)_, in case of potential confusion.
Let I⊂ (0, ∞) be a compact interval with rational endpoints, and let m≥ 1 be an integer. Let
𝐁∈{_I, , _^+, _^+/^m}.
* For any Z affinoid perfectoid space over (C, O_C), we have H^i_v(Z, 𝐁)=0 for all i>0.
* Let X an analytic adic space over (C, O_C). Let λ: X_v→ X_ denote the natural morphism of sites. Then, we have
Rλ_* 𝐁_v= 𝐁_.
In particular, the pro-étale cohomology of 𝐁 satisfies v-hyperdescent.
By standard reduction steps (see e.g. <cit.> and the references therein), part (1) follows from the almost vanishing of H_v^i(Z, O_X^+) for i>0, which is proven in greater generality in <cit.>.
For part (2), by definition we have λ_* 𝐁_v= 𝐁_. Then, we want to show that R^iλ_*𝐁_v=0 for i>0. This follows from part (1), recalling that R^iλ_*𝐁_v is the sheafification of the presheaf U↦ H^i_v(U, 𝐁) on X_, and that affinoid perfectoid spaces in X_ form a basis of the site.
§.§ B-cohomology and B_^+-cohomology
Now, we can finally define the B-cohomology and the B_^+-cohomology theories for rigid-analytic varieties over C. In the following, for X a rigid-analytic variety over C, we denote by X_, the site introduced in <cit.>, and similarly we define the site X_,. Note that we have a natural morphism of sites
α: X_v→ X_, .
A feature of the site X_, (as opposed to the site X_) is that the pushforward along α retains the information captured by profinite sets.[We refer the reader to <cit.> for a more detailed discussion.]
Let X be a rigid-analytic variety over C. We denote by α:X_v→ X_, the natural morphism of sites. Let I⊂ (0, ∞) be a compact interval with rational endpoints, and let m≥ 1 be an integer. Given
𝐁∈{_I, , _^+, _^+/^m}
we write ℬ=𝐁_(C)_ for the corresponding condensed period ring.
We define the ℬ-cohomology of X as the complex of D(^_ℬ)
RΓ_ℬ(X):=RΓ_, (X, Lη_tRα_*𝐁).
We endow RΓ_ℬ(X) with the filtration induced by the filtration décalée of Definition <ref>.
Since φ(t)=pt, the Frobenius automorphism of induces a φ_B-semilinear automorphism
φ: RΓ_B(X)→ RΓ_B(X)
which preserves the filtration décalée.
Next, we begin to study some basic properties of the B-cohomology theory. As a preparation, we state the following boundedness result which relies on an improved version of the almost purity theorem recently proved by Bhatt–Scholze, <cit.>.
Let X be a rigid-analytic variety over C of dimension d. Let ν: X_→ X_, denote the natural morphisms of sites. Let 𝐁 any of the period sheaves of (<ref>). Then, R^iν_* 𝐁 vanishes for all i>d.
We will show that for any affinoid rigid space X over C of dimension d, we have H^i_(X, 𝐁)=0 for all i>d. By the Noether normalization lemma, <cit.>, there exists a finite morphism f:X→_C^d, where the target denotes the d-dimensional closed unit disk over C. Then, by <cit.>,[This lemma relies on <cit.>, and hence on <cit.>.] the diamond X^ admits a _p(1)^d-torsor for the v-topology
X^→ X^
where X^ is a diamond representable by an affinoid perfectoid space.
Considering the Cartan–Leray spectral sequence associated to (<ref>), by <cit.> and Proposition <ref>, we have an isomorphism
RΓ_(_p(1)^d, H^0(X^, 𝐁))∼→RΓ_(X, 𝐁).
Then, the statement follows from <cit.>, which implies that _p(1)^d≅_p^d has cohomological dimension d.
The following lemma will be useful to reduce the study of the B-cohomology theory to the study of the B_I-cohomology theories for suitable intervals I⊂ (0, ∞).
With notation as in Definition <ref>, the natural maps
Lη_tRα_*→ R_I Lη_tRα_* _I,     Lη_t Rα_*_^+→ R_m Lη_t Rα_*(_^+/^m).
are isomorphisms compatible with the filtration décalée.
We prove that the left map in (<ref>) is an isomorphism compatible with filtrations (for the right map in (<ref>) the proof is similar and easier). By <cit.>, it suffices to show that the limit of the filtrations of the source and the target agree, and that such map is an isomorphism on graded pieces.
For the first assertion, by the uniform boundedness of the complexes Rα_* and Rα_* _I for varying compact intervals I⊂ (0, ∞) with rational endpoints, which follows from Proposition <ref>, we can reduce to showing that the natural map
Rα_*→ R_IRα_* _I
is an isomorphism. This follows recalling that the natural map → R_I _I is an isomorphism: in fact, using <cit.>, we can reduce to checking this on each affinoid perfectoid space Z over (C, O_C), where it follows from the topological Mittag-Leffler property of the countable inverse system {_I(Z)}_I which implies that, for all j>0, we have R^j_I _I(Z)=0 (<cit.>).
Then, by Proposition <ref>(2) (and by twisting), it remains to prove that, for each i≥ 0, the natural map
τ^≤ iRα_*(/t)→ R_I τ^≤ iRα_*(_I/t)
is an isomorphism. For this, we observe that, by <cit.>, for any compact interval I⊂ (0, ∞) with rational endpoints, we have
_I/t=∏_y∈ |Y_, I|^_^+/t^_y(t)_^+
where |Y_, I|^⊂ |Y_, I| denotes the subset of classical points (we note that, by compactness of the interval I, the latter product is a finite direct product of copies of O).[For y∈ |Y_|^, we have _y(t)∈{0, 1 }: in fact, t has a simple zero at ∞ on =Y_/φ^.] Then, using again the topological Mittag-Leffler property of the countable inverse system {_I(Z)}_I for each Z affinoid perfectoid spaces over (C, O_C), we have that
/t=∏_y∈ |Y_|^_^+/t^_y(t)_^+
where |Y_|^⊂ |Y_| denotes the subset of classical points (cf. with <cit.>). Moreover, by (<ref>) we have that
R_I τ^≤ iRα_*(_I/t)=∏_I τ^≤ iRα_*(_I/t).
We conclude that the natural map (<ref>) is an isomorphism, combining (<ref>), (<ref>), (<ref>), and fact that cohomology commutes with direct products.
The next proposition gives in particular a convenient local description of the B-cohomology theory on a smooth affinoid rigid space over C.
With notation as in Definition <ref>, let ν :X_→ X_, denote the natural morphism of sites.
* If X is smooth, we have a natural identification in D(^_ℬ)
RΓ_ℬ(X)=RΓ(X, Lη_tRν_*𝐁).
* If X is a smooth affinoid over C, the natural map of complexes of condensed ℬ-modules
Lη_tRΓ_v(X, 𝐁)→ RΓ(X, Lη_tRα_*𝐁)=RΓ_ℬ(X)
is a filtered quasi-isomorphism. Here, on both sides, the filtration on Lη_t(-) is the filtration décalée of Definition <ref>.
We first prove part (2), adapting the proof of <cit.>, and using Proposition <ref> for the statement on compatibility with the filtrations.
Thus, let X be a smooth affinoid over C. To show that (<ref>) is a filtered quasi-isomorphism, similarly to the proof of Lemma <ref>, by <cit.>, it suffices to show that the limit of the filtrations of the source and the target of (<ref>) agree, and that such a map is an isomorphism on graded pieces. The former statement follows from Proposition <ref>. Then, by Proposition <ref>(2) (and by twisting), it suffices to prove that, for each i≥ 0, the natural map
τ^≤ iRΓ(X, 𝐁/t)→ RΓ(X, τ^≤ iRα_*(𝐁/t))
is a quasi-isomorphism.
By (the proof of) Lemma <ref>, we can reduce to the case 𝐁∈{_I, _^+}. Then, recalling that _I/t is isomorphic to a finite direct product of copies of O (by (<ref>) and the compactness of the interval I), we can further reduce to showing that, for each i≥ 0, the natural map
τ^≤ iRΓ(X, O)→ RΓ(X, τ^≤ iRα_* O)
is a quasi-isomorphism. For this, considering the spectral sequences
H^j-k(X, H^k(Rα_* O)) H^j(X, O)
H^j-k(X, H^k(τ^≤ iRα_* O)) H^j(X, τ^≤ iRα_* O)
it suffices to show that, for j>i and k≤ i, we have H^j-k(X, R^kα_* O)=0, or more generally that
H^r(X, R^kα_* O)=0, for all r>0.
By (the proof of) <cit.>, for any Y smooth rigid-analytic variety over (C, O_C), denoting by ν: Y_→ Y_, the natural morphism of sites, we have a natural isomorphism
Ω_Y_^k(-k)∼→ R^kν_* O of sheaves with values in _C^.[Working with condensed group cohomology instead of continuous group cohomology in the proof of loc. cit..] Then, by -sheafification, we have a natural isomorphism of sheaves with values in _C^
Ω_X_^k(-k)∼→ R^kα_* O.
Denoting by π: X_→ X_ the natural morphism of sites, by Proposition <ref> (using that X is smooth), we have that
Rπ_*Ω_X_^k=Ω_X_^k[0]
as complexes of sheaves with values in _C^. Then, combining (<ref>) and (<ref>), we deduce (<ref>) from the condensed version of Tate's acyclicity theorem (see <cit.>). This concludes the proof of part (2).
For part (1), as the statement is étale local, we can reduce to the case when X is a smooth affinoid rigid space over C. In this case, similarly to part (2), the natural map
Lη_tRΓ_(X, 𝐁)→ RΓ(X, Lη_tRν_*𝐁)
is a quasi-isomorphism. Hence, combining (<ref>) and (<ref>), the statement follows from Proposition <ref>.
§ HYODO–KATO COHOMOLOGY
In this section, following Colmez–Nizioł, <cit.>, we define the Hyodo–Kato cohomology theory for rigid-analytic varieties over C, simplifying the topological treatment given in op. cit. and extending it to the singular case.
§.§ Local Hyodo–Kato morphism
We begin by revisiting in the condensed setup the Hyodo–Kato morphism constructed by Beilinson and Colmez–Nizioł.
Let n≥ 1 be an integer. For a condensed ring R, we denote by R_n the reduction of R modulo p^n.
Log structures We define a (pre-)log structure on a given condensed ring as a (pre-)log structure on the underlying ring. For O a discrete valuation ring, we denote by O^× (resp. O_n^×) the canonical log structure on O (resp. its pullback on O_n), and we denote by O^0 (resp. O_n^0) the log structure on O associated to (→ O, 1↦ 0) (resp. its pullback on O_n). We denote by O_C^× (resp. O_C, n^×) the canonical log structure on O_C (resp. its pullback on O_C, n). We write A_, n^× for the unique quasi-coherent, integral, log structure on A_, n lifting O_C, n^× (see e.g. <cit.>). We denote by A_^× the log structure on A_ associated to the pre-log structure
O_C^♭∖{0}→ A_, x↦ [x].
Note that the log structure A_, n^× is the pullback of the log structure A_^×.
Log-crystalline cohomology We refer the reader to <cit.> for a review of log-crystalline cohomology, and the terminology used in the following. We write PD as a shortening of divided power. Let ( Y, M_ Y, I, γ) be a (p-adic formal) log PD scheme such that ( Y, M_ Y) is quasi-coherent. Let ( X, M_ X) be an integral quasi-coherent (p-adic formal) log scheme over ( Y, M_ Y, I, γ).
We write
(( X, M_ X)/( Y, M_ Y))_
for the log-crystalline site of ( X, M_ X) over ( Y, M_ Y, I, γ), <cit.>, we denote by O_ its structure sheaf, regarded as a sheaf with values in condensed abelian groups, and we define the log-crystalline cohomology
RΓ_(( X, M_ X)/( Y, M_ Y)):= RΓ((( X, M_ X)/( Y, M_ Y))_, O_)∈ D().
In the case the relevant log structures are fixed, they are omitted from the notation.
Condensed period rings Recall from <ref> that O_F=W(k) and O_F̆=W(k̅), where k̅ is a fixed algebraic closure of k. We fix the unique Frobenius-equivariant section k→ O_K/p of O_K/p→ k, in order to regard O_K as an O_F-algebra, and O_C, as well as the condensed period rings of <ref>, as an O_F̆-algebra. We denote r_ϖ^+= O_F T and we equip it with the log structure associated to T.
r_ϖ^+→ O_K^×, T↦ϖ
and we endow it with a Frobenius induced by T↦ T^p, and a monodromy defined by T↦ T.
Then, we define the condensed period rings
A_, n:=H^0_( O^×_C, n/r^_ϖ, n)≃ RΓ_( O^×_C, n/r^_ϖ, n), A_:=_n A_, n, B_^+:=A_[1/p]
and we equip them with their natural action of 𝒢_K, Frobenius φ, and monodromy N, <cit.>, <cit.>.
Let X be an integral quasi-coherent log scheme over O_C, 1^×. Denote by X^0 the pullback of X to O_F̆, 1^0.
Assume that X has a descent to Z a qcqs, fine, log-smooth, log scheme over O_L, 1^× of Cartier type,[See <cit.> for the definition of Cartier type.] for some finite extension L/K.
* There exists a natural isomorphism in D(__p^)
RΓ_( X^0/ O_F̆^0)_ O_F̆B_^+∼→ RΓ_( X/A_^×)_A_B_^+
independent of the descent, and compatible with the actions of Galois, Frobenius φ and monodromy N.[On the right-hand side of (<ref>) the operator N is the monodromy of B_^+, and on the left-hand side of (<ref>) it combines the monodromy of both factors of the tensor product.]
* There exists a natural isomorphism in D(__p^)
RΓ_( X^0/ O_F̆^0)_ O_F̆C∼→ RΓ_( X/ O_C^×)__p
independent of the descent, and compatible with the actions of Galois, Frobenius φ and with the quasi-isomorphism (<ref>) via the morphism RΓ_( X/A_^×)→ RΓ_( X/ O_C^×) induced by Fontaine's map θ: A_→ O_C.
For part bcn:1, the desired morphism (<ref>), satisfying the stated properties, is constructed in <cit.>, and we only need to carry the construction of loc. cit. over to solid _p-vector spaces. By the independence of the descent proven in loc. cit., we can assume for simplicity that L=K.
Relying on <cit.>, we will construct (<ref>) as the composite
ε_:=δ^-1∘(ε_)^N-∘δ
where M^N-:=_r∈M^N^r=0, we denote by δ:B_^+∼→B_^+, N- the natural B_^+-linear isomorphism,[Which is compatible with Galois, Frobenius, and monodromy actions, <cit.>.] and
ε_:RΓ_( X^0/ O_F̆^0)_ O_F̆B_→ RΓ_( X/A_^×)_A_B_
is defined as follows. Considering the morphisms of PD thickenings
r^_ϖ, n   ↠   O_K, 1^×
    ↓                 ↓
A_, n    ↠   O_C, 1^×
for varying n≥ 1, by base change, <cit.>, we have quasi-isomorphisms
RΓ_( Z/(r^_ϖ, n, O_K, 1^×))⊗_r^_ϖ, n^A_, n∼→RΓ_( X/(A_, n, O_C, 1^×)).
Now, denoting by ⊗^ the derived p-adic completion, again by base change <cit.>, after taking the derived inverse limit over n≥ 1, and then inverting p in (<ref>), the right-hand side identifies with (RΓ_( X/A_^×)⊗_A_^A_)__p, and, by <cit.>, the left-hand side is quasi-isomorphic to
(RΓ_( Z^0/ O_F^0)⊗_ O_F^A_)__p≃ ( RΓ_( X^0/ O_F̆^0)⊗_ O_F̆^A_)__p.
where Z^0 is the pullback of Z to O_F, 1^0. We denote by ε_ the morphism induced by (<ref>)
ε_:(RΓ_( X^0/ O_F̆^0)⊗_ O_F̆^A_)__p∼→(RΓ_( X/A_^×)⊗_A_^A_)__p.
To write the target of (<ref>) in terms of the derived solid tensor product, we note that, since the A_-algebra A_ is isomorphic to the p-adic completion of a divided power polynomial algebra of the form A_⟨ x⟩, applying Proposition <ref> with M=RΓ_( X/A_^×) and N=A_⟨ x⟩ regarded in D(_A_^), we obtain the identification
RΓ_( X/A_^×)⊗_A_^A_=RΓ_( X/A_^×)_A_A_.
Next, we want to show that
(RΓ_( X^0/ O_F̆^0)⊗_ O_F̆^A_)__p=( RΓ_( X^0/ O_F̆^0)_ O_F̆A_)__p.
We note that, choosing a basis for the F̆-Banach space B_^+, we can identify it with N^∧_p[1/p], where N=⊕_I O_F̆ for some set I.[In fact, as F̆ is discretely valued, combining <cit.>, any F̆-Banach space is isomorphic to (⊕_I _p)^∧_p__pF̆, for some set I, and the latter is isomorphic to (⊕_I O_F̆)^∧_p[1/p] by Proposition <ref>.]
Since A_ is a lattice in B_^+, there exist n, m∈ such that p^nN^∧_p⊂A_⊂ p^m N^∧_p. Then, (<ref>) follows applying Proposition <ref> with M= RΓ_( X^0/ O_F̆^0) and N=⊕_I O_F̆ regarded in D(_ O_F̆^).
Therefore, in view of (<ref>) and (<ref>), using that the derived solid tensor product commutes with filtered colimits, the composite ε_=δ^-1∘(ε_)^N-∘δ is given by
RΓ_( X^0/ O_F̆^0)_ O_F̆B_^+ ∼→( RΓ_( X^0/ O_F̆^0)_ O_F̆B_^+)^N-
∼→(RΓ_( X/A_^×)_ O_F̆B_^+)^N-
∼← RΓ_( X/A_^×)_A_B_^+
where in (<ref>) we used that the monodromy operator N on RΓ_( X^0/ O_F̆^0) is nilpotent by Lemma <ref> (and base change), and in (<ref>) we used the triviality of the action of N on RΓ_( X/A_^×). This shows part bcn:1.
Part bcn:2 follows from bcn:1. In fact, under the (non-canonical) identification B_^+=B_^+[U], given by (<ref>), applying to (<ref>) the (non-Galois-equivariant) map B_^+→ B_^+: U↦ 0, and then Fontaine's map θ: B_^+→ C, by base change we get (<ref>). The compatibility of (<ref>) with the Galois action is checked in the proof of <cit.>.
Let Z be a quasi-separated, fine, saturated, log-smooth, locally of finite type log scheme over O_K, 1^× of dimension d. Then, the monodromy operator N on RΓ_( Z/W(k)^×) is nilpotent with nilpotency index bounded above by a function depending on d.
By <cit.> there exists a log-blow-up Y→ Z over O_K, 1^× that resolves singularities, and by the proof of <cit.> we have a natural quasi-isomorphism
RΓ_( Z/W(k)^×)∼→RΓ_( Y/W(k)^×)
compatible with monodromy N. Then, the statement follows from <cit.>.
§.§ Beilinson bases and ∞-categories of hypersheaves
In this subsection, we collect some ∞-categorical tools that we will need to extend to rigid-analytic varieties over C the local Hyodo–Kato morphism of <ref>.
[Hypersheaves and hypercompletion]
Let C be a site, and let D be a presentable ∞-category.
We denote by ( C, D) the ∞-category of sheaves on C with values in D.
We recall that
( C, D)=( C, )⊗ D
<cit.>, where ⊗ denotes the tensor product of ∞-categories <cit.>.
We denote by ^( C, ) the full ∞-subcategory of ( C, ) spanned by the hypercomplete objects, <cit.>, and we define the ∞-category of hypersheaves on C with values in D as
^( C, D):=^( C, )⊗ D.
The inclusion ^( C, D)↪( C, D) admits a left adjoint
(-)^:( C, D)→^( C, D)
called hypercompletion, <cit.>.
The following generalization of the notion of Grothendieck basis for a site is due to Beilinson, <cit.>.
Let C be a small site. A Beilinson basis for C is a pair ( B, ℶ) where B is a small category and ℶ: B→ C is a faithful functor satisfying the following property:
* for any V∈ C and any finite family of pairs {(U_α, f_α)} with U_α∈ B and f_α:V→ℶ(U_α), there exists a family {U'_β} with U'_β∈ B and a covering family {ℶ(U_β')→ V} such that each composition
ℶ(U'_β)→ V→ℶ(U_α)
lies in the image of (U'_β, U_α)↪(ℶ(U'_β), ℶ(U_α)).
We endow B with the Grothendieck topology induced from that of C: a sieve in B is a covering sieve if its image under ℶ: B→ C generates a covering sieve in C.
We will use repeatedly the following result.
Let C be a small site, and let ( B, ℶ) be a Beilinson basis for C. For any presentable ∞-category D, the functor ℶ: B→ C induces an equivalence of ∞-categories
^( B, D)∼→^( C, D): F↦ F^ℶ
where the hypersheaf F^ℶ is defined via sending V∈ C to
F^ℶ(V)=_U_∙lim_[n]∈Δ F(U_n)
the colimit running over all simplicial objects U_∙ of B such that ℶ(U_∙)→ V is a hypercover; furthermore, for any such choice of U_∙, the natural map
F^ℶ(V)→lim_[n]∈Δ F(U_n)
is an isomorphism in D.
It suffices to show the statement in the case where D is the ∞-category of anima. By <cit.>, the functor ℶ: B→ C is continuous and induces an equivalence of topoi B^∼→ C^∼. Then, the statement follows by interpreting the notion of hypercompleteness in terms of the Brown–Joyal–Jardine theory of simplicial presheaves, via <cit.>.
We will often apply Lemma <ref> in the case D=D(_A^) is the derived ∞-category of A-modules in , for a given condensed ring A. Note that such D is indeed presentable, since it is compactly generated, as it follows from <cit.>.[Recall also our set-theoretic conventions in <ref>.] Moreover, by <cit.>, we have an equivalence of ∞-categories
D(( C, _A^))∼→^( C, D(_A^))
sending M∈ D(( C, _A^)) to the hypersheaf U↦ RΓ(U, M).
§.§ Globalization
In this subsection, we extend to rigid-analytic varieties over C the local Hyodo–Kato morphism of <ref>, from a suitable Beilinson basis for the site _C,.
Semistable formal schemes
For each prime ℓ, we fix a compatible system (p, p^1/ℓ, p^1/ℓ^2, …) of ℓ-th power roots of p in O_C.
We denote by M_ the category of semistable p-adic formal schemes over ( O_C), that is the category of p-adic formal schemes over ( O_C) having in the Zariski topology a covering by open affines U with semistable coordinates, i.e. admitting an étale ( O_C)-morphism U→(R^□) with
R^□:= O_C{t_0, …, t_r, t^± 1_r+1, …, t_d^± 1}/(t_0⋯ t_r-p^q)
for some 0≤ r≤ d, and q∈_>0 (that may depend on U).
We denote by M_, the subcategory of M_ consisting of the qcqs formal schemes.
We write
(-)_η: M_→_C
for the generic fiber functor.
Log structures Unless stated otherwise, we equip X∈ M_ (resp. X_ O_C/p^n) with the canonical log structure, <cit.>, i.e. the log structure given by the subsheaf associated to the subpresheaf O_ X, ∩ ( O_ X, [1/p])^×↪ O_ X, (resp. its pullback).
For X∈ M_, we denote by X_ O_C/p^0 the pullback to O_F̆, 1^0 of the log scheme X_ O_C/p over O_C, 1^×.
Log de Rham cohomology For X∈ M_, we denote by Ω_ X, log^∙ the logarithmic de Rham complex of X over O_C, and we define the log de Rham cohomology of X (over O_C) as
RΓ_log( X):=RΓ( X, Ω_ X, log^∙)∈ D(_ O_C^).
* For any affine X ∈ M_ with semistable coordinates, there exist a finite extension L/K and a p-adic formal scheme X'→( O_L) admitting an étale ( O_L)-morphism X'→(R') with
R':= O_L{t_0, …, t_r, t^± 1_r+1, …, t_d^± 1}/(t_0⋯ t_r-p^q)
for some 0≤ r≤ d, and q∈_>0, such that X= X'×_( O_L)( O_C): this follows from <cit.>. By <cit.>, the p-adic formal scheme X' can be endowed with a fine log structure, whose base change to ( O_C) gives the log structure on X we started with.
* For any X∈ M_, there exist a finite extension L/K and a descent of X_ O_C/p to a qcqs, fine, log-smooth, log scheme over O_L,1^× of Cartier type: covering X by a finite number of open affines with semistable coordinates, this follows from part (<ref>) and the fact that morphisms of Cartier type are stable under base change.
The following Beilinson basis will be used to define the Hyodo–Kato cohomology for rigid-analytic varieties over C starting from the semistable reduction case.
The pair ( M_, (-)_η) is a Beilinson basis for the site _C,.
By Proposition <ref> (and Remark <ref>), it suffices to show that ( M_, (-)_η) is a Beilinson basis for _C,, i.e. the big étale site of smooth rigid-analytic varieties over C. This follows from Temkin's alteration theorem <cit.>, as shown in <cit.>.
§.§.§ Condensed (φ, N)-modules
Before defining the Hyodo–Kato cohomology for rigid-analytic varieties over C, we need to establish the following terminology.
Let φ: F̆→F̆ denote the automorphism induced by the p-th power Frobenius on the residue field.
* A condensed φ-module over F̆ is a pair (V, φ_V) with V∈_F̆^ and φ_V:V→ V a φ-semilinear automorphism, called Frobenius. A morphism of condensed φ-modules over F̆ is a morphism of condensed modules over F̆, which is compatible with the Frobenius.
* A condensed (φ, N)-module over F̆ is a triple (V, φ_V, N_V) with (V, φ_V) a condensed φ-module over F̆ and
N_V:V→ V a F̆-linear endomorphism, called monodromy operator, such that N_Vφ_V=pφ_V N_V (by abuse of notation, we often denote φ=φ_V and N=N_V).
A morphism of condensed (φ, N)-modules over F̆ is a morphism of condensed modules over F̆, which is compatible with the Frobenius and the monodromy operator.
Note that the category of condensed (φ, N)-modules over F̆ is an abelian category. We denote by D_(φ, N)(_F̆^) the corresponding derived ∞-category; we abbreviate D_(φ, N)(F̆)=D_(φ, N)(_F̆^).
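Let us record a standard observation, which we will not need in this generality but which may help to orient the reader: if (V, φ_V, N_V) is a condensed (φ, N)-module over F̆ whose underlying F̆-vector space is finite-dimensional, then N_V is automatically nilpotent. Indeed, the relation N_Vφ_V=pφ_VN_V says precisely that N_V is a morphism of φ-modules (V, φ_V)→ (V, pφ_V); since k̅ is algebraically closed, the Dieudonné–Manin classification provides a slope decomposition V=⊕_λ V_λ, and a morphism of φ-modules can be nonzero only between isoclinic parts of equal slope, whence
N_V(V_λ)⊆ V_λ-1
for all λ. As only finitely many slopes occur, N_V is nilpotent.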
For X∈ M_, we have that RΓ_( X_ O_C/p^0/ O_F̆^0)__p lies in D_(φ, N)(_F̆^); in fact, by <cit.>, the Frobenius is an automorphism on it.
Then, we are ready to give the following definition, which is based on Lemma <ref> and Proposition <ref>.
[Hyodo–Kato cohomology]
We denote by F_ the hypersheaf on _C, with values in D_(φ, N)(_F̆^) associated to the presheaf
( M_)^→ D_(φ, N)(_F̆^): X↦ RΓ_( X_ O_C/p^0/ O_F̆^0)__p.
For X a rigid-analytic variety over C, we define the Hyodo–Kato cohomology of X as
RΓ_(X):=RΓ(X, F_)∈ D_(φ, N)(_F̆^).
The following result shows in particular that the Hyodo–Kato cohomology of X is a refinement of the de Rham cohomology of X (Definition <ref>). We refer the reader to <cit.> and <ref> for a review of the notion of nuclearity, introduced by Clausen–Scholze, used in the following statement and the rest of the paper.
Let X be a rigid-analytic variety over C.
* (Local-global compatibility) Assume X is the generic fiber of X∈ M_, then the natural map
RΓ_( X_ O_C/p^0/ O_F̆^0)__p→ RΓ_(X)
is an isomorphism in D_(φ, N)(_F̆^).
* (Boundedness and nuclearity) If X is qcqs of dimension d, then RΓ_(X) is represented by a complex of nuclear (solid) F̆-vector spaces, and it lies in D^≤ 2d(_F̆^). Moreover, the monodromy operator N on RΓ_(X) is nilpotent with nilpotency index bounded above by a function depending on d.
* (Hyodo–Kato isomorphism) Assume X is connected and paracompact, then we have a natural isomorphism in D(__p^)
ι_: RΓ_(X)_F̆ C∼→ RΓ_(X).
We start with some preliminary observations.
First, we observe that, for any X∈ M_,, by <cit.>, we have a natural quasi-isomorphism
RΓ_( X_ O_C/p/ O_C^×)__p≃ RΓ_log( X)__p≃ RΓ_( X_C)
and then, by Theorem <ref>, which applies thanks to Remark <ref>, we have a natural quasi-isomorphism
RΓ_( X_ O_C/p^0/ O_F̆^0)__p_F̆C∼→ RΓ_( X_C).
Moreover, we claim that RΓ_( X_ O_C/p^0/ O_F̆^0)__p is represented by a complex of F̆-Banach spaces. For this, we note that, as F̆ is discretely valued, we can choose a basis of the F̆-Banach space C, and then there exists an F̆-Banach space V and an isomorphism
C≅F̆⊕ V in _F̆^.
Now, the claim follows using the quasi-isomorphism (<ref>) combined with the isomorphism (<ref>), observing that, as X_C is qcqs, RΓ_( X_C) is represented by a complex of C-Banach spaces (and hence of F̆-Banach spaces), and a direct summand of a Banach space is a Banach space.
For part mainHK:1, it suffices to show that, given X∈ M_, with generic fiber X, for any simplicial object U_∙ of M_, such that U_∙, η→ X is a -hypercover, the natural map
RΓ_( X_ O_C/p^0/ O_F̆^0)__p→lim_[n]∈ΔRΓ_( U_n, O_C/p^0/ O_F̆^0)__p
is a quasi-isomorphism. First, we note that the map (<ref>) is compatible with the Frobenius and the monodromy operator. Next, we will use an idea from the proof of <cit.>. By (<ref>) we have the following commutative diagram
RΓ_(X_O_C/p^0/O_F̆^0)__p_F̆C   →   lim_[n]∈Δ(RΓ_(U_n, O_C/p^0/O_F̆^0)__p_F̆ C)
        ↓ ≀                                                  ↓ ≀
RΓ_(X_C)   →   lim_[n]∈ΔRΓ_(U_n, C).
The bottom horizontal arrow is a quasi-isomorphism, as RΓ_(-) satisfies -hyperdescent, hence the top horizontal arrow is a quasi-isomorphism too. Moreover, setting M_n:=RΓ_( U_n, O_C/p^0/ O_F̆^0)__p we have
lim_[n]∈Δ(M_n_F̆ C)=(lim_[n]∈ΔM_n)_F̆ C
as it follows from <cit.>, recalling that each M_n is represented by a complex of F̆-Banach spaces (and hence nuclear F̆-vector spaces by <cit.>), and using that[Here, all the limits are derived.]
lim_[n]∈ΔM_n=_m∈lim_[n]∈Δ_≤ mM_n.
Then, considering the fibers of the horizontal arrows in the following commutative diagram
RΓ_(X_O_C/p^0/O_F̆^0)__p   →   lim_[n]∈ΔRΓ_(U_n, O_C/p^0/O_F̆^0)__p
        ↓                                                    ↓
RΓ_(X_O_C/p^0/O_F̆^0)__p_F̆C   ∼→   lim_[n]∈Δ(RΓ_(U_n, O_C/p^0/O_F̆^0)__p_F̆ C)
in order to show that the top horizontal arrow is a quasi-isomorphism, it suffices to prove that, for any M∈ D(_F̆^),
M_F̆ C acyclic  ⟹  M acyclic.
This immediately follows using the isomorphism (<ref>): since the solid tensor product commutes with direct sums, M_F̆C≅ M⊕(M_F̆V), and the cohomology of a direct sum is the direct sum of the cohomologies, so the acyclicity of M_F̆C forces that of M.
For part mainHK:2, to show that RΓ_(X) lies in D^≤ 2d(_F̆^), using the quasi-isomorphism (<ref>) combined with the isomorphism (<ref>) and -hyperdescent, we can reduce to the analogous statement for the de Rham cohomology, which follows from Corollary <ref>. To show that RΓ_(X) is represented by a complex of nuclear F̆-vector spaces, taking a simplicial object U_∙ of M_, such that U_∙, η→ X is a -hypercover, by -hyperdescent and part mainHK:1, we have
RΓ_(X)=lim_[n]∈ΔRΓ_( U_n, O_C/p^0/ O_F̆^0)__p
and then, by <cit.>, we can reduce to the fact that each complex RΓ_( U_n, O_C/p^0/ O_F̆^0)__p is represented by a complex of F̆-Banach spaces, which was shown above. The last statement of part mainHK:2 follows from Lemma <ref>.
For part mainHK:3, we first assume X qcqs. In this case, using (<ref>), the statement follows from (<ref>) and <cit.>, which applies thanks to part mainHK:2. For a general X connected and paracompact, choosing a quasi-compact admissible covering {U_n}_n∈ of X such that U_n⊆ U_n+1, the statement follows from the previous case, using again <cit.> and part mainHK:2.
As a consequence of the Hyodo–Kato isomorphism, we have the following result.
Let X be a connected, paracompact, rigid-analytic variety over C. Then, the Hyodo–Kato complex RΓ_(X) and the de Rham complex RΓ_(X) have the same cohomological dimension.
By Theorem <ref>mainHK:3 and the flatness of C for the solid tensor product _F̆ (<cit.>), for any i≥ 0, we have an isomorphism
H^i_(X)_F̆ C≅ H^i_(X).
Therefore, if H^i_(X) vanishes then H^i_(X) vanishes as well, and the converse statement follows using the isomorphism (<ref>).
§.§ Finiteness in the overconvergent case
In this subsection, we extend the Hyodo–Kato morphism to dagger varieties over C. As we will see, this will follow easily from the results of the previous subsection, using that the solid tensor product commutes with colimits.
Moreover, we will prove a finiteness result for the Hyodo–Kato cohomology of qcqs dagger varieties over C, generalizing already known results to the singular case.
§.§.§ Hyodo–Kato cohomology of dagger varieties over C
We begin with a general construction that will allow us to canonically define a cohomology theory on _L, ^† starting from a cohomology theory defined on _L,. Then, we will specialize this construction to the Hyodo–Kato and the de Rham cohomology theories.
In the following, we keep notation and conventions from <ref>. In particular, we denote by L a characteristic 0 complete valued field with a non-archimedean valuation of rank 1 and residue characteristic p.
Let D be a presentable ∞-category. The continuous functor
l:_L, ^†→_L, : X↦X
given by sending a dagger variety X to its limit X, induces an adjunction
l_*:^(_L, ^†, D)⇄^(_L, , D):l^*
where l^* is given by the composite of the pullback functor l^*:(_L, , D)→(_L, ^†, D) and the hypercompletion functor (<ref>).
For F∈(_L, , D), we denote
F^†:=l^* F∈^(_L, ^†, D).
Now, using Construction <ref> in the case D=D(A)=D(_A^) (see Remark <ref>), we can give the following definition.
[de Rham and Hyodo–Kato cohomology of dagger varieties]
* Let X be a dagger variety over L. Denote by
F_∈^(_C, , D(L))
the hypersheaf given by RΓ_(-).
We define the de Rham cohomology of X as
RΓ_(X):=RΓ(X, F_^†)∈ D(L).
* Let X be a dagger variety over C. Consider the hypersheaf
F_∈^(_C, , D_(φ, N)(F̆))
introduced in Definition <ref>.
We define the Hyodo–Kato cohomology of X as
RΓ_(X):=RΓ(X, F_^†)∈ D_(φ, N)(F̆).
§.§.§ Presentation of a dagger structure
In order to construct the Hyodo–Kato morphism for dagger varieties over C, we will rely on its analogue for rigid-analytic varieties over C, that is Theorem <ref>. For this, we will need to express more explicitly the Hyodo–Kato/de Rham cohomology of a smooth dagger affinoid over C in terms of the respective cohomology of smooth affinoid rigid spaces over C. This is our next goal. We recall from <ref> that given a dagger variety X=(X, O^†) over L with underlying rigid-analytic variety X we say that X is a dagger structure on X.
We have the following important example of dagger structure.
We note that any smooth affinoid rigid space X=(R, R^∘) over L has a dagger structure. In fact, by <cit.>, there exist f_1, …, f_m elements of the Washnitzer algebra L⟨T⟩^† such that R≅ L⟨T⟩/(f_1, …, f_m).[Here, we write T for T_1, …, T_n where n is the dimension of X.] In particular, the dagger variety associated to the dagger algebra L⟨T⟩^†/(f_1, …, f_m) defines a dagger structure on X.
Next, we recall the following convenient definition.
[<cit.>]
Let X be an affinoid rigid space over L. A presentation of a dagger structure on X is a pro-(affinoid rigid space over L) _h∈ X_h with X and X_h rational subspaces of X_1, such that X⋐ X_h+1⋐ X_h,[For Y⊂ Z an open immersion of rigid-analytic varieties over L, we write Y⋐ Z if the inclusion map of Y into Z factors over the adic compactification of Y over L.] and this system is coinitial among rational subspaces containing X.
A morphism of presentations of a dagger structure on an affinoid rigid space over L is a morphism of pro-objects.
The next lemma relates affinoid dagger spaces to presentations of a dagger structure on an affinoid rigid space.
Let X be an affinoid rigid space over L, and let _h X_h be a presentation of a dagger structure on X. We denote by X^† the dagger affinoid over L associated to the dagger algebra R=_h O(X_h).
* The functor
_h X_h↦ X^†
from the category of presentations of a dagger structure on an affinoid rigid space over L to the category of affinoid dagger spaces over L is an equivalence.
* Let (_h X_h)_ denote the (small) étale site of _h X_h, <cit.>. We have natural morphisms of sites
X_→ (X^†)_→ (_h X_h)_
which induce an equivalence on the associated topoi.
Part dagglemma:1 is <cit.>, and part dagglemma:2 is <cit.>.
Entering the proof of <cit.>, we see that given an affinoid dagger space (X, O^†) over L, associated to a dagger algebra L⟨T⟩^†/(f_1, …, f_m), then the corresponding presentation of a dagger structure _h X_h on X can be defined as follows: since L⟨T⟩^† is noetherian, <cit.>, there exists an integer H sufficiently big such that f_1, …, f_m∈ L⟨π^1/HT⟩, where π is a pseudo-uniformizer of O_L; then, we set X_h:=(R_h, R_h^∘) with
R_h:= L⟨π^1/(h+H)T⟩/(f_1, …, f_m).
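As the simplest illustration of the previous remark (taking m=0, i.e. no relations, in which case we may take H=0): for X the closed unit disk over L, with dagger structure given by the Washnitzer algebra L⟨ T⟩^†, the corresponding presentation of a dagger structure is the pro-system of closed disks of radii shrinking to 1, namely
X_h:=(R_h, R_h^∘) with R_h:= L⟨π^1/hT⟩,
so that X⋐ X_h+1⋐ X_h. By the next lemma (together with the subsequent remark), the de Rham cohomology (and, for L=C, the Hyodo–Kato cohomology) of the dagger disk is then computed as the colimit over h of the corresponding cohomology of the slightly larger rigid-analytic disks X_h.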
Then, the next result follows formally from Lemma <ref> by étale hyperdescent (cf. <cit.>).
Fix notation as in <ref> and Construction <ref>. Let F∈^(_L, , D) be the pullback of a hypersheaf in
^(_L,, D). Let X be a smooth dagger affinoid over L with corresponding presentation _h X_h. Then, we have
RΓ(X, F^†)=_h∈RΓ(X_h, F).
The assumptions of the previous lemma are designed to be satisfied by the hypersheaves defining the de Rham and the Hyodo–Kato cohomology:
We note that Lemma <ref> applies to F= F_, thanks to Proposition <ref>, and it applies to F= F_ thanks to Theorem <ref>mainHK:1.
Next, we recall that the category of partially proper dagger varieties is equivalent to the category of partially proper rigid-analytic varieties, via the functor (<ref>), <cit.>. In the situation of Lemma <ref>, such equivalence preserves cohomology:
Fix notation as in <ref> and Construction <ref>. Let F∈^(_L, , D) be the pullback of a hypersheaf in
^(_L,, D). Let X be a partially proper dagger variety over L. Then, there exists a natural isomorphism
RΓ(X, F^†)∼→RΓ(X, F).
Recalling that any partially proper dagger variety admits an admissible covering by Stein spaces (see the proof of <cit.>), we may assume that X is a Stein space. Then, let {U_n}_n∈ be a Stein covering of X. Writing RΓ(X, F^†)=R_n∈RΓ(U_n, F^†), and similarly RΓ(X, F)=R_n∈RΓ(U_n, F), it suffices to show that, for a fixed n∈, the natural map RΓ(U_n+1, F^†)→ RΓ(U_n, F^†) factors through RΓ(U_n+1, F). Renaming V:=U_n and W:=U_n+1, by Proposition <ref> (and Remark <ref>), we can choose an -hypercover W_∙→ W with each W_m smooth dagger affinoid over L; via pullback along the open immersion V→ W, we obtain an -hypercover V_∙→ V with each V_m smooth dagger affinoid over L. Then, we may reduce to the case V and W are smooth over L, which follows from Lemma <ref>.
§.§.§ Semistable weak formal schemes
In order to study the Hyodo–Kato cohomology of dagger varieties over C (Definition <ref>), we will define a convenient Beilinson basis for the site _C, ^†. In addition to Notation <ref>, we introduce the following notation. We refer the reader to <cit.> for the basics on the theory of weak formal schemes, and to <cit.> for an analogue of Raynaud's theorem relating the categories of weak formal schemes and dagger varieties.
We denote by M_^† the category of weak formal schemes over ( O_C) having in the Zariski topology a covering by open affines U with semistable coordinates, i.e. admitting an ( O_C)-morphism U→(R^□†) with
R^□†:= O_C[t_0, …, t_r, t^± 1_r+1, …, t_d^± 1]^†/(t_0⋯ t_r-p^q)
for some 0≤ r≤ d, and q∈_>0 (that may depend on U). We denote by M_, ^† the subcategory of M_^† consisting of the qcqs weak formal schemes.
We write
(-)_η: M_^†→_C^†
for the generic fiber functor.
The pair ( M_^†, (-)_η) is a Beilinson basis for the site _C, ^†.
As in the proof of Proposition <ref>, the statement follows from Proposition <ref> (and Remark <ref>) combined with <cit.>.
The following result is an overconvergent version of Theorem <ref>.
Let X be a dagger variety over C.
* (Local description) Assume X is the generic fiber of X∈ M_^†, then there is a natural quasi-isomorphism
RΓ_(X)≃ RΓ_( X_k̅/ O_F̆^0)
compatible with Frobenius φ and monodromy N. Here, the right-hand side denotes the (rational) log-rigid cohomology of X_k̅ over O_F̆^0, <cit.>, <cit.>.
* (Hyodo–Kato isomorphism) Assume X is connected and paracompact, then we have a natural isomorphism in D(__p^)
ι_: RΓ_(X)_F̆ C∼→ RΓ_(X).
Part mainHKover:1 follows from <cit.>.[Note that Theorem <ref>mainHK:1 implies that, for X a smooth rigid-analytic/dagger variety over C, the Hyodo–Kato cohomology RΓ_(X) agrees with the one defined in <cit.> considered in D().] Part mainHKover:3 for X smooth affinoid follows from Theorem <ref>mainHK:2 and Lemma <ref> (together with Remark <ref>), using that the tensor product _F̆ commutes with filtered colimits. From Lemma <ref> we also deduce that, for X smooth affinoid, RΓ_(X) is represented by a complex of nuclear F̆-vector spaces (recall that the category of nuclear F̆-vector spaces is closed under colimits). Therefore, the same argument used in the proof of Theorem <ref>mainHK:3 shows part mainHKover:3 in general.
§.§.§ Finiteness
Now, we state the promised finiteness result for the Hyodo–Kato cohomology groups of a qcqs dagger variety over C, and we give a bound on the slopes of such cohomology groups regarded as φ-modules.
Let X be a qcqs dagger variety over C. Let i≥ 0.
* The condensed cohomology group H_^i(X) (resp. H_^i(X)) is a finite-dimensional condensed vector space over F̆ (resp. over C).
* The vector bundle on associated to the finite φ-module H_^i(X) over F̆ has Harder–Narasimhan slopes ≥ -i.
In the case when X is the generic fiber of X∈ M_, ^†, by Theorem <ref>mainHKover:1, part slopp:1 follows from a result of Grosse-Klönne, <cit.> (and base change), and part slopp:2 follows from <cit.>. In the general case, we take a simplicial object U_∙ of M_, ^† such that U_∙, η→ X is a -hypercover, and we consider the spectral sequence
E_1^j, i-j=H_^i-j( U_j, η) H_^i(X).
Then, part slopp:1 for the Hyodo–Kato cohomology follows immediately from the previous case, the spectral sequence (<ref>), and <cit.>. Similarly, part slopp:1 for the de Rham cohomology follows from the previous case and an analogous spectral sequence for the de Rham cohomology.[Alternatively, part slopp:1 for the de Rham cohomology follows from part slopp:1 for the Hyodo–Kato cohomology and the Hyodo–Kato isomorphism, Theorem <ref>mainHKover:3.] For part slopp:2, applying to (<ref>) the exact functor E(-) sending a finite φ-module over F̆ to the associated vector bundle on , and then twisting by O(i), we deduce that the vector bundle E(H_^i(X))⊗ O(i) has non-negative Harder–Narasimhan slopes: in fact, by the previous case, for all j, the vector bundle E(H_^i-j( U_j, η))⊗ O(i) has non-negative Harder–Narasimhan slopes, and then the claim follows from the classification of vector bundles on .
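For the reader's convenience, let us spell out the numerics used in the last step; we only use that twisting by O(i) shifts all Harder–Narasimhan slopes by i (with the usual convention that O(i) is the slope-i line bundle). The conclusion of part slopp:2 is equivalent to the non-negativity of the Harder–Narasimhan slopes of E(H_^i(X))⊗ O(i), while, for the E_1-terms of (<ref>), the previous case gives
slopes of E(H_^i-j( U_j, η))⊗ O(i) ≥ -(i-j)+i = j ≥ 0.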
§ B-COHOMOLOGY
This section is devoted to the proof of the following main result, which compares the B-cohomology with the Hyodo–Kato cohomology.
Let X be a connected, paracompact, rigid-analytic variety defined over C. Then, we have a natural isomorphism in D(_B^)
RΓ_B(X)≃ (RΓ_(X)_F̆B_log)^N=0
compatible with the action of Frobenius φ. If X is the base change to C of a rigid-analytic variety defined over K, then (<ref>) is 𝒢_K-equivariant.
We will first prove Theorem <ref> in the case when X has semistable reduction. This will be done in two main steps: we first compare, in <ref>, the B-cohomology with the log-crystalline cohomology over A_, and then, in <ref>, we relate the latter with the Hyodo–Kato cohomology.
In the following, we keep the notation and conventions introduced in <ref> and <ref>.
§.§ The comparison with the log-crystalline cohomology over A_
We begin by comparing the B-cohomology with the log-crystalline cohomology over A_.
Let X be a qcqs semistable p-adic formal scheme over ( O_C) and let I⊂[1/(p-1), ∞) be a compact interval with rational endpoints. Then, there is a natural isomorphism in D(__p^)
RΓ_B_I( X_C)≃ RΓ_(𝔛_ O_C/p/A_^×)_A_B_I
compatible with the action of Frobenius φ.
In the first instance, we prove a local version of Theorem <ref>, and then we globalize the result. Therefore, we begin by defining the local setting in which we will work.
Let 𝔛=(R) be a connected affine p-adic formal scheme over ( O_C) admitting an étale ( O_C)-morphism X→(R^□) with
R^□:= O_C{t_0, …, t_r, t^± 1_r+1, …, t_d^± 1}/(t_0⋯ t_r-p^q)
for some 0≤ r≤ d, and q∈_>0.
We denote by R_∞^□ the perfectoid R^□-algebra defined by R_∞^□:=(_m R_m^□)^∧_p with
R^□_m:= O_C{t_0^1/p^m, …, t_r^1/p^m, t^± 1/p^m_r+1, …, t_d^± 1/p^m}/(t_0⋯ t_r-p^q/p^m)
and we put X_C, ∞^□:=(R_∞^□[1/p], R_∞^□). We set
R_∞:=(R⊗_R^□ R_∞^□)^∧_p
and we note that (see also <cit.>)
X_C, ∞:=(R_∞[1/p], R_∞)→ X_C
is an affinoid perfectoid pro-étale cover of X_C with Galois group
Γ:=_p(1)^d≅_p^d
where the latter isomorphism is given by the choice of a compatible system of p-th power roots of unity in O_C (see <ref>). We denote by γ_1, …, γ_d the generators of Γ defined by
γ_i:=(ε^-1, 1, …, 1, ε, 1, …, 1) for i=1, …, r
γ_i:=(1, …, 1, ε, 1, …, 1) for i=r+1, …, d
where ε sits on the i-th entry.
§.§.§ The condensed ring _I(R_∞)
In the setting of Notation <ref>, given any pro-étale period sheaf of <ref>, we put
(R_∞^□):=( X_C, ∞^□) (R_∞):=( X_C, ∞)
which we regard as condensed rings.
We recall from <cit.> that we have the following decomposition of _inf(R_∞^□)
_inf(R_∞^□)≅ A_inf(R^□)⊕_inf(R_∞^□)^
where A_inf(R^□) denotes the “integral” part, and _inf(R_∞^□)^ the “nonintegral part”. We have
A_inf(R^□)≅ A_inf{X_0, …, X_r, X_r+1^± 1, …, X_d^± 1}/(X_0⋯ X_r-[p^♭]^q)
where X_i:=[t_i^♭], and the convergence is (p, μ)-adic. Such decomposition lifts to _inf(R_∞) as follows
_inf(R_∞)≅ A_inf(R)⊕_inf(R_∞)^
where A_inf(R) is the unique lift of the étale (R^□/p)-algebra R/p, along θ: A_inf(R^□)↠ R^□, to a (p, μ)-adically complete, formally étale A_inf(R^□)-algebra.
Given a compact interval I⊂ (0, ∞) with rational endpoints, by <cit.>, we have
_I(R_∞)≅_inf(R_∞)⊗_A_infA_I
where the completion ⊗_A_inf is p-adic. Then, one has similar decompositions as (<ref>) replacing _inf with _I, resp. _I, and A_inf(R) with A_I(R):=A_inf(R)⊗_A_infA_I, resp. B_I(R):=A_I(R)[1/p] (where the completion ⊗_A_inf is p-adic).
Let I⊂ (0, ∞) be a compact interval with rational endpoints. We claim that we have a natural isomorphism
_I(R_∞)≅_inf(R_∞)_A_infA_I
In particular, inverting p, using that the solid tensor product commutes with filtered colimits, we have an isomorphism
_I(R_∞)≅_inf(R_∞)_A_infB_I.
To show (<ref>), up to twisting by the Frobenius, we can assume that I⊂[1/(p-1), ∞). Now, we use the isomorphism (<ref>), and then we apply Proposition <ref> taking M=_inf(R_∞) and N=A_inf, I (see <ref> for the notation), regarded as objects of _A_inf^, thus obtaining that
_inf(R_∞)_A_inf(A_inf, I)^∧_p≅ (_inf(R_∞)⊗_A_infA_inf, I)^∧_p
where (-)^∧_p denotes the derived p-adic completion. Since A_inf, I is p-torsion-free, thanks to Lemma <ref> the derived p-adic completion (A_inf, I)^∧ identifies with A_I. Then, it remains to show that the derived p-adic completion appearing on the right-hand side of (<ref>) is underived: by <cit.> and Remark <ref>, we have μ^p-1/p∈ A_⊂ A_I, and therefore, for any integer n≥ 1, we have that A_inf, I/p^n=A_I/p^n≅ A_I/(p^n, μ^n') for a large enough integer n';[In fact, one can take n':=(p-1)n.] now, it suffices to observe that, by <cit.>, (p^n, μ^n') is an _inf(R_∞)-regular sequence and _inf(R_∞)/(p^n, μ^n') is flat over A_inf/(p^n, μ^n'),[In fact, loc. cit. translates to the condensed setting, for the ideal sheaf (p^n, μ^n') in the condensed ring _inf(R_∞), observing that both _inf(R_∞)/(p^n, μ^n') and A_inf/(p^n, μ^n') are discrete.] hence
_inf(R_∞)⊗^_A_infA_inf, I/(p^n, μ^n')
is concentrated in degree 0, and the claim follows.
§.§.§ Local computations
Next, by a standard argument, we express locally the B-cohomology and the B_^+-cohomology in terms of Koszul complexes.
Let I⊂ (0, ∞) be a compact interval with rational endpoints, and let m≥ 1 be an integer. Given
𝐁∈{_I, , _^+, _^+/^m}
we write ℬ=𝐁_(C)_ for the corresponding condensed period ring. In the setting of Notation <ref>, we have a natural isomorphism in D(^_ℬ)
RΓ_ℬ( X_C)≃ Lη_t_𝐁(R_∞)(γ_1-1, …, γ_d-1)
compatible with the filtration décalée of Definition <ref>.
Using Proposition <ref>3.11.2, it remains to check that
RΓ_( X_C, 𝐁)≃_𝐁(R_∞)(γ_1-1, …, γ_d-1).
Considering the Cartan–Leray spectral sequence associated to the affinoid perfectoid pro-étale cover X_C, ∞→ X_C of (<ref>) with Galois group Γ (<cit.>), we have the following natural isomorphism in D(^_ℬ)
RΓ_(Γ, 𝐁(R_∞))∼→RΓ_( X_C, 𝐁).
Then, the statement follows from <cit.>.
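For the reader's convenience, we briefly recall the two standard pieces of notation used above; this is only a reminder, with normalizations consistent with the references already cited. For commuting endomorphisms f_1, …, f_d of M (say, a condensed module), the Koszul complex _M(f_1, …, f_d), sitting in degrees 0, …, d, can be defined inductively by
_M(f_1):=(M f_1→ M),   _M(f_1, …, f_d):=fib(_M(f_1, …, f_d-1)f_d→_M(f_1, …, f_d-1));
under suitable completeness assumptions on M (as in the reference invoked at the end of the proof above), for Γ≅_p^d acting on M with commuting topological generators γ_1, …, γ_d, the complex _M(γ_1-1, …, γ_d-1) computes the condensed group cohomology RΓ_(Γ, M). As for the décalage functor: for a non-zero-divisor t and a cochain complex M^∙ of t-torsion-free modules, one sets
(η_tM^∙)^i:={x∈ t^iM^i : dx∈ t^i+1M^i+1},
and Lη_t denotes the induced functor on derived categories; one has H^i(η_tM^∙)≅ H^i(M^∙)/H^i(M^∙)[t], so that, in particular, if t kills all the cohomology groups of M^∙, then Lη_tM^∙ is acyclic — a property used repeatedly in what follows.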
Our next goal is to express the right-hand side of (<ref>) in terms of differential forms. For this, recalling the notation introduced in Remark <ref> and Remark <ref>, in the setting of Notation <ref> we denote the dual basis of the log A_inf-derivations (see <cit.>) as follows:
∂_i:=∂/∂log(X_i):A_inf(R)→ A_inf(R)
for 1≤ i≤ d. Given I⊂ (0, ∞) a compact interval with rational endpoints, by slight abuse of notation, we will also denote by ∂_i the extension of the derivatives (<ref>) to A_I(R) or B_I(R).
Let I⊂[1/(p-1), ∞) be a compact interval with rational endpoints. In the setting of Notation <ref>, we have a B_I-linear quasi-isomorphism
_A_inf(R)(∂_1, …,∂_d)_A_infB_I∼→_B_I(R)(∂_1, …,∂_d)∼→Lη_t__I(R_∞)(γ_1-1, …, γ_d-1)
compatible with the action of Frobenius φ.
We will generalize the proof of <cit.>. Since μ divides γ_i-1 in B_I(R) for all i, i.e. Γ acts trivially on B_I(R)/μ, and since, by the choice of I, the elements μ and t differ by a unit in B_I, by <cit.> we have that
η_t_B_I(R)(γ_1-1, …, γ_d-1)≃_B_I(R)(γ_1-1/t, …, γ_d-1/t).
Using that A_⊂ B_I, by the choice of I, the arguments in <cit.> and <cit.> show that, for each i, we have the following Taylor expansion in B_I(R)
(γ_i-1)/t=∂/∂log(X_i)· h, with h:=1+∑_j≥ 1t^j/(j+1)!(∂/∂log(X_i))^j
where h-1 is topologically nilpotent, in particular the factor h is an automorphism of B_I(R); furthermore, the latter automorphism is φ-equivariant.[To check this one can argue as in the proof of <cit.>.] Then, recalling the notation (<ref>), we deduce that the maps
((B_I(R)∂_i→B_I(R))(𝕀, h)⟶(B_I(R)γ_i-1→B_I(R))
for 1≤ i≤ d, induce a φ-equivariant quasi-isomorphism
_B_I(R)(∂_1, …, ∂_d)∼→η_t_B_I(R)(γ_1-1, …, γ_d-1).
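Let us also make explicit the elementary computation behind the expansion above: granting that γ_i acts on B_I(R) as exp(t·∂/∂log(X_i)) — which is the content of the arguments just cited — and using that the divided powers t^k/k! lie in A_⊂ B_I by the choice of I, one has, purely formally,
γ_i-1 = exp(t·∂_i)-1 = ∑_k≥ 1t^k/k!∂_i^k = t·∂_i·(1+∑_j≥ 1t^j/(j+1)!∂_i^j) = t·∂_i· h,
which is exactly the identity (γ_i-1)/t=∂_i· h used above.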
Next, we show that the natural map
η_t_B_I(R)(γ_1-1, …, γ_d-1)→η_t__I(R_∞)(γ_1-1, …, γ_d-1)
is a quasi-isomorphism. For this, recalling Remark <ref>, we have
_I(R_∞)≅ A_I(R)⊕_I(R_∞)^
where _I(R_∞)^ denotes the “nonintegral part” of _I(R_∞). Then, as Lη_μ(-) commutes with filtered colimits (hence with inverting p), it suffices to show that
Lη_μ__I(R_∞)^(γ_1-1, …, γ_d-1)≃ 0.
In order to show (<ref>) we need to prove that μ kills H^i_(Γ, _I(R_∞)^) for all i∈. By <cit.> (and <cit.>), the element μ kills H^i_(Γ, _inf(R_∞)^) for all i ∈; we then conclude by Corollary <ref> below.
Now, combining (<ref>) with (<ref>), to prove the statement it remains to check that we have an isomorphism
_A_inf(R)(∂_1, …, ∂_d)_A_infB_I∼→_B_I(R)(∂_1, …, ∂_d).
Since the solid tensor product commutes with filtered colimits, it suffices to show (<ref>) replacing B_I=A_I[1/p] with A_I. Then, using Proposition <ref>, we reduce to showing that we have an isomorphism
_A_inf(R)(∂_1, …, ∂_d)⊗^_A_infA_I∼→_A_I(R)(∂_1, …, ∂_d)
where the completion ⊗^_A_inf is derived p-adic. For this, we observe that the latter completion is an underived (termwise) p-adic completion: as recalled in Remark <ref>, for any integer n≥ 1, we have A_I/p^n≅ A_I/(p^n, μ^n') for a large enough integer n', and, by <cit.>, (p^n, μ^n') is an A_inf(R)-regular sequence with A_inf(R)/(p^n, μ^n') flat over A_inf/(p^n, μ^n').
We used crucially the following result.
Let A be a condensed ring, and let (f)⊂ A be a principal ideal sheaf. Let M be an (f)-adically complete A-module in . Consider the following condition on a given A-module P in :
for every j, n≥ 1 the map _j^A(P, M/f^n')→_j^A(P, M/f^n) vanishes for some n'>n.
For any bounded complex P^∙ of A-modules in , with each P^i and H^i(P^∙) satisfying (<ref>), for all i∈ we have a natural isomorphism in
H^i(P^∙⊗_A M)≅ H^i(P^∙)⊗_AM
where the completion ⊗_A is (f)-adic.
See the proof of <cit.>.
Let I⊂ [1/(p-1), ∞) be a compact interval with rational endpoints. Let us denote N_∞:= _inf(R_∞)^. Then, for every i∈, we have a natural isomorphism in
H^i_(Γ, N_∞⊗_A_infA_I)≅ H^i_(Γ, N_∞)⊗_A_infA_I
where the completion ⊗_A_inf is p-adic.
We will show that the condition (<ref>) of Lemma <ref> holds for A=A_inf, f=p, M=A_I, and each P∈{_inf(R_∞), H^i_(Γ, N_∞), H^i_(Γ, N_∞/μ)}, adapting the argument of <cit.> to our setting.
For P=_inf(R_∞), we even have that P⊗_A_inf^A_I/p^n∈ D() is concentrated in degree 0 for all n≥ 1: in fact, recalling that A_I/p^n≅ A_I/(p^n, μ^n') for a large enough integer n' (see Remark <ref>), it suffices to apply <cit.>, which implies that (p^n, μ^n') is an _inf(R_∞)-regular sequence with _inf(R_∞)/(p^n, μ^n') flat over A_inf/(p^n, μ^n') (noting that the latter two condensed rings are discrete).
Next, we claim that the case P=H^i_(Γ, N_∞) follows from the case P=H^i_(Γ, N_∞/μ). In fact, by <cit.> (and <cit.>), μ kills every H^j_(Γ, N_∞), therefore, the long exact sequence in condensed group cohomology associated to the short exact sequence 0→ N_∞μ→N_∞→ N_∞/μ→ 0, gives the short exact sequences
0→ H^i_(Γ, N_∞)→ H^i_(Γ, N_∞/μ)→ H^i+1_(Γ, N_∞)→ 0.
By <cit.>, H^i_(Γ, N_∞) vanishes for a large enough integer i, hence the claim follows by descending induction on i.
Finally, for P=H^i_(Γ, N_∞/μ), we claim that P satisfies the following conditions that imply the condition (<ref>) of Lemma <ref>:
* P is p-torsion-free;
* for every n≥ 1, P/p^n is a filtered colimit of A_inf-modules isomorphic to A_inf/(φ^-r(μ), p^n) for variable r≥ 0.
In fact, recalling <cit.>, condition afr:1 follows from <cit.>, and condition afr:2 follows from <cit.> and Lazard's theorem (<cit.>), observing that P/p^n is discrete. It remains to show that such conditions on P imply the condition (<ref>) of Lemma <ref> in our case. For this, by condition afr:1 and Lemma <ref>, we have that P⊗_A_inf^A_inf/p^n= P⊗_^/p^n is concentrated in degree 0, and then by condition afr:2 and the μ-torsion-freeness of A_inf/p^n, we can reduce to checking that, for every n≥ 1 and r≥ 0, the map
_1^A_inf/p^n'(A_inf/(φ^-r(μ), p^n'), A_I/p^n')→_1^A_inf/p^n(A_inf/(φ^-r(μ), p^n), A_I/p^n)
vanishes for some n'>n. In order to check this, we observe that the source of the map (<ref>) identifies with (A_I/p^n')[φ^-r(μ)] (and the target with (A_I/p^n)[φ^-r(μ)]), and we conclude observing that φ^-r(μ) divides μ in A_inf,[In fact, for any r≥ 1, one has μ=(∏_j=0^r-1φ^-j(ξ))·φ^-r(μ).] and, by Lemma <ref>, the map (A_I/p^n')[μ]→ A_I/p^n vanishes for some n'>n.
§.§.§ The functorial isomorphism
As we will see in this subsection, the source of the quasi-isomorphism (<ref>) of Lemma <ref> computes
RΓ_(𝔛_ O_C/p/A_^×)_A_B_I.
Therefore, Lemma <ref>, combined with Lemma <ref>, already provides a local version of the desired Theorem <ref>. However, such quasi-isomorphism depends on the choice of the coordinates X→(R^□), introduced in Notation <ref>. To have a functorial quasi-isomorphism, we will rely on a modification of the method of “all possible coordinates” pioneered by Bhatt–Morrow–Scholze in <cit.>, and used by Česnavičius–Koshikawa in <cit.> to prove the functoriality with respect to étale maps of the absolute crystalline comparison isomorphism for the A_inf-cohomology in the semistable case.
Let us begin resuming the setting of <cit.>.
We denote by k_C the residue field of O_C.
* Let 𝔛=(R) be an affine p-adic formal scheme over ( O_C), such that every two irreducible components of (R⊗_ O_Ck_C) intersect, and such that there exist finite sets Σ and Λ≠∅, and a closed immersion of p-adic formal schemes over ( O_C)
X↪(R_Σ^□)×_( O_C)∏_λ∈Λ(R_λ^□)=:(R_Σ, Λ^□)
where
R_Σ^□:= O_C{t_σ^± 1:σ∈Σ}
the induced map X→(R_Σ^□) is a closed immersion, and, for each λ∈Λ,
R_λ^□:= O_C{t_λ, 0, …, t_λ, r_λ, t_λ, r_λ+1^± 1, …, t_λ, d^± 1}/(t_λ, 0⋯ t_λ, r_λ-p^q_λ)
for some 0≤ r_λ≤ d, and q_λ∈_>0, and the induced map X→(R_λ^□) is étale.
We will denote by M_R the canonical log structure on R, Notation <ref>.
* We define
A_inf, Σ, Λ^□:=A_inf(R_Σ^□)⊗_A_inf⊗_λ∈ΛA_inf(R_λ^□)
where the completions are (p, μ)-adic.
We will denote by M_inf, Σ, Λ^□ the log structure on A_inf, Σ, Λ^□ associated to the log structures on A_inf(R_Σ^□) and A_inf(R_λ^□), for varying λ∈Λ, defined in <cit.>.
* Similarly to Notation <ref>, we define R_Σ, Λ, ∞^□ a perfectoid R_Σ, Λ^□-algebra such that
(R_Σ, Λ, ∞^□[1/p], R_Σ, Λ, ∞^□)→ X_C
is an affinoid perfectoid pro-étale cover of X_C with Galois group
Γ_Σ, Λ:=Γ_Σ×∏_λ∈ΛΓ_λ≅_p^|Σ|×∏_λ∈Λ_p^d
(see also <cit.>).
We denote by (γ_σ)_σ∈Σ, (γ_λ, i)_λ∈Λ, 1≤ i≤ d the generators of Γ_Σ, Λ defined by
γ_σ:=(1, …, 1, ε, 1, …, 1) for σ∈Σ
where ε sits on the σ-th entry, and
γ_λ, i:=(ε^-1, 1, …, 1, ε, 1, …, 1) for i=1, …, r_λ
γ_λ, i:=(1, …, 1, ε, 1, …, 1) for i=r_λ+1, …, d
where ε sits on the i-th entry.
The base change of (<ref>) along the generic fiber of (<ref>) defines an affinoid perfectoid pro-étale cover of X_C
X_C, Σ, Λ, ∞:=(R_Σ, Λ, ∞[1/p], R_Σ, Λ, ∞)→ X_C
with Galois group Γ_Σ, Λ.
* Given any pro-étale period sheaf of <ref>, we set
(R_Σ, Λ, ∞):=( X_C, Σ, Λ, ∞)
and we regard it as a condensed ring.
In Notation <ref>, the assumption on the special fiber (R⊗_ O_Ck_C) guarantees that each irreducible component of such special fiber is cut out by a unique t_λ, i with 0≤ i≤ r_λ (see also <cit.>).
We note that Remark <ref> and Remark <ref> hold in the setting of Notation <ref>, i.e. with R_Σ^□ and R_λ^□ in place of R^□. In particular, to fix the notation, we let
A_inf(R_Σ^□)≅ A_inf{X_σ^± 1|σ∈Σ}
A_inf(R_λ^□)≅ A_inf{X_λ, 0, …, X_λ, r_λ, X_r_λ+1^± 1, …, X_λ, d^± 1}/(X_λ, 0⋯ X_λ, r_λ-[p^♭]^q_λ)
be the isomorphisms (<ref>) for R_Σ^□ and R_λ^□.
For the proof of Theorem <ref>, we will need the next result from <cit.> on the log-crystalline cohomology over A_, that we translate here in the condensed setting.
In the setting of Notation <ref>, we have a φ-equivariant identification
RΓ_(𝔛_ O_C/p/A_^×)≃Ω_D_Σ, Λ(R)^∙:=_D_Σ, Λ(R)((∂_σ)_σ∈Σ,(∂_λ, i)_λ∈Λ, 1≤ i≤ d)
where D_Σ, Λ(R) is an A_-algebra in characterized by the following properties: D_Σ, Λ(R) is p-adically complete, and, for each integer n≥ 1, D_Σ, Λ(R)/p^n is the log PD envelope of [Here, R/p is equipped with the pullback of the canonical log structure M_R on R, and A_inf, Σ, Λ^□⊗_A_infA_/p^n is endowed with the pullback of the log structure M_inf, Σ, Λ^□ on A_inf, Σ, Λ^□, Notation <ref>.]
(A_inf, Σ, Λ^□⊗_A_infA_/p^n, M_inf, Σ, Λ^□) ↠ (R/p, M_R) over A_^×/p^n ↠ O_C^×/p.
Here, ∂_σ:=∂/∂log(X_σ), resp. ∂_λ, i:=∂/∂log(X_λ, i), are as in (<ref>) with R_Σ^□, resp. R_λ^□, in place of R.
This is the content of <cit.>, which relies on <cit.>. The characterization of D_Σ, Λ(R) follows from <cit.>.
We note that the action of Γ_Σ, Λ on A_inf, Σ, Λ^□ induces a natural action of Γ_Σ, Λ on D_Σ, Λ(R). The next lemma, which is a semistable version of <cit.>, expresses the complex Ω_D_Σ, Λ(R)^∙ of Lemma <ref>, which computes the log-crystalline cohomology over A_, in terms of condensed group cohomology RΓ_(Γ_Σ, Λ, D_Σ, Λ(R)) via passage through Lie algebra cohomology. Cf. <cit.>.
In the setting of Notation <ref>, we denote by Γ_Σ, Λ the Lie algebra of Γ_Σ, Λ, and we write exp:Γ_Σ, Λ≅Γ_Σ, Λ for the exponential isomorphism. Then, there is a natural action of Γ_Σ, Λ on D_Σ, Λ(R) defined for g∈Γ_Σ, Λ, with exp(g)=γ∈Γ_Σ, Λ, by
g=log(γ):=∑_n≥ 1(-1)^n-1/n(γ-1)^n.
We write U_Σ, Λ for the universal enveloping algebra of Γ_Σ, Λ, and we denote by
RΓ(Γ_Σ, Λ, D_Σ, Λ(R)):=R_U_Σ, Λ(_p, D_Σ, Λ(R))∈ D()
the Lie group cohomology.
* There is a quasi-isomorphism
Lη_μ RΓ(Γ_Σ, Λ, D_Σ, Λ(R))≃Ω_D_Σ, Λ(R)^∙.
* There is a quasi-isomorphism
RΓ(Γ_Σ, Λ, D_Σ, Λ(R))≃ RΓ_(Γ_Σ, Λ, D_Σ, Λ(R)).
We first need to check that, for g∈Γ_Σ, Λ, the series (<ref>) converges to an endomorphism of D_Σ, Λ(R). For this, it suffices to prove that the action of γ-1 on D_Σ, Λ(R) takes values in ([ε]-1)D_Σ, Λ(R): in fact, by <cit.>, this implies that the action of (γ-1)^n/n on D_Σ, Λ(R) has values in D_Σ, Λ(R), and such values converge to 0 as n→∞. Thus, following the proof of <cit.> and using <cit.>, we can reduce to checking that the action of γ-1 on A_inf, Σ, Λ^□ takes values in ([ε]-1)A_inf, Σ, Λ^□: this is clear for γ being one of the generators (γ_σ)_σ∈Σ, (γ_λ, i)_λ∈Λ, 1≤ i≤ d of Γ_Σ, Λ.[Note that, for 1≤ i≤ d, the element γ_λ, i-1 acts as follows on X_j: it sends X_i↦ ([ε]-1)X_i; X_j↦ 0 if 0<j≠ i; X_0↦ ([ε^-1]-1)X_0=-([ε]-1)[ε^-1]X_0 if i≤ r_λ; and X_0↦ 0 if i>r_λ.]
Next, to prove both part ghui:1 and part ghui:2, we denote T(Σ, Λ):= Σ∪{(λ, i): λ∈Λ, 1≤ i≤ d}. By (the proof of) <cit.>, the element γ_τ∈Γ_Σ, Λ acts on D_Σ, Λ(R) as the endomorphism exp(log([ε])·∂_τ), for varying τ∈ T(Σ, Λ). Therefore, denoting g_τ:=log([ε])·∂_τ∈Γ_Σ, Λ, we have a quasi-isomorphism
RΓ(Γ_Σ, Λ, D_Σ, Λ(R))≃_D_Σ, Λ(R)(g_τ∈ T(Σ, Λ))
Since log([ε]) and μ differ by a unit in A_, part ghui:1 follows from the quasi-isomorphism (<ref>) and <cit.>. For part ghui:2, we note that g_τ=log(γ_τ)=(γ_τ-1)· h_τ for τ∈ T(Σ, Λ), with h_τ automorphisms of D_Σ, Λ(R) commuting with each other and with γ_τ-1, therefore
_D_Σ, Λ(R)(g_τ∈ T(Σ, Λ))≅_D_Σ, Λ(R)((γ_τ-1)_τ∈ T(Σ, Λ))
(cf. the proof of <cit.>). Then, the statement of part ghui:2 follows combining (<ref>), (<ref>), and <cit.>.
We deduce the following result.
In the setting of Notation <ref>, we have a natural (in X and the datum (<ref>)) quasi-isomorphism
RΓ_(𝔛_ O_C/p/A_^×)∼⟶ Lη_μ RΓ_(Γ_Σ, Λ, D_Σ, Λ(R))
We first construct the desired natural morphism (<ref>). By the proof of <cit.> and by <cit.>, we have the following Čech-Alexander computation of the log-crystalline cohomology over A_:
RΓ_(𝔛_ O_C/p/A_^×)≃(D_Σ, Λ(R)(0)→ D_Σ, Λ(R)(1)→ D_Σ, Λ(R)(2)→⋯)
where D_Σ, Λ(R)(n):=_m≥ 1 D_Σ, Λ, m(R)(n) with (D_Σ, Λ, m(R)(n)) the (n+1)-fold product of (D_Σ, Λ(R)/p^m) in (𝔛_ O_C/p/A_, m^×)_ (we recall <ref> for the notation). On the other hand, by <cit.>, the condensed group cohomology is computed by
RΓ_(Γ_Σ, Λ, D_Σ, Λ(R))≃( D_Σ, Λ(R)→([Γ_Σ, Λ], D_Σ, Λ(R))→([Γ_Σ, Λ^2], D_Σ, Λ(R))→⋯).
Under the identifications (<ref>) and (<ref>), we define the morphism
RΓ_(𝔛_ O_C/p/A_^×) → RΓ_(Γ_Σ, Λ, D_Σ, Λ(R))
induced, in degree n≥ 0, by the composite of the termwise action Γ_Σ, Λ^n× D_Σ, Λ(R)(n)→ D_Σ, Λ(R)(n) with the co-diagonal map D_Σ, Λ(R)(n)→ D_Σ, Λ(R). By <cit.> there is a natural map
Lη_μ RΓ_(𝔛_ O_C/p/A_^×)→ RΓ_(𝔛_ O_C/p/A_^×)
which is a quasi-isomorphism, as follows by combining Lemma <ref> and Lemma <ref>ghui:1. Using the quasi-isomorphism (<ref>), applying the décalage functor Lη_μ(-) to (<ref>), we obtain the desired morphism (<ref>), which is a quasi-isomorphism thanks to Lemma <ref> (which relies on <cit.>, as (<ref>)) and Lemma <ref>.
As a next step toward Theorem <ref>, we want to construct a comparison morphism from the log-crystalline cohomology over A_ to the B_I-cohomology. For this, we will rely on the morphism (<ref>), defined in Corollary <ref>, and on the following construction (which is inspired by the proof of <cit.>), which will allow us to compare the target of (<ref>) with the B_I-cohomology.
In the setting of Notation <ref>, there is a natural (in R and the datum (<ref>)) Γ_Σ, Λ-equivariant map
D_Σ, Λ(R)→_(R_Σ, Λ, ∞)
where _(R_Σ, Λ, ∞):=_inf(R_Σ, Λ, ∞)⊗_A_infA_ and the completion ⊗_A_inf is p-adic.
Since both _(R_Σ, Λ, ∞) and D_Σ, Λ(R) are p-adically complete (for the latter, recall <cit.>), it suffices to construct, for each n≥ 1, a natural map D_Σ, Λ(R)/p^n→_(R_Σ, Λ, ∞)/p^n. We observe that _(R_Σ, Λ, ∞)/p^n=_inf(R_Σ, Λ, ∞)⊗_A_infA_/p^n. Then, we consider the following commutative diagram of log rings
(A_inf, Σ, Λ^□⊗_A_infA_/p^n, M_inf, Σ, Λ^□)   ↠   (R/p, M_R)
                ↓                                              ↓
(_inf(R_Σ, Λ, ∞)⊗_A_infA_/p^n, N)^a   ↠   (R_Σ, Λ, ∞/p, M_R).
Here, R/p and R_Σ, Λ, ∞/p are equipped with the pullback of the canonical log structure M_R on R, and A_inf, Σ, Λ^□⊗_A_infA_/p^n is endowed with the pullback of the log structure M_inf, Σ, Λ^□ on A_inf, Σ, Λ^□, as in Lemma <ref>. Moreover, _inf(R_Σ, Λ, ∞)⊗_A_infA_/p^n is equipped with the pullback of the log structure on _inf(R_Σ, Λ, ∞) associated to the pre-log structure defined as follows: we set N:=(h^)^-1(M_inf, Σ, Λ^□) where h^ denotes the morphism of groups associated to the natural morphism of monoids h:M_inf, Σ, Λ^□→ M_R; the argument of <cit.> shows that the natural map M_inf, Σ, Λ^□→_inf(R_Σ, Λ, ∞) uniquely extends to a map N→_inf(R_Σ, Λ, ∞).[In fact, in the notation of <cit.>, it suffices to prove that _inf(R_Σ, Λ, ∞) is naturally an (A_inf, Σ, Λ^□⊗_[Q][P_λ_0])-algebra compatibly with the change of λ_0∈Λ, which is shown in <cit.>.] The resulting surjective map of log rings at the bottom of the diagram (<ref>) is exact by construction, hence, the universal property of the log PD envelope D_Σ, Λ(R)/p^n gives the desired natural map D_Σ, Λ(R)/p^n→_(R_Σ, Λ, ∞)/p^n.
We are now ready to prove Theorem <ref>.
It suffices to prove the statement Zariski locally on X in a functorial way, as X is assumed to be qcqs and the derived tensor product _A_ commutes with finite limits.
Thus, let X=(R) as in Notation <ref>, with fixed finite sets Σ and Λ. We will denote T(Σ, Λ):=Σ∪{(λ, i): λ∈Λ, 1≤ i≤ d}.
First, we note that, similarly to Lemma <ref>, we have a φ-equivariant identification
RΓ_B_I( X_C)≃ Lη_t RΓ_(Γ_Σ, Λ,_I(R_Σ, Λ, ∞))≃ Lη_t__I(R_Σ, Λ, ∞)((γ_τ-1)_τ∈ T(Σ, Λ)).
Then, using the natural map (<ref>) constructed in Corollary <ref>, together with the natural morphism (<ref>) constructed in Lemma <ref>, we have a natural (in X and the datum (<ref>)) morphism
RΓ_(𝔛_ O_C/p/A_^×)∼→ Lη_μ RΓ_(Γ_Σ, Λ, D_Σ, Λ(R))→ Lη_μ RΓ_(Γ_Σ, Λ,_I(R_Σ, Λ, ∞)).
Using that μ and t differ by a unit in B_I, (<ref>) induces a natural morphism
f_ X, Σ, Λ: RΓ_(𝔛_ O_C/p/A_^×)_A_B_I→ RΓ_B_I( X_C)
that we claim to be a quasi-isomorphism. For this, it suffices to show that, for a fixed λ∈Λ, we have a commutative diagram as follows, whose arrows are quasi-isomorphisms
_A_(R)_λ((∂_λ, i)_1≤ i≤ d)_A_B_I   →   η_t__I(R_λ, ∞)((γ_λ, i-1)_1≤ i≤ d)
                ↓                                               ↓
_D_Σ, Λ(R)((∂_τ)_τ∈ T(Σ, Λ))_A_B_I   →   η_t__I(R_Σ, Λ, ∞)((γ_τ-1)_τ∈ T(Σ, Λ))
with the bottom arrow of (<ref>) compatible with the morphism (<ref>), under the identifications (<ref>) and (<ref>). Here, we denote A_(R)_λ:=A_inf(R)_λ⊗_A_inf A_, where the completion ⊗_A_inf is p-adic, and A_inf(R)_λ is the unique lift of the étale (R_λ^□/p)-algebra R/p, along θ: A_inf(R_λ^□)↠ R_λ^□, to a (p, μ)-adically complete, formally étale A_inf(R_λ^□)-algebra.
* The right vertical map of (<ref>) is a quasi-isomorphism since, by Lemma <ref> and (<ref>), both the target and the source are quasi-isomorphic to RΓ_B_I( X_C).
* The top horizontal arrow of (<ref>) is induced by the quasi-isomorphism constructed in Lemma <ref>, observing that we have the following identifications:
_A_inf(R)_λ((∂_λ, i)_1≤ i≤ d)_A_infB_I=
(_A_inf(R)_λ((∂_λ, i)_1≤ i≤ d)_A_infA_)_A_B_I
and, by Proposition <ref>,
_A_inf(R)_λ((∂_λ, i)_1≤ i≤ d)_A_infA_≃_A_(R)_λ((∂_λ, i)_1≤ i≤ d)
as we now explain. In fact, Proposition <ref> implies that the derived solid tensor product _A_inf appearing in (<ref>) can be replaced by the derived p-adic completion ⊗^_A_inf, and then it remains to observe that the latter completion identifies with the underived (termwise) p-adic completion: for this, denoting by A_^(m)⊂ A_ the p-adic completion of the A_inf-subalgebra generated by ξ^j/j! for varying j≤ m, we note that, for any integer n≥ 1, we have A_/p^n≅_m≥ p A_^(m)/p^n, with A_^(m)/p^n≅ A_^(m)/(p^n, μ^n') for m≥ p, and a large enough integer n' (see <cit.>);[In fact, for m≥ p, one has μ^p/p!∈ A_^(m), therefore one can take n':=pn.] then, we recall that (p^n, μ^n') is an A_inf(R)-regular sequence with A_inf(R)/(p^n, μ^n') flat over A_inf/(p^n, μ^n') (see <cit.>).
* The left vertical map of (<ref>) is constructed as follows. Since D_Σ, Λ(R) is a p-adically complete pro-nilpotent thickening of R/p (here we use <cit.>), by the infinitesimal lifting criterion for the p-adic formally étale map A_(R_λ^□)→ A_(R)_λ, we deduce that in the following diagram
A_(R_λ^□)   →   A_(R)_λ
      ↓        ↙ (dotted)       ↓
D_Σ, Λ(R)   ↠   R/p
there exists a unique dotted arrow making the diagram commute.
The resulting map
_A_(R)_λ((∂_λ, i)_1≤ i≤ d)→_D_Σ, Λ(R)((∂_τ)_τ∈ T(Σ, Λ))
is a quasi-isomorphism, as both the source and the target compute RΓ_(𝔛_ O_C/p/A_^×), by <cit.> and Lemma <ref>, respectively.
* The bottom horizontal arrow of (<ref>) is constructed as follows. Similarly to the map (<ref>) in Lemma <ref>, for varying τ∈ T(Σ, Λ), we have φ-equivariant maps
(D_Σ, Λ(R)∂_τ→D_Σ, Λ(R))(𝕀, h_τ)⟶(D_Σ, Λ(R)γ_τ-1→D_Σ, Λ(R))
with h_τ:=1+∑_j≥ 1t^j/(j+1)!(∂_τ)^j, which induce a φ-equivariant quasi-isomorphism
_D_Σ, Λ(R)((∂_τ)_τ∈ T(Σ, Λ))∼→η_μ_D_Σ, Λ(R)((γ_τ-1)_τ∈ T(Σ, Λ))
(cf. <cit.>). Moreover, the Γ_Σ, Λ-equivariant map (<ref>) constructed in Lemma <ref>, induces a morphism
η_μ_D_Σ, Λ(R)((γ_τ-1)_τ∈ T(Σ, Λ))→η_μ__I(R_Σ, Λ, ∞)((γ_τ-1)_τ∈ T(Σ, Λ)).
Then, we define the bottom horizontal arrow of (<ref>) as the morphism induced by the composite of (<ref>) and (<ref>). The so constructed morphism makes the diagram (<ref>) commute, it is compatible with the morphism (<ref>), under the identifications (<ref>) and (<ref>), and, by the previous points, it is a quasi-isomorphism.
Taking the filtered colimit _Σ, Λ of the quasi-isomorphisms f_ X, Σ, Λ constructed in (<ref>), putting everything together, using that _A_ commutes with filtered colimits, we obtain the desired quasi-isomorphism
f_ X:RΓ_(𝔛_ O_C/p/A_^×)_A_B_I∼⟶_Σ, ΛK(R_Σ, Λ, ∞)≃ RΓ_B_I( X_C).
where we denoted K(R_Σ, Λ, ∞):=Lη_t RΓ_(Γ_Σ, Λ,_I(R_Σ, Λ, ∞)). It remains to show that the morphism f_ X depends functorially on X=(R). For this, it suffices to prove that the filtered colimit _Σ, ΛK(R_Σ, Λ, ∞) depends functorially on R, compatibly with the constructed morphism from the complex RΓ_(𝔛_ O_C/p/A_^×) (i.e. the filtered colimit _Σ, Λ of the morphisms defined in (<ref>)). We study separately the latter filtered colimit in the case X is smooth or non-smooth, and show that it reduces to a simpler filtered colimit.[We thank Teruhisa Koshikawa for suggesting the following idea.]
* Suppose X is smooth. In this case, given a finite set Λ as in Notation <ref>, for each pair (λ, i) with λ∈Λ and 0≤ i ≤ d, we have
t_λ, i=(p^q)^n_λ, i· u_λ, i for unique n_λ, i∈_≥ 0 and u_λ, i∈ R^×
where q∈_>0 is the unique element such that · q=∑_λ∈Λ· q_λ inside (see <cit.>). We recall that p-th power roots of p in O_C are fixed (Notation <ref>). Then, for sufficiently large finite sets Σ⊂ R^× (containing the u_λ, i's as in (<ref>)), denoting R_Σ, ∞:=R_Σ, ∅, ∞, we have natural surjections
R_Σ, Λ, ∞↠ R_Σ, ∞
given by assigning the images of p-th power roots of u_λ, i. The maps (<ref>) induce an isomorphism
_Σ, ΛK(R_Σ, Λ, ∞)∼⟶_ΣK(R_Σ, ∞)
as both sides of (<ref>) compute RΓ_B_I( X_C).
* Suppose X is non-smooth. We choose an étale map X→(R_λ_0^□) as in Notation <ref>. Then, by Remark <ref>, given a finite set Λ containing λ_0, for each pair (λ, i) with λ∈Λ and 0≤ i≤ d, either the image of t_λ, i in R lies in R^×, or it cuts out an irreducible component of the special fiber of X, and there exists a unique 0≤ i'≤ r_λ_0 (depending on i) such that
t_λ, i=u_λ, λ_0, i· t_λ_0, i' for a unique u_λ, λ_0, i∈ R^×
(see <cit.>). Then, for sufficiently large finite sets Σ⊂ R^× (containing the invertible t_λ, i's in R and the u_λ, λ_0, i's as in (<ref>)), we have natural surjections
R_Σ, Λ, ∞↠ R_Σ, {λ_0}, ∞
given by assigning the images of p-th power roots of t_λ, i in the case the image of t_λ, i in R lies in R^×, or by assigning the images of p-th power roots of u_λ, λ_0, i in the complementary case.
The maps (<ref>) induce an isomorphism
_Σ, ΛK(R_Σ, Λ, ∞)∼⟶_ΣK(R_Σ, {λ_0}, ∞).
For later reference, we note here that for a finite set Λ as in Notation <ref> containing λ_0, and for sufficiently large finite sets Σ, for any λ∈Λ we have a commutative diagram
            R_Σ, {λ}, ∞
          ↗
R_Σ, Λ, ∞            ↓ ≀
          ↘
            R_Σ, {λ_0}, ∞.
Here, the bottom diagonal arrow is (<ref>), the top diagonal arrow is defined similarly to the latter (with λ in place of λ_0), and the vertical arrow is the isomorphism defined as follows. For (λ, i) such that the image of t_λ, i in R does not lie in R^×, the relation (<ref>) implies that there exists a unique 0≤ i'≤ r_λ_0 (depending on i) such that, for any integer m≥ 0, we have
t_λ, i^1/p^m=u_λ, λ_0, i^(m)· t_λ_0, i'^1/p^m in R_Σ, Λ, ∞ for a unique u_λ, λ_0, i^(m)∈ R_Σ, Λ, ∞^×
using that t_λ_0, i'^1/p^m is a unit in R_Σ, Λ, ∞[1/p] and R_Σ, Λ, ∞ is integrally closed in R_Σ, Λ, ∞[1/p]. Then, the vertical arrow of (<ref>) is defined by sending, for m≥ 0 and (λ, i) as before, the element t_λ, i^1/p^m in R_Σ, {λ}, ∞ to the image of u_λ, λ_0, i^(m)· t_λ_0, i'^1/p^m in R_Σ, {λ_0}, ∞; this defines an isomorphism as the u_λ, λ_0, i^(m)'s are invertible.
Now, let g: X'=(R')→ X=(R) be a map of affine p-adic formal schemes over ( O_C), with X and X' equipped with data as in Notation <ref>, for some sets Σ and Λ_0, with Λ_0={λ_0} in the case X is non-smooth and Λ_0=∅ in the smooth case, resp. Σ' and Λ'_0, with Λ'_0={λ_0'} in the case X' is non-smooth and Λ'_0=∅ in the smooth case. Then, for a sufficiently large finite set Σ'⊂R'^×, there exists a unique map g^□:(R'_Σ', Λ'_0^□)→(R_Σ, Λ_0^□) of p-adic formal schemes over ( O_C) making the following diagram commute
X' [r, "g"][d] X[d]
(R'_Σ', Λ'_0^□) [r, "g^□"] (R_Σ, Λ_0^□).
In fact, suppose X and X' are non-smooth (in the other case one can argue similarly), then by Remark <ref>, for 0≤ i≤ d, we have the following relation in R'
t_λ_0, i=(p^q_λ'_0)^n_λ_0, i· u_λ_0, λ'_0, i·∏_0≤ j≤ r_λ_0'( t_λ'_0, j)^a_λ_0, i, j
for unique n_λ_0, i∈_≥ 0, u_λ_0, λ'_0, i∈ R'^×, and a_λ_0, i, j∈_≥ 0 not all positive.
For a sufficiently large finite set Σ'⊂R'^×, the diagram (<ref>) induces a morphism of Galois covers
X'_C, Σ', Λ'_0, ∞[r][d] X_C, Σ, Λ_0, ∞[d]
X'_C [r] X_C
which in turn induces a map _ΣK(R_Σ, Λ_0, ∞)→_ΣK(R'_Σ', Λ'_0, ∞). The latter map composed with the isomorphism (<ref>) (in the case R is smooth, and similarly for R') or the isomorphism (<ref>) (in the case R is non-smooth, and similarly for R'), induces a map
_Σ, ΛK(R_Σ, Λ, ∞)→_Σ', Λ'K(R'_Σ', Λ', ∞)
which does not depend on the choices of λ_0 and λ_0': this follows from the commutative diagram (<ref>) (and a similar diagram for R'). By construction, the map (<ref>) fits in the following commutative diagram
RΓ_(𝔛_ O_C/p/A_^×) [r][d] _Σ, ΛK(R_Σ, Λ, ∞)[d]
RΓ_(𝔛'_ O_C/p/A_^×) [r] _Σ', Λ'K(R'_Σ', Λ', ∞)
where the top, resp. bottom, horizontal arrow is the filtered colimit _Σ, Λ, resp. _Σ', Λ', of the morphisms defined in (<ref>). This concludes the proof.
§.§ The comparison with the Hyodo–Kato cohomology
Now, we have all the ingredients to conclude the proof of Theorem <ref>.
First, let X∈ M_,, i.e. X a qcqs semistable p-adic formal scheme over ( O_C). Combining Theorem <ref> with Theorem <ref>bcn:1, and recalling Remark <ref>, for any compact interval I⊂[1/(p-1), ∞) with rational endpoints, we have a natural isomorphism
RΓ_B_I( X_C)≃ (RΓ_( X_C)_F̆ B_log, I)^N=0.
Twisting by the Frobenius, and recalling that φ is an automorphism on the Hyodo–Kato cohomology, we deduce that (<ref>) extends to any compact interval I⊂ (0, ∞) with rational endpoints. Then, passing to the derived limit over all such I, by Lemma <ref> we have a natural isomorphism
R_I RΓ_B_I( X_C)≃ RΓ_B( X_C)
and then, by <cit.>, which applies observing that B_log, I is a nuclear F̆-vector space, and recalling that RΓ_( X_C) is representable by a bounded complex of F̆-Banach spaces (see Theorem <ref>mainHK:2 and its proof), we obtain a natural isomorphism
RΓ_B( X_C)≃ (RΓ_( X_C)_F̆B_log)^N=0
where B_log:=R_I B_log, I. Now, we claim that the natural map
(RΓ_( X_C)_F̆ B_log)^N=0→ (RΓ_( X_C)_F̆B_log)^N=0
is an isomorphism. In fact, recalling that B_log=B[U] and B_log, I=B_I[U], both the source and the target of (<ref>) identify with the complex RΓ_( X_C)_F̆ B via the operator exp(N· U) on the latter tensor product (cf. Lemma <ref>; note that the monodromy N is nilpotent on RΓ_( X_C), by Theorem <ref>mainHK:2, in particular such operator is well-defined).[We observe that, in the case when X_C is the base change to C of a rigid-analytic variety defined over K, such identifications may be not 𝒢_K-equivariant. However, in this case, the map (<ref>) is 𝒢_K-equivariant. ]
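Here, N being nilpotent and all the objects involved being modules over ℚ-algebras, the operator in question is given by the finite sum
exp(N· U)=∑_{k≥ 0} (1/k!)\, N^{k}⊗ U^{k},
with only finitely many non-zero summands; in particular, it is a well-defined endomorphism of the tensor product above, invertible with inverse exp(-N· U).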
Next, let X be a qcqs rigid-analytic variety over C. Consider a simplicial object U_∙ of M_, such that U_∙, η→ X is a -hypercover. To show that for such X we have an isomorphism as in (<ref>), since the B-cohomology satisfies -hyperdescent, it suffices to show that the natural map at the top of the following commutative diagram is an isomorphism
lim_[n]∈Δ(RΓ_(U_n, O_C/p^0/O_F̆^0)__p_F̆ B_log)^N=0 [r] (RΓ_(X)_F̆ B_log)^N=0
lim_[n]∈Δ(RΓ_(U_n, O_C/p^0/O_F̆^0)__p_F̆ B) [u][r] RΓ_(X)_F̆B [u].
Here, arguing as above, the vertical arrows are the isomorphisms defined by the operator exp(N· U) (recall that the monodromy N is nilpotent on RΓ_(X), by Theorem <ref>mainHK:2, in particular such operator is well-defined).[Similarly to Footnote <ref>, in the case when X is the base change to C of a rigid-analytic variety defined over K, such isomorphisms may be not 𝒢_K-equivariant. However, in this case, the top horizontal arrow of the diagram above is 𝒢_K-equivariant.] We note that the bottom horizontal arrow of the diagram above is an isomorphism by -hyperdescent of the Hyodo–Kato cohomology, thanks to <cit.> (which applies recalling that B is a Fréchet space and each RΓ_( U_n, O_C/p^0/ O_F̆^0)__p is representable by a complex of nuclear F̆-vector spaces, and using (<ref>)). Then, we deduce that the top horizontal arrow is an isomorphism as well, as desired.
Lastly, for X a connected, paracompact rigid-analytic variety over C, choosing a quasi-compact admissible covering {U_n}_{n∈ℕ} of X such that U_n⊆ U_{n+1}, and using Theorem <ref>mainHK:2, the same argument used above reduces us to the previous case.[Recall that any rigid-analytic variety is assumed to be quasi-separated and of finite dimension, <ref>.]
§ B_^+-COHOMOLOGY
Our first goal in this section is to compare the B_^+-cohomology with the de Rham cohomology.
§.§ The comparison with the de Rham cohomology
In the smooth case, the following result is essentially already contained in <cit.>, and relies on Scholze's Poincaré lemma for _^+. In the following, we denote by RΓ_(X) the de Rham cohomology endowed with the Hodge filtration (Definition <ref>).
Let X be a connected, paracompact, rigid-analytic variety defined over K. Then, we have a natural isomorphism in D(_B_^+^)
RΓ_B_^+(X_C)≃ RΓ_(X)_K B_^+
compatible with filtrations, and the action of 𝒢_K.
Assume first that X is smooth. By <cit.> we have an isomorphism in D(_B_^+^)
RΓ_B_^+(X_C)≃ RΓ(X, Ω_X^∙_K B_^+)
compatible with filtrations. Then, in this case, the statement follows from <cit.>.
The general case follows by -hyperdescent, using <cit.>, and observing, for the compatibility with filtrations part, that the filtration on RΓ_(X) is finite, Proposition <ref> (as X is also assumed to be of finite dimension, <ref>).
The same argument and references used in the proof of Theorem <ref> also show the following result, which generalizes <cit.> to the singular case:
Let X be a connected, paracompact, rigid-analytic variety defined over K. Then, we have a natural isomorphism in D(_B_^+^)
RΓ_(X_C, _^+)≃^0(RΓ_(X)_K B_).
§.§.§ Compatibility
Our next goal is to prove that the comparison of Theorem <ref> is compatible with the comparison of Theorem <ref>.
Let X be a connected, paracompact, rigid-analytic variety defined over K.
* (Hyodo–Kato isomorphism over B_^+)
We have a natural isomorphism in D(^_B_^+)
RΓ_(X_C)_F̆ B_^+∼⟶ RΓ_(X)_KB_^+.
* (Compatibility) The isomorphism (<ref>) of Theorem <ref> is compatible with the isomorphism (<ref>) of Theorem <ref>, i.e. we have a commutative diagram as follows
RΓ_B(X_C) [d][r, "∼"] (RΓ_(X_C)_F̆B_log)^N=0 [d]
RΓ_B_^+(X_C) [r, "∼"] RΓ_(X)_KB_^+
where the left vertical map is induced by the inclusion ↪_^+, and the right vertical map is induced by (<ref>).
We refer the reader to Theorem <ref> for a version of Theorem <ref> for rigid-analytic varieties defined over C.
§.§ The comparison with the infinitesimal cohomology over B_^+
To prove Theorem <ref> we will relate the B_^+-cohomology of Definition <ref> to the infinitesimal cohomology over B_^+ in a way that is compatible with the comparison between the B-cohomology and the log-crystalline cohomology over A_ (following from Theorem <ref>).
§.§.§ The infinitesimal cohomology over B_^+ and its filtrations
We begin with some recollections on the infinitesimal cohomology over B_^+ of rigid-analytic varieties, introduced in <cit.>.
[Infinitesimal cohomology over B_^+]
Let X be a rigid-analytic variety over C. Given an integer m≥ 1, denote B_, m^+=B_^+/ξ^m.
The infinitesimal site (X/B_, m^+)_inf of X over B_, m^+ is defined as follows:
* the underlying category has objects the pairs (U, T) where U is an open subspace of X and U→ T is an infinitesimal thickening of adic spaces with T an adic space of topological finite presentation over B_, m^+, and morphisms (U, T)→ (U', T') with U→ U' an open immersion and T→ T' a compatible map of adic spaces over B_, m^+;
* the coverings are given by the families of morphisms {(U_i, T_i)→ (U, T)} with U_i→ U and T_i→ T coverings for the analytic topology.
The infinitesimal site of X over B_^+ is defined as the direct limit of sites (in the sense of <cit.>)
(X/B_^+)_inf:=_m(X/B_, m^+)_inf.
The infinitesimal structure sheaf O_inf on (X/B_^+)_inf is the sheaf with values in _B_^+^ sending (U, T) to O_T(T), and the infinitesimal cohomology of X over B_^+ is defined as
RΓ_inf(X/B_^+):=RΓ((X/B_^+)_inf, O_inf)∈ D(_B_^+^).
One can also define a big infinitesimal site version of the (small) infinitesimal sites defined above (cf. <cit.>).
[Infinitesimal filtration] Let X be a rigid-analytic variety over C. We define the infinitesimal filtration on the infinitesimal cohomology of X over B_^+ as the ^-indexed filtration
_inf^⋆RΓ_inf(X/B_^+):=RΓ((X/B_^+)_inf, J_inf^⋆)
induced on the i-th level by the subsheaf J_inf^i ⊂ O_inf on (X/B_^+)_inf, where J_inf is the kernel ideal of the natural map from the infinitesimal structure sheaf O_inf to the pullback, on the infinitesimal site, of the analytic structure sheaf of X.
We recall that the infinitesimal cohomology over B_^+ satisfies -hyperdescent:
The presheaf
_C^→ D(_B_^+^): Y↦ RΓ_inf(Y/B_^+)
satisfies -hyperdescent.
The statement follows from <cit.>.
On the other hand, the pieces of the infinitesimal filtration on the infinitesimal cohomology over B_^+ (Definition <ref>) do not satisfy -hyperdescent in general.[Similarly, the pieces of the infinitesimal filtration on the infinitesimal cohomology over C do not satisfy -hyperdescent. In fact, supposing the contrary, by <cit.> and Proposition <ref> we would have, for any rigid-analytic variety X over C, a natural filtered isomorphism between the infinitesimal cohomology of X over C and the de Rham cohomology of X over C (Definition <ref>). Now, assuming that X is a complete intersection affinoid, there is a natural filtered isomorphism between the infinitesimal cohomology of X over C and the cohomology of the analytic derived de Rham complex of X over C, <cit.>; but the graded pieces of the latter do not vanish if X is not smooth (recalling that the i-th graded piece of the analytic derived de Rham complex of X over C can be identified with a shift of the i-fold wedge product of the analytic cotangent complex of X over C), whereas, by Proposition <ref>, the graded pieces of the de Rham cohomology of X over C eventually vanish.] For this reason, we introduce the following filtration on the infinitesimal cohomology over B_^+ which is closer to the Hodge filtration on the de Rham cohomology (Definition <ref>) and will be crucial in the formulation of the semistable conjecture for proper (possibly singular) rigid-analytic varieties over C (see Theorem <ref>). The following definition is based on Proposition <ref> (and Remark <ref>).
[Hodge filtration] Let X be a rigid-analytic variety over C. We define the Hodge filtration on the infinitesimal cohomology of X over B_^+ as the ^-indexed filtration
_^⋆RΓ_inf(X/B_^+)
given on the i-th level by the cohomology on X of the hypersheaf on _C, associated to the presheaf
_C^→ D(_B_^+^):Y ↦_inf^iRΓ_inf(Y/B_^+).
In the smooth case the Hodge filtration on the infinitesimal cohomology over B_^+ (Definition <ref>) agrees with the infinitesimal filtration (Definition <ref>).
(Hodge filtration in the smooth case) Let X be a smooth rigid-analytic variety over C. We have a natural isomorphism of filtered objects
_inf^⋆RΓ_inf(X/B_^+)∼⟶_^⋆RΓ_inf(X/B_^+).
We will prove Proposition <ref> in the next subsection, together with the following already announced comparison of the B_^+-cohomology with the infinitesimal cohomology over B_^+.
For any rigid-analytic variety X over C, we have a natural isomorphism in D(^_B_^+)
RΓ_B_^+(X)≃ RΓ_inf(X/B_^+)
compatible with filtrations, endowing the B_^+-cohomology with the filtration decalée (Definition <ref>) and the infinitesimal cohomology over B_^+ with the Hodge filtration (Definition <ref>).
§.§.§ Proofs
We want to prove Theorem <ref>, Proposition <ref> and Theorem <ref>.
As a first step toward Theorem <ref>, we need to construct a natural map from the log-crystalline cohomology over A_ to the infinitesimal cohomology over B_^+.
Let X be a semistable p-adic formal scheme over O_C, and let X= X_C denote its generic fiber. Then, there exists a natural morphism
RΓ_( X_ O_C/p/A_^×)→ RΓ_inf(X/B_^+).
We can assume that X is affine. We note that we have a natural isomorphism
RΓ_( X_ O_C/p/A_^×)≃ RΓ_( X/A_^×).
Then, it suffices to construct, for each integer m≥ 1, a morphism of big sites
f: (X/B_, m^+)_→ ( X/A_^×)_
recalling that the restriction functor from the big topos to the small one preserves cohomology (see <cit.> for the infinitesimal topos). We define f via the continuous functor sending U→(A) in the big log-crystalline site ( X/A_^×)_ to U_C→(A⊗_A_B_, m^+) in the big infinitesimal site (X/B_, m^+)_ (forgetting the log structures). One checks that f is a well-defined morphism of sites, with the help of <cit.>.
Before proving Theorem <ref>, we will show the following intermediate compatibility result.
Let X be the generic fiber of a qcqs semistable formal scheme X defined over ( O_C). Let I=[1, r]⊂ (0, ∞) be an interval with rational endpoints. Then, the isomorphism (<ref>) is compatible with the isomorphism (<ref>), i.e. we have a commutative diagram as follows
RΓ_B_I(X) [d][r, "∼"] RΓ_(X_O_C/p/A_^×)_A_B_I [d]
RΓ_B_^+(X) [r, "∼"] RΓ_inf(X/B_^+)
where the left vertical map is induced by the inclusion _I↪_^+, and the right vertical map is induced by the morphism (<ref>) constructed in Lemma <ref>.
To show Proposition <ref>, we will prove Theorem <ref> going over the same steps as in the proof of Theorem <ref>. We begin with the first step, corresponding to Lemma <ref>.
In the setting of Notation <ref>, for any integer m≥ 1, we have a B_, m^+-linear quasi-isomorphism
_A_inf(R)(∂_1, …,∂_d)⊗_A_inf^B_^+/ξ^m∼→_B_^+/ξ^m(R)(∂_1, …,∂_d)∼→Lη_t_(_^+/^m)(R_∞)(γ_1-1, …, γ_d-1)
compatible with the quasi-isomorphism (<ref>).
Since μ and t differ by a unit in B_^+/ξ^m, as in the proof of Lemma <ref> we can reduce to showing that the element μ kills H^i_(Γ, _inf(R_∞)^⊗_A_infB_^+/ξ^m) for all i∈. Let N_∞:=_inf(R_∞)^. We proceed by induction on m≥ 1. Since ξ is a non-zero-divisor in _inf(R_∞)⊃ N_∞, we have the following exact sequence
0→ N_∞⊗_A_infC(m)→ N_∞⊗_A_infB_^+/ξ^m+1→ N_∞⊗_A_infB_^+/ξ^m→ 0
which allows us to reduce to the case m=1. Then, it suffices to show that the element ε_p-1 kills H^i_(Γ, O^+(R_∞)^) for all i ∈, which follows from <cit.> (and <cit.>).
The following byproduct of Lemma <ref> will be useful later on.
In the setting of Notation <ref>, for any i≥ 0, we have a B_^+-linear quasi-isomorphism
ξ^max(i-∙, 0)Ω_B_^+(R)^∙∼→^i RΓ_B_^+(X)
where Ω_B_^+(R)^∙:=_B_^+(R)(∂_1, …,∂_d).
The statement follows combining Lemma <ref>, Lemma <ref>, and Lemma <ref>. In fact, by induction on i≥ 0, we can reduce to showing (<ref>) on graded pieces.
As done in <ref>, in order to construct a functorial isomorphism, we need to introduce more general coordinates. For this, we resume here the setting of <cit.>.[We warn the reader that our notation slightly differs from loc. cit..]
* Let X=(A, A^∘) be an affinoid space over (C, O_C) that is the base change to (C, O_C) of an affinoid space X_0=(A_0, A_0^∘) defined over (L, O_L), for some finite subextension F̆⊂ L⊂ C, and admitting an étale (L, O_L)-morphism
X_0→(L⟨ T_0, …, T_r, T_r+1^± 1, …, T_d^± 1⟩ /(T_0⋯ T_r -p^q), O_L⟨ T_0, …, T_r, T_r+1^± 1, …, T_d^± 1⟩ /(T_0⋯ T_r -p^q))
for some 0≤ r≤ d, and q∈_>0.
Assume, in addition, that there are finite subsets Ψ_0⊂ (A_0^∘)^× and Ξ_0⊂ A_0^∘∩ A_0^× such that the L-linear map
L⟨ (X_u^± 1)_u∈Ψ_0, (X_a)_a∈Ξ_0⟩→ A_0, X_u↦ u, X_a↦ a,
is surjective. In particular, there are finite subsets Ψ⊂ (A^∘)^× and Ξ⊂ A^∘∩ A^× such that the C-linear map
C⟨ (X_u^± 1)_u∈Ψ, (X_a)_a∈Ξ⟩→ A, X_u↦ u, X_a↦ a,
is surjective.[The descent (<ref>) of (<ref>) to L is needed in <cit.>, which we will use in Lemma <ref> below.]
* We consider the affinoid perfectoid cover
C⟨ (X_u^± 1)_u∈Ψ, (X_a)_a∈Ξ⟩→ C⟨ (X_u^± 1/p^∞)_u∈Ψ, (X_a^1/p^∞)_a∈Ξ⟩
with Galois group
Γ_Ψ, Ξ:=∏_Ψ⊔Ξ_p(1)≅_p^|Ψ⊔Ξ|.
We denote by (γ_u)_u∈Ψ⊔Ξ the canonical generators of Γ_Ψ, Ξ.
The base change of (<ref>) along the surjection (<ref>) gives the following affinoid perfectoid pro-étale cover of X
X_Ψ, Ξ, ∞:=(A_Ψ, Ξ, ∞, A_Ψ, Ξ, ∞^+)→ X
with Galois group Γ_Ψ, Ξ.
* Given any pro-étale period sheaf of <ref>, we set
(A_Ψ, Ξ, ∞^+):=(X_Ψ, Ξ, ∞)
and we regard it as a condensed ring.
As observed in <cit.>, for any smooth rigid-analytic variety X over C, the affinoid spaces (A, A^∘) of Notation <ref> form a basis for the analytic topology of X.
The following result should be regarded as the analogue of Lemma <ref> over B_^+.
In the setting of Notation <ref>, we have a filtered quasi-isomorphism
RΓ_inf(X/B_^+)≃Ω^∙_D_Ψ, Ξ(A)/B_^+:=_D_Ψ, Ξ(A)((∂_u)_u∈Ψ,(∂_a)_a∈Ξ)
where D_Ψ, Ξ(A)=_m≥ 1D_Ψ, Ξ, m(A), and, for each m≥ 1, D_Ψ, Ξ, m(A) is the B_^+/ξ^m-algebra representing the envelope of
(A)↪(B_^+/ξ^m⟨ (X_u^± 1)_u∈Ψ, (X_a)_a∈Ξ⟩) over (C)↪(B_^+/ξ^m).
Here, ∂_u:=∂/∂log(X_u)=X_u·∂/∂ X_u for u∈Ψ⊔Ξ, and the right-hand side of (<ref>) is endowed with the infinitesimal filtration, defined on the i-th level, for i≥ 0, as follows:
^iΩ^∙_D_Ψ, Ξ(A)/B_^+:=J^max(i-∙, 0)Ω^∙_D_Ψ, Ξ(A)/B_^+
where J:=_m J_m with J_m the ideal corresponding to the closed immersion (<ref>).
The statement follows from <cit.>.
Keeping the notation of the above lemma, we state the following result.
In the setting of Notation <ref>, for any sufficiently large Ψ and Ξ, we have an isomorphism in D(^_B_^+)
Ω^∙_D_Ψ, Ξ(A)/B_^+≃Ω^∙_A_0/K_L B_^+
compatible with filtrations, where the left-hand side is endowed with the infinitesimal filtration (<ref>), and the right-hand side is endowed with the tensor product filtration.
We consider the B_^+-algebra O D_Ψ, Ξ(A) defined as the completion of
(A_0_L B_^+)_B_^+D_Ψ, Ξ(A)→ A
along its kernel. Then, we have the following natural maps
D_Ψ, Ξ(A)→ O D_Ψ, Ξ(A)← A_0_L B_^+
and, denoting by Ω^∙_ O D_Ψ, Ξ(A)/B_^+ the de Rham complex associated to O D_Ψ, Ξ(A) over B_^+, we consider the natural maps of complexes
Ω^∙_D_Ψ, Ξ(A)/B_^+→Ω^∙_ O D_Ψ, Ξ(A)/B_^+←Ω^∙_A_0/K_L B_^+.
We claim that the zigzag (<ref>) is a filtered quasi-isomorphism. As in <cit.>, it suffices to check that, for sufficiently large Ψ and Ξ, we have
D_Ψ, Ξ(A)≅ (A_0_L B_^+)(X_u-u)_u∈ (Ψ⊔Ξ)∖{T_1, …, T_d}, O D_Ψ, Ξ(A)≅ (A_0_L B_^+)(X_u-u)_u∈Ψ⊔Ξ.
The first isomorphism follows from <cit.>, observing that the completed tensor product of Tate L-algebras, appearing in loc. cit., agrees with the solid tensor product _L. The second isomorphism follows from the first one, using that the completion of A_0_L A_0→ A_0 along its kernel is isomorphic to A_0(u⊗ 1-1⊗ u)_u∈{T_1, …, T_d}.
We proceed by constructing in coordinates a natural map from the log-crystalline cohomology over A_ to the infinitesimal cohomology over B_^+, with an eye to the compatibility between (<ref>) and (<ref>) that we want to prove. For this, we first need to relate the setting of Notation <ref> to the one of Notation <ref>.
In the setting of Notation <ref>, we put (A, A^∘)=X:= X_C, and
Ψ:={t_σ}_σ∈Σ∪⋃_λ∈Λ{t_λ, r_λ+1, …, t_λ, d}, Ξ:=⋃_λ∈Λ{t_λ, 1, …, t_λ, r_λ}.
These choices satisfy the assumptions of Notation <ref>, therefore in the following we can retain the notation of loc. cit. using such choices.
In the setting of Notation <ref>, there is a natural morphism
Ω^∙_D_Σ, Λ(R)/A_→Ω^∙_D_Ψ, Ξ(A)/B_^+
which, under the isomorphisms (<ref>) and (<ref>), is compatible with the morphism (<ref>) constructed in Lemma <ref>.
We need to construct, in the setting of Notation <ref>, for each integer m≥ 1, a natural map
D_Σ, Λ(R)→ D_Ψ, Ξ, m(A)
compatible with the natural maps to A.[This is done in <cit.>, which we rephrase here in a slightly different way, proceeding as in the proof of Lemma <ref>.]
By <cit.>, there exist a p-adically complete ring of definition D_Ψ, Ξ, m(A)_0 of D_Ψ, Ξ, m(A), and a commutative diagram of log rings as follows
(A_inf, Σ, Λ^□⊗_A_infA_/p^n, M_inf, Σ, Λ^□) [two heads]r[d] (R/p, M_R) [-,double line with arrow=-,-]d
(D_Ψ, Ξ, m(A)_0/p^n, N)^a [two heads]r (R/p, M_R).
Here, R/p is equipped with the pullback of the canonical log structure M_R on R, and the ring A_inf, Σ, Λ^□⊗_A_infA_/p^n is endowed with the pullback of the log structure M_inf, Σ, Λ^□ on A_inf, Σ, Λ^□. Moreover, D_Ψ, Ξ, m(A)_0/p^n is equipped with the pullback of the log structure on D_Ψ, Ξ, m(A)_0 associated to the following pre-log structure: as in Lemma <ref>, we set N:=(h^)^-1(M_inf, Σ, Λ^□) where h^ denotes the morphism of groups associated to the natural morphism of monoids h:M_inf, Σ, Λ^□→ M_R; the argument of <cit.> shows that the natural map M_inf, Σ, Λ^□→ D_Ψ, Ξ, m(A)_0 uniquely extends to a map N→ D_Ψ, Ξ, m(A)_0. The resulting surjective map of log rings at the bottom of the diagram (<ref>) is exact by construction, hence, the universal property of the log PD envelope D_Σ, Λ(R)/p^n gives the desired natural map D_Σ, Λ(R)/p^n→ D_Ψ, Ξ, m(A)_0/p^n. Since both D_Ψ, Ξ, m(A)_0 and D_Σ, Λ(R) are p-adically complete (for the latter, recall <cit.>), we obtain a map (<ref>) as desired.
We can finally prove the main results of this section.
The argument will be analogous to the one in the proof of Theorem <ref>, in order to prove the compatibility stated in Proposition <ref>.
As both the B_^+-cohomology, together with its filtration decalée, and the infinitesimal cohomology over B_^+, together with its Hodge filtration, satisfy -hyperdescent (for the infinitesimal cohomology, see Lemma <ref> and Definition <ref>), it suffices to prove the statement -locally on X in a functorial way. Thus, by Proposition <ref>, we can place ourselves in the setting of Notation <ref>, with sufficiently large Ψ and Ξ.
Fix m≥ 1. As in Lemma <ref>, we have a quasi-isomorphism
RΓ( X_C, Lη_t Rν_*(_^+/^m))≃ Lη_t_(_^+/^m)(A^+_Ψ, Ξ, ∞)((γ_u-1)_u∈Ψ⊔Ξ).
Next, we claim that we have a commutative diagram as follows, whose arrows are natural quasi-isomorphisms
_A_(R)_λ((∂_λ, i)_1≤ i≤ d)_A_B_^+/ξ^m [r][d] η_t_(_^+/^m)(R_λ, ∞)((γ_λ, i-1)_1≤ i≤ d)[d]
_D_Ψ, Ξ, m(A)((∂_u)_u∈Ψ⊔Ξ) [r] η_t_(_^+/^m)(A^+_Ψ, Ξ, ∞)((γ_u-1)_u∈Ψ⊔Ξ).
* The right vertical map of (<ref>) is a quasi-isomorphism since, by Lemma <ref>, both the target and the source are quasi-isomorphic to the complex RΓ( X_C, Lη_t Rν_*(_^+/^m)).
* The top horizontal arrow of (<ref>) is the quasi-isomorphism obtained combining Lemma <ref> with (<ref>).
* The left vertical map of (<ref>) is induced by the one constructed in Lemma <ref>. We observe that both the target and the source of this map are derived ξ-adically complete.[For the latter, one can use for example the quasi-isomorphism of part num:2 combined with <cit.>.]
Then, by the derived Nakayama lemma, it suffices to show that such a map is a quasi-isomorphism for m=1. In this case, both the target and the source compute the de Rham cohomology RΓ_( X_C). The former follows reducing the quasi-isomorphism (<ref>) mod ξ. For the latter, we observe that (for m=1) by <cit.>, and Proposition <ref>, the source computes (RΓ_( X_ O_C/p/A_^×)⊗^_A_ O_C)__p, and by <cit.> we have a quasi-isomorphism
(RΓ_( X_ O_C/p/A_^×)⊗^_A_ O_C)__p∼→RΓ_log( X)__p∼→RΓ_( X_C).
* To construct the bottom horizontal arrow of (<ref>), we first note that, similarly to (<ref>), we have a quasi-isomorphism
_D_Ψ, Ξ, m(A)((∂_u)_u∈Ψ⊔Ξ)∼→η_t_D_Ψ, Ξ, m(A)((γ_u-1)_u∈Ψ⊔Ξ).
Next, we define a natural Γ_Ψ, Ξ-equivariant B_^+/ξ^m-linear map
D_Ψ, Ξ, m(A)→ (_^+/^m)(A^+_Ψ, Ξ, ∞)
via sending X_u to [(u, u^1/p, …)], which induces a morphism
_D_Ψ, Ξ, m(A)((γ_u-1)_u∈Ψ⊔Ξ)→η_t_(_^+/^m)(A^+_Ψ, Ξ, ∞)((γ_u-1)_u∈Ψ⊔Ξ).
* Then, we define the bottom horizontal arrow of (<ref>) as the composite of (<ref>) and (<ref>). The map thus constructed makes the diagram (<ref>) commute and, by the previous points, it is a quasi-isomorphism.
Now, taking the filtered colimit _Ψ, Ξ, over all sufficiently large Ψ and Ξ, of the constructed bottom horizontal quasi-isomorphism of (<ref>), and then passing to the limit R_m, using the quasi-isomorphism (<ref>) combined with Lemma <ref>, and recalling Lemma <ref>, we obtain the desired quasi-isomorphism: such morphism is functorial since taking instead the filtered colimit of _Ψ, over all sufficiently large Ψ (and Ξ=∅), of the bottom horizontal quasi-isomorphism of (<ref>), we obtain the same morphism. This finishes the proof of (<ref>).
For the compatibility with filtrations part, we need to use in addition the compatibility with filtration stated in Lemma <ref>, Lemma <ref>, and Corollary <ref>.
By the proof of Theorem <ref> above, we can reduce to checking the commutativity of the following diagram
D_Λ, Σ(R) [d][r] _(R_Σ, Λ, ∞) [d]
D_Ψ, Ξ, m(A) [r] (_^+/^m)(A^+_Ψ, Ξ, ∞)
where the top horizontal arrow is (<ref>), the left vertical arrow is (<ref>), the bottom horizontal arrow is (<ref>), and the right vertical arrow is induced by the composition A_↪ B_^+↠ B_^+/^m.
In turn, we can reduce to verifying the commutativity of the following diagram
A_inf, Σ, Λ^□[d][r] _inf(R_Σ, Λ, ∞) [d]
A_inf⟨(X_u^±1)_u∈Ψ, (X_a)_a∈Ξ⟩[r] _inf(A^+_Ψ, Ξ, ∞).
This is clear as both the composition maps from A_inf, Σ, Λ^□ to _inf(A^+_Ψ, Ξ, ∞) send X_τ to [(X_τ, X_τ^1/p, …)], for any τ∈Σ∪{(λ, i): λ∈Λ, 1≤ i≤ d}.
We may assume X qcqs. We want to show that, given i≥ 0, for any -hypercover Y_∙→ X of qcqs smooth rigid-analytic varieties over C, the natural map
_inf^iRΓ_inf(X/B_^+)→lim_[n]∈Δ_inf^iRΓ_inf(Y_n/B_^+)
is an isomorphism. For this, we observe that, by the proof of Theorem <ref> (recalling Remark <ref>), for smooth rigid-analytic varieties over C the infinitesimal filtration on the infinitesimal cohomology over B_^+ naturally identifies with the filtration decalée on the B_^+-cohomology, and the latter satisfies -hyperdescent.
Before proving Theorem <ref>, we state and prove a version of the latter for rigid-analytic varieties defined over C.
Let X be a connected, paracompact, rigid-analytic variety defined over C.
* (Hyodo–Kato isomorphism over B_^+)
We have a natural isomorphism in D(^_B_^+)
RΓ_(X)_F̆ B_^+∼⟶ RΓ_inf(X/B_^+).
* (Compatibility) The isomorphism (<ref>) of Theorem <ref> is compatible with the isomorphism (<ref>) of Theorem <ref>, i.e. we have a commutative diagram as follows
RΓ_B(X) [d][r, "∼"] (RΓ_(X)_F̆B_log)^N=0 [d]
RΓ_B_^+(X) [r, "∼"] RΓ_inf(X/B_^+)
where the left vertical map is induced by the inclusion ↪_^+, and the right vertical map is induced by (<ref>).
For part compatib2:1, as in the proof of Theorem <ref>mainHK:3, we can reduce to showing the statement locally for X the generic fiber of X∈ M_,. For this, by Theorem <ref>, which applies thanks to Remark <ref>, we have a natural morphism in D(_B_^+^)
RΓ_( X_ O_C/p^0/ O_F̆^0)__p_F̆B_^+∼→ RΓ_( X_ O_C/p/A_^×)_A_B_^+→ RΓ_inf(X/B_^+)
which is an isomorphism modulo ξ. Here, the right arrow of (<ref>) is the one induced by (<ref>). Since both the source and the target of (<ref>) are derived ξ-adically complete, we conclude, by the derived Nakayama lemma, that (<ref>) is an isomorphism, as desired.
Now, part compatib2:2 is clear from Proposition <ref> and the construction of (<ref>) in part compatib2:1.
We are ready to prove Theorem <ref>.
For part compatib:1, we first observe that we have a natural isomorphism
RΓ_inf(X_C/B_^+)≃ RΓ_(X)_K B_^+.
For this, with an eye to the compatibility of part compatib:2, by the same ingredients used in the proof of Theorem <ref>, we can reduce to showing (<ref>) -locally, using Lemma <ref> and Lemma <ref>.
Then, part compatib:1 follows from Theorem <ref>compatib2:1[To avoid confusion, we warn the reader that in Theorem <ref> the rigid-analytic variety X is defined over K, instead in Theorem <ref> the rigid-analytic variety X is defined over C.] combined with the isomorphism (<ref>).
For part compatib:2, by Theorem <ref>compatib2:2, we are reduced to showing the compatibility between (<ref>) and (<ref>), under the isomorphism (<ref>). For this, in the setting of Notation <ref>, one readily reduces to checking the commutativity of the following diagram
D_Ψ, Ξ(A)[r][d] O D_Ψ, Ξ(A) [d] A_0_K B_^+ [l] [-,double line with arrow=-,-]d
_^+(A^+_Ψ, Ξ, ∞) [r] O _^+(A^+_Ψ, Ξ, ∞) A_0_K B_^+ [l].
Here, the top row is the zigzag (<ref>) of Lemma <ref>, and the bottom row, coming from Scholze's Poincaré lemma, is constructed, in the condensed setting, in <cit.>.
§ SYNTOMIC FARGUES–FONTAINE COHOMOLOGY
In this section, we define a cohomology theory for rigid-analytic varieties over C, called syntomic Fargues–Fontaine cohomology, which is close in spirit to Bhatt–Morrow–Scholze's syntomic cohomology theory for smooth p-adic formal schemes over O_C. In a stable range, we compare it with the rational p-adic pro-étale cohomology. Our definition is global in nature: it requires neither the existence of nice formal models nor smoothness, and it extends to coefficients. Moreover, it has a close relationship with the Fargues–Fontaine curve, as we show in <ref>.
Let X be a rigid-analytic variety over C. Let i≥ 0 be an integer. We define the syntomic Fargues–Fontaine cohomology of X with coefficients in _p(i) as the complex of D(__p^) given by the fiber
RΓ_, (X, _p(i)):=^iRΓ_B(X)^φ=p^i:=\mathrm{fib}\big( ^iRΓ_B(X)\xrightarrow{\,φ p^{-i}-1\,} ^i RΓ_B(X)\big)
where RΓ_B(X) is endowed with the filtration décalée from Definition <ref>.
§.§ The comparison with the p-adic pro-étale cohomology
The announced comparison between the syntomic Fargues–Fontaine cohomology and the p-adic pro-étale cohomology will rely on the following result.
Let X be an analytic adic space over (C, O_C). Let i≥ 0 be an integer. We have the following exact sequence of sheaves on X_
0→_p(i)→^i^i→ 0
where ^i=t^i.
The statement follows from the combination of (<ref>) and (<ref>) for i=0, recalling that φ(t^i)=p^i t^i.
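Concretely, writing the i-th filtration step as t^i(-) as above, multiplication by t^i intertwines φ-1 and φ p^{-i}-1: for a local section x one computes
(φ p^{-i}-1)(t^{i}x)=p^{-i}φ(t^{i})φ(x)-t^{i}x=t^{i}(φ-1)(x),
so that the displayed sequence is obtained from its i=0 case by twisting by t^{i}.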
Now, let X be a rigid-analytic variety over C.
Via the exact sequence (<ref>) of sheaves on the pro-étale site X_ we have a natural morphism
RΓ_, (X, _p(i))→ RΓ_(X, _p(i)).
where we are implicitly using v-descent, Proposition <ref>.
Let X be a rigid-analytic variety over C. Let i≥ 0 be an integer.
* The truncation τ^≤ i of (<ref>) is an isomorphism in D(__p^), i.e. we have
τ^≤ iRΓ_, (X, _p(i))∼⟶τ^≤ iRΓ_(X, _p(i)).
* We have a natural isomorphism in D(__p^)
RΓ_, (X, _p(i))≃(RΓ_B(X)^φ=p^i→ RΓ_B_^+(X)/^i).
For part BK=pet:1, recalling Definition <ref>, we have a natural isomorphism
τ^≤ i^i Lη_tRα_*∼→τ^≤ iRα_*^i
which, taking cohomology, induces
τ^≤ i^i RΓ_B(X)∼→τ^≤ iRΓ_v(X, ^i ).
Then, the statement follows from the exact sequence (<ref>).
For part BK=pet:2, considering the natural map of fiber sequences
RΓ_, (X, _p(i)) [r] [d] ^iRΓ_B(X) [r, "φ p^-i-1"] [d] ^iRΓ_B(X) [d]
RΓ_B(X)^φ=p^i[r] RΓ_B(X)[r, "φ p^-i-1"] RΓ_B(X)
we deduce that it suffices to construct a natural map
RΓ_B_^+(X)/^i→(RΓ_B(X)/^iRΓ_B(X)/^i)
and show that it is an isomorphism. Endowing the source, resp. the target, of (<ref>) with its natural (finite) filtration, induced by the filtration decalée on RΓ_B_^+(X), resp. RΓ_B(X), we see that we can reduce to showing that, for 0≤ j < i, there is a natural map
^j Lη_tRα_*_^+→(^j Lη_tRα_*^j Lη_tRα_*)
that is an isomorphism, or equivalently, applying Proposition <ref>bbe:2, that there is a natural map
τ^≤ jRα_*^j_^+→(τ^≤ jRα_*^jτ^≤ jRα_*^j)
that is an isomorphism. For this, combining the exact sequences of sheaves (<ref>), (<ref>), and (<ref>), we have the following exact sequence of sheaves on X_
0→_^+/^i _^+→/^i/^i→ 0
where ^i_^+=t^i_^+ and ^i=t^i. Then, observing that (<ref>) is compatible with filtrations, we have, for 0≤ j< i, the following exact sequence of sheaves on X_
0→^j_^+→^j^j→ 0.
Now, the exact sequence (<ref>) induces a natural map (<ref>) as desired, that we want to prove to be an isomorphism. Thus, we want to show that the map φ p^-i-1:R^jα_*^j→ R^jα_*^j is surjective, or equivalently that the map R^j+1α_*^j_^+→ R^j+1α_*^j, induced by (<ref>), is injective: up to twisting, it suffices to observe that, for j=0, the left map in (<ref>) is given by the inclusion in the direct product (recall (<ref>))
^0 _^+↪^0 =∏_y∈ |Y_|^_^+/t^_y(t)_^+
corresponding to the classical point V(ξ)∈ |Y_|^. This concludes the proof of part BK=pet:2.
§.§ Fargues–Fontaine cohomology and nuclear complexes on the curve
In this subsection, we first reinterpret the main comparison theorems proved in the previous sections in terms of the Fargues–Fontaine curve (Theorem <ref>): such results were conjectured by Le Bras, and proven by him in some special cases, cf. <cit.>, and are related to work of Le Bras–Vezzani, <cit.>. Then, in Theorem <ref>, we naturally attach to any qcqs rigid-analytic variety over C a quasi-coherent complex on the Fargues–Fontaine curve , whose cohomology is the syntomic Fargues–Fontaine cohomology (Definition <ref>); we conclude by showing that in the proper case such complex is perfect.
§.§.§ Quasi-coherent, nuclear, and perfect complexes on
We will rely on results of Clausen–Scholze, <cit.>, <cit.>, and Andreychev, <cit.>, to talk about quasi-coherent, nuclear, and perfect complexes on the adic Fargues–Fontaine curve . Let us start by recalling some notations from <ref>.
Given a condensed ring R, we denote by _R⊂ D(_R^) the ∞-subcategory of perfect complexes over R, <cit.>. Given a pair (A, A^+) with A a complete Huber ring and A^+ a subring of A^∘, we denote by (A, A^+)_ the associated analytic ring, <cit.>. We write D((A, A^+)_) for the derived ∞-category of (A, A^+)_-complete complexes, and ((A, A^+)_) for the ∞-subcategory of nuclear complexes. Note that, in the case A^+=, we have D((A, )_)=D(_A^) in the notation of <ref>. Given Y an analytic adic space, we denote by (Y) the ∞-category of quasi-coherent complexes on Y, we write (Y) for the ∞-subcategory of nuclear complexes on Y, and (Y) for the ∞-category of perfect complexes on Y, <ref>.
The following construction will be used to define a lift of the B-cohomology theory to a cohomology theory with values in the ∞-category of quasi-coherent complexes on the adic Fargues–Fontaine curve =Y_/φ^, called Fargues–Fontaine cohomology.
We write
Y_=⋃_I⊂ (0, ∞)Y_, I
with Y_, I=(B_I, B_I^+) for varying I⊂ (0, ∞) compact intervals with rational endpoints.
By <cit.>, we have natural maps of analytic rings
(B_I, )_→ (B_I, B_I^+)_
for varying I⊂ (0, ∞) compact intervals with rational endpoints. Such maps induce base change functors
-⊗_(B_I, )_^(B_I, B_I^+)_: D((B_I, )_)→ D((B_I, B_I^+)_)
and then, by analytic descent for quasi-coherent complexes, Theorem <ref>, they induce a functor
(_B):=lim_I⊂(0, ∞)D((B_I, )_)⟶lim_I⊂(0, ∞)(Y_, I)=(Y_)
with source the ∞-category of coadmissible solid modules over B,[The terminology adopted here comes from <cit.>.] and target the ∞-category of quasi-coherent complexes on Y_.
In order to move to the study of quasi-coherent complexes on the Fargues–Fontaine curve, we need to make the following definitions.
We define the ∞-category of quasi-coherent φ-complexes over Y_ as the equalizer
(Y_)^φ:= ( (Y_) [r, shift left,"φ^*"] [r, shift right, swap,"𝕀"]
(Y_) )
that is the ∞-category of the pairs ( E, φ_ E), where E is a quasi-coherent complex on Y_, and φ_ E: φ^* E≃ E is an isomorphism.
We define (Y_)^φ (resp. (Y_)^φ) as the full ∞-subcategory of (Y_)^φ spanned by the pairs ( E, φ_ E), with E a nuclear (resp. perfect) complex on Y_.
We define the ∞-category of coadmissible solid φ-modules over B as the equalizer
(_B)^φ:= ( (_B) [r, shift left,"φ^*"] [r, shift right, swap,"𝕀"]
(_B) )
that is the ∞-category of the pairs ( M, φ_ M), where M is a coadmissible solid module over B, and φ_ M: φ^* M≃ M is an isomorphism.
We define (_B)^φ (resp. (_B)^φ), the ∞-category of coadmissible nuclear (resp. perfect) φ-modules over B, as the full ∞-subcategory of (_B)^φ spanned by the pairs ( M, φ_ M) with M in lim_I⊂ (0, ∞)((B_I, )_) (resp. in lim_I⊂ (0, ∞)_B_I).
Now, recalling that the action of φ on Y_ is free and totally discontinuous, <cit.>, it follows formally from the analytic descent for quasi-coherent complexes, Theorem <ref>, that we have an equivalence of ∞-categories
(Y_)^φ≃(Y_/φ^).
Thus, from (<ref>) we obtain a functor
E_(-):(_B)^φ→()
with target the ∞-category of quasi-coherent complexes on the Fargues–Fontaine curve. Next, we focus on nuclear and perfect complexes on . We invite the reader to compare the following result with <cit.>.
* The functor E_(-), defined in (<ref>), induces an equivalence of ∞-categories
(_B)^φ∼⟶()
which restricts to an equivalence of ∞-categories
(_B)^φ∼⟶().
* Given E∈(), let ((M_I( E))_I⊂ (0, ∞), φ) be the coadmissible nuclear φ-module over B corresponding to E via the equivalence (<ref>). Let M( E):=Rlim_I⊂ (0, ∞)M_I( E). Then, there is a natural identification in D(__p^)
RΓ(, E)=(M( E)M( E)).
For part perff:1, we first observe that, for any compact interval I⊂ (0, ∞) with rational endpoints, the base change functor -⊗_(B_I, )_^(B_I, B_I^+)_: D((B_I, )_)→ D((B_I, B_I^+)_) induces an equivalence of ∞-categories[Such equivalence follows from Theorem <ref>, however we give here a self-contained proof.]
((B_I, )_)∼⟶((B_I, B_I^+)_).
In fact, by Theorem <ref>nuclearbanach:22, ((B_I, )_) is generated, under shifts and colimits, by the objects ([S], B_I), for varying profinite sets S; then, since nuclearity is preserved under base change thanks to Corollary <ref>, we deduce from Proposition <ref> that such objects also generate, under shifts and colimits, ((B_I, B_I^+)_).
Then, we have an equivalence of ∞-categories
(_B)^φ∼⟶(Y_)^φ.
Now, by analytic descent for nuclear complexes, Theorem <ref>, we have
(Y_)^φ≃(Y_/φ^)
which combined with (<ref>) implies the equivalence (<ref>). Such equivalence restricts to (<ref>) by analytic descent for perfect complexes, Theorem <ref>.
Part perff:2 follows from part perff:1, as we now explain. In fact, for any E∈(), using the equivalence (<ref>), we have the following identification
RΓ(, E) =_()( O, E)
=(_(Y_)( O, E|_Y_)_(Y_)( O, E|_Y_))
=(RΓ(Y_, E|_Y_)RΓ(Y_, E|_Y_))
where
RΓ(Y_, E|_Y_)=Rlim_I⊂ (0, ∞)_(Y_, I)( O, E|_Y_, I).
Now, assuming E∈() as in the statement, thanks to the equivalence (<ref>), we have
_(Y_, I)( O, E|_Y_, I)=_(Y_, I)( O, E|_Y_, I)=_((B_I, )_)(B_I, M_I( E))=M_I( E).
Hence, the statement follows, observing that all the ∞-categories appearing above are naturally enriched over D(__p^).
Perfect complexes on are well-understood. Let us recall the following characterization due to Anschütz–Le Bras.
Each E∈() is quasi-isomorphic to a bounded complex of vector bundles on .
In addition, perfect complexes on are closely related to Banach–Colmez spaces, as we now recall.
[<cit.>]
The category BC of Banach–Colmez spaces over C (for short, BC spaces) is the smallest abelian subcategory of sheaves of _p-modules on the v-site (C, O_C)_v stable under extensions and containing the v-sheaves _p and (_C^1)^.
In the following, we denote by
τ: _v→(C, O_C)_v
the natural morphism of v-sites, sending an affinoid perfectoid S∈(C, O_C)_v to the relative Fargues–Fontaine curve _S∈_v.
We have an equivalence of categories
Rτ_*: ()∼⟶ D^b(BC)
where the right-hand side denotes the bounded derived category of Banach–Colmez spaces over C.
§.§.§ Fargues–Fontaine cohomology
We are almost ready to define the Fargues–Fontaine cohomology together with its filtration. First, in order to prove <cit.>, we want to extend the definition of the B-cohomology to dagger varieties over C. In the following, we abbreviate D(B)=D(_B^).
[B-cohomology of dagger varieties]
Let X be a dagger variety over C.
Denote
F_B∈^(_C, , D(B))
the hypersheaf defined by the B-cohomology RΓ_B(-). Via Construction <ref>, we define
RΓ_B(X):=RΓ(X, F_B^†)∈ D(B)
and we endow it with the filtration induced by the filtration décalée on the B-cohomology of rigid-analytic varieties over C.
We give analogous definitions replacing the B-cohomology with the B_I-cohomology, for varying I⊂ (0, ∞) compact intervals with rational endpoints.
We note that the main comparison theorems proved in <ref> for the B-cohomology (and the B_I-cohomology) of rigid-analytic varieties extend to the dagger case, thanks to the properties of the solid tensor product. In particular, a version of Theorem <ref> holds true for X a qcqs dagger variety over C: in fact, by the proof of loc. cit. we can reduce to the case X is a smooth dagger affinoid over C, which follows from the statement of loc. cit. using Lemma <ref> (which applies thanks to Proposition <ref>3.11.1) and the fact that the solid tensor product commutes with filtered colimits.
In order to define the Fargues–Fontaine cohomology, we will apply the functor (<ref>) in the following situation.
Let X be a qcqs rigid-analytic/dagger variety over C. Let i≥ 0. The pair
((^i RΓ_B_I(X))_I⊂(0, ∞), φ)
defines a coadmissible nuclear φ-module over B. Here, the complexes RΓ_B_I(X), for I⊂ (0, ∞) compact intervals with rational endpoints, are endowed with the filtration décalée (Definition <ref>).
First, we recall that φ(t)=pt, and p is invertible in B_I. Next, let us assume i=0. To show that the pair (<ref>) is a coadmissible solid φ-module over B, we need to check that
RΓ_B_J(X)_B_JB_I≃ RΓ_B_I(X)
for any I⊂ J⊂ (0, ∞) compact intervals with rational endpoints: this follows for example from Theorem <ref> (and Remark <ref>).[Trivializing the monodromy action.]
In order to prove the nuclearity of (<ref>) for i=0, we can use again Theorem <ref>, Theorem <ref>mainHK:2, and the fact that nuclearity is preserved under base change, Corollary <ref>. Alternatively, and more directly, the nuclearity can be checked as follows. By Theorem <ref>nuclearbanach:12 and -hyperdescent, we can reduce to the case when X is a smooth affinoid rigid space over C. Then, by Proposition <ref>, and the fact that nuclearity can be checked on cohomology groups thanks to Theorem <ref>nuclearbanach:32, using <cit.> we can reduce to the assertion that each complex of solid B_I-modules RΓ_(X, _I) is nuclear, which was shown in Lemma <ref>.
It remains to show that the isomorphism (<ref>) is compatible with the filtration décalée, and to prove the nuclearity of the filtered pieces. Proceeding by induction on i≥ 0, we can check this on graded pieces using Proposition <ref>bbe:2, again with the help of Theorem <ref>, Proposition <ref>, and Lemma <ref> for the nuclearity.
The following lemma on the nuclearity of the cohomology of the rational pro-étale period sheaves was used above.
Let X be a qcqs analytic adic space over (C, O_C). Let I⊂ (0, ∞) be a compact interval with rational endpoints, and let m≥ 1 be an integer. Given 𝐁∈{_I, _^+/^m}
we write ℬ=𝐁_(C)_ for the corresponding condensed period ring. Then, we have
RΓ_(X, 𝐁)∈((ℬ, )_).
Picking a simplicial -hypercover U_∙→ X such that all U_n are affinoid perfectoid spaces over (C, O_C), by -hyperdescent and Theorem <ref>nuclearbanach:12 (applied to the Banach _p-algebra ℬ), it suffices to check that for any affinoid perfectoid space U over (C, O_C) we have that RΓ_(U, 𝐁)∈((ℬ, )_). We note that, by <cit.> and <cit.>, we have
RΓ_(U,𝐁)=𝐁(U)[0] concentrated in degree 0 (as it can be checked on S-valued points, for any κ-small extremally disconnected set S). Thus, by Remark <ref>, we are reduced to proving that 𝐁(U)[0]∈((_p, )_), which follows from <cit.> observing that 𝐁(U) is a _p-Banach space.
Finally, we can give the following definition.
Let X be a qcqs rigid-analytic/dagger variety over C. We define the Fargues–Fontaine cohomology of X, denoted
H_(X)∈()
as the quasi-coherent complex on the Fargues–Fontaine curve , endowed with filtration ^⋆ H_(X), associated, via the functor E_(-) defined in (<ref>), to the filtered coadmissible solid φ-module over B given by (<ref>). For i∈, we denote by H^i_(X) its i-th cohomology group.
Using Definition <ref> we can now reformulate, in terms of the Fargues–Fontaine curve, the comparison theorems proven in the previous sections.
Let X be a qcqs dagger variety over C. Let i≥ 0.
* The quasi-coherent complex H_(X) on is perfect, and its cohomology groups are vector bundles on . We have a natural isomorphism
H_^i(X)≅ E(H_^i(X))
where E(H_^i(X)) is the vector bundle on associated to the finite (φ, N)-module H_^i(X) over F̆. If X is the base change to C of a rigid-analytic variety defined over K, then (<ref>) is 𝒢_K-equivariant.
* The completion at ∞ of (<ref>) gives a natural isomorphism
H_^i(X)^∧_∞≅ H_inf^i(X/B_^+)
where RΓ_inf(X/B_^+) is defined, via Construction <ref>, from Definition <ref>.
Part lb:1 follows from Theorem <ref> (combined with Remark <ref>), Theorem <ref>slopp:1 and Lemma <ref>. Part lb:2 follows from Theorem <ref> (which extends to the dagger case similarly to Remark <ref>).
Let X be a qcqs dagger variety over C and let i≥ 0. Recalling that the functor from finite φ-modules over F̆ to vector bundles on induces a bijection on isomorphism classes, <cit.>, we deduce from Theorem <ref>lb:1 that the vector bundle H_^i(X) determines, up to isomorphisms, the φ-module structure on H_^i(X).
Using Theorem <ref>lb:1, we can also recover from H_^i(X) the (φ, N)-module structure on H_^i(X), up to isomorphisms: in fact, denoting 𝒢_F̆=(C/F̆), by <cit.> we have a natural isomorphism of (φ, N)-modules over F̆
H_^i(X)≅(H^0(∖{∞}, H_^i(X))⊗_B_eB_log[1/t])^𝒢_F̆
observing that, since (<ref>) is an isomorphism of 𝒢_F̆-equivariant vector bundles on , we have a 𝒢_F̆-equivariant isomorphism
H^0(∖{∞}, H_^i(X))≅ (H_^i(X)⊗_F̆B_log[1/t])^φ=1, N=0.
Next, we state the main result of this subsection.
Let X be a qcqs rigid-analytic/dagger variety over C. Let i≥ 0. Consider the quasi-coherent complex on defined by
H_(X)(i):=^i H_(X)⊗ O(i).
We have
RΓ(, H_(X)(i))=RΓ_, (X, _p(i)).
If X is a proper rigid-analytic variety over C, the complex H_(X)(i) is perfect; in particular, the complex RΓ_, (X, _p(i)) identifies with the C-points of a bounded complex of Banach–Colmez spaces.
By Proposition <ref>perff:2, which applies thanks to Lemma <ref>, we have
RΓ(, H_(X)(i)) =(RΓ(Y_, H_(X)(i)|_Y_)RΓ(Y_, H_(X)(i)|_Y_))
=(RΓ(Y_, ^i H_(X)|_Y_) RΓ(Y_, ^i H_(X)|_Y_))
=(^iRΓ_B(X)^iRΓ_B(X))
where in the last step we used in addition Lemma <ref>. This shows (<ref>).
Next, assume X proper. To show that H_(X)(i) is a perfect complex, using the derived Beauville–Laszlo gluing, Lemma <ref>, we can reduce to showing that, for I⊂ (0, ∞) a compact interval with rational endpoints, the complexes RΓ_(X, _I[1/t]) and ^iRΓ_B_^+(X) are perfect. First, we note that such complexes are bounded thanks to Proposition <ref>. Then, to prove that the complex RΓ_(X, _I[1/t]) is perfect, we can apply Theorem <ref> combined with Theorem <ref>slopp:1 (or, alternatively, <cit.>), and for the complex ^iRΓ_B_^+(X) we use Proposition <ref> below.
Finally, we note that, by Proposition <ref>, the complex RΓ_, (X, _p(i)) identifies with the C-points of a bounded complex of Banach–Colmez spaces, as desired (in fact, with the same notation as in Proposition <ref>, for E∈(), we have Rτ_*( E)((C, O_C))=RΓ(, E), by the v-descent result <cit.> combined with Proposition <ref>).
We used the following derived version of the classical Beauville–Laszlo gluing.
Let R be a commutative condensed ring, let f∈ R be a non-zero-divisor, and denote by \widehat{R} the f-adic completion of R. We have a natural equivalence of ∞-categories
\mathrm{Perf}_R≃\mathrm{Perf}_{R[1/f]}×_{\mathrm{Perf}_{\widehat{R}[1/f]}}\mathrm{Perf}_{\widehat{R}}.
By <cit.>, we have a natural equivalence of ∞-categories \mathrm{Perf}_{R(*)}≃\mathrm{Perf}_{R[1/f](*)}×_{\mathrm{Perf}_{\widehat{R}[1/f](*)}}\mathrm{Perf}_{\widehat{R}(*)}.
In order to carry such equivalence into the condensed setting, we recall that, for any commutative condensed ring A, we define the condensification functor, <cit.>, as the composite
_A: D(_A(*))↪ D(_A(*)^)D(_A^).
Then, the statement follows observing that _A preserves perfect complexes, and applying <cit.>.
We also used the following finiteness/degeneration result on the B_^+-cohomology of proper rigid-analytic varieties over C. As we will see, part fillemma:1 follows from results of Guo, whereas part fillemma:2 relies crucially on a combination of Conrad–Gabber's spreading out for proper rigid-analytic varieties and a generic smoothness result recently proved by Bhatt–Hansen, <cit.>, which allows us to reduce the statement to the case when X is the base change to C of a proper smooth rigid-analytic variety over a discretely valued subfield of C.
Let X be a proper rigid-analytic variety over C.
* For all i∈, the cohomology group H^i_B_^+(X) is a finite free module over B_^+.
* For all i, r∈, the natural map
H^i(^r RΓ_B_^+(X))→ H^i_B_^+(X)
is injective. Equivalently, for all i, r∈, the natural map
H^i_B_^+(X)/^r→ H^i(RΓ_B_^+(X)/^r)
is an isomorphism, where ^r H^i_B_^+(X):=\mathrm{im}\big(H^i(^r RΓ_B_^+(X))→ H^i_B_^+(X)\big).
Part fillemma:1 follows from <cit.> combined with Theorem <ref>. Next, we prove part fillemma:2 and give at the same time an alternative proof of part fillemma:1. We first note that the equivalence of the two assertions in part fillemma:2 follows immediately by considering the long exact sequence in cohomology associated to the exact triangle
^r RΓ_B_^+(X)→ RΓ_B_^+(X)→ RΓ_B_^+(X)/^r.
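In more detail, the long exact sequence in question reads
⋯→ H^i(^r RΓ_B_^+(X))→ H^i_B_^+(X)→ H^i(RΓ_B_^+(X)/^r)→ H^i+1(^r RΓ_B_^+(X))→ H^i+1_B_^+(X)→⋯
so that the induced map H^i_B_^+(X)/^r→ H^i(RΓ_B_^+(X)/^r) is always injective, and it is surjective precisely when H^i+1(^r RΓ_B_^+(X))→ H^i+1_B_^+(X) is injective; letting i and r vary, this gives the claimed equivalence.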
We will use that, by <cit.>, there exists a proper flat morphism f: X→ S of rigid-analytic varieties over a discretely valued subfield L⊂ C such that X is the fibre of f over a point η∈ S(C); we note that we may assume S=(A, A^∘) to be a smooth affinoid over L. Then, the map η corresponds to a map A→ C of affinoid L-algebras, and, by the (formal) smoothness of A over L, the latter map lifts to a map A→ B_^+. Now, we divide the argument into several cases.
* In the case S is a point, we have that X is the base change to C of a proper rigid-analytic variety X defined over a discretely valued subfield L⊂ C. Then, by Theorem <ref>, we have a natural filtered quasi-isomorphism
RΓ_( X)⊗_L^ B_^+≃ RΓ_B_^+(X).
Here, we used that, as X is proper, the cohomology groups of RΓ_(X), and those of its filtered pieces, are finite-dimensional over L: in fact, by Proposition <ref>, there exists a proper -hypercover X_∙→ X with each X_n smooth over L; then, by cohomological descent, we can reduce to the case X is smooth, which follows from <cit.>.
H^i-j( X, Ω^j_ X_) H^i_( X)
(see <cit.> for the case X is smooth, and <cit.> for the case X is singular).
* In the case f: X→ S is smooth, denoting by Rf_ * O_ X:=Rf_*Ω_ X/S^∙ the relative de Rham cohomology of f endowed with its Hodge filtration Rf_*Ω_ X/S^≥⋆, we claim that we have a filtered quasi-isomorphism
Rf_ * O_ X⊗_A^B_^+≃ RΓ_B_^+(X)
compatible with (<ref>) in the case S is a point. For this, by Theorem <ref> and Proposition <ref>, it suffices to show that we have a B_^+-linear map
Rf_ * O_ X⊗_A^B_^+ → RΓ_inf(X/B_^+)
which is a quasi-isomorphism compatible with filtrations, where the right-hand side is endowed with the infinitesimal filtration. Arguing as in the proof of <cit.>, we can construct (<ref>) on a hypercover of X by very small smooth affinoid spaces over L in the sense of <cit.>,[I.e. the affinoid spaces (A_0, A_0^∘) of Notation <ref> for r=0, q=1, and Ξ_0=∅.] using in addition Lemma <ref>.[In the case S is a point, the stated compatibility with (<ref>) follows from the proof of Theorem <ref> (see in particular the commutative diagram (<ref>)).]
We note that the constructed map (<ref>) is a quasi-isomorphism after applying the functor -⊗^_B_^+B_^+/ξ, recalling that we have a natural quasi-isomorphism
RΓ_inf(X/B_^+)⊗^_B_^+B_^+/ξ≃ RΓ_(X)
by <cit.>; then, in order to show that (<ref>) is a quasi-isomorphism, using the derived Nakayama lemma, it suffices to check that both the source and the target of (<ref>) are derived ξ-adically complete: for the source, we note that each R^if_ * O_ X is a coherent O_S-module with (integrable) connection, in particular it is a locally free O_S-module (see <cit.>); for the target, we can use for example the Čech-Alexander complex computing the infinitesimal cohomology over B_^+, <cit.>. To prove that the quasi-isomorphism (<ref>) is compatible with filtrations, proceeding by induction on the index i≥ 0 of the filtrations, it suffices to check the compatibility on graded pieces, which follows from Lemma <ref>.
Now, to prove part fillemma:1, using the quasi-isomorphism (<ref>), it suffices to recall that each R^if_ * O_ X is a finite projective A-module. For part fillemma:2, using the filtered quasi-isomorphism (<ref>), it suffices to recall that, thanks to <cit.> and <cit.>, for all i, j∈ the relative Hodge cohomology R^i-jf_*Ω_ X/S^j is a finite projective A-module, and that the relative Hodge-de Rham spectral sequence
R^i-jf_*Ω_ X/S^j ⇒ R^if_ * O_ X
degenerates.
* In the general case, we will use that Rf_ *_p is a bounded Zariski-constructible complex of sheaves on S, thanks to <cit.>. Denoting by ν: S_→ S_ the natural morphism of sites, and setting
D_(Rf_ *_p):=R^0ν_*(Rf_ *_p⊗__p O_, S)
(see e.g. <cit.> for the definition of Scholze's pro-étale sheaf O_, S), we claim that we have a natural filtered quasi-isomorphism
D_(Rf_ *_p)⊗_A^B_^+≃ RΓ_B_^+(X).
For this, we argue by induction on (S). For the base case (S)=0, we have that S is a disjoint union of points, hence we can reduce to the case S is a point; then, by the de Rham comparison theorem for proper (possibly singular) rigid-analytic varieties defined over L, <cit.>, we have a natural filtered quasi-isomorphism D_(Rf_ *_p)≃ RΓ_( X), and the claim follows from the filtered quasi-isomorphism (<ref>) of part keya.
For the inductive step, using Proposition <ref>, picking a proper -hypercover X_∙→ X with each X_n smooth over L, we can reduce to the case X is smooth over L. In the latter case, by <cit.>, the maximal open subset S'⊂ S such that f':f^-1(S')→ S' is smooth is a dense Zariski-open subset; in particular, the Zariski-closed complement Z:=S∖ S' is nowhere dense in S, and hence we have (Z)< (S). Then, if η is contained in Z, restricting f over Z, the claim follows from the inductive hypothesis, by proper base change <cit.>; if η is contained in S', by the smoothness of the restriction f' of f over S', the claim follows again by proper base change from the filtered quasi-isomorphism (<ref>) of part keyb, recalling that in this case, by the relative de Rham comparison (<cit.> combined with <cit.>) we have a filtered quasi-isomorphism D_(Rf'_ *_p)≃ Rf'_ * O_ X.
Now, part fillemma:1 follows from the quasi-isomorphism (<ref>), recalling that Rf_ *_p is a bounded Zariski-constructible complex of sheaves on S. For part fillemma:2, using the filtered quasi-isomorphism (<ref>), it suffices to check that, up to replacing S by a suitable Zariski locally closed subset containing η, the spectral sequence associated to the filtered complex D_(Rf_ *_p), i.e.
H^i-j(^jD_(f_ *_p)) ⇒ H^i(D_(Rf_ *_p)),
degenerates. For this, using again that Rf_ *_p is a bounded Zariski-constructible complex of sheaves on S, by <cit.>, we can suppose that all the terms of the spectral sequence above are vector bundles on S, in which case the degeneration can be checked on stalks at classical points, where it follows from the degeneration of the spectral sequence (<ref>) of part keya.
The following lemma was used in the proof above.
Let X be a smooth rigid-analytic variety over C. For any r≥ 0, we have a natural quasi-isomorphism
^r RΓ_B_^+(X)≃⊕_0≤ i≤ rRΓ(X, Ω_X^i)(r-i)[-i].
First, we note that, by Proposition <ref>bbe:2, we have ^r RΓ_B_^+(X)≃ RΓ(X, τ^≤ rRα_* O(r)). We want to show that we have a natural identification
τ^≤ rRα_* O(r)≃⊕_0≤ i≤ rΩ_X^i(r-i)[-i].
We may assume that X is a smooth affinoid over C, and then, by <cit.>, we can further assume that X is the base change to C of a smooth affinoid defined over a finite extension of K. In this case, (<ref>) follows from <cit.>.
§.§ Comparison with <cit.>
In this subsection, we compare the syntomic Fargues–Fontaine cohomology for rigid-analytic varieties over C, Definition <ref>, and the syntomic cohomology for semistable p-adic formal schemes over O_C defined (in the smooth case) by Bhatt–Morrow–Scholze, <cit.>. In particular, we show that the syntomic Fargues–Fontaine cohomology can be locally recovered from the A_inf-cohomology together with its Nygaard filtration.
The results proved here are not used in the rest of the paper, but we hope they will be useful for future reference. As the main comparison results of this subsection will be proven in the semistable case, we begin by recalling the definition of the A_inf-cohomology, as well as the Nygaard filtration on it, in the latter setting. We will phrase the definition of the Nygaard filtration in terms of the décalage functors of Definition <ref>, as this will be convenient for the desired comparison.
[Nygaard filtration on A_inf-cohomology]
Let 𝔛 be a semistable p-adic formal scheme over O_C, and let X denote its generic fiber, regarded as an adic space over (C, O_C). Denote by ν':X_→𝔛_, the natural morphism of sites.[Here, the site 𝔛_, is defined similarly to <cit.>.]
* We define the A_inf-cohomology of X as the complex of D(^_A_inf)
RΓ_A_inf( X):=RΓ( X, AΩ_ X), where AΩ_ X:=Lη_μRν'_*_inf.
* Given an integer i≥ 0, consider the function δ_i:→, j↦max(i-j, 0).
We endow the A_inf-cohomology of X with the Nygaard filtration whose i-th level is given by
_ N^iRΓ_A_inf( X):=RΓ( X, _ N^i AΩ_ X), where _ N^i AΩ_ X:= L(η_δ_i,ξ∘η_μ)Rν'_*_inf.
In Definition <ref> above, we implicitly used that the functor η_δ_i,ξ∘η_μ(-) preserves quasi-isomorphisms. To check the latter assertion, we observe that, as ξ=μ/φ^-1(μ), we have
η_δ_i,ξ∘η_μ=η_δ_i, μ∘η_-δ_i, φ^-1(μ)∘η_μ=η_ε_i, μ∘η_-δ_i, φ^-1(μ)
where ε_i:→,j↦max(i, j). Then, since both ε_i and -δ_i are non-decreasing functions, we conclude by Proposition <ref>.
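For the reader's convenience, the pointwise identity underlying the last equality above (under the usual convention that η_μ is the décalage functor with weight function j↦ j, so that weight functions add when composing décalage functors attached to the same element) is simply the following.
```latex
\[
\delta_i(j)+j \;=\; \max(i-j,\,0)+j \;=\; \max(i,\,j) \;=\; \varepsilon_i(j),
\qquad j\ge 0 .
\]
```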
The lemma below, combined with <cit.>, shows that the Nygaard filtration defined above is equivalent to the one of <cit.> (in the smooth case). First, we recall that AΩ_ X comes equipped with a Frobenius, <cit.>: in fact, the Frobenius automorphism of _inf induces a φ_A_inf-semilinear map
φ:AΩ_ X→ AΩ_ X.
Let ξ̃:=φ(ξ). Under the notation of Definition <ref>, the Frobenius φ: AΩ_ X→ AΩ_ X factors functorially over a φ_A_inf-semilinear quasi-isomorphism
AΩ_ X∼→ Lη_ξ̃AΩ_ X
sending the Nygaard filtration (Definition <ref>) on the source to the filtration décalée (Definition <ref>) on the target.
Let AΩ_ X^ denote the presheaf version of AΩ_ X, <cit.>. Given an integer i≥ 0, it suffices to observe that the Frobenius automorphism of _inf induces a quasi-isomorphism
φ^*_ N^i AΩ_ X^≃ L(η_δ_i, ξ̃∘η_ξ̃)AΩ_ X^≃ Lη_ε_i, ξ̃AΩ_ X^
where we used that φ(μ)=ξ̃·μ and we denoted ε_i:→,j↦max(i, j).
The Frobenius automorphism of _inf induces a φ_A_inf-semilinear map
_ N^⋆(φ):_ N^⋆ AΩ_ X→ξ̃^⋆⊗ AΩ_ X.
We are almost ready to define the syntomic cohomology of Bhatt–Morrow–Scholze in the semistable reduction case. We shall use the following notation.
We consider the Breuil–Kisin–Fargues module over A_inf
A_inf{1}:=1/μ(A_inf⊗__p_p(1))
(<cit.>) and, given i∈, for any A_inf-module M we denote by
M{i}:=M⊗_A_infA_inf{1}^⊗ i
its i-th Breuil–Kisin–Fargues twist.
[Bhatt–Morrow–Scholze's syntomic cohomology, cf. <cit.>] Fix notation as in Definition <ref>. Let i≥ 0 be an integer. We define
RΓ_, ( X, _p(i)):=(_ N^iRΓ_A_inf( X){i}→ RΓ_A_inf( X){i})
where _ N^i RΓ_A_inf( X){i} denotes the Breuil–Kisin–Fargues twisted i-th level of the Nygaard filtration on the A_inf-cohomology of X, and we write φ{i} for the tensor product of _ N^i(φ) (Remark <ref>) with the Frobenius of A_inf{i}.
Next, we want to compare Definition <ref> with Definition <ref>. Recalling that the Fargues–Fontaine curve has a presentation given by the quotient of Y_, [1, p] via the identification φ: Y_, S, [1, 1]≅ Y_, S, [p, p], we will define a Nygaard filtration on the B_[1, p]-cohomology of rigid-analytic varieties over C, and we will explain how to recover the syntomic Fargues–Fontaine cohomology from the latter. Similarly to Definition <ref>, we can give the following.
[Nygaard filtration on B_I-cohomology]
Let X be a rigid-analytic variety over C and denote by α:X_v→ X_, the natural morphism of sites. Let I=[1, r]⊂ (0, ∞) be an interval with rational endpoints.
Given 𝐁∈{_I, _^+}, we write ℬ=𝐁_(C)_.
We endow the ℬ-cohomology of X with the Nygaard filtration whose i-th level is given by
_ N^i RΓ_ℬ(X):=RΓ(X, _ N^i Lη_tRα_*𝐁), where _ N^i Lη_tRα_*𝐁:=L(η_δ_i,ξ∘η_t)Rα_*𝐁.
In the Definition <ref> above, we used Remark <ref> together with the fact that, by the choice of the interval I, the elements t and μ differ by a unit in B_I.
We note that the Nygaard filtration on the B_^+-cohomology agrees with its filtration décalée, since the elements t and ξ generate the same ideal in B_^+. Moreover, we recall that the latter filtration has a more explicit expression in coordinates, as shown in Corollary <ref>. In a similar vein, we have the following result.
Let 𝔛=(R) be a semistable p-adic formal scheme over O_C as in Notation <ref>, and let X= 𝔛_C denote its generic fiber. Let I=[1, r]⊂ (0, ∞) be an interval with rational endpoints. For any i≥ 0, we have a B_I-linear quasi-isomorphism, compatible with Frobenius,
ξ^max(i-∙, 0)Ω^∙_B_I(R)∼→_ N^i RΓ_B_I(X)
where Ω_B_I(R)^∙:=_B_I(R)(∂_1, …,∂_d), in the notation of <ref>.
For i=0, the statement follows combining Lemma <ref> and Lemma <ref>.
Arguing by induction on i≥ 0, to show the statement in general it suffices to check (<ref>) on graded pieces.
We begin by observing that the natural map
(Lη_tRν_*_I)/_ N^i→ (Lη_tRν_*_^+)/_ N^i
is an isomorphism. In fact, we can reduce to showing that, for each j≥ 0, the natural map
_ N^j Lη_tRν_*_I→_ N^j Lη_tRν_*_^+
is an isomorphism (here, the graded pieces _ N^⋆ refer to the Nygaard filtration _ N^⋆). For this, since we can replace Lη_t with Lη_ξ on both sides of (<ref>) (recall that we have an isomorphism B_I/ξ∼→B_^+/ξ, and the elements t and ξ generate the same ideal in B_^+), by Proposition <ref>bbe:2, the claim reduces to the isomorphism
_ N^j_I∼→^j_^+
where we denote by _ N^⋆ the graded pieces for the ξ-adic filtration _ N^⋆ on _I.
Now, the desired statement, i.e. the quasi-isomorphism (<ref>) on graded pieces, follows from the quasi-isomorphisms (<ref>) of Corollary <ref> using that, for any j≥ 0, the natural map B_I(R)/ξ^j→ B_^+(R)/ξ^j is an isomorphism.
On the other hand, we have the following local description of the Nygaard filtration on the A_inf-cohomology.
Let 𝔛=(R) be a semistable p-adic formal scheme over O_C as in Notation <ref>. For any i≥ 0, there is an A_inf-linear quasi-isomorphism, compatible with the Frobenius,
_ N^iRΓ_A_inf( X)≃ξ^max(i-∙, 0)q-Ω^∙_A_inf(R)
where q-Ω^∙_A_inf(R) denotes the logarithmic q-de Rham complex defined as[See also <cit.> for the construction of the logarithmic q-de Rham complex in a more general setting.]
q-Ω^∙_A_inf(R):=_A_inf(R)(∂_q/∂_qlog(X_1), …, ∂_q/∂_qlog(X_d))
with q=[ε]∈ A_inf.
For i=0 the statement follows from <cit.>. The general case follows by induction on i≥ 0, computing the graded pieces of the filtrations, using Lemma <ref> and Proposition <ref>bbe:2 (cf. <cit.>).
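As an illustration of the logarithmic q-de Rham complex defined above, consider the simplest one-variable case (an illustrative special case with a single unit coordinate X, rather than the general R of Notation <ref>): the operator ∂_q/∂_q log(X) acts on monomials through q-integers, and reducing modulo q-1 recovers the usual logarithmic derivative X d/dX, with [n]_q ↦ n.
```latex
% action of the logarithmic q-derivative on a monomial, with q-integer [n]_q
\[
\frac{\partial_q}{\partial_q\log(X)}\,X^{n} \;=\; [n]_q\,X^{n},
\qquad
[n]_q \;=\; \frac{q^{n}-1}{q-1} \;=\; 1+q+\cdots+q^{\,n-1}.
\]
```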
The following proposition can be regarded as a refinement of Lemma <ref>.
Let 𝔛 be a qcqs semistable p-adic formal scheme over O_C, and let X=𝔛_C denote its generic fiber.
Denote by
ν:X_→ X_, λ: X_, →𝔛_,
the natural morphisms of sites, and let ν':X_→𝔛_, be their composition.
Let I=[1, r]⊂ (0, ∞) be an interval with rational endpoints. For any i≥ 0, the natural map
_ N^i Lη_μRν'_*_inf→ Rλ_*_ N^i Lη_μRν_*_I
induces a quasi-isomorphism[Here, we use the fact that μ and t differ by a unit in B_I.]
_ N^iRΓ_A_inf( X)_A_infB_I∼⟶_ N^i RΓ_B_I(X).
We can reduce to the case 𝔛=(R) is a semistable p-adic formal scheme over O_C as in Notation <ref>. The statement for i=0 is essentially contained in Lemma <ref>. In fact, combining Lemma <ref> and the quasi-isomorphism (<ref>) in the proof of Lemma <ref>, we have a quasi-isomorphism
RΓ_B_I(X)≃_B_I(R)(γ_1-1/t, …, γ_d-1/t).
Therefore, using Lemma <ref> for i=0, and recalling that t and μ=q-1 differ by a unit in B_I, it suffices to check that the natural map
_A_inf(R)(γ_1-1/q-1, …, γ_d-1/q-1)_A_infB_I→_B_I(R)(γ_1-1/q-1, …, γ_d-1/q-1)
is a quasi-isomorphism; this can be done as in the proof of (<ref>) in Lemma <ref>. Finally, the statement in general follows by arguing by induction on i≥ 0 and calculating again the graded pieces of the filtrations.
We can finally state and prove the main result of this subsection, which in particular tells us how the syntomic Fargues–Fontaine cohomology can be locally recovered from the A_inf-cohomology together with its Nygaard filtration.
Let X be a rigid-analytic variety over C. Denote I=[1, p] and I'=[1, 1]. Let i≥ 0 be an integer.
* We have a natural isomorphism in D(__p^)
RΓ_, (X, _p(i))≃(_ N^iRΓ_B_I(X)→ RΓ_B_I'(X)).
* Assume that X is the generic fiber of a qcqs semistable p-adic formal scheme 𝔛 over O_C. Then, we have a natural isomorphism in D(__p^)
RΓ_, (X, _p(i))≃(_ N^iRΓ_A_inf( X){i}_A_infB_I→ RΓ_A_inf( X){i}_A_infB_I').
In particular, there is a natural morphism
RΓ_, ( X, _p(i))⟶ RΓ_, (X, _p(i)).
For part crucialsyn:1, using the isomorphism
RΓ_B_I(X)/_ N^i∼→ RΓ_B_^+(X)/^i
coming from (<ref>), we have a natural isomorphism
(_ N^iRΓ_B_I(X)→ RΓ_B_I'(X))≃(RΓ_B_I(X)^φ=p^i→ RΓ_B_^+(X)/^i)
where
RΓ_B_I(X)^φ=p^i:=(RΓ_B_I(X)→ RΓ_B_I'(X)).
Then, combining (<ref>) with Theorem <ref>BK=pet:2, it remains to show that the natural map
RΓ_B(X)^φ=p^i→ RΓ_B_I(X)^φ=p^i
is an isomorphism. For this, in the notation of <ref>, we observe that by Proposition <ref> the source of (<ref>) is computed by the cohomology of H_(X)⊗ O(i)∈(). But the latter cohomology also computes the target of (<ref>), using the presentation of the curve as the quotient of Y_, [1, p] via the identification φ: Y_, [1, 1]≅ Y_, [p, p]. This concludes the proof of part crucialsyn:1.
For part crucialsyn:2, by part crucialsyn:1 and Proposition <ref> we have a natural isomorphism in D(__p^)
RΓ_, (X, _p(i))≃(_ N^iRΓ_A_inf( X)_A_infB_I→ RΓ_A_inf( X)_A_infB_I').
On the other hand, trivializing the Breuil–Kisin–Fargues twists we can rewrite the fiber in the statement of part crucialsyn:2 as
(_ N^iRΓ_A_inf( X)_A_infB_I→ RΓ_A_inf( X)_A_infB_I')
where ξ̃:=φ(ξ). We conclude observing that, writing μ=ut with u unit in B_I, the multiplication by u^i map induces an isomorphism between the fiber in (<ref>) and the fiber in (<ref>).
§.§ Comparison with <cit.>
In this subsection, we show that, in high degrees, the syntomic Fargues–Fontaine cohomology does not agree with the syntomic cohomology for smooth rigid-analytic varieties over C defined by Colmez–Nizioł.
Let X be a smooth rigid-analytic variety over C. Let i≥ 0 be an integer. We denote by RΓ_, (X, _p(i)) the syntomic cohomology of X with coefficients in _p(i) of Colmez–Nizioł, defined in <cit.>.
Let X=^1_C be the rigid-analytic projective line over C. We claim that
H^3_, (X, _p(0))≅ C/_p, H^3_, (X, _p(0))≅ 0.
For this, applying Theorem <ref>BK=pet:1, combined with Theorem <ref>, for the syntomic Fargues–Fontaine cohomology, and <cit.> for the syntomic cohomology of Colmez–Nizioł, we obtain respectively
H^3_, (X, _p(0))≅ (H_^2(X)⊗_F̆ B)/(φ-1), H^3_, (X, _p(0))≅ (H_^2(X)⊗_F̆ B_^+)/(φ-1)
where we used that H_^3(X)≅ 0. Now, the map φ-1 is surjective on H_^2(X)⊗_F̆ B_^+ (see e.g. <cit.>). Instead, observing that H_^2(X) is a one-dimensional φ-module over F̆ with slope 1 (by Theorem <ref>mainHKover:1, and <cit.>[In this case, in order to deduce that H_^2(X) has slope 1, it is sufficient to use the weak Lefschetz theorem for crystalline cohomology, <cit.>.]), we deduce that the vector bundle on the Fargues–Fontaine curve associated to H_^2(X) is isomorphic to O(-1); hence, we can identify (H_^2(X)⊗_F̆ B)/(φ-1) with H^1(, O(-1)), thus showing (<ref>).
§ APPLICATIONS
In this section, we gather the results we have obtained so far, giving some applications.
§.§ Fundamental diagrams of rational p-adic Hodge theory
Let X be a qcqs rigid-analytic variety defined over K. We have a 𝒢_K-equivariant pullback square in D(__p^)
RΓ_(X_C, _p) ⟶ (RΓ_(X_C)_F̆B_log[1/t])^N=0, φ=1
      ↓                                   ↓
^0(RΓ_(X)_K B_) ⟶ RΓ_(X)_K B_.
First, we note that we can rewrite the fundamental exact sequence (<ref>) of p-adic Hodge theory on the pro-étale site X_C, as a pullback square
_p ⟶ [1/t]^φ=1
↓           ↓
_^+ ⟶ _.
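For the reader's convenience, the pullback square above is equivalent to the usual form of the fundamental exact sequence of p-adic Hodge theory; in standard notation (our transcription of the sheaves appearing in the square, with 𝔹_dR^+ ⊂ 𝔹_dR the de Rham sheaves and t the usual period), it reads as follows.
```latex
% fundamental exact sequence of pro-etale sheaves, equivalent to the pullback square
\[
0 \longrightarrow \mathbb{Q}_p
\longrightarrow \mathbb{B}[1/t]^{\varphi=1} \oplus \mathbb{B}^{+}_{\mathrm{dR}}
\longrightarrow \mathbb{B}_{\mathrm{dR}}
\longrightarrow 0,
\qquad (x,y)\longmapsto x-y .
\]
```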
Then, the statement follows combining Theorem <ref>, <cit.> (together with Theorem <ref> for the singular case), as well as the compatibility proven in Theorem <ref>. In fact, by Theorem <ref>, using that X is qcqs, we have
RΓ_(X, [1/t]^φ=1)≃ RΓ_B(X)[1/t]^φ=1≃ (RΓ_(X_C)_F̆B_log[1/t])^N=0, φ=1
where in the last step we used that _F̆ commutes with filtered colimits.
We invite the reader to compare the following result with <cit.>.
Let X be a connected, paracompact, rigid-analytic variety defined over K. For any i≥ 0, we have a 𝒢_K-equivariant isomorphism in D(__p^)
τ^≤ iRΓ_(X_C, _p(i))≃τ^≤ i((RΓ_(X_C)_F̆B_log)^N=0, φ=p^i→ (RΓ_(X)_K B_^+)/^i).
This follows combining Theorem <ref>, Theorem <ref>, Theorem <ref> and Theorem <ref>.
In some special cases, we can explicitly compute the cohomology groups of the de Rham contribution in the fiber sequence of Theorem <ref>.
Let X be a rigid-analytic variety over K. Let i≥ 0, and denote
(X, i):=(RΓ_(X)_K B_^+)/^i.
* If X is proper, for any j≥ 0, we have a 𝒢_K-equivariant isomorphism in _K^
H^j((X, i))≅ (H^j_(X)⊗_K B_^+)/^i.
* If X is a smooth affinoid or Stein space, for j ≥ i we have H^j((X, i))=0, and for 0≤ j< i we have a 𝒢_K-equivariant exact sequence in _K^
0→ (Ω^j(X_C)/ d)(i-j-1) → H^j((X, i)) → H^j_(X)_K B_^+/t^i-j-1→ 0.
One can argue similarly to the proof of <cit.>. Part expic:1 follows from (the proof of) Proposition <ref>fillemma:2. For part expic:2, using Tate's acyclicity theorem for affinoid spaces, and Kiehl's acyclicity theorem for Stein spaces (which hold true in the condensed setting, <cit.>), and relying crucially on the flatness of the K-Fréchet space B_^+ (and its filtered pieces) for the solid tensor product _K (<cit.>), we have
(X, i)≃ [ O(X)_K B_^+/t^i→Ω^1(X)_K B_^+/t^i-1→⋯→Ω^i-1(X)_K B_^+/t]
from which one readily deduces the statement.
§.§ Proper spaces
In this subsection, we prove a version of the semistable conjecture for proper (possibly singular) rigid-analytic varieties over C. We remark that in the smooth case the following result is already known, <cit.>; however, already in the latter case our proof is different from loc. cit. as it does not rely on Fontaine–Messing syntomic cohomology.
Let X be a proper rigid-analytic variety over C. For each i≥ 0, we have a natural isomorphism
H^i_(X, _p)⊗__pB_log[1/t]≅ H^i_(X)⊗_F̆B_log[1/t]
compatible with the actions of the Frobenius φ and the monodromy N, and inducing a natural isomorphism
H^i_(X, _p)⊗__pB_≅ H^i_inf(X/B_^+)⊗_B_^+B_
compatible with filtrations.
In particular, we have a natural isomorphism
H^i_(X, _p)≅ (H^i_(X)⊗_F̆B_log[1/t])^φ=1, N=0∩^0(H^i_inf(X/B_^+)⊗_B_^+B_).
Here, the filtration on H^i_inf(X/B_^+) is defined by
^⋆ H^i_inf(X/B_^+):=(H^i(^⋆ RΓ_inf(X/B_^+))→ H^i_inf(X/B_^+))
where RΓ_inf(X/B_^+) is endowed with the Hodge filtration (Definition <ref>).
Let us fix i≥ 0. By the properness of X, using Scholze's primitive comparison theorem, and the finiteness of the _p-vector space H_^i(X, _p), <cit.>, we have a natural isomorphism of vector bundles on the Fargues–Fontaine curve
H^i_(X, _p)⊗__p O_≅ E(H_^i(X, _e), H_^i(X, _^+))
where the right-hand side of (<ref>) denotes the vector bundle on associated to the B-pair (H_^i(X, _e), H_^i(X, _^+)) with _e=[1/t]^φ=1.
We claim that (<ref>) is naturally isomorphic to the vector bundle on associated to the B-pair
(H_^i(X)⊗_F̆B_log[1/t])^N=0, φ=1, ^0(H^i(X/B_^+)⊗_B_^+B_)).
For this, using Theorem <ref>, and the perfectness of RΓ_(X) over F̆ proven in Theorem <ref>slopp:1 (combined with Proposition <ref>), we have a natural isomorphism
RΓ_(X, _e)≃ (RΓ_(X)⊗_F̆B_log[1/t])^N=0, φ=1.
Taking cohomology of (<ref>), by Lemma <ref> combined with Lemma <ref>, we have a natural isomorphism
H^i_(X, _e)≅ (H_^i(X)⊗_F̆B_log[1/t])^N=0, φ=1.
Moreover, by Theorem <ref> and Proposition <ref>, we have a natural isomorphism
H^i_(X, _^+)≅^0(H^i(X/B_^+)⊗_B_^+B_).
Hence, the desired claim follows combining the isomorphisms (<ref>) and (<ref>), and the compatibility shown in Theorem <ref>compatib2:1.
Now, we are ready to prove that we have a natural isomorphism (<ref>) as in the statement. From what we have shown above, applying H^0(, -) to (<ref>) we obtain (<ref>), from which we deduce that we have a natural B_log[1/t]-linear injective map
H^i_(X, _p)⊗__pB_log[1/t]→ H^i_(X)⊗_F̆B_log[1/t]
compatible with the actions of the Frobenius φ and the monodromy N. To conclude that (<ref>) is an isomorphism, we observe that
__pH^i_(X, _p)=_F̆H_^i(X).
For this, we note that __pH^i_(X, _p) is equal to the rank of the vector bundle (<ref>) on , and hence, from what we have shown above, it is equal to the rank of the vector bundle associated to the B-pair (<ref>); but the latter is a modification at ∞ of the vector bundle on associated to the finite (φ, N)-module H_^i(X) over F̆, whose rank is _F̆H_^i(X).
Lastly, we have that the isomorphism (<ref>) induces an isomorphism (<ref>) which is compatible with filtrations, recalling again Theorem <ref>, Theorem <ref>, and Proposition <ref>.
We used the following general results.
For any finite φ-module (V, φ) over F̆, the map
φ-1: V⊗_F̆B[1/t]→ V⊗_F̆B[1/t]
is surjective.
It suffices to show that for any sufficiently big integer m, the map
φ-1: V⊗_F̆t^-mB→ V⊗_F̆t^-mB
is surjective. For this, we consider E:= E(V, φ) the vector bundle on the Fargues–Fontaine curve associated to (V, φ), and, for any integer m, the vector bundle E(m):= E⊗ O(m) on . Note that we have
RΓ(, E(m)) =[H^0(Y_, E|_Y_)→ H^0(Y_, E|_Y_)]
=[V⊗_F̆B→ V⊗_F̆B].
Then, since for any integer m sufficiently big such that the vector bundle E(m) has non-negative Harder–Narasimhan slopes, one has H^1(, E(m))=0, we deduce that (recalling that φ(t)=pt) for any such m the map (<ref>) is surjective, as desired.
For any finite (φ, N)-module (V, φ, N) over F̆, we have a short exact sequence
0→ V⊗_F̆ B α→ V⊗_F̆ B_logN→ V⊗_F̆ B_log→ 0
where, recalling that B_log=B[U] (Definition <ref>), the morphism α is induced by the isomorphism of finite φ-modules over F̆
exp(N· U):V⊗_F̆ B ∼→ (V⊗_F̆ B_log)^N=0: x↦∑_j≥ 0(-1)^j/j!N^j(x)· U^j.
For any finite (φ, N)-module (V, φ, N) over F̆, the monodromy operator N has finite nilpotency index. If the nilpotency index is 1, i.e. N=0 on V, the statement follows from the exactness of the sequence (<ref>) for V=F̆. The statement in the general case follows by induction on such nilpotency index (cf. the proof of <cit.>).
§.§ Smooth Stein spaces
In this subsection, our goal is to prove the following result, Theorem <ref>. We remark that it could be deduced from Theorem <ref>, Proposition <ref>expic:2 and the theory of Banach–Colmez spaces, as done in <cit.>. However, we give here a more direct proof using the relative fundamental exact sequence of p-adic Hodge theory.
Let X be a smooth Stein space over C. For any i≥ 0, we have a commutative diagram in __p^ with exact rows
0 ⟶ Ω^i-1(X)/d ⟶ H^i_(X, _p(i)) ⟶ (H^i_(X)_F̆B_log)^N=0, φ=p^i ⟶ 0
0 ⟶ Ω^i-1(X)/d ⟶ Ω^i(X)^d=0 ⟶ H^i_(X) ⟶ 0
(the vertical map on Ω^i-1(X)/d is the identity, and the remaining vertical maps go from the first row to the second).
§.§.§ Recollections on Banach–Colmez spaces
As a preparation for the proof of Theorem <ref>, we need further reminders on the category of Banach–Colmez spaces BC (Definition <ref>).
Recall that we denote by
τ: _v→(C, O_C)_v
the natural morphism of v-sites.
We remark that the equivalence of derived categories (<ref>) in Proposition <ref> does not preserve the natural t-structures. In fact, Le Bras showed that, by suitably changing the t-structure on the source of (<ref>), one can pass to the category of BC spaces:
The category of Banach–Colmez spaces BC is equivalent, via the functor τ_*, to the full subcategory of () having as objects the perfect complexes F concentrated in cohomological degrees [-1, 0] such that H^-1( F) has negative Harder–Narasimhan slopes and H^0( F) has non-negative Harder–Narasimhan slopes.
In the following, for E a vector bundle on , we denote the BC spaces
H^0(, E):=τ_* E, H^1(, E):=R^1τ_* E.
Let us recall that Colmez defined a Dimension function on BC spaces (<cit.>)
=(, ): BC→×
where is called dimension and height.[We refer the reader to <cit.> for the relation between Colmez's original definition of Espace de Banach de dimension finie and Definition <ref>.] In terms of the Fargues–Fontaine curve, the function is characterized by the following properties:
* The function is additive in short exact sequences;
* For any vector bundle E on , denoting by ( E) the degree of E and by ( E) its rank,
χ( E):= H^0(, E)- H^1(, E)=(( E), ( E))
where χ( E) is the Euler-Poincaré characteristic of E (see e.g. <cit.>).
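As a basic example consistent with the two properties just listed (a standard computation, written here with FF denoting the Fargues–Fontaine curve and Dim the Dimension function above): for a line bundle O(d) with d ≥ 0 one has H^1(FF, O(d)) = 0, hence its H^0 realizes the full Euler–Poincaré characteristic; for instance H^0(FF, O) = Q_p has Dimension (0,1), i.e. dimension 0 and height 1.
```latex
% Dimension of the space of global sections of O(d), d >= 0, on the curve
\[
\mathbf{Dim}\,H^{0}(\mathrm{FF},\mathcal{O}(d)) \;=\; \chi(\mathcal{O}(d)) \;=\; (d,\,1),
\qquad d\ge 0 .
\]
```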
Let E be a vector bundle on . In the following, we will need to consider the cohomology groups of E on as condensed _p-vector spaces. For this, we denote by
f: _→(C, O_C)_
the natural morphism of (small) sites, and we define
H^0(, E):=f_* E, H^1(, E):=R^1f_* E.
Note that the condensed structure on such cohomology groups is just the “shadow” of the structure of BC spaces (<ref>). As an example, we have the identification H^0(, O(-1))=(_C^1)^/_p as BC spaces, which restricted to the site (C, O_C)_ gives the identification H^0(, O(-1))=C/_p as condensed _p-vector spaces.
We are ready to prove the main result of this subsection.
Let X^† be the smooth Stein dagger space associated with X (via <cit.>).
Choose {U_n^†}_n∈ a Stein covering of X^†, and denote by {U_n}_n∈ the corresponding Stein covering of X (i.e. set U_n:=U_n^†). Fix n∈, and let V^†:=U_n^†. Our first goal is to show that we have a diagram as in the statement replacing X with V^†.
Denoting by _h V_h the presentation of a dagger structure on V corresponding to V^† (recall Lemma <ref>dagglemma:1), for a sheaf on the big pro-étale site _C,, we set
RΓ_(V^†, ):=_h∈RΓ(V_h, ).
With this definition, by the relative fundamental exact sequence of p-adic Hodge theory (<ref>), we have the following commutative diagram with exact rows
⋯ ⟶ H_^i(V^†, _p) ⟶ H_^i(V^†, _e) ⟶^α_i H_^i(V^†, _/_^+) ⟶ ⋯
⋯ ⟶ H_^i(V^†, _^+) ⟶ H_^i(V^†, _) ⟶^β_i H_^i(V^†, _/_^+) ⟶ ⋯
(the vertical maps go from the first row to the second on the first two displayed terms, and are the identity on H_^i(V^†, _/_^+))
from which we obtain the following diagram with exact rows
0 ⟶ α_i-1 ⟶ H_^i(V^†, _p) ⟶ α_i ⟶ 0
0 ⟶ β_i-1 ⟶ H_^i(V^†, _^+) ⟶ β_i ⟶ 0
(with vertical maps from the first row to the second).
By <cit.>, we may assume that V^† is the base change to C of a smooth dagger affinoid V_0^† defined over a finite extension L of K. Then, using Theorem <ref> together with the perfectness of RΓ_(V^†) over F̆ (Theorem <ref>slopp:1), Lemma <ref> and Lemma <ref> in order to compute H_^i(V^†, _e), and relying on <cit.> to determine H_^i(V^†, _/_^+), recalling the compatibility proven in Theorem <ref>compatib:2, we have the following commutative diagram with exact rows
0 ⟶ (H^i_(V^†)⊗_F̆B_log[1/t])^N=0, φ=1 ⟶^∼ H_^i(V^†, _e) ⟶ 0 ⟶ 0
0 ⟶ H_^i(V_0^†)⊗_L B_/t^-iB_^+ ⟶ H_^i(V^†, _/_^+) ⟶ (Ω^i(V^†)/d)(-i-1) ⟶ 0
(the vertical maps from the first row to the second are γ_i, α_i, and 0, respectively).
We claim that
γ_i= (H^i_(V^†)⊗_F̆B_log)^N=0, φ=p^i and γ_i=0.
For this, we consider the vector bundle E(H_^i(V^†)) on the Fargues–Fontaine curve associated to the finite (φ, N)-module H_^i(V^†) over F̆. By Theorem <ref>slopp:2, the vector bundle E= E(H_^i(V^†))⊗ O(i) has non-negative Harder–Narasimhan slopes, in particular H^1(, E)=0, and we have the short exact sequence
0→ H^0(, E)→ H^0(∖{∞}, E)→ E^∧_∞[1/t]/ E^∧_∞→ 0.
Note that such short exact sequence identifies with the short exact sequence
0→ (H^i_(V^†)⊗_F̆B_log)^N=0, φ=p^i→ (H^i_(V^†)⊗_F̆B_log[1/t])^N=0, φ=1γ_i→ H_^i(V_0^†)⊗_L B_/t^-iB_^+→ 0
thus proving the claim (<ref>).
Then, twisting by (i) the diagram (<ref>), putting everything together we deduce that, for each n∈, we have a commutative diagram with exact rows
0 ⟶ Ω^i-1(U_n^†)/ d ⟶ H^i_(U_n^†, _p(i)) ⟶ (H^i_(U_n^†)_F̆B_log)^N=0, φ=p^i ⟶ 0
0 ⟶ Ω^i-1(U_n^†)/ d ⟶ Ω^i(U_n^†)^d=0 ⟶ H^i_(U_n^†) ⟶ 0
(the vertical map on Ω^i-1(U_n^†)/ d is the identity, and the remaining vertical maps go from the first row to the second).
Now, since we have
RΓ_(X, _p(i))=RΓ_(X^†, _p(i))=R_nRΓ_(U_n^†, _p(i))
and similarly
RΓ_(X)=R_nRΓ_(U_n^†), RΓ_(X)=R_nRΓ_(U_n^†)
(see Proposition <ref> and cf. <cit.>), recalling the property <cit.> of the solid tensor product, the statement follows taking the inverse limit of (<ref>) over n∈, observing the following R^1 vanishing statements.
* Using that {U_n}_n∈ is a Stein covering of X, by the condensed version of the Mittag-Leffler criterion for Banach spaces, <cit.>, we have that
R^1_n Ω^i(U_n^†)=R^1_n Ω^i(U_n)=0.
* Since H_^i(U_n^†), for varying n∈, are finite-dimensional condensed C-vector spaces, we have that
R^1_n H_^i(U_n^†)=0.
* Lastly, we have
R^1_n (H^i_(U_n^†)⊗ _F̆B_log)^N=0, φ=p^i=0.
The claim (<ref>) is essentially contained in the proof of <cit.> (which we rephrase here in slightly different terms). By the Mittag-Leffler criterion for condensed abelian groups,[It follows from the Mittag-Leffler criterion for abelian groups, <cit.>, applying the latter to the values on extremally disconnected sets, and using <cit.>.] it suffices to show that, in the inverse system {(H^i_(U_n^†)⊗ _F̆B_log)^N=0, φ=p^i, f_nm}, for each n∈ there exists k≥ n such that, for every m≥ k, the image of the map f_nm is equal to the image of f_nk. For this, recall that, for E_n= E(H_^i(U_n^†))⊗ O(i), we have H^0(, E_n)≅ (H^i_(U_n^†)⊗ _F̆B)^φ=p^i (trivializing the monodromy), and H^1(, E_n)=0. Considering the Euler-Poincaré characteristic of E_n, by (<ref>) we have
H^0(, E_n)=(( E_n), ( E_n))
from which we deduce that the BC space H^0(, E_n) has height ≥ 0; moreover, by the characterization of BC spaces given in Proposition <ref>, any BC subspace of H^0(, E_n) has height ≥ 0. Thus, in the inverse system { H^0(, E_n), f_nm}, for each n∈ the images of the maps f_nm form a chain of BC spaces with decreasing Dimension (for the lexicographic order on ×) and height ≥ 0; in particular, such a chain eventually stabilizes, as desired.
As a consequence of Theorem <ref> we have the following result.
Let X be a smooth Stein space over C. Then, we have
H^i_(X, _p)=0
for all i> X.
Using Theorem <ref>, it suffices to prove that H^i_(X)=0 for all i> X. Thus, by Corollary <ref>, we can reduce to showing that H^i_(X)=0 for all i> X, which holds true by Kiehl's acyclicity theorem for Stein spaces (cf. <cit.> for the statement in the condensed setting).
§.§ Smooth affinoid spaces
In view of Theorem <ref> and Proposition <ref>expic:2, it is natural to formulate the following conjecture.
Let X be a smooth affinoid rigid space over C. For any i≥ 0, we have a commutative diagram in __p^ with exact rows
0 ⟶ Ω^i-1(X)/d ⟶ H^i_(X, _p(i)) ⟶ (H^i_(X)_F̆B_log)^N=0, φ=p^i ⟶ 0
0 ⟶ Ω^i-1(X)/d ⟶ Ω^i(X)^d=0 ⟶ H^i_(X) ⟶ 0
(the vertical map on Ω^i-1(X)/d is the identity, and the remaining vertical maps go from the first row to the second).
The goal of this subsection is to show Conjecture <ref> for curves.
Conjecture <ref> holds true for X a smooth affinoid rigid space over C of dimension 1.
By <cit.>, we can assume that X is the base change to C of a smooth affinoid X_0 defined over a finite extension of K, and, without loss of generality, we can further assume that X_0 is defined over K.
We recall that by Theorem <ref>BK=pet:2 combined with Theorem <ref>, Theorem <ref> and Theorem <ref>, we have, for any i≥ 0, the following commutative diagram whose rows are exact triangles
RΓ_, (X, _p(i)) ⟶ (RΓ_(X_C)_F̆B_log)^N=0, φ=p^i ⟶ (RΓ_(X)_K B_^+)/^i
^i(RΓ_(X)_K B_^+) ⟶ RΓ_(X)_K B_^+ ⟶ (RΓ_(X)_K B_^+)/^i
(the vertical maps go from the first row to the second on the first two terms, and are the identity on (RΓ_(X)_K B_^+)/^i)
and, by Theorem <ref>BK=pet:1, we have an isomorphism τ^≤ iRΓ_, (X, _p(i))∼→τ^≤ iRΓ_(X, _p(i)).
Moreover, by Tate's acyclicity theorem, the bottom exact triangle of the diagram above maps, via Fontaine's morphism θ: B_^+→ C, to the following exact triangle
Ω^≥i(X)[-i] ⟶ Ω^∙(X) ⟶ Ω^≤i-1(X).
Then, reducing to the case X is connected, the statement for i=0 follows immediately using that B_log^N=0, φ=1=_p.
For the statement in the case i=1, taking cohomology of the diagram above and using Proposition <ref>expic:2, it remains to check that
H^1((RΓ_(X)_F̆B_log)^N=0, φ=p)≅ (H^1_(X)_F̆B_log)^N=0, φ=p.
For this, we note that, since B_log is a flat solid F̆-vector space (by <cit.>, as it is a filtered colimit of F̆-Fréchet spaces), we have that H^1(RΓ_(X)_F̆B_log)≅ H^1_(X)_F̆B_log. Then, the isomorphism (<ref>) follows from the exactness of the sequences
0→ B→ B_logN→B_log→ 0 0→ B^φ=p→ Bφ-p→B→ 0.
Lastly, the statement for i>1 follows from Lemma <ref> below (which implies the vanishing of H^i_(X, _p(i)) as X is qcqs), together with the fact that, in this case, H^i_(X) vanishes, and, by Corollary <ref>, H^i_(X) vanishes as well.
We used crucially the following result.
Let X be a smooth affinoid rigid space over C. Then, we have
H^i_(X, _p)=0
for all i> X.
Fix an integer i> X. By a result of Bhatt–Mathew, <cit.>, for any n≥ 1, we have the vanishing H^i_(X, /p^n)=0.[The cited result does not keep track of the condensed structure on H^i_(X, /p^n), however doing this does not pose any problem.] Thus, using the exact sequence
0→ R^1_n H_^i-1(X, /p^n)→ H_^i(X, _p)→_n H_^i(X, /p^n)→ 0
it remains to show that
R^1_n H^i-1_(X, /p^n)=0.
Considering the long exact sequence associated to the short exact sequence of sheaves on X_
0→/p→/p^n+1→/p^n→ 0
and using again that H^i_(X, /p)=0, we deduce that the transition maps of the inverse system {H^i-1_(X, /p^n)}_n are surjective; then, we conclude by the Mittag-Leffler criterion.
Now, let us at least indicate a possible direction for proving Conjecture <ref> in dimension higher than 1.
Let us assume for simplicity that X is an affinoid rigid space over C having a smooth formal model over O_C. In this case, by Theorem <ref>mainHK:1 we have in particular that the monodromy action N on the Hyodo-Kato cohomology of X is trivial. We claim that to prove Conjecture <ref> it suffices to show that
H^1(, H_^n(X)⊗ O(m))=0 for all 0≤ n≤ m
where H_^n(X) denotes the n-th Fargues–Fontaine cohomology group of X (Definition <ref>). We note that, by Proposition <ref> (applied to H_^n(X)⊗ O(m)∈() concentrated in degree 0) and (the proof of) Theorem <ref>, the condition (<ref>) is equivalent to the following one[Here, we recall <cit.> and we note that we have Rlim_I⊂ (0, ∞) H^n_(X)_F̆B_I≃ H^n_(X)_F̆B by <cit.>, using that H^n_(X) is a quotient of F̆-Banach spaces.]
φ-p^m: H^n_(X)_F̆B → H^n_(X)_F̆B is surjective for all 0≤ n≤ m.
Before proving the claim, we pause to remark that (<ref>) holds replacing X with a dagger structure X^† on X (Remark <ref>): in fact, thanks to Theorem <ref>lb:1 and Theorem <ref>, the vector bundle H_^n(X^†) on has Harder–Narasimhan slopes ≥ -n. Recalling Theorem <ref>mainHK:1, this suggests the following question: can one prove (<ref>) using that the Frobenius on the n-th crystalline cohomology group has an inverse up to p^n, <cit.>?
Now, to prove that the condition (<ref>) implies Conjecture <ref>, as in the proof of Theorem <ref>, by <cit.>, we can assume that X is the base change to C of a smooth affinoid X_0 defined over a finite extension of K, and, without loss of generality, we can further assume that X_0 is defined over K.
Then, again as in the proof of Theorem <ref>, combining Theorem <ref> with Proposition <ref>expic:2, for all i≥ 0 we obtain the following commutative diagram in __p^ with exact rows
(H^i-1_(X)_F̆B)^φ=p^i ⟶ Ω^i-1(X)/dΩ^i-2(X) ⟶ H^i_(X, _p(i)) ⟶^α_i (H^i_(X)_F̆B)^φ=p^i ⟶ 0
0→H^i-1_(X) ⟶ Ω^i-1(X)/dΩ^i-2(X) ⟶ Ω^i(X)^d=0 ⟶ H^i_(X) ⟶ 0
(the vertical maps are γ_i on the first term, the identity on Ω^i-1(X)/dΩ^i-2(X), and downward maps from the first row to the second on the remaining terms).
Here, we used that, for any 0≤ j≤ i, we have
H^j((RΓ_(X)_F̆B)^φ=p^i)≅ (H^j_(X)_F̆B)^φ=p^i.
In fact, since B is a flat solid F̆-vector space (by <cit.>), we have an exact sequence
0→ (H^j-1_(X)_F̆B)/(φ-p^i)→ H^j((RΓ_(X)_F̆B)^φ=p^i)→ (H^j_(X)_F̆B)^φ=p^i→ 0
and the left term vanishes thanks to (<ref>).
Now, it remains to show that we have
α_i ≅Ω^i-1(X)/ d.
From the diagram above, we obtain the following commutative diagram with exact rows
(H^i-1_(X)_F̆B)^φ=p^i ⟶ Ω^i-1(X)/dΩ^i-2(X) ⟶ α_i ⟶ 0
0→ H^i-1_(X) ⟶ Ω^i-1(X)/dΩ^i-2(X) ⟶ Ω^i-1(X)/ d ⟶ 0
(the vertical maps are γ_i on the first term, the identity on Ω^i-1(X)/dΩ^i-2(X), and a downward map on α_i).
Then, using the snake lemma, we deduce that we need to show that the map γ_i is surjective. For this, by <cit.> we have the following exact triangle in ()
H_^i-1(X)⊗ O(i-1)· t→ H_^i-1(X)⊗ O(i)→ι_∞, *(H_^i-1(X))
where ι_∞, *:((C))→() denotes the pushforward functor. Here, we used Theorem <ref>mainHK:3 and the flatness of C for the solid tensor product _F̆ (<cit.>). Taking the long exact sequence in cohomology associated to (<ref>) on , recalling Proposition <ref>, from (<ref>) we deduce that γ_i is surjective, as desired.
§.§ Remarks about coefficients
In this subsection, we indicate a partial extension of Theorem <ref> to coefficients. For simplicity, we restrict ourselves to smooth rigid-analytic varieties over C.
Let X be a smooth rigid-analytic variety over C. Denote by ν:X_→ X_, the natural morphism of sites. For a -local system on X_, i.e. a sheaf of -modules that is locally on X_ free of finite rank, we define the B-cohomology of X with coefficients in as the complex of D(^_B)
RΓ_B(X, ):=RΓ_, (X, Lη_tRν_*).
and we endow it with the filtration décalée.
With the definition above, given a _p-local system on X_ with associated -local system =⊗__p, tensoring with ⊗__p- the exact sequence (<ref>), the same argument used in the proof of Theorem <ref>BK=pet:1 shows that we have a natural isomorphism in D(__p^)
τ^≤ i^iRΓ_B(X, )^φ=p^i∼→τ^≤ iRΓ_(X, (i)).
Similarly, we have a version with coefficients of Theorem <ref>BK=pet:2. Then, combining the same results cited in the proof of Theorem <ref> (in the smooth case), which rely on Scholze’s Poincaré lemma for _^+, we obtain the following result.
Let X be a connected, paracompact, smooth rigid-analytic variety defined over K. Let be a de Rham _p-local system on X_, with associated -local system =⊗__p, and associated filtered O_X-module with integrable connection ( E, ∇, ^∙). For any i≥ 0, we have a 𝒢_K-equivariant isomorphism in D(__p^)
τ^≤ iRΓ_(X_C, (i))≃τ^≤ i(RΓ_B(X_C, )^φ=p^i→ (RΓ_(X, E)_K B_^+)/^i).
§ COMPLEMENTS ON CONDENSED MATHEMATICS
This appendix consists of a miscellaneous collection of results on condensed mathematics that we use in the main body of the paper.
In this appendix, we adopt the same notation and set-theoretic conventions of <cit.>. All condensed rings will be assumed to be commutative and unital.
§.§ Derived p-adic completion and solidification
In this section, we compare the derived p-adic completion to the solidification.
Let p be a prime number. In the following, for M∈ D(), we denote by
M^∧_p:=Rlim_n∈ (M⊗_^/p^n)∈ D()
its derived p-adic completion. We say that an object M∈ D() is derived p-adically complete if the natural map
M→ M^∧_p
is an isomorphism of D().
For A a solid ring, we write _A^ for the symmetric monoidal category of A-modules in , endowed with the solid tensor product _A.
Let A be a derived p-adically complete solid ring. For all cohomologically bounded above complexes M, N∈ D(_A^), we have a natural isomorphism
M^∧_p_A N^∧_p∼⟶ (M_A N)^∧_p.
First, we recall that the category _A^ is generated under colimits by the compact projective objects (∏_I)_A=(∏_I_p)__pA, for varying sets I. By hypothesis, we may assume that M and N are connective; then, writing M=colim_[n]∈Δ^ M_n in D(_A^) with M_n a direct sum of objects of the form (∏_I_p)__pA, the natural map
colim_[n]∈Δ^(M_n)^∧_p→ M^∧_p
is an isomorphism (as it can be checked via a spectral sequence), and similarly for N.
Therefore, using that the solid tensor product commutes with colimits, it suffices to prove the statement for M=M'__pA and N=N'__pA (concentrated in degree 0), with M' and N' objects of __p^ of the form ⊕_j∈ J (∏_I_j_p) for varying sets J and I_j, for j∈ J.
In the case A=_p, using Lemma <ref> the statement readily reduces (cf. the proof of <cit.>) to the isomorphism
∏_I_p __p∏_I'_p=∏_I× I'_p
which holds for any sets I and I' (see e.g. <cit.>).
In general, we want to show that we can reduce to the case A=_p. We will use that ∏_I_p is flat for the tensor product __p, i.e. for any Q∈__p^ we have that ∏_I_p__pQ is concentrated in degree 0: for this, writing Q as a filtered colimit of quotients of objects of the form ∏_J _p, we can reduce to the case Q is derived p-adically complete; in this case, using (<ref>) for A=_p, by the derived Nakayama lemma, we can reduce to checking the claim modulo p, in which case it follows from <cit.>, using that any solid _p-module can be written as a filtered colimit of profinite _p-vector spaces, <cit.>. Then, from (<ref>) for A=_p, using that A is derived p-adically complete, we deduce that
M^∧_p≅ (M'__pA)^∧_p≅ (M')^∧_p__pA
and similarly for N. Hence, we have a natural isomorphism
M^∧_p_A N^∧_p ≅ (M')^∧_p__p (N')^∧_p__p A≅ (M'__p N'__pA)^∧_p≅ (M_A N)^∧_p
where we used again (<ref>) for A=_p.
We used crucially the following result.
For c∈ [0, 1] we denote (_p)_≤ c:={x∈_p: |x|≤ c}. For any sets J and I_j, for j∈ J, we have
(⊕_j∈ J∏_I_j_p)^∧_p=colim_f:J→ [0, 1], f→ 0 ∏_j∈ J∏_I_j(_p)_≤ f(j).
where the colimit runs over the functions f:J→ [0, 1] tending to 0 (i.e. for every ε >0, the set {j∈ J: |f(j)|≥ε} is finite) partially ordered by the relation of pointwise inequality f≤ g.
We will adapt the proof of <cit.>. It suffices to prove (<ref>) on S-valued points, for all extremally disconnected set S. Let M:=⊕_j∈ J∏_I_j_p. We note that, thanks to the flatness of the condensed abelian group M (which follows from Lemma <ref>), the derived p-adic completion of M agrees with its underived p-adic completion _n∈M/p^n. Then, we have
M^∧_p(S)=lim_n∈⊕_j∈ J𝒞^0(S, ∏_I_j_p/p^n)
from which we deduce that
M^∧_p(S)= {(g_j)_j∈ J with g_j∈𝒞^0(S,∏_I_j_p): ∀ε>0, g_j(S)⊆∏_I_j(_p)_≤ε for all but finitely many g_j}
which, in turn, identifies with
colim_f:J→ [0, 1], f→ 0(∏_j∈ J∏_I_j𝒞^0(S, (_p)_≤ f(j)))
thus showing (<ref>).
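As a simple consistency check of the lemma, take J = ℕ and each I_j a one-point set (an illustrative special case of ours): the formula then recovers the familiar description of the p-adic completion of an infinite direct sum as the module of null sequences.
```latex
% special case J = N, each I_j a point: null sequences in Z_p
\[
\Big(\bigoplus_{n\in\mathbb{N}}\mathbb{Z}_p\Big)^{\wedge}_{p}
\;\cong\;
\Big\{\,(x_n)_{n}\in\prod_{n\in\mathbb{N}}\mathbb{Z}_p \;:\; |x_n|_p\to 0\,\Big\}
\;=\;
\varinjlim_{f:\mathbb{N}\to[0,1],\;f\to 0}\ \prod_{n\in\mathbb{N}}(\mathbb{Z}_p)_{\le f(n)} .
\]
```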
The following lemma was used above and in the main body of the paper.
A condensed abelian group M∈ is flat if and only if, for all extremally disconnected sets S, the abelian group M(S) is torsion-free.
§.§ ∞-category of nuclear complexes
In this section, we collect some general properties and characterizations of the ∞-category of nuclear complexes attached to an analytic ring, which are due to Clausen–Scholze, focusing in particular on a special class of analytic rings relevant to the main body of the paper. This section should be read in conjunction with <cit.>.
We recall that an analytic ring (A, M) (in the sense of <cit.>) is commutative if A is commutative (and unital), and normalized if the map A→ M[*] is an isomorphism. All analytic rings will be assumed to be commutative and normalized.
For an analytic ring (A, M), we denote by D(A) the derived ∞-category D(_A^), and we denote by D(A, M) the derived ∞-category of (A, M)-complete complexes in D(A), equipped with the symmetric monoidal tensor product ⊗_(A, M)^. For M∈ D(A, M) we write
M^∨:=_D(A, M)(M, A)
for its dual.
[<cit.>]
Let (A, M) be an analytic ring. A complex N∈ D(A, M) is nuclear if, for all S∈, the natural map
( M[S]^∨⊗_(A, M)^N)(*)→ N(S)
in D() is an isomorphism. We denote by (A, M) the full ∞-subcategory of D(A, M) spanned by the nuclear complexes.
We note that the ∞-subcategory (A, M)⊂ D(A, M) is stable under all colimits (as both the source and the target of (<ref>) commute with colimits in N), and under finite limits (as ⊗_( A, M)^ commutes with finite limits).
In order to recall a useful characterization of nuclear complexes, we need the following definitions.
[<cit.>]
Let (A, M) be an analytic ring.
A map f:M→ N in D(A, M) is called trace-class if it lies in the image of the natural map
(M^∨⊗_(A, M)^N)(*)→_D(A, M)(M, N).
[<cit.>]
Let (A, M) be an analytic ring. An object M∈ D(A, M) is called basic nuclear if it can be written as the colimit of a diagram
P_0f_0→P_1f_1→ P_2f_2→⋯
where P_n∈ D(A, M) are compact objects and f_n are trace-class maps.
Let (A, M) be an analytic ring. An object in D(A, M) is nuclear if and only if it can be written as a filtered colimit of basic nuclear objects.
We deduce the following result.
Let f:(A, M)→ (A, N) be a morphism of analytic rings. The base change functor
-⊗_(A, M)^(B, N):D(A, M)→ D(B, N)
preserves nuclear objects.
It suffices to apply Proposition <ref> observing that the base change functor preserves compact objects and trace-class maps.
To further study the ∞-category of nuclear complexes, we will use the following the definition.
[<cit.>]
Let (A, M) be an analytic ring. We define the trace-class functor
(-)^:D(A, M)→ D(A): M ↦ M^
where M^ is defined on S-valued points,[Via the equivalence of ∞-categories (D())≅ D().] for S∈, as
M^(S)=( M[S]^∨⊗_(A, M)^M)(*).
Now, we collect some basic properties of the trace-class functor.
Let (A, M) be an analytic ring.
* The trace-class functor (-)^:D(A, M)→ D(A) takes values in D(A, M).
* A map f:P→ M in D(A, M) with M compact object is trace-class if and only if it factors through M^.
* An object M∈ D(A, M) is nuclear if and only if the natural map M^→ M is an isomorphism.
For part incc:02, one can adapt the argument of <cit.>. For part incc:12, as in the proof of <cit.>, the case P= M[S], with S an extremally disconnected set, is clear; for a general compact object P of D(A, M), we can reduce to the previous case by writing P as a retract of a finite complex whose terms are objects of the form M[S], with S an extremally disconnected set (<cit.>). Part (<ref>) follows immediately from the definitions.
Let (A, M) be an analytic ring. For any M∈ D(A, M), the object M^ (and in particular any nuclear object in D(A, M))[Here, we use Lemma <ref>incc:22.] can be written as a colimit of shifts of objects of the form M[S]^∨ for S extremally disconnected sets.
Recalling that D(A, M) is generated, under shifts and colimits, by M[T] for varying extremally disconnected sets T, we can apply the same argument of <cit.>.
Next, we focus on the categorical properties of the ∞-category of nuclear complexes for the following special class of analytic rings used in the main body of the paper.
Let F be a non-archimedean local field, and let A be a solid F-algebra.
For the analytic ring (A, )_=(A, M_A) from <cit.> (i.e. the analytic ring structure on A induced from the analytic ring _), we denote by _A:=((A, )_)) the full ∞-subcategory of nuclear complexes of _A:=D((A, )_)), and we write _A for the symmetric monoidal tensor product ⊗_(A, )_.
Let F be a non-archimedean local field, and let A be a nuclear solid F-algebra.[Given a solid F-algebra A we say that it is nuclear if the underlying solid F-module is nuclear in the sense of <cit.> with respect to the analytic ring (F, )_ (as we will see in the proof below, this is equivalent to requiring that the complex A[0]∈_F is nuclear in the sense of Definition <ref>). For example, any Fréchet F-algebra is a nuclear F-algebra by <cit.>. ]
* The subcategory _A⊂_A is a stable ∞-category, closed under the tensor product _A, finite limits, countable products, and all colimits.
* The ∞-category _A is generated, under shifts and colimits, by the objects _A(A[S], A), for varying S profinite sets.
* An object M∈_A lies in _A if and only if H^i(M)[0] lies in _A for all i.
First we note that, by <cit.>, we have M_A[S]^∨≅_A(A[S], A), for all profinite sets S. Moreover, as the latter objects are flat for the tensor product _A by <cit.>, a complex N[0]∈_A concentrated in degree 0 is nuclear, in the sense of Definition <ref>, if and only if the solid A-module N is nuclear, in the sense of <cit.>.
Thanks to the above observations, part nuclearbanach:22 follows from Proposition <ref> and <cit.>.
For part nuclearbanach:12, the closure of _A⊂_A under finite limits and all colimits was observed more generally in Remark <ref>; for the closure under the tensor product _A and countable products, taking K-flat resolutions in _A (which exist thanks to part nuclearbanach:22), we can reduce to the statement of <cit.>.
For part nuclearbanach:32, if M∈_A lies in _A, then H^i(M)[0] lies in _A for all i, using again that the objects M_A[S]^∨, for varying profinite sets S, are flat for the tensor product _A. Conversely, passing to the Postnikov limit, M≃_nτ^≥ -nM, by part nuclearbanach:12, we can suppose that M is cohomologically bounded below, and using M≃_nτ^≤ nM, we can even suppose that M is bounded, and then concentrated in one degree, in which case the implication is clear.
Let A be a nuclear solid F-algebra and let M∈_A. We have that M lies in _A if and only if M (regarded in _F) lies in _F. In fact, by <cit.>, for all extremally disconnected sets S, we have a natural isomorphism M_A[S]^∨≅ M_F[S]^∨_F A.
§.§ Quasi-coherent, nuclear, and perfect complexes on analytic adic spaces
In this section, we recall some results of Andreychev on quasi-coherent, nuclear, and perfect complexes on analytic adic spaces, that we need in the main body of the paper.
Given a pair (A, A^+) with A a complete Huber ring and A^+ a subring of A^∘, we denote by (A, A^+)_ the associated analytic ring, <cit.>.
Let Y an analytic adic space.
The association taking any affinoid subspace U=(A, A^+)⊂ Y to the ∞-category D((A, A^+)_) defines a sheaf of ∞-categories on Y. We define the ∞-category of quasi-coherent complexes on Y as the global sections of such sheaf, and we denote it by (Y).
Next, we recall that also nuclear objects satisfy analytic descent on analytic adic spaces.
Let Y an analytic adic space.
The association taking any affinoid subspace U=(A, A^+)⊂ Y to the ∞-category ((A, A^+)_) defines a sheaf of ∞-categories on Y. We define the ∞-category of nuclear complexes on Y as the global sections of such sheaf, and we denote it by (Y).
It turns out that the ∞-category of nuclear objects associated to an analytic complete Huber pair does not depend on the ring of definition. More precisely, we have the following result.
Let (A, A^+) a pair with A an analytic complete Huber ring and A^+ a subring of A^∘. The ∞-category ((A, A^+)_) is generated, under shifts and colimits, by the objects _A(A[S], A) for varying S profinite sets.
In what follows, given a condensed ring R, we denote by _R⊂ D(_R^) the ∞-subcategory of perfect complexes over R, <cit.>. Andreychev showed that, passing to dualizable objects, Theorem <ref> implies the following result.
Let Y an analytic adic space.
The association taking any affinoid subspace U=(A, A^+)⊂ Y to the ∞-category _A defines a sheaf of ∞-categories on Y. We define the ∞-category of perfect complexes on Y as the global sections of such sheaf, and we denote it by (Y).
|
http://arxiv.org/abs/2306.11133v1
|
20230619193645
|
Chiral active matter in external potentials
|
[
"Lorenzo Caprini",
"Hartmut Löwen",
"Umberto Marini Bettolo Marconi"
] |
cond-mat.soft
|
[
"cond-mat.soft",
"cond-mat.stat-mech"
] |
[email protected], [email protected]
Heinrich-Heine-Universität Düsseldorf, Institut für Theoretische Physik II - Soft Matter,
D-40225 Düsseldorf, Germany
[email protected]
Heinrich-Heine-Universität Düsseldorf, Institut für Theoretische Physik II - Soft Matter,
D-40225 Düsseldorf, Germany
Scuola di Scienze e Tecnologie, Università di Camerino - via Madonna delle Carceri, 62032, Camerino, Italy and
INFN Sezione di Perugia, I-06123 Perugia, Italy
We investigate the interplay between chirality and confinement induced by the presence of an external potential.
For potentials having radial symmetry, the circular character of the trajectories induced by the chiral motion reduces the spatial fluctuations of the particle, thus providing an extra effective confining mechanism, which can be interpreted as a lowering of the effective temperature.
In the case of non-radial potentials, for instance with an elliptic shape, chirality displays a richer scenario.
Indeed, the chirality can break the parity symmetry of the potential, which is always fulfilled in the non-chiral system.
The probability distribution displays a strong non-Maxwell-Boltzmann shape, which emerges in cross-correlations between the two Cartesian components of the position; these correlations vanish in the absence of chirality or when the radial symmetry of the potential is restored. These results are obtained by considering two popular models in active matter, i.e. chiral active Brownian particles and chiral active Ornstein-Uhlenbeck particles.
Chiral active matter in external potentials
Umberto Marini Bettolo Marconi
Jun 18, 2023
===========================================
§ INTRODUCTION
Active matter, encompassing a wide range of self-propelled entities, has emerged as a fascinating field of study in soft matter and non-equilibrium statistical physics <cit.>.
Typical active systems are artificial particles, such as active colloids, active granular particles, and drones, but also living systems with biological origins, such as bacteria, sperms, and several animals.
These systems usually self-propel by virtue of internal mechanisms that convert energy to produce a net motion, through chemical reactions, cilia, flagella, and internal motors, to mention a few examples.
In several cases, the self-propelled motion is characterized by an almost straight path and a fluctuating orientation that changes stochastically without a preferential direction. This motion is induced by the breaking of the translational symmetry at the single-particle level in the body or in the swimming and running mechanism that induces a net polarity in the particle.
The physical or biological systems displaying this motion are classified as linear particles or swimmers.
This is the standard scenario for several bacteria, such as E. Coli, active colloids, such as Janus particles, or polar active granular particles.
However, in nature, several active systems show trajectories systematically rotating clockwise or counterclockwise, the so-called chiral or circular self-propelled particles <cit.>.
The concept of chirality or handedness was introduced by Lord Kelvin more than one century
ago in reference to the circular (helical) motion produced by solid bodies with asymmetric shapes in two (three)
dimensions. Nowadays, chirality has been renewed in the field of active matter <cit.>, being observed for instance in proteins <cit.>, bacteria <cit.> and sperms <cit.> moving on a two-dimensional planar substrate, and L-shape artificial microswimmers <cit.>. In addition, even spherical (non-chiral) particles can show circular (chiral) trajectories due to asymmetry in their self-propulsion mechanism, as occurs in colloidal propellers in a magnetic or electrical field <cit.>, and cholesteric droplets <cit.>. In addition, granular systems such as spinners <cit.> and Hexbug particles driven by light <cit.> usually display chiral motion.
Being ubiquitous in nature, the interest in chiral active matter is recently showing exponential growth in time, in different contexts ranging from the statistical properties of single-particles to collective phenomena displayed by interacting systems.
Through the introduction of simple models, the single-particle chiral active motion has been explicitly explored <cit.> with a focus on the mean-square displacement <cit.>, in a viscoelastic medium <cit.>, in the presence of pillars <cit.> or sinusoidal channels <cit.>.
In channel geometries, chirality is also responsible for the reduction of the accumulation near boundaries typical of active systems and for the formation of surface currents <cit.>.
In the case of interacting systems, chirality is able to suppress the clustering typical of active particles <cit.> but induces novel phenomena, such as emergent vortices induced by the chirality <cit.> or a global traveling wave in the presence of a chemotactic alignment <cit.>.
Chiral active particles exhibit fascinating phenomena also in the presence of alignment interactions giving rise to pattern formation <cit.> consisting of rotating macro-droplets <cit.>, chiral self-recognition <cit.>, dynamical frustration <cit.>, and chimera states <cit.>.
In addition, chirality appears as a fundamental ingredient to observe the hyper-uniform phase <cit.> in active matter as well as emerging odd properties <cit.> for instance in the viscosity <cit.>, elasticity <cit.>, and mobility <cit.>.
Recently, the circular motion has been also investigated in the framework of active glasses where it gives rise to a novel oscillatory caging effect entirely due to the chirality <cit.>.
Chirality could play a fundamental role in several applications due to their emerging properties, such as sorting <cit.> and synchronization <cit.>.
For instance, chiral microswimmers can be sorted according to their swimming properties by employing patterned microchannels with a specific chirality <cit.>.
Chirality is also at the basis of the ratcheting mechanism observed in an array of obstacles <cit.> even leading to translation at fixed angles with respect to the substrate periodicity due to a periodic potential <cit.>.
Moreover, binary mixtures of passive and active chiral particles, as well as mixtures of chiral particles with opposite chiralities show demixing <cit.>.
Spontaneous demixing has been also observed experimentally in a system of active granular particles, the so-called spinners that are self-propelled because of the asymmetry of internal components of their bodies <cit.>.
Despite the recent attention on chiral active matter, the interplay between chirality and external confinement due to an external potential has been less investigated <cit.> to the best of our knowledge.
Here, we focus on active chiral particles in a radial (circular) and non-radial (elliptic) potential, exploiting the influence of circular motion on the properties of the system.
In particular, we perform a numerical and analytical study based on two popular models in active matter, i.e. the chiral active Brownian particles and chiral active Ornstein-Uhlenbeck particles.
We anticipate that, for a radial potential, the chirality only induces a stronger confinement of the particle's dynamics, effectively reducing the fluctuations of the system and, thus, its effective temperature (Fig. <ref> (a)).
In contrast, in the case of a non-radial potential, the chirality is able to break the parity symmetry of an elliptic potential. This is reflected, for instance, in the occurrence of strong correlations between different spatial components of the system (Fig. <ref> (b)). This effect is uniquely based on the interplay between chirality and the spatial asymmetry of the potential.
The paper is structured as follows:
in Sec. <ref>, we introduce and discuss the models, i.e. chiral active Brownian particles and chiral active Ornstein-Uhlenbeck particles, employed to perform the numerical and analytical study.
The dynamics in the radial and non-radial potentials are analyzed in Sec. <ref> and Sec. <ref>, respectively.
We summarize the results and report a conclusive discussion in the final section <ref>.
Finally, for the sake of completeness, and to keep the presentation lighter, we report in an appendix the derivation of the Fokker-Planck equation governing the evolution of the probability distribution function of the chiral active model, together with a pair of simple illustrative cases.
§ MODEL
Active particles in the overdamped regime are described by the following dynamics for the particle position 𝐱:
γ𝐱̇= 𝐅(𝐱)+ γ√(2 D_t)𝐰 + γ v_0𝐧 ,
where 𝐰 is a Brownian white noise with unit variance and zero average accounting for the random collisions with the particle of the solvent.
The coefficient γ is the friction coefficient due to the solvent, while D_t is the translational diffusion coefficient of the system.
The term 𝐅(𝐱) is the external force due to a potential U(𝐱), such that 𝐅=-∇ U.
The last force term in Eq. (<ref>), namely v_0 γ𝐧, known as active force, describes at a coarse-grained level the chemical, biological or physical mechanism responsible for the self-propulsion.
The constant v_0 provides a velocity scale to the dynamics and it is often referred to in the literature as swim velocity, while the vector 𝐧 is a stochastic process with unit variance whose properties and dynamics determine the active model considered.
𝐧 is an additional degree of freedom that is absent for equilibrium systems where v_0=0.
Despite the generality of Eq. (<ref>), for simplicity, we restrict ourselves to two spatial dimensions.
§.§ Chiral active Brownian particles (ABPs).
In the ABP dynamics <cit.> independently of the chirality, the term 𝐧 is a unit vector, such that |𝐧|=1, usually associated with the orientation of the active particle.
Since the modulus of 𝐧 is unitary, the dynamics of 𝐧 can be conveniently expressed in polar coordinates.
In this representation, 𝐧=(cosθ, sinθ), where θ is the orientational angle of the active particle that evolves as a simple diffusive process:
θ̇ = √(2/τ) ξ + ω ,
where ξ is a white noise with unit variance and zero average and the typical time τ can be identified with the persistence time
induced by the rotational diffusion coefficient D_r=1/τ.
In the ABP dynamics, the chirality is introduced by adding an angular drift ω in Eq. (<ref>), which
breaks the rotational symmetry of the active force dynamics and induces a preferential rotation of the vector 𝐧 in the clockwise or counterclockwise direction depending on the sign of ω.
As a consequence, the single-particle trajectories of a chiral ABP
tend to be circular.
The value of |ω| determines the strength of chirality: the larger |ω|, the smaller the typical radius of the circular trajectories of a single particle, given by v_0/ω.
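For concreteness, the chiral ABP dynamics written above can be integrated with a simple Euler-Maruyama scheme. The following Python/NumPy sketch sets D_t=0 and uses a harmonic force F=-kx, anticipating the confined case studied later; all parameter values (k=γ=v_0=τ=1, ω=2, dt=10^-3) are illustrative choices and not necessarily those used for the figures.

import numpy as np

def simulate_chiral_abp(k=1.0, gamma=1.0, v0=1.0, tau=1.0, omega=2.0,
                        dt=1e-3, n_steps=100_000, seed=0):
    """Euler-Maruyama integration of an overdamped chiral ABP with D_t = 0
    in the harmonic potential U(x) = k|x|^2/2 (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)            # particle position
    theta = 0.0                # orientation of the active force
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        n_hat = np.array([np.cos(theta), np.sin(theta)])
        x = x + dt * (-k * x / gamma + v0 * n_hat)    # overdamped position update
        # orientational diffusion plus the chiral angular drift omega
        theta += omega * dt + np.sqrt(2.0 * dt / tau) * rng.standard_normal()
        traj[i] = x
    return traj

traj = simulate_chiral_abp()
print("<x^2> =", np.mean(traj[:, 0] ** 2))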
§.§ Chiral active Ornstein-Uhlenbeck particles (AOUPs).
In the AOUP dynamics <cit.>, 𝐧 is described by a two-dimensional Ornstein-Uhlenbeck process that allows both the modulus |𝐧| and the orientation θ to fluctuate with related amplitudes <cit.>.
The AOUP distribution is a two-dimensional Gaussian such that each component fluctuates around a vanishing mean value with unit variance.
The resulting dynamics of the vector 𝐧 reads:
𝐧̇= - 𝐧/τ + √(1/τ)χ + ω 𝐧×𝐳
where χ is a two-dimensional vector of white noises with uncorrelated components having unitary variance and zero average.
Here, τ represents the persistence time of the particle trajectory, i.e. the time that the particle,
in the absence of angular drift, spends moving in the same direction before a reorientation of the active force.
In the AOUP model the diffusion coefficient due to the active force is
obtained from the relation 2D_a/τ= v_0^2τ, which allows
a simple comparison between AOUP and ABP models <cit.>.
In the AOUP dynamics, the chirality is included by adding the force ω 𝐧×𝐳, where 𝐳 is the direction orthogonal to the plane of motion and the parameter ω quantifies the chirality of the particle <cit.>.
Such a force is always directed in the plane of motion, normal to 𝐳,
and is orthogonal to 𝐧, so that it rotates the self-propulsion vector in the clockwise or counterclockwise direction depending on the sign of ω.
Similarly to the chiral ABP model, the chiral AOUP dynamics displays circular trajectories. However, in contrast with the ABP dynamics, the typical circles described by an AOUP are characterized by a fluctuating radius, which on average is equal to that of the ABP, ≈ v_0/|ω|.
It is worth noting that the chiral term in the AOUP dynamics is totally equivalent to the chiral term in the ABP dynamics. Indeed, the constant force ω 𝐧×𝐳 in polar coordinate affects only the dynamics of the polar angle through a constant term equivalent to the driving angular velocity written in Eq. (<ref>).
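A corresponding one-step update of the AOUP self-propulsion vector, as written in the equation above, is sketched below in Python/NumPy; the position would be advanced exactly as in the ABP sketch, with v_0 n in place of v_0(cosθ, sinθ). Parameter values are again purely illustrative.

import numpy as np

def step_chiral_aoup_n(n, tau, omega, dt, rng):
    """One Euler-Maruyama step for the chiral AOUP vector n: Ornstein-Uhlenbeck
    relaxation, white noise, and the chiral rotation omega * (n x z)."""
    n_cross_z = np.array([n[1], -n[0]])        # (n x z) restricted to the plane of motion
    noise = np.sqrt(dt / tau) * rng.standard_normal(2)
    return n + dt * (-n / tau + omega * n_cross_z) + noise

rng = np.random.default_rng(1)
n = np.array([1.0, 0.0])
for _ in range(100_000):
    n = step_chiral_aoup_n(n, tau=1.0, omega=2.0, dt=1e-3, rng=rng)
print("|n| fluctuates (in contrast with the ABP, where |n| = 1):", np.linalg.norm(n))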
§.§ Relation between chiral AOUPs and chiral ABPs.
Although the AOUP and ABP dynamics are different, both are usually employed to describe active particles and display similarities, so that the AOUP has often been employed to derive analytical predictions suitable to describe ABP numerical results.
The reason for this agreement lies in the fact that the two-time self-correlations of 𝐧 of the two models are identical
with an appropriate choice of parameters <cit.>.
For both cases, we find
⟨𝐧(t)·𝐧(0)⟩=e^-t/τcos(ω t) .
It is worth noting that, in Eq. (<ref>), the chirality affects the shape of the autocorrelation by inducing oscillations.
Although ABP and AOUP have different dynamics and are characterized by different steady-state distributions, these dynamical properties are at the basis of a plethora of similar phenomena observed for a single particle as well as for interacting systems.
A comparison between the two models has been established for a single non-chiral active particle and a non-chiral active particle in a harmonic potential, while, more generally, the relation between the two models has been deepened in Ref. <cit.>.
However, the effect of chirality in the two models confined in an external potential has been poorly investigated in the literature.
§ CHIRAL ACTIVE PARTICLE IN A RADIAL POTENTIAL
We start by considering chiral active particles confined by a simple harmonic potential in two dimensions, U(𝐱) = k |𝐱|^2/2, which exerts a linear force on the particle directed towards the origin.
Both in chiral ABP and chiral AOUP simulations, it is convenient to rescale time by the persistence time τ and the position by the persistence length v_0 τ.
In this way, the chirality can be tuned by changing the dimensionless parameter ωτ, which we call reduced chirality.
The other dimensionless parameters of the simulations are the reduced stiffness of the potential kτ/γ and the ratio between passive and active diffusion coefficients, D_t/(τ v_0^2).
For simplicity, we set D_t=0 and eliminate D_t/(τ v_0^2).
Indeed, the thermal noise is orders of magnitude smaller than the diffusion due to the active force in several experimental systems <cit.>.
Finally, we set kτ/γ=1.
The effect of this parameter has been explored in the AOUP case analytically <cit.>, and in the ABP case numerically <cit.> and experimentally <cit.> by considering an active Janus particle in an optical tweezer.
Here, we focus on the role of reduced chirality, ωτ.
Active particles in radial potentials <cit.> have been widely investigated in the absence of chirality, for which we summarize the main results:
the AOUP dynamics in a harmonic potential can be solved exactly <cit.>, being fully linear, and is described by a multivariate Gaussian distribution in 𝐱 and 𝐧. As a consequence, the density p(𝐱) of the system is still Gaussian and the active force affects the distribution by changing its effective temperature only <cit.>.
The ABP dynamics in a harmonic potential has been exactly solved only recently <cit.> and leads to a more intriguing scenario <cit.>. While in the small persistence regime (small τ or large D_r) the density is Gaussian <cit.> and similar to the one of the AOUP, in the large persistence regime ABPs accumulate far from the potential minimum, as confirmed experimentally by active colloids <cit.>, roughly at the distance where the active force balances the potential force, i.e. at |𝐱| ≈ v_0γ/k. As a result, the two-dimensional density in the plane of motion is characterized by a Mexican-hat shape, while the density projected onto a single coordinate displays bimodality.
The results observed for the ABP are reminiscent of those originally obtained by considering Run-and-Tumble particles <cit.>.
§.§ Spatial distribution
To investigate the role of chirality, we plot the probability distribution p(x,y) in the plane of motion for three representative values of the reduced chirality, ωτ.
This analysis is performed both for the ABP (Fig. <ref> (a)-(c) ) and AOUP (Fig. <ref> (d)-(f)) models.
In the chiral AOUP case, the system is linear and, as a consequence, p(x,y) is a Gaussian centered at the origin in both spatial directions, independently of the value of ωτ.
The increase of the chirality induces a stronger confinement of the particle, as if the potential were stiffer or the dynamics were governed by a lower effective temperature.
Indeed, the system is described by the following p(x,y)
p(x,y)=𝒩exp( - k (x^2+y^2)/(2 T_eff) )
with effective temperature (in units of the Boltzmann constant, k_B=1)
T_eff= k⟨ x^2⟩ = τγ v_0^2 (1+τ k/γ) / [ (1+τ k/γ)^2+ω^2τ^2 ] .
The theoretical results (<ref>) and (<ref>) are derived in Appendix <ref>, while the general method is described in Appendix <ref>.
For ωτ≪ 1, the effective temperature T_eff is consistent with the known non-chiral expression, showing a decrease as τ→ 0 and an increase proportional to v_0^2.
The effect of chirality ω manifests itself as a decrease of the effective temperature, consistently with Figs. <ref> (a), (b), and (c).
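The decrease of the effective temperature with the chirality can be read off directly from the expression above; a minimal numerical evaluation in Python, with the illustrative choice k=γ=v_0=τ=1, is:

import numpy as np

def t_eff(omega, k=1.0, gamma=1.0, tau=1.0, v0=1.0):
    """Effective temperature of a chiral AOUP in the radial harmonic trap (expression above)."""
    a = 1.0 + tau * k / gamma
    return tau * gamma * v0**2 * a / (a**2 + omega**2 * tau**2)

for wt in (0.0, 1.0, 5.0):
    print(f"omega*tau = {wt}: T_eff = {t_eff(wt):.3f}")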
As expected, the ABP case is richer:
for small values of ωτ≲ 1, chiral ABPs accumulate at a finite distance from the minimum of the potential (Fig. <ref> (a)) as already observed in the absence of chirality.
The distribution displays the typical Mexican-hat shape, i.e. the particles accumulate on a ring roughly at distance ≈ v_0γ/k from the origin. In this regime, the increase of the chirality broadens the width of the ring.
The tendency of particles to rotate (on average) in a clockwise (counterclockwise) direction hinders the ability of the particles to accumulate away from the minimum: a particle accumulated at a radial distance ≈ v_0 γ/k can change the direction of its active force because of the rotation induced by the chirality.
For larger values of ωτ∼ 1, the rotations of the particles are stronger and characterized by a smaller radius of the circle. Thus, the accumulation is observed at a position much closer to the minimum of the potential with respect to the previous case (Fig. <ref> (b)): particles cannot reach the position v_0γ/k because the chirality turns the direction of the active force before they arrive at this position.
Finally, the accumulation is completely suppressed for ωτ≳ 1, when the particle simply performs small circular trajectories around the minimum of the potential.
In the latter regime (Fig. <ref> (c)), p(x,y) is again peaked at the origin and the effect of chirality can be mapped again onto an effective temperature.
This occurs because the radius of the circular trajectory, namely v_0/ω, is smaller than the typical distance at which particles accumulate, v_0γ/k. As a consequence, the particles' ability to climb the potential is contrasted by their tendency to spin and perform circular trajectories around the potential minimum.
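The crossover from the Mexican-hat profile to a single central peak can be monitored by histogramming the radial coordinate of a simulated trajectory. A sketch is given below; it assumes that the simulate_chiral_abp helper from the earlier ABP sketch is in scope, and the chirality values are illustrative.

import numpy as np

def radial_density(traj, n_bins=60):
    """Planar density p(r) estimated from a 2D trajectory: the 1D histogram of r is
    normalized and divided by 2*pi*r, so that 2*pi*r*p(r) integrates to one."""
    r = np.linalg.norm(traj, axis=1)
    hist, edges = np.histogram(r, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / (2.0 * np.pi * centers)

for wt in (0.2, 1.0, 5.0):                   # reduced chirality omega*tau (tau = 1)
    traj = simulate_chiral_abp(omega=wt)
    r, p = radial_density(traj[10_000:])     # discard the initial transient
    print(f"omega*tau = {wt}: p(r) is maximal at r ~= {r[np.argmax(p)]:.2f}")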
§.§ Projected density and moments of the distribution
In Fig. <ref> the spatial density p_1(x), projected onto a single spatial component, is plotted for several values of the reduced chirality ωτ.
As expected, the ABP case (Fig. <ref> (a)) is richer than the AOUP case (Fig. <ref> (b)).
The latter is characterized by a Gaussian p_1(x), whose variance varies with ωτ, while the former shows a transition from a bimodal distribution (characterized by two lateral peaks) to a unimodal distribution, when ωτ≳ 1.
We consider the moments of this distribution both for the ABP and AOUP cases.
By symmetry, the first moment is zero, while
in both models, the variance ⟨ x^2 ⟩ of p(x) displays a monotonic decrease with ωτ starting at ωτ∼ 1.
For the variance of the distribution, both AOUP and ABP dynamics show consistent results.
Finally, we study the kurtosis of the distribution ⟨ x^4⟩ / ⟨ x^2⟩^2 in the AOUP and ABP to quantify the non-Gaussianity of the latter.
In the AOUP case, the kurtosis is equal to 3, the model being Gaussian, whereas
in the ABP case, the kurtosis is always smaller than 3 as a result of the non-Gaussian nature of the distribution.
As ωτ increases, the kurtosis goes from a value ≈ 2 (when p_1(x) is bimodal) to an asymptotic value slightly smaller than 3 (where p_1(x) is unimodal).
This implies that the chirality reduces the non-Gaussianity of the distribution but that the unimodal p_1(x) observed for larger ωτ is still non-Gaussian.
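The projected moments discussed here are straightforward to estimate from samples; a short sketch follows, assuming a trajectory array traj (columns x and y) such as the one produced by the earlier ABP sketch.

import numpy as np

def projected_moments(x):
    """Variance and kurtosis <x^4>/<x^2>^2 of the projected density p_1(x)."""
    x = x - x.mean()
    m2 = np.mean(x**2)
    return m2, np.mean(x**4) / m2**2

var_x, kurt_x = projected_moments(traj[10_000:, 0])
print(f"<x^2> = {var_x:.3f}, kurtosis = {kurt_x:.2f} (3 for a Gaussian, close to 2 for a bimodal p_1)")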
§ CHIRAL ACTIVE PARTICLE IN A NON-RADIAL POTENTIAL
In this section, we investigate the dynamics of an active chiral particle in a potential that breaks the rotational symmetry of the system. We consider a harmonic potential with an elliptic shape: U(x,y)=1/2(k_x x^2 + k_y y^2).
Such a potential introduces an additional dimensionless parameter, k_y/k_x, which quantifies the asymmetry of the potential; we choose k_y/k_x =3.
The remaining dimensionless parameters are k_y τ/γ =1 and D_t/(τ v_0^2)=0.
Here, again we vary the reduced chirality ωτ to study the interplay between chirality and asymmetry of the potential.
The asymmetry between the two orthogonal directions in the corresponding equilibrium system
would be fully described by the Maxwell-Boltzmann distribution: particles fluctuate around the origin and explore larger regions of space along the direction where the potential gradient is weaker.
The generalization to non-chiral active particles is rather straightforward both for AOUP and ABP and does not present significant changes with respect to the symmetric case.
Indeed, the non-chiral AOUP in the potential U(x,y) is characterized by a Gaussian distribution similar to the equilibrium case, while the non-chiral ABP displays accumulation away from the minimum on an ellipsoidal domain rather than a circular one.
Intuitively, the accumulation along the more confined direction will be stronger.
§.§ Spatial distribution and cross-correlations
The role of chirality in a harmonic elliptic potential is analyzed by studying the two-dimensional density distribution p(x,y). The analysis is performed both for ABP and AOUP dynamics and for several values of the reduced chirality ωτ (Fig. <ref>).
In the AOUP case (Fig. <ref> (e)-(h)), p(x,y) displays a Gaussian shape, i.e. particles preferentially explore the spatial regions close to the origin, i.e. the minimum of the potential.
For small ωτ≪ 1 (Fig. <ref> (e)), the findings are consistent with the non-chiral scenario: active particles explore the elliptic region around the origin, and the chirality slightly decreases the spatial fluctuations, as seen in the case of a radial potential.
The effect of the chirality emerges for larger values of ωτ.
As shown in Fig. <ref> (f)-(h), the chirality tilts the main axis of the ellipse where the particles accumulate.
As a consequence, p(x,y) has a non-Maxwell-Boltzmann shape, since the distribution cannot be expressed as p(x,y) ∼ e^-U/T_eff, with k_B=1.
As already remarked, this effect is absent for non-chiral AOUP, and, thus, is purely induced by the interplay between the chirality and the breaking of the radial symmetry of the confining potential.
In general, we observe that the increase of ωτ increases the tilt angle of the ellipsoid until it reaches a saturation value that by symmetry cannot exceed π/4.
Finally, for ωτ≳ 1 the chirality leads to a stronger confinement and, thus, decreases the effective temperature of the system without altering the ellipsoidal shape of the potential, as shown from Fig. <ref> (g) to Fig. <ref> (h).
The last observation is consistent with the finding relative to the radial potential of Sec. <ref>.
The numerical results are confirmed by the expression for the probability distribution p(x,y) that reads (see Appendix <ref>)
p(x,y)=Cexp(-1/2⟨ y^2 ⟩ x^2+⟨ x^2 ⟩ y^2- 2⟨ xy⟩ xy/⟨ x^2 ⟩⟨ y^2 ⟩-⟨ xy ⟩^2)
where the variances ⟨ x^2⟩ and ⟨ y^2 ⟩ are given by
⟨x^2⟩ = (v_0^2τγ/k_x) (1+τ k_x/γ) / [(1+τ k_x/γ)^2 + ω^2τ^2]
⟨y^2⟩ = (v_0^2τγ/k_y) (1+τ k_y/γ) / [(1+τ k_y/γ)^2 + ω^2τ^2] .
Expression (<ref>) shows that the interplay between chirality and elliptic confinement induces a cross-correlation ⟨ xy⟩.
The shape deformation of the probability distribution observed numerically in Fig. <ref> is
described analytically by the formula:
⟨ xy⟩ = ωτ (v_0^2τγ/(k_x+k_y)) ( 1/[(1+τ k_y/γ)^2+ω^2τ^2] - 1/[(1+τ k_x/γ)^2+ω^2τ^2] ) .
The cross-correlation vanishes for ω→ 0 and displays a non-monotonic behavior as a function of the reduced chirality: it is positive or negative depending on the sign of ω and on the ratio k_y/k_x,
and vanishes when the radial symmetry is restored (k_x=k_y).
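The three theoretical moments above fully determine the Gaussian AOUP distribution, including the tilt of its iso-density ellipses. A small Python evaluation is given below; the stiffness values (with k_y/k_x = 3) and the remaining parameters are illustrative choices, not necessarily those of the figures.

import numpy as np

def elliptic_moments(omega, kx=1.0, ky=3.0, gamma=1.0, tau=1.0, v0=1.0):
    """Theoretical <x^2>, <y^2>, <xy> for a chiral AOUP in U = (kx x^2 + ky y^2)/2
    (expressions above) and the tilt angle of the Gaussian iso-density ellipses."""
    ax, ay = 1.0 + tau * kx / gamma, 1.0 + tau * ky / gamma
    xx = v0**2 * tau * gamma / kx * ax / (ax**2 + omega**2 * tau**2)
    yy = v0**2 * tau * gamma / ky * ay / (ay**2 + omega**2 * tau**2)
    xy = omega * tau * v0**2 * tau * gamma / (kx + ky) * (
        1.0 / (ay**2 + omega**2 * tau**2) - 1.0 / (ax**2 + omega**2 * tau**2))
    tilt = 0.5 * np.arctan2(2.0 * xy, xx - yy)   # principal-axis angle of the covariance matrix
    return xx, yy, xy, tilt

for wt in (0.1, 1.0, 2.0, 10.0):                 # reduced chirality (tau = 1)
    xx, yy, xy, tilt = elliptic_moments(wt)
    print(f"omega*tau = {wt}: <xy> = {xy:+.3f}, tilt = {np.degrees(tilt):+.1f} deg")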
As in the case of radial potential, the ABP dynamics displays a richer scenario (Fig. <ref> (a)-(d)).
For small reduced chirality ωτ≪ 1 (Fig. <ref> (a)), particles accumulate away from the potential minimum along the ellipsoid determined by the potential. In particular, particles accumulate more along the x direction where the system is more confined, with respect to the y direction.
In this regime, the increase of the chirality is able to change the orientation of the accumulation area introducing an evident asymmetry in the shape of p(x,y) (Fig. <ref> (b)).
This effect is enhanced when the reduced chirality is increased, until the regime ωτ∼ 1.
Correspondingly, the tendency of particles to climb on the potential is reduced and we can observe larger spatial fluctuations (Fig. <ref> (c)).
The mechanism that leads to the latter effect is the same as that described in Sec. <ref>.
Finally, spatial fluctuations are reduced (Fig. <ref> (d)), as if the system were governed by a smaller effective temperature, until the accumulation far from the potential minimum is completely suppressed.
Again, this is consistent with the results described for a chiral particle in a radial potential.
Both AOUP and ABP dynamics are characterized by a non-Maxwell-Boltzmann distribution with a breaking of the parity symmetry with respect to the x (or y) axis that characterizes the elliptic potential.
In other words, even if U(-x, y)=U(x,y), we have p(-x,y)≠ p(x,y) (or equivalently p(x,-y)≠ p(x,y)).
This effect emerges in the occurrence of spatial correlations between the Cartesian components of the positions and is purely due to the interplay between chirality and asymmetry of the potential.
§.§ Moments of the distribution
To quantify this effect we consider the moments of the distribution for x and y coordinates (Fig. <ref>).
Specifically, Fig. <ref> (a) displays the variances ⟨ x^2 ⟩ and ⟨ y^2 ⟩ as a function of the reduced chirality ωτ.
The results are similar for both ABP and AOUP and agree with the theoretical prediction Eq. (<ref>) and Eq. (<ref>).
The variances of the distribution, which can be interpreted as effective temperatures of the system, decrease for both the x and y components approximately when ωτ≈ 1.
However, the effect of chirality manifests itself for smaller values of ωτ when the system is less confined, i.e. along the y component.
For ωτ≫ 1, the chirality decreases the effective temperature of the system as ∼ω^-2.
Similarly to Fig. <ref>, to quantify the non-Gaussian nature of the system we study the kurtosis along x and y components, defined as ⟨ x^4⟩/⟨ x^2 ⟩^2 and ⟨ y^4⟩/⟨ y^2 ⟩^2.
In agreement with our intuition, the kurtosis of the AOUP model is equal to 3 for every value of ωτ.
In the ABP case, the two kurtoses display the same qualitative behavior observed in the case of the radial potential in Sec. <ref>. They start from values close to 2, when the system displays accumulation far from the potential minimum, and then increase with ωτ, until they reach an asymptotic value slightly smaller than 3.
Here, the non-Gaussian nature of the chiral ABP is more evident along the x axis when the system is more confined.
Finally, we plot the cross-correlation ⟨ x y ⟩, as a function of ωτ, where again, the ABP and AOUP display similar results.
The cross-correlation of both models is reproduced by the theoretical prediction (<ref>) that shows a non-monotonic behavior.
In the regime of small reduced chirality, ωτ≪ 1, the cross-correlation starts from zero and then grows almost linearly until it reaches a maximum around ωτ≈ 1.
From there, a further increase of ωτ reduces the value of ⟨ x y ⟩ with a scaling ∼ω^-2 until it vanishes.
§.§ Conditional moments of the distribution
To underpin the breaking of the parity symmetry of the distribution induced by the interplay between chirality and potential asymmetry, we study the conditional distribution of the system, p(y|x), i.e. the distribution calculated at fixed x, defined as p(y|x)=p(x,y)/p_1(x) (Fig. <ref>) and the corresponding first conditional moment.
Fig. <ref> (b) and (f) show p(y|x) for ωτ=2 at the three positions x/(v_0τ)=0, 0.2, 0.5, considered as examples. Panel (b) refers to the ABP dynamics (whose joint distribution, p(x,y), is reported in Fig. <ref> (a)), while panel (f) refers to the AOUP dynamics (whose p(x,y) is reported in Fig. <ref> (e)).
In the AOUP case, the distribution has a Gaussian shape in all the cases.
However, for x/(v_0τ)=0, the Gaussian is centered in the origin while by increasing x/(v_0τ), the center of the Gaussian shifts to values larger than zero.
In other words, the parity symmetry (characterizing the elliptic potential) is broken at fixed x/(v_0τ), i.e. p(y|x)≠ p(-y|x).
This is consistent with our analytical prediction
p(y|x)= C' exp(-1/2⟨ xy ⟩^2 x^2+⟨ x^2 ⟩^2 y^2- 2⟨ xy⟩⟨ x^2 ⟩ xy/(⟨ x^2 ⟩⟨ y^2 ⟩-⟨ xy ⟩^2) ⟨ x^2 ⟩)
and
⟨ y(x) ⟩ = ⟨ xy⟩/⟨ x^2⟩x
is the first conditional moment of the distribution, i.e. the average y at fixed x, as a function of x.
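The linear prediction above can be checked against a binned estimate of ⟨ y(x)⟩ from simulation data. A sketch follows; it assumes a trajectory array traj with columns x and y, e.g. obtained from the earlier ABP sketch with the radial force replaced by force = -np.array([k_x, k_y]) * x.

import numpy as np

def conditional_mean(traj, n_bins=20):
    """Binned estimate of <y(x)> together with the slope <xy>/<x^2> of the
    Gaussian prediction, both computed from sample moments."""
    x, y = traj[:, 0], traj[:, 1]
    slope = np.mean(x * y) / np.mean(x**2)
    edges = np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(x, edges) - 1
    y_cond = np.array([y[idx == i].mean() for i in range(n_bins)])
    return centers, y_cond, slope

centers, y_cond, slope = conditional_mean(traj)
print("predicted slope <xy>/<x^2> =", round(slope, 3))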
As clear from the shape of p(x,y) and known results in the absence of chirality, the ABP has a non-Gaussian distribution.
The conditional distribution of both models shows a similar degree of asymmetry and, in particular, the breaking of the parity symmetry in the distribution p(y|x) ≠ p(-y|x).
Indeed, at x/(v_0τ)=0, the p(y|x) displays a fully symmetric bimodal profile.
For larger values of x/(v_0τ), the spatial shape of p(y|x) displays intrinsic asymmetry: the right peak of the distribution becomes larger than the left until the left peak is completely suppressed.
To characterize this asymmetry, we study the first conditional moment of the distribution ⟨ y(x)⟩.
This analysis is reported in Fig. <ref> (g) and (h) for the AOUP case and in Fig. <ref> (c) and (d) for the ABP dynamics for several values of the reduced chirality ωτ.
In both cases, ⟨ y(x)⟩ is described by a linear profile with the same slope, in agreement with our theoretical prediction Eq.(<ref>).
⟨ y(x)⟩ shows an almost flat profile for ωτ≪ 1, as expected from the non-chiral case.
The slope is an increasing function of the chirality until it reaches a maximum for ωτ=2.
For larger values of ωτ, the slope decreases again until it becomes almost flat.
This non-monotonicity explains the one observed in the behavior of the cross-correlation ⟨ xy⟩ (Fig. <ref> (c)). Indeed, the non-zero conditional moment ⟨ y(x)⟩ induces global cross-correlations in the full distribution and thus, the larger ⟨ y(x)⟩, the larger ⟨ xy⟩.
§ CONCLUSIONS
In summary, we have studied a chiral active particle confined in an external potential, with and without radial symmetry.
For radial potentials, the chirality affects the effective temperature of the system both for ABP and AOUP dynamics.
Specifically, in the AOUP case, the dynamics displays Gaussian properties due to the linearity of the system with an effective variance that decreases with the chirality.
In the ABP case, the chirality reduces the non-Gaussianity of the system by suppressing the accumulation far from the minimum of the potential typical of the non-chiral confined ABP. In other words, the chirality induces a transition from a bimodal to a unimodal density.
For non-radial potentials, the scenario is richer due to the interplay between chirality and asymmetry of the potential which is able to break the parity symmetry in the probability distribution of the system.
As a consequence, a non-Maxwell-Boltzmann distribution is found both for chiral ABP and chiral AOUP dynamics.
This effect emerges in cross-correlations between the Cartesian components of the position that are present both for chiral ABP and chiral AOUP.
The linearity of the AOUP makes possible analytical calculations that allow us to analytically predict the first two moments of the chiral ABP in a harmonic potential.
LC acknowledges support from the Alexander Von Humboldt foundation.
HL acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) through the SPP 2265, under grant number LO 418/25-1.
§ DERIVATION EFFECTIVE EQUATION FOR THE PROBABILITY DISTRIBUTION FUNCTION
Although the linear models can be solved by considering the Langevin equation for the coordinates and then deriving the distribution function from the first non-vanishing cumulants, an equivalent description is possible in terms of an effective
Fokker-Planck equation (FPE) for the distribution function. At the linear level, the two methods yield equal results and the choice between them is a matter of taste, but when the potential is non-quadratic the FPE method is simpler to implement.
Here, we develop the second method in the case of chiral active particles.
For the sake of completeness, we briefly illustrate the basic assumptions leading to a closed equation for the probability density distribution <cit.>.
The equation of motion (<ref>) for D_t=0 can be written for each component as
γd x̅_m(t)/dt = F_m + η_m(t)
where the index m denotes the different Cartesian components (for instance, m=x,y in two dimensions) and η_m is a component of the active force γ v_0 𝐧.
By standard manipulations, we derive the equation for the associated probability distribution function
∂/∂ t p({x},t) =
- 1/γ∑_m∂/∂ x_m F_m({x}) p({x},t)
- ∑_m∂/∂ x_m⟨η_m(t) ρ̂({x},t)⟩ .
where ρ̂({x},t)= Π_mδ(x̅_m(t)-x_m), with x_m the local value assumed by x̅_m, and p({x},t)= ⟨ρ̂({x},t)⟩.
The average ⟨·⟩ is performed over the realizations of the stochastic process η_m and the curly brackets are used to denote a dependence over all the components of a vector.
Since Eq. (<ref>) is not a closed equation for the probability distribution function,
we employ the Novikov formula <cit.> to evaluate the average appearing in the last term.
This formula is valid for arbitrary Gaussian random functions (Note that the ABP is not described by a Gaussian noise):
⟨η_m(t) R[{η}] ⟩
=∫_0^t dt' ∑_n C_mn(t,t')
< δ R[{η}] /δη_n>
where R[{η}] denotes a functional of {η}, and the last factor on the right-hand side is the functional derivative of this functional with respect to η_n(t').
The term
C_mn(t,t')=
⟨η_m(t) η_n(t') ⟩
is the active force correlation function.
Employing Eq. (<ref>) and the definition of ρ̂({x},t), we get
⟨η_m(t) ρ̂({x},t)⟩
= ∫_0^t dt' ∑_n C_mn(t,t')
∑_k < δ( ρ̂({x},t)) /δx̅_kδx̅_k(t)/δη_n(t')>
= - ∑_k ∑_n ∂/∂ x_k∫_0^t dt' C_mn(t,t')
< ρ̂({x},t)
δx̅_k(t)/δη_n(t')> .
The functional derivative of x̅_k(t) with respect to η_n(t') is given by the following expression, valid for t>t'
δx̅_k(t)/δη_n(t') = θ(t-t')
[exp∫_t'^t ds J(s)]_kn
where the matrix J(s) has elements J_kl(s) =1/γ∂ F_k({x(s)})/∂ x_l(s).
Combining Eq. (<ref>) with Eq. (<ref>), we find
⟨η_m(t) ρ̂({x},t) ⟩
=
- ∑_k ∑_n ∂/∂ x_k
∫_0^t dt' [C_mn(t,t') < ρ̂({x},t)
(exp∫_t'^t ds J(s) )_kn> ] .
The expressions obtained up to here are exact but not closed.
Therefore, we employ a closure scheme to obtain a theoretical prediction for the probability distribution.
To achieve this goal, we approximate Eq. (<ref>) as follows:
< ρ̂({x},t)
exp(∫_t'^t ds J(s) )_kn>
≃< ρ̂({x},t) >
( exp< J(t) > (t-t') )_kn .
Here, we have performed three approximations: 1) the factorization of the averages; 2) the replacement of the average of the exponential with the exponential of the average; 3) the treatment of J(s) as a constant in the time integral in the exponent.
Let us remark that the above approximations are exact in the case of quadratic potentials, because 𝐉(t)=const, while in the general case they are genuine approximations.
Going back to Eq. (<ref>), we find
⟨η_m(t) ρ̂({x},t) ⟩
= - ∑_k ∂/∂ x_k p({x},t) D_mk(t)
where we have defined the following matrix elements:
D_mk(t)= ∑_n [∫_0^t dt̃ C_mn(t̃)
( exp< J(t) > t̃)_nk]
Finally, we obtain a closed equation for the probability distribution
∂/∂ t p({x},t) =
- ∑_m∂/∂ x_mF_m({x})/γ p
+ ∑_m k∂/∂ x_m[
∂/∂ x_k D_mk p
] .
The method developed here (and in particular the approximations 1), 2) and 3) in Eq. (<ref>)) are exact in the case of a chiral AOUP particle confined in a harmonic potential with radial or non-radial (elliptic) shape.
In contrast, for non-linear forces, 1), 2) and 3) are approximations whose accuracy depends on the potential considered.
Finally, the method represents only an approximation for the ABP because the Novikov formula, Eq. (<ref>), does not hold. Indeed, the ABP is governed by a non-Gaussian noise because 𝐧 is an orientation with a non-fluctuating unit modulus.
§ APPLICATION TO SIMPLE CASES.
The general method presented in the previous appendix is applied to a confining potential (with radial and non-radial symmetry) studied in Sec. <ref> and Sec. <ref>.
First, we estimate the components of the time-autocorrelation of the active force, C_mn(t-t'):
C_mn(t-t') =
v_0^2 e^-|t-t'|/τ([ cos(ω (t-t')) - sin(ω |t-t'|); sin(ω |t-t'|) cos(ω (t-t')) ]) .
Then, we estimate D_mk(t) for a rather general form of central potential, U(r), applying the definition (<ref>)
and taking the limit t→∞.
We obtain the following matrix elements
D_xx=
v_0^2 τ/r^2[y^2 u_I-xy w_I +
x^2 u_II+xy w_II]
D_yy= v_0^2 τ/r^2[x^2 u_I+xy w_I +
y^2 u_II-xy w_II]
D_xy= v_0^2 τ/r^2[ w_I x^2-xy u_I +
w_II y^2 + xy u_II]
D_yx= -v_0^2 τ/r^2[ w_I y^2+xy u_I +
w_II x^2 -xy u_II]
where we used the abbreviations:
u_II= (1+τU”/γ) /(1+τU”/γ)^2+ω^2τ^2
u_I= (1+τU'/r/γ) /(1+τU'/r/γ)^2+ω^2τ^2
w_II= ωτ/(1+τU”/γ)^2+ω^2τ^2
w_I= ωτ/(1 +τU'/r/γ)^2+ω^2τ^2 .
and the primed symbols stand for the first and second derivatives of U(r).
After eliminating x and y in favor of the radial coordinate r=√(x^2+y^2),
the resulting effective Fokker-Planck equation is conveniently written as:
∂/∂ t p=
1/r∂/∂ r[ r U'(r)/γ p
+ v_0^2τ( (u_II - u_I )p+
r ∂/∂ r ( u_II p)
)] .
The time independent solution of Eq. (<ref>) is obtained by imposing the vanishing of the radial
component, J_rad, of the probability current (i.e. minus the expression contained in the square parenthesis in the r.h.s. of Eq. (<ref>)).
For the particular case where the potential is harmonic (U(r)=kr^2/2), the difference
(u_II - u_I ) in expression (<ref>) vanishes and the explicit solution is:
p(r)=ρ_0 exp( - [(1+τ k/γ)^2+ω^2τ^2] / (1+τ k/γ) × k r^2/(2 τγ v_0^2) ) ,
while for arbitrary central potentials the problem can always be reduced to a simple quadrature.
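As an illustration of this quadrature, the zero-current condition can be integrated numerically for a generic central potential under the closure of this appendix. The Python sketch below uses a quartic confinement U(r)=r^4/4, which is not studied in the main text and serves only as an example; all parameter values are illustrative.

import numpy as np

def steady_state_pr(dU, d2U, v0=1.0, gamma=1.0, tau=1.0, omega=1.0, r_max=4.0, n=4000):
    """Zero-radial-current solution p(r) of the effective Fokker-Planck equation above
    for a central potential; dU and d2U are callables returning U'(r) and U''(r)."""
    r = np.linspace(1e-6, r_max, n)
    uII = (1 + tau * d2U(r) / gamma) / ((1 + tau * d2U(r) / gamma)**2 + omega**2 * tau**2)
    uI = (1 + tau * dU(r) / (r * gamma)) / ((1 + tau * dU(r) / (r * gamma))**2 + omega**2 * tau**2)
    # J_rad = 0  =>  d(uII p)/dr = -[U'/(gamma v0^2 tau) + (uII - uI)/r] p
    integrand = (dU(r) / (gamma * v0**2 * tau) + (uII - uI) / r) / uII
    log_g = -np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    p = np.exp(log_g) / uII
    p /= np.trapz(2.0 * np.pi * r * p, r)    # normalize in the plane of motion
    return r, p

r, p = steady_state_pr(dU=lambda r: r**3, d2U=lambda r: 3.0 * r**2)   # U(r) = r^4/4
print("p(r) is maximal at r ~=", round(float(r[np.argmax(p)]), 2))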
Interestingly, it is easy to verify
that due to the handedness of the system the tangential component of the probability current does not vanish whenever ωτ≠ 0. In other words, the presence of a radial gradient
in the probability density induces a circulation of the particles in the direction orthogonal to it, but such a current does not affect the probability distribution itself. The tangential current reads:
J_tan= v_0^2τωτ/(1+τ/γk)^2+ω^2τ^2∂/∂ r p(r)
By expressing p(r) as a function of the Cartesian components we obtain Eq. (<ref>).
By contrast, in the case of the elliptic quadratic confining potential, U(x,y)=(k_x x^2+k_y y^2)/2,
one cannot exploit the radial symmetry of the problem and
the equation for the probability density reads:
∂/∂ t p(x,y,t) = ∂/∂ xk_x x/γ p +∂/∂ yk_y y/γ p
+ v_0^2τ[ (1+τ/γk_x) /(1+τ/γk_x)^2+ω^2τ^2∂^2/∂ x^2
+ (1+τ/γk_y) /(1+τ/γk_y)^2+ω^2τ^2∂^2/∂ y^2
+ ( ωτ/(1+τ/γk_y)^2+ω^2τ^2 - ωτ/(1+τ/γk_x)^2+ω^2τ^2) ∂^2/∂ x∂ y] p .
The steady probability p(x,y) can be obtained by first determining its cumulants (Eqs. (<ref>), (<ref>), (<ref>))
from Eq. (<ref>) and
using this information to express the pdf as in Eq. (<ref>).
|
http://arxiv.org/abs/2306.04670v3
|
20230607161338
|
Object Detection with Transformers: A Review
|
[
"Tahira Shehzadi",
"Khurram Azeem Hashmi",
"Didier Stricker",
"Muhammad Zeshan Afzal"
] |
cs.CV
|
[
"cs.CV"
] |
The astounding performance of transformers in natural language processing (NLP) has motivated researchers to explore their applications in computer vision tasks. DEtection TRansformer (DETR) introduces transformers to object detection tasks by reframing detection as a set prediction problem. Consequently, it eliminates the need for proposal generation and post-processing steps. Initially, despite competitive performance, DETR suffered from slow training convergence and ineffective detection of smaller objects. However, numerous improvements have been proposed to address these issues, leading to substantial improvements in DETR and enabling it to exhibit state-of-the-art performance. To our knowledge, this is the first paper to provide a comprehensive review of 21 recently proposed advancements in the original DETR model. We dive into both the foundational modules of DETR and its recent enhancements, such as modifications to the backbone structure, query design strategies, and refinements to attention mechanisms. Moreover, we conduct a comparative analysis across various detection transformers, evaluating their performance and network architectures. We hope that this study will ignite further interest among researchers in addressing the existing challenges and exploring the application of transformers in the object detection domain. Readers interested in the ongoing developments in detection transformers can refer to our website at https://github.com/mindgarage-shan/transformer_object_detection_survey.
Transformer, Object Detection, DETR, Computer Vision, Deep Neural Networks.
Object Detection with Transformers: A Review
Tahira Shehzadi,
Khurram Azeem Hashmi,
Didier Stricker
and Muhammad Zeshan Afzal
All the members are with Department of Computer Science Technical University of Kaiserslautern, Mindgarage Lab,
German Research Institute for Artificial Intelligence (DFKI)
Kaiserslautern, Germany 67663
E-mail: [email protected]
=================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Object detection is one of the fundamental tasks in computer vision that involves locating and classifying objects within an image <cit.>. Over the years, convolutional neural networks (CNNs) have been the primary backbone for object detection models <cit.>. However, the recent success of transformers in natural language processing (NLP) has led researchers to explore their potential in computer vision as well <cit.>. The transformer architecture <cit.> has been shown to be effective in capturing long-range dependencies in sequential data <cit.>, making it an attractive candidate for object detection tasks.
In 2020, Carion et al. proposed a novel object detection framework called DEtection TRansformer (DETR) <cit.>, which replaces the traditional region proposal-based methods with a fully end-to-end trainable architecture that uses a transformer encoder-decoder network. The DETR network shows promising results, outperforming conventional CNN-based object detectors <cit.> while also eliminating the need for hand-crafted components such as region proposal networks and post-processing steps such as non-maximum suppression (NMS) <cit.>.
Since the introduction of DETR, several modifications and improvements have been proposed to overcome its limitations, such as slow training convergence and performance drops for small objects. Figure <ref> shows the literature overview on the Detection Transformer and its modifications to improve performance and training convergence. Deformable-DETR <cit.> modifies the attention modules that process the image feature maps, considering the attention mechanism the main reason for slow training convergence. UP-DETR <cit.> proposes a few modifications to pre-train the DETR, similar to the pretraining of transformers in natural language processing. Efficient-DETR <cit.>, based on the original DETR and Deformable-DETR, examines the randomly initialized object probabilities, including reference points and object queries, which are one of the reasons for the need for multiple training iterations. SMCA-DETR <cit.> introduces a Spatially-Modulated Co-attention module that replaces the existing co-attention mechanism in DETR to overcome the slow training convergence of DETR. TSP-DETR <cit.> deals with the cross-attention and the instability of bipartite matching to overcome the slow training convergence of DETR. Conditional-DETR <cit.> presents a conditional cross-attention mechanism to solve the training convergence issue of DETR. WB-DETR <cit.> considers the CNN backbone for feature extraction an extra component and presents a transformer encoder-decoder network without a backbone. PnP-DETR <cit.> proposes a PnP sampling module to reduce spatial redundancy and make the transformer network computationally more efficient. Dynamic-DETR <cit.> introduces dynamic attention in the encoder-decoder network to improve training convergence. YOLOS-DETR <cit.> demonstrates the transferability and versatility of the Transformer from image recognition to detection at the sequence level, using minimal information about the spatial design of the input, and improves performance. Anchor-DETR <cit.> proposes object queries as anchor points, which are extensively used in CNN-based object detectors. Sparse-DETR <cit.> reduces the computational cost by filtering encoder tokens with learnable cross-attention maps. D^2ETR <cit.> uses fine-fused feature maps from the backbone network in the decoder, together with a novel cross-scale attention module. FP-DETR <cit.> reformulates the pretraining and fine-tuning stages for detection transformers. CF-DETR <cit.> refines the predicted locations by utilizing local information, as incorrect bounding box locations reduce performance on small objects. DN-DETR <cit.> uses noised object queries as additional decoder input to reduce the instability of the bipartite-matching mechanism in DETR, which causes the slow convergence problem. AdaMixer <cit.> considers the encoder an extra network between the backbone and the decoder that limits the performance and slows the training convergence because of its design complexity; it proposes a 3D sampling process and a few other modifications in the decoder. REGO-DETR <cit.> proposes an RoI-based method for detection refinement to improve the attention mechanism in the detection transformer. DINO <cit.> considers positive and negative noised object queries to make training convergence faster and to enhance the performance on small objects.
Due to the rapid progress of transformer-based detection methods, keeping track of new advancements is becoming increasingly challenging. Thus, a review of ongoing progress is necessary and would be helpful for the researchers in the field. This paper provides a detailed overview of recent advancements in detection transformers. Table <ref> shows the overview of Detection Transformer (DETR) modifications to improve performance and training convergence.
§.§ Our Contributions
* Detailed review of transformer-based detection methods from architectural perspective. We categorize and summarize improvements in DEtection TRansformer (DETR) according to Backbone modifications, pre-training level, attention mechanism, query design, etc. The proposed analysis aims to help researchers to have a more in-depth understanding of the key components of detection transformers in terms of performance indicators.
* A performance evaluation of detection transformers. We evaluate improvements in detection transformers using popular benchmark MS COCO <cit.>. We also highlight the advantages and limitations of these approaches.
* Analysis of accuracy and computational complexity of improved versions of detection transformers.
We present an evaluative comparison of state-of-the-art transformer-based detection methods w.r.t attention mechanism, backbone modification, and query design.
* Overview of key building blocks of detection transformers to improve performance further and future directions.
We examine the impact of various key architectural design modules that impact network performance and training convergence to provide possible suggestions for future research.
The remainder of the paper is arranged as follows. Section <ref> discusses previous related surveys on transformers. Section <ref> covers object detection and the use of transformers across vision tasks. Section <ref> is the main part, which explains the modifications of the detection transformers in detail. Section <ref> describes the evaluation protocol, and Section <ref> provides an evaluative comparison of detection transformers. Section <ref> discusses open challenges and future directions. Finally, Section <ref> concludes the paper.
§ RELATED PREVIOUS REVIEWS AND SURVEYS
Many surveys have studied deep learning approaches in object detection <cit.>. Table <ref> lists existing object detection surveys. Among these surveys, many studies comprehensively review approaches that process different 2D data types <cit.>. Other studies focus on specific 2D applications <cit.> and other tasks such as segmentation <cit.>, image captioning <cit.> and object tracking <cit.>. Furthermore, some surveys examine deep learning methods and introduce vision transformers <cit.>. However, most of this literature was published before the recent improvements to the detection transformer network, and a detailed review of transformer-based object detectors is missing. Thus, a survey of ongoing progress is necessary and would be helpful for researchers.
§ OBJECT DETECTION AND TRANSFORMERS IN VISION
§.§ Object Detection
This section explains the key concept of object detection and previously used object detectors. A more detailed analysis of object detection concepts can be found in <cit.>. The object detection task localizes and recognizes objects in an image by providing a bounding box around each object and its category. These detectors are usually trained on datasets like PASCAL VOC <cit.> or MS COCO <cit.>. The backbone network extracts the features of the input image as feature maps <cit.>. Usually, the backbone network, such as the ResNet-50 <cit.>, is pre-trained on ImageNet <cit.> and then finetuned to downstream tasks <cit.>. Moreover, many works have also used visual transformers <cit.> as a backbone. Single-stage object detectors <cit.> use only one network having faster speed but lower performance than two-stage networks.
Two-stage object detectors <cit.> contain two networks to provide final bounding boxes and class labels.
Lightweight Detectors: Lightweight detectors are object detection models designed to be computationally efficient and require less computational resources than standard object detection models. These are real-time object detectors and can be employed on small devices. These networks include <cit.>.
3D Object Detection: The primary purpose of 3D object detection is to recognize the objects of interest using a 3D bounding box and assign a class label. 3D approaches are divided into three categories: image-based <cit.>, point cloud-based <cit.>, and multimodal fusion-based <cit.>.
§.§ Transformer for Segmentation
The self-attention mechanism can be employed for segmentation tasks <cit.> that provide pixel-level <cit.> prediction results. Panoptic segmentation <cit.> jointly solves semantic and instance segmentation tasks by providing per-pixel class and instance labels. Wang et al. <cit.> propose location-sensitive axial attention for the panoptic segmentation task on three benchmarks <cit.>. The above segmentation approaches embed self-attention in CNN-based networks. Recently, segmentation transformers <cit.> containing encoder-decoder modules have given new directions for employing transformers in segmentation tasks.
§.§ Transformers for Scene and Image Generation
Previously, text-to-image generation methods <cit.> were based on GANs <cit.>. Ramesh et al. <cit.> introduced a transformer-based model for generating high-quality images from provided text details. Transformer networks are also applied for image synthesis <cit.>, which is important for learning unsupervised and generative models for downstream tasks. Feature learning with an unsupervised training procedure <cit.> achieves state-of-the-art performance on two datasets <cit.>, while SimCLR <cit.> provides comparable performance on <cit.>. The iGPT image generation network <cit.> does not include pre-training procedures similar to language modeling tasks. However, unsupervised CNN-based networks <cit.> consider prior knowledge such as architectural layout, attention mechanisms and regularization. Generative Adversarial Networks (GANs) <cit.> with CNN-based backbones have been appealing for image synthesis <cit.>. TransGAN <cit.> is a strong GAN network where the generator and discriminator contain transformer modules. These transformer-based networks boost performance for scene and image generation tasks.
§.§ Transformers for Low-level Vision
Low-level vision analyses images to identify their basic components and create an intermediate representation for further processing and higher-level tasks. After observing the remarkable performance of attention networks in high-level vision tasks <cit.>, many attention-based approaches have been introduced for low-level vision problems, such as <cit.>.
§.§ Transformers for Multi-Modal Tasks
Multi-Modal Tasks involve processing and combining information from multiple sources or modalities, such as text, images, audio, or video. The application of transformer networks in vision language tasks has also been widespread, including visual question-answering <cit.>, visual commonsense-reasoning <cit.>, cross-modal retrieval <cit.>, and image captioning<cit.>. These transformer designs can be classified into single-stream <cit.> and dual-stream networks <cit.>. The primary distinction between these networks lies in the choice of loss functions.
§ DETECTION TRANSFORMERS
This section briefly explains DEtection TRansformer (DETR) and its improvements as shown in Figure <ref>.
§.§ DETR
The DEtection TRansformer (DETR) <cit.> architecture is much simpler than that of CNN-based detectors like Faster R-CNN <cit.>, as it removes the need for the anchor generation process and for post-processing steps such as Non-Maximal Suppression (NMS), and provides an optimal detection framework.
The DETR network has three main modules: a backbone network with positional encodings, an encoder, and a decoder network with an attention mechanism. The features extracted by the backbone network are flattened into a single vector and, together with their positional encodings <cit.>, fed to the encoder network. Here, self-attention is performed on key, query, and value matrices forwarded to the multi-head attention and feed-forward network to find the attention probabilities of the input vector. The DETR decoder takes object queries in parallel with the encoder output. It computes predictions by decoding N object queries in parallel. It uses a bipartite-matching algorithm to match the ground-truth and predicted objects, as given in the following equation:
σ̂ = arg min_σ∈𝔖_N∑_k^Nℒ_m (y_k,ŷ_σ(k)),
Here, y_k is a set of ground-truth (GT) objects. It provides boxes for both object and "no object" classes, where N is the total number of objects to be detected. Here, ℒ_m (y_k,ŷ_σ(k)) is the matching cost (for direct prediction) without duplicates between predicted objects with index σ(k) and ground-truth y_k as shown in the following equation:
ℒ_m (y_k,ŷ_σ(k))=-1_{c_k≠ϕ}p̂_σ(k)(c_k)+1_{c_k≠ϕ}ℒ_bbox(b_k,b̂_σ(k))
The next step is to compute the Hungarian loss by determining the optimal matching between ground-truth (GT) and detected boxes regarding bounding-box region and label. The loss is reduced by Stochastic Gradient Descent (SGD).
ℒ_H(y, ŷ)= ∑_k=1^N[-logp̂_σ̂(k)(c_k)+1_{c_k≠ϕ}ℒ_bbox(b_k,b̂_σ̂(k))]
Where p̂_σ̂(k) and c_k are the predicted class and target label, respectively. The term σ̂ is the optimal-assignment factor, b_k and b̂_σ̂(k) are ground-truth and predicted bounding boxes. The term ŷ and y = {(c_k, b_k)} are the prediction and ground-truth of objects, respectively. Here, the bounding box loss is a linear combination of the generalized IoU (GIoU) loss <cit.> and of the L1 loss, as in the following equation:
ℒ_bbox = λ_iℒ_iou (b_k,b̂_σ(k)) + λ_l1∥ b_k-b̂_σ(k)∥_1
Where λ_i and λ_l1 are hyperparameters. DETR can only predict a fixed number N of objects in a single pass. For the COCO dataset <cit.>, the value of N is set to 100, as this dataset has 80 classes. This network doesn't need NMS to remove redundant predictions, as it uses a bipartite matching loss with parallel decoding <cit.>. In comparison, previous works used RNN-based autoregressive decoding <cit.>. The DETR network has several challenges, such as slow training convergence and performance drops for small objects. To address these challenges, modifications have been made to the DETR network.
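To make the matching step concrete, the Python sketch below uses SciPy's Hungarian solver to compute a toy one-to-one assignment between N=100 predictions and two ground-truth objects. Only the class-probability and L1 box terms of the matching cost are kept; the GIoU term, the weighting, and all tensor shapes are simplifications for illustration, not the reference DETR implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_boxes, gt_labels, gt_boxes, l1_weight=5.0):
    """Toy bipartite matching in the spirit of DETR's set prediction:
    cost = -p(class) + l1_weight * ||b - b_gt||_1 (GIoU term omitted for brevity)."""
    probs = np.exp(pred_logits) / np.exp(pred_logits).sum(-1, keepdims=True)   # softmax over classes
    cls_cost = -probs[:, gt_labels]                                            # [N_queries, N_gt]
    box_cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)   # pairwise L1 distance
    rows, cols = linear_sum_assignment(cls_cost + l1_weight * box_cost)        # optimal assignment
    return list(zip(rows, cols))                                               # (query, GT) index pairs

rng = np.random.default_rng(0)
matches = hungarian_match(rng.normal(size=(100, 81)),      # 80 classes + "no object"
                          rng.random(size=(100, 4)),       # (cx, cy, w, h) in [0, 1]
                          gt_labels=np.array([3, 17]),
                          gt_boxes=rng.random(size=(2, 4)))
print(matches)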
§.§ Deformable-DETR
The attention module of DETR provides a uniform weight value to all pixels of the input feature map at the initialization stage. These weights need many epochs for training convergence to find informative pixel locations. However, it requires high computation and extensive memory. The computational complexity of self-attention in the encoder is O (w_i^2h_i^2c_i), while the complexity of the cross-attention in the decoder is O (h_i w_i c_i^2 + N h_iw_i c_i). Here, h_i and w_i denote the height and width of the input feature map, respectively, and N represents the number of object queries fed as input to the decoder. Let q∈Ω _q denote a query element with feature z_q ∈ R^c_i, and k ∈Ω _k denote a key element with feature x_k ∈ R^c_i, where c_i is the input feature dimension, and Ω _k and Ω _q indicate the sets of key and query vectors, respectively. Then, the output of Multi-Head Attention (MHAttn) is computed by:
MHAttn(z_q, x)=∑_j=1^JW_j[∑_k∈Ω_k A_jqk· W_j^' x_k]
where j represents the attention head, and W_j^' ∈ R^c_v × c_i and W_j ∈ R^c_i× c_v are learnable weights (c_v = c_i/J by default). The attention weights A_jqk∝ exp( z_q^T U_j^T V_j x_k/√(c_v) ) are normalized as ∑_k∈Ω_k A_jqk = 1, in which U_j, V_j ∈ R^c_v× c_i are also learnable weights. Deformable-DETR <cit.> modifies the attention modules, inspired by <cit.>, to process the image feature map, considering the attention network the main reason for slow training convergence and confined feature spatial resolution. This attention module takes a small number of samples near the reference point. Given an input feature map x ∈ R^c_i × h_i × w_i, a query q with content feature z_q, and a 2d reference point r_q, the deformable attention feature is computed by:
DeformAttn(z_q, r_q,x)=∑_j=1^JW_j[∑_k=1^K A_jqk· W_j^' x(r_q + Δ r_jqk)]
Where Δ r_jqk denotes the sampling offset. Deformable-DETR takes ten times fewer training epochs than the simple DETR network. The complexity of self-attention becomes O (w_i h_i c_i^2), which is linear in the spatial size h_i w_i. The complexity of the cross-attention in the decoder becomes O (N K c_i^2), which is independent of the spatial size h_i w_i.
In Figure <ref>, the top right block indicates the deformable attention module of Deformable-DETR.
Multi-Scale Feature Maps: High-resolution input image features increase the network efficiency, specifically for small objects. However, this is computationally expensive. Deformable-DETR provides high-resolution features without affecting the computation. It uses a feature pyramid containing high and low-resolution features rather than the original high-resolution input image feature map. This feature pyramid has an input image resolution of 1/8, 1/16, and 1/32 and contains its relative positional embeddings. In short, Deformable-DETR replaces the attention module in DETR with the multi-scale deformable attention module to reduce computational complexity and improves performance.
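The sampling idea behind deformable attention can be sketched in a few lines of PyTorch. The function below is a simplified single-scale, single-head version that bilinearly samples the value map at K offset locations around each reference point and combines them with softmax-normalized weights; the learnable value projection, the multi-head structure, and the multi-scale aggregation of the actual Deformable-DETR are omitted, so this is an illustration rather than the reference implementation.

import torch
import torch.nn.functional as F

def deformable_attn_single_scale(value, ref_points, offsets, attn_weights):
    """Simplified deformable attention: each query attends to K sampled locations
    value(r_q + delta_qk), weighted by softmax-normalized weights A_qk.
    value:        [B, C, H, W]   feature map
    ref_points:   [B, Nq, 2]     reference points, normalized to [0, 1]
    offsets:      [B, Nq, K, 2]  sampling offsets (normalized units)
    attn_weights: [B, Nq, K]     attention weights summing to 1 over K
    returns:      [B, Nq, C]
    """
    loc = (ref_points[:, :, None, :] + offsets).clamp(0, 1)     # sampling locations [B, Nq, K, 2]
    grid = 2.0 * loc - 1.0                                      # grid_sample expects [-1, 1] coords
    sampled = F.grid_sample(value, grid, align_corners=False)   # bilinear sampling -> [B, C, Nq, K]
    return (sampled * attn_weights[:, None, :, :]).sum(-1).transpose(1, 2)

B, C, H, W, Nq, K = 1, 256, 32, 32, 100, 4
out = deformable_attn_single_scale(torch.randn(B, C, H, W),
                                   torch.rand(B, Nq, 2),
                                   0.05 * torch.randn(B, Nq, K, 2),
                                   torch.softmax(torch.randn(B, Nq, K), dim=-1))
print(out.shape)    # torch.Size([1, 100, 256])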
§.§ UP-DETR
Dai et al. <cit.> proposed a few modifications to pre-train the DETR, similar to the pre-training of transformers in NLP. Random-sized patches from the input image are used as object queries and fed to the decoder as input. The pre-training proposed by UP-DETR helps to detect these random-sized query patches. In Figure <ref>, the bottom left block denotes UP-DETR. Two issues are addressed during pre-training: multi-task learning and multi-query localization.
Multi-Task Learning: The object detection task combines object localization and classification, while these two tasks always have distinct features <cit.>. The patch detection damages the classification features. Multi-task learning with patch feature reconstruction and a frozen pre-training backbone is proposed to preserve the classification features of the transformer. The feature reconstruction loss is given as follows:
ℒ_rec(f_k, f̂_σ̂(k) ) = ∥f_k/∥ f_k ∥_2-f̂_σ̂(k)/∥f̂_σ̂(k)∥_2∥_2^2
Here, ℒ_rec is the feature reconstruction term. It is the mean-squared error between the ℓ_2-normalized features of the patches obtained from the CNN backbone.
Multi-query Localization: The decoder of DETR takes object queries as input to focus on different positions and box sizes. When the number of object queries N (typically N = 100) is large, a single query group is unsuitable, as it leads to convergence issues. To solve the multi-query localization problem between object queries and patches, UP-DETR proposes an attention mask and a query shuffle mechanism. The object queries are divided into X different groups, where each patch is provided to N/X object queries. The Softmax layer of the self-attention module in the decoder is modified by adding an attention mask inspired by <cit.> as follows:
P(q_i, k_i )= Softmax(q_ik_i^T/√(d) + M) · v_i
M_k,l =
0,    if queries k and l are in the same group
-∞,   otherwise
Where M_k,l is the interaction parameter between object queries q_k and q_l. Though the object queries are divided into groups during pre-training, these queries don't have explicit groups during downstream training tasks. Therefore, the queries are randomly shuffled during pre-training, with 10% of the query patches masked to zero, similar to dropout <cit.>.
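A minimal sketch of such a group-wise attention mask in PyTorch is given below; the number of queries and groups is illustrative, and the real UP-DETR additionally shuffles the queries and masks a fraction of the patches.

import torch

def group_attention_mask(num_queries=100, num_groups=10):
    """Additive attention mask M: 0 for query pairs in the same group, -inf otherwise,
    so that queries assigned to different patches do not interact in self-attention."""
    group_size = num_queries // num_groups
    group_id = torch.arange(num_queries) // group_size
    mask = torch.full((num_queries, num_queries), float("-inf"))
    mask[group_id[:, None] == group_id[None, :]] = 0.0
    return mask            # added to q k^T / sqrt(d) before the Softmax

M = group_attention_mask()
print(M.shape, (M == 0).float().mean().item())   # fraction of allowed pairs = 1/num_groups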
§.§ Efficient-DETR
The performance of DETR also depends on the object queries, as the detection head obtains the final predictions from them. However, these object queries are randomly initialized at the start of training. Efficient-DETR <cit.>, based on DETR and Deformable-DETR, examines the randomly initialized object blocks, including reference points and object queries, which are one of the reasons for the need for multiple training iterations. In Figure <ref>, the bottom right box shows Efficient-DETR.
Efficient-DETR has two main modules: a dense module and a sparse module. These modules have the same final detection head. The dense module includes the backbone network, the encoder network, and the detection head. Following <cit.>, it generates proposals by class-specific dense prediction using a sliding window and selects the top-k features as object queries and reference points. Efficient-DETR uses 4D boxes as reference points rather than 2D centres. The sparse network does the same work as the dense network, except for its output size. The features from the dense module are taken as the initial state of the sparse module, which is considered a good initialization of the object queries. Both the dense and the sparse modules use the one-to-one assignment rule as in <cit.>.
§.§ SMCA-DETR
The decoder of DETR takes as input object queries that are responsible for detecting objects in various spatial locations. These object queries combine with spatial features from the encoder. The co-attention mechanism in DETR involves computing a set of attention maps between the object queries and the image features to provide class labels and bounding box locations. However, the visual regions in the decoder of DETR related to an object query might be irrelevant to the predicted bounding boxes. This is one of the reasons that DETR needs many training epochs to find suitable visual locations to correctly identify the corresponding objects. Gao et al. <cit.> introduced a Spatially-Modulated Co-attention (SMCA) module that replaces the existing co-attention mechanism in DETR to overcome the slow training convergence of DETR. In Figure <ref>, the top right block represents SMCA-DETR. Each object query estimates the scale and center of its corresponding object, which are further used to set up a 2D spatial weight map. The initial estimates of the scale l_h_i, l_w_i and center e_h_i, e_w_i of the Gaussian-like distribution for an object query q are given as follows:
e_h_i^nrm, e_w_i^nrm =sigmoid (MLP(q)),
l_h_i,l_w_i = FC (q)
Where the object query q provides a prediction center in normalized form through a sigmoid activation function after a two-layer MLP. These predicted centers are unnormalized to get the center coordinates e_h_i and e_w_i in the input image. The object query also estimates the object scales l_h_i, l_w_i. After the prediction of the object scale and center, SMCA generates a Gaussian-like weight map as follows:
W(x, y )= exp(-(x-e_w_i)^2/β l_w_i^2-(y-e_h_i)^2/β l_h_i^2)
Where β is the hyper-parameter to regulate the bandwidth, (x, y) is the spatial parameter of weight map W. It provides high attention to spatial locations closer to the center and low attention to spatial locations away from the center.
A_i= Softmax(q_ik_i^T/√(d) + log W)v_i
Here, A_i is the co-attention map. The difference between the co-attention module in DETR and this co-attention module is the addition of the logarithm of the spatial map W. The decoder attention network pays more attention near the predicted box regions, which limits the search locations and thus makes the network converge faster.
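A sketch of how such a Gaussian-like weight map can be generated per query and injected into the cross-attention logits is shown below in PyTorch; a single head and a flattened H×W feature map are assumed, which simplifies the actual SMCA formulation.

import torch

def smca_log_weight_map(centers, scales, H, W, beta=1.0):
    """Log of the Gaussian-like spatial weight map W(x, y) for each object query.
    centers: [Nq, 2] unnormalized (cx, cy) in feature-map coordinates
    scales:  [Nq, 2] predicted widths (sw, sh)
    returns: [Nq, H*W] log-weights, to be added to the cross-attention logits."""
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    x, y = xs.reshape(1, -1), ys.reshape(1, -1)      # [1, H*W]
    cx, cy = centers[:, 0:1], centers[:, 1:2]        # [Nq, 1]
    sw, sh = scales[:, 0:1], scales[:, 1:2]
    return -((x - cx) ** 2 / (beta * sw ** 2) + (y - cy) ** 2 / (beta * sh ** 2))

logW = smca_log_weight_map(torch.tensor([[16.0, 16.0]]), torch.tensor([[4.0, 4.0]]), H=32, W=32)
# attention = softmax(q k^T / sqrt(d) + logW) v, as in the modulated co-attention above
print(logW.shape)    # torch.Size([1, 1024])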
§.§ TSP-DETR
TSP-DETR <cit.> deals with the cross-attention and the instability of bipartite matching to overcome the slow training convergence of DETR. TSP-DETR proposes two modules based on an encoder network with feature pyramid networks (FPN) <cit.> to accelerate the training convergence of DETR. In Figure <ref>, the bottom left block indicates TSP-DETR. These modules are TSP-FCOS and TSP-RCNN, which use the classical one-stage detector FCOS <cit.> and the classical two-stage detector Faster R-CNN <cit.>, respectively. TSP-FCOS uses a new Feature of Interest (FoI) module to handle the multi-level features in the transformer encoder. Both modules use the bipartite matching mechanism to accelerate the training convergence.
TSP-FCOS: The TSP-FCOS module follows FCOS <cit.> in the design of the backbone and FPN <cit.>. First, the features extracted by the CNN backbone from the input image are fed to the FPN component to produce multi-level features. Two feature extraction heads, a classification head and an auxiliary head, use four convolutional layers with group normalization <cit.> and are shared across the feature pyramid stages. The FoI classifier then filters the concatenated output of these heads to select top-scored features. Finally, the transformer encoder network takes these FoIs and their positional encodings as input and outputs class labels and bounding boxes.
TSP-RCNN: Like TSP-FCOS, this module extracts features with the CNN backbone and produces multi-level features with the FPN component. In place of the two feature extraction heads used in TSP-FCOS, the TSP-RCNN module follows the design of Faster R-CNN <cit.>. It uses a Region Proposal Network (RPN) to find Regions of Interest (RoIs) that are refined further. Each RoI in this module has an objectness score as well as a predicted bounding box. RoIAlign <cit.> is applied to the multi-level feature maps to extract the RoI information. After passing through a fully connected network, these extracted features are fed to the transformer encoder as input. The positional information of these RoI proposals consists of the four values (c_nx, c_ny, w_n, h_n), where (c_nx, c_ny) ∈ [0, 1]^2 is the normalized center and (w_n, h_n) ∈ [0, 1]^2 are the normalized height and width. Finally, the transformer encoder network takes these RoIs and their positional encodings as input for accurate predictions.
The FCOS and RCNN modules in TSP-DETR accelerate the training convergence and improve the performance of the DETR network.
§.§ Conditional-DETR
The cross-attention module in the DETR network needs high-quality content embeddings to predict accurate bounding boxes and class labels, and this requirement increases the training convergence difficulty. Conditional-DETR <cit.> presents a conditional cross-attention mechanism to solve the training convergence issue of DETR. It differs from the original DETR in the input keys k_i and input queries q_i of the cross-attention. In Figure <ref>, the bottom right box represents Conditional-DETR. The conditional queries are obtained from 2D coordinates along with the embedding output of the previous decoder layer. The candidate box predicted from a decoder embedding is as follows:
box= sig(FFN(e) + [r^T 0 0 ]^T)
Here, e is the input embedding fed to the decoder. The box is a 4D vector [box_cx box_cy box_w box_h] with center (box_cx, box_cy), width box_w, and height box_h. The sig() function normalizes the predictions to the range 0 to 1, and FFN() predicts the unnormalized box. r is the unnormalized 2D coordinate of the reference point, padded with zeros in the equation above; in the original DETR this reference point is simply (0, 0). This work either learns the reference point r for each box or generates it from the respective object query. It learns the queries for multi-head cross-attention from the input embeddings of the decoder. This spatial query makes each cross-attention head consider an explicit region, which helps to localize the distinct regions for class labels and bounding boxes by narrowing down the spatial range.
§.§ WB-DETR
DETR extracts local features with a CNN backbone and obtains global context through the encoder-decoder network of the transformer. WB-DETR <cit.> argues that a CNN backbone for feature extraction is not compulsory in detection transformers. It is a transformer network without a backbone: the input image is serialized, and the local features of each independent token are fed directly to the encoder as input. The transformer self-attention network provides global information, which accurately captures the contexts between input image tokens. However, the local features within each token and the information between adjacent tokens still need to be modeled, since the transformer lacks the ability for local feature modeling. The LIE-T2T (Local Information Enhancement-T2T) module solves this issue by reorganizing and unfolding adjacent patches and attending to each patch's channel dimension after unfolding. In Figure <ref>, the top right block denotes the LIE-T2T module of WB-DETR. The iterative process of the LIE-T2T module is as follows:
P = stretch (reshape (Pi))
Q = sig(e_2 · ReLU (e_1 · P ))
P_i+1 = e_3 · (P · Q)
Where the reshape function reorganizes (l_1 × c_1) patches into (h_i × w_i × c_i) feature maps, and stretch denotes unfolding the (h_i × w_i × c_i) feature maps into (l_2 × c_2) patches. Here, e_1, e_2, and e_3 are the parameters of fully connected layers, ReLU is the nonlinear mapping function, and sig generates the final attention. The channel attention in this module provides local information, since the relationship between the channels of the patches corresponds to the spatial relation between the pixels of the feature maps.
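The following sketch illustrates one LIE-T2T step along the lines of the equations above: tokens are reshaped into a feature map, adjacent patches are re-unfolded ("stretch"), and a channel attention built from e_1 and e_2 re-weights each unfolded patch before the projection e_3. Kernel size, stride, and layer widths are our own assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class LIET2T(nn.Module):
    """Illustrative sketch of one Local Information Enhancement T2T step."""
    def __init__(self, c_in, h, w, kernel=3, stride=2, c_out=256, reduction=4):
        super().__init__()
        self.h, self.w = h, w
        self.unfold = nn.Unfold(kernel_size=kernel, stride=stride, padding=1)
        c_unf = c_in * kernel * kernel
        self.e1 = nn.Linear(c_unf, c_unf // reduction)
        self.e2 = nn.Linear(c_unf // reduction, c_unf)
        self.e3 = nn.Linear(c_unf, c_out)

    def forward(self, tokens):                    # tokens: (B, L1, c_in) with L1 = h*w
        B, L1, c = tokens.shape
        fmap = tokens.transpose(1, 2).reshape(B, c, self.h, self.w)   # reshape
        P = self.unfold(fmap).transpose(1, 2)     # stretch: (B, L2, c*k*k)
        Q = torch.sigmoid(self.e2(torch.relu(self.e1(P))))            # channel attention
        return self.e3(P * Q)                     # (B, L2, c_out)

x = torch.randn(2, 16 * 16, 64)
print(LIET2T(c_in=64, h=16, w=16)(x).shape)      # torch.Size([2, 64, 256])
```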
§.§ PnP-DETR
The transformer processes the image feature maps, flattened into a one-dimensional feature sequence, to produce the final results. Although effective, using the full feature map is expensive because of wasted computation on background regions. PnP-DETR <cit.> proposes a poll-and-pool (PnP) sampling module to reduce spatial redundancy and make the transformer network computationally more efficient. This module divides the image feature map into contextual background features and fine foreground object features. The transformer network then uses these updated feature maps and translates them into the final detection results. In Figure <ref>, the bottom left block indicates PnP-DETR. The PnP sampling module includes two types of samplers, a poll sampler and a pool sampler, as explained below.
Poll Sampler: The poll sampler provides fine feature vectors V_f. A meta-scoring module is used to find the informational value for every spatial location (x, y):
a_xy = ScoreNet(v_xy , θ s)
The score value is directly related to the information of feature vector v_xy. These score values are sorted as follows:
[a_z | z = 1, . . . , Z], ℵ = Sort({a_xy})
Where Z = h_iw_i and ℵ is the sorting order. The top N_s-scoring vectors are selected to get fine features:
V_f = [v_z , |z = 1, . . . , N_s ]
Here, the predicted informative value is considered as a modulating factor to sample the fine feature vectors:
V_f = [v_z × a_z , |z = 1, . . . , N_s ]
To make the learning stable, the feature vectors are normalized:
V_f = [L_norm(v_z) × a_z, |z = 1, . . . , N_s ]
Here, L_norm is the layer normalization, N_s = α Z, where α is the poll ratio factor. This sampling module reduces the training computation.
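As an illustration of the poll sampler, the sketch below scores every spatial location, keeps the top αZ fine feature vectors, and modulates them by their informativeness score after layer normalization. The scoring network's layer sizes are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class PollSampler(nn.Module):
    """Sketch of the PnP-DETR poll sampler (illustrative)."""
    def __init__(self, dim, alpha=0.33):
        super().__init__()
        self.score_net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.norm = nn.LayerNorm(dim)
        self.alpha = alpha

    def forward(self, feats):                      # feats: (B, Z, dim), Z = h*w
        B, Z, _ = feats.shape
        n_s = max(1, int(self.alpha * Z))
        scores = self.score_net(feats).squeeze(-1)           # (B, Z) informativeness a_xy
        top_scores, idx = scores.topk(n_s, dim=1)            # sort / select top N_s
        fine = torch.gather(feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.size(-1)))
        fine = self.norm(fine) * top_scores.unsqueeze(-1)    # L_norm(v) * a, stabilizes learning
        return fine, idx                                     # fine foreground features V_f

f, i = PollSampler(dim=256)(torch.randn(2, 32 * 32, 256))
print(f.shape)                                               # torch.Size([2, 337, 256])
```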
Pool Sampler: While the poll sampler obtains the fine features of foreground objects, the pool sampler compresses the remaining feature vectors of the background region to provide contextual information. It performs weighted pooling to obtain a small number M_b of background features, motivated by the double attention operation <cit.> and bilinear pooling <cit.>. The remaining feature vectors of the background region are:
V_b = V \ V_f = {v_b | b = 1, . . . , Z - N_s}
The aggregation weights a_b ∈ R^M_b are obtained by projecting the features with the weight matrix w^s ∈ R^c_i × M_b as:
a_b = v_b w^s
The projected features with learnable weight w^p ∈R^c_i × c_i are obtained as follows:
v_b = v_b w^p
The aggregated weights are normalized over the non-sampled regions with Softmax as follows:
a_bm = e^a_bm / ∑_b=1^Z-N_s e^a_bm
By using the normalized aggregation weight, the new feature vector is obtained that provides information of non-sampled regions:
v_m = ∑_b=1^Z-N_s v_b × a_bm
Collecting all M_b pooled vectors, the coarse background contextual features are as follows:
V_c = {v_m | m = 1, . . . , M_b}
The pool sampler provides context information at different scales using aggregation weights. Here, some feature vectors may provide local context while others may capture global context.
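A compact sketch of this weighted pooling is given below: the non-sampled background features are projected, M_b aggregation weights are normalized over locations with a softmax, and the weighted sum yields the coarse context vectors. The layer widths and M_b value are our own illustrative choices.

```python
import torch
import torch.nn as nn

class PoolSampler(nn.Module):
    """Sketch of the PnP-DETR pool sampler (illustrative)."""
    def __init__(self, dim=256, m_b=60):
        super().__init__()
        self.w_s = nn.Linear(dim, m_b, bias=False)   # aggregation weights a_b
        self.w_p = nn.Linear(dim, dim, bias=False)   # feature projection w^p

    def forward(self, v_b):                          # v_b: (B, Z - N_s, dim)
        a = self.w_s(v_b).softmax(dim=1)             # normalize over non-sampled locations
        v = self.w_p(v_b)
        return torch.einsum("bnm,bnd->bmd", a, v)    # (B, M_b, dim) coarse context V_c

vc = PoolSampler()(torch.randn(2, 700, 256))
print(vc.shape)                                      # torch.Size([2, 60, 256])
```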
§.§ Dynamic-DETR
Dynamic-DETR <cit.> introduces dynamic attention into the encoder-decoder network of DETR to address slow training convergence and the detection of small objects. First, a convolutional dynamic encoder adds several types of attention to the self-attention module of the encoder network to make training converge faster. The attention of this encoder depends on various factors such as spatial effects, scale effects, and the input feature dimensions. Second, the cross-attention in the decoder network is replaced with RoI-based dynamic attention. This decoder helps to focus on small objects, reduces the learning difficulty, and makes the network converge faster. In Figure <ref>, the bottom right box represents Dynamic-DETR. This dynamic encoder-decoder network is explained in detail as follows.
Dynamic Encoder: Dynamic-DETR uses a convolutional approach for the self-attention module. Given the feature vectors F = {F_1, ··· , F_n}, where the n = 5 levels come from the feature pyramid, the multi-scale self-attention (MSA) is as follows:
Attn = MSA (F).F
However, this is not directly possible because the FPN produces feature maps of different scales. The feature maps of different scales are therefore equalized within neighbouring levels using 2D convolution, as in Pyramid Convolution <cit.>. It focuses on the spatial locations of the un-resized mid-level layer and transfers information to its scaled neighbours. Moreover, SE <cit.> is applied to combine the features and provide scale attention.
Dynamic Decoder: The dynamic decoder uses mixed attention blocks in place of multi-head layers to ease the learning in the cross-attention network and improve the detection of small objects. It also uses dynamic convolution instead of a cross-attention layer, inspired by ConvBERT <cit.> in natural language processing (NLP). First, RoI pooling <cit.> is introduced in the decoder network. Then the position embeddings are replaced with box encodings BE ∈ R^p × 4 at the image size. The output of the dynamic encoder, along with the box encoding BE, is fed to the dynamic decoder to pool image features R ∈ R^p × s × s × c_i from the feature pyramid as follows:
R = RoI_pool(F_encoder, BE, s)
where s is the pooling size and c_i is the number of channels of F_encoder. To feed this into the cross-attention module, input embeddings qe ∈ R^p × c_i are required for the object queries.
These embeddings are passed through the Multi-Head self Attention (MHSAttn) layer as:
qe^∗ = MHSAttn(qe, qe, qe)
Then these query embeddings are passed through a fully-connected layer (dynamic filters) as follows:
Filter^qe = FC(qe^∗)
Finally, cross-attention between features and object queries is performed with 1 × 1 convolution using dynamic filters Filter^qe:
qe^F= Con_1× 1(F, Filter^qe)
These features are passed through the FFN layers to provide the various predictions: updated object embeddings, updated box encodings, and the object class. This process eases the learning of the cross-attention module by first focusing on sparse areas and then spreading to global regions.
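The sketch below illustrates the decoder step around the last three equations: query embeddings pass through self-attention, generate dynamic filters, and those filters act as a 1 × 1 convolution (here expressed as a per-query matrix product) on the pooled RoI features. This is our reading of the mechanism, with illustrative sizes, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DynamicDecoderMixing(nn.Module):
    """Illustrative sketch of dynamic-filter cross-attention in the decoder."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_filter = nn.Linear(dim, dim * dim)           # FC producing Filter^qe

    def forward(self, qe, roi_feats):                        # qe: (B,P,d); roi: (B,P,s*s,d)
        qe_star, _ = self.self_attn(qe, qe, qe)              # MHSAttn(qe, qe, qe)
        filt = self.to_filter(qe_star).view(*qe.shape[:2], qe.size(-1), qe.size(-1))
        # 1x1 convolution with the dynamic filter, written as a matrix product per query
        return torch.einsum("bpsd,bpde->bpse", roi_feats, filt)

m = DynamicDecoderMixing()
out = m(torch.randn(2, 10, 256), torch.randn(2, 10, 49, 256))
print(out.shape)                                             # torch.Size([2, 10, 49, 256])
```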
§.§ YOLOS-DETR
Vision Transformer (ViT) <cit.>, inherited from NLP, performs well on the image recognition task. ViT-FRCNN <cit.> uses a pre-trained ViT backbone for a CNN-based detector. It utilizes convolutional neural networks and relies on strong 2D inductive biases and region-wise pooling operations for object-level perception. Other related works, such as DETR <cit.>, introduce 2D inductive bias using CNNs and pyramidal features. YOLOS-DETR <cit.> demonstrates the transferability and versatility of the transformer from image recognition to detection in a pure sequence manner, using the least information about the spatial layout of the input. It closely follows the ViT architecture with two simple modifications. First, it removes the image-classification token [CLS] and appends one hundred randomly initialized detection tokens [DET], as in <cit.>, to the input patch embeddings for object detection. Second, similar to DETR, a bipartite matching loss is used instead of the ViT classification loss. The transformer encoder takes the generated sequence as input as follows:
s_0 = [I_p^1 L; ···; I_p^n_i L; I_d^1; ··· ; I_d^100] + PE
Where I ∈ R^h_i × w_i × c_i is the input image, reshaped into 2D tokens I_p ∈ R^n_i × (r^2 · c_i). Here, h_i is the height and w_i the width of the input image, and c_i is the number of channels. (r, r) is the resolution of each token, and n_i = h_i w_i / r^2 is the total number of tokens. These tokens are mapped to D_i dimensions with a linear projection L ∈ R^(r^2 · c_i) × D_i, giving I_p L. The encoder also takes one hundred randomly initialized learnable tokens I_d ∈ R^100 × D_i. To keep positional information, positional embeddings PE ∈ R^(n_i+100) × D_i are added. The transformer encoder contains a multi-head self-attention mechanism and an MLP block with the GELU <cit.> non-linear activation function. Layer Normalization (LN) <cit.> is applied before each self-attention and MLP block as follows:
s_n^' = MHSAttn(LN (s_n-1)) + s_n-1
s_n = MLP (LN (s_n^')) + s_n^'
Where s_n^' and s_n denote the intermediate and output token sequences of encoder layer n. In Figure <ref>, the top right block indicates YOLOS-DETR.
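A minimal sketch of the YOLOS input construction is shown below: the image is patchified, the patches are linearly projected, one hundred learnable [DET] tokens are appended, and positional embeddings are added. The ViT encoder stack itself (pre-LN MHSA and MLP blocks) is omitted; image size, patch size, and embedding dimension are assumptions.

```python
import torch
import torch.nn as nn

class YOLOSTokens(nn.Module):
    """Sketch of the YOLOS-style input sequence s_0 (illustrative)."""
    def __init__(self, img=224, patch=16, c=3, dim=768, num_det=100):
        super().__init__()
        self.patch = patch
        n = (img // patch) ** 2
        self.proj = nn.Linear(patch * patch * c, dim)                  # linear projection L
        self.det_tokens = nn.Parameter(torch.randn(1, num_det, dim))   # [DET] tokens
        self.pos = nn.Parameter(torch.randn(1, n + num_det, dim))      # PE

    def forward(self, x):                                              # x: (B, 3, H, W)
        B, C, H, W = x.shape
        p = self.patch
        patches = (x.reshape(B, C, H // p, p, W // p, p)
                     .permute(0, 2, 4, 1, 3, 5)
                     .reshape(B, -1, C * p * p))                       # (B, n, c*p*p)
        s0 = torch.cat([self.proj(patches),
                        self.det_tokens.expand(B, -1, -1)], dim=1) + self.pos
        return s0                                                      # input sequence s_0

print(YOLOSTokens()(torch.randn(2, 3, 224, 224)).shape)                # torch.Size([2, 296, 768])
```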
§.§ Anchor-DETR
DETR uses learnable embeddings as object queries in the decoder network. These input embeddings have no clear physical meaning and cannot indicate where to focus, so the network is hard to optimize because the object queries do not concentrate on specific regions. Anchor-DETR <cit.> solves this issue by designing object queries as anchor points, which are extensively used in CNN-based object detectors. This query design can provide multiple object predictions in one region. Moreover, a few modifications to the attention are proposed that reduce the memory cost and improve performance. In Figure <ref>, the bottom left block shows Anchor-DETR. The two main contributions of Anchor-DETR, the query design and the attention variant, are explained as follows:
Row and Column Decoupled Attention: DETR requires huge GPU memory, as noted in <cit.>, because of the complexity of the cross-attention module, which is more complex than the self-attention module in the decoder. Although Deformable-DETR reduces the memory cost, it still causes random memory access, making the network slower. Row-Column Decoupled Attention (RCDA), shown in the bottom left block of Figure <ref>, reduces memory usage while providing similar or better efficiency.
Anchor Points as Object Queries: CNN-based object detectors treat anchor points as relative positions on the input feature maps. In contrast, transformer-based detectors take uniform grid locations, hand-crafted locations, or learned locations as anchor points. Anchor-DETR considers two types of anchor points: learned anchor locations and grid anchor locations. The grid anchor locations are the grid points of the input image. The learned anchor locations are randomly initialized from a uniform distribution on [0, 1] and updated with learned parameters.
§.§ Sparse-DETR
Sparse-DETR <cit.> filters the encoder tokens with a learnable cross-attention map predictor. After distinguishing these tokens, the decoder network focuses only on foreground tokens to reduce computational cost.
Sparse-DETR introduces a scoring module and auxiliary heads in the encoder, and a top-k query selection module for the decoder. In Figure <ref>, the bottom right box represents Sparse-DETR. First, it determines the saliency of the tokens fed to the encoder using the scoring network, which selects the top ρ% of tokens. Second, the aux-head takes the top-k tokens from the output of the encoder network. Finally, the top-k tokens are used as the decoder object queries. The salient token prediction module refines the encoder tokens taken from the backbone feature map using the keep ratio ρ and updates the features x_l-1 as:
x_l^m =
  x_l-1^m,                        if m ∉ Ω_r^q
  LN(FFN(y_l^m) + y_l^m),         if m ∈ Ω_r^q
where y_l^m = LN(DeformAttn(x_l-1^m, x_l-1) + x_l-1^m)
Where DeformAttn is the deformable attention, FFN is the Feed-Forward Network, and LN is the Layer-Normalization. Then, the Decoder Cross-Attention Map (DAM) accumulates the attention weights of decoder object queries, and the network is trained by minimizing loss between prediction and binarized DAM as follows:
ℒ_dam = -1/M∑_k=1^M BCELoss(sn(x_f),DAM_k^b)
Where BCELoss is the binary cross-entropy (BCE) loss, DAM_k^b is the k-th binarized DAM value of the encoder tokens, and sn is the scoring network. In this way, Sparse-DETR reduces the computation by eliminating a significant fraction of the encoder tokens.
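The sketch below illustrates the two ingredients just described: a small scoring network selects the top ρ fraction of backbone tokens for encoder refinement, and a BCE loss ties the predicted saliency to the binarized decoder cross-attention map. The scoring network is an assumed stand-in, and we use the logits form of BCE for numerical stability, which differs slightly from the plain BCELoss written above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sparse_token_mask(tokens, scoring_net, rho=0.3):
    """Score tokens and keep the top rho fraction (Omega_r^q in the text)."""
    scores = scoring_net(tokens).squeeze(-1)              # (B, Z) saliency
    k = max(1, int(rho * tokens.size(1)))
    topk_idx = scores.topk(k, dim=1).indices
    mask = torch.zeros_like(scores, dtype=torch.bool).scatter_(1, topk_idx, True)
    return scores, mask

def dam_loss(scores, dam_binary):
    """BCE between predicted saliency and the binarized decoder cross-attention map."""
    return F.binary_cross_entropy_with_logits(scores, dam_binary.float())

scoring = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 1))
tok = torch.randn(2, 1024, 256)
s, m = sparse_token_mask(tok, scoring, rho=0.3)
loss = dam_loss(s, torch.rand(2, 1024) > 0.9)
```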
§.§ D^2ETR
Much work <cit.> has modified the cross-attention module to make training converge faster, and many researchers <cit.> have used multi-scale feature maps to improve performance on small objects. However, the high computational complexity has remained largely unaddressed. D^2ETR <cit.> achieves better performance at a lower computational cost. Without an encoder module, the decoder directly uses fine-fused feature maps provided by the backbone through a novel cross-scale attention module. D^2ETR contains two main modules, a backbone and a decoder. The backbone, based on the Pyramid Vision Transformer (PVT), consists of two parallel streams, one for cross-scale interaction and another for intra-scale interaction. It contains four transformer levels that provide multi-scale feature maps; all levels have the same architecture, depending on the basic block of the selected transformer. The backbone also contains three fusing levels in parallel with the four transformer levels, which provide a cross-scale fusion of the input features. The i-th fusing level is shown in the top right block of Figure <ref>. The cross-scale attention is formulated as follows:
f_j = L_j (f_j-1)
f_j^∗ = SA (f_q, f_k, f_v)
f_q = f_j, f_k= f_v = [f_1^∗, f_2^∗, ... , f_j-1^∗, f_j]
where f_j^∗ is the fused form of the feature map f_j. Given L feature levels, the final results of cross-scale attention are f_1^∗, f_2^∗, ..., f_L^∗, and the fine-fused last-level map is fed as input to the decoder together with the object queries. The decoder provides output embeddings that are independently transformed into class labels and box coordinates by a feed-forward network. By discarding the encoder and attending directly to the fine-fused backbone features through the novel cross-scale attention module, D^2ETR obtains better performance at a lower computational cost.
§.§ FP-DETR
Modern CNN-based detectors like YOLO <cit.> and Faster-RCNN <cit.> utilize specialized layers on top of backbones pre-trained on ImageNet to enjoy pre-training benefits such as improved performance and faster training convergence. DETR network and its improved version <cit.> only pre-train its backbone while training both encoder and decoder layers from scratch. Thus, the transformer needs massive training data for fine-tuning. The main reason for not pre-training the detection transformer is the difference between the pre-training and final detection tasks. Firstly, the decoder module of the transformer takes multiple object queries as input for detecting objects, while ImageNet classification takes only a single query (class token). Secondly, the self-attention module and the projections on input query embeddings in the cross-attention module easily overfit a single class query, making the decoder network difficult to pre-train. Moreover, the downstream detection task focuses on classification and localization, while the upstream task considers only classification for the objects of interest.
FP-DETR <cit.> reformulates the pre-training and fine-tuning stages for detection transformers. In Figure <ref>, the bottom left block indicates FP-DETR. It takes only the encoder network of the detection transformer for pre-training as it is challenging to pre-train the decoder on the ImageNet classification task. Moreover, DETR uses both the encoder and CNN backbone as feature extractors. FP-DETR replaces the CNN backbone with a multi-scale tokenizer and uses the encoder network to extract features. It fully pre-trains the Deformable-DETR on the ImageNet dataset and fine-tunes it for final detection that achieves competitive performance.
§.§ CF-DETR
CF-DETR <cit.> observes that, under the COCO-style Average Precision (AP) metric, detection transformers outperform CNN-based detectors on small objects at low IoU threshold values. It refines the predicted locations by utilizing local information, since incorrect bounding box locations reduce performance on small objects. CF-DETR introduces a Transformer Enhanced FPN (TEF) module and coarse and fine layers in the decoder network of DETR. In Figure <ref>, the bottom right box represents CF-DETR. The TEF module provides the same functionality as an FPN, combining non-local features (e.g., E4) extracted from the backbone with the E5 features taken from the encoder output. The features of the TEF module and the encoder network are fed to the decoder as input. The decoder introduces a coarse block and a fine block. The coarse block selects foreground features from the global context. The fine block has two modules, Adaptive Scale Fusion (ASF) and Local Cross-Attention (LCA), which further refine the coarse boxes. In short, these modules refine and enrich the features by fusing local and global information to improve detection transformer performance.
§.§ DAB-DETR
DAB-DETR <cit.> uses bounding box coordinates as object queries in the decoder and gradually updates them in every layer. In Figure <ref>, the top right block indicates DAB-DETR. These box coordinates make training convergence faster by providing positional information, and their height and width values are used to modulate the positional attention map. This type of object query provides a better spatial prior for the attention mechanism and a simple query formulation mechanism.
The decoder contains two main networks: a self-attention network to update the queries and a cross-attention network for feature probing. The difference between the self-attention of the original DETR and DAB-DETR is that the query and key matrices also carry positional information derived from the bounding-box coordinates. The cross-attention module concatenates the position and content information in the key and query matrices and determines their corresponding heads. The decoder takes input embeddings as content queries and anchor boxes as positional queries to find the object probabilities related to the anchors and content queries. In this way, using dynamic box coordinates as object queries provides better predictions, makes training convergence faster, and improves detection results for small objects.
§.§ DN-DETR
DN-DETR <cit.> uses noised object queries as an additional decoder input to reduce the instability of the bipartite-matching mechanism in DETR, which causes the slow convergence problem. In Figure <ref>, the bottom left block indicates DN-DETR. The decoder queries have two parts: the denoising part containing noised ground-truth box-label pairs as input and the matching part containing learnable anchors as input. The matching part M = {M_0, M_1, ..., M_l-1} determines the resemblance between the ground-truth label pairs and the decoder output, while the denoising part d = { d_0, d_1, ..., d_k-1} attempts to reconstruct the ground-truth objects as:
Output = Decoder(d,M,I|A)
Where I denotes the image features taken as input from the transformer encoder, and A is the attention mask that stops information transfer between the matching and denoising parts and among different noised versions of the same ground-truth objects. The decoder receives several noised versions of the ground-truth objects, where noise is added to the bounding boxes and to the class labels (e.g., label flipping), with a hyper-parameter λ controlling the noise level. The training architecture of DN-DETR is based on DAB-DETR, as it also takes bounding box coordinates as object queries; the only difference between the two architectures is a class label indicator as an additional decoder input to assist label denoising. The bounding boxes are updated inconsistently in DAB-DETR, which makes relative offset learning challenging; the denoising training mechanism in DN-DETR improves performance and training convergence.
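As an illustration of the denoising part, the sketch below builds noised ground-truth queries: boxes are jittered by a fraction λ of their size and a fraction of the class labels is randomly flipped. This is our simplified reading of the noising scheme; the exact noise distribution, clamping, and label-flipping policy in DN-DETR may differ.

```python
import torch

def make_denoising_queries(gt_boxes, gt_labels, num_classes, lam=0.4, flip_p=0.2):
    """Simplified DN-style query noising. Boxes are (cx, cy, w, h) in [0, 1]."""
    noise = (torch.rand_like(gt_boxes) * 2 - 1) * lam          # uniform in [-lam, lam]
    noised_boxes = gt_boxes.clone()
    noised_boxes[:, :2] += noise[:, :2] * gt_boxes[:, 2:]      # shift center by a box fraction
    noised_boxes[:, 2:] *= (1 + noise[:, 2:])                  # scale width/height
    noised_boxes = noised_boxes.clamp(1e-4, 1.0)

    flip = torch.rand(gt_labels.shape) < flip_p                # label flipping
    rand_labels = torch.randint_like(gt_labels, num_classes)
    noised_labels = torch.where(flip, rand_labels, gt_labels)
    return noised_boxes, noised_labels

boxes = torch.tensor([[0.5, 0.5, 0.2, 0.3], [0.3, 0.7, 0.1, 0.1]])
labels = torch.tensor([3, 7])
nb, nl = make_denoising_queries(boxes, labels, num_classes=80)
```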
§.§ AdaMixer
AdaMixer <cit.> regards the encoder as an extra network between the backbone and the decoder that limits performance and slows training convergence because of its design complexity. AdaMixer therefore provides a detection transformer without an encoder. In Figure <ref>, the bottom right box represents AdaMixer. The main modules of AdaMixer are explained as follows.
3D feature space: To build the 3D feature space, the input feature map from the CNN backbone with downsampling stride s_i^f is first transformed by a linear layer to the same channel dimension d_f, and the coordinate of its z-axis is computed as follows:
z^f_i = log_2(s^f_i / s_b).
Where the height h_i and width w_i of the feature maps (with different strides) are rescaled to h_i/s_b and w_i/s_b, with s_b = 4.
3D feature sampling process: In the sampling process, each query generates I_p groups of offset vectors {(Δ x_j, Δ y_j, Δ z_j)}_j=1^I_p, which depend on the content vector q_i through a linear layer L_i as follows:
{(Δ x_j, Δ y_j, Δ z_j)}_j=1^I_p = L_i(q_i).
These offset values are converted into sampling positions with respect to the position vector of the object query as follows:
x̃_j = x + Δ x_j · 2^(z-r),
ỹ_j = y + Δ y_j · 2^(z+r),
z̃_j = z + Δ z_j,
The interpolation over the 3D feature space first samples by bilinear interpolation in the (x_i, y_i) space and then interpolates along the z-axis by Gaussian weighting, with the weight for the i-th feature map given as follows:
w̃_i = exp(-(z̃-z_i^f)^2 / Γ_z)/∑_i exp(-(z̃-z_i^f)^2 / Γ_z)
where Γ_z is the softening coefficient for interpolating values over the z-axis (Γ_z = 2). This process makes decoder detection learning easier by sampling features according to the query.
AdaMixer Decoder: The decoder module in AdaMixer takes a content vector q_i and positional vector (x_i, y_i, z_i, r_i) as input object queries.
The position-aware multi-head self-attention is applied between these queries as follows.
Attn(q_i, k_i, v_i )= Softmax(q_ik_i^T/√(d) + α X).v_i
Where X_kl = log(|box_k ∩ box_l| / |box_k| + ϵ) with ϵ = 10^-7. X_kl = 0 indicates that box_k lies inside box_l, while a large negative value (≈ log ϵ) indicates no overlap between box_k and box_l. This position vector is updated at every stage of the decoder network. The AdaMixer decoder module takes a content vector and a positional vector as input object queries. For this, the multi-scale features taken from the CNN backbone are converted into the 3D feature space, so that the decoder can account for positions in the (x_i, y_i) space as well as the scales of detected objects. It takes the sampled features from this feature space as input and applies the adaptive mixing mechanism to provide the final predictions for the input queries, without an encoder network, thereby reducing the computational complexity of detection transformers.
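The Gaussian weighting over the z-axis described above is easy to make concrete; the sketch below assigns each sampled z̃ coordinate a normalized weight per pyramid level. Names and the example level values are ours, with Γ_z denoted tau_z.

```python
import torch

def z_interp_weights(z_tilde, z_levels, tau_z=2.0):
    """Normalized Gaussian weights over pyramid levels for sampled z coordinates."""
    # z_tilde: (...,) sampled z coords; z_levels: (L,) z of each pyramid level
    d2 = (z_tilde.unsqueeze(-1) - z_levels) ** 2              # (..., L)
    w = torch.exp(-d2 / tau_z)
    return w / w.sum(dim=-1, keepdim=True)                    # weights sum to 1 per point

levels = torch.tensor([0.0, 1.0, 2.0, 3.0])                   # log2(stride / 4) for strides 4..32
w = z_interp_weights(torch.tensor([0.3, 1.7, 2.9]), levels)
print(w.sum(dim=-1))                                          # all ones
```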
§.§ REGO-DETR
REGO-DETR <cit.> proposes an RoI-based method for detection refinement to improve the attention mechanism in DETR. In Figure <ref>, the bottom left block denotes REGO-DETR. It contains two main modules: a multi-level recurrent mechanism and a glimpse-based decoder. In the multi-level recurrent mechanism, bounding boxes detected in the previous level are considered to get glimpse features. These are converted into refined attention using earlier attention in describing objects. The k-th processing level is as follows:
O_class (k)= DF_class(H_de(k))
O_bbox (k)= DF_bbox(H_de(k)) + O_bbox(k-1)
Where O_class∈R^ M_d× M_c and O_bbox∈R^ M_d×4. Here, M_d and M_c represent the total number of predicted objects and classes, respectively. DF_class and DF_bbox are functions that convert the input features into desired outputs. H_de(k) is the attention of this level after decoding as:
H_de(k)= [H_gm(k), H_de(k-1)]
Where H_gm(k) is the glimpse features according to H_de(k-1) and previous levels. These glimpse features are transformed using multi-head cross-attention into refined attention outputs according to previous attention outputs as:
H_gm(k)= Attn (V(k), H_de(k-1)),
For extracting the glimpse features V(k), the following operation is performed:
V(k) = FE_ext( X, RI(O_bbox(k-1), α (k))),
Where FE_ext is the feature extraction function, α(k) is a scalar parameter, and RI is the RoI computation. In this way, the Region-of-Interest (RoI) based refinement modules make the training convergence of the detection transformer faster and provide better performance.
§.§ DINO
DN-DETR adds positive noise to the anchors taken as object queries at the decoder input and assigns labels only to those anchors with nearby ground-truth objects. Following DAB-DETR and DN-DETR, DINO <cit.> proposes a mixed object query selection method for anchor initialization and a look-forward-twice mechanism for box prediction. It provides a Contrastive DeNoising (CDN) module, which takes positional queries as anchor boxes and adds an additional DN loss. In Figure <ref>, the bottom right block indicates DINO. The detector uses two hyperparameters λ_1 and λ_2 with λ_1 < λ_2. A bounding box b = (x_i, y_i, w_i, h_i) is taken as input to the decoder, and its corresponding generated anchor is denoted a = (x_i, y_i, w_i, h_i).
ATD(k) = 1/k ∑ M_K({∥ b_0 - a_0 ∥_1, ∥ b_1 - a_1 ∥_1, ..., ∥ b_N-1 - a_N-1 ∥_1}, k)
Where ∥ b_i - a_i ∥_1 is the distance between the anchor and the bounding box, and M_K(x, k) is the function that returns the top k elements of x. The λ parameters are the thresholds for generating noise for the anchors that are fed as object queries to the decoder. DINO generates two types of anchor queries: positive queries with noise below λ_1 and negative queries with noise between λ_1 and λ_2. In this way, anchors with no nearby ground truth are labeled as "no object". Thus, DINO makes training convergence faster and improves performance on small objects.
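For completeness, the Average Top-K Distance defined above is straightforward to compute; the sketch below takes matched (box, anchor) pairs and averages the k largest L1 distances. Inputs and names are illustrative.

```python
import torch

def atd(gt_boxes, anchors, k):
    """Average Top-K Distance between matched boxes and anchors, both (N, 4)."""
    dist = (gt_boxes - anchors).abs().sum(dim=-1)     # ||b_i - a_i||_1, shape (N,)
    topk = dist.topk(min(k, dist.numel())).values
    return topk.mean()

b = torch.rand(50, 4)
a = b + 0.05 * torch.randn(50, 4)
print(atd(b, a, k=10))
```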
§ DATASETS AND EVALUATION METRICS
It is important to compare modifications of detection transformers to understand their effect on network size, training convergence, and performance. This section presents quantitative comparisons of DETR improvements on the popular MS COCO benchmark <cit.>. A minival set of COCO2014 is used for evaluating the detection transformers. The results are reported using mean Average Precision (mAP) as the evaluation metric, where mAP is the mean of the Average Precision (AP) of each object category and AP is the area under the precision-recall curve <cit.>.
§ RESULTS AND DISCUSSION
Many advancements have been proposed for DETR, such as backbone modification, query design, and attention refinement, to improve performance and training convergence. Table <ref> shows the performance comparison of all DETR-based detection transformers on the COCO minival set. We can observe that the original DETR requires 500 training epochs to perform well and has low AP on small objects. The modified versions improve performance and training convergence; for example, DINO reaches an mAP of 49.0% at 12 epochs and performs well on small objects.
We perform a quantitative analysis of DETR and its updated versions regarding training convergence and model size on the COCO minival set. Part (a) of Figure <ref> shows the mAP of the detection transformers using a ResNet-50 backbone as a function of training epochs. The original DETR, represented by the brown line, has slow training convergence: it reaches an mAP of 35.3% at 50 training epochs and 44.9% at 500 training epochs. DINO, represented by the red line, converges in far fewer training epochs and gives the highest mAP at all epoch values. The attention mechanism in DETR computes pairwise attention scores between every pair of feature vectors, which can be computationally expensive, especially for large input images. Moreover, the self-attention mechanism in DETR relies on fixed positional encodings to encode the spatial relationships between the different parts of the input image, which can slow down the training process and increase convergence time. In contrast, Deformable-DETR and DINO include modifications that help speed up training. For example, Deformable-DETR introduces deformable attention layers, which better capture spatial context and improve object detection accuracy. Similarly, DINO uses a denoising learning approach that trains the network to learn more generalized features useful for object detection, making the training process faster and more effective.
Part (b) of Figure <ref> compares all detection transformers in terms of model size. YOLOS-DETR uses DeiT-small as the backbone instead of DeiT-Ti, which increases the model size by roughly 20×. DINO and REGO-DETR have comparable mAP, but REGO-DETR is nearly double the model size of DINO. These networks use more complex architectures than the original DETR, which increases the total number of parameters and the overall network size.
We also provide a qualitative analysis of DETR and its updated versions on objects of all sizes in Figure <ref>. For small objects, the mAP of the original DETR is 15.2% at 50 epochs, while Deformable-DETR reaches 26.4% at 50 epochs. The attention mechanism in Deformable-DETR interpolates features from neighboring pixels, which is particularly useful for small objects that may occupy only a few pixels in an image. This mechanism captures more precise and detailed information about small objects, which leads to better performance than DETR.
§ OPEN CHALLENGES & FUTURE DIRECTIONS
Detection Transformers have shown promising results on various object detection benchmarks. There are still some open challenges and future directions for improving it. Table <ref> provides the advantages and limitations of all proposed improved versions of DETR.
Here are some open challenges and future directions for improvements in DETR:
Improve attention mechanisms: The performance of detection transformers depends on the attention mechanism for capturing dependencies between different spatial locations in an image. So far, about 60% of the modifications target the attention mechanism of the detection transformer to improve performance and training convergence. Future research could focus on designing more refined attention mechanisms that better capture spatial information or incorporate task-specific constraints.
Adaptive and dynamic backbones: The backbone also affects network performance and size. Current detection transformers either remove the backbone or use a fixed backbone architecture for all images; only about 10% of the modifications to DETR target the backbone to improve performance and reduce network size. Future research could explore dynamic backbone architectures that adjust their complexity based on the characteristics of the input image. Modifying the backbone is therefore likely to lead to even more impressive results.
Improve the quantity and quality of object queries: The number of object queries fed to the decoder in DETR is typically fixed during training and inference, whereas the number and size of objects in an image can vary. It has been observed in networks such as DAB-DETR, DN-DETR, and DINO that modifying the quantity or quality of object queries can significantly affect detection transformer performance: DAB-DETR uses dynamic anchor boxes as object queries, DN-DETR adds positive noise to the object queries for denoising training, and DINO adds both positive and negative noise to the object queries for improved denoising training. Future models could adjust the number of object queries based on the content of the image, and researchers could include more dynamic and adaptive mechanisms to improve the quality of object queries.
§ CONCLUSION
Detection transformers have provided efficient and precise object detection networks and delivered insights into the operation of deep neural networks. This review gives a detailed overview of detection transformers, focusing on the latest advancements in DETR that improve performance and training convergence. The attention module in the encoder-decoder network is modified to improve training convergence, and the object queries fed to the decoder are updated to enhance performance on small objects. We cover the latest improvements in detection transformers, including backbone modification, query design, and attention refinement, and compare the advantages and limitations of detection transformers in terms of performance and architectural design. With its focus on object detection tasks, this review provides a unique view of recent advancements in DETR. We hope this study will increase researchers' interest in solving existing challenges toward applying transformer models in the object detection domain.
§ ACKNOWLEDGMENT
The work has been partially funded by the European project AIRISE under Grant Agreement ID 101092312.
TAHIRA SHEHZADI received her bachelor's degree in electrical engineering from the University of Engineering and Technology Lahore, Pakistan, and her M.S. degree in computer science from the Pakistan Institute of Engineering and Applied Sciences, Pakistan. She is pursuing a PhD with the German Research Center for Artificial Intelligence (DFKI GmbH) and the Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau under the supervision of Prof. Didier Stricker and Dr. Muhammad Zeshan Afzal. Her research interests include deep learning for computer vision, specifically in 3D reconstruction. She received two Gold Medals for the Best Student from FAZAIA, Pakistan, in 2014, secured a University Merit Scholarship for a Master's degree in 2018, and received the DAAD (Germany) PhD Fellowship in 2021.

KHURRAM AZEEM HASHMI received his bachelor's degree in Computer Science from the National University of Computer and Emerging Sciences, Pakistan, in 2016, and his M.S. degree from the Technical University of Kaiserslautern. He is currently a researcher at the German Research Center for Artificial Intelligence (DFKI) and pursuing a Ph.D. degree from RPTU Kaiserslautern-Landau under the supervision of Prof. Didier Stricker and Dr. Muhammad Zeshan Afzal. His research interests include self-supervised learning and instance-based representation learning in challenging conditions, such as in videos and dark environments. Alongside his research, he serves as a reviewer for major computer vision conferences and regularly reviews articles for journals, including IEEE Access, Springer Nature, Sensors, and Neurocomputing.

DIDIER STRICKER led the Department of Virtual and Augmented Reality, Fraunhofer Institute for Computer Graphics (Fraunhofer IGD), Darmstadt, Germany, from June 2002 to June 2008. In this function, he initiated and participated in many national and international projects in the areas of computer vision and virtual and augmented reality. He is currently a Professor with the University of Kaiserslautern and the Scientific Director of the German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, where he leads the Research Department of Augmented Vision. In 2006, he received the Innovation Prize from the German Society of Computer Science. He serves as a reviewer for different European and national research organizations and is a regular reviewer for the most important journals and conferences in the areas of VR/AR and computer vision.

MUHAMMAD ZESHAN AFZAL received the master's degree majoring in visual computing from Saarland University, Germany, in 2010, and the Ph.D. degree majoring in artificial intelligence from the Kaiserslautern University of Technology, Kaiserslautern, Germany, in 2016. He has worked both in industry (Deep Learning and AI Lead, Insiders Technologies GmbH) and academia (TU Kaiserslautern). At the application level, his experience includes a generic segmentation framework for natural images, human activity recognition, document and medical image analysis, scene text detection and recognition, and online and offline gesture recognition. He has a special interest in recurrent neural networks and transformers for sequence processing applied to images and videos. He has also worked with numerics for tensor-valued images. His research interests include deep learning for vision and language understanding. He is a member of IAPR. He received the Gold Medal for the Best Graduating Student in computer science from IUB, Pakistan, in 2002 and secured the DAAD (Germany) Fellowship in 2007.
Vista-Morph: Unsupervised Image Registration of Visible-Thermal Facial Pairs
Catherine Ordun, Edward Raff, Sanjay Purushotham
http://arxiv.org/abs/2306.06505v1
For a variety of biometric cross-spectral tasks, Visible-Thermal (VT) facial pairs are used. However, due to a lack of calibration in the lab, capture with two different sensors produces severely misaligned pairs, which in turn yields poor results for person re-identification and generative AI. To solve this problem, we introduce our approach for VT image registration called Vista Morph. Unlike existing VT facial registration methods that require manual, hand-crafted features for pixel matching and/or a supervised thermal reference, Vista Morph is completely unsupervised and needs no reference. By learning the affine matrix through a Vision Transformer (ViT)-based Spatial Transformer Network (STN) and Generative Adversarial Networks (GANs), Vista Morph successfully aligns facial and non-facial VT images. Our approach learns warps in Hard-, No-, and Low-light visual settings and is robust to geometric perturbations and erasure at test time. We conduct a downstream generative AI task to show that registering training data with Vista Morph improves the subject identity of generated thermal faces when performing V2T image translation.
§ INTRODUCTION
Multiple Visible-Thermal (VT) facial datasets are available for biometric tasks like emotion recognition, thermal face recognition, and person re-identification <cit.>. Unfortunately, misalignment is introduced at the time of data capture when the two sensors (a thermal and a visible camera) are positioned at different angles and distances. Given the increasing interest in generative AI, this inherent misalignment between cross-spectral faces can weaken image quality in generative tasks <cit.> such as Visible-to-Thermal (V2T) image translation, due to shift invariance <cit.>. Manual scaling, cropping, and alignment by hand are infeasible when dealing with thousands of images. Further, existing VT alignment methods rely on supervised feature matching <cit.>. As a result, to rapidly register VT faces on multiple VT facial datasets of varying scale and distortion, we offer Visible Thermal Facial Morph (Vista Morph). To our knowledge, our model is the first unsupervised approach to register VT faces, and it does not rely on feature matching or a target reference. Vista Morph combines two Generative Adversarial Networks (GANs) <cit.> and a Spatial Transformer Network (STN) <cit.> that, for the first time, uses a Vision Transformer (ViT) <cit.> as the localization network. This contrasts with similar cross-spectral/multi-modal works <cit.> that rely on a traditional CNN or U-NET <cit.> localization network for the STN. We select ViT because it applies self-attention across embedded image patches, keeping the spatial information fixed and preserved across the layers of the network, whereas CNNs are less spatially discriminative <cit.>. Unlike traditional image registration methods, no similarity metric such as mean squared difference, normalized cross-correlation, or mutual information is optimized during training <cit.>; only common GAN-based losses are learned. Further, Vista Morph integrates a Fourier loss to learn how to align thermal images relative to No- and Low-light visible pairs by relying on the signal domain, so far unexplored in VT image registration but critical since Long-Wave Infrared (LWIR) sensors capture visible faces without the need for a light source.
We evaluate three VT facial datasets to align thermal faces relative to the visible face's geometry (T∼V) and vice-versa (V∼T). To examine generative image quality, we register each dataset with Vista Morph and train a conditional GAN <cit.> for the downstream task of Visible-to-Thermal (V2T) image translation. V2T image translation is increasingly researched for its value in person re-identification, thermal face recognition, and thermal physiology <cit.>. We also train a Diffusion Model for the T2V generative task <cit.>. We then use diagrams of the underlying facial vasculature, a thermal biometric asserted by <cit.>, to analyze similarity between real and generated thermal identities. Our paper ends with a series of ablation studies on architectural settings and robustness. Our contributions are the following:
* The first unsupervised VT facial image registration called Vista Morph that uses ViT, for the first time, as a localization network in the STN framework.
* Registering pairs in challenging No- and Low-Light settings, a common scenario when using thermal sensors, by integrating a Fourier Loss in the Vista Morph model.
* Analyzing the identity of generated thermal faces from GANs by extracting vessel maps that visualize underlying thermal vasculature.
* Generalizability beyond faces with Vista Morph application to automated driving datasets and proven robustness against geometric transformations and erasure.
§ RELATED WORKS
Existing VT facial registration relies on feature-based matching such as edge maps, corner detection, intensity histograms, and SIFT features, or on a supervised target, and these methods only evaluate on a single VT face dataset <cit.>. Multimodal medical image registration methods such as DLIR <cit.> and Voxelmorph <cit.> are applied to CT and MRI imagery. However, while those images vary in density, they are still captured in the same optical spectra. This differs from our challenge, where the two images are obtained in different electromagnetic spectra altogether: the visible band (350 - 740 nm) and the LWIR band (8 - 15 μm). The task of unsupervised cross-spectral image translation is new. The most similar work to ours is the seminal Nemar algorithm by Arar et al. <cit.> that first demonstrated unsupervised VT image registration on non-facial images. Since then, several similar Nemar-like approaches for CT/MRI images, remote sensing, and VT street scenes have been developed using varying translation and registration flows, new loss functions, and/or fusion <cit.>. No existing works tackle the challenge of non-rigid cross-spectral facial images, which contain abrupt changes and sudden deformations.
§ VISTA MORPH
We describe our approach, Vista Morph, covering the training flows, the loss functions (including the Fourier loss for handling No- and Low-light settings), and the use of ViT as a novel localization network with a custom Multilayer Perceptron (MLP) as the regressor network of the STN <cit.> framework.
§.§ Generative Flows
Shown in Figure <ref>, registration is trained using four flows in an end-to-end fashion. First, the ground-truth visible image, A, is passed to the first generator, G_1, which outputs the fake thermal image, B̂. Second, the original thermal image, B, is passed to the second generator, G_2, which outputs the first fake visible image, Â_1. Third, both A and Â_1 are used as inputs to the STN in order to output the registered thermal image, B_R. In the fourth flow, B_R is passed back to G_2 in order to output Â_2. These flows force the STN to use Â_1, a visible-spectrum counterpart of the thermal image that preserves its geometry, as indicated in Figure <ref>. By translating between the visible and thermal spectra, the STN can learn the scale between both modalities. Two GANs are used, where the generators, G_1 and G_2, are identical U-NETs with 5 encoder and 4 decoder modules with added BlurPool layers <cit.>. The discriminators, D_1 and D_2, are identical and comprise a traditional PatchGAN <cit.> architecture with a 16x16 patch, also incorporating BlurPool layers.
GAN Losses. The perceptual quality of the VT images is controlled using an LPIPS <cit.> perceptual loss, L_perc. Per Eq. <ref>, ϕ is the VGG-16 network and τ transforms network embeddings.
L_perc = ∑_n τ^n(ϕ^n(B̂) - ϕ^n(B)) + ∑_n τ^n(ϕ^n(Â) - ϕ^n(Â_2))
Recall that Â_2 is the visible image produced by G_2 from the registered thermal image B_R after the STN warp. To enforce the alignment of B_R with the desired visible geometry of A, we set an L1 reconstruction loss shown in Eq. <ref>.
L_R = ∥ A - Â_2 ∥_1
To control for structural similarity between B_R and A, we calculate morphological gradients for B_R, B, and A, and apply a triplet loss in Eq. <ref>.
L_𝑚 = 1/K∑_k=1^Kmax{d(B_R, A) - d(B_R, B) + 1, 0}
Finally, for datasets with Low- or No-Light visible imagery, we add a Fourier loss to learn in the signal domain, as opposed to only the spatial domain. The L_FFT shown in Eq. <ref> is the L1 loss of the amplitude in Eq. <ref> and the phase in Eq. <ref> of A and Â_1.
L_amp = ∥ |ℱ{A}_u,v| - |ℱ{Â_1}_u,v| ∥_1
L_pha = ∥ ∠ℱ{A}_u,v - ∠ℱ{Â_1}_u,v ∥_1
L_FFT = L_amp(A, Â_1) + L_pha(A, Â_1)
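As a sketch of how such a loss can be computed, the snippet below uses torch.fft.fft2 to form the amplitude and phase L1 terms on image batches. This is our reading of the equations above, not the authors' released code, and the mean reduction is an assumption.

```python
import torch

def fourier_loss(a, a_hat):
    """Amplitude + phase L1 Fourier loss on (B, C, H, W) image batches (illustrative)."""
    fa, fb = torch.fft.fft2(a), torch.fft.fft2(a_hat)
    l_amp = (fa.abs() - fb.abs()).abs().mean()        # amplitude term
    l_pha = (fa.angle() - fb.angle()).abs().mean()    # phase term
    return l_amp + l_pha

x = torch.rand(2, 3, 64, 64)
y = torch.rand(2, 3, 64, 64)
print(fourier_loss(x, y))
```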
We train the GAN (L_GAN) using a relativistic adversarial loss <cit.> leading to the total Generator Loss, L_G shown in Equation <ref>. The total Discriminator loss L_D, is an average of the real and fake discriminator losses which are both relativistic. The total training objective is shown in Equation <ref>.
L_G = L_GAN + L_perc + L_R + L_m
G^* = min_G max_D L_G + L_D
§.§ Registration Network
The registration network is an STN <cit.>. The STN is not a model in itself but a framework in which any differentiable function (e.g., a neural network) can be used as the localization network. We therefore use a 12-layer ViT <cit.> as the localization network to extract features between the visible (aligned) and thermal (non-aligned) images, and add a 4-layer MLP as the regressor network. As shown in Figure <ref>, the concatenated input (A, Â_1) is passed to the ViT using a patch size of 64. The MLP consists of the following architecture: Linear(17*768, 1024)-ReLU-Linear(1024, 512)-ReLU-Linear(512, 256)-Sigmoid-Linear(256, 6), outputting an affine matrix ϕ with six parameters. The sigmoid activation is important for the regressor since the values of ϕ lie in the range [-1, 1] <cit.>. A ϕ is calculated for each (A, Â_1) pair and defines the 2D flow field (sampling grid), given a batch of affine matrices. Using an affine transformation, the STN computes the registered output B_R by sampling pixel locations from the field. The STN is trained jointly with G_1 and G_2.
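The sketch below illustrates this registration head: a localization network (a dummy stand-in for the 12-layer ViT with patch size 64) produces token features for the concatenated pair, the MLP described above regresses the six affine parameters, and the thermal image is warped with affine_grid / grid_sample. The dummy localizer, the identity initialization of the last layer, and the 256 × 256 image size are our own assumptions for a runnable example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineSTN(nn.Module):
    """Illustrative STN head: localizer -> MLP regressor -> affine warp."""
    def __init__(self, localizer, num_tokens=17, dim=768):
        super().__init__()
        self.localizer = localizer                      # any module: (B,6,H,W) -> (B,17,768)
        self.regressor = nn.Sequential(
            nn.Linear(num_tokens * dim, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.Sigmoid(),
            nn.Linear(256, 6))
        # start near the identity transform (our convenience choice)
        nn.init.zeros_(self.regressor[-1].weight)
        self.regressor[-1].bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, visible, fake_visible, thermal):
        tokens = self.localizer(torch.cat([visible, fake_visible], dim=1))
        theta = self.regressor(tokens.flatten(1)).view(-1, 2, 3)        # affine matrix phi
        grid = F.affine_grid(theta, thermal.shape, align_corners=False)
        return F.grid_sample(thermal, grid, align_corners=False)       # registered B_R

dummy_vit = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                          nn.Linear(6 * 8 * 8, 17 * 768), nn.Unflatten(1, (17, 768)))
stn = AffineSTN(dummy_vit)
A, A1, B = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
print(stn(A, A1, B).shape)                                              # torch.Size([1, 3, 256, 256])
```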
§ EXPERIMENTS
§.§ Datasets
In this section, we evaluate our approach on three VT paired facial datasets: Carl <cit.>, Army Research Lab (ARL) Devcom Dataset <cit.>, and Eurecom <cit.>. To test our approach on a non-facial domain, we use the FLIR Advanced Driver Assistance Systems (ADAS) dataset <cit.>. For each dataset, we conduct a minimal amount of preprocessing. For example, with the Devcom dataset, we use the forward-facing “baseline" and “expression" protocols and ignore the thermal bounding box metadata supplied for alignment. Instead, we use a FaceNet MTCNN <cit.> to detect and crop visible faces, and apply a series of binary thresholding operations to crop the thermal face image away from its background. The results lead to a misaligned set of VT facial pairs with varying degrees of warp. We select a random 5% sample from our misaligned Devcom dataset (56,205 training pairs) due to compute limitations. We use the entire Eurecom, Carl, and FLIR ADAS datasets, with details in Table <ref>.
§.§ Baselines
First, we explore manual approaches using existing facial landmark algorithms in order to capture the right and left eye coordinates needed for affine matrix estimation. One algorithm is the Google MediaPipe <cit.> Face Mesh Active Appearance Model (AAM) based on 3D Morphable Models <cit.>, which implements a Multi-task Cascaded Convolutional Network (MTCNN) model <cit.> using an InceptionResnetV1 model pre-trained on VGGFace2. Each detected face returns an array of 468 points with three coordinates used as keypoints. For the Devcom dataset, the AAM fails to detect landmarks for 40% of the thermal faces, thereby leaving only 60% of facial pairs usable. For the remaining pairs, the scale of the desired registered image must be determined by estimating ratios between eye distances in the current and target images. This enables retrieval of the geometric parameters needed to compute the affine matrix. No single set of parameters can address all variations of warp, and as a result they must be calculated manually for every VT pair. Sample results shown in Figure <ref> demonstrate the imperfection of this manual approach. Since the manual pipeline is not feasible for multiple VT datasets that contain variable levels of warp, scale, and abrupt changes, we compare our approach against the Nemar model <cit.>, the closest to Vista Morph's unsupervised approach.
§.§ Experimental Protocol
First, we train Vista Morph and Nemar in two directions to: (1) align the thermal face relative to the visible face (T∼V), and (2) for the Eurecom dataset, align visible relative to thermal (V∼T). As a result, we perform a total of four VT facial registration experiments: (1) Devcom (T∼V), (2) Carl (T∼V), (3) Eurecom (V∼T), (4) Eurecom (T∼V). For Carl and Eurecom, we trained the T∼V alignment, using L_FFT loss due to No- and Low-Light images. Second, we register the entire dataset (train, test) using Vista Morph and Nemar. Third, we conduct a downstream generative task, for image-to-image translation using the VTF-GAN <cit.>, a conditional GAN specifically designed for Visible-to-Thermal (V2T) facial image-to-image translation. We generate both visible and thermal faces using both the unregistered original data and the Vista Morph registered pairs.
§.§ Evaluation
To score registration results, we use Structural Similarity Index Measure (SSIM) and Normalized Cross Correlation (NCC) of the edge maps (e.g. morphological gradients of the visible and thermal images), in addition to Mutual Information (MI) <cit.> between both spectra. For generative results, we score with Frechet Inception distance (FID) <cit.> and LPIPS <cit.>. Lastly, we analyze a sample of Devcom generated thermal faces for retention of identity through facial vasculature maps.
§.§ Implementation
For registration experiments, we train our model and the baseline using PyTorch to 50 epochs with a batch size of 32. For generative experiments, we train the VTF-GAN and VTF-Diff from scratch to 200 epochs with a batch size of 32. For all experiments, we use automatic mixed precision and parallel training on two RTX-8000 GPUs. Training Vista Morph is fast: registration is learned in approximately 1 hr on the Devcom dataset.
§ RESULTS
§.§ Registration
In Table <ref>, Vista Morph outperforms the Nemar baseline for T∼V alignment on all datasets. For Mutual Information, Vista Morph is comparable to Nemar with only a marginal difference (0.260 vs. 0.279). Vista Morph shows the greatest registration gains on the Carl dataset across all three metrics (5.1%, 105.0%, 26.8%), which exhibits several Low-Light settings. Similarly, the T∼V alignment for Eurecom improves with Vista Morph, as this dataset includes Low- and No-Light visible images. We show in Figure <ref> that Nemar fails to register T∼V when the visible face is captured in a No-Light setting. The T∼V results in Figures <ref> and <ref> show the precise alignment of Vista Morph despite hair texture and differences in scale and height.
An intuitive view is given by the difference maps shown in Figure <ref>. These plots visualize the shift of pixels between the registered and original images. For example, the top row of Figure <ref> shows the difference without registration, where the thermal glasses (blue) are not aligned to the visible eyes (red). After registration with Vista Morph, the glasses are superimposed on the eyes, whereas the baseline still demonstrates misalignment. Most noticeable is the Eurecom T∼V in the second-to-last row, which shows no difference, only a light orange Nemar plot, because no thermal image was registered. The Carl plots show blue ringing effects and shadows around the Nemar difference map, indicating imperfect alignment to the visible face's scale.
§.§ Generation
Table <ref> shows results for the generative Visible-to-Thermal (V2T) image-to-image translation tasks using registered and unregistered training data. In all cases, scores improve significantly after VT pairs are registered. For FID, Nemar achieves slightly lower scores than Vista Morph, yet LPIPS scores are 6.8% lower for Vista Morph. For the Carl and Eurecom V2T translations, the generated thermal faces show 13.9% and 12.4% decreases in FID, and 13.1% and 12.4% decreases in LPIPS, respectively. Sample generated images for the V2T direction are provided in Figure <ref>. Upon qualitative inspection, when using unregistered data or Nemar-registered pairs, the generated thermal faces (“GAN:V2T", “GAN:V2(Ne)T") introduce more artifacts and show less texture, consistency, and similarity to their ground-truth thermal faces. The “GAN:V2(VM)T" faces retain the distribution of pixel color (and thereby thermal temperature), perceptual clarity, and hair texture, which is important for maintaining minority and female identities. Additional results are in the Supplementary Materials.
§.§.§ Generated Identity Analysis
To visualize how Vista Morph registered data improves the generation of subject identity, we turn to facial vascular network extraction as defined by <cit.>. Building on biometric work stemming from retrieval of venous structure in palms and wrists <cit.>, Buddharaju, et al. <cit.> propose thermal vasculature as a unique biometric that can be extracted from thermal faces through basic image processing, shown in Figure <ref>: anisotropic diffusion <cit.> to remove noise and enhance sigmoid edges, followed by CLAHE (Contrast Limited Adaptive Histogram Equalization) <cit.>, and finally top-hat segmentation. We use samples from the Devcom dataset as a test bed since faces are close-up with minimal apparel. To measure similarity between the generated and real identity, we calculate the Peak Signal to Noise Ratio (PSNR) <cit.> between vessel diagrams. Subjects in Figure <ref> show the facial vein, labial arteries (mouth and nose), angular artery and vein (eyes), superficial temporal artery and vein (edge of face), and supraorbital artery and vein (forehead). The PSNR between vessel maps increases when the GAN is trained on registered pairs. For example, Subject 1 shows a vessel-map PSNR of 11.018 before registration; notice how “G", the generated image trained on unregistered data, shows an identity dissimilar to its respective ground truth, “T". When Vista Morph registers T∼V in “RT", the PSNR between vessel maps increases to 11.317, indicating that the generated subject's identity is more similar to the ground truth when registered. Similar evidence can be seen in Subjects 2 and 3, which display extreme variance in scale as well as head tilt between VT pairs.
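The sketch below outlines this vessel-map comparison: anisotropic diffusion, CLAHE, and top-hat segmentation, followed by PSNR between the resulting maps. It assumes the opencv-contrib package is available for cv2.ximgproc.anisotropicDiffusion (which expects an 8-bit, 3-channel input); the parameter values and the two images are illustrative placeholders, not the settings used for the figures.

```python
import cv2
import numpy as np

def vessel_map(thermal_gray):
    """Extract a facial vasculature map from an 8-bit grayscale thermal face."""
    # Anisotropic diffusion removes noise while enhancing sigmoid edges.
    bgr = cv2.cvtColor(thermal_gray, cv2.COLOR_GRAY2BGR)
    diffused = cv2.ximgproc.anisotropicDiffusion(bgr, 0.1, 20, 10)  # alpha, K, niters
    gray = cv2.cvtColor(diffused, cv2.COLOR_BGR2GRAY)
    # CLAHE boosts local contrast of the subtle vascular structures.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    # Top-hat segmentation keeps thin bright structures (vessels).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    return cv2.morphologyEx(clahe, cv2.MORPH_TOPHAT, kernel)

# Placeholder ground-truth and generated thermal faces.
real_thermal = np.random.randint(0, 256, (256, 256), np.uint8)
generated_thermal = np.random.randint(0, 256, (256, 256), np.uint8)

psnr = cv2.PSNR(vessel_map(real_thermal), vessel_map(generated_thermal))
print(f"Vessel-map PSNR: {psnr:.3f} dB")
```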
§ ABLATION STUDIES
§.§ Architecture and Patch Size
We conducted a brief ablation study using the Devcom dataset (T∼V). When using the traditional U-NET for the STN, all scores decrease significantly compared to our implementation (SSIM: -2.02%, NCC: -104.15%, MI: -19.03%). A patch setting of 32 led to the worst results, where the affine matrix could not be estimated. For facial images, more patches (i.e. an increased patch setting), which preserve positional information, empirically lead to better registration results.
§.§ Non-Facial Domains
Vista Morph successfully registers non-facial pairs, namely the FLIR ADAS street scenes dataset. We train a “deeper" STN regressor by adding two more linear layers to the baseline. We believe this enables a finer focus on object features such as persons, stop signs, pedestrian crossings, and vehicles. We also incorporate the Fourier loss, since ADAS includes several No- and Low-Light settings. Further, we find that 32 patches and removing the morphological loss also improve image quality. Registration scores for Vista Morph are 0.672 SSIM, 0.318 NCC, 0.607 MI, compared to no registration at 0.627 SSIM, 0.250 NCC, 0.516 MI.
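As an illustration of this modification, the sketch below shows a generic affine-regression head for an STN with two additional linear layers appended; the layer widths and input feature size are assumptions for the example, not the exact Vista Morph architecture.

```python
import torch
import torch.nn as nn

class DeeperAffineRegressor(nn.Module):
    """STN regression head mapping pooled image features to a 2x3 affine matrix."""
    def __init__(self, in_features=256, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),       # extra layer 1
            nn.Linear(hidden, hidden // 2), nn.ReLU(inplace=True),  # extra layer 2
            nn.Linear(hidden // 2, 6),
        )
        # Initialize the final layer to the identity transform for stable training.
        self.mlp[-1].weight.data.zero_()
        self.mlp[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, feats):                 # feats: (B, in_features)
        theta = self.mlp(feats).view(-1, 2, 3)
        return theta                          # fed to F.affine_grid / F.grid_sample

theta = DeeperAffineRegressor()(torch.randn(4, 256))
print(theta.shape)  # torch.Size([4, 2, 3])
```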
§.§ Robustness
To illustrate Vista Morph's robustness, we test registration with geometric transformations and random erasure applied to the input pairs at inference time. Despite these perturbations, Vista Morph successfully registers the T∼V pairs. In Figure <ref>, the thermal image registers to the scale of the visible input regardless of erasure. Figure <ref> shows the expected behavior, i.e. that the entire thermal image is registered. Here, the generated Â_1 and Â_2 images demonstrate the translation of the erasure pixels. Figure <ref> shows that, when vertically flipped, Vista Morph registers the thermal image accordingly with respect to the visible geometry.
§ LIMITATIONS
Unfortunately, for V∼T registration, Vista Morph underperforms. To explore how registration affects a different generative model, we trained a conditional Denoising Diffusion Probabilistic Model (DDPM) <cit.> called VTF-Diff <cit.> on registered and non-registered images in the T2V direction. GAN results improve with registration (Nemar), but registration does not improve the diffusion results. Although the generated diffusion results look geometrically and perceptually similar, there are differences in skin color, eye detail, and clothing. Further tests are needed beyond the Eurecom dataset.
§ CONCLUSION
We present Vista Morph, the first unsupervised VT facial registration model that aligns facial pairs without a reference or feature matching. We evaluate it on three VT facial datasets, achieving significantly improved registration results over state-of-the-art methods. Further, we show that image quality in a generative Visible-to-Thermal image translation task improves with regard to perceptual clarity and identity when training a GAN on Vista Morph registered pairs. We support our findings with thermal vessel maps and demonstrate that Vista Morph can register non-facial domains. Future work includes assessment of generated thermal faces by thermal specialists and user studies, and a finer investigation into the consistency of results for diverse demographic samples.
|
http://arxiv.org/abs/2306.06068v1
|
20230605111647
|
DeepStay: Stay Region Extraction from Location Trajectories using Weak Supervision
|
[
"Christian Löwens",
"Daniela Thyssens",
"Emma Andersson",
"Christina Jenkins",
"Lars Schmidt-Thieme"
] |
cs.CV
|
[
"cs.CV",
"cs.LG"
] |
Nowadays, mobile devices enable constant tracking of the user's position and location trajectories can be used to infer personal points of interest (POIs) like homes, workplaces, or stores. A common way to extract POIs is to first identify spatio-temporal regions where a user spends a significant amount of time, known as stay regions (SRs).
Common approaches to SR extraction are evaluated either solely unsupervised or on a small-scale private dataset, as popular public datasets are unlabeled. Most of these methods rely on hand-crafted features or thresholds and do not learn beyond hyperparameter optimization. Therefore, we propose a weakly and self-supervised transformer-based model called DeepStay, which is trained on location trajectories to predict stay regions. To the best of our knowledge, this is the first approach based on deep learning and the first approach that is evaluated on a public, labeled dataset. Our SR extraction method outperforms state-of-the-art methods. In addition, we conducted a limited experiment on the task of transportation mode detection from GPS trajectories using the same architecture and achieved significantly higher scores than the state-of-the-art.
Our code is available at https://github.com/christianll9/deepstayhttps://github.com/christianll9/deepstay.
§ INTRODUCTION
Extracting stay regions (SR) from location trajectories identifies segments where a subject stays in the same place. It supports fine-grained spatio-temporal analysis of human and animal behavior and is often an intermediate step in point of interest (POI) mapping or POI extraction.
Common SR extraction approaches apply unsupervised clustering algorithms and use thresholds for time, distance, and velocity, among others. These thresholds are determined by a qualitative analysis or a quantitative hyperparameter optimization. Typically, all experiments are performed either on small private datasets, in some cases with manually annotated labels, or on large datasets without any labels. This makes it difficult to compare different approaches and makes the problem less suited for supervised learning that requires a large amount of labeled data.
Even though most trajectories do not contain ground truth SR labels, it is still possible to derive so-called "weak labels" from OpenStreetMap (OSM). For example, we can classify any location point lying within a building as part of a stay and a point near a road as part of a "non-stay" (see Figure <ref>). Given the large number of weak labels available, we assume that this data contains enough signal to learn useful latent representations. To this end, we apply a transformer model <cit.> that takes a trajectory as a time series of location points and classifies each point as either part of a stay or a non-stay.
To our knowledge, this is the first approach to extract SRs from trajectory data using deep learning. Furthermore, we use publicly available data for training and evaluation to ensure reproducibility. We derived a ground truth dataset from the field of activity recognition and use it to compare our model with baselines from related work.
§ PROBLEM STATEMENT
We define a location trajectory 𝒳={g_1, g_2, …, g_|𝒳|} as a time series of consecutive location points g_i=(t_i, x_i, y_i), where x, y∈ℝ denote the 2D coordinates and t∈ℝ^≥0 the ascending timestamp. The sample rate Δ t_i=t_i-t_i-1 is defined as the time difference between two consecutive points and is either constant or fluctuating, depending on the dataset.
SR extraction can be viewed as a time series segmentation task, where the trajectory 𝒳 is split in a set of segments 𝒯𝒮={ts_1,…,ts_q}. Each segment ts_j=(t_start_j, t_end_j, c_j) is defined by its start t_start and end time t_end and the binary class c∈{0,1} indicating whether the user is staying at one place (c=1) or is moving around (c=0) within the time window t_start≤ t<t_end. Moreover, we define
t_start_1 = t_1,
t_end_q = ∞,
t_start_j < t_end_j ∀ j∈{1,…,q},
t_end_j = t_start_j+1 ∀ j∈{1,…,q-1},
c_j ≠ c_j+1 ∀ j∈{1,…,q-1}.
The set of stay regions 𝒮ℛ is a subset of all segments, where
𝒮ℛ={ts_j|ts_j∈𝒯𝒮∧ c_j=1}.
The task of SR extraction is now to predict 𝒮ℛ (and therefore 𝒯𝒮) solely from the trajectory data 𝒳.
§ RELATED WORK
Trajectory segmentation is an important research topic with many examples such as activity recognition, transportation mode detection (TMD), and SR extraction. In TMD, each segment is assigned to a mode, e.g. walking, car, bus, etc. <cit.>. A special binary case of this task is SR extraction with only two possible modes: stay and non-stay. In most cases, it functions as a preprocessing step for tasks such as POI mapping, extraction, or prediction. In POI mapping, each SR is assigned to a visit to one of several POIs <cit.>.
SR extraction identifies segments of a user's trajectory where the subject remains at the same place. A virtual location, usually the centroid of an SR, is called a stay point. This task is therefore also called stay point extraction, recognition, identification, or detection.
§.§ Threshold-based Clustering
The vast majority of published work uses threshold-based spatio-temporal clustering methods, where the clusters represent stay segments of the trajectory. Commonly used thresholds are a minimum duration T_min and a maximum distance D_max <cit.>. Here, the task is to find the maximum sets of consecutive location points 𝒫={g_m,g_m+1,…,g_n} in the trajectory 𝒳, such that:
t_n - t_m ≥ T_min
dist(g_i,g_j) ≤ D_max ∀ g_i, g_j∈𝒫
Others apply additional thresholds for velocity, acceleration, and heading change <cit.>.
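A minimal sketch of this threshold-based scheme is given below: consecutive points are accumulated as long as all pairwise distances stay within D_max, and a group is emitted as a stay region once its duration reaches T_min. It is a simplified illustration of the duration and distance criteria above, not a re-implementation of any specific cited method; threshold values and the toy trajectory are placeholders.

```python
import numpy as np

def threshold_sr_extraction(t, xy, t_min=300.0, d_max=50.0):
    """t: timestamps in seconds; xy: (N, 2) projected coordinates in meters.
    Returns stay regions as (start_index, end_index) pairs."""
    stays, start = [], 0
    for i in range(1, len(t)):
        window = xy[start:i]
        # Adding point i must keep every pairwise distance within d_max.
        if np.linalg.norm(window - xy[i], axis=1).max() > d_max:
            if t[i - 1] - t[start] >= t_min:          # duration criterion
                stays.append((start, i - 1))
            start = i
    if t[-1] - t[start] >= t_min:
        stays.append((start, len(t) - 1))
    return stays

# Toy trajectory: stationary for ten minutes, then leaving at 2 m/s.
t = np.arange(0, 900, 10.0)                  # 15 minutes, 10 s sampling
xy = np.zeros((len(t), 2))
xy[t >= 600, 0] = (t[t >= 600] - 590) * 2.0
print(threshold_sr_extraction(t, xy))        # [(0, 61)]: the stationary prefix
```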
§.§ Adapted Density-based Clustering
Other approaches adapt density-based clustering methods such as DBSCAN <cit.> and OPTICS <cit.>. If the trajectory is sampled at a constant rate, prolonged stays will result in dense spatial data and thus can be detected. Unlike k-means, they do not require a predetermined number of clusters, which is crucial for SR extraction.
These approaches define SR extraction more as spatial clustering rather than time series segmentation. Therefore, the constraint that the clustered points must be consecutive is not always enforced. Many extensions have been proposed to utilize the temporal information as well <cit.>.
§.§ Others
The authors in <cit.> and <cit.> classify single location points as stay points when a GPS connection loss is detected. The algorithm proposed by <cit.> extracts SRs by searching for local minima of speed and zero crossings in acceleration within the trajectory.
§ METHODOLOGY
§.§ Architectural Overview
Figure <ref> shows the overall architecture of our model DeepStay and the intermediate results of the processing pipelines.
First, the raw trajectories are standardized and split into sequences of equal size. Furthermore, additional features are extracted to improve the performance of the subsequent transformer encoder.
This encoder receives a sequence of constant length and outputs an embedding vector for each point comprising latent features about the point within its sequence. The following feedforward layer acts as a decoder and predicts a probability for each vector to be part of a stay.
In the next step, all consecutive points with a predicted probability above a certain threshold are grouped as SRs.
§.§.§ Preprocessing of raw GNSS trajectories
All datasets in this work contain GNSS coordinates, such as GPS. In the first step, we project all coordinates into a 2D Cartesian system (x,y) using an appropriate UTM zone <cit.>.
Since trajectories may have varying sample rates, we use the time difference Δ t between each point and its predecessor as an additional feature. Our preliminary experiments indicate that this approach leads to better results than using linear interpolation as proposed by <cit.>.
We also add the current velocity v as the ratio of the Euclidean distance and Δ t between two consecutive points as another feature.
All trajectories are chunked into sequences of equal length n=256. This allows the transformer encoder to be trained with multiple sequences in a single batch of size B=64.
Furthermore, we standardize the features Δ t and v separately based on their distribution in the training set to obtain a mean of 0 and a standard deviation of 1. The standardization of the location features x and y is done jointly: from each sequence we subtract its mean (x_seq,y_seq) and divide by the common standard deviation of the entire training set σ_x,y_train to prevent the model from memorizing specific regions. To further reduce overfitting, we rotate every sequence uniformly at random with respect to its origin (0,0) before feeding it to the model. The final features of the i-th data point in sequence seq are shown in <ref>.
seq_i = [ x_i, y_i, Δ t_i, v_i ], seq∈ℝ^n× 4
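The preprocessing described above can be sketched as follows, assuming the coordinates have already been projected to meters. The sequence length and batch handling mirror the text; the standardization statistics are passed in as placeholders, and the helper name is illustrative.

```python
import numpy as np

def build_sequences(t, x, y, n=256, sigma_xy_train=1.0,
                    dt_stats=(0.0, 1.0), v_stats=(0.0, 1.0)):
    """t: timestamps (s); x, y: projected UTM coordinates (m).
    Returns an array of shape (num_sequences, n, 4) with features [x, y, dt, v]."""
    dt = np.diff(t, prepend=t[0])
    v = np.hypot(np.diff(x, prepend=x[0]), np.diff(y, prepend=y[0])) / np.maximum(dt, 1e-6)
    dt = (dt - dt_stats[0]) / dt_stats[1]          # standardize with training-set stats
    v = (v - v_stats[0]) / v_stats[1]

    seqs = []
    for s in range(0, len(t) - n + 1, n):          # chunk into fixed-length sequences
        xs, ys = x[s:s + n], y[s:s + n]
        xs = (xs - xs.mean()) / sigma_xy_train     # center per sequence, shared scale
        ys = (ys - ys.mean()) / sigma_xy_train
        a = np.random.uniform(0, 2 * np.pi)        # random rotation about the origin
        xr = xs * np.cos(a) - ys * np.sin(a)
        yr = xs * np.sin(a) + ys * np.cos(a)
        seqs.append(np.stack([xr, yr, dt[s:s + n], v[s:s + n]], axis=1))
    return np.asarray(seqs)
```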
§.§.§ Transformer Encoder
We choose the encoder of the transformer model <cit.> to learn latent embeddings emb_i for each sequence point seq_i. This allows us to predict the class probabilities pointwise instead of segmentwise. Thus, by design, segmentation and classification are performed jointly. We stick with the original setting of the base encoder including the projection and positional encoding as described in <cit.> to get the final embeddings emb∈ℝ^n× d_model.
§.§.§ Decoder
A feedforward layer with sigmoid activation decodes the embeddings and predicts the probability for each point emb_i to be part of a stay:
ĉ_i = σ(emb_i W_d^T+b_d), ĉ∈[0,1]^n
Now the segmentation can be done by simply grouping consecutive points where ĉ_i<0.5 for non-SRs and ĉ_i>0.5 for SRs, respectively.
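The grouping step can be written in a few lines, as sketched below for a single sequence of pointwise stay probabilities and timestamps; this is an illustration of the segmentation rule above, not the exact post-processing code, and the toy inputs are placeholders.

```python
import numpy as np

def probabilities_to_segments(t, c_hat, threshold=0.5):
    """Group consecutive points with the same predicted class into segments.
    Returns (t_start, t_end, c) tuples; stay regions are those with c == 1."""
    labels = (np.asarray(c_hat) > threshold).astype(int)
    segments, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[i - 1]:                  # class change: close segment
            segments.append((t[start], t[i], labels[start]))
            start = i
    segments.append((t[start], np.inf, labels[start]))  # last segment is open-ended
    return segments

t = np.arange(0, 60, 10)
c_hat = [0.9, 0.8, 0.2, 0.1, 0.7, 0.95]
segments = probabilities_to_segments(t, c_hat)
stay_regions = [s for s in segments if s[2] == 1]
print(segments)   # [(0, 20, 1), (20, 40, 0), (40, inf, 1)]
```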
§.§.§ Supervision
In the case of available SR labels, we can compute the pointwise binary cross entropy (BCE) between the prediction ĉ and the ground truth c:
BCE(ĉ_i, c_i) = -c_ilogĉ_i - (1-c_i)log (1-ĉ_i)
The distribution of the binary labels can be highly imbalanced. To prevent the model leaning towards one of the classes, we apply class weighting based on the mean c_train within the training set:
BCE_w(ĉ_i, c_i, c_train) = (c_i/c_train+ 1-c_i/1-c_train) BCE(ĉ_i, c_i)
Now the total loss ℒ_super is the average weighted BCE over all points in all N_train training sequences:
ℒ_super = 1/N_train· n∑_j=1^N_train∑_i=1^nBCE_w(ĉ^(j)_i, c^(j)_i,c_train)
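A PyTorch sketch of this class-weighted objective is shown below; tensor shapes follow the pointwise formulation above, and the clamping epsilon is an implementation detail not stated in the text.

```python
import torch

def weighted_bce(c_hat, c, c_train):
    """Pointwise BCE weighted by inverse class frequency of the training set.
    c_hat, c: tensors of shape (batch, n); c_train: scalar mean stay label."""
    eps = 1e-7
    c_hat = c_hat.clamp(eps, 1 - eps)
    bce = -(c * torch.log(c_hat) + (1 - c) * torch.log(1 - c_hat))
    weight = c / c_train + (1 - c) / (1 - c_train)
    return (weight * bce).mean()        # average over all points of all sequences

# Toy check: imbalanced labels (about 90 % stays), random predictions.
torch.manual_seed(0)
c = (torch.rand(64, 256) < 0.9).float()
c_hat = torch.rand(64, 256)
print(weighted_bce(c_hat, c, c_train=c.mean()))
```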
§.§ Weakly Supervised SR Extraction
Since the vast amount of publicly available location trajectories does not contain SR labels, we apply programmatic weak supervision <cit.> by generating weak labels based on other data sources. These labels are often inaccurate. However, since we can generate them on a large scale, and since the error generally does not correlate with the input, we expect our model to still learn useful latent representations.
For that, we define a function f_weak that returns the estimated probability c_i_weak that the location point g_i is part of a stay, and a confidence score w_i_weak for that prediction:
f_weak(g_i) = (c_i_weak, w_i_weak)
Here, c_i_weak replaces the ground truth value c_i, while w_i_weak is used to weight the influence of the weak label on the total loss. Thus, the model learns more from weak labels, where the labeling function is more certain. The total loss is then:
ℒ_weak = ∑_j=1^N_train∑_i=1^nw^(j)_i_weak/N_train· nBCE_w(ĉ^(j)_i, c^(j)_i_weak,c_train_weak)
Furthermore, the mean label c_train_weak is also weighted by the confidence score:
c_train_weak = ∑_j=1^N_train∑_i=1^nc^(j)_i_weak· w^(j)_i_weak/∑_j=1^N_train∑_i=1^nw^(j)_i_weak
f_weak works as an ensemble of separate labeling functions that predict whether a location point is part of a stay or not. All labeling functions implement simple heuristics and may conflict with each other. Here, they predict a pair (c_weak,w_weak(g)) with a constant value for c_weak and a confidence weight w_weak depending on the input data g. In total, four different functions are defined:
* f_build predicts a stay with high confidence if a location lies within a building.
* f_am predicts a stay with high confidence if a location lies within small amenities.
* f_street predicts a non-stay with high confidence if a location is close to a street.
* f_transport predicts a non-stay based on available transportation mode labels.
The data source for the first three functions is OSM. Similar to <cit.>, the coordinates of the location g_i are used to query additional information from the map service.
§.§.§ Stay Labeling Functions
f_build checks, if g_i lies within a building b∈ℬ_OSM. If so, it returns a confidence weight of 1, since points that fall inside a building have a high chance of being part of a stay:
w_build(g_i) =
1, if ∃ b∈ℬ_OSM | b ∩ g_i≠{}
0, otherwise
Similarly, f_am returns w_am > 0, if g_i lies within an amenity a∈𝒜_OSM. This is an OSM category for facilities like hospitals or airports that can encapsulate multiple buildings. For larger amenities, since it is less certain that people will stay in a single location, we model the confidence weight as a function of their geographic area:
w_am(g_i) =
max_a∈𝒜_OSM|a∩ g_i≠{} exp( -area(a) / ( 1/|𝒜_OSM| ∑_j area(A_OSM_j) ) ), if ∃ a|a∩ g_i≠{}
0, otherwise
Here, the fraction is the ratio of the area of an encapsulating amenity to the average area of all amenities in the dataset. By using a negative exponent, the weight starts at 1 for an area of 0 and decreases as the area increases. We use the maximum value, when multiple amenities encapsulate g_i.
§.§.§ Non-Stay Labeling Functions
f_street checks if g_i is near a street, since those points have a high probability of being part of a non-stay. Thus, if a street s∈𝒮_OSM intersects with a centered bounding box bb_i(l_s) around g_i, f_street returns a confidence weight of 1. The box has a shape of d(l_s)× d(l_s) with l_s denoting the importance level of the street s (e.g. highway). Formally, this can be defined as:
w_street(g_i) =
1, if ∃ s∈𝒮_OSM | s ∩ bb_i(l_s) ≠{}
0, otherwise
f_transport is designed for the GeoLife (GL) dataset, which will be introduced in Chapter <ref> in more detail. Its trajectories include additional transportation mode labels, which are: walking, running, biking, motorcycle, car, taxi, bus, train, subway, boat, and airplane. While the dataset lacks a separate stay mode, we can use all motorized modes (i.e., all modes except walking, running, and biking) as a heuristic for non-stays. The confidence weight is formalized as:
w_transport(g_i) =
1, if label(g_i)∈ modes_motorized
0, otherwise
§.§.§ Combining the Labeling Functions
We combine the results of all heuristics ℋ by averaging the predicted probabilities and adding up all confidence weights as follows:
f_weak(g_i) = (c_i_weak, w_i_weak)
= (∑_j∈ℋc_j· w_j(g_i)/∑_j∈ℋw_j(g_i), ∑_j∈ℋw_j(g_i)),
where ℋ = {build, am, street, transport}.
Thus, each labeling function f_j has a linear influence on the total confidence weight w_i_weak, independent of the output of other labeling functions. On the one hand, this combination is similar to an ensemble with model averaging, whereas, on the other hand, this resembles also a Mixture of Experts, where the weights depend on the input g_i.
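The combination rule can be sketched as follows; the four heuristics are represented by placeholder callables that return (c_weak, w_weak) pairs, since the actual OSM queries are outside the scope of this snippet, and the example values are made up.

```python
def combine_weak_labels(point, heuristics):
    """heuristics: list of callables g -> (c_weak, w_weak); returns the combined pair."""
    pairs = [h(point) for h in heuristics]
    total_w = sum(w for _, w in pairs)
    if total_w == 0:                       # point not covered by any heuristic
        return 0.0, 0.0
    c = sum(c * w for c, w in pairs) / total_w
    return c, total_w

# Placeholder heuristics for a single illustrative point.
f_build     = lambda g: (1.0, 1.0)        # inside a building -> confident stay
f_amenity   = lambda g: (1.0, 0.4)        # inside a large amenity -> weak stay
f_street    = lambda g: (0.0, 1.0)        # close to a street -> confident non-stay
f_transport = lambda g: (0.0, 0.0)        # no motorized transportation label

c_weak, w_weak = combine_weak_labels(None, [f_build, f_amenity, f_street, f_transport])
print(c_weak, w_weak)                     # 0.583..., 2.4
```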
§.§ Self-Supervised Encoder
Many points are not captured by any heuristic and receive a total confidence weight of 0. Self-supervised learning (SSL) could still leverage those data in a (weakly) semi-supervised manner and further strengthen the model's robustness to inaccurate training data <cit.>. Since <cit.> show good results by using forecasting as a pretext task for time series data, we adopted their approach.
§.§.§ Forecasting Task
We choose the velocity as one forecast target, which is less dependent on the sample rate compared to the location. Additionally, the bearing angle is forecasted as a second target because it is not directly included in the input and requires the model to encode more informative embeddings. More specifically, we predict the sine and cosine values of the angle to capture the periodicity.
Given the encoder output emb, we concatenate its sequence mean emb and last embedding vector emb_n as an aggregated embedding vector emb_agg∈ℝ^2d_model for the whole sequence. This vector is then passed to two separate feedforward layers. No activation function is used for the velocity, while for the sine and cosine prediction we apply tanh to bind the output between -1 and 1:
v̂_n+1 = emb_agg W_vel^T+b_vel
[ ŝîn̂_α_n+1; ĉôŝ_α_n+1 ] = tanh(emb_agg W_ang^T+b_ang)
§.§.§ Multitask Loss
The loss for each pretext task is defined by the Mean Squared Error (MSE) between the prediction and the ground truth:
MSE(ŷ, y) = 1/N_train∑_j=1^N_train(ŷ^(j)_n+1 - y^(j)_n+1)^2
ℒ_vel = MSE(v̂, v)
ℒ_ang = MSE(ŝîn̂_α, sinα) + MSE(ĉôŝ_α, cosα)
We follow <cit.> and approach SSL as multitask learning with the sum of the downstream loss ℒ_weak and the pretext losses:
ℒ_final = ℒ_weak + λ_velℒ_vel + λ_angℒ_ang ,
where λ denotes tunable hyperparameters. If ground truth labels are available, ℒ_weak is replaced with ℒ_super.
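The two forecasting heads and the multitask objective can be sketched in PyTorch as below; d_model and the λ weights are placeholders, and the downstream loss is assumed to be computed by the (weakly) supervised branch and passed in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForecastHeads(nn.Module):
    """Predict the next velocity and bearing angle (sin, cos) from encoder embeddings."""
    def __init__(self, d_model=512):
        super().__init__()
        self.vel = nn.Linear(2 * d_model, 1)
        self.ang = nn.Linear(2 * d_model, 2)

    def forward(self, emb):                                        # emb: (B, n, d_model)
        agg = torch.cat([emb.mean(dim=1), emb[:, -1]], dim=-1)     # (B, 2 * d_model)
        return self.vel(agg).squeeze(-1), torch.tanh(self.ang(agg))

def ssl_multitask_loss(loss_weak, emb, v_next, ang_next, heads,
                       lam_vel=0.1, lam_ang=0.1):
    """loss_weak: downstream loss; v_next: (B,); ang_next: (B,) bearing in radians."""
    v_hat, sc_hat = heads(emb)
    loss_vel = F.mse_loss(v_hat, v_next)
    loss_ang = (F.mse_loss(sc_hat[:, 0], torch.sin(ang_next)) +
                F.mse_loss(sc_hat[:, 1], torch.cos(ang_next)))
    return loss_weak + lam_vel * loss_vel + lam_ang * loss_ang

heads = ForecastHeads()
emb = torch.randn(8, 256, 512)
loss = ssl_multitask_loss(torch.tensor(0.7), emb,
                          torch.randn(8), torch.rand(8) * 6.28, heads)
```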
§ DATA
For this study, we select two datasets: GeoLife (GL) <cit.> and ExtraSensory (ES) <cit.>. GL contains two orders of magnitude more location points than ES but lacks proper SR labels. ES is chosen because of its activity labels, from which we can infer ground truth SR labels.
Similar to <cit.> and <cit.>, we remove outliers based on unrealistic velocity values and split a user's trajectory if the time difference between two consecutive points exceeds 20 minutes or if an unrealistic location jump is detected.
§.§ GeoLife
Instead of ground truth SR labels, the GL dataset contains time-segmented transportation mode labels from 69 of all 182 participants. These labels are used to derive weak labels and for our experiment on TMD.
To reduce network traffic and memory, we gather OSM data for points that fall within the 15 %/85 % percentile of longitude and latitude, which covers about 62 % of the total dataset. An overview of the total sum of confidence weights used for weak supervision can be found in Table <ref>. The remaining unlabeled data is still used for SSL instead. The sample rate of GL is non-constant and varies between 1 and 6 seconds. For the UTM projection, we choose zone 50N.
§.§ ExtraSensory
We use the ES dataset to fine-tune and evaluate DeepStay. Besides GNSS points, this dataset contains other sensor data, which we ignore. It was collected for the task of activity recognition. Participants should self-report their current activities such as "biking" or "watching TV". Some activity modes clearly indicate stays and non-stays. Thus, we define a function that maps these modes to SR labels.
In the second step, we remove suspicious stays, where the velocity is higher than the average velocity of non-stays. The final number of points and derived labels are listed in Table <ref>.
The sample rate of ES is nearly constant at 1/min. To achieve reasonable results with an encoder pre-trained on GL, we linearly interpolate the location trajectory at a rate of 0.5Hz. However, for the final test results, only the predictions for the real, non-interpolated labels are evaluated. The prediction value is taken from the nearest interpolation point. For the map projection, we use the UTM zone 11N.
§ EXPERIMENTS
In the first experiment, we train and test DeepStay on SR labels. The second experiment shows the ability of our architecture to be used for the more general task of TMD.
§.§ Experiment 1: Stay Region Extraction
For this experiment, DeepStay is pre-trained on weak labels from the GL dataset and then fine-tuned and tested together with traditional baselines on the ES dataset, where it achieves the best overall results among all methods.
§.§.§ Baselines
We implement the following algorithms as baselines and test them on the ES dataset:
* Kang et al. <cit.>: Threshold-based clustering. It collects consecutive points until a distance threshold to the points' centroid is exceeded. Then the time criterion <ref> is checked and if the minimum duration is reached, the collected points form a SR. Although the authors only proposed a POI extraction algorithm, it also implicitly incorporates SR extraction, which can be outsourced.
* D-Star <cit.>: Density-based clustering. It is based on DBSCAN, but instead of solely clustering the location points spatially, it considers only neighboring points along the trajectory and tries to exclude outliers. D-Star seems to be state-of-the-art.
* CB-SMoT <cit.>: Density-based clustering. While the algorithm is similar to D-Star, the resulting SRs contain only consecutive points, which is more in line with our definition. It can incorporate prior known POIs. However, for a fair comparison, we exclude this data.
We optimize the hyperparameters of Kang et al. and CB-SMoT using a 3×3 grid search based on the values reported in the original publications. D-Star has 4 parameters to adjust, hence we perform a random search with 10 different constellations. Each parameter search is incorporated in a 5-fold cross-validation based on the F_1 score. We split the ES data in the same way as for DeepStay.
§.§.§ Training, Validation, and Test
The training and testing pipeline for DeepStay can be summarized in three steps:
* Hyperparameter optimization: Training on about 80 % of the GL dataset with weak labels and optimization of hyperparameters on the remaining 20% in respect to the loss ℒ_weak. These hyperparameters are the number of training epochs, the weight decay, the learning rate, and the SSL weights λ_vel and λ_ang.
* Pre-training: Creating a pre-trained DeepStay model by reinitiating the training on the full GL dataset and using the best-known hyperparameters.
* Fine-tuning and test: Fine-tuning the decoder of the pre-trained model on the ES dataset and freezing all other model weights including the encoder layers. We apply 5-fold cross-validation, i.e. in each iteration about 80 % of the data is used for training and validation, and 20 % for testing. Of this 80 %, 10 % is used for a second hyperparameter optimization.
We follow <cit.> and split the data by the participants of the respective study, to avoid leakage between training, validation and test set. During both the pre-training and the fine-tuning, we apply an Adam optimizer <cit.> and SSL.
§.§.§ Metrics
A common metric in time series segmentation is the pointwise accuracy, i.e. the ratio of correctly classified points to the total number of labels.
In addition, we measure the pointwise calculated recall and precision. The definition of the positive class is crucial for both metrics. Since the final test dataset, i.e. ES, is highly imbalanced and contains many more stays than non-stays (see Table <ref>), it is more important to detect a non-stay than a stay. This also resembles everyday life, where people mostly stay in one place and only move from time to time. Therefore, we choose non-stays as the positive class. The derived F_1 score is used as the main metric to evaluate all SR extraction algorithms.
§.§.§ Results
The final results are shown in Table <ref>. All reported values are calculated over all 5 ES test data splits. In addition to the three baselines, two simplistic baselines predict a constant value (either always non-stay ĉ_i=0 or always stay ĉ_i=1).
DeepStay achieves higher overall scores than all implemented baselines, while the results for D-Star are comparable in terms of accuracy. Kang et al. use an approach with hard thresholds, which seems to be disadvantageous compared to a density-based approach. Even though CB-SMoT achieves relatively high accuracy, its F_1 score is significantly worse than the similar D-Star algorithm. This may be due to the missing outlier detection in CB-SMoT.
§.§.§ Ablation Study
We compare the contribution of different training components in Table <ref>, where we analyze the effect of training DeepStay first without any SSL and second without any pre-training, i.e. solely trained on the ES dataset. For the latter, the original sample rate of 1/min was used instead of interpolation. In addition to the previous metrics, we also measure the area under the PR curve (PR-AUC). It can be seen that the effect of SSL is relatively small. However, the pre-training has a significant impact on the performance, showing that the model correctly handles the noise coming from the weak labels and learns reasonable latent representations of the SRs.
§.§ Experiment 2: Transportation Mode Detection
To further demonstrate the broader applicability of DeepStay and to contribute our findings to a broader field of research, we apply the same encoder for TMD.
There has been some work on transformer-based TMD for data other than trajectories, such as accelerometer, gyroscope, and magnetometer data <cit.>. However, these sensors are sampled at a much higher rate (>20Hz) and thus the input sequences cover only a few seconds. In this case, the transportation mode is mostly constant, so the segmentation part is dropped from the TMD task and only the classification part remains.
For TMD from location trajectories, sequences typically cover several minutes and therefore mode changes are likely to occur. Nevertheless, most of the related work presupposes a correct segmentation and simply classifies each of the segments as one of the available modes <cit.>. This is problematic for real-world applications, where a correct segmentation is never given in advance. Here, the advantage of using the transformer encoder is the joint segmentation and classification of transportation modes by simply predicting the pointwise class probabilities and grouping consecutive predicted points of the same modes together. The baseline model SECA <cit.> may be the state-of-the-art approach of those models that segment and classify from raw GNSS trajectories. Although their published code lacks the segmentation part, we compare their self-reported results on the GL dataset with our own results and show that our approach significantly outperforms SECA.
§.§.§ Model Adaptations
The only adaptations made to DeepStay are in the decoder and the supervision. The decoder's weights are expanded and a softmax activation predicts the pointwise probability ĉ_i,m for each of the M=5 transportation modes:
ĉ_i,m = softmax(emb_i W_d^'^T+b_d^')_m, ĉ∈[0,1]^n× M
Now BCE_w in <ref> is replaced by the weighted cross entropy (CE_w) in <ref> between prediction and ground truth c_i,m, where c_train_m denotes the percentage of labels of the m-th class within the training set. For segmentation, we can simply group consecutive points with the same most probable class.
CE_w(ĉ_i,c_i, c_train) = -∑_m=1^M c_i,mlog(ĉ_i,m)/M·c_train_m
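This adaptation amounts to a softmax decoder with class-frequency weights, as sketched below. The number of modes M = 5 follows the text, while the frequency vector, hidden size, and toy inputs are placeholders; PyTorch's CrossEntropyLoss applies the softmax internally and matches the weighted objective above up to normalization constants.

```python
import torch
import torch.nn as nn

M = 5                                            # transportation modes
d_model = 512
decoder = nn.Linear(d_model, M)                  # pointwise mode logits

# Class weights 1 / (M * c_train_m), mirroring the weighted cross entropy above.
c_train = torch.tensor([0.35, 0.25, 0.20, 0.12, 0.08])
criterion = nn.CrossEntropyLoss(weight=1.0 / (M * c_train))

emb = torch.randn(8, 256, d_model)               # encoder output (B, n, d_model)
labels = torch.randint(0, M, (8, 256))
logits = decoder(emb)                            # (B, n, M)
loss = criterion(logits.reshape(-1, M), labels.reshape(-1))

# Segmentation: group consecutive points that share the same argmax mode.
modes = logits.argmax(dim=-1)
```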
§.§.§ Baseline and Comparable Datasets
The SECA model <cit.> is used as the only baseline. The authors perform a change point search by using the PELT method <cit.> to first segment the trajectory. Second, they use a convolutional neural network (CNN) to predict the mode of each segment and integrate an autoencoder for semi-supervision.
We compare the size of the dataset after our own preprocessing with that of SECA in Table <ref>. It shows that we train DeepStay with significantly fewer labels compared to SECA. However, in total more unlabeled data is available. Overall, the test sets are quite similar, which allows us to compare the final results of DeepStay and SECA.
§.§.§ Training, Validation, and Test
Both SECA and DeepStay are trained semi-supervised. While SECA uses an autoencoder, DeepStay applies SSL (see Section <ref>).
Unlike in Experiment 1, we randomly assign each sequence seq to one of the training or test sets, regardless of the including participants, to match the setup of SECA. We also apply 5-fold cross-validation. In addition, 20% of the training data is used to adjust the same hyperparameters as in Experiment 1. We optimize our model using Adam <cit.>.
§.§.§ Results
We report the weighted F_1 score and the accuracy. This F_1 score is the average of the per-class F_1 scores weighted by the number of labels per class. SECA is performing segmentwise classification and DeepStay pointwise classification, thus the following results are not fully comparable.
Nevertheless, the final results in Table <ref> clearly demonstrate the significant performance improvement of DeepStay. A major reason may be the pointwise predictions, which do not require a prior segmentation, but intrinsically segment the data for the classification task. However, even when SECA is given ground truth segments, DeepStay still achieves better results. One reason may be that, unlike SECA, the input sequence for our model is not limited to a single transportation mode, i.e., it can also learn the transition between modes. E.g., it is intuitively more likely to see a transition from bus to train than from bus to car. Furthermore, the autoencoder in SECA only tries to reconstruct the trajectory, while SSL can provide proxy labels for DeepStay, which may be more informative. In addition, the transformer model with its attention mechanism seems to be superior in comparison to the CNN layers for this task.
§ CONCLUSION AND FUTURE WORK
In this work, we show how to programmatically derive weak labels for SR extraction and how to successfully train a transformer encoder with these data. We demonstrate the effectiveness of this model on ground truth data for SR extraction and TMD, where it outperforms state-of-the-art methods. This work should be seen as a starting point for new data-driven approaches to SR extraction and provides useful training and test data. Ideas for future work are:
§.§.§ More data augmentation
Instead of always training on the same sequences, all trajectories could be shifted by a number of points in each epoch. This results in slightly different sequences and SSL targets and reduces overfitting.
§.§.§ Modeling dependencies
We treat all labeling functions independently, although there are clear dependencies. E.g., w_build correlates strongly with w_am, because buildings are often part of amenities. Other work suggests that the performance benefits significantly from incorporating these dependencies <cit.>.
§.§.§ Pre-training on multiple datasets
In this study, we stick with the GL dataset for pre-training. However, there are many public unlabeled GNSS trajectory datasets. All of them could be weakly labeled with our approach and carefully combined to have an even larger training set.
|
http://arxiv.org/abs/2306.09662v2
|
20230616073705
|
Cooperative Multi-Objective Reinforcement Learning for Traffic Signal Control and Carbon Emission Reduction
|
[
"Cheng Ruei Tang",
"Jun Wei Hsieh",
"Shin You Teng"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.MA"
] |
Existing traffic signal control systems rely on oversimplified rule-based methods, and even RL-based methods are often suboptimal and unstable. To address this, we propose a cooperative multi-objective architecture called Multi-Objective Multi-Agent Deep Deterministic Policy Gradient (MOMA-DDPG), which estimates multiple reward terms for traffic signal control optimization using age-decaying weights. Our approach involves two types of agents: one focuses on optimizing local traffic at each intersection, while the other aims to optimize global traffic throughput. We evaluate our method using real-world traffic data collected from an Asian country's traffic cameras. Despite the inclusion of a global agent, our solution remains decentralized as this agent is no longer necessary during the inference stage. Our results demonstrate the effectiveness of MOMA-DDPG, outperforming state-of-the-art methods across all performance metrics. Additionally, our proposed system minimizes both waiting time and carbon emissions. Notably, this paper is the first to link carbon emissions and global agents in traffic signal control.
§ INTRODUCTION
Traffic signal control is a challenging real-world problem whose goal is to minimize the overall vehicle travel time by coordinating the traffic movements at road intersections. Existing traffic signal control systems in use still rely heavily on manually designed rules, which cannot adapt to dynamic traffic changes and cannot deal well with today's increasingly large transportation networks. Recent advances in reinforcement learning (RL), especially deep RL <cit.>, offer an excellent capability to work with high-dimensional data, where agents can learn a state abstraction and policy approximation directly from input states. This paper explores the possibility of applying RL to on-policy traffic signal control with fewer assumptions.
In the literature, different RL-based frameworks <cit.> have been proposed for traffic light control. Most of them <cit.> are value-based and can achieve convergence relatively easily. However, their vital problem is that they can only be applied to discrete action, state, and time spaces.[
A recent method <cit.> can handle value iterations in continuous actions, states, and time, but it has not been applied to traffic optimization yet.] For traffic optimization, this means the choice of the next traffic light phase is constrained and limited to pre-defined discrete cyclic sequences of red/green lights. Pre-defining time slots from which the agent simply determines actions is effective for optimization and simple for traffic control. However, the time slots in which each action is executed are fixed and cannot reflect the real requirements for optimizing traffic conditions. Moreover, a small change in the value function can greatly affect the policy decision. To make decisions in continuous spaces, recent policy-based RL methods <cit.> have become more popular in traffic signal control, so that a non-discrete phase duration can be inferred. However, their gradient estimation strongly depends on sampling and is not stable, and thus easily gets trapped in a non-optimal solution.
To bridge the gap between value-based and policy-based RL approaches, the actor-critic framework is widely adopted to stabilize the RL training process, where the policy structure is known as the actor and the estimated value function as the critic. The Deep Deterministic Policy Gradient (DDPG) method <cit.> learns a Q-function and a policy concurrently, using off-policy data and the Bellman equation to learn the Q-function, and then using the Q-function to learn the policy. DDPG retains the advantages of both the value-based and the policy-based method, and can directly learn a deterministic policy mapping states to actions. Thus, several actor-critic frameworks have been proposed for traffic signal control. For example, in <cit.>, DDPG was adopted to learn a deterministic policy mapping states to actions. However, it is a “local-agent" solution that is not optimized by trading off different local agents' requirements. More precisely, current RL-based solutions <cit.> are “locally" derived from a single agent or multi-agents. Each agent individually decides its policies and actions according to its own rewards, which often conflict with other agents and make traffic congestion more serious at other intersections. The above RL-based frameworks lack a global agent to coordinate all local agents and trade off their different requirements.
Another issue of traffic signal control is making decisions not only on which action to perform but also on how long to perform it. Only a few frameworks <cit.> pre-define several fixed time slots from which the agent can choose to determine the action period. However, such pre-defined solutions are less flexible than an on-demand solution for relieving traffic congestion.
This paper develops a COoperative Multi-objective Multi-Agent DDPG (COMMA-DDPG) framework for optimal traffic signal control. Current RL-based multi-agent methods use only local agents to search solutions and often produce conflicts to other agents. The novelty of this paper is to introduce a global agent to cooperate with all local agents by trading off their requirements to increase the entire throughput. To the best of our knowledge, this concept has not been explored in the literature for traffic signal control. Fig. 3 (in the supplement) shows an overview of our proposed COMMA-DDPG framework. Each local agent focuses on learning the local policy using the intersection clearance as reward. The global agent then optimizes the overall rewards measured by the total traffic waiting time. With the actor-critic framework, the global agent can optimize various information exchanges from local intersections so as to optimize the final reward globally. Our COMMA-DDPG can select the best policy to control the periodic phases of traffic signals that maximize throughput. The global agent is used only during the training stage and is no longer needed in the inference stage. It may be problematic when training agents over a larger road network (more than 10 intersections), since both local and global agents take the information of all intersections as input. To address this problem, for each local agent,
we adjust the global agent by sending information only from its nearby intersections as a training basis because intersections too far away are not so important.
Additionally, unlike other policy-based methods that can only return a fixed length from a predefined action pool, the COMMA-DDPG mechanism allows for determining not only the best policy but also a dynamic length for the next traffic light phase. Theoretical support for the convergence of our COMMA-DDPG approach is provided. Moreover, this paper is the first to establish a link between carbon emissions and global agents in traffic signal control. Experimental results show that the COMMA-DDPG framework significantly reduces the overall waiting time as well as CO_2 and carbon emissions. The main contributions of this paper are summarized as follows:
* We propose a COMMA-DDPG framework that can effectively improve the traffic congestion problem, reduce travel time, and thus increase the entire throughput of the roads.
* The global-agent design can trade off different conflicts among local agent and give guides to each local agent so that better policies can be determined.
* The COMMA-DDPG method can determine not only the best policy, but also a dynamic length of the next traffic light phase.
* The COMMA-DDPG method can reduce not only the waiting time, but also CO_2 and carbon emissions.
* Extensive experiments on real-world traffic data and an open benchmark <cit.> show that the COMMA-DDPG method achieves SoTA results for effective and efficient traffic signal control.
§ RELATED WORK
Traditional traffic control methods can be categorized into three classes: (1) fixed-time control <cit.>, (2) actuated control <cit.>, and (3) adaptive control <cit.>. They are mainly based on human knowledge to design appropriate cycle lengths and strategies for better traffic control. The manual tasks involved make parameter settings cumbersome and make it difficult to satisfy the requirements of different scenarios, including peak hours, normal hours, and off-peak hours. Fixed-time control is simple and easy, and has thus become the most commonly adopted method in traffic signal control. Actuated control determines traffic conditions using predetermined thresholds; that is, if the traffic condition (e.g., the car queue length) exceeds a threshold, a green light will be issued accordingly. Adaptive control methods, including <cit.>, determine the best signal phase according to current traffic conditions and thus can achieve more effective traffic optimization.
RL traffic control: Recent advances in RL shed light on the improvement of automatic traffic control. There are two main approaches to solving traffic signal control problems, i.e., value-based and policy-based. There is also a hybrid actor-critic (AC) approach <cit.>, which employs both value-based and policy-based searches. The value-based method first estimates the value (expected return) of being in a given state and then finds the best policy from the estimated value function. One of the most widely used value-based methods is Q learning <cit.>. The first Q-learning method applied to control traffic signals at street intersections can be traced back to <cit.>. However, in Q learning, a large table must be created and updated to store the Q values of each action in each state. Thus, it is both memory- and time-consuming and unsuitable for problems with complicated states and actions. Therefore, various RL-based methods <cit.> have been proposed for traffic signal control. Among them, the Deep Q-learning Network (DQN) <cit.> is often adopted to estimate the Q function. However, the max operator in DQN uses the same values both to select and to evaluate an action, which often leads to overestimated values. The double DQN <cit.> decouples the selection from the evaluation by using two networks to solve this overestimation problem. The flaw in value-based methods is that their quick convergence requires a discrete action space. Policy-based methods can directly optimize the desired policy with the policy gradient for fast convergence. Their disadvantage is that they perform round-based updates during the training process, so training can be long. The work of <cit.> has verified the superior fitting power of the deep deterministic policy gradient in a simplified traffic environment. AC approaches are the trend for traffic control.
RL methods can also be classified according to the adopted action schemes, such as: (i) setting the length of the green light, (ii) choosing whether to change phase, and (iii) choosing the next phase. The DQN and AC methods are suitable for action schemes (ii) and (iii), but not for setting the length of the green light, since the action space of DQN is discrete and a lot of computation is wasted for the AC method. DDPG <cit.> can handle continuous action spaces, which are more suitable for modeling the length of the green light. Thus, several DDPG-based RL frameworks <cit.> have been proposed for traffic signal control. However, they focus on only a single intersection. In practical environments, multi-agent deep deterministic policy gradient algorithms <cit.> are another good choice for traffic control, incorporating information from different agents for large-scale traffic scenarios. For example, the CityFlow algorithm <cit.> is proposed to control traffic signals in a city. However, the above multi-agent systems cannot provide a dynamic length of the next traffic light phase to inform drivers.
§ BACKGROUND AND NOTATIONS
The basic elements of a RL problem for traffic signal control can be formulated as a Markov Decision Process (MDP) mathematical framework of < S,A,T,R,γ >, with the following definitions:
* S denotes the set of states, which is the set of all lanes containing all possible vehicles. s_t ∈ S is a state at time step t for an agent.
* A denotes the set of possible actions, which is the duration of green light. In our scenarios, both duration lengths for a traffic cycle and a yellow light are fixed. Then, once the state of green light is chosen, the duration of a red light can be determined. At time step t, the agent can take an action a_t from A.
* T denotes the transition function, which stores the probability of an agent transiting from state s_t to s_t+1 if the action a_t is taken; that is, T(s_t+1|s_t,a_t ):S × A → S.
* R denotes the reward, where at time step t, the agent obtains a reward r_t specified by a reward function R(s_t,a_t ) if the action a_t is taken under state s_t.
* γ denotes the discount, which controls the importance of the immediate reward versus future rewards, and also ensures the convergence of the value function, where γ∈ [0,1).
At time-step t, the agent determines its next action a_t based on the current state s_t. After executing a_t, it will be transited to next state s_t+1 and receive a reward r_t (s,a); that is, r_t (s,a)= 𝔼[R_t|s_t=s,a_t=a], where R_t is named as the one-step reward. The way that the RL agent chooses an action is named policy and denoted by π. Policy is a function π (s) that chooses an action from the current state s; that is, π (s):S → A. Our goal is to find such a policy to maximize the future reward
G_t:
G_t=Σ_k=0^∞γ^kR_t+k.
A value function V(s_t) indicates how good the agent is in state s_t, i.e., the expected total return of the agent starting from s_t. If V(s_t) is conditioned on a given strategy π, it will be expressed by V^π(s_t); that is, V^π(s_t)=𝔼[G_t|s_t=s], ∀ s_t∈ S. The optimal policy π^* at state s_t can be found by
π^*(s_t)=arg max_πV^π(s_t),
where V^π(s_t) is the state-value function for a policy π. Similarly, we can define the expected return of taking action a in state s_t under a policy π denoted by a Q function:
Q^π(s_t,a_t)=𝔼[G_t|s_t=s,a_t=a].
The relationship between Q^π(s_t, a_t) and V^π (s_t) is
V^π(s) = ∑_a ∈ Aπ (a|s)Q^π(s,a).
Then, the optimal Q^*(s_t,a) is iteratively solved by
Q^*(s_t,a)=max_πQ^π(s_t,a).
With Q, the optimal policy π^* in state s_t can be found by:
π^*(s_t)=arg max_aQ^*(s_t,a).
Q^*(s_t,a) is the sum of two terms: (i) the instant reward after a period of execution in the state s_t and (ii) the discount expected future reward after the transition to the next state s_t+1. Then, we can use the Bellman equation<cit.> to express Q^*(s_t,a) as follows:
Q^*(s_t,a)=R(s_t,a)+γ𝔼_s_t+1[V^*(s_t+1)].
V^*(s_t) is the maximum expected total reward from state s_t to the end. It will be the maximum value of Q^*(s,a) among all possible actions. Then, V^* can be obtained from Q^* as:
V^*( s_t) = max_a Q^*( s_t,a),∀s_t∈ S.
Deep Q-Network (DQN): In <cit.>, a deep neural network is used to approximate the Q function, which enables the RL algorithm to learn Q well in high-dimensional spaces. Let Q_tar be the targeted true value which is expressed as Q_tar= r+γmax_a'Q(s',a';θ). In addition, let Q(s,a;θ) be the estimated value, where θ is the set of its parameters. We define the loss function for training the DQN as:
L(θ)=𝔼_s,a,r,s'[(Q_tar-Q(s,a;θ))^2].
In <cit.>, Q_tar is often overestimated during training and results in the problem of unstable convergence of the Q function.
In <cit.>, a Double DQN (DDQN) was proposed to deal with this unstable problem by separating the DDQN into two value functions, so that there are two sets of weights θ and ϕ for parameterizing the original value function and the second target network, respectively. The second DQN Q_tar with parameters ϕ is a lagged copy of the first DQN Q(s,q;θ) that can fairly evaluate the Q value as follows: Q_tar = r+γ Q(s',max_a'Q(s',a';θ);ϕ).
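For concreteness, the decoupled target can be computed as in the PyTorch sketch below, where q_net (parameters θ) selects the action and q_target_net (parameters ϕ) evaluates it; the network sizes, discount, and toy batch are placeholders, not the configuration used in this work.

```python
import torch
import torch.nn as nn

def ddqn_target(q_net, q_target_net, r, s_next, gamma=0.99, done=None):
    """Double-DQN target: r + gamma * Q_phi(s', argmax_a' Q_theta(s', a'))."""
    with torch.no_grad():
        best_a = q_net(s_next).argmax(dim=1, keepdim=True)            # selection (theta)
        q_eval = q_target_net(s_next).gather(1, best_a).squeeze(1)    # evaluation (phi)
        if done is not None:
            q_eval = q_eval * (1.0 - done)        # no bootstrapping at episode end
        return r + gamma * q_eval

# Toy networks: 16-dimensional state, 4 discrete actions.
q_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
q_target_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
target = ddqn_target(q_net, q_target_net, torch.rand(32), torch.randn(32, 16))
```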
Deep Deterministic Policy Gradient (DDPG):
DDPG is a model-free and off-policy framework which uses a deep neural network for function approximation. But unlike DQN which can only solve discrete and low-dimensional action spaces, DDPG can solve continuous action spaces. In addition, DDPG is an Actor-Critic method that has both a value function network (critic) and a policy network. The critic network used in DDPG is the same as the actor-critic network described before. DDPG is derived from DDQN <cit.> and works more robustly by creating two DQNs (target and now) to estimate the value functions. Thus, this paper adopts the DDPG method to learn concurrently the desired Q function and the corresponding policy.
§ METHOD
This paper proposes a cooperative, multi-objective architecture with age-decaying weights for traffic signal control optimization. It represents each intersection with a DDPG architecture, which contains a critic network and an actor network. The outcome of an action is the number of seconds of green light. The duration for a phase cycle (green, yellow, red) is different at different intersections but fixed at an intersection. Also, the duration of yellow light is the same and fixed for all intersections. Then, the duration of the red light can be directly derived once the duration of green is known.
The original DDPG uses off-policy data and the Bellman equation <cit.> to learn the Q-function, and then derives the policy. It interleaves learning an approximator to find the best Q^*(s,a) and also learning another approximator to decide the optimal action a^*(s), and so in a way the action space is continuous. The output of this DDPG is a continuous probability function to represent an action. In this paper, an action corresponds to the seconds of green light. Although DDPG is off-policy, we can mix the past data into the training set, making the distribution of the training set diverse by feeding current environment parameters to a traffic simulation platform such as TSIS <cit.> or SUMO <cit.> to provide on-policy data for RL training. However, since most of the data provided by other papers use only SUMO for experiments, for fair comparisons, we also use only SUMO for performance evaluations and ablation studies.
In a general DDPG, to increase the opportunities for the agent to explore the environment, random-sampling noise is added to the output action space. However, random perturbations also cause the agent to explore the environment blindly. In traffic signal control, most SoTA methods use multiple local agents to model different intersections. During training, the same mechanism of adding noise to the action model is used to make each agent explore the environment more. However, “increasing the whole throughput" is the common goal of all local agents. The strategy of adding noise to the action model decreases not only the effectiveness of learning but also the total throughput, since blind exploration makes local agents choose actions that conflict with other agents. This means that a cooperation mechanism among different local agents should be added to the DDPG method to increase the final throughput during the learning process. The main novelty of this paper is to introduce a cooperative learning mechanism with a global agent that prevents local agents from blindly exploring the environment, so that overall throughput and learning effectiveness can be significantly improved. The approach is decentralized since the global agent is used only during the training stage.
§.§ Cooperative DDPG Network Architecture
Most of the policy-based RL methods <cit.> use only local agents to perform RL learning for traffic control. The individual objectives of these local agents easily conflict with one another and can cause divergence during optimization. In this paper, a COoperative Multi-Objective Multi-Agent DDPG (COMMA-DDPG) framework for optimal traffic signal control is designed, where a local agent controls each intersection and a global agent cooperates with all local agents.
Fig. <ref> shows the two architectures of our proposed COMMA-DDPG mechanism used in the training and inference stages, respectively. In Fig. <ref>(a), during training, a local agent is created at each intersection and a global agent cooperates with all intersections.
The global agent optimizes the overall rewards and the local agent observes the traffic status from its corresponding intersection and changes the traffic signal accordingly. After training, as shown in Fig. <ref>(b), the global agent is no longer needed. Each local agent can directly change the traffic signal by observing all current traffic statuses from all intersections.
Details of the COMMA-DDPG algorithm are described in Algorithm 1. All the algorithms are detailed in the supplementary file. Although the DDPG method is off-policy, we use TSIS <cit.> and SUMO <cit.> to collect on-policy data for RL training. Details of the on-policy data collection process are described in the GOD (Generating On-policy Data) algorithm (see Algorithm 2 in the supplementary file). With this set of on-policy data, the parameters of the local and global agents are then updated by the LAU (Local Agent Updating) and GAU (Global Agent Updating) algorithms, respectively (also see the supplementary file). Let W_G^m represent the global agent's importance at the mth intersection. Then, the importance W_L^m of the mth local agent is 1-W_G^m.
For the mth intersection, the next action is predicted by the GOD and LAU algorithms, respectively, via an epsilon-greedy exploration scheme. The output seconds of the global agent and the local agent are compared based on W_G^m and W_L^m, and the one with the higher importance is chosen as the output seconds.
§.§ Generating On-policy Data
Before the RL-based training process starts, we perform a one-hour simulation with TSIS or SUMO to collect data (see Algorithm 2) and store them in the replay buffer B. Let B_m be the set of on-policy data collected for training the mth local agent. Then, B is the union of all B_m, i.e., B = (B_1, ..., B_m, ..., B_M). While interacting with the environment, we apply an epsilon-greedy scheme with decayed weights to the selection of actions; in particular, epsilon is gradually reduced from 0.9 to 0.1. To avoid training biases, at the tth training iteration, a time-decay mechanism decays W_G^m by the ratio (0.95)^t.
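The following sketch illustrates how we interpret this weighted, epsilon-greedy selection; the function names, the duration bounds, and the exact schedules are illustrative assumptions rather than the exact implementation:

import random

def select_green_seconds(local_sec, global_sec, w_g, epsilon, d_min=10.0, d_max=90.0):
    # With probability epsilon, explore a random green duration; otherwise
    # take the suggestion of whichever agent currently has higher importance.
    if random.random() < epsilon:
        return random.uniform(d_min, d_max)
    w_l = 1.0 - w_g                                  # importance of the local agent
    return global_sec if w_g > w_l else local_sec

# Epsilon decays from 0.9 towards 0.1 and the global weight decays by 0.95 per iteration.
epsilon, w_g = 0.9, 0.8
for t in range(5):
    sec = select_green_seconds(local_sec=35.0, global_sec=50.0, w_g=w_g, epsilon=epsilon)
    epsilon = max(0.1, epsilon * 0.95)
    w_g *= 0.95
    print(t, round(sec, 1), round(w_g, 3))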
§.§ Local Agent
In our scenario, a fixed duration of a traffic signal change cycle is assigned to each intersection. Furthermore, Y seconds are reserved for the yellow light. Thus, we only need to model the phase duration of the green light; the phase duration of the red light can then be derived directly. At each intersection, a DDPG-based architecture is constructed to model its local agent for traffic control. To describe this local agent, some definitions are given below.
* The duration of traffic phase ranges from D_min to D_max seconds.
* Stopped vehicles are defined as those vehicles whose speeds are less than 3 km/hr.
* The state at an intersection is defined by a vector in which each entry records the number of stopped vehicles of each lane at this intersection at the end of the green light, and current traffic signal phase.
The reward for evaluating the quality of a state at an intersection is defined as the degree of clearance of this state, i.e., the number of vehicles remaining at the intersection when the green-light period ends. There are two cases in which a reward is given to qualify a state: (1) the green light ends but some vehicles remain, and (2) the green light is still on but there is no vehicle. There is no reward or penalty in other cases. Let N_m,t denote the number of vehicles at intersection m at time t, and let N_max be the maximum traffic flow. This paper uses the clearance degree as the reward for the mth local agent. When the green light ends and there is no vehicle, a pre-defined maximum reward R_max is assigned to the mth local agent. If some vehicles remain, a penalty proportional to N_m,t is assigned instead. More precisely, for Case 1, the reward r_m,t^local for intersection m is defined as:
Case 1: If the green light ends but some vehicles remain,
r_m,t^local =
R_max, if N_m,t/N_max ≤ 1/N_max;
-R_max · N_m,t/N_max, otherwise.
For Case 2, if there is no traffic but a long green period still remains, vehicles on the crossing road have to stop and wait until this green light ends. To avoid this situation, a penalty should be given to the local agent. Let g_m,t denote the remaining green-light time (in seconds) when there is no traffic flow at the mth intersection at time step t, and let G_max be the largest duration of the green light. Then, the reward function for Case 2 is defined as:
Case 2: If there is no traffic but the green light is still on,
r_m,t^local =
R_max, if g_m,t/G_max ≤ 1/G_max;
-R_max · g_m,t/G_max, otherwise.
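A minimal sketch of the two reward cases follows; the trigger conditions (green phase just ended versus green still on) and all numeric values are placeholder assumptions on our part:

def local_reward(n_vehicles, n_max, g_remaining, g_max, r_max=1.0):
    # Case 1: green light just ended; reward full clearance, penalize leftovers.
    if g_remaining == 0:
        ratio = n_vehicles / n_max
        return r_max if ratio <= 1.0 / n_max else -r_max * ratio
    # Case 2: no traffic but green time remains; penalize wasted green time.
    if n_vehicles == 0:
        ratio = g_remaining / g_max
        return r_max if ratio <= 1.0 / g_max else -r_max * ratio
    return 0.0  # no reward or penalty in all other cases

print(local_reward(n_vehicles=5, n_max=40, g_remaining=0, g_max=60))   # Case 1 penalty
print(local_reward(n_vehicles=0, n_max=40, g_remaining=20, g_max=60))  # Case 2 penalty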
Detailed architectures for local agents are shown in the supplementary file (see Fig. 1 in Section C).
Its inputs are the number of vehicles stopped at the end of the green light in each lane, the remaining green-light seconds, and the current traffic-signal phases of all intersections. Thus, the input dimension of each local critic network is (2M+∑ _m = 1^MN_lane^m), where M denotes the number of intersections and N_lane^m is the number of lanes at the mth intersection. A hyperbolic tangent is used as the activation function to normalize all input and output values. Two fully connected hidden layers are used to model the Q-value. The output is the expected future return of taking this action in the given state.
The architecture of the local actor network is shown in the supplementary file (see Fig. 1(b) in Section C).
The inputs used to model this network include the numbers of stopped vehicles at the end of the green light at each lane, and current traffic signal phases of all intersections. Thus, the dimension for each local actor network is (M+∑ _m = 1^MN_lane^m).
Let θ ^Q_m and θ ^μ_m denote the sets of parameters of the mth local critic and actor networks, respectively. To train θ ^Q_m and θ ^μ_m, we sample a random minibatch of N_b transitions (S_i,A_i,R_i,S_i+1) from B, where
* each state S_i is an M× 1 vector containing the local states of all intersections;
* each action A_i is an M× 1 vector containing the seconds of current phase of all intersections;
* each reward R_i is an M× 1 vector containing the rewards obtained from each intersection after performing A_i at the state S_i. The mth entry of R_i is the reward of the mth intersection after performing A_i.
Let y_i^m denote the target value for the mth intersection, computed by the mth target critic network after performing A_i. Based on y_i^m, the loss functions for updating θ^Q_m and θ^μ_m are defined, respectively, as follows:
L_critic^m=1/N_b∑_i=1^N_b(y_i^m-Q(S_i,A_i|θ_m^Q))^2 and L_actor^m=-1/N_b∑_i=1^N_bQ(S_i,μ (S_i|θ_m^μ)|θ_m^Q).
With θ^Q_m and θ^μ_m, the parameters θ^Q'_m and θ^μ'_m of the target networks are softly updated as follows:
θ_m^Q'← (1-τ) θ_m^Q+τθ_m^Q' and θ_m^μ'← (1-τ) θ_m^μ+τθ_m^μ'.
The parameter τ is set to 0.8 for updating the target network. Details to update the parameters of local agents are described in Algorithm 3 (see the supplementary file).
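The per-minibatch update of one local agent can be sketched with a generic DDPG-style routine; the network classes, the optimizers, and the exact form of the target y_i^m (here the standard one-step bootstrapped target) are our assumptions rather than the exact implementation of Algorithm 3:

import torch
import torch.nn as nn

def soft_update(target, source, tau=0.8):
    # theta' <- (1 - tau) * theta + tau * theta'
    for tp, sp in zip(target.parameters(), source.parameters()):
        tp.data.copy_((1.0 - tau) * sp.data + tau * tp.data)

def update_local_agent(critic, actor, critic_target, actor_target,
                       batch, gamma, critic_opt, actor_opt, tau=0.8):
    S, A, R, S_next = batch               # minibatch tensors of states, actions, rewards, next states
    with torch.no_grad():                 # TD target computed from the target networks
        y = R + gamma * critic_target(S_next, actor_target(S_next))
    critic_loss = nn.functional.mse_loss(critic(S, A), y)   # L_critic
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    actor_loss = -critic(S, actor(S)).mean()                # L_actor
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    soft_update(critic_target, critic, tau)
    soft_update(actor_target, actor, tau)
    return critic_loss.item(), actor_loss.item()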
To make the output action no longer blindly explore the environment, we introduce a global agent to explore the environment more precisely. The global agent controls the total waiting time at all intersections. The details of the global critic and actor networks are shown in the supplementary file (Fig.2 in Section C), where (a) is for the global critic network and (b) is for the global actor network. For the mth intersection, we use V_m to denote the number of total vehicles, and T_m,n^w,i to be the waiting time of the nth vehicle at the time step i. Then, the total waiting time across the whole site is used to define the global reward as follows: r_i^G=-1/M∑_m=1^M∑_n=1^V_mT_m,n^w,i. Let θ^Q_G and θ^μ_G denote the parameters of the global critic and actor networks, respectively. To train θ^Q_G and
θ^μ_G, we sample a random minibatch of N_b transitions (S_i,A_i,R_i,S_i+1) from B. Let y_i^G denote the target value obtained from the global target critic network after performing A_i. Then, the loss function for updating θ^Q_G is defined as follows:
L_critic^G=1/N_b∑_i=1^N_b(y_i^G-Q_G(S_i,A_i|θ^ Q_G))^2.
It is noticed that the output of this global critic network is a
scalar value, i.e., the predicted total waiting time across the entire site. To train θ^μ_G, we use the loss function:
L_actor^G=- 1/N_b∑_i=1^N_bQ_G(S_i,μ_G (S_i|θ^μ_G)|θ^Q_G).
In addition, the outputs of the global actor network are an M× 1 vector to output the suggested actions at all intersections, and the weight W_G^m to represent the importance of the mth intersection of the global agent.
All local agents and the global agent are modeled by a DDPG network. Details to update the global agent are described in Algorithm 4 (see the supplementary file). We use the TSIS and SUMO simulation platforms to generate various small or large vehicles moving on the roads through intersections.
§.§ Carbon Emission Reduction
Another important issue in traffic signal control is reducing carbon emissions. This paper adopts the HBEFA formula built into the traffic flow simulation software SUMO to record and output each vehicle's fuel consumption, carbon emission, and other data in real time, since the HBEFA formula is also applicable to calculations in European countries. The equations for CO are given below; the parameters are explained in the supplementary material, and the CO2 formula has the same form with CO replaced by CO2.
CO_move = (CO_engine*V_engine*FC*M_fuel)/(M_air*1000),
CO_stop = (CO_engine*V_engine*r_stop*t_stop)/(3600*M_air),
CO = CO_move+CO_stop.
From the HBEFA formula, the main influencing factors are the distance traveled (v) and the waiting time (t_stop). Distance affects the fuel consumption (FC) and remains fixed in our experiments, so the waiting time is the main source of differences. Examining Eqs. (<ref>) and (<ref>), it is clear that reducing t_stop also reduces carbon emissions.
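Transcribing the two equations above into code makes the dependence on t_stop explicit; all parameter values below are placeholders, and the units are those listed in the supplementary table:

def co_emission(co_engine, v_engine, fc, m_fuel, m_air, r_stop, t_stop):
    # Moving part of the emission (distance-dependent, via fuel consumption FC)
    co_move = (co_engine * v_engine * fc * m_fuel) / (m_air * 1000.0)
    # Idling part of the emission (grows with the stopped time t_stop)
    co_stop = (co_engine * v_engine * r_stop * t_stop) / (3600.0 * m_air)
    return co_move + co_stop

# A shorter waiting time t_stop directly reduces the idling term.
print(co_emission(co_engine=0.5, v_engine=1.6, fc=8.0, m_fuel=114.0,
                  m_air=28.97, r_stop=0.3, t_stop=120.0))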
§ EXPERIMENTAL RESULTS
Our traffic data consist of visual traffic monitoring sequences from five consecutive intersections during the morning rush hour in a midsize city in Asia. To facilitate comparison with other SoTA papers, we used the SUMO traffic simulation software for our simulations. We take a fixed-time traffic-light control scheme, evaluated by the total waiting time over one hour, as the baseline for comparisons. We also performed ablation studies on the COMMA-DDPG approach with and without the global agent. In addition, we evaluated an open benchmark <cit.> to make fair comparisons with other SoTA methods.
To train our model, we first used the fixed-time control model to pretrain our COMMA-DDPG model. Fig. 4 (in the supplement) shows the convergence behavior of the waiting time for the different methods. The fixed-time control model performs better than the MA-DDPG model, but neither converges. Our proposed COMMA-DDPG method gradually and robustly converges to a local minimum that is better than both of the other two methods.
Table <ref> shows the throughput comparisons among the fixed-time, MA-DDPG, PPO, TD3, and our COMMA-DDPG schemes at the five observed intersections. Thanks to the global agent, the throughput obtained by our method is much higher than that of the baseline and the MA-DDPG method. When training agents over a larger road network (more than 10 intersections), it becomes problematic for both local and global agents to take the information of all intersections as input. To reflect real situations, we adjust the global agent to take only the 8 nearby intersections (up, down, left, right, upper left, lower left, upper right, lower right) as its training basis, so that each local agent can still access nearly global information.
To make fair comparisons with other SoTA methods, we use an RL testbed environment for traffic signal control <cit.>. It is based on the well-established Simulation of Urban Mobility (SUMO) traffic simulator and includes single- and multi-agent signal-control tasks based on realistic traffic scenarios from SUMO. To allow easy deployment of standard RL algorithms, an OpenAI Gym interface is also provided, along with open-source data and code for SoTA RL-based signal-control algorithms. Five SoTA methods are provided for performance evaluation: IDQN <cit.>, IPPO <cit.>, FMA2C <cit.>, MPLight <cit.>, and MPLight-full <cit.>. MPLight is a phase-competition modeling method. IDQN and IPPO are decentralized algorithms that can effectively learn instance-dependent features. FMA2C is a large-scale multi-agent reinforcement learning method for traffic signal control. MPLight-full is similar to the MPLight implementation, but the sensing information used by IDQN is appended to the existing pressure state. The state and reward functions were set according to the definitions of each algorithm. Five reward metrics are adopted in this paper for performance comparisons: delay, speed, time loss, system travel time, and total waiting time at intersections. Table <ref> shows the performance comparisons among these methods and our COMMA-DDPG scheme when only two intersections are included. IDQN <cit.> is a set of independent DQN agents, one per intersection, each with convolutional layers for lane aggregation. IPPO uses the same deep neural network as IDQN except for the output layer, which is constructed from a set of polynomial functions. IPPO performs better than IDQN in “delay” but much worse in waiting time due to under-fitting of the polynomial functions. Because IDQN and IPPO are formed by independent DQN agents, they perform worse than multi-agent methods such as FMA2C and MPLight in the “time loss”, “travel time”, and “waiting time” categories. MPLight <cit.> is a decentralized deep reinforcement learning method that uses the concept of pressure to coordinate multiple intersections. It outperforms IDQN <cit.> and IPPO <cit.> in the “speed”, “time loss”, “travel time”, and “waiting time” categories, but performs worse than FMA2C and our method. FMA2C <cit.> is a multi-agent RL method that overcomes the scalability issue by distributing the global control to local RL agents. It uses a hierarchy of managing agents to enable cooperation between
signal-control agents (one per intersection). However, as described in <cit.>, it requires many more training episodes to converge than IDQN and
MPLight. It outperforms all the other methods (IDQN, IPPO, MPLight, MPLight-full) in all metrics. Our COMMA-DDPG method includes a global agent to train each local agent to make better actions during the training stage. Since the global agent is not included during inference, it is also a decentralized RL-based method for traffic signal control. With the help of the global agent, each local agent in our COMMA-DDPG architecture can choose actions that do not conflict with other agents, yielding better traffic signal control. Clearly, our COMMA-DDPG method outperforms all SoTA methods in all categories; the results in the “delay”, “travel time”, and “waiting time” categories are particularly strong.
As more intersections are added, the stability and generality of our method become evident. Table <ref> shows the performance comparisons among different SoTA methods when five intersections are included in the road network. In this case, IPPO still shows significant instability and performs worse in almost all performance metrics, especially in “time loss”.
MPLight-full performs better than the IPPO method. IDQN <cit.> performs better in “speed” and “waiting time”. FMA2C outperforms the other baselines in many performance metrics such as “delay”, “speed”, “travel time”, and “waiting time”, but still performs worse than our method. Even with five intersections, our COMMA-DDPG method still outperforms all SoTA methods. In Table 5, we show results for the 16-intersection condition, including travel time, average waiting time, speed, CO, CO2, and fuel consumption. The design of this map is taken from real life, bringing several larger junctions together into a 4×4 checkerboard map. Finally, our method performs better than the variant without the global agent, and according to the HBEFA formula built into SUMO, emissions such as CO2 are also reduced.
§ CONCLUSION
This paper proposed a novel cooperative RL architecture to
handle cooperation problems by adding a global agent. Since
the global agent knows the information of all intersections, it can guide the local agents to take better actions during the training process. Thus, the local agents do not need random noise to blindly explore the environment. Since RL training requires a large amount of data, we plan to incorporate data augmentation in the future so that training can be more efficient. A weakness of our method is that the information of all local agents needs to be shared with the other agents. In the near future, COMMA-DDPG will be evaluated under real road conditions.
§ APPENDIX FOR CONVERGENCE PROOF
We additionally put the proof and the experimental data of more intersections in the appendix.
§ SUPPLEMENT
§.§ Appendix for Convergence Proof
In this section, we prove that the value function in our method indeed converges.
A metric space <M,d> is complete (or Cauchy) if and only if every Cauchy sequence in M converges to a point of M. In other words, in a complete metric space, for any sequence of points a_1, a_2, ⋯ ∈ M, if the sequence is Cauchy, then it converges to a limit in M:
lim_n →∞a_n∈ M.
Let (X,d) be a complete metric space. Then, a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d(T(x),T(y))<qd(x,y), ∀ x,y ∈ X.
Let (X,d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed point x^* in X, i.e., T(x^*) = x^*.
Let A be a complex n× n matrix with entries a_ij. For i ∈ {1,2,...,n}, let R_i be the sum of the absolute values of the non-diagonal entries in the i^th row:
R_i = ∑_j=1, j≠ i^n |a_ij|.
Let D(a_ii,R_i) ⊆ ℂ be the closed disc centered at a_ii with radius R_i. Then every eigenvalue of A lies within at least one of the Gershgorin discs D(a_ii,R_i).
We claim that the RL value function indeed converges, and we apply this result to our traffic-control setting.
The value function assigns a value to each state and is defined as follows:
V^π(s) = ∑_a π(a|s) ∑_s',r p(s',r|s,a) [r + γ V^π(s')]
       = ∑_a π(a|s) ∑_s',r p(s',r|s,a) r + ∑_a π(a|s) ∑_s',r p(s',r|s,a) γ V^π(s').
Since the immediate reward is determined, it can be regarded as a constant term relative to the second term. Assuming that the state is finite, we express the state value function in matrix form below.
Set the state set S={S_0,S_1,⋯,S_n}, V^π={ V^π(s_0), V^π(s_1), ⋯ , V^π(s_n) }^T, and the transition matrix is
P^π =
( 0          P^π_0,1    ⋯   P^π_0,n
  P^π_1,0    0          ⋯   P^π_1,n
  ⋮          ⋮          ⋱   ⋮
  P^π_n,0    P^π_n,1    ⋯   0 ),
where P^π _i,j = ∑_a π (a|s_i)p(s_j,r|s_i,a). The constant term is expressed as R^π={ R_0, R_1, ⋯, R_n}^T. Then we can rewrite the state-value function as:
V^π=R^π+λ P^πV^π.
Above we defined the state value function vector as V^π={ V^π(s_0), V^π(s_1), ⋯, V^π(s_n) }^T, which belongs to the value function space V. We consider V to be the full n-dimensional vector space and define the metric on this space to be the infinity norm:
d(u,v)=∥ u-v ∥_∞=max_s ∈ S|u(s)-v(s)|,∀ u,v ∈ V
Since <V,d> is the full space of vectors, V is a complete metric space. Then, the iteration result of the state value function is u_new=T^π(u)=R^π+λ P^πu.
We can show that it is a contraction mapping.
d(T^π(u),T^π(v)) = ∥ (R^π + λ P^π u) - (R^π + λ P^π v) ∥_∞
= ∥ λ P^π (u - v) ∥_∞
≤ λ ∥ P^π ∥_∞ ∥ u - v ∥_∞.
From Theorem 2, every eigenvalue of P^π lies in the disc centered at the origin with radius 1, so the maximum absolute value of the eigenvalues is at most 1. Moreover, since each row of P^π sums to at most 1, the induced norm satisfies ∥ P^π ∥_∞ ≤ 1, and therefore
d(T^π(u),T^π(v)) ≤ λ ∥ P^π ∥_∞ ∥ u - v ∥_∞
≤ λ ∥ u - v ∥_∞
= λ d(u,v).
By Theorem 1, the iteration in Eq. (2) therefore converges to the unique fixed point V^π.
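The contraction argument can also be checked numerically. The sketch below builds a random row-stochastic matrix with zero diagonal and iterates the fixed-point map above; all quantities are synthetic:

import numpy as np

rng = np.random.default_rng(0)
n, lam = 6, 0.9
P = rng.random((n, n))
np.fill_diagonal(P, 0.0)
P = P / P.sum(axis=1, keepdims=True)       # rows sum to 1, zero diagonal
R = rng.random(n)

V = np.zeros(n)
for _ in range(200):
    V = R + lam * P @ V                    # T^pi(V) = R^pi + lambda * P^pi * V

V_star = np.linalg.solve(np.eye(n) - lam * P, R)
print(np.max(np.abs(V - V_star)))          # distance to the unique fixed point (essentially zero)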
§.§ Algorithm
§.§ Experimental Table
Table <ref> presents performance comparisons using ten intersections to construct the road networks. Among all state-of-the-art (SoTA) methods, IPPO exhibits significant instability and performs poorly. When evaluating a subset of intersections, FMA2C outperforms other methods. However, as the number of intersections increases, MPLight-related methods demonstrate their superiority in traffic signal control. For instance, although the "MPLight-full" method initially performs worse than other methods with a smaller number of intersections, it surpasses most SoTA methods in various performance categories. Except for our proposed method, it achieves the best performance across categories such as "Delay," "Speed," "Time loss," and "Waiting time." Notably, leveraging a global agent, our COMMA-DDPG method outperforms all SoTA methods in all performance categories.
Table <ref> presents a comparison after processing the global agents in parallel: it reports results for 169 intersections, with each global agent covering a 5-by-5 block of intersections (as shown in Figure <ref>). In our simulation, some intersections are designed as T-shaped or I-shaped intersections, representing real-world scenarios where only left-right or up-down movements are allowed. Specifically, there are 9 I-shaped intersections, and their corresponding left and right (or upper and lower) intersections become T-junctions. Due to the unique shape of these intersections, some positions in the 5-by-5 global-agent grid are missing; we simply assign a value of zero to these vacant positions.
Table <ref> provides the unit representation of certain parameters used in the formula to calculate carbon emissions referenced in this article. The variable CO can be replaced with CO2 to calculate CO2 emissions. The formula consists of the following parameters:
* CO_engine: CO emission from the vehicle engine in the driving state.
* V_engine: Exhaust volume of the engine.
* FC: Fuel consumption of the vehicle.
* M_fuel: Molecular weight of the fuel.
* M_air: Molecular weight of the air.
* r_stop: Average power of the vehicle when it is stopped.
* t_stop: Duration of time when the vehicle is stopped.
It should be noted that, except for t_stop, which is influenced by our experiment, the remaining parameters are not affected as long as the same traffic flow is used for simulation.
|
http://arxiv.org/abs/2306.04778v1
|
20230607205350
|
Loss Functions for Behavioral Game Theory
|
[
"Greg d'Eon",
"Sophie Greenwood",
"Kevin Leyton-Brown",
"James Wright"
] |
cs.LG
|
[
"cs.LG",
"cs.GT"
] |
Loss Functions for Behavioral Game Theory
Greg d'Eon
University of British Columbia
Sophie Greenwood
University of British Columbia
Kevin Leyton-Brown
University of British Columbia
James R. Wright
University of Alberta
================================================================================================================================================================================================================================================
Behavioral game theorists all use experimental data to evaluate predictive models of human behavior.
However, they differ greatly in their choice of loss function for these evaluations, with error rate, negative log-likelihood, cross-entropy, Brier score, and L2 error all being common choices.
We attempt to offer a principled answer to the question of which loss functions make sense for this task, formalizing desiderata that we argue loss functions should satisfy.
We construct a family of loss functions, which we dub “diagonal bounded Bregman divergences”, that satisfy all of these axioms and includes the squared L2 error.
In fact, the squared L2 error is the only acceptable loss that is relatively commonly used in practice;
we thus recommend its continued use to behavioral game theorists.
§ INTRODUCTION
Classical economic models often fail to describe human behavior: e.g.,
people often choose dominated actions <cit.>, reason incorrectly about probabilities close to 0 and 1 <cit.>, and fail to account for others' strategic decision making <cit.>.
In response to such failures, behavioral economics aims to learn predictive models of human behavior, fitting models to datasets describing actual human responses to strategic situations.
Such models offer cognitive scientists tools for learning about how humans think, especially when confronted with economic or strategic choices, and can help the designers of economic systems to tune their designs to perform well in the face of realistic behavior.
However, evaluating the quality of such a model on a dataset requires a loss function.[So does fitting the parameters of such a model from data. However, directly minimizing the test loss on a training set can cause a model to overfit, leading it to generalize poorly. To avoid overfitting, it is typical instead to minimize some different loss function on the training set—for example, replacing the loss with a smooth or convex proxy, or regularizing to prefer simpler models <cit.>.
We set these issues aside in what follows, focusing on the appropriate loss for practitioners to aim to minimize on unseen test data.]
Researchers working in behavioral game theory have made a wide variety of different choices about precisely which loss function to use for such evaluations, with error rate, negative log-likelihood, cross-entropy, and (at least two notions of) mean-squared error all being common choices.
Clearly, the choice of loss function is a substantive one: different losses will disagree about the quality of a prediction.
This leads us to ask: which loss function(s) should behavioural game theorists use?
In this paper, we attempt to answer this question with a first-principles argument, arguing that loss functions should satisfy five key axioms.
The first two, which we call alignment axioms, ensure that the loss function induces a correct preference ordering among behavioral predictions.
These axioms, sample Pareto-alignment and distributional Pareto-alignment, ensure that the loss function always penalizes predictions that are clearly worse (on a given dataset or in expectation under draws from a generating distribution, respectively).
Our remaining three axioms impose properties on our loss functions that make them more interpretable, better relating the quality of a prediction to the numerical value of the loss.
Exchangeability requires that the loss be invariant to the order in which observations are made; counterfactual Pareto-regularity ensures that the loss appropriately respects changes in the data, and zero minimum gives the loss an interpretable optimum by ensuring that a perfect prediction receives a loss of 0.
We show that it is possible to satisfy all of these axioms: in fact, we identify an entire family of loss functions that do so, which we dub “diagonal bounded Bregman divergences”.
Exactly one widely used loss function belongs to this set, the squared L2 error between the predicted and empirical distributions; we show how each of the other commonly used loss functions violates at least one axiom.
In particular, the entire class of scoring rules,[The term “scoring rule” is overloaded in the literature. Some authors <cit.> use it to refer to any arbitrary loss function. In this work, we use the more restrictive definition that a scoring rule computes a loss for each individual observation, then aggregates these losses by taking a mean (<Ref>); this definition is also standard <cit.>.]
a class of popular loss functions with celebrated alignment properties, all fail our interpretability axioms.
(However, every diagonal bounded Bregman divergence is equivalent to some scoring rule, up to an additive translation. This means that some scoring rules can safely be used as objective functions in training if their numerical values do not need to be interpreted, and conversely that each scoring rule can be translated into a more interpretable diagonal bounded Bregman divergence inducing the same preferences over predictions.)
In the end, since our work gives no reason to prefer one diagonal bounded Bregman divergence over another, we recommend that behavioral game theory researchers use squared L2 error to evaluate models.
§.§ Related Work
Before we begin, we review related work on loss functions for evaluating probabilistic predictions through the lenses of both statistics and economics.
The statistician's view: the likelihood principle.
It might seem that the problem of choosing a loss function is a straightforward application of statistical inference.
The ingredients are right: given a dataset and a model class that induces a set of probability distributions, we seek to understand how well each distribution describes the data.
Then, the standard statistics textbook argument is that we should use the likelihood of the data to evaluate each of these predicted distributions.
This argument is known as the “likelihood principle”<cit.>: if the data was generated by one of the predicted distributions, then likelihood is a sufficient statistic for this distribution, containing all of the information about the true generating distribution.
Should we then use likelihood to evaluate behavioral models' predictions?
The catch is that this argument relies on the assumption that the model class is “well-specified”, containing a model that outputs the true generating distribution.
This is not usually the case in behavioral economics, where it is common to evaluate simple, low-parameter (or 0-parameter) models that aim to approximate human behavior rather than to predict it perfectly.
For example, behavioral game theorists often compare their models to the classical game theoretic prediction of Nash equilibrium, which often places 0 probability on actions that humans sometimes play, and so cannot have been the true generating distribution.
Negative log likelihood assigns a loss of ∞ to this prediction, making this comparison uninformative.
We elaborate further on the problem of comparing imperfect (“misspecified”) model classes when presenting our alignment axioms.
The economist's view: scoring rules.
Another closely related problem is that of evaluating experts on their probabilistic forecasts of future events.
Prior work aims to perform these evaluations with scoring rules<cit.>, a class of loss functions that evaluate predictions independently on each observation in a dataset.
A large focus of this literature has been finding axiomatic characterizations of particular scoring rules.
These characterizations agree that losses should be proper—an axiom that we refer to in our analysis as “distributionally proper”—but diverge beyond this point:
negative log likelihood is the only proper scoring rule that satisfies a locality axiom <cit.>,
and two different neutrality axioms characterize Brier score <cit.> and the spherical score <cit.>.
We take inspiration from these axiomatic analyses, but our work differs in two important ways.
First, we do not attempt to characterize a single, ideal loss function; we are only concerned with critical problems that arise when training and evaluating behavioral models.
We thus aim to propose axioms that only address these critical problems, without being concerned that these axioms leave us with an entire class of loss functions.
Second, in many applications of scoring rules, it is only possible to observe an outcome once—for example, when evaluating weather, climate, political events, energy, and healthcare forecasts <cit.>.
This contrasts sharply with behavioral economics, where it is common to run laboratory experiments with dozens or hundreds of participants, obtaining many samples of how a human reasons in exactly the same strategic situation.
Interpretability for other tasks.
Our axioms are concerned with evaluating individual predictions.
<cit.> tackle a different, related problem of interpreting the performance of a model class, considering the cross-validation performance of a training algorithm that selects a model from this class.
They formalize a notion of the completeness of a training algorithm, giving a score of 100% to an algorithm that gets the best possible cross-validation performance and 0% to a baseline algorithm (for instance, outputting a uniform random prediction on any training set).
Their work complements ours: their measure of completeness can be applied to any choice of loss function, but they make no claims about how this loss should behave on individual datasets, nor do they apply any analogue of our alignment axioms.
Thus, we recommend that researchers using cross-validation to evaluate a model class should apply completeness to a base loss that satisfies our soundness axioms.
§ EXISTING LOSSES FOR BEHAVIORAL GAME THEORY
We consider the task of evaluating predictive models of human behavior when faced with a discrete set of options.
This task is central to both decision theory and behavioral game theory, where agents are offered a choice over a set of lotteries or actions, respectively.
In both settings, a researcher's task is to predict the human distribution of choices, which has elements of randomness due both to variation between people and to inherent randomness in individual people's decision making.
Generally, researchers collect data and evaluate models on many different decision-making settings
at once, reporting a model's performance across these settings in aggregate.
In this work, we focus on the simpler problem of evaluating a model on a single decision-making setting; of course, this is an essential first step toward the more general problem.
Accordingly, we use the terms “model” and “prediction” interchangeably, as each model only makes a prediction on a single setting.
We model a single decision-making setting as follows.
Let A = {1, …, d} be a fixed set of choices available to the decision maker being modelled (e.g., experiment participants), and let Δ(A) be the set of distributions over these choices, i.e., the (d-1)-dimensional simplex.
We assume that there exists a fixed but unknown distribution p ∈Δ(A) of human choices, where p captures both sources of randomness just discussed.
An analyst can collect a dataset consisting of n independent, identical draws from p, which we denote y ∼ p^n, representing actions taken by distinct actors (for example, different participants in a psychology experiment).
We denote the set of all such datasets by (A) = ⋃_n=1^∞ A^n.
The analyst seeks to choose a model from some model class, which induces a set of predicted distributions ℱ⊆Δ(A), that is good at predicting the distribution of human behavior.
For instance, ℱ might be the set of predictions that a parametric model makes in a given setting for various instantiations of its parameters.
To make their choice, the analyst relies on a loss function ℓ: Δ(A) ×(A) → ℝ representing preferences over these predictions: that is, ℓ(f, y) < ℓ(g, y) if and only if f is a better description of the data than g.[Note that objective functions can be phrased in both “positive” and “negative” senses: for example, it is equivalent to maximize accuracy or minimize error rate. In this paper, we use the latter, “negative” sense.]
We pause to define some additional notation.
For any dataset y ∈(A), let n(y) denote the number of observations in y (or simply n, when y is clear from context), and let p̅(y) ∈Δ(A) be its empirical distribution: that is, for all a ∈ A, p̅(y)_a = ∑_i=1^n(y)1_{y_i = a}/n(y)
For any action a ∈ A, let e_a ∈Δ(A) denote the point mass distribution on a.
Finally, for any finite set X, let Π(X) be the set of its permutations π: X → X.
§.§ Common Loss Functions
While behavioral game theorists broadly take this approach of evaluating their models with some loss function, they largely disagree about precisely which loss function to use;
in fact, it is not uncommon for a single paper to use multiple different losses while analyzing different experiments.
To illustrate this disagreement, we give seven examples of losses that are common in the literature.
First, one common choice is the error rate <cit.>.
This choice is especially common when ℱ consists only of deterministic predictions that assign probability to only one action.
A somewhat related alternative is L1 error, which is also sometimes called “mean absolute deviation” <cit.>.
ℓ_ER(f, y) = ∑_a=1^d p̅(y)_a (1 - f_a),
ℓ_L1(f, y) = ∥ f - p̅(y) ∥_1 = ∑_a=1^d |f_a - p̅(y)_a|.
These two losses are attractive because of their clearly defined scale, with a loss of 0 being achieved by a prediction that never makes mistakes (error rate) or matches the data perfectly (L1 error), and a maximum loss of 1 or 2, respectively, by a prediction that is never correct.
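For concreteness, both losses can be computed directly from the empirical distribution; the helper names below are ours:

import numpy as np

def empirical_dist(y, d):
    # p_bar(y): empirical distribution of a dataset of actions in {0, ..., d-1}
    return np.bincount(y, minlength=d) / len(y)

def error_rate(f, p_bar):
    return float(np.sum(p_bar * (1.0 - f)))

def l1_error(f, p_bar):
    return float(np.sum(np.abs(f - p_bar)))

y = np.array([0] * 6 + [1] * 4)                  # 6 players defect, 4 cooperate
p_bar = empirical_dist(y, d=2)                   # (0.6, 0.4)
print(error_rate(np.array([0.6, 0.4]), p_bar))   # 0.48
print(l1_error(np.array([0.7, 0.3]), p_bar))     # 0.2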
Next, several common losses are based on the likelihood of the data, given the prediction.
Perhaps the most common choice of loss in all of behavioral game theory is negative log-likelihood <cit.>.
ℓ_NLL(f, y) = -n ∑_a=1^d p̅(y)_a log(f_a).
Cross-entropy <cit.> differs from NLL by a factor of n, and KL divergence further subtracts the entropy of the dataset:
ℓ_XE(f, y) = 1/n ℓ_NLL(f, y) = -∑_a=1^d p̅(y)_a log(f_a),
ℓ_KL(f, y) = ℓ_XE(f, y) - H(p̅(y)) = -∑_a=1^d p̅(y)_a log(f_a/p̅(y)_a),
where H(p) = -∑_a=1^d p_a log p_a is the entropy of a distribution.
All three of these options are rooted in statistics: they make up the core of many statistical hypothesis tests, and all three of them agree with the likelihood principle.
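A direct implementation of these three likelihood-based losses, using natural logarithms (the helper names are ours):

import numpy as np

def nll(f, p_bar, n):
    return float(-n * np.sum(p_bar * np.log(f)))

def cross_entropy(f, p_bar):
    return float(-np.sum(p_bar * np.log(f)))

def kl_divergence(f, p_bar):
    mask = p_bar > 0                               # terms with p_bar_a = 0 contribute nothing
    return float(np.sum(p_bar[mask] * np.log(p_bar[mask] / f[mask])))

p_bar, f = np.array([0.6, 0.4]), np.array([0.7, 0.3])
print(nll(f, p_bar, n=10), cross_entropy(f, p_bar), kl_divergence(f, p_bar))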
Two more losses originate from regression problems and the forecasting literature.
One is the Brier score, frequently referred to as mean-squared error <cit.>.
ℓ_Brier(f, y) = 1/n ∑_i=1^n ∥ f - e_y_i ∥_2^2 = ∑_a=1^d p̅(y)_a ( (1-f_a)^2 + ∑_a' ≠ a f_a'^2 ).
A small modification is the squared L2 error, which is often also (confusingly) called MSE <cit.>.
ℓ_L2(f, y) = ∥ f - p̅(y) ∥_2^2 = ∑_a=1^d (f_a - p̅(y)_a)^2.
Both are natural options for researchers familiar with regression problems, where it is typical to optimize a least-squares objective.
They also have roots in forecasting, as the Brier score was originally introduced for evaluating weather forecasts <cit.>.
To avoid confusion, throughout this work, we avoid the ambiguous term “mean-squared error”, referring to the former as the Brier score and the latter as the squared L2 error.
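Both squared losses are easy to compute from the prediction and the empirical distribution (a sketch; helper names are ours):

import numpy as np

def brier_score(f, p_bar):
    # Mean over observations of ||f - e_{y_i}||_2^2
    d = len(f)
    e = np.eye(d)                                  # e[a] is the point mass on action a
    per_action = np.sum((f - e) ** 2, axis=1)      # ||f - e_a||_2^2 for each action a
    return float(np.sum(p_bar * per_action))

def squared_l2(f, p_bar):
    return float(np.sum((f - p_bar) ** 2))

p_bar = np.array([0.6, 0.4])
f = np.array([0.6, 0.4])
print(brier_score(f, p_bar))   # 0.48: even a perfect prediction keeps an irreducible Brier score
print(squared_l2(f, p_bar))    # 0.0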
Finally, a unifying definition that ties together many losses is the concept of a scoring rule.
<cit.>
A scoring rule is a function S: Δ(A) × A → ℝ that maps a prediction f ∈Δ(A) and a single outcome a ∈ A to a score S(f, a).
By averaging these scores over the dataset, every scoring rule S induces a loss function
ℓ_S(f, y) = 1/n ∑_i=1^n S(f, y_i) = ∑_a ∈ A p̅(y)_a S(f, a).
Scoring rules are popular due to their simple functional form, which simply evaluates the prediction independently on each observation.
Their alignment properties are also the subject of several celebrated results <cit.>, which we describe in detail in <Ref>.
Error rate, negative log-likelihood, cross-entropy, and Brier score are all scoring rules; L1, KL, and squared L2 are not.
§ FORMALIZING AN IDEAL LOSS FUNCTION
Each loss function from the previous section captures the quality of a prediction on a dataset with a single number, inducing preferences over these predictions.
Of course, these loss functions will not always agree with each other about how to order different models.
Is each of these losses an equally acceptable choice?
To answer this question, we turn to an axiomatic analysis, formalizing axioms that a loss function in a behavioral economic setting ought to obey.
We aim to identify axioms that are as weak as possible, in order that they will only disqualify loss functions that exhibit clearly objectionable behavior.
Our axioms can be grouped according to two distinct roles that a loss function serves in describing the quality of a prediction.
First, loss functions are used to compare the quality of models within a fixed experimental setting.
This occurs both during training, when a modeller aims to minimize expected loss on future data; and when evaluating models on a given dataset, comparing losses to see which model achieves the best performance.
Our two alignment axioms address this case, requiring that the loss correctly orders predictions in cases where quality disparities are unambiguous; both are extensions of already standard propriety axioms.
Second, loss functions are used to understand model performance more broadly; studies report losses and these values are interpreted as conveying information about how well a given model captured human behavior.
Our three interpretability axioms ensure that the loss can indeed be understood in this way, having a well-defined reference point and changing coherently as the data varies.
§.§ Alignment Axioms
Alignment axioms.
Our first alignment axiom pertains to the situation where two models' predictions are compared to each other on a fixed dataset.
This is a fundamental step in behavioral modelling: to evaluate the performance of a proposed model, one must compare its predictions on some dataset to other existing models to understand whether their proposal better captures human behavior.
Here, if one model is a better fit to the data than another, it should receive a lower loss.
What do we mean by “better”?
Reasonable people disagree about many comparisons between models, but some are unarguable.
For instance, a perfect prediction—one that exactly matches the dataset—is better than an imperfect one.
A standard axiom in the literature, known as Propriety, captures this intuition by requiring that a perfect prediction minimizes the loss.
To distinguish it from a Distributional Propriety axiom that will follow, we refer to it as Sample Propriety.
[Sample Propriety (SP)]
For all predictions f ∈Δ(A) and sampled datasets y ∈(A),
f ≠ p̅(y) ⟹ ℓ(p̅(y), y) < ℓ(f, y).
Unfortunately, Sample Propriety is insufficient for behavioral game theory. The reason is that parametric behavioral models are often misspecified: there is often no model in a given class that is able to output the empirical distribution of an arbitrary dataset.
We thus impose a stronger requirement that implies Sample Propriety: that we should prefer one (potentially imperfect) prediction to another whenever the first does an unambiguously better job of fitting the data. We formalize this idea with the notion of a Pareto improvement, which we will use extensively in what follows.
Let p, q, r ∈Δ(A) be three distributions.
We say that q is a Pareto improvement over p with respect to r, denoted by q ≻_r p, if for all a ∈ A, either p_a ≤ q_a ≤ r_a or p_a ≥ q_a ≥ r_a, and furthermore this inequality between p_a and q_a is strict for at least one a.
In other words, q is a Pareto improvement over p if q is at least as close to r as p in every dimension, and is strictly nearer to r in some dimension.
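A small helper makes the definition concrete (a sketch; the function name and the test values are ours):

import numpy as np

def pareto_improves(q, p, r):
    # True if q is a Pareto improvement over p with respect to r: q lies weakly
    # between p and r in every coordinate and is strictly closer to r in at least one.
    q, p, r = map(np.asarray, (q, p, r))
    between = ((p <= q) & (q <= r)) | ((p >= q) & (q >= r))
    strictly_closer = np.abs(q - r) < np.abs(p - r)
    return bool(np.all(between) and np.any(strictly_closer))

print(pareto_improves(q=[0.65, 0.35], p=[0.7, 0.3], r=[0.6, 0.4]))  # True
print(pareto_improves(q=[0.55, 0.45], p=[0.7, 0.3], r=[0.6, 0.4]))  # False: q overshoots r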
Then, if a prediction is a Pareto improvement over another prediction, i.e., all the predicted action proportions of f are uniformly closer to the empirically observed play distribution than g's predicted proportions, then f clearly is a better prediction than g, and so should receive a lower loss.
[Sample Pareto-Alignment (SPA)]
For all predictions f, g ∈Δ(A) and sampled datasets y ∈(A),
f ≻_p̅(y) g ⟹ ℓ(f, y) < ℓ(g, y).
It is also crucial that loss functions align with notions of quality in the training process.
While training, the goal of a modeller is to select the prediction that minimizes the expected test loss over new, unseen data.
Thus, the loss function should be aligned not only on a specific sample, but in expectation.
As with SP, it is standard to insist that the expected loss must be minimized when the true distribution is reported.
[Distributional Propriety (DP)] For all predictions f ∈Δ(A) and all n ≥ 1, p ∈Δ(A),
f ≠ p ⟹ 𝔼_y ∼ p^n ℓ(p, y) < 𝔼_y ∼ p^n ℓ(f, y).
As above, though, Distributional Propriety is insufficient when the model class is misspecified.
In this case, it is necessary to prefer predictions that are clearly closer to the true data-generating distribution, accurately reflecting improvements even away from the optimum.
We capture this intuition with a second alignment axiom, which we refer to as distributional Pareto-alignment.
[Distributional Pareto-Alignment (DPA)]
For all predictions f, g ∈Δ(A), and all n ≥ 1, p ∈Δ(A),
f ≻_p g ⟹ 𝔼_y ∼ p^n ℓ(f, y) < 𝔼_y ∼ p^n ℓ(g, y).
In the same way as SPA implies SP, DPA implies DP.
A similar axiom was proposed by <cit.> under the name “accuracy-rewarding”.
There is only one difference: in their setting, a prediction is a vector in ^d, containing independent predictions for d different summary statistics of the dataset.
Because our predictions lie on the simplex, they are not independent in this way: e.g., predicting that one action has a probability of 1 constrains the predictions for all other actions to be 0.
§.§ Interpretability Axioms
Interpretability axioms.
Our alignment axioms constrain how the loss may vary as the prediction varies.
Our next axioms constrain how the loss may vary as the data varies.
Such constraints are important for ensuring that loss represents an understandable measurement of a prediction's quality.
One simple way that the data could be changed is simply by making the same observations in a different order.
Since in our setup, each observation corresponds to an independent trial involving a distinct participant, we argue that the loss should be unaffected by such reordering.
[Exchangeability (Ex)]
For all datasets y ∈(A), permutations π∈Π({1, …, n(y)}), and predictions f ∈Δ(A),
ℓ(f, π(y)) = ℓ(f, y).
Exchangeability is a standard assumption in many statistical methods <cit.> and loss functions; as we will see later, none of the commonly used BGT losses described earlier violate it.
What if the dataset varies in a more substantial way?
For example, one might replicate an experiment with a second group of participants, resulting in a new set of observations for the same game.
Alternatively, one might run slight variations to an experiment to assess their impact on the quality of a model <cit.>.
In both cases, it would be undesirable if the change in the data could cause the prediction to clearly decrease in quality, but be awarded a better loss.
As with varying predictions, there are many ways in which datasets could vary for which reasonable people could disagree about whether the same prediction ought to receive a higher or lower loss.
However, we can again leverage the insight that Pareto improvements are unambiguously better:
holding a prediction fixed, if the empirical probabilities of the data are brought closer to the predictions for at least some actions and further in none, it is clear that this dataset is better described by the prediction.
In such cases, we require that the loss must also improve.
[Counterfactual Pareto-Regularity (CPR)]
Let f ∈Δ(A) be a fixed prediction.
Suppose that y, y' ∈(A) are two datasets of equal size, with n(y) = n(y'). Then p̅(y) ≻_f p̅(y') ⟹ ℓ(f, y') > ℓ(f, y).
Up to this point, all of the axioms have only described equalities or inequalities between certain pairs of losses.
None have constrained the precise numerical values of the losses: indeed, if ℓ satisfies all of these axioms, then any positive affine transformation aℓ + b (with a > 0) does too.
This leaves users with a free choice of how to set these two degrees of freedom.
We propose to use this freedom to constrain the minimum loss, requiring that a perfect prediction (which must be the loss function's minimum, by SPA) achieves a loss of zero.
This makes the loss easier to interpret, removing the possibility for irreducible error, where even a perfect prediction could get a positive loss.
[Zero-Minimum (ZM)]
For all y ∈(A), ℓ(p̅(y), y) = 0.
ZM is admittedly the most subjective of our axioms:
for example, on some problems, it might be reasonable to anchor the loss to a different baseline instead, such as a uniform random prediction.
In response to this subjectiveness, in <Ref>, we give partial results that do not refer to ZM whenever possible.
Additionally, its addition is inconsequential when analyzing existing loss functions: in <Ref>, we show that each commonly used loss that violates ZM also violates CPR.
§ DIAGONAL BOUNDED BREGMAN DIVERGENCES
With these desiderata in mind, the obvious question is: are there loss functions that satisfy all of our axioms?
In this section, we provide a positive answer.
We first appeal to existing results to show that even asking for a subset of the axioms gives these loss functions considerable structure: Bregman divergences are essentially the only losses that satisfy SP, DP, Ex, and ZM.
Narrowing down this class further, we identify a family of losses, which we coin diagonal bounded Bregman divergences, that each satisfy our whole set of axioms (SPA, DPA, CPR, Ex, and ZM).
Let us now make these claims more precise. We first define a Bregman divergence. We work over the extended real numbers ℝ ∪ {±∞}, and adopt the convention that 0 ·∞ = 0.
Let B: C → ℝ ∪ {±∞} be a closed and proper strictly convex function on a convex set C ⊆ ℝ^k.
Then a subgradient of B is a function dB: C → ℝ^k such that
B(x) - B(x_0) ≥ dB(x_0)^T (x - x_0)
for all x_0, x ∈ C.
If B is also differentiable, it has a unique subgradient ∇ B on the interior of C.
Given a closed and proper strictly convex function B: C → ℝ ∪ {±∞} and a subgradient dB of B, the Bregman divergence ∇_(B, dB): C × C → ℝ_≥ 0 of B and dB is
∇_(B, dB) (p, q) = B(p) - B(q) - dB(q)^T (p-q).
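For differentiable B, the divergence is straightforward to evaluate numerically; in the sketch below we pass B and its gradient explicitly (our own helper names):

import numpy as np

def bregman_divergence(B, grad_B, p, q):
    # B(p) - B(q) - dB(q)^T (p - q)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(B(p) - B(q) - grad_B(q) @ (p - q))

# B(x) = ||x||_2^2 yields the squared Euclidean distance.
B = lambda x: float(np.sum(x ** 2))
grad_B = lambda x: 2.0 * x
print(bregman_divergence(B, grad_B, [0.6, 0.4], [0.7, 0.3]))  # 0.02 = ||p - q||_2^2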
We now leverage existing work from the field of property elicitation, which asks: if an agent seeks to minimize their expected loss, which loss functions will incentivize them to report their true belief about a particular property of the distribution? In these terms, DP requires that loss functions elicit the true underlying distribution of each sample when datasets are known to be composed of i.i.d. samples.
<cit.> show that, for this property (and, in fact, for many properties), essentially all such incentive-compatible loss functions are equivalent to Bregman divergences between some function of the dataset and the prediction, up to a translation by some function of the dataset.
This immediately yields the following result.
Under mild technical conditions, a well-behaved loss function ℓ that satisfies DP must be of the form ℓ(f,y) = ∇_(B, dB)(ρ(y), f) + c(y)
for some closed and proper strictly convex function B and subgradient dB, and some translation c: (A) → ℝ and summary statistic ρ: (A) → Δ(A) of the data, where 𝔼_y∼p^n ρ(y) = p for all n, p.
We extend this result, showing that adding the SP and ZM axioms additionally determines c and ρ.
In other words, essentially every loss function satisfying DP, SP, and ZM is a Bregman divergence between the empirical mean of the data and the prediction.
Under mild technical conditions, a well-behaved loss function ℓ that satisfies SP and DP must be of the form ℓ(f,y) = ∇_(B, dB)(p̅(y), f) + c(y)
for some closed and proper strictly convex function B, subgradient dB of B, and translation c. If ℓ additionally satisfies ZM, then c(y) = 0 for all y.
Conversely, every loss function of this form satisfies SP and DP, and if c(y) = 0 for all y, then it also satisfies ZM and Ex.
We defer a formal statement and proof of <Ref> to the appendix, as it takes care to describe the technical conditions on ℓ. The proof first carefully obtains a loss ℓ satisfying DP from Theorem 11 of <cit.>, and then applies standard facts about Bregman divergences to show that the additional axioms constrain ρ and c as described. The reverse direction similarly follows from standard observations from convex analysis.
However, not all Bregman divergences satisfy our remaining axioms SPA, DPA, and CPR.
For example, taking B(f) = -H(f) = ∑_a=1^d f_a log f_a recovers the KL divergence; we will show in Section <ref> that this does not satisfy SPA.
Our main result is that all of our axioms are satisfied by the restricted set of diagonal bounded Bregman divergences.
[Diagonal bounded Bregman divergence (DBBD)]
Let b: [0,1] → ℝ be a continuously differentiable convex function whose derivative b' is bounded on [0, 1], and let B_b(x) = ∑_i b(x_i) for x ∈ [0,1]^d.
Then, a diagonal bounded Bregman divergence is a loss function ℓ: Δ(A) ×(A) → ℝ, where
ℓ(f,y) = ∇_(B_b, ∇ B_b)(p̅(y), f).
If ℓ is a DBBD, then ℓ satisfies SPA, DPA, Ex, CPR, and ZM.
We again defer a complete proof to the appendix.
Briefly: Ex is trivial, and ZM follows from Theorem <ref>; SPA, DPA, and CPR each leverage the diagonal structure and convexity of B_b.
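As a quick numerical check of the connection to squared L2 error discussed in the next section, the DBBD generated by b(x) = x^2 can be evaluated directly (a sketch; helper names are ours):

import numpy as np

def dbbd(b, b_prime, p_bar, f):
    # Diagonal bounded Bregman divergence with generator B_b(x) = sum_i b(x_i)
    p_bar, f = np.asarray(p_bar, float), np.asarray(f, float)
    B = lambda x: np.sum(b(x))
    return float(B(p_bar) - B(f) - b_prime(f) @ (p_bar - f))

p_bar, f = np.array([0.6, 0.4]), np.array([0.8, 0.2])
loss_dbbd = dbbd(lambda x: x ** 2, lambda x: 2 * x, p_bar, f)
loss_l2 = float(np.sum((f - p_bar) ** 2))
print(loss_dbbd, loss_l2)   # both equal 0.08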
§ EVALUATING EXISTING LOSS FUNCTIONS USING OUR AXIOMS
We now revisit the loss functions introduced in Section <ref>. It is straightforward to see that squared L2 error is a DBBD (set b(x) = x^2) and so satisfies all of the axioms. Each other loss function violates at least one axiom, as summarized in Table <ref>.
We go through each of these losses below, demonstrating that in each case the axiom violations produce undesirable results under reasonable conditions.
§.§ Error rate
Error rate violates every axiom except Ex.
We show that error rate violates both SP and ZM with the following example.
Consider a game in which a player can choose between two actions, “defect” and “cooperate”. Suppose that in the true distribution of human play, two-thirds of players defect: p = (2/3, 1/3).
Running an experiment with 10 distinct participants, an analyst finds that 6 chose to defect, while the remaining 4 chose to cooperate, yielding an empirical distribution of p̅(y) = (0.6, 0.4). Let (f, 1-f) be a prediction in this setting. Then, the error rate on this dataset is
ℓ_ER(f,y) = 1 - 0.6f - 0.4(1-f) = 0.6 - 0.2f.
This expression is minimized by the prediction f = 1, which has an error rate of 0.4.
In particular, this prediction achieves a lower error rate than reporting the empirical distribution, which has an error rate of ℓ_ER(p̅(y), y) = 0.48.
This example illustrates a general problem: for any dataset y, the empirical distribution yields an error rate of 1 - ∥p̅(y)∥_2^2, while predicting the mode gives rise to a lower error rate of 1 - max_a p̅(y)_a. Thus, preferring predictions with lower error rates leads to incorrect conclusions, giving more credit to predictions that overestimate the probability of the most likely action and underestimate the probabilities of the others.
§.§ L1 error
L1 error satisfies both SPA and ZM, but does not satisfy DPA or DP.
In some cases, a model that predicts the true population distribution gets worse expected L1 error on unseen data than an incorrect prediction.
As in Example <ref>, suppose again that the true distribution is p = (2/3, 1/3).
However, now suppose that the dataset is not yet available; all that is known about it is that it consists of 10 independent observations sampled from p.
Then, the expected loss of predicting (f, 1-f) is 𝔼_y ∼ p^10 ℓ_L1(f,y) = 2 𝔼_y ∼ p^10 |f - p̅(y)_D|,
where 10 p̅(y)_D, the number of participants that defect in the experiment, is a Binomial random variable with parameters n = 10, p = 2/3.
This expected loss is minimized by predicting the median of p̅(y)_D, which is 0.7.
In particular, this prediction even receives a lower expected loss than predicting the true distribution; the true distribution achieves an expected loss of 0.243, while the median gets an expected loss of 0.235.
This example, too, generalizes: in any setting with two actions, the expected loss is minimized by reporting the median of the empirical probability distribution, which is generally not equal to p; in fact, these quantities can differ by 1/2n. In other words, if a model is designed to minimize expected loss, L1 error fails to elicit the true distribution.
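A short computation (our sketch, assuming SciPy for the Binomial pmf) confirms the numbers in the example above: the median of p̅(y)_D achieves a strictly lower expected L1 error than the true distribution.

import numpy as np
from scipy.stats import binom

n, p = 10, 2 / 3
k = np.arange(n + 1)
pmf = binom.pmf(k, n, p)                         # distribution of the number of defectors

def expected_l1(f):
    # expected L1 error of predicting (f, 1-f) on 10 i.i.d. observations from (p, 1-p)
    return 2 * np.sum(pmf * np.abs(f - k / n))

print(expected_l1(2 / 3))                        # ~0.243: reporting the true distribution
print(expected_l1(0.7))                          # ~0.235: the median of p_bar(y)_D does better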
§.§ Cross-entropy, negative log likelihood, and Brier score
We group the next three losses together as they suffer from the same key issue: they all violate both CPR and ZM.
A second experimenter attempts to reproduce the results from Example <ref>.
They first fit a model to the existing dataset y, which has an empirical distribution of p̅(y) = (0.6, 0.4); their model fits perfectly, returning the exact empirical distribution.
Evaluating this model with Brier score and cross-entropy gives losses of
ℒ_Brier(p̅(y), y) = 0.48; ℒ_XEnt(p̅(y), y) = 0.29.
They then collect their own dataset y', re-running the experiment with a different set of 10 participants; they find that 9 defect and only one cooperates.
They are surprised to find that, although their old model fails to predict this new dataset perfectly, it achieves lower losses of
ℒ_Brier(p̅(y), y') = 0.36; ℒ_XEnt(p̅(y), y') = 0.24.
In general, the minimum possible Brier score is 1 - ‖p̅(y)‖_2^2, and the minimum possible cross-entropy is the entropy of p̅(y).
NLL also fails in the same way, as it is simply a rescaling of cross-entropy.
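The figures in the example above can be reproduced with the following snippet (our sketch; in particular, matching the quoted cross-entropy values of 0.29 and 0.24 requires a base-10 logarithm, which is an assumption on our part).

import numpy as np

def brier(f, p_bar):
    # average squared distance between the prediction and the one-hot encoded outcomes
    d = len(f)
    return sum(p_bar[a] * np.sum((f - np.eye(d)[a]) ** 2) for a in range(d))

def cross_entropy(f, p_bar):
    return -np.sum(p_bar * np.log10(f))          # base-10 log reproduces the quoted values

f = np.array([0.6, 0.4])                         # the model fit perfectly to the first dataset
print(brier(f, np.array([0.6, 0.4])), cross_entropy(f, np.array([0.6, 0.4])))   # 0.48, 0.29
print(brier(f, np.array([0.9, 0.1])), cross_entropy(f, np.array([0.9, 0.1])))   # 0.36, 0.24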
§.§ KL divergence
The KL divergence is a translated version of cross-entropy that satisfies ZM, but doesn't satisfy SPA, DPA or CPR.
The key issue is that KL divergence gives infinite losses at the boundary.
That is, when a model predicts that an action has zero probability of being selected, but the action has support in the data, that model will have an infinite KL divergence.
This leads to situations such as the following.
Now, suppose that there are three actions, with a true distribution of p = (0.001, 0.199, 0.8), and that among 100 participants we observe y = (1, 19, 80), yielding an empirical distribution of p̅(y) = (0.01, 0.19, 0.80).
Consider comparing two predictions on this dataset: the very coarse prediction of f = (0, 1, 0) and the far more precise f' = (0, 0.2, 0.8).
Although f' is a better prediction, as it is closer to p̅(y) than f on both the second and third actions, both receive equal losses of ℒ(f, y) = ℒ(f', y) = ∞.
In general, when every action appears at least once in the dataset, KL divergence assesses every prediction that places 0 probability on any action as equally bad, and considers all of these predictions to be worse than any prediction having full support.
This is a serious problem, as it is common for every action to be played at least once in sufficiently large behavioral datasets: researchers often remark on the fact that subjects play dominated actions.
This makes it extraordinarily difficult to evaluate classical economic predictions, such as expected utility maximization or Nash equilibrium, which assign 0 probability to many actions. To avoid this issue, some researchers <cit.> perturb the predictions of such models to yield finite losses, but in doing so introduce an important new perturbation parameter and sacrifice the ability to evaluate the original models.
§.§ Scoring rules
Recall that error rate, cross-entropy, negative log likelihood, and Brier score—all of which are scoring rules—each violated the ZM and CPR axioms.
It turns out that these failures are common to all scoring rules, implying that scoring rules should not be used to report model performance.
Every scoring rule that satisfies SPA violates ZM. Moreover, no scoring rule satisfies CPR.
We defer the proof to the appendix.
Intuitively, since scoring rules must consider each sample independently, they must treat every sample as if it were the entire dataset.
Then, in order to satisfy SPA, scoring rules must give positive losses to every nondeterministic prediction, causing them to violate the ZM axiom.
Moreover, scoring rules are linear in the empirical probabilities p̅(y) (<Ref>).
Any such linear function is maximized at one of its boundaries, meaning that it is not uniquely maximized at p̅(y) = f unless p̅(y) is a unit vector; hence, all scoring rules violate CPR.
However, while scoring rules fail the interpretability axioms, they do not necessarily violate the alignment axioms.
In fact, a classic result characterizes the set of scoring rules satisfying DP.
<cit.>
A scoring rule satisfies DP if and only if there exists a strictly convex function B: Δ(A) → and subgradient dB such that, for all f ∈Δ(A) and a ∈ A,
S(f, a) = -B(f) - dB(f)^T (e_a - f).
Furthermore, every such scoring rule satisfies SP.
It is no coincidence that this characterization is strikingly similar to the definition of a Bregman divergence, only differing by a shift of B(p).
Indeed, for every DBBD, there exists a scoring rule expressing exactly the same preferences between predictions.
To be precise, suppose that (f, y) = ∇_(B,dB)(p̅(y), f) is some Bregman divergence.
Then, consider the alternative loss '(f, y) = (f, y) + c(y), where c(y) is an arbitrary function that depends only on the data (and not on the prediction).
This shifted loss induces exactly the same preferences over models on every dataset and every distribution.
In particular, it is straightforward to show that setting c(y) = -B(p̅(y)) makes '(f,y) a scoring rule.
What's more, these scoring rules are computationally easier to minimize than their corresponding DBBDs.
Scoring rules can be computed without explicitly calculating p̅(y), which makes them ideal for training on large datasets, as the loss can be evaluated without loading the entire dataset into memory at once.
Therefore, we do not recommend against the use of scoring rules for model training—it may often be a good idea!
We simply argue that researchers should use a corresponding DBBD when evaluating model performance or otherwise reporting results.
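As a concrete illustration of this recommendation (our sketch, with hypothetical function names), the snippet below compares the squared L2 error, a DBBD that requires the empirical distribution p̅(y), with its scoring-rule counterpart S(f, a) = ‖f‖^2 - 2 f_a, which streams over individual samples; the two differ only by a prediction-independent term and therefore rank models identically.

import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=1000)                # observed actions in a 3-action game
p_bar = np.bincount(y, minlength=3) / len(y)

def dbbd_l2(f):                                  # squared L2 error: needs p_bar(y)
    return np.sum((p_bar - f) ** 2)

def scoring_rule_l2(f):                          # per-sample score S(f, a) = ||f||^2 - 2 f_a
    return np.mean(np.sum(f ** 2) - 2 * f[y])    # streams over samples, never forms p_bar

f1, f2 = np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.4, 0.4])
shift = np.sum(p_bar ** 2)                       # equals B(p_bar(y)); the scoring rule is the DBBD shifted by c(y) = -B(p_bar(y))
assert np.isclose(dbbd_l2(f1) - scoring_rule_l2(f1), shift)
assert np.isclose(dbbd_l2(f2) - scoring_rule_l2(f2), shift)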
§ CONCLUSIONS
Our goal in this paper was to identify suitable loss functions for evaluating behavioral economics models.
We took an axiomatic approach, developing axioms describing alignment and interpretability properties that such a loss function should satisfy.
We showed that almost all of the loss functions used in practice, including the entire class of scoring rules, violate at least one of these axioms.
However, it is indeed possible to construct loss functions that satisfy all of our axioms: we identified a large class of losses—the diagonal bounded Bregman divergences—that does.
Thus, we advocate that behavioral economists use one of these loss functions in future work; in particular, the squared L2 error, which is already used relatively widely, is a natural incumbent.
Is it possible to make a theoretical argument for a single best loss function for behavioral economic research?
If so, the path forward is to identify additional desirable properties for loss functions in behavioral research.
For example, on “rock-paper-scissors” experiments, one might insist on loss functions that are agnostic to the actions' identities, ensuring that they do not treat “rock” differently from “paper” or “scissors”.
Making compelling arguments for such new axioms and understanding how they narrow down the space of permissible losses—indeed, whether any remain at all—is a valuable direction for future work.
§ ACKNOWLEDGEMENTS
Thanks to Frederik Kunstner and Victor Sanches Portella for helpful discussions.
This work was funded by an NSERC CGS-D scholarship, an NSERC USRA award, an NSERC Discovery Grant, a DND/NSERC Discovery Grant Supplement, a CIFAR Canada AI Research Chair (Alberta Machine Intelligence Institute), awards from Facebook Research and Amazon Research, and DARPA award FA8750-19-2-0222, CFDA #12.910 (Air Force Research Laboratory).
In the following appendices we provide proofs for the technical results in Sections <ref> and <ref>. <Ref> gives a formal statement and proof of <Ref>, our extension of a result from Abernethy2012 that characterizes all nice loss functions that satisfy a subset of our axioms. <Ref> proves <Ref>, that every DBBD satisfies all of the axioms. Finally, <Ref> proves <Ref>, that no scoring rule can satisfy all of the axioms.
§ FORMAL STATEMENT AND PROOF OF THEOREM <REF>
To formally state the first half of <Ref>, we need the following definitions.
Fix n ∈ ℕ, and let L: Δ(A) × A^n → ℝ be a loss function.
Then Γ: Δ(A^n) → Δ(A) is a minimizer of L if, for all μ ∈ Δ(A^n),
Γ(μ) ∈ argmin_f ∈ Δ(A) 𝔼_y ∼ μ L(f,y).
When L satisfies DP, then Γ(p^n) = p. Note that Abernethy2012 defines a loss function to be proper for a given property Γ if Γ minimizes L; we instead begin with L and obtain Γ for which L is proper by construction.
Let L: Δ(A) × A^n → ℝ be a loss function satisfying DP, and let Γ be a minimizer of L. L is Γ-differentiable if for all f ∈ relint(Δ(A)) and μ ∈ Γ^-1(f), the directional derivative
lim_ϵ→0 [𝔼_y∼μ L(f + ϵ v, y) - 𝔼_y∼μ L(f, y)] / ϵ
exists for all v such that f + ϵ v ∈Δ(A) for sufficiently small ϵ.
We can now formally state the second claim of <Ref>.
Fix n ∈ ℕ, and let L: Δ(A) × A^n → ℝ satisfy SP and DP. Suppose that L is Γ-differentiable for some minimizer Γ of L. Then there exists some closed and proper strictly convex function B and subgradient dB, and some translation c, such that L is of the form L(f,y) = ∇_(B, dB)(p̅(y), f) + c(y)
for all f ∈ relint(Δ(A)), y ∈ A^n. If L also satisfies ZM, then c(y) = 0 for all y ∈ A^n.
Taking U = Δ(A) and Ω = A^n, we can apply Theorem 11
of Abernethy2012 to find that there exists some convex function B and subgradient dB as well as functions ρ: A^n → Δ(A) and c: A^n → ℝ such that for all f ∈ relint(Δ(A)), y ∈ A^n,
L(f,y) = ∇_(B,dB)(ρ(y), f) + c(y).
Moreover, B is strictly convex. This would follow immediately if reporting the true distribution uniquely minimized the expected loss over arbitrary data distributions—allowing for arbitrary correlations between samples—but our DP axiom only requires this for i.i.d. data.
However, it is possible to repair this problem with the following modification to their proof.
Given a basis {b_i} of Δ(A), they select a set of corresponding distributions μ_i ∈Γ^-1(b_i); by the DP axiom, we know that b_i^n ∈Γ^-1(b_i), so we can pick μ_i = b_i^n.
Then, for any linear combination f = ∑_i α_i b_i, the distribution μ̂[f] = ∑_i α_i μ_i samples i.i.d. from f.
Using this choice of distributions μ_i, the remainder of their proof only requires that L be proper for distributions of n i.i.d. observations.
In this case, our DP axiom gives a strict inequality B(f) + dB(f)^T (f' - f) < B(f') for all f ≠ f'; by Proposition D.6.1.3 of Hiriart-Urruty2001, this yields strict convexity of B.
Since a Bregman divergence ∇_(B,dB)(p,q) of strictly convex B is uniquely minimized by p=q, for any y ∈ A^n, L(f,y) is uniquely minimized by f = ρ(y); thus, SP constrains that ρ(y) = p̅(y) for all y.
Finally, ZM implies that for all y ∈ A^n,
L(p̅(y), y) = ∇_(B,dB)(p̅(y), p̅(y)) + c(y) = 0 + c(y) = 0,
so c(y) = 0 for all y. Thus L(f,y) = ∇_(B,dB)(p̅(y), f) for all y ∈ A^n, f ∈relint(Δ(A)).
We now prove the second half of <Ref>.
If L(f,y) = ∇_(B, dB)(p̅(y), f) + c(y) for some closed and proper strictly convex function B: C → ℝ such that Δ(A) ⊆ C, subgradient dB: C → ℝ^d of B, and translation c: A^n → ℝ, then L satisfies SP and DP. If c(y) = 0 for all y, then L also satisfies ZM and Ex.
All four of these axioms follow from a basic property of Bregman divergences of all strictly convex functions, which is that ∇_(B, dB)(p, q) ≥ 0, with equality if and only if p = q.
SP. For any Bregman divergence with strictly convex B and fixed p̅(y), ∇_(B, dB)(p̅(y), f) is uniquely minimized by f =p̅(y). Since y is fixed, the translation by c(y) does not affect the minimizer.
DP. This was shown by Banerjee2005 for f, p ∈ relint(Δ(A)); we show it generally below. Let p ∈ Δ(A); notice that 𝔼_y∼p^n p̅(y) = p. Then, for all f ∈ Δ(A),
𝔼_y∼p^n L(f,y) = 𝔼_y∼p^n [∇_(B,dB)(p̅(y), f) + c(y)]
= 𝔼_y∼p^n c(y) + 𝔼_y∼p^n B(p̅(y)) - B(f) - 𝔼_y∼p^n (p̅(y) - f)^T dB(f)
= 𝔼_y∼p^n c(y) + 𝔼_y∼p^n B(p̅(y)) - B(p) + B(p) - B(f) - (p - f)^T dB(f)
= 𝔼_y∼p^n c(y) + 𝔼_y∼p^n B(p̅(y)) - B(p) + ∇_(B,dB)(p, f).
The first three terms do not depend on f,
and the final term is uniquely minimized by f = p.
ZM. For any Bregman divergence, ∇_(B,dB)(p̅(y), p̅(y)) = 0, so if c(y) = 0,
L(p̅(y), y) = ∇_(B,dB)(p̅(y), p̅(y)) + 0 = 0.
Ex. Let π∈Π({1, …, n}). Since p̅(y) = p̅(π(y)), when c(y) = 0,
L(f, π(y)) = ∇_(B, dB)(p̅(π(y)), f) = ∇_(B, dB)(p̅(y), f) = L(f,y).
§ PROOF OF THEOREM <REF>
We will now prove that every diagonal bounded Bregman divergence satisfies all of our axioms.
First, recall the definition of a DBBD.
Note that while b' is not defined at 0, since b is continuously differentiable and convex, b' is monotonic and we can define b' at the endpoints as the continuous extension, which we constrain to be finite. For the remainder of this argument, we simplify ∇_(B_b, ∇ B_b) to ∇_B_b.
We are now ready to prove <Ref>.
Let ℒ be a DBBD, where ℒ(f,y) = ∇_B_b(p̅(y), f) for some b.
We establish that every such loss function satisfies each of our axioms in turn.
SPA. We use the convexity of b to prove that ℒ satisfies SPA.
Here, note that is bounded on [0,1] due to continuity on [0,1] and the bounded first derivative of b.
Let f, g ∈Δ(A), y ∈ A^n. Denote p̅ = p̅(y), and suppose f ≻_p̅ g.
As b is convex and differentiable on [0,1], b' is increasing.
Let 1 ≤ a ≤ d, and suppose that p̅_a ≤ f_a ≤ g_a. Then p̅_a - f_a ≤ 0, and b'(f_a) ≤ b'(g_a).
Now suppose that p̅_a ≥ f_a ≥ g_a. Then p̅_a - f_a ≥ 0, and b'(f_a) ≥ b'(g_a).
In either case, (p̅_a - f_a) b'(f_a) ≥ (p̅_a - f_a) b'(g_a). Then
ℒ(g,y) - ℒ(f,y) = ∇_B_b (p̅, g) - ∇_B_b (p̅, f)
= B_b(p̅) - B_b(g) - (p̅ - g)^T ∇ B_b(g)
- (B_b(p̅) - B_b(f) - (p̅ - f)^T ∇ B_b (f))
= ∑_a=1^d b(f_a) - b(g_a) + (p̅_a - f_a)b'(f_a) - (p̅_a - g_a) b'(g_a)
≥∑_a=1^d b(f_a) - b(g_a) + (p̅_a - f_a) b'(g_a) - (p̅_a - g_a) b'(g_a)
= ∑_a=1^d ∇_b(f_a, g_a).
For all a, ∇_b(f_a, g_a) ≥ 0. For some ã, g_ã ≠ f_ã, so ∇_b(f_ã, g_ã) > 0, and hence ℒ(g,y) - ℒ(f,y) > 0.
DPA. Let p∈Δ(A) and notice that 𝔼_y∼p^n p̅(y) = p.
For any f, g ∈Δ(A), ∇_B_b(p̅(y), f) - ∇_B_b(p̅(y), g) is linear in p̅(y) as the term B_b(p̅(y)) in each cancels. Thus
𝔼_y∼p^n ℒ(f,y) - 𝔼_y∼p^n ℒ(g,y) = 𝔼_y∼p^n [∇_B_b(p̅(y), f) - ∇_B_b(p̅(y), g)] = ∇_B_b(p, f) - ∇_B_b(p, g).
Thus if f ≻_p g, we can repeat the argument that ℒ satisfies SPA with p̅ replaced by p, and conclude that 𝔼_y∼p^n ℒ(f,y) < 𝔼_y∼p^n ℒ(g,y).
Ex. This follows from <Ref>.
CPR. This argument follows a similar structure to the proof of SPA.
Let n ∈ℕ, and y, y' ∈ A^n. Let f ∈Δ(A). Denote p̅ := p̅(y), p̅' := p̅(y'), with p̅≻_f p̅'.
As b is convex and differentiable on [0,1], b' is increasing.
Let 1 ≤ a ≤ d, and suppose that p̅_a' ≤ p̅_a ≤ f_a. Then p̅_a' - p̅_a ≤ 0, and b'(f_a) ≥ b'(p̅_a).
On the other hand, if p̅_a' ≥ p̅_a ≥ f_a, then p̅_a' - p̅_a ≥ 0, but b'(f_a) ≤ b'(p̅_a).
Thus in both cases, (p̅_a' - p̅_a) b'(f_a) ≤ (p̅_a' - p̅_a) b'(p̅_a). Then
ℒ(f,y') - ℒ(f,y) = ∇_B_b(p̅', f) - ∇_B_b(p̅, f)
= B_b(p̅') - B_b(f) - (p̅' - f)^T ∇ B_b(f)
- (B_b(p̅) - B_b(f) - (p̅ - f)^T ∇ B_b(f))
= B_b(p̅') - B_b(p̅) -(p̅' - p̅)^T∇ B_b(f)
= ∑_a=1^d b(p̅_a') - b(p̅_a) - (p̅_a' - p̅_a) b'(f_a)
≥∑_a=1^d b(p̅_a') - b(p̅_a) - (p̅_a' - p̅_a) b'(p̅_a)
= ∑_a=1^d ∇_b(p̅_a', p̅_a).
For all a, ∇_b(p̅_a', p̅_a) ≥ 0, and for some ã, p̅_ã ≠ p̅_ã', so ∇_b(p̅_ã', p̅_ã) > 0. Thus ℒ(f,y') - ℒ(f,y) > 0.

ZM. This also follows from <Ref>.
§ PROOF OF <REF>
For convenience, we restate the proposition here.
We begin by showing that a scoring rule cannot satisfy both SPA and ZM.
Suppose S is such a scoring rule; we will derive a contradiction.
First, for all a ∈ A, if f ≠ e_a, SPA requires that S(f, a) = L_S(f, {a}) > L_S(e_a, {a}) = S(e_a, a).
Second, for all a ∈ A, ZM requires that S(e_a, a) = L_S(e_a, {a}) = 0.
Then, for any dataset y such that p̅(y) has support on at least 2 elements, we have
0 = L_S(p̅(y), y) = ∑_a ∈ A p̅(y)_a S(p̅(y), a) > ∑_a ∈ A p̅(y)_a S(e_a, a) = 0,
a contradiction.
Now, we show that an arbitrary scoring rule does not satisfy CPR. Let S be a scoring rule. Suppose y is a set of observations in which not all players select the same action. Define a^* to be an action in A with the best score, i.e.,
a^* ∈ argmin_a∈A S(p̅(y), a).
Let y' be another set of observations of the same size where all players play a^*. Then
L_S(p̅(y), y) = ∑_a ∈ Ap̅(y)_a S(p̅(y), a) ≥ S(p̅(y), a^*)∑_a ∈ Ap̅(y)_a = S(p̅(y), a^*) = L_S(p̅(y), y'),
but p̅(y) ≻_p̅(y)p̅(y').
http://arxiv.org/abs/2306.07368v1 (published 2023-06-12)
SMART: Spatial Modeling Algorithms for Reaction and Transport
Justin G. Laughlin, Jørgen S. Dokken, Henrik N. T. Finsberg, Emmet A. Francis, Christopher T. Lee, Marie E. Rognes, Padmini Rangamani
Categories: q-bio.QM (primary), q-bio.MN
Justin G. Laughlin (1), Jørgen S. Dokken (2), Henrik N.T. Finsberg (3), Emmet A. Francis (1), Christopher T. Lee (1), Marie E. Rognes (2), Padmini Rangamani (1)

(1) Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA, USA
(2) Department of Numerical Analysis and Scientific Computing, Simula Research Laboratory, Oslo, Norway
(3) Department of Computational Physiology, Simula Research Laboratory, Oslo, Norway
SMART: Spatial Modeling Algorithms for Reactions and Transport
June 2023
==============================================================
§ SUMMARY
Recent advances in microscopy and 3D reconstruction methods have allowed for characterization of cellular morphology in unprecedented detail,
including the irregular geometries of intracellular subcompartments such as membrane-bound organelles.
These geometries are now compatible with predictive modeling of cellular function.
Biological cells respond to stimuli through sequences of chemical reactions generally referred to as cell signaling pathways.
The propagation and reaction of chemical substances in cell signaling pathways can be represented by coupled nonlinear
systems of reaction-transport equations.
These reaction pathways include numerous chemical species that react across boundaries or interfaces
(e.g., the cell membrane and membranes of organelles within the cell) and domains
(e.g., the bulk cell volume and the interior of organelles).
Such systems of multi-dimensional partial differential equations (PDEs) are notoriously difficult to solve
because of their high dimensionality, non-linearities, strong coupling, stiffness, and potential instabilities.
In this work, we describe Spatial Modeling Algorithms for Reactions and Transport (SMART),
a high-performance finite-element-based simulation package for model specification and numerical simulation of spatially-varying reaction-transport processes.
SMART is based on the FEniCS finite element library, provides a symbolic representation
framework for specifying reaction pathways, and supports geometries in 2D and 3D including
large and irregular cell geometries obtained from modern ultrastructural characterization methods.
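For orientation, a prototypical system of the kind described above couples volumetric and surface species through boundary fluxes; the generic form below is our illustration and is not taken verbatim from the SMART documentation:

∂c_i/∂t = ∇·(D_i ∇c_i) + R_i(c)                 in Ω        (bulk volume, e.g. cytosol),
D_i ∇c_i·n = J_i(c, s)                           on Γ = ∂Ω   (flux coupling across the membrane),
∂s_j/∂t = ∇_Γ·(D_j^Γ ∇_Γ s_j) + R_j^Γ(c, s)      on Γ        (membrane-bound species),

where c collects volumetric concentrations, s collects surface densities, and the reaction terms R and fluxes J are typically nonlinear (e.g. mass-action or Hill-type kinetics).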
§ STATEMENT OF NEED
SMART has been designed to fulfill the need for an open-source software capable of modeling cell signaling pathways within complicated cell geometries,
including reactions and transport between different subcellular surfaces and volumes.
In SMART, the user specifies species, reactions, compartments, and parameters to define a high-level model representation.
This framework uses a similar convention to Systems Biology Markup Language (SBML, <cit.>),
making the software approachable to a wider user base.
SMART provides features for converting the model representation into appropriate coupled systems
of ordinary differential equations (ODEs) and PDEs,
and for solving these efficiently using finite element and finite difference discretizations.
SMART has been designed for use by computational biologists and biophysicists.
SMART leverages state-of-the-art finite element software (FEniCS) <cit.>
which is compatible with a variety of meshing software such as Gmsh <cit.>
and GAMer 2 <cit.>, allowing users to solve nonlinear systems of PDEs within complex cellular geometries.
Moreover, the design of SMART as a FEniCS-based package allows for ease of extension and integration
with additional physics, enabling, e.g., coupled simulations of cell signaling and mechanics or electrophysiology.
SMART complements several existing software tools that are used to assemble and solve equations
describing cell signaling networks such as VCell <cit.>, COPASI <cit.>, and MCell <cit.>.
§ EXAMPLES OF SMART USE
SMART offers unique opportunities to examine the behavior of signaling networks
in realistic cell geometries. As a proof of concept, we used SMART to model
a coupled volume-surface reaction-diffusion system on a mesh of a dendritic spine generated by GAMer 2 (Fig <ref>, <cit.>).
More recently, we implemented a detailed model of neuron calcium dynamics in SMART (Fig <ref>).
This model describes IP_3R- and ryanodine receptor (RyR)-mediated
calcium release following stimulation by neurotransmitters.
These SMART simulations recapitulate the complex dynamics of calcium-induced
calcium release from the endoplasmic reticulum and predict strong
spatial gradients of calcium near regions of calcium release (Fig <ref>C).
§ ACKNOWLEDGEMENTS
The authors would like to acknowledge contributions from Yuan Gao and William Xu during the early development of SMART.
MER acknowledges support and funding from the Research Council of Norway (RCN) via FRIPRO grant agreement #324239 (EMIx), and the U.S.-Norway Fulbright Foundation for Educational Exchange.
EAF is supported by the National Science Foundation under Grant #EEC-2127509 to the American Society for Engineering Education.
CTL is supported by a Kavli Institute for Brain and Mind Postdoctoral Award.
JGL, CTL, EAF, and PR further acknowledge support from AFOSR MURI FA9550-18-1-0051 to PR.
http://arxiv.org/abs/2306.03655v1 (published 2023-06-06)
Online Learning under Adversarial Nonlinear Constraints
Pavel Kolev, Georg Martius, Michael Muehlebach
Categories: cs.LG (primary), math.OC
Online Learning under Adversarial Nonlinear Constraints
========================================================
In many applications, learning systems are required to process continuous non-stationary data streams.
We study this problem in an online learning framework and propose an algorithm that can deal with adversarial time-varying and nonlinear constraints.
As we show in our work, the algorithm called Constraint Violation Velocity Projection (CVV-Pro) achieves √(T) regret and converges to the feasible set at a rate of 1/√(T), despite the fact that the feasible set is slowly time-varying and a priori unknown to the learner.
CVV-Pro only relies on local sparse linear approximations of the feasible set and therefore avoids optimizing over the entire set at each iteration, which is in sharp contrast to projected gradients or Frank-Wolfe methods.
We also empirically evaluate our algorithm on two-player games, where the players are subjected to a shared constraint.
§ INTRODUCTION
Today's machine learning systems are able to combine computation, data, and algorithms at unprecedented scales, which opens up new and exciting avenues in many domains, such as computer vision, computer graphics, speech and text recognition, and robotics <cit.>. One of the leading principles that has enabled this progress is the focus on relatively simple pattern recognition and empirical risk minimization approaches, which mostly rely on offline gradient-based optimization and stipulate that the training, validation, and test data are independent and identically distributed.
Somewhat overlooked in these developments is the role of non-stationarity and constraints <cit.>.
Indeed, emerging machine learning problems involve decision-making in the real world, which typically includes interactions with physical, social, or biological systems.
These systems are not only time varying and affected by past interactions, but their behavior is often characterized via fundamental constraints.
Examples include cyber-physical systems where constraints are imposed by the laws of physics, multi-agent systems that are subjected to a shared resource constraint, or a reinforcement learning agent that is subjected to safety and reliability constraints.
In particular, in their seminal work <cit.> gave a reduction for the multi-arm bandit setting to the full information online optimization setting, by employing the multiplicative weights framework <cit.>.
This classical reduction was recently extended by <cit.> to the contextual bandit setting with sequential (time-varying) risk constraints.
This motivates our work, which is in line with a recent trend in the machine learning community towards online learning, adaptive decision-making, and online optimization.
More precisely, we study an online problem with slowly time-varying constraints, where in each time step t, the learner receives partial information on the current cost f_t and feasible set 𝒞_t:={x∈ℝ^n | g_t(x)≥0}.
The learner makes a decision x_t and incurs the loss f_t(x_t).
The quality of the learner's decision making is measured by comparing to the best decision in hindsight, that is,
∑_t=1^T f_t(x_t) - min_x^* ∈𝒞_T∑_t=1^T f_t(x^*) subject to g_T(x_T)≥ - c/√(T),
which will be shown to be bounded by 𝒪(√(T)) for our algorithm.
The functions f_t and g_t are restricted to f_t∈ℱ and g_t∈𝒢 (as defined in Assumption <ref>) and c>0 is an explicit constant.
With minor modifications (slightly tightening the constraints), our algorithm also achieves 𝒪(√(T)) regret in (<ref>) for c=0.
It is important to note that our performance objective (<ref>) is symmetric in the sense that the constraint x∈𝒞_T applies to both the learner and the benchmark x^*.
This contrasts prior work (see, e.g., <cit.>) where a different notion of constraint violations ∑_t=1^T g_t(x_t)≥ -c√(T) is used for the learner, while the benchmark x^* is required to satisfy g_t(x^*)≥ 0 for all t∈{1,…,T}.
Unlike (<ref>), this leads to an asymmetric regret formulation, since different requirements are imposed on the learner and the benchmark x^*.
Even more intriguing is the fact that our algorithm is unaware of the feasible sets a-priori, and obtains, at each iteration, only a local sparse approximation of 𝒞_t based on the first-order information of the violated constraints.
The indices of all violated constraints at x_t will be captured by the index set I(x_t):={i∈{1,…,m} | g_t,i(x_t)≤ 0}, while G(x_t):=[∇ g_t,i(x_t)]_i∈ I(x_t) denotes the matrix whose columns store the corresponding gradients.
In order to guarantee a regret of 𝒪(√(T)) in (<ref>) we require the following assumptions.
There exists R>0 such that 1) ℱ is a class of convex functions, where every f∈ℱ satisfies ||∇ f(x)||≤ L_ℱ, ∀ x∈ℬ_4R, with ||·|| the ℓ_2 norm and ℬ_R the hypersphere of radius R centered at the origin; 2) 𝒢 is a class of concave β_𝒢-smooth functions, where every g satisfies ||∇ g(x)||≤ L_𝒢, ∀ x∈ℬ_4R; 3) The feasible set 𝒞_t is non-empty and contained in ℬ_R for all t.
We note that these assumptions are standard in online optimization <cit.>. The learner's task is nontrivial even in the case where the feasible set is time invariant.
If the feasible set is time varying, additional assumptions are required that restrict the amount that the feasible set is allowed to change.
These two assumptions, see i) and ii) below, are described by the following interaction protocol between the learner and the environment:
(Interaction protocol)
At each time step t∈{1,…,T}:
1) the learner chooses x_t;
2) the environment chooses f_t∈ℱ and g_t∈𝒢 such that i) ||g_t-g_t-1||_∞ =𝒪(1/t), with ||·||_∞ the ℓ_∞ norm, and ii) 𝒞_t is contained in 𝒬_t=𝒬_t-1∪𝒮_t, where 𝒮_t:={x∈ℝ^n | G(x_t)^T(x-x_t)≥0} is a cone centered at x_t and 𝒬_0:=ℝ^n (the situation is illustrated in Figure <ref>);
3) the environment reveals to the learner partial information on cost f_t(x_t), ∇ f_t(x_t) and all violated constraints g_t,i(x_t), ∇ g_t,i(x_t) for i∈ I(x_t).
Figure: At each time step, the feasible set changes slightly and is only partially revealed.
The requirements i) ||g_t-g_t-1||_∞=𝒪(1/t); and ii) 𝒞_t ⊂𝒬_t restrict the feasible sets that the environment can choose.
We note that despite the fact that ||g_t-g_t-1||_∞=𝒪(1/t), ||g_t-g_1||_∞=𝒪(ln(t)), which means that the sequence of functions g_t that defines 𝒞_t does not converge in general.
As a result, C_t may evolve in such a way that the initial iterates x_1, x_2, …, x_t_0 achieve a large cost compared to min_x^*∈𝒞_T∑_t=1^T f_t(x^*), as these are constrained by the sets 𝒞_1, 𝒞_2, …, 𝒞_t_0, which may be far away from 𝒞_T.
The second requirement ii) 𝒞_t⊂𝒬_t avoids this situation and is therefore key for obtaining an 𝒪(√(T)) regret.
Our setup differs from traditional online convex optimization in the following two important ways: i) The environment chooses not only the functions f_t but also the nonlinear constraint functions g_t, ii) even if g_t is time-invariant, i.e., g_t=g for all t the learner has only access to local information about the feasible set.
That is, the information about the feasible set is only revealed piece-by-piece and needs to be acquired by the agent through repeated queries of a constraint violation oracle.
We propose an online algorithm that despite the lack of information about the feasible set, achieves 𝒪(√(T)) regret, and will derive explicit non-asymptotic bounds for the regret and the convergence to 𝒞_T. We thus conclude that our algorithm matches the performance of traditional online projected gradients or Frank-Wolfe schemes, while requiring substantially less information about the feasible set and allowing it to be time-varying.
Perhaps equally important is the fact that instead of performing projections onto the full feasible set at each iteration, our algorithm only optimizes over a local sparse linear approximation.
If constraints are nonlinear, which includes norm-constraints or constraints on the eigenvalues of a matrix, optimizing over the full feasible set at each iteration can be computationally challenging.
§.§ Related Work
Online learning has its roots in online or recursive implementations of algorithms, where due to the piece-by-piece availability of data, algorithms are often analyzed in a non i.i.d. setting.
A central algorithm is the multiplicative weights scheme <cit.>, where a decider repeatedly chooses between a finite or countable number of options with the aim of minimizing regret.
This online learning model not only offers a unifying framework for many classical algorithms <cit.>, but represents a starting point for online convex optimization <cit.>, and adversarial bandits <cit.>. Our approach extends this line of work by allowing the environment to not only choose the objective functions f_t, but also the constraints g_t.
Due to the fact that our learner only obtains local information about the feasible set, our work is somewhat related to <cit.>, where the aim is to reduce the computational effort of performing online projected gradient steps or Frank-Wolfe updates.
More precisely, <cit.> propose an algorithm that directly approximates projections, while requiring multiple queries of the constraint functions and their gradients.
A slightly different constraint violation oracle is assumed in <cit.>, where the learner can query separating hyperplanes between a given infeasible point and the feasible set.
Algorithmically, both <cit.> and <cit.> depart from online gradient descent, where the latter computes projections via an approximate Frank-Wolf-type scheme.
An alternative is provided by <cit.> and <cit.>, where optimizations over the entire feasible set are simplified by querying only a set membership oracle based on the Minkowski functional.
While our approach also avoids projections or optimizations over the entire feasible set, we introduce a different constraint violation oracle that returns a local sparse linear approximation of the feasible set.
We call the constraint violation oracle only once every iteration and do not require a two-step procedure that involves multiple oracle calls.
In addition, we also allow for adversarial time-varying constraints.
In addition, there has been important recent work that developed online optimization algorithms with constraints. In contrast to the primal formulation of our algorithm, these works are based on primal-dual formulations, where the algorithm is required to satisfy constraints on average, so called long-term constraints.
The research can be divided into two lines of work <cit.> and <cit.> that use a set of weaker and stricter definitions for constraint violations and investigate time-invariant constraints, which contrasts our formulation that includes time-varying constraints.
A third line of work by <cit.> focuses on time-varying constraints, where, however, the following weaker notion of constraint violation is used: ∑_t=1^T g_t(x_t) ≥ -c √(T), where t refers to time and x_t to the learner's decision. This metric allows constraint violations for many iterations, as long as these are compensated by strictly feasible constraints (in the worst case even with a single feasible constraint with a large margin).
In contrast, our algorithm satisfies g_t(x_t)≥ -c/√(t) for all iterations t∈{1,…,T}, where c is an explicit constant independent of the dimension of the decision variable and the number of constraints.
This means that we can explicitly bound the constraint violation at every iteration, whereas infeasible and strictly feasible iterates cannot compensate each other.
An important distinction to <cit.> is given by our performance metric (see also the discussion in <cit.> and <cit.>).
The work from <cit.> uses ∑_t=1^T f_t(x_t)- ∑_t=1^T f_t(x^⋆) as a performance measure, where the iterates x_t are required to satisfy ∑_t=1^T g_t(x_t) ≥ -c √(T) and x^⋆ satisfies g_t(x^⋆)≥ 0 for all t∈{1,…,T}.
This leads to a major asymmetry in the way regret is measured: while the iterates of the online algorithm only need to satisfy a cumulative measure of constraint violation, the benchmark x^⋆, which represents the best fixed decision in hindsight, is required to satisfy all constraints g_t(x^⋆)≥ 0 for t={1,…,T}.
The performance metric introduced in (<ref>) is symmetric and imposes the same constraints on the learner as well as the benchmark x^*.
These features make our algorithm a valuable addition to the algorithmic toolkit of online constrained optimization, which has also potential applications in bandit problems and related fields.
An important special case of our online learning model arises when the environment is represented by an adversarial player that competes with the learner. This corresponds to a repeated generalized Nash game due to the constraint that couples the decisions of the learner and its adversary.
If the adversary plays best response, the resulting equilibria are characterized by quasi-variational inequalities <cit.> and there has been important recent work, for example by <cit.> that proposes different gradient and penalty methods for solving these inequalities.
Our approach adopts a different perspective, rooted in online learning, which allows us to derive non-asymptotic convergence results for a first-order gradient-based algorithm that can be implemented in a straightforward manner.
Our approach is also inspired by the recent work of <cit.>, who propose a similar algorithm for the offline setting.
§.§ Main Contributions
We give an online optimization scheme under unknown non-linear constraints that achieves an optimal 𝒪(√(T)) regret and converges to the latest feasible set at a rate of 𝒪(1/√(T)).
There are two variants of our problem formulation: The first deals with situations where constraints are unknown but fixed, the second allows constraints to be chosen in a time-varying and adversarial manner.
Our algorithm, named Constraint Violation Velocity Projection (CVV-Pro), has the following features:
1. It assumes access to a new type of oracle, which on input x_t, returns partial information on all currently violated constraints.
Namely, the value g_t,i(x_t) and the gradient ∇ g_t,i(x_t) for all i∈ I(x_t).
2. It projects an adversarially generated negative cost gradient -∇ f_t(x_t) onto a velocity polytope V_α(x_t):={ v∈ℝ^n | [∇ g_i(x_t)]^T v ≥ -α g_i(x_t), ∀ i∈ I(x_t)}.
Due to the linear and local structure of V_α(x_t), the projection can be computed efficiently.
3. In contrast to standard online methods that project in each round a candidate decision onto the feasible set, our method trades off feasibility for efficiency.
In particular, it produces a sequence of decisions that converges at a rate of 𝒪(1/√(T)) to the latest feasible set.
4. Our method handles time-varying adversarial constraints g_t, provided a decreasing rate of change ||g_t+1-g_t||_∞≤𝒪(1/t) and that each feasible set 𝒞_t belongs to 𝒬_t (see Assumption <ref>).
As we show in Section <ref>, an important special case where the assumption of decreasing rate of change is satisfied is given by g_t=1/t∑_j=1^tg̃_j, i.e., when g_t represents an average of constraints g̃_t over time.
§.§ Outline
Section <ref> describes our algorithm and considers the situation where g_t is time invariant. This sets the stage for our main results in Section <ref> that provide regret guarantees for our new online convex optimization setting with non-stationary, nonlinear, and unknown constraints. An important and interesting application of our algorithm are generalized Nash equilibrium problems, as will be illustrated with a numerical experiment in Section <ref>. The experiment will also highlight that the numerical results agree with the theoretical predictions.
§ ONLINE LEARNING UNDER UNKNOWN, TIME-INVARIANT, AND NONLINEAR CONSTRAINTS
§.§ Online Gradient Descent
Online gradient descent <cit.> is a classical and perhaps the simplest algorithm that achieves optimal 𝒪(√(T)) regret for the setting of a compact, convex, time-invariant, and a priori known feasible set.
It consists of the following two operations:
i) y_t+1=x_t-η_t∇ f_t(x_t) takes a step from the previous point in the direction of the previous cost gradient; and
ii) x_t+1=Proj_𝒞(y_t+1) projects y_t+1 back to the feasible set 𝒞, as y_t+1 may be infeasible.
In this section, we generalize the online gradient descent algorithm to the setting where the feasible set is unknown a priori and has to be learned through repeated queries of a constraint violation oracle that only reveals local information.
§.§ Overview
In Section <ref>, we present the pseudo code of our algorithm.
In Section <ref>, we give a structural result showing that Algorithm <ref> under Assumption <ref> and a bounded iterate assumption guarantees an optimal 𝒪(√(T)) regret and converges to the feasible set at a rate of 𝒪(1/√(T)).
In Appendix <ref>, we show that the bounded iterate assumption can be enforced algorithmically, by introducing an additional hypersphere constraint that attracts the sequence {x_t}_t≥1 to a fixed compact set.
§.§ Constraint Violation Velocity Projection (CVV-Pro)
We present below the pseudocode of Algorithm <ref>
for a fixed horizon length T,
as it is standard in the literature <cit.>. However, we note that
our algorithm is oblivious to the horizon length T, i.e., it can run
for any number of iterations without knowing T a priori.
Let x∈𝒞 be an arbitrary decision.
We show in Claim <ref> that α(x-x_t)∈ V_α(x_t).
Hence, the velocity set V_α(x_t) is always non-empty and well defined.
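The algorithm environment itself did not survive extraction here, so we add a hedged reconstruction of a single iteration as described in the text: the oracle returns the currently violated constraints, -∇ f_t(x_t) is projected onto the velocity polytope, and the decision is updated as x_{t+1} = x_t + η_t v_t. The Python/cvxpy interface below is our illustrative choice, not the authors' implementation.

import numpy as np
import cvxpy as cp

def cvv_pro_step(x_t, grad_f, g_vals, g_grads, alpha, eta_t):
    # g_vals[i] = g_i(x_t) and g_grads[:, i] = grad g_i(x_t) for all constraints;
    # only the violated ones (g_i(x_t) <= 0) enter the velocity polytope V_alpha(x_t).
    violated = np.where(g_vals <= 0)[0]          # index set I(x_t)
    v = cp.Variable(len(x_t))
    constraints = [g_grads[:, i] @ v >= -alpha * g_vals[i] for i in violated]
    # project the negative cost gradient onto the local, linear velocity polytope
    cp.Problem(cp.Minimize(cp.sum_squares(v + grad_f)), constraints).solve()
    return x_t + eta_t * v.value                 # x_{t+1} = x_t + eta_t * v_t

When I(x_t) is empty, the projection is unconstrained and returns v_t = -∇ f_t(x_t), so the update reduces to online gradient descent; otherwise only a small quadratic program over the currently violated constraints is solved.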
§.§ Structural Result
Here, we show that Algorithm <ref> under Assumption <ref> and a bounded iterate assumption, guarantees an optimal 𝒪(√(T)) regret and converges to the feasible set at a rate 𝒪(1/√(T)).
The bounded iterate assumption will be removed subsequently, which however, will require a more complex analysis.
Suppose Assumption <ref> holds and in addition x_t∈ℬ_R for all t∈{1,…,T}.
Then, on input α = L_ℱ/R, Algorithm <ref> with step sizes η_t=1/(α√(t)) guarantees the following for all T≥1:
(regret) ∑_t=1^Tf_t(x_t)-min_x^⋆∈𝒞∑_t=1^Tf_t(x^⋆)≤ 18L_ℱR√(T);
(feasibility) g_i(x_t)≥-8[L_𝒢/R+2β_𝒢]R^2/√(t), for all t∈{1,…,T} and i∈{1,…,m}.
§.§ Proof Sketch of Theorem <ref>
Our analysis establishes, in two steps, an important geometric property that connects the convex costs and the concave constraints via the velocity polytope V_α(x_t).
This property will be crucial for deriving the regret and feasibility bounds.
In the first step, we leverage the constraints' concavity
and show that the vector α(x^⋆-x_t) belongs to the
velocity polytope V_α(x_t).
Suppose g_i is concave for every i∈{1,…,m}. Then α(x-x_t)∈ V_α(x_t) for all x∈𝒞. In addition, x_t∉int(𝒞) implies [∇ g_i(x_t)]^T[x-x_t]≥ 0 for all x∈𝒞.
Let x∈𝒞 be an arbitrary feasible decision, satisfying g_i(x)≥0 for all i∈{1,…,m}. Since g_i is concave, we have g_i(x_t)+[∇ g_i(x_t)]^T[x-x_t]≥ g_i(x)≥0 and thus [∇ g_i(x_t)]^T[x-x_t]≥ -g_i(x_t).
The second conclusion follows by x_t∉int(𝒞), which implies g_i(x_t)≤0.
In the second step, we show that r_t^T(x_t-x^⋆)≤0, where r_t=v_t+∇ f_t(x_t) is such that -r_t belongs to the normal cone N_V_α(x_t)(v_t) of the velocity polytope V_α(x_t) evaluated at the projection v_t.
Let v_t be the projection of -∇ f_t(x_t)
onto the polytope V_α(x_t) such that v_t=r_t-∇ f_t(x_t)∈ V_α(x_t),
where -r_t∈ N_V_α(x_t)(v_t). Then, -r_t^T(x-x_t)≤0
for all x∈𝒞.
By definition, the normal cone N_V_α(x_t)(v_t) is given by {u∈ℝ^n | u^T(v-v_t)≤0, ∀ v∈ V_α(x_t)}.
Then, by construction -r_t∈ N_V_α(x_t)(v_t) and thus it holds for every v∈ V_α(x_t) that -r_t^T[v-v_t]≤0.
The proof proceeds by case distinction:
Case 1. Suppose x_t is in the interior of 𝒞.
Then, I(x_t)=∅, which implies -∇ f_t(x_t)∈ V_α(x_t)=ℝ^n and thus r_t=0.
Case 2. Suppose x_t is on the boundary or outside of 𝒞, i.e., I(x_t)≠∅.
By Claim <ref>, we have [∇ g_i(x_t)]^T[x-x_t]≥ 0 for all x∈𝒞.
By construction, v_t∈ V_α(x_t) and thus v(x)=v_t+x-x_t∈ V_α(x_t).
The statement follows by applying v=v(x) to -r_t^T[v-v_t]≤0.
Regret. To establish the first conclusion of Theorem <ref> (regret), we combine the preceding geometric property with the analysis of online gradient descent.
Since f_t∈ℱ is convex, we upper bound the regret in terms of the gradient of f_t, namely ∑_t=1^T f_t(x_t)-f_t(x^⋆) ≤ ∑_t=1^T [∇ f_t(x_t)]^T(x_t-x^⋆), and then we show that the following inequality holds
[∇ f_t(x_t)]^T(x_t-x^⋆) - (η_t/2)‖ v_t‖^2 = r_t^T(x_t-x^⋆) + (‖ x_t-x^⋆‖^2-‖ x_t+1-x^⋆‖^2)/(2η_t)
≤ (‖ x_t-x^⋆‖^2-‖ x_t+1-x^⋆‖^2)/(2η_t).
Moreover, in Appendix <ref> (see Lemma <ref>), we upper bound the velocity ‖ v_t‖≤α‖ x^⋆-x_t‖+2‖∇ f_t(x_t)‖.
Combining Assumption <ref> and x_t∈ℬ_R yields a uniform bound ‖ v_t‖≤𝒱_α, where for α=L_ℱ/R we set 𝒱_α:=4L_ℱ.
The desired regret follows by a telescoping argument and by convexity of the cost functions f_t∈ℱ.
Feasibility. For the second conclusion of Theorem <ref> (convergence to the feasible set), we develop an inductive argument that proceeds in two steps.
In Appendix <ref> (see Claim <ref>), we give a structural result that bounds the constraint functions from below.
In particular, for every i∈ I(x_t) we have g_i(x_t+1)≥(1-αη_t)g_i(x_t)-η_t^2𝒱_α^2β_𝒢 and for every i∉I(x_t) it holds that g_i(x_t+1)≥-η_t+1𝒱_α[2L_𝒢+𝒱_αβ_𝒢/α].
Using an inductive argument, we establish in Appendix <ref> (see Lemma <ref>) the following lower bound: g_i(x_t)≥-cη_t where c=2𝒱_α(L_𝒢+𝒱_αβ_𝒢/α).
Choosing α=L_ℱ/R implies that 𝒱_α=4L_ℱ.
Then, the desired convergence rate to the feasible set follows for the step size η_t=1/(α√(t)), since
-cη_t = -(2𝒱_α/(α√(t)))·[L_𝒢+β_𝒢𝒱_α/α] = -8[L_𝒢/R+4β_𝒢]R^2/√(t).
§ ONLINE LEARNING UNDER ADVERSARIAL NONLINEAR CONSTRAINTS
§.§ Problem Formulation
In this section, we consider an online optimization problem with adversarially generated time-varying constraints.
More precisely, at each time step t, the learner receives partial information on the current cost f_t and feasible set 𝒞_t, and seeks to minimize (<ref>).
To make this problem well posed, we restrict the environment such that each feasible set 𝒞_t is contained in 𝒬_t (see Section <ref>) and the rate of change between consecutive time-varying constraints decreases over time.
We quantify a sufficient rate of decay with the following assumption.
[TVC Decay Rate]
We assume that the adversarially generated sequence {g_t}_t≥1 of time-varying constraints is such that for every x∈ℬ_4R and all t≥1, the following holds
‖ g_t+1(x)-g_t(x)‖_∞ ≤ (98/(t+16))·[L_𝒢/R+3β_𝒢]R^2.
We note that Assumption <ref> essentially only requires ‖ g_t+1(x)-g_t(x)‖_∞≤𝒪(1/t), as R can be chosen large enough such that the bound is satisfied.
Of course, R will appear in our regret and feasibility bounds, but it will not affect the dependence on t or T (up to constant factors).
An important special case where Assumption <ref> is satisfied, is summarized in the following Lemma. The proof is included in Appendix <ref> (see Lemma <ref> and Lemma <ref>).
Suppose the functions g̃_t,i satisfy Assumption <ref> and in addition there is a decision x_t,i∈ℬ_R such that g̃_t,i(x_t,i)=0 for every t≥1 and i∈{1,…,m}.
Then the time-averaged constraints g_t,i(x):=1/t∑_ℓ=1^t g̃_ℓ,i(x) satisfy Assumption <ref> and Assumption <ref>.
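The 𝒪(1/t) decay in this special case follows from a one-line calculation (sketched here with loose constants that do not match the precise constant in Assumption <ref>):

g_t+1 - g_t = (1/(t+1))∑_ℓ=1^t+1 g̃_ℓ - (1/t)∑_ℓ=1^t g̃_ℓ = (t g_t + g̃_t+1)/(t+1) - g_t = (g̃_t+1 - g_t)/(t+1),

so that ‖ g_t+1(x) - g_t(x)‖_∞ ≤ (‖ g̃_t+1(x)‖_∞ + ‖ g_t(x)‖_∞)/(t+1) ≤ 10RL_𝒢/(t+1) for all x∈ℬ_4R, where the last step uses |g̃_ℓ,i(x)| = |g̃_ℓ,i(x) - g̃_ℓ,i(x_ℓ,i)| ≤ L_𝒢‖ x - x_ℓ,i‖ ≤ 5RL_𝒢, since each g̃_ℓ,i has a zero x_ℓ,i∈ℬ_R.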
§.§ Velocity Projection with Attractive Hypersphere Constraint
We show in Appendix <ref> that the second assumption in Theorem <ref>, namely, “x_t∈ℬ_R for all t≥1” can be enforced algorithmically.
We achieve this in two steps.
1) Algorithmically, we introduce an additional hypersphere constraint g_m+1(x_t)=1/2[R^2-‖ x_t‖^2] that attracts the decision sequence {x_t}_t≥1 to a hypersphere ℬ_R and guarantees that it always stays inside a hypersphere ℬ_4R with a slightly larger radius.
More precisely, we augment the velocity polytope in Step 3 of Algorithm <ref> as follows:
V_α^'(x_t)=V_α(x_t) if ‖ x_t‖≤ R, and otherwise
V_α^'(x_t)={v∈ V_α(x_t) | [∇ g_m+1(x_t)]^T v ≥ -α g_m+1(x_t)}.
2) Analytically, we give a refined inductive argument in Appendix <ref> (see Lemma <ref>), showing that g_m+1(x_t)≥-27R^2/√(t+15), ‖ x_t‖≤4R and ‖ v_t‖≤7L_ℱ, for all t≥1.
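In code, the augmentation amounts to appending one extra (value, gradient) pair to the oracle output before the velocity projection; the sketch below (ours, reusing the hypothetical interface from the earlier sketch) makes the construction explicit.

import numpy as np

def augment_with_hypersphere(x_t, g_vals, g_grads, R):
    # attractive constraint g_{m+1}(x) = (R^2 - ||x||^2)/2, only added once ||x_t|| > R
    if np.linalg.norm(x_t) <= R:
        return g_vals, g_grads
    g_extra = 0.5 * (R ** 2 - x_t @ x_t)         # its value at x_t (negative, hence "violated")
    grad_extra = -x_t                            # gradient of g_{m+1} at x_t
    return np.append(g_vals, g_extra), np.column_stack([g_grads, grad_extra])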
§.§ Main Contribution
Our main contribution is to show that Algorithm <ref> with the augmented velocity polytope V_α^'(x_t), achieves optimal 𝒪(√(T)) regret and satisfies g_T(x_T)≥ -Ω(1/√(T)) convergence feasibility rate.
Due to space limitations, we defer the proof to Appendix <ref>.
Suppose the functions {f_t,g_t}_t≥1 satisfy Assumption <ref> and Assumption <ref>.
Then, on input R,L_ℱ>0 and x_1∈ℬ_R, Algorithm <ref> applied with α=L_ℱ/R, augmented velocity polytope V_α^'(·) and step sizes η_t=1/(α√(t+15)) guarantees the following for all T≥1:
(regret) ∑_t=1^Tf_t(x_t)-min_x∈𝒞_T∑_t=1^Tf_t(x)≤246L_ℱR√(T);
(feasibility) g_t,i(x_t)≥-265[L_𝒢/R+4β_𝒢]R^2/√(t+15), for all t∈{1,…,T} and i∈{1,…,m};
(attraction) g_m+1(x_t)≥-27R^2/√(t), for all t∈{1,…,T}.
Our regret analysis in Theorem <ref> builds upon the following key structural result that generalizes Lemma <ref> to time-varying constraints.
In particular, in Appendix <ref> (see Lemma <ref>), we show that given the feasible set 𝒞_T ⊂𝒬_T, it holds for every x∈𝒞_T that -r_t^T(x - x_t) ≤ 0 for all t∈{1,…,T}.
As a result, a similar argument as in (<ref>) shows that the regret is bounded by 𝒪(√(T)).
Moreover, we note that the linear and quadratic dependence on R in Theorem <ref> is consistent in length units.
If the radius R has units of length ℓ, then the Lipschitz constant L_ℱ, which can be viewed as the supremum over the ℓ_2 norm of the gradient, has units 1/ℓ, and the smoothness constant β_𝒢 (associated with the Hessian) has units 1/ℓ^2.
§ SIMULATION EXAMPLES
Two-player games with shared resources are an excellent example for demonstrating the effectiveness and importance of our online learning framework.
We apply our algorithm and show numerical experiments that support our theoretical findings.
We choose random instances of a two player game with linear utility and constraints.
In particular, we consider the following optimization problem
min_x∈Δ_n max_y∈Δ_n x^T A y subject to C_x x+C_y y≤ 1,
where Δ_n={x∈ℝ^n | ∑_i=1^n x_i=1, x≥0} is the probability simplex. Each component of the utility matrix A∈ℝ^n×n is sampled from the normal distribution and the constraint matrices C_x,C_y∈[0,1]^m×n have each of their components sampled uniformly at random from [0,1].
§.§ Online Formulation
The problem in (<ref>) can be modeled with our online learning framework (<ref>) by choosing costs f_t(x):=x^T A y_t and time-averaged resource constraints g_T(x):=(1/T)∑_t=1^T g̃_t(x), where the function g̃_t(x):=1-C_x x-C_y y_t.
Thus, the constraint in (<ref>) is included as an average over the past iterations of y_t.
The strategy for choosing y_t will be described below and, as we will see, the average of y_t over the past iterations converges.
This ensures that the feasible set 𝒞_t (defined in (<ref>)) is slowly time-varying, while the averages of x_t and y_t over past iterates converge to equilibria in (<ref>).
Further, by a refined version of Lemma <ref> (see Lemma <ref> in Appendix <ref>), the time-averaged constraints g_T(x) satisfy Assumption <ref>.
In each iteration, Algorithm <ref> seeks to minimize the online problem and commits to a decision x_t.
The adversary computes the best response ŷ_t with respect to the decision x_t by solving max_y∈Δ_n x_t^T A y.
To make the dynamics more interesting, the adversary then commits with probability 0.8 to ŷ_t and with probability 0.2 to a random decision r_t, i.e.,
y_t=0.8 ŷ_t + 0.2 r_t, where r_t is sampled uniformly at random from Δ_n.
As both players optimize over the probability simplex (x,y∈Δ_n),
the sequence of decisions {x_t}_t≥1 is automatically bounded.
Thus, we can apply Theorem <ref> with the original velocity polytope, as discussed in Appendix <ref>.
We implemented our algorithm with η_t=1/(α√(t)) and α=100.
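For completeness, the environment used in these experiments can be sketched as follows (our reconstruction in Python with hypothetical names; the learner's update and the handling of the simplex constraint on x are omitted).

import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 10
A = rng.standard_normal((n, n))                  # utility matrix with normally distributed entries
C_x, C_y = rng.uniform(size=(m, n)), rng.uniform(size=(m, n))

def adversary_play(x_t):
    # best response over the simplex is a vertex e_j with j = argmax_j (A^T x_t)_j,
    # mixed with a uniformly random simplex point with weights 0.8 / 0.2
    y_hat = np.zeros(n)
    y_hat[np.argmax(A.T @ x_t)] = 1.0
    return 0.8 * y_hat + 0.2 * rng.dirichlet(np.ones(n))

def feedback(x_t, y_t, y_bar):
    # partial information revealed to the learner; y_bar is the running average of past plays
    grad_f = A @ y_t                             # gradient of f_t(x) = x^T A y_t
    g_vals = 1.0 - C_x @ x_t - C_y @ y_bar       # time-averaged constraints g_t(x_t)
    g_grads = -C_x.T                             # columns: gradients of g_t,i
    return grad_f, g_vals, g_grads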
§.§ Experimental Results
We report results from numerical simulations with decision dimension n=100, m=10 shared resource constraints, T=4000 iterations, and five independently sampled instances of the two-player game.
The learner's regret, depicted in Figure <ref>a, shows a clear correspondence with the theoretical prediction of 𝒪(√(T)).
Figure <ref>b presents the maximal constraint violation -min_i∈ I(x_T) (1/T)∑_t=1^T g̃_t,i(x_T), which follows the predicted 𝒪(1/√(T)) convergence rate.
We also conclude from Figure <ref>c that the learner's averaged decisions x_T=1/T∑_t=1^Tx_t converge at a rate of 𝒪(1/√(T)).
Similarly, the averaged decisions y_T of the adversary also converge at a rate of 𝒪(1/√(T)).
We note that there is little variability in the results despite the different realizations of the matrices A, C_x, C_y.
Contrasting CVV-Pro and Online Gradient Descent
In Appendix <ref>, we show that our (CVV-Pro) algorithm outperforms the standard Online Gradient Descent algorithm in the two-player game from above.
In particular, our algorithm achieves a lower regret and a runtime improvement of about 60%.
Further, the percentage of violated constraints decreases rapidly and plateaus at 20%.
The amount of improvement in execution time is likely to be greater for higher-dimensional problems, where fewer constraints tend to be active at each iteration.
Moreover, when the constraints are nonlinear, which includes ℓ_p norm or spectral constraints, optimizing over the full feasible set can be computationally challenging.
In contrast, the velocity projection step in CVV-Pro is always a convex quadratic program with linear constraints, regardless of the underlying feasible set.
§ CONCLUSION
We propose an online algorithm that, despite the lack of information about the feasible set, achieves 𝒪(√(T)) regret.
We further ensure convergence of violated constraint -min{g_T(x_T),0} at a rate of 𝒪(1/√(T)) and derive explicit constants for all our bounds that hold for all T≥1.
We thus conclude that our algorithm matches the performance of traditional online projected gradients or Frank-Wolfe schemes, while requiring substantially less information about the feasible set and allowing the feasible set to be time-varying.
Perhaps equally important is the fact that instead of performing projections onto the full feasible set at each iteration, our algorithm only optimizes over a local sparse linear approximation.
We show the applicability of our algorithm in numeric simulations of random two-player games with shared resources.
§ ACKNOWLEDGEMENTS
We acknowledge the support from the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039B).
Georg Martius is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645.
Pavel Kolev was supported by the Cyber Valley Research Fund and the Volkswagen Stiftung (No 98 571).
Michael Muehlebach thanks the German Research Foundation and the Branco Weiss Fellowship, administered by ETH Zurich, for the support.
Supplementary Material for Online Learning under Adversarial Nonlinear Constraints
§ CONTRASTING CVV-PRO AND OGD: A COMPARATIVE STUDY
In this section, we compare the runtime performance and regret guarantee of the standard Online Gradient Descent (OGD) algorithm and our (CVV-Pro) algorithm in the two-player game setting (defined in Section <ref>).
More concretely, we consider shared constraints of the form C_xx+C_yy≤ b.
We report results from numerical simulations with decision dimension n=1000, m=100 shared resource constraints, capacity b=1.3, T=2000 iterations, and 5 independently sampled instances of the two-player game.
We report below the results:
Regret: The 25th percentile of OGD has a higher regret around iteration 1400 than the function 5√(t) and stays above it.
In contrast, CVV-Pro achieves better regret, with the 75th percentile being strictly bounded by the function 5√(t), see Figure <ref>a.
% Constraints Violation In each iteration, CVV-Pro requires an oracle access only to the currently violated constraints.
The percentage of violated constraints first increases from 0.01% to 57% in the first four iterations, and then decreases rapidly to plateau at 20%, see Figure <ref>b.
Runtime:
In Figure <ref>c, we report the average runtime per iteration for computing a projection.
Since CVV-Pro solves the velocity projection problem with a decreasing number of constraints, it achieves a faster average runtime of 0.11±0.01s compared to OGD, which requires solving the full projection problem each time and runs in 0.18±0.01s.
Thus, for the two-player game with shared constraints, our algorithm CVV-Pro achieves a runtime improvement of around 60% over OGD.
Further, we report in Figure <ref>d the total cumulative runtime of CVV-Pro and OGD for computing the projection.
The amount of improvement in execution time is likely to be greater for higher-dimensional
problems, where fewer constraints tend to be active at each iteration.
Moreover, there are important situations, for example if constraints are non-convex, where projections are very difficult to compute (and/or might not even be well defined).
In contrast, the velocity projection step in CVV-Pro is always a convex problem, regardless of whether the underlying feasible set is convex or not.
§ PROOF OF THEOREM <REF>
In this section, we consider an online optimization problem with time-invariant constraints and a bounded iterate assumption.
The bounded iterate assumption will be removed subsequently in Section <ref>, which however, will require a more complex analysis.
We restate Theorem <ref> below for the convenience of the reader.
Suppose Assumption <ref> holds and in addition x_t∈ℬ_R for all t∈{1,…,T}.
Then, on input α = L_ℱ/R, Algorithm <ref> with step sizes η_t=1/(α√(t)) guarantees the following for all T≥1:
(regret) ∑_t=1^Tf_t(x_t)-min_x^⋆∈𝒞∑_t=1^Tf_t(x^⋆)≤ 18L_ℱR√(T);
(feasibility) g_i(x_t)≥-8[L_𝒢/R+2β_𝒢]R^2/√(t), for all t∈{1,…,T} and i∈{1,…,m}.
The rest of this section is devoted to proving the preceding statement.
§.§ Structural Properties
Suppose g_i is concave for every i∈{1,…,m}.
Then, for any α>0 and all x∈𝒞 the following holds
max_t≥0‖ v_t‖≤α‖ x - x_t‖
+ 2‖∇ f_t(x_t)‖.
In particular, when f_t satisfies ‖∇ f_t(z) ‖≤ L_ℱ for all z∈ℬ_cR, it follows that
‖ v_t‖≤(c+1)α R+2L_ℱ for any x∈ℬ_R, x_t∈ℬ_cR, and c>0.
By Claim <ref>, we have α(x-x_t)∈ V_α(x_t) for every x∈𝒞.
Combining the triangle inequality with the fact that v_t is an optimal solution of the velocity projection problem in Step <ref>, yields
‖ v_t‖-‖∇ f_t(x_t)‖ ≤ ‖ v_t+∇ f_t(x_t)‖
≤ ‖α(x - x_t)+∇ f_t(x_t)‖
≤ α‖ x - x_t‖+‖∇ f_t(x_t)‖.
Using x∈ℬ_R, x_t∈ℬ_cR and ‖∇ f_t(x_t)‖≤ L_ℱ, we conclude
‖ v_t‖≤α‖ x-x_t‖+2‖∇ f_t(x_t)‖≤(c+1)α R+2L_ℱ.
§.§ Cost Regret
Suppose Assumption <ref> holds and x_t∈ℬ_cR for all t∈{1,…,T} with c∈(0,4].
Let d≥0 be a constant.
Then, Algorithm <ref> applied with α=L_ℱ/R and step sizes η_t=1/α√(t+d), guarantees the following for all T≥1:
R_T=∑_t=1^Tf_t(x_t)-min_x^⋆∈𝒞∑_t=1^Tf_t(x^⋆)≤√(d+1)[(c+3)^2+1/2(c+1)^2]L_ℱR√(T).
In particular, for c=1 and d=0 we have R_T≤ 18L_ℱR√(T).
We denote an optimal decision in hindsight by x^⋆∈argmin_x∈𝒞∑_t=1^Tf_t(x).
For any points x^⋆, x_t we have f_t(x_t)-f_t(x^⋆)≤[∇ f_t(x_t)]^T(x_t-x^⋆), since f_t is convex.
Summing over the number of rounds t results in
∑_t=1^Tf_t(x_t)-f_t(x^⋆) ≤∑_t=1^T [∇ f_t(x_t)]^T(x_t-x^⋆).
We proceed by upper bounding the expression [∇ f_t(x_t)]^T(x_t-x^⋆).
Using x_t+1=x_t+η_tv_t and v_t=r_t-∇ f_t(x_t), we have
‖ x_t+1-x^⋆‖^2 = ‖ x_t+η_t(r_t-∇ f_t(x_t))-x^⋆‖^2
= ‖ x_t-x^⋆‖^2+η_t^2‖ r_t-∇ f_t(x_t)‖^2+2η_t[r_t-∇ f_t(x_t)]^T(x_t-x^⋆).
Then, Lemma <ref> gives r_t^T(x_t-x^⋆)≤0 and thus
[∇ f_t(x_t)]^T(x_t-x^⋆) = r_t^T(x_t-x^⋆)+[‖ x_t-x^⋆‖^2-‖ x_t+1-x^⋆‖^2]/(2η_t)+η_t/2‖ v_t‖^2
≤ [‖ x_t-x^⋆‖^2-‖ x_t+1-x^⋆‖^2]/(2η_t)+η_t/2‖ v_t‖^2.
Since x^⋆∈ℬ_R and x_t∈ℬ_cR for all t∈{1,…,T}, by
Lemma <ref> it follows for all t∈{1,…,T} that
‖ v_t‖≤(c+1)α R+2L_ℱ=(c+3)L_ℱ=:𝒱_α.
Summing over the whole sequence, using the fact that η_t=1/α√(t+d) is a decreasing positive sequence and applying Claim <ref>, x^⋆∈ℬ_R, x_t∈ℬ_cR, and (<ref>), yields
2∑_t=1^T[∇ f_t(x_t)]^T(x_t-x^⋆) ≤ ∑_t=1^T [‖ x_t-x^⋆‖^2-‖ x_t+1-x^⋆‖^2]/η_t+η_t‖ v_t‖^2
≤ 𝒱_α^2(∑_t=1^Tη_t)+(c+1)^2R^2/η_T
≤ (c+3)^2L_ℱ^2·(2/α)√(T+d)+(c+1)^2L_ℱR√(T+d)
= [2(c+3)^2+(c+1)^2]L_ℱR√(T+d),
where the last inequality uses
∑_t=1^Tη_t=1/α∑_t=1^T 1/√(t+d)<1/α∑_t=1^T+d 1/√(t)≤(2/α)√(T+d).
The statement follows by combining the fact that √(T+d)≤√(d+1)√(T) for any d≥0 and all T≥1, and
∑_t=1^Tf_t(x_t)-f_t(x^⋆)≤∑_t=1^T[∇ f_t(x_t)]^T(x_t-x^⋆)≤√(d+1)[(c+3)^2+1/2(c+1)^2]L_ℱR√(T).
Claim (Series).
For any positive sequence {a_t}_t=1^T+1 and
any decreasing positive sequence {η_t}_t=1^T, it holds that
∑_t=1^T (a_t-a_t+1)/η_t≤A/η_T, where A:=max_t∈{1,…,T} a_t.
Observe that
∑_t=1^T (a_t-a_t+1)/η_t = (a_1-a_2)/η_1+(a_2-a_3)/η_2+(a_3-a_4)/η_3+⋯+(a_T-a_T+1)/η_T
= a_1/η_1-a_T+1/η_T+∑_i=2^T a_i(1/η_i-1/η_i-1)
≤ A/η_T,
where the last inequality follows by
∑_i=2^T a_i(1/η_i-1/η_i-1)≤ A∑_i=2^T (1/η_i-1/η_i-1)=A(1/η_T-1/η_1)≤A/η_T-a_1/η_1.
§.§ Convergence Rate of Constraint Violations
Suppose Assumption <ref> holds and {x_t}_t≥1∈ℬ_cR with x_1∈ℬ_R and c∈(0,4].
Then, for any α>0 and d≥0, step sizes η_t=1/(α√(t+d)) and 𝒱_α>0 such that ‖ v_t‖≤𝒱_α for all t≥1, it follows for every i∈{1,…,m} and t≥1 that
g_i(x_t) ≥ -c_1η_t,
where
c_1=𝒱_α[2L_𝒢+β_G𝒱_α/α]+𝒵_d and 𝒵_d=(1-1/√(d+1))√(d+2)[L_𝒢/R+β_𝒢]2α R^2.
In particular, when Assumption <ref> holds, {x_t}_t≥1∈ℬ_R, α=L_ℱ/R and d=0, it follows that
g_i(x_t)≥-8[L_𝒢/R+2β_G]R^2/√(t) for all t≥1.
The proof is by induction on t.
We start with the base case t=1.
The proof proceeds by case distinction.
Case 1. Suppose i∈{1,…,m}\ I(x_1), i.e., g_i(x_1)>0.
Then, by Claim <ref> Part ii) we have
g_i(x_2)≥-η_2𝒱_α[2L_𝒢+𝒱_αβ_𝒢/α√(1+d)]≥-c_1η_2.
Case 2. Suppose i∈ I(x_1), i.e., g_i(x_1)≤0.
By combining x_1∈ℬ_R and g_i is concave β_𝒢-smooth, it follows for every x∈𝒞⊆ℬ_R that
g_i(x_1) ≥ g_i(x)+∇ g_i(x)^T(x_1-x)-β_𝒢/2‖ x_1-x‖^2
≥ -2L_𝒢R-2β_𝒢R^2
= -η_1√(d+1)[L_𝒢/R+β_𝒢]2α R^2≥-c_1η_2.
Using η_t=1/(α√(t+d)), η_1/η_2≤√(2) and η_1^2𝒱_α^2β_𝒢/2≤η_2^2𝒱_α^2β_𝒢=η_2𝒱_α^2β_𝒢/α√(d+2), it follows by Claim <ref> Part i) that
g_i(x_2) ≥ (1-αη_1)g_i(x_1)-η_1^2𝒱_α^2β_𝒢/2
≥ -η_2[(1-1/√(d+1))√(d+2)[L_𝒢/R+β_𝒢]2α R^2+𝒱_α^2β_𝒢/α√(d+2)]≥-c_1η_2.
Our inductive hypothesis is g_i(x_t)≥-c_1η_t for all i.
We now show that it holds for t+1.
Case 1. Suppose i∈{1,…,m}\ I(x_t), i.e., g_i(x_t)>0.
Then by Claim <ref> Part ii)
g_i(x_t+1)≥-η_t+1𝒱_α[2L_𝒢+β_𝒢𝒱_α/α√(d+1)]≥-c_1η_t+1.
Case 2. Suppose i∈ I(x_t), i.e., g_i(x_t)≤0.
Combining Claim <ref> Part ii) and the inductive
hypothesis we have
g_i(x_t+1) ≥ (1-αη_t)g_i(x_t)-η_t^2𝒱_α^2β_𝒢/2
≥ -c_1η_t+c_1αη_t^2-η_t^2𝒱_α^2β_𝒢/2
= -c_1η_t+1+c_1η_t+1-c_1η_t+c_1αη_t^2-η_t^2𝒱_α^2β_𝒢/2
= -c_1η_t+1+c_1η_t[η_t+1/η_t-1+αη_t-η_t𝒱_α^2β_𝒢/2c_1].
Since c_1η_t>0, it suffices to show that
α-(η_t-η_t+1)/η_t^2≥𝒱_α^2β_𝒢/(2c_1)
or equivalently (using η_t=1/α√(t+d)
for t≥1)
α-α√(t+d/t+d+1)(√(t+d+1)-√(t+d))≥𝒱_α^2β_𝒢/2c_1.
Straightforward checking shows that max_t≥1√(t/t+1)(√(t+1)-√(t))<1/3.
Hence, inequality (<ref>) is implied for c_1≥β_𝒢𝒱_α^2/α and thus g_i(x_t+1)≥-c_1η_t+1.
Furthermore, for c=1 and α=L_ℱ/R, by Lemma <ref>, we can set 𝒱_α=4L_ℱ.
Then, for d=0 we have g_i(x_t)≥-8[L_𝒢/R+2β_𝒢]R^2/√(t) for all t≥1.
Claim (Constraint Violation).
Suppose g_i is concave, β_𝒢-smooth and satisfies ‖∇ g_i(x) ‖≤ L_𝒢 for all x∈ℬ_cR and i∈{1,…,m}, where c>0 is a constant.
Suppose further that there exists a constant 𝒱_α>0 such that x_t∈ℬ_cR and ‖ v_t‖≤𝒱_α, for all t≥1.
Then, for all t≥1 we have
i) g_i(x_t+1)≥(1-αη_t)g_i(x_t)-η_t^2𝒱_α^2β_𝒢/2 for every i∈ I(x_t);
ii) g_i(x_t+1)≥-η_t+1𝒱_α[2L_𝒢+𝒱_αβ_𝒢/(α√(1+d))] for every i∈{1,…,m}\ I(x_t).
The proof proceeds by case distinction.
Case 1. Suppose i∈ I(x_t), i.e., g_i(x_t)≤0.
By combining the facts that g_i is concave and β_𝒢-smooth, x_t+1=x_t+η_tv_t
and [∇ g_i(x_t)]^T v_t≥-α g_i(x_t), it
follows that
g_i(x_t+1) ≥ g_i(x_t)+[∇ g_i(x_t)]^T[x_t+1-x_t]-β_𝒢/2‖ x_t+1-x_t‖_2^2
≥ (1-αη_t)g_i(x_t)-η_t^2𝒱_α^2β_𝒢/2.
Case 2. Suppose i∉I(x_t), i.e., g_i(x_t)>0.
Using ‖∇ g_i(x) ‖≤ L_𝒢 for x_t∈ℬ_cR,
we have
[∇ g_i(x_t)]^T[x_t+1-x_t]≤‖∇ g_i(x_t)‖‖ x_t+1-x_t‖≤η_tL_𝒢𝒱_α.
Hence,
g_i(x_t+1) ≥ g_i(x_t)+[∇ g_i(x_t)]^T[x_t+1-x_t]-β_𝒢/2‖ x_t+1-x_t‖_2^2
≥ -η_tL_𝒢𝒱_α-η_t^2𝒱_α^2β_𝒢/2
= -η_t+1η_t/η_t+1𝒱_α[L_𝒢+η_t/2𝒱_αβ_𝒢]
> -η_t+1𝒱_α[2L_𝒢+𝒱_αβ_𝒢/α√(1+d)],
where the last inequality follows by η_t≤η_1=1/(α√(1+d)) and
max_ℓ≥1η_ℓ/η_ℓ+1≤max_ℓ≥1√(ℓ+1/ℓ)=√(2).
§ GUARANTEEING A BOUNDED DECISION SEQUENCE
We now show that the second assumption in Theorem <ref>, namely, “x_t∈ℬ_R for all t∈{1,…,T}” can be enforced algorithmically.
We achieve this by introducing an additional hypersphere constraint g_m+1(x_t)=1/2[R^2-‖ x_t‖^2] that attracts the decision sequence {x_t}_t≥1 to a hypersphere ℬ_R and guarantees that it always stays inside a hypersphere ℬ_4R with a slightly larger radius.
Technically, we modify the velocity polytope in Step 3 of Algorithm <ref> as follows:
V_α^'(x_t)=V_α(x_t) if ‖ x_t‖≤ R, and otherwise
V_α^'(x_t)={v∈ V_α(x_t) | [∇ g_m+1(x_t)]^T v≥-α g_m+1(x_t)}.
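For concreteness, a minimal sketch of the velocity projection step with the augmented polytope V_α^'(x_t) is shown below for linear constraints g_i(x)=b_i-c_i^T x; the solver, the data layout and the function name are our own illustrative assumptions rather than part of the algorithm's specification.

import numpy as np
import cvxpy as cp

def augmented_velocity_step(x_t, grad_f, Cmat, b, R, alpha):
    """Velocity projection over V'_alpha(x_t): the usual constraints for indices in I(x_t),
    plus the hypersphere constraint g_{m+1}(x) = (R^2 - ||x||^2)/2 whenever ||x_t|| > R."""
    n = x_t.shape[0]
    v = cp.Variable(n)
    cons = []
    active = Cmat @ x_t >= b                      # I(x_t): constraints with g_i(x_t) <= 0
    if active.any():                              # grad g_i = -c_i, so the condition
        cons.append(Cmat[active] @ v <=           # grad g_i^T v >= -alpha g_i(x_t) becomes
                    alpha * (b[active] - Cmat[active] @ x_t))
    if np.linalg.norm(x_t) > R:                   # augmented hypersphere constraint
        g_sphere = 0.5 * (R ** 2 - x_t @ x_t)     # grad g_{m+1}(x_t) = -x_t
        cons.append(-x_t @ v >= -alpha * g_sphere)
    cp.Problem(cp.Minimize(cp.sum_squares(v + grad_f)), cons).solve()
    return v.value

The extra constraint only enters once the iterate leaves ℬ_R, which is exactly the attraction behavior established in the results below.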
We are now ready to state our main algorithmic result for the setting of time-invariant constraints.
Suppose Assumption <ref> holds.
Then, on input R,L_ℱ>0, α=L_ℱ/R and x_1∈ℬ_R, Algorithm <ref> with augmented velocity polytope V_α^'(·) and step sizes η_t=1/α√(t+15) guarantees the following for all T≥1:
(regret) ∑_t=1^Tf_t(x_t)-min_x^⋆∈𝒞∑_t=1^Tf_t(x^⋆)≤ 246L_ℱR√(T);
(feasibility) g_i(x_t)≥ -21[L_𝒢/R+3β_G]R^2/√(t+15), for all t∈{1,…,T} and i∈{1,…,m};
(attraction) g_m+1(x_t)≥-27R^2/√(t) for all t∈{1,…,T}.
In addition, ‖ x_t‖≤ 4R and ‖ v_t‖≤ 7L_ℱ, for all t≥1.
To ensure convergence of the hypersphere constraint -min{g_m+1(x_t),0} at a rate of 𝒪(1/√(t)), we use an inductive argument similar to Lemma <ref>.
We note that compared to the simplified setting of Appendix <ref>, our analysis requires an additional refined inductive argument, which is summarized in Lemma <ref>.
§.§ Hypersphere constraint
We consider the following hypersphere constraint, parameterized by R>0,
g_m+1(x)=1/2[R^2-‖ x‖^2].
By construction, g_m+1 is concave and 1-smooth.
Suppose g_i is concave for every i∈{1,…,m} such that 𝒞⊆ℬ_R and f_t is convex such that ‖∇ f_t(x)‖≤ L_ℱ for all x∈ℬ_cR, where c>0 is a constant.
Then for any decision x_t∈ℬ_cR, it holds that
‖ v_t‖≤α‖ x_t‖+(α R+2L_ℱ) and 1/2‖ v_t‖^2<-2α^2g_m+1(x_t)+[α^2R^2+(α R+2L_ℱ)^2].
Due to the fact that g_m+1 and g_i are concave for every i∈{1,…,m}, it follows by Lemma <ref> that
‖ v_t‖ ≤ 2‖∇ f_t(x_t)‖+α‖ x^⋆-x_t‖
≤ α‖ x_t‖+α R+2L_ℱ.
Further, by definition of g_m+1(x) we have
1/2‖ v_t‖^2 ≤ 1/2[α‖ x_t‖+(α R+2L_ℱ)]^2
≤ α^2‖ x_t‖^2+(α R+2L_ℱ)^2
= -2α^2g_m+1(x_t)+[α^2R^2+(α R+2L_ℱ)^2].
Suppose the assertions in Claim <ref> hold.
Let the step sizes be {η_t=1/α√(t+15)}_t≥1 and α=L_ℱ/R.
Then, we have
i) If g_m+1(x_t)>0 then g_m+1(x_t+1)≥-η_t·6L_ℱR; and
ii) If g_m+1(x_t)≤0 then g_m+1(x_t+1)≥(1-(α/2)η_t)g_m+1(x_t)-η_t^2·10L_ℱ^2.
The proof is by case distinction.
Case 1. Suppose g_m+1(x_t)>0. Using ‖ x_t‖<R
it follows by Claim <ref> that
‖ v_t‖≤2(α R+L_ℱ)=4L_ℱ.
Using g_m+1 is concave and 1-smooth, g_m+1(x_t)>0, ∇ g_m+1(x_t)=-x_t
and ‖ x_t‖<R, we have
g_m+1(x_t+1) ≥ g_m+1(x_t)+∇ g_m+1(x_t)^T(x_t+1-x_t)-1/2‖ x_t+1-x_t‖^2
≥ -η_tR‖ v_t‖-1/2η_t^2‖ v_t‖^2
≥ -η_t·6L_ℱR
≥ -η_t+1·7L_ℱR,
where we used
1/2·η_t·16L_ℱ^2=(8/√(t+15))L_ℱR≤2L_ℱR.
Case 2. Suppose g_m+1(x_t)≤0, i.e., ‖ x_t‖≥ R.
Using α^2R^2+(α R+2L_ℱ)^2=10L_ℱ^2, it follows by Claim <ref> that
1/2‖ v_t‖^2<-2α^2g_m+1(x_t)+10L_ℱ^2.
Combining g_m+1 is concave and 1-smooth, and ∇ g_m+1(x_t)^T v_t≥-α g_m+1(x_t)
yields
g_m+1(x_t+1) ≥ g_m+1(x_t)+∇ g_m+1(x_t)^T(x_t+1-x_t)-1/2‖ x_t+1-x_t‖^2
≥ (1-αη_t)g_m+1(x_t)-1/2η_t^2‖ v_t‖^2
> (1-αη_t+2α^2η_t^2)g_m+1(x_t)-η_t^2·10L_ℱ^2
≥ (1-(α/2)η_t)g_m+1(x_t)-η_t^2·10L_ℱ^2,
where the last inequality follows by: -η_tα+2η_t^2α^2≤-η_tα/2, which is implied by η_t=1/α√(t+15).
Suppose the assertions in Claim <ref> hold for c=4.
Given α=L_ℱ/R, step sizes {η_t=1/α√(t+15)}_t≥1 and an arbitrary initial decision x_1 with ‖ x_1‖<R, then it holds that
g_m+1(x_t)≥-27R^2/√(t), ‖ x_t‖≤4R, ‖ v_t‖≤7L_ℱ, for all t≥1.
The proof is by induction on t≥1.
Part I) We show first that g_m+1(x_t+1)≥-c_0η_t+1, for some c_0>0.
The proof proceeds by case distinction.
Case 1. Suppose g_m+1(x_t)>0, then by Claim <ref> we have
g_m+1(x_t+1)≥-η_t·6L_ℱR, (implying c_0≥6L_ℱR).
Case 2. Suppose g_m+1(x_t)≤0.
Let A:=10L_ℱ^2, then by combining Claim <ref> and the inductive hypothesis, we have
g_m+1(x_t+1) ≥ (1-η_tα/2)g_m+1(x_t)-η_t^2A
≥ -(1-η_tα/2)c_0η_t-η_t^2A
= -c_0η_t-(A-α/2c_0)η_t^2
= -c_0η_t+1-c_0η_t+c_0η_t+1-(A-α/2c_0)η_t^2
= -c_0η_t+1+c_0η_t[-1+η_t+1/η_t-η_t(A/c_0-α/2)].
Since c_0η_t>0, it suffices to show that
-1+η_t+1/η_t-η_t(A/c_0-α/2)≥0, or equivalently, α/2-(η_t-η_t+1)/η_t^2≥ A/c_0.
The previous condition is equivalent to (using η_t=1/α√(t+15)
for t≥1)
α[1/2-√(t+15)/√(t+16)[√(t+16)-√(t+15)]]≥A/c_0.
Straightforward checking shows that max_t≥16√(t/t+1)(√(t+1)-√(t))<0.12
and thus
c_0≥2.7A/α=27L_ℱR.
Hence, for c_0=27L_ℱR it holds that g_m+1(x_t+1)≥-c_0η_t+1.
We set c_0 to the maximum over the preceding two cases, i.e.,
c_0:=max{ 7L_ℱR, 27L_ℱR},
and obtain
g_m+1(x_t)≥-c_0η_t=-27R^2/√(t+15)>-27R^2/√(t).
Part II) We now show that ‖ x_t+1‖≤ 4R.
Combining Part I) and the definition of step size η_t=1/α√(t+15), we have
1/2[R^2-‖ x_t+1‖^2]=g_m+1(x_t+1)≥-c_0η_t+1≥-c_0η_1=-c_0/4α
and thus
‖ x_t+1‖^2≤ R^2+c_0/2α<15R^2<(4R)^2.
Part III) By Claim <ref>, it follows that
‖ v_t+1‖≤L_ℱ/R‖ x_t+1‖+3L_ℱ<7L_ℱ.
§.§ Concluding Remarks
By Lemma <ref>, the decision sequence {x_t}_t≥1 is attracted to the hypersphere ℬ_R and always stays inside a slightly larger hypersphere ℬ_4R.
Then, by Lemma <ref> applied with c=4, d=15, α=L_ℱ/R and step size η_t=1/(α√(t+d)) we obtain
Regret_T ≤ √(15+1)[(4+3)^2+1/2(4+1)^2]L_ℱR√(T)
= 246L_ℱR√(T).
Moreover, by Lemma <ref>, we have
𝒵_d=3/2√(17)[L_𝒢/R+β_𝒢]L_ℱR and
c_1=𝒱_α[2L_𝒢+β_G𝒱_α/α]+𝒵_d≤21[L_𝒢/R+3β_G]L_ℱR.
Hence, the convergence rate to the feasible 𝒞 satisfies for every t≥1 and i∈{1,…,m}
g_i(x_t)≥-c_1η_t≥-21[L_𝒢/R+3β_G]R^2/√(t+15).
§ PROOF OF THEOREM <REF>
In this section, we consider an online optimization problem with adversarially generated time-varying constraints.
More precisely, at each time step t, the learner receives partial information on the current cost f_t and feasible set 𝒞_t, and seeks to minimize (<ref>).
To make this problem well posed, we restrict the environment such that each feasible set 𝒞_t is contained in 𝒬_t (see Section <ref>) and the rate of change between consecutive time-varying constraints decreases over time.
We quantify a sufficient rate of decay in Assumption <ref>, which we restate below for the convenience of the reader.
[TVC Decay Rate]
We assume that the adversarially generated sequence {g_t}_t≥1 of time-varying constraints are such that for every x∈ℬ_4R and all t≥1, the following holds
‖ g_t+1(x)-g_t(x)‖_∞≤ 98/(t+16)·[L_𝒢/R+3β_𝒢]R^2.
We note that Assumption <ref> essentially only requires ‖ g_t+1(x)-g_t(x)‖_∞≤𝒪(1/t), as R can be chosen large enough such that the bound is satisfied.
Of course, R will appear in our regret and feasibility bounds, but it will not affect the dependence on t or T (up to constant factors).
We restate Theorem <ref> below for the convenience of the reader.
Suppose the functions {f_t,g_t}_t≥1 satisfy Assumption <ref> and Assumption <ref>.
Then, on input R,L_ℱ>0 and x_1∈ℬ_R, Algorithm <ref> applied with α=L_ℱ/R, augmented velocity polytope V_α^'(·) and step sizes η_t=1/α√(t+15) guarantees the following for all T≥1:
(regret) ∑_t=1^Tf_t(x_t)-min_x^⋆∈𝒞∑_t=1^Tf_t(x^⋆)≤ 246L_ℱR√(T);
(feasibility) g_t,i(x_t)≥-265[L_𝒢/R+4β_𝒢]R^2/√(t+15), for all t∈{1,…,T} and i∈{1,…,m};
(attraction) g_m+1(x_t)≥-27R^2/√(t) for all t∈{1,…,T}.
Outline This section is organized as follows.
In Subsection <ref>, we introduce a key geometric property that allows us to generalize the standard online gradient descent analysis to the setting of time-varying constraints.
In Subsection <ref>, we give an overview of our proof approach for Theorem <ref>.
In Subsection <ref>, we present the analysis that quantifies the convergence rate to the feasible set for the setting of slowly time-varying constraints.
Finally, in Subsection <ref>, we give an important special case, slightly generalizing Lemma <ref>, for which Assumption <ref> is satisfied.
§.§ Key Geometric Property
Our regret analysis builds upon the following key geometric property that generalizes Lemma <ref> to time-varying constraints.
We show that for any subset 𝒞_T of the polyhedral intersection 𝒬_T,
every decision x∈𝒞_T satisfies the normal cone constraint -r_t^T(x - x_t) ≤ 0, for every pair (x_t,r_t) in the decision sequence {(x_t,r_t)}_t=1^T up to step T.
As a result, a similar argument as in (<ref>) yields 𝒪(√(T)) regret in the time-varying constraint setting.
Let 𝒞_T be any subset of the polyhedral intersection 𝒬_T. Then, every decision x∈𝒞_T satisfies the normal cone constraint -r_t^T(x - x_t) ≤ 0, ∀ t∈{1,…,T}.
Let t∈{1,…,T} be arbitrary.
The statement is trivially fulfilled when r_t vanishes.
Suppose r_t is a non-zero vector and let x ∈𝒞_T be arbitrary.
By construction, the feasible set is given by 𝒞_T=∩_t=1^T{x∈ℝ^n | G(x_t)^T(x-x_t)≥0}.
Hence, we have ∇ g_t,i(x_t)^T(x-x_t)≥0 for all i∈ I(x_t) and since v_t∈ V_α(x_t), it follows that v(x)=v_t+x-x_t∈ V_α(x_t).
Moreover, the vector -r_t belongs to the normal cone N_V_α(x_t)(v_t), which implies -r_t^T(v-v_t)≤0 for all v∈ V_α(x_t).
In particular, for v(x) we have -r_t^T(x-x_t)≤0.
§.§ Proof Overview of Theorem <ref>
By Assumption <ref>, the slowly time-varying constraints g_t,i(x) are concave and β_𝒢-smooth such that ‖∇ g_t,i(x)‖≤ L_𝒢 for all x∈ℬ_4R, t≥1 and i∈{1,…,m}.
By construction, see Lemma <ref>, η_t=1/(α√(t+15)), α=L_ℱ/R and 𝒱_α=7L_ℱ implies that η_t+1𝒱_α=7R/√(t+16). We note that Lemma <ref> still holds for time-varying constraints, which implies ‖ x_t ‖≤ 4R and ‖ v_t ‖≤ 7L_ℱ.
Further, by Assumption <ref> we have for every x∈ℬ_4R, t≥1 and i∈{1,…,m} that
|g_t+1,i(x)-g_t,i(x)|≤ 98/(t+16)·[L_𝒢/R+3β_𝒢]R^2=2η_t+1^2[L_𝒢/R+3β_𝒢]𝒱_α^2.
Then, applying the preceding inequality and using similar arguments as in Part 2) of Section <ref>, we give in Corollary <ref> bounds on the slowly time-varying constraints g_t,i(x) from below.
In particular, we show that
g_t+1,i(x_t+1)≥(1-αη_t)g_t,i(x_t)-η_t^2[2L_𝒢/R+7β_𝒢]𝒱_α^2 for all i∈ I(x_t),
and
g_t+1,i(x_t+1)≥-η_t+1·7𝒱_α[L_𝒢+β_𝒢𝒱_α/(4α)] for all i∈{1,…,m}\ I(x_t).
Using a similar inductive argument as in Lemma <ref>, we show in Lemma <ref> that in the setting of slowly time-varying constraints, the following feasibility convergence rate holds
g_t,i(x_t)≥-[265L_𝒢/R+927β_𝒢]R^2/√(t+15), for all t∈{1,…,T} and i∈{1,…,m}.
Then, the regret and the attraction to the feasible sets follow as in Theorem <ref>.
§.§ Slowly Time-Varying Constraints
Suppose Assumption <ref> holds, x_1∈ℬ_R, α=L_ℱ/R and step sizes η_t=1/(α√(t+15)).
Then, for every i∈{1,…,m} and T≥1 we have
g_t,i(x_t)≥-[265L_𝒢/R+927β_𝒢]R^2/√(t+15), for all t∈{1,…,T} and i∈{1,…,m}.
The proof is by induction on t.
We start with the base case t=1.
The proof proceeds by case distinction.
Case 1. Suppose i∈{1,…,m}\ I(x_1), i.e., g_1,i(x_1)>0.
Then, by Corollary <ref> Part ii) we have
g_2,i(x_2)≥-η_2·7𝒱_α[L_𝒢+β_𝒢𝒱_α/(4α)]≥-η_2[49L_𝒢/R+86β_𝒢]L_ℱR.
Case 2. Suppose i∈ I(x_1), i.e., g_1,i(x_1)≤0.
By combining x_1∈ℬ_R and g_1,i is concave β_𝒢-smooth, it follows for every x∈𝒞_1⊆ℬ_R that
g_1,i(x_1) ≥ g_1,i(x)+∇ g_1,i(x)^T(x_1-x)-β_𝒢/2‖ x_1-x‖^2
≥ -2L_𝒢R-2β_𝒢R^2
= -η_1[8L_𝒢/R+8β_𝒢]L_ℱR.
Using η_t=1/(α√(t+15)) and η_1/η_2≤√(2), it follows that
(1-αη_1)g_1,i(x_1)≥-η_1[L_𝒢/R+β_𝒢]6L_ℱR≥-η_2[9L_𝒢/R+9β_𝒢]L_ℱR
and
η_1^2[2L_𝒢/R+7β_𝒢]𝒱_α^2≤η_2^2[4L_𝒢/R+14β_𝒢]𝒱_α^2≤η_2[49L_𝒢/R+172β_𝒢]L_ℱR.
Then, by Corollary <ref> Part i) we have
g_2,i(x_2) ≥ (1-αη_1)g_1,i(x_1)-η_1^2[2L_𝒢/R+7β_𝒢]𝒱_α^2
≥ -η_2[58L_𝒢/R+181β_𝒢]L_FR.
Our inductive hypothesis is g_t,i(x_t)≥-c_2η_t for all i.
We now show that it holds for t+1.
Case 1. Suppose i∈{1,…,m}\ I(x_t), i.e., g_t,i(x_t)>0.
Then by Corollary <ref> ii)
g_t+1,i(x_t+1)≥-η_t+1·7𝒱_α[L_𝒢+β_𝒢𝒱_α/(4α)]≥-η_t+1[49L_𝒢/R+86β_𝒢]L_ℱR.
Case 2. Suppose i∈ I(x_t), i.e., g_i(x_t)≤0.
Let A=[2L_𝒢/R+7β_𝒢]𝒱_α^2.
By combining Corollary <ref> Part i), the inductive hypothesis and using similar arguments as in the proof of Lemma <ref> Case 2, yields
g_t+1,i(x_t+1)≥-c_2η_t+1, where c_2=2.7A/α=[265L_𝒢/R+927β_𝒢]L_ℱR.
The feasibility convergence rate is then given by
g_t,i(x_t)≥-[265L_𝒢/R+927β_𝒢]R^2/√(t+15).
Suppose Assumptions <ref> and Assumption <ref> hold.
Let α=L_ℱ/R, 𝒱_α=7L_ℱ and step sizes η_t=1/(α√(t+15)).
Then, for every t≥1 we have
i) g_t+1,i(x_t+1)≥(1-αη_t)g_t,i(x_t)-η_t^2[2L_𝒢/R+7β_𝒢]𝒱_α^2 for all i∈ I(x_t); and
ii) g_t+1,i(x_t+1)≥-η_t+1·7𝒱_α[L_𝒢+β_𝒢𝒱_α/(4α)]
for all i∈{1,…,m}\ I(x_t).
Combining Assumption <ref> and (<ref>) gives
g_t+1,i(x_t+1)≥ g_t,i(x_t+1)-2η_t+1^2[L_𝒢/R+3β_𝒢]𝒱_α^2.
Then, by Claim <ref>, it follows for every i∈ I(x_t) that
g_t+1,i(x_t+1) ≥ g_t,i(x_t+1)-2η_t+1^2[L_𝒢/R+3β_𝒢]𝒱_α^2.
≥ (1-αη_t)g_t,i(x_t)-η_t^2𝒱_α^2β_𝒢/2-η_t^2[2L_𝒢/R+6β_𝒢]𝒱_α^2
> (1-αη_t)g_t,i(x_t)-η_t^2[2L_𝒢/R+7β_𝒢]𝒱_α^2,
and for every i∈{1,…,m}\ I(x_t) that
g_t+1,i(x_t+1) ≥ g_t,i(x_t+1)-2η_t+1^2[L_𝒢/R+3β_𝒢]𝒱_α^2
≥ -η_t+1𝒱_α[2L_𝒢+β_𝒢𝒱_α/(4α)]-η_t+1𝒱_α[L_𝒢𝒱_α/(2α R)+3β_𝒢𝒱_α/(2α)]
≥ -η_t+1·7𝒱_α[L_𝒢+β_𝒢𝒱_α/(4α)],
where we used that α=L_ℱ/R and 𝒱_α=7L_ℱ imply L_𝒢𝒱_α/(Rα)=7L_𝒢.
§.§ Average Time-Varying Constraints
An important special case where Assumption <ref> is satisfied, is summarized in the following slightly more general version of Lemma <ref>.
Suppose the functions g̃_t,i satisfy Assumption <ref> and in addition there is a decision x_t,i∈ℬ_R such that |g̃_t,i(x_t,i)|≤1/2[L_𝒢/R+3β_𝒢]R^2, for every t≥1 and i∈{1,…,m}.
Then the following average time-varying constraints, satisfy Assumption <ref> and Assumption <ref>:
g_t,i(x):=1/t∑_ℓ=1^tℓ,i(x)∈ℝ^m.
The rest of this subsection is devoted to proving Lemma <ref>.
We achieve this in two steps.
We start by showing in Lemma <ref> that the average time-varying constraints satisfy Assumption <ref>, and then in Lemma <ref> we demonstrate that they also satisfy Assumption <ref>.
Suppose g̃_t,i is concave β_𝒢-smooth such that ‖∇g̃_t,i(x)‖≤ L_𝒢 for all x∈ℬ_4R, t≥1 and i∈{1,…,m}.
Then, the average function
g_t,i(x):=1/t∑_ℓ=1^t g̃_ℓ,i(x)
is concave and β_𝒢-smooth and ‖∇ g_t,i(x)‖≤ L_𝒢 holds for all x∈ℬ_4R, t≥1 and i∈{1,…,m}.
By assumption, each g̃_ℓ,i is concave and β_𝒢-smooth, which implies
g̃_ℓ,i(x_t+1)≥g̃_ℓ,i(x_t)+[∇g̃_ℓ,i(x_t)]^T[x_t+1-x_t]-β_𝒢/2‖ x_t+1-x_t‖^2.
Summing over all ℓ∈{1,...,t} yields
1/t∑_ℓ=1^t g̃_ℓ,i(x_t+1)≥1/t∑_ℓ=1^t g̃_ℓ,i(x_t)+[1/t∑_ℓ=1^t∇g̃_ℓ,i(x_t)]^T[x_t+1-x_t]-1/t∑_ℓ=1^tβ_𝒢/2‖ x_t+1-x_t‖^2,
since 1/t∑_ℓ=1^t∇g̃_ℓ,i(x)=∇ g_t,i(x), which is equivalent to
g_t,i(x_t+1)≥ g_t,i(x_t)+[∇ g_t,i(x_t)]^T[x_t+1-x_t]-β_𝒢/2‖ x_t+1-x_t‖_2^2.
Hence, g_t,i is concave and β_𝒢-smooth.
Moreover, since ‖∇g̃_ℓ,i(x)‖≤ L_𝒢
for all ℓ≥1 and x∈ℬ_4R, we have
‖∇ g_t,i(x)‖=‖1/t∑_ℓ=1^t∇g̃_ℓ,i(x)‖≤1/t∑_ℓ=1^t‖∇g̃_ℓ,i(x)‖≤ L_𝒢.
We show next that the average time-varying constraints satisfy Assumption <ref>.
Suppose g̃_t,i is concave β_𝒢-smooth such that ‖∇g̃_t,i(x)‖≤ L_𝒢 for all x∈ℬ_4R, t≥1 and i∈{1,…,m}.
Further, suppose for every t≥1 and i∈{1,…,m}, there exists a decision x_t,i∈ℬ_R such that
|g̃_t,i(x_t,i)|≤1/2[L_𝒢/R+3β_𝒢]R^2.
Then, for α=L_ℱ/R, step sizes η_t=1/(α√(t+15)) and 𝒱_α=7L_ℱ, it holds for every x∈ℬ_4R that
|g_t+1,i(x)-g_t,i(x)|≤2η_t+1^2[L_𝒢/R+3β_𝒢]𝒱_α^2.
Using the inequality 1/(t+1)≤(17/2)·1/(t+16) for every t≥1 and η_t+1^2=1/(α^2(t+16)), it follows by construction that
|g_t+1,i(x)-g_t,i(x)| = |1/(t+1)·g̃_t+1,i(x)+t/(t+1)·g_t,i(x)-g_t,i(x)|
= 1/(t+1)·|g̃_t+1,i(x)-g_t,i(x)|
= 1/(t+1)·1/t·|∑_ℓ=1^t(g̃_t+1,i(x)-g̃_ℓ,i(x))|
≤ η_t+1^2·(17/2)α^2·1/t∑_ℓ=1^t|g̃_t+1,i(x)-g̃_ℓ,i(x)|.
By triangle inequality |g̃_t+1,i(x)-g̃_ℓ,i(x)|≤|g̃_t+1,i(x)|+|g̃_ℓ,i(x)|
and thus it suffices to bound the term |g̃_t,i(x)|
for every t≥1, i∈{1,…,m} and x∈ℬ_4R.
By assumption, x∈ℬ_4R and there is x_t,i∈ℬ_R satisfying inequality (<ref>). Further, g̃_t,i is concave, which implies
g̃_t,i(x)-g̃_t,i(x_t,i)≤[∇g̃_t,i(x_t,i)]^T[x-x_t,i]≤5L_𝒢R
and the fact that g̃_t,i is concave β_𝒢-smooth yields
g̃_t,i(x)-g̃_t,i(x_t,i) ≥ [∇g̃_t,i(x_t,i)]^T[x-x_t,i]-β_𝒢/2‖ x_t,i-x‖^2
≥ -5[L_𝒢/R+3β_𝒢]R^2.
Further, by combining |g̃_t,i(x)-g̃_t,i(x_t,i)| ≤ 5[L_𝒢/R+3β_𝒢]R^2, triangle inequality and assumption (<ref>), we obtain for every x∈ℬ_4R that
|g̃_t,i(x)| = |g̃_t,i(x)-g̃_t,i(x_t,i)+g̃_t,i(x_t,i)|
≤ |g̃_t,i(x)-g̃_t,i(x_t,i)|+|g̃_t,i(x_t,i)|
≤ 11/2[L_𝒢/R+3β_𝒢]R^2.
The statement follows by combining α=L_ℱ/R, 𝒱_α=7L_ℱ, (<ref>) and
|g_t+1,i(x)-g_t,i(x)| ≤ η_t+1^2·(17/2)α^2·1/t∑_ℓ=1^t|g̃_t+1,i(x)-g̃_ℓ,i(x)|
≤ η_t+1^2[L_𝒢/R+3β_𝒢]·(11/2)·17α^2R^2
< 2η_t+1^2[L_𝒢/R+3β_𝒢]𝒱_α^2.
|
http://arxiv.org/abs/2306.01613v2
|
20230602152105
|
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
|
[
"Javier Carnerero-Cano",
"Luis Muñoz-González",
"Phillippa Spencer",
"Emil C. Lupu"
] |
cs.LG
|
[
"cs.LG",
"cs.CR",
"stat.ML"
] |
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
Javier Carnerero-Cano, Student Member, IEEE,
Luis Muñoz-González, Phillippa Spencer,
and Emil C. Lupu
J. Carnerero-Cano, L. Muñoz-González and E. C. Lupu are with Imperial College London, South Kensington Campus, London, SW7 2AZ, United Kingdom. E-mail: {j.cano, l.munoz, e.c.lupu}@imperial.ac.uk
P. Spencer is with the Defence Science and Technology Laboratory (Dstl), Porton Down, Salisbury, United Kingdom.
July 31, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================================
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance. Optimal attacks can be formulated as bilevel optimization problems and help to assess their robustness in worst-case scenarios. We show that current approaches, which typically assume that hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem. This allows to formulate optimal attacks, learn hyperparameters and evaluate robustness under worst-case conditions. We apply this attack formulation to several ML classifiers using L_2 and L_1 regularization. Our evaluation on multiple datasets shows that choosing an “a priori” constant value for the regularization hyperparameter can be detrimental to the performance of the algorithms. This confirms the limitations of previous strategies and evidences the benefits of using L_2 and L_1 regularization to dampen the effect of poisoning attacks, when hyperparameters are learned using a small trusted dataset.
Additionally, our results show that regularization plays an important role in the robustness and stability of complex models, such as Deep Neural Networks, where the attacker has more flexibility to manipulate the decision boundary.
Adversarial machine learning, bilevel optimization, data poisoning attacks, hyperparameter optimization, regularization.
§ INTRODUCTION
In many applications, Machine Learning (ML) systems rely on data collected from untrusted data sources, such as humans, machines, sensors, or IoT devices that can be compromised and manipulated. Malicious data from these compromised sources can then be used to poison the learning algorithms themselves. These scenarios expose ML algorithms to data poisoning attacks, where adversaries manipulate a fraction of the training data to subvert the learning process, either to decrease its overall performance or to produce a particular kind of error in the system <cit.>. Poisoning attacks can also facilitate subsequent evasion attacks or produce backdoor (or Trojan) attacks <cit.>.
Several systematic optimal poisoning attacks have already been proposed to analyze different families of ML algorithms under worst-case scenarios, including Support Vector Machines (SVMs) <cit.>, other linear classifiers <cit.>, and neural networks <cit.>. These attack strategies are formulated as a bilevel optimization problem, i.e., an optimization problem that depends on another optimization problem. In these cases, the attacker typically aims to maximize a malicious objective (e.g., to maximize the error for a set of target points) by manipulating a fraction of the training data. At the same time, the defender aims to optimize a different objective function to learn the model's parameters, typically by minimizing some loss function evaluated on the poisoned training set.
Some of the previous attacks target algorithms that have hyperparameters, but the hyperparameters are considered constant regardless of the fraction of poisoning points injected in the training dataset. This can provide a misleading analysis of the robustness of the algorithms against such attacks, as the value of the hyperparameters can change depending on the type and strength of the attack. For example, Xiao et al. <cit.> presented a poisoning attack against embedded feature selection methods, including L_1, L_2 and elastic-net regularization. Their results show that the attacker can completely control the selection of the features to significantly increase the overall test error of linear classifiers. However, they assume a constant regularization hyperparameter regardless of the attack considered. We show that this approach provides overly pessimistic results on the ML algorithms' robustness to poisoning attacks.
In our prior work <cit.> we reported a limited case study that analyzes the influence of the L_2 regularization hyperparameter on the effect of poisoning attacks against Logistic Regression (LR). In this paper we provide a comprehensive analysis of the influence of the hyperparameters on the effect of poisoning attacks when using different regularization techniques, including L_2 and L_1 regularization. We also propose a more general optimal indiscriminate poisoning attack formulation to test worst-case scenarios against ML algorithms that contain hyperparameters. For this, we model the attack as a multiobjective bilevel optimization problem, where the outer objective includes both the learning of the poisoning points and that of the hyperparameters, while the inner problem involves learning the model's parameters. This attack formulation allows us to model an adversary aware not only of the training algorithm, but also of the procedure used to select the model's hyperparameters. Thus, this formulation considers a more realistic attacker and allows to assess in a more comprehensive way the robustness of the algorithms to poisoning attacks in worst-case scenarios. In scenarios where the attacker is aware of the dataset used to learn the model's hyperparameters and aims to maximize the overall error, the outer objective can be modeled as a minimax problem.
We used hypergradient (i.e., the gradient in the outer problem <cit.>) descent/ascent to solve the multiobjective bilevel optimization problem. As the computation of the exact hypergradients can be computationally expensive, especially for neural networks, we used Reverse-Mode Differentiation (RMD) <cit.> to approximate the hypergradients. We conduct an exhaustive experimental analysis on Logistic Regression (LR) and Deep Neural Networks
(DNNs), using different datasets including MNIST <cit.>, Fashion-MNIST (FMNIST) <cit.> and CIFAR-10 <cit.>, and attacks with both small and large fractions of poisoning points.[The PyTorch implementation of the algorithms used for the experiments is available at https://github.com/javiccano/hyperparameter-learning-and-poisoning—tnnls/https://github.com/javiccano/hyperparameter-learning-and-poisoning—tnnls/.]
We show that choosing (a priori) a constant value for the regularization hyperparameter, λ, can be detrimental: if the value is too high it damages accuracy (i.e., it produces underfitting when there is no attack), if the value is too low it damages robustness (the algorithm is more brittle in the presence of an adversary). In contrast, selecting λ appropriately by, for example, using a small trusted validation set, provides both accuracy and robustness regardless of the presence or absence of poisoning points in the training dataset and of the attack strength. Our empirical evaluation also reveals that the value of the regularization hyperparameter increases with the number of poisoning points injected in the training set. The algorithm automatically tries to compensate the negative effect of the poisoning points by increasing the strength of the regularization term. For the DNNs, we show that the attack can have a more pronounced effect in the later layers of the network, and that the use of different regularization hyperparameters for the different layers in the DNN can be beneficial to mitigate the impact of data poisoning. In the case of embedded feature selection methods, we confirm the stabilizing effect of regularization against poisoning.
The rest of the paper is organized as follows: In Sect. <ref> we describe the related work. In Sect. <ref> we introduce our novel formulation for optimal poisoning attacks against learning algorithms with hyperparameters. In Sect. <ref> we discuss how regularization can help mitigate poisoning attacks by enhancing algorithms' stability. In Sect. <ref> we present our experimental evaluation on different datasets. Finally, Sect. <ref> concludes the paper.
§ RELATED WORK
The first poisoning attacks reported in the literature targeted specific applications, such as spam filtering <cit.> or anomaly detection <cit.>. A more systematic approach was introduced by Biggio et al. <cit.> to poison SVMs, modeling the attack as a bilevel optimization problem. Subsequent works extended this approach to other families of ML algorithms, including linear and other convex classifiers <cit.> or embedded feature selection methods <cit.>. A more general approach was introduced by Muñoz-González et al. <cit.>, formulating different optimal attack strategies for targeting multiclass classifiers. The authors also proposed an algorithm to estimate the hypergradients in the corresponding bilevel optimization problem through Reverse-Mode Differentiation (RMD), which significantly improves the scalability of optimal attacks, allowing to poison a broader range of learning algorithms, including neural networks. Koh et al. <cit.> proposed an algorithm for solving bilevel problems with detectability constraints, allowing to craft poisoning points that can bypass outlier detectors. However, the algorithm is computationally demanding, which limits its applicability in practical scenarios. None of the previous approaches consider the effect and influence of the hyperparameters on the learning algorithm when the training dataset is poisoned.
Other approaches have also been proposed for crafting poisoning attacks: Koh et al. <cit.> created adversarial training examples by exploiting influence functions. This approach allows to craft successful targeted attacks by injecting small perturbations to genuine data points in the training set. Shafahi et al. <cit.>, Zhu et al. <cit.>, Huang et al. <cit.>, and Geiping et al. <cit.> proposed targeted attacks for situations where the adversary does not control the labels of the poisoning points. A Generative Adversarial Net-based model to craft indiscriminate and targeted poisoning attacks at scale against deep networks was proposed in <cit.>. This approach allows to naturally model detectability constraints for the attacker, enabling attacks with different levels of “aggressiveness" to bypass different types of defenses.
On the defender's side, it is possible to mitigate poisoning attacks by analyzing the samples that have a negative impact on the target algorithms <cit.>. However, this approach can be impractical in many applications, as it scales poorly. Following a similar approach, Koh et al. <cit.> propose to use influence functions as a mechanism to detect poisoning points. Different outlier detection schemes have proved to be effective to mitigate poisoning attacks in those cases where the attacker does not consider appropriate detectability constraints <cit.>. Label sanitization has also been proposed as a mechanism to identify and relabel suspicious training points <cit.>. However, this strategy can fail when the poisoning points “collude" <cit.>. Finally, Diakonikolas et al. <cit.> proposed a robust meta-algorithm, based on Singular Value Decomposition, capable of mitigating some attacks.
Koh et al. <cit.> reported some results on poisoning attacks against a linear SVM using L_2 regularization. Their results suggest that, in some cases, increasing the value of the regularization hyperparameter can make outlier detectors less effective. However, a direct comparison with our results is not possible as we consider a different threat model. Compared to <cit.>, we provide a more general formulation to model the effect of hyperparameters, including the case of L_2 regularization, in the presence of data poisoning. Furthermore, we provide a more complete and systematic evaluation of the benefits of using regularization to mitigate the effect of poisoning attacks, under reasonable assumptions.
§ GENERAL OPTIMAL POISONING ATTACKS
In data poisoning attacks the attacker can tamper with a fraction of the training set to manipulate the behavior of the learning algorithm <cit.>. We assume that the attacker can arbitrarily manipulate all the features and the label of the injected poisoning points, provided that the resulting points are within a feasible domain of valid data points. We consider white-box attacks with perfect knowledge, i.e., the attacker knows everything about the target system, including the training data, the feature representation, the loss function, the ML model, and the defense (if applicable) used by the victim. Although unrealistic in most cases, these assumptions are needed to analyze the robustness of the ML algorithms in worst-case scenarios for attacks of different strengths.
§.§ Problem Formulation
In line with most literature on poisoning attacks we consider ML classifiers. Then, in a classification task, given the input space 𝒳⊆ℝ^m and the discrete label space, 𝒴⊆ℤ_c, where c is the number of classes, the learner aims to estimate the mapping f: 𝒳→𝒴. Given a training set 𝒟_tr= {( x_tr_i , y_tr_i)}^n_tr_i=1 with n_tr IID samples drawn from the underlying
probability distribution p(𝒳, 𝒴), we can estimate f with a model ℳ: ℝ^n_tr× m→ℝ^n_tr× c trained by minimizing an objective function ℒ(𝒟_tr, Λ, w) : ℝ^n_tr× m×Z_c^n_tr→ℝ w.r.t. its parameters,[As in <cit.> we use parameters
to denote “parameters that are just parameters and not hyperparameters”.] w∈ℝ^d, given a set of hyperparameters Λ∈ℝ^ h.
In this paper, we use gradient-based algorithms to optimize the performance of the model on a clean validation set with respect to the hyperparameters <cit.> and poisoning points <cit.>. Thus, we assume that the defender has access to a small validation dataset 𝒟_val= {( x_val_j , y_val_j)}^n_val_j=1 with n_val trusted data points, representative of the ground-truth underlying data distribution. In practice, it is not uncommon to have access to a limited clean set, for example, because the integrity of a small set of data sources can be ascertained.[Note that if the quality of the trusted data is limited, the model's performance can be limited as well.] This small clean dataset is held out for the optimization of the hyperparameters (and the poisoning points, as we describe later). Then, as proposed in <cit.>, the model's hyperparameters can be learned by solving the following bilevel optimization problem:
min_Λ' ∈Φ(Λ) ℒ(𝒟_val, w^⋆)
s.t. w^⋆∈argmin_ w∈𝒲 ℒ(𝒟_tr, Λ', w),
where Φ(Λ) represents the feasible domain set for the hyperparameters Λ. The use of this approach to select the model's hyperparameters has some advantages compared to other selection methods. Cross-validation-based approaches require to re-train the model multiple times over different training and validation set splits, making it computationally very demanding when the number of hyperparameters is large and training the learning algorithm is expensive. Grid search techniques also rely on a separate validation set to select the hyperparameters. However, the exploration of all the hyperparameters values considered in the grid also requires to train the learning algorithm many times, which can be computationally demanding, especially as the number of hyperparameters in the model grows. This can be alleviated using more guided search techniques, such as Bayesian optimization, but still, the exploration of each combination of hyperparameters requires training the algorithms from scratch multiple times and the performance and scalability with the number of hyperparameters is reduced. In contrast, solving the bilevel optimization problem in Eq. (<ref>), with gradient-based techniques, is computationally more efficient than previous approaches when using approximate techniques to estimate the hypergradients in the outer objective <cit.>. In this case, the computation of these hypergradients does not require to train the learning algorithm (in the inner objective) completely, but just for a reduced number of epochs. This approach is more scalable, especially when the number of hyperparameters is large. On the downside, gradient-based techniques to solve Eq. (<ref>) do not guarantee to find the global optimum for the outer objective but possibly a local one. However, this problem can be mitigated with multiple re-starts.
In a poisoning attack, the adversary aims to inject a set of n_p poisoning data points, 𝒟_p= {( x_p_k ,y_p_k)}^n_p_k=1, in the training set to maximize some arbitrary objective, 𝒜: ℝ^n_target× m×ℤ_c^n_target→ℝ, evaluated on a set of target data points 𝒟_target. As described in <cit.> different attack scenarios can be considered depending on both the set of target data points and the attacker's objective, including indiscriminate and targeted attacks. To allow for the learning of the hyperparameters we therefore propose to formulate the attacker's problem as a multiobjective bilevel optimization problem:
min_Λ' ∈Φ(Λ)ℒ(𝒟_val, w^⋆), max_𝒟_p' ∈Φ(𝒟_p)𝒜(𝒟_target, w^⋆)
s.t. w^⋆∈argmin_ w∈𝒲ℒ(𝒟_tr', Λ', w),
where 𝒟_tr' = 𝒟_tr∪𝒟_p' is the poisoned dataset and Φ(𝒟_p) is the feasible domain for the attacker.
From the general formulation in Eq. (<ref>) it is clear that the poisoning points in 𝒟_tr' have an effect not only on the parameters of the classifier (in the inner problem), but also on its hyperparameters (in the outer objective for the defender). Previous studies have neglected the effect of the hyperparameters in the problem for the attacker, e.g., the regularization hyperparameter for the loss function for SVMs <cit.> or for embedded feature selection methods <cit.>. This can overestimate the adversary's capabilities to influence the learning algorithm, as we show in the synthetic experiment in Fig. <ref>.
Our novel attack formulation in Eq. (<ref>) allows to model a wide variety of attack scenarios, depending on the attacker's objective and the combinations between the target, validation and training sets. However, for the sake of clarity, in the remainder of the paper we focus on analyzing worst-case scenarios for indiscriminate poisoning attacks, where the attacker, having perfect knowledge, aims to increase the overall classification error in the target algorithm. These settings have been commonly used in most of the related work on poisoning attacks using bilevel optimization <cit.>. To achieve such a goal, the attacker aims to maximize the loss evaluated on a separate validation set, i.e., 𝒜(𝒟_target, w^⋆) = ℒ(𝒟_val, w^⋆): ℝ^n_val× m×ℤ_c^n_val→ℝ. In our case, where the attacker is also aware of the effect of the hyperparameters in the performance of the algorithm, 𝒟_val is the same as the validation dataset used by the defender, to maximize the overall error not only compromising the learning of the model's parameters, but also the selection (or learning) of its hyperparameters. Then, the attacker's problem can be formulated a bilevel optimization problem where the outer objective is a minimax problem:
min_Λ' ∈Φ(Λ) max_𝒟_p' ∈Φ(𝒟_p)ℒ(𝒟_val, w^⋆)
s.t. w^⋆∈argmin_ w∈𝒲ℒ(𝒟_tr', Λ', w).
In this formulation, in the outer problem, there is an implicit dependency of both the hyperparameters, Λ, and the poisoning points, 𝒟_p, on the parameters of the model learned in the inner optimization problem, w^⋆. We can also observe that the value of the poisoning points has an effect on the learning of both w and Λ in the inner and outer objectives respectively.
This formulation is compatible with grid-search-based approaches, which select the hyperparameters using a separate validation dataset. However, it is computationally infeasible to solve the problem for the attacker using these techniques, as the number of variables to be learned in the outer objective, i.e., the model's hyperparameters and the value of the features for all the poisoning points, is very large. On the other hand, cross-validation uses the same dataset for creating the different training and validation splits. Thus, the learner can not benefit from the trusted dataset and, both the training and validation datasets would contain poisoning points across all splits. It is important to note that the availability of the small trusted dataset gives a chance to the learner to defend against poisoning attacks. In our case, the learner uses the trusted set for validation aiming to mitigate the effect of the poisoning attack by the selection of appropriate hyperparameters. Our experiments show that this can be a good approach in some cases, for example, when using regularization to increase the stability of the learning algorithm, and helps mitigate the attack. Of course, more specialized algorithms can be devised to make a different use of the trusted set of data points (e.g., data hypercleaning <cit.>). However, it is not our intention here to develop a specific algorithm for defending against data poisoning, but rather to show that the existence of a trusted dataset can be helpful to reduce the impact of poisoning attacks just by using standard techniques to increase the stability of the algorithm, such as regularization, and learning the model's hyperparameters appropriately. Our attack formulation allows us to characterize the worst-case performance under such assumptions. Thus, our findings provide ML practitioners a methodology to better use their trusted data points to mitigate poisoning attacks without requiring specialized knowledge or algorithms, but using techniques commonly used for training ML algorithms, as is the case of regularization.
§.§ Solving General Optimal Poisoning Attacks
Solving the multiobjective bilevel optimization problems in Eq. (<ref>) and Eq. (<ref>) is strongly NP-Hard <cit.> and, even if the inner problem is convex, the bilevel problem is, in general, non-convex. However, it is possible to use gradient-based approaches to obtain (possibly) suboptimal solutions, i.e., finding local optima for the problem in Eq. (<ref>) and saddle points for the minimax problem in Eq. (<ref>). For clarity, in the rest of this paper we focus on the solution to Eq. (<ref>), which we use in our experiments to show the robustness of L_2 regularization to indiscriminate poisoning attacks. The solution of Eq. (<ref>) follows a similar procedure.
Similar to <cit.>, we assume that the label of the poisoning points is set a priori, so the attacker just needs to learn the features for the poisoning points, X_p. For clarity, in the following description we use 𝒜 (which does not explicitly depend on the poisoning points or the hyperparameters, but implicitly through the parameters) to denote the loss function evaluated on 𝒟_val in the outer objective, i.e., ℒ(𝒟_val, w^⋆), and ℒ to refer to the loss function evaluated on 𝒟_tr' in the inner objective, ℒ(𝒟_tr', Λ, w^⋆). Both are evaluated on w^⋆, the parameters obtained when solving the inner optimization problem.
To compute the hypergradients for the outer objective, we assume that the first and second derivatives of the loss function, ℒ, are Lipschitz-continuous functions. We can then compute the hypergradients by applying the chain rule, so that ∇_ X_p𝒜 = ( d w^⋆ / d X_p)∇_ w𝒜.[The expression for Λ is analogous.] To compute the implicit derivative, d w^⋆ / d X_p, we can leverage the stationarity (Karush-Kuhn-Tucker, KKT) conditions in the inner problem, i.e., ∇_ wℒ = 0, and apply the implicit function theorem <cit.>, so that ∇_ X_p∇_ wℒ + ( d w^⋆ / d X_p)∇^2_ wℒ = 0.
∇_ X_p𝒜 = -( ∇_ X_p∇_ wℒ)( ∇^2_wℒ)^-1∇_w𝒜,
where we assume that the Hessian ∇^2_wℒ is not singular. Brute-force computation of Eq. (<ref>) requires inverting the Hessian, which scales in time as 𝒪(d^3) and in space as 𝒪(d^2), where d is the number of parameters. However, as in <cit.>, we can rearrange the terms in the second part of Eq. (<ref>), solve the linear system: ( ∇^2_ wℒ) v = ∇_ w𝒜, and compute ∇_ X_p𝒜=-(∇_ X_p∇_ wℒ) v. The linear system can be efficiently solved by using Conjugate Gradient (CG) descent, as described in <cit.>. For this, let us assume that the inner problem is solved by an iterative algorithm that arrives at a local minimum after T_KKT training iterations. After solving the linear system, the procedure scales in time 𝒪((T_KKT + √(κ))d) and in space 𝒪(d) <cit.>, where κ is the condition number of the Hessian ∇^2_wℒ. Moreover, the Hessian-vector products ( ∇^2_ wℒ) v and (∇_ X_p∇_ wℒ) v can be computed exactly and efficiently with the technique proposed in <cit.>, thus avoiding the computation and storage of the Hessian, as follows:
(∇_ w^2 ℒ) v = ∇_ w( v^T∇_ wℒ),
(∇_ X_p∇_ wℒ) v = ∇_ X_p( v^T∇_ wℒ).
The computation of the first and second expression above scales as 𝒪(d) and 𝒪(max(d, n_p m))[𝒪(max(d, h)) for Λ, where h is the number of hyperparameters.] respectively—where n_p denotes the number of poisoning points, each one containing m features—both in time and in space. An elegant aspect of this technique is that, for ML models optimized with gradient-based methods, the equations for evaluating
the Hessian-vector products emulate closely those for standard forward and backward propagation. Hence, the application of existing automatic differentiation frameworks to compute this product is typically straightforward <cit.>.
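To illustrate how Eq. (<ref>) and the Hessian-vector products above can be combined in practice, the following PyTorch sketch assembles the implicit hypergradient with CG; it is a minimal sketch under our own naming conventions (flatten, conjugate_gradient, implicit_hypergrad), not the implementation released with this paper.

import torch

def flatten(tensors):
    return torch.cat([t.reshape(-1) for t in tensors])

def conjugate_gradient(mvp, b, iters=50, tol=1e-10):
    """Solve H v = b using only matrix-vector products with H."""
    x = torch.zeros_like(b)
    r = b.clone(); p = b.clone(); rs = r @ r
    for _ in range(iters):
        Hp = mvp(p)
        a = rs / (p @ Hp)
        x = x + a * p
        r = r - a * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def implicit_hypergrad(train_loss, val_loss, params, x_p):
    """Hypergradient of the validation loss w.r.t. the poisoning features x_p:
    -(grad_{X_p} grad_w L) (grad_w^2 L)^{-1} grad_w A, cf. Eq. (<ref>)."""
    g = flatten(torch.autograd.grad(train_loss, params, create_graph=True))   # grad_w L, with graph
    hvp = lambda u: flatten(torch.autograd.grad((g * u).sum(), params, retain_graph=True))
    b = flatten(torch.autograd.grad(val_loss, params))                        # grad_w A
    v = conjugate_gradient(hvp, b)                                            # solve (grad_w^2 L) v = grad_w A
    return -torch.autograd.grad((g * v).sum(), x_p)[0]                        # -(grad_{X_p} grad_w L) v

Here train_loss must be computed on a training set that contains x_p with requires_grad=True, so that the mixed derivative in the last line is non-trivial; replacing x_p by the hyperparameters gives ∇_Λ𝒜 in the same way.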
However, approaches based on the implicit function theorem require training the whole learning algorithm to compute the hypergradient, i.e., until the stationarity conditions are met. This can be intractable for some learning algorithms such as deep networks, where the number of parameters is huge. To sidestep this problem, different techniques have been proposed to estimate the value of the hypergradients <cit.>. These techniques do not require to re-train the learning algorithm each time the hypergradient is computed. Instead, they estimate the hypergradient by truncating the learning in the inner problem to a reduced number of training iterations.
As described in <cit.>, we can think of the training algorithm (inner problem) as a discrete-time dynamical system, described by a sequence of states s^(t)( X_p, Λ) ∈ℝ^d_s, with t = 1, …, T, where each state depends on model's parameters, the accumulated gradients and/or the velocities, and the training data and hyperparameters. In this paper, we focus on Stochastic Gradient Descent (SGD), i.e., s^(t)( X_p, Λ) = w^(t)( X_p, Λ), so that each state of the sequence depends only on the previous state. We can therefore reformulate the bilevel problem in (<ref>) as the constrained single-level optimization problem:
min_Λ' ∈Φ(Λ) max_ X_p' ∈Φ(𝒟_p)ℒ(𝒟_val, w^(T)( X_p, Λ))
s.t. w^(t)( X_p, Λ') = w^(t - 1)( X_p, Λ)
- η∇_ wℒ(𝒟_tr', Λ', w^(t - 1)),
t = 1, …, T,
where η is the learning rate for SGD.
Then, we estimate the hypergradients from the values of the parameters collected in the set of training states as
∇_ X_p𝒜 = (d w^(T)( X_p, Λ)/d X_p)∇_w𝒜,
where the bottleneck is, again, the computation of the implicit derivatives. Given the constraints in Eq. (<ref>), it is obvious that the state w^(t)( X_p, Λ) depends on the poisoning points and hyperparameters both, directly by its expression, and indirectly through the previous state w^(t-1)( X_p, Λ). Then, by applying the chain rule we obtainnote1
d w^(t)( X_p, Λ)/d X_p = ∂ w^(t)/∂ X_p + ∂ w^(t)/∂ w^(t-1)d w^(t-1)( X_p, Λ)/d X_p
Then, from a reduced number of training iterations, T ≤ T_KKT (which does not necessarily satisfy the stationarity conditions <cit.>), these expressions can be expanded, according to the updates of SGD <cit.>, as follows:
∇_ X_p𝒜 = ( ∂ w^(T)/∂ X_p + ∑_t=1^T-1(∏_t'=t+1^T ∂ w^(t')/∂ w^(t'-1)) ∂ w^(t)/∂ X_p)∇_w𝒜,
∇_Λ𝒜 = ( ∂ w^(T)/∂Λ + ∑_t=1^T - 1(∏_t'=t+1^T ∂ w^(t')/∂ w^(t'-1)) ∂ w^(t)/∂Λ)∇_w𝒜,
where ∂ w^(t') / ∂ w^(t'-1) = I - η∇_ w^2 ℒ, ∂ w^(t) / ∂ X_p = -η∇_ X_p∇_ wℒ, and ∂ w^(t) / ∂Λ = -η∇_Λ∇_ wℒ.
Depending on the order to compute the different terms in Eq. (<ref>), we can use two approaches to estimate the hypergradients: Reverse-Mode (RMD) and Forward-Mode Differentiation (FMD) <cit.>. In the first case, RMD requires first to train the learning algorithm for T training iterations, i.e., to compute w^(1) to w^(T). Then, the hypergradients estimate is computed by reversing the steps followed by the learning algorithm from w^(T) down to w^(1). On the other hand, FMD computes the estimate of the hypergradients as the algorithm is trained, i.e., from w^(1) to w^(T) (i.e. the estimates can be computed in parallel with the training procedure).
To estimate the hypergradients, RMD requires to compute a forward and a backward pass through the set of states. In some cases, as in <cit.>, RMD requires to store all the information collected in the states in the forward pass.[However, other RMD methods proposed in the literature do not require to store this information <cit.>.] In contrast, FMD just needs to do the forward computation. However, compared to RMD, the scalability of FMD depends heavily on the number of hyperparameters. As a practical example, consider training a neural network (including LR as a special case) with d weights, using classic iterative optimization algorithms such as SGD. According to Eq. (<ref>), RMD scales in time as 𝒪(Td) and in space as 𝒪(n_p m + h + Td), while FMD scales as 𝒪((n_p m + h)Td) and 𝒪((n_p m + h)d) in time and space respectively. Thus, the time complexity of RMD does not depend on the size of the poisoning points or hyperparameters. Then, for problems where the number of hyperparameters is large, as is the case for the poisoning attacks we introduced in the paper, RMD is computationally more efficient to estimate the hypergradients. As mentioned before, it is also clear that RMD is more efficient compared to grid search, where the learning algorithms need to be trained from scratch for each combination of the hyperparameters' values explored in the grid.
Table <ref> summarizes the computational trade-offs between different state-of-the-art methods to compute the hypergradients. For the analysis of the convergence properties of the hypergradients, we refer the reader to <cit.>, which studies and compares the convergence rate of techniques such as CG and RMD. From a practical perspective, the number of training iterations for the inner problem plays a crucial role in the convergence rate <cit.>, but can also cause overfitting in the outer objective <cit.>.
Here we include the RMD algorithm (Alg. <ref>), which we use to compute the hypergradients estimate at the outer level problem (both for the features of the poisoning points (Line <ref>), and the hyperparameters (Line <ref>)). RMD requires first to train the learning algorithm for T training iterations (Lines <ref>-<ref>). Then, the hypergradients estimate is computed by differentiating the updates of the learning algorithm and reversing its sequence of parameters (Lines <ref>-<ref>), i.e., expanding the terms in Eq. (<ref>) in reverse order. This approach can be derived by leveraging a Lagrangian formulation associated with
the parameter optimization dynamics <cit.>. Lines <ref>-<ref> compute the corresponding Hessian-vector products, whereas Lines <ref>-<ref> update the value of the hypergradients. We use a notation similar to <cit.>, where more details on the derivation of this algorithm can be found.
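As a minimal illustration of the truncated estimates in Eqs. (<ref>)-(<ref>), the sketch below unrolls T SGD steps of an L_2-regularized linear classifier and backpropagates the validation loss through them. This computes the same truncated hypergradients as RMD (the derivative of the validation loss with respect to X_p and the hyperparameter through the T unrolled updates), although it simply retains the unrolled computation graph instead of explicitly reversing the updates as in Alg. <ref>; all names, sizes and the choice of loss are our own.

import torch
import torch.nn.functional as F

def truncated_hypergrads(x_tr, y_tr, x_val, y_val, x_p, y_p, log_lam, T=20, lr=0.1):
    """Hypergradients of the validation loss w.r.t. the poisoning features x_p
    and the (log) regularization hyperparameter, through T unrolled SGD steps."""
    w = torch.zeros(x_tr.shape[1], 1, requires_grad=True)        # w^(0)
    X = torch.cat([x_tr, x_p]); Y = torch.cat([y_tr, y_p])       # poisoned training set
    for _ in range(T):
        inner = F.binary_cross_entropy_with_logits(X @ w, Y) \
                + torch.exp(log_lam) / 2 * (w ** 2).sum()
        g = torch.autograd.grad(inner, w, create_graph=True)[0]  # keep the step in the graph
        w = w - lr * g                                           # w^(t) = w^(t-1) - eta grad L
    val = F.binary_cross_entropy_with_logits(x_val @ w, y_val)
    return torch.autograd.grad(val, [x_p, log_lam])

# illustrative usage
n, m = 64, 5
x_tr, x_val = torch.randn(n, m), torch.randn(n, m)
y_tr = (torch.rand(n, 1) > 0.5).float(); y_val = (torch.rand(n, 1) > 0.5).float()
x_p = torch.randn(4, m, requires_grad=True)      # poisoning features, to be updated by ascent
y_p = 1.0 - y_tr[:4]                             # flipped labels, kept fixed
log_lam = torch.zeros((), requires_grad=True)
grad_xp, grad_lam = truncated_hypergrads(x_tr, y_tr, x_val, y_val, x_p, y_p, log_lam)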
§.§ Projected Hypergradient Descent/Ascent
After computing the hypergradients, at each hyperiteration we use projected hypergradient descent/ascent to update the poisoning points and the hyperparameters:
X_p ←Π_Φ(𝒟_p)( X_p + α ∇_ X_p𝒜),
Λ ←Π_Φ(Λ)( Λ - α ∇_Λ𝒜),
where α is the learning rate for the outer problem and Π_Φ(𝒟_p) and Π_Φ(Λ) are the projection operators for the features of the poisoning points, X_p, and the hyperparameters, Λ, defined as Π_Φ(·) (input) ≜clip(input, infΦ(·), supΦ(·)), so that their updated values are within the corresponding feasible domains, Φ(·). In our case we used standard gradient descent/ascent to solve Eq. (<ref>). The analysis of other alternatives to solve minimax games, such as <cit.>, is left for future work.
Projected Hypergradient Descent/Ascent
Input: ℳ, 𝒜, ℒ, 𝒟_val, 𝒟_tr, n_p, 𝒫, T_mul, T, α, η
Output: 𝒟_p^(T_mul), Λ^(T_mul)
Alg. <ref> describes the procedure to solve the multiobjective bilevel problem proposed in the paper. Essentially, this algorithm implements projected hypergradient descent/ascent for T_mul iterations (Lines <ref>-<ref>) to optimize, in a coordinated manner, the poisoning points (Line <ref>)—replaced into the training set (Line <ref>)—and the set of hyperparameters (Line <ref>).
To reduce the computational burden, we consider the simultaneous optimization of a batch of n_p poisoning points, 𝒟_p = {( x_p_k ,y_p_k)}^n_p_k=1. We generate the initial values of 𝒟_p by cloning n_p samples—uniformly sampled without duplicates—of 𝒟_tr. Their labels are initially flipped and
kept fixed during the optimization. This process is carried out in the function (Line <ref>).
Then, these n_p poisoning samples replace the n_p clean samples of 𝒟_tr whose indices are in the set 𝒫 (Line <ref>).
On the other hand, the hyperparameters are initialized in (Line <ref>).
To solve the bilevel problem, every time the variables in the outer problem are updated, the model's parameters need to be initialized and optimized again. Thus, the routine in Line <ref>
provides a particular initialization for the model's parameters, and the routine in Line <ref> denotes the particular optimization algorithm used to train the model's parameters and compute the corresponding hypergradients. In this work, this algorithm is Reverse-Mode Differentiation (RMD) (Alg. <ref>).
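A compact sketch of the updates in Eqs. (<ref>)-(<ref>), with Π implemented as clipping to box-shaped feasible domains, is given below; the bounds and variable names are illustrative, and the hypergradients are assumed to come from an estimator such as RMD (Alg. <ref>).

import torch

def projected_hyper_step(x_p, log_lam, grad_xp, grad_lam, alpha,
                         feat_box=(0.0, 1.0), lam_box=(-6.0, 6.0)):
    """One outer iteration: hypergradient ascent on the poisoning features and
    descent on the hyperparameters, each followed by clipping onto its domain."""
    with torch.no_grad():
        x_p += alpha * grad_xp                 # maximize the validation loss
        x_p.clamp_(*feat_box)                  # Pi over Phi(D_p)
        log_lam -= alpha * grad_lam            # minimize the validation loss
        log_lam.clamp_(*lam_box)               # Pi over Phi(Lambda)
    return x_p, log_lam

In Alg. <ref>, this step is interleaved with re-initializing and re-training the model so that each hypergradient is evaluated at fresh inner parameters.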
§ REGULARIZATION TO PARTIALLY MITIGATE POISONING ATTACKS
Poisoning attacks are intrinsically related to the stability of ML algorithms. Attackers aim to produce large changes in the target algorithm by influencing a reduced set of training points. Xu et al. <cit.> introduced the following definition of stability: “an ML algorithm is stable if its output is nearly identical on two datasets, differing on only one sample." This concept of stability has also been studied in the field of robust statistics, in which “robustness” formally denotes this definition of stability <cit.>. It is not our intention here to provide a formal analysis of the stability of ML algorithms, but to show that stability is an important property in the design of ML algorithms that are robust to data poisoning.
L_2 (or Tikhonov) regularization is a well-known mechanism to increase the stability of ML algorithms <cit.>. In L_2 regularization, a penalty term is added to the original loss function, which shrinks the norm of the model's parameters, so that ℒ( 𝒟_tr, w, λ)=ℒ(𝒟_tr, w)+e^λ/2|| w||_2^2,
where λ is the hyperparameter that controls the strength of the regularization term. The exponential form is used to ensure a positive contribution of the regularization term to the loss function and to help learning λ, for example by using Eq. (<ref>), as this hyperparameter is usually searched over a log-spaced grid <cit.>. In principle, different L_2 regularization schemes can be considered: e.g., in neural networks, we could have a different regularization term for each layer or even for each parameter <cit.>.
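To make the learnable-regularization objective concrete, the following is a minimal sketch for the LR case (binary cross-entropy plus the e^λ/2 || w||_2^2 term). The function name, the absence of a bias term, and the tensor shapes are our own simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def l2_regularized_loss(w, X, y, log_lam):
    """Logistic-regression loss plus (e^lambda / 2) * ||w||_2^2.

    log_lam is the learnable hyperparameter lambda; the exponential keeps the
    penalty positive and matches the usual log-spaced search for lambda.
    X: (n, d) features, y: (n,) labels in {0, 1}, w: (d,) parameters.
    """
    data_loss = F.binary_cross_entropy_with_logits(X @ w, y)
    return data_loss + 0.5 * torch.exp(log_lam) * w.pow(2).sum()
```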
Xiao et al. <cit.> analyzed the robustness of embedded feature selection, including L_2 and L_1 regularization, for linear classifiers against optimal poisoning attacks. Although their experimental results showed that L_2 was slightly more robust compared to L_1 regularization and elastic-net, all the classifiers tested were very vulnerable to indiscriminate optimal poisoning attacks. However, these results relied on the assumption that the regularization hyperparameter was constant regardless of the fraction of poisoning data, which, as we show in our experiments, provides a limited perspective on the robustness of the learning algorithms.
The synthetic example with a binary classifier in Fig. <ref> illustrates the limitations of the approach in <cit.>. Here, 16 points per class were drawn from two different bivariate Gaussian distributions and we trained an LR classifier. Fig. <ref>(left) shows the effect of injecting a single poisoning point (red point, labeled as green) to maximize the error (measured on a separate validation set with 32 points per class) against a non-regularized LR classifier.[ The details of the experiment can be found in Appx. <ref>.] The dashed-white line represents the decision boundary learned when training on the clean dataset, and the red line depicts the decision boundary when training on the poisoned dataset. We observe that a single poisoning point can significantly alter the decision boundary. Fig. <ref>(center), shows a similar scenario, but training an LR classifier with L_2 regularization, setting λ=log(20)≈3. Here, we observe that the effect of the poisoning point is much reduced and the decision boundary shifts only slightly. In the background of these two figures we represent the validation error of the LR trained on a poisoned dataset as a function of the location of the poisoning point. We observe that, when there is no regularization (left) the error can significantly increase when we inject the poisoning point in certain regions. On the contrary, when regularization is applied (center), the colormap is more uniform, i.e., the algorithm is quite stable regardless of the position of the poisoning point. Note that, when the model is regularized, the increase in the validation error after the attack is small. In the next section, we also experiment with L_1 regularization against data poisoning. In this case, ℒ( 𝒟_tr, w, λ)=ℒ(𝒟_tr, w)+e^λ|| w||_1.
Fig. <ref>(right) shows how the optimal value of λ that minimizes the loss in the trusted validation set changes significantly as a function of the location of the poisoning point. The colormap in the background represents the value of λ.
We observe that λ is much bigger for the regions where the poisoning point can influence the classifier more (Fig. <ref>(left)). So, when the poisoning attack has a negative impact on the classifier's performance, the importance of the regularization term, controlled by λ, increases. It is clear that selecting the value of λ appropriately, using a small trusted validation set, can have a significant impact on the classifier's robustness. Furthermore, when testing the robustness of regularized classifiers we must consider the interplay between the attack strength and the value of λ.
§ EXPERIMENTS
We evaluate the effectiveness of the attack strategy in Eq. (<ref>) against LR and feed-forward DNNs. We study the influence of L_2 and L_1 regularization on the attack, providing an analysis of the robustness of the learning algorithms to worst-case scenarios for attacks with different strengths. Note that the analysis of optimal indiscriminate poisoning attacks against non-convex models is substantially more computationally difficult. Most previous work in optimal poisoning attacks focuses on linear classifiers and, to our knowledge, our study is the first to analyze the effect of regularization against data poisoning on DNNs.
§.§ Experimental Settings
For both LR and DNNs, we use three different binary classification problems: MNIST (`0' vs. `8') <cit.>, FMNIST (trouser vs. pullover) <cit.>, and CIFAR-10 (airplane vs. frog) <cit.>. All datasets are balanced and drawn at random from the original joint pool of training and test points. The details for each dataset are included in Table <ref>.
All our results are the average of 10 repetitions with different random data splits for training, validation and test sets. Moreover, both MNIST and FMNIST sets are normalized to be in the range [0, 1]^784, whereas CIFAR-10 sets are normalized to be in the range [-1, 1]^3,072. For all the attacks, we measure the average test error for different attack strengths, where the number of poisoning points ranges from 0 (0%) to 1,750 (35%). The size of the batch of poisoning points that are simultaneously optimized is 350 for all the datasets. For MNIST and FMNIST, this leads to 274,400 features to be optimized simultaneously, and to 1,075,200 features for CIFAR-10. In this way, we simulate six different ratios of poisoning ranging from 0% to 35%.
We simulate different ratios of poisoning points in a cumulative manner: Once the optimization of the current batch of poisoning points and hyperparameters is finished,[The criterion to finish the loop that optimizes the variables of the outer level problem is given by the number of hyperiterations.] this batch of poisoning points is fixed and the next batch of poisoning points is replaced into the remaining clean training set, whereas the hyperparameters are re-initialized, to carry out their corresponding optimization. To accelerate their optimization, the hypergradients for the poisoning points are normalized with respect to their L_2 norm, and the hypergradients for each Λ are also normalized with respect to their corresponding value.[The analysis of other techniques to accelerate the optimization, such as adaptive learning rates, is left for future work.]
The LR classifier's parameters are always initialized with zeros, for all the datasets. The DNN models have two hidden layers with Leaky ReLU activation functions as follows: 784×32×8×1, i.e., 25,393 parameters, for MNIST and FMNIST; and 3,072×64×32×1, i.e., 198,785 parameters, for CIFAR-10. In the DNN models, these parameters are initialized according to the Xavier initialization method <cit.>, using a uniform distribution for all the parameters except the bias terms, which are initialized with a value of 10^-2.
For all the experiments, we make use of SGD both to update the parameters in the forward pass of RMD, and to train the model when testing the attack (full batch training). The choice of the number of iterations for the inner problem, T, depends on the model and the training dataset. Low values of T could lead to low-quality approximations for the hypergradient. As T increases, the solution of RMD approaches the exact (true) hypergradient, but at the risk of overfitting the outer objective in the bilevel optimization problem <cit.>. The details of the attack settings are shown in Table <ref>, whereas the ones for testing the attacks are in Table <ref>.
All the experiments have been run on 2 × 11 GB NVIDIA GeForce® GTX 1080 Ti GPUs. The RAM memory is 64 GB (4×16 GB) Corsair VENGEANCE DDR4 3000 MHz. The processor (CPU) is Intel® Core™ i7 Quad Core Processor i7-7700k (4.2 GHz) 8 MB Cache.
§.§ Logistic Regression
§.§.§ Test Error and Value of Learned
For LR we test the general poisoning attack strategy in Eq. (<ref>)—labeled as λ_RMD in the figures—using the following settings for the computation of the hypergradients with RMD. For MNIST we set T, the number of iterations for the inner problem, to 140. For FMNIST and CIFAR-10 we use T=160 and T=500, respectively.
For comparison purposes, in addition to crafting attacks learning the value of λ, λ_RMD, we also craft optimal poisoning attacks setting the value of λ to different constant values: no regularization (λ =-∞); a very large one (for L_2 regularization: λ = log (1,000) for MNIST and FMNIST, and λ = log (10,000) for CIFAR-10; for L_1 regularization: λ = log (50) for MNIST, λ = log (25) for FMNIST, and λ = log (100) for CIFAR-10); and the value of λ optimized with 5-fold cross-validation (λ_CLEAN). By comparing with no regularization and large constant values for λ, we aim to show the trade-off between accuracy (under clean data) and robustness to different attack strengths. The case of λ_CLEAN is similar to the settings used in <cit.>, which uses a methodology akin to <cit.>, where the authors use K-fold cross-validation to select the value of λ, and the clean data is used both for training and validation in an unbiased way.
The results are shown in Fig. <ref>. We observe that when the model is not regularized or uses λ_CLEAN, the attacks are very effective and the test error increases significantly when compared to the algorithm's performance on the clean dataset (0% of poisoning). In contrast, for the largest λ the test error increases moderately with the increasing fraction of poisoning points, showing a lower test error compared to the case of no regularization. However, in the absence of an attack, the algorithm underfits and the error is higher compared to the other models (especially in the case of CIFAR-10). When the value of λ is learned (λ_RMD) using the trusted validation dataset, the increase in the test error is moderate and, when the ratio of poisoning points is large, the performance is similar to when λ is large. We can also observe that, in this case, when there is no attack, the performance is similar to that of the non-regularized classifier.
The results in Fig. <ref> also show that the attack and the methodology presented in <cit.> provide an overly pessimistic view on the robustness of L_2 and L_1 regularization to poisoning attacks, and that using the hyperparameter learned when the data is clean can be detrimental under data poisoning. We show that, by appropriately selecting the value of λ, we can effectively reduce the impact of such attacks. We can also observe that there is a trade-off between accuracy and robustness: over-regularizing (i.e., setting a very large value for λ) makes the algorithm more robust to the attack, but the performance on clean data is degraded.
In Fig. <ref> we show the value of λ learned and the norm of the model's parameters divided by the number of parameters, || w||^2_2/d, as a function of the fraction of poisoning points injected. We observe that the regularization hyperparameter increases and then saturates as we increase the fraction of poisoning points. Thus, the regularization term compensates the effect of the poisoning points on the model's parameters up to a point.
Comparing L_2 and L_1, we observe that both regularization techniques provide similar mitigation effects against the attack. Thus, even if L_1 regularization does not necessarily provide stability to the learning algorithm, as is the case of L_2 regularization, the use of the trusted validation set for learning the regularization hyperparameter helps to mitigate the impact of the attack in both cases. The presence of the poisoning points increases the norm of the parameters if no regularization is applied. But, when the trusted validation dataset is available for selecting the regularization parameter, both L_1 and L_2 regularization are capable of mitigating this effect, and thus, of reducing the impact of the poisoning points.
§.§.§ Sensitivity Analysis of the Size of the Validation Set
The size of the trusted validation set has an effect not only on the selection of the hyperparameters, but also on the effectiveness of the poisoning points learned using the attack in Eq. (<ref>) when evaluated on a separate test set. Note that having a larger trusted dataset is not necessarily beneficial only for the learner, but also for the attacker, who, under worst-case scenario assumptions, also has access to the trusted validation set.
To study this effect, we consider an LR classifier and the same datasets (i.e., MNIST, FMNIST and CIFAR-10) and settings as before. Previously, we assumed that the validation set was ten times smaller than the training set for MNIST and FMNIST, and five times smaller for CIFAR-10. Now, the size of the training and test sets is fixed, and we evaluate different sizes for the validation set—compared to the size of the training set. To analyze the influence of the validation set both when there is no regularization and when there is, we define the relative decrease of test error as the relative difference of the test error obtained when there is no regularization and when the value of λ is learned using the trusted validation set, i.e., (Test Error_|No Reg. - Test Error_|λ_RMD) / Test Error_|No Reg..
In Fig. <ref> and Fig. <ref>, we observe that when the model is not regularized, for MNIST and CIFAR-10, the test error is higher when the validation set is larger, as the poisoning points do not overfit the validation set. In contrast, for FMNIST the different-size validation sets result in a similar test error. On the other hand, when λ is learned (L_2 and L_1 regularization), for MNIST and FMNIST the test error decreases when the validation set is smaller, whereas for CIFAR-10, the opposite occurs. This shows that having a larger validation set is not always advantageous. When the poisoning points are learned with no regularization, a larger validation set provides more effectiveness for the attack, reducing the overfitting of the attack points. However, when using regularization and the poisoning points and hyperparameters are jointly learned, the optimal size of the validation set can be task-dependent. Our results show that, with this interplay between the learner and the attacker, the net benefit for the learner depends on the specific classification task, the size of the validation set and the attack strength. However, it is also important to note that, across all experiments, there is a clear benefit for using regularization to mitigate the impact of the attack in all cases and, especially, for strong attacks.
§.§.§ Consistency Index
To understand how embedded feature selection methods based on L_2 regularization are affected by the attack, we evaluate the stability of feature selection under poisoning using Kuncheva's consistency index <cit.>. Given two feature subsets A, B ⊆𝒳, with |A| = |B| = k, r = |A ∩ B|, and 0 < k < |𝒳| = d, Kuncheva's consistency index is defined as I_c(A,B) = (rd - k^2)/(k(d - k)), where positive values indicate similar sets, zero is equivalent to random selections, and negative values indicate strong anti-correlation between the feature subsets. The underlying idea of this consistency index is to normalize the number of common features in the two sets using a correction for chance that accounts for the average number of common features randomly selected out of k trials <cit.>.
To evaluate how poisoning affects embedded feature selection, we compute this index using for A the feature set selected for the clean training data, and compare it against a set B selected under attack, at different percentages of poisoning. For each scenario, we consider the first k features exhibiting the highest absolute weight values: for MNIST, given that the most of the features are close to zero, we choose the top 20, 40 and 80 features; for FMNIST, the top 40, 80 and 160 features; and for CIFAR-10, the top 200, 400 and 800 features.
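For reference, the computation of the index and of the top-k feature sets can be sketched as follows (the function names are illustrative):

```python
import numpy as np

def kuncheva_index(A, B, d):
    """Kuncheva's consistency index I_c(A, B) = (r*d - k^2) / (k*(d - k))."""
    A, B = set(A), set(B)
    k, r = len(A), len(A & B)
    return (r * d - k ** 2) / (k * (d - k))

def top_k_features(weights, k):
    """Indices of the k features with the largest absolute weight."""
    return np.argsort(-np.abs(weights))[:k]

# Example: index between the clean and the poisoned model, top-40 features, d = 784.
# kuncheva_index(top_k_features(w_clean, 40), top_k_features(w_poisoned, 40), d=784)
```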
The results for L_2 regularization for MNIST, FMNIST and CIFAR-10 are shown in Fig. <ref>. The corresponding results for L_1 regularization are consistent with these and can be found in Fig. <ref>. We observe that, in all cases, the consistency index decreases with the ratio of poisoning. This means that, to succeed, the attack naturally modifies the importance of the features of the training set (even if the attack is not specifically designed to do that), so that the poisoned model pays more attention to less relevant features. It is also clear that if the model is not regularized, the features selected are less consistent, and regularization helps to increase the feature stability under poisoning. For λ_RMD, it is generally bounded between the cases of no regularization and large value of λ, showing that the algorithm sacrifices some feature stability to decrease the test error. Compared to L_1 (Fig. <ref>), L_2 regularization provides greater feature stability when using a large regularization hyperparameter. It is important to note that the selection of the regularization hyperparameter, using Eq. (<ref>), aims to minimize the error on the validation set, not to maximize the stability of the features, which would require a different defensive strategy. However, the results in Fig. <ref> help to understand better the combined effect of the poisoning attack and the use of regularization.
§.§ Deep Neural Networks
Poisoning attacks can have different effects on the different layers of the target DNNs <cit.>. This problem has not been sufficiently studied in the research literature and, in this section, we provide useful insights that shed some light in this regard through the lens of regularization. For this, we consider two possibilities: a single regularization hyperparameter, and a vector of regularization hyperparameters—with one hyperparameter for each layer. Intuitively, the amount of scaling needed by each layer's parameters to compensate for a change in the output is not the same, as the activation functions are non-linear. This also gives us an intuition about the layers most vulnerable to the poisoning attack. We also propose an additional modification to the RMD algorithm: we apply different initial random parameters w^(0) for every update of the poisoning points. This can be interpreted as assembling different randomly initialized DNNs to improve the generalization of the poisoning points across different parameter initializations. We set T=700 for MNIST and T=800 for FMNIST and CIFAR-10. This scenario is much more challenging for the bilevel problem we aim to solve, as the models have two hidden layers with Leaky ReLU activation functions: 784×32×8×1, i.e., 25,393 parameters, for MNIST and FMNIST; and 3,072×64×32×1, i.e., 198,785 parameters, for CIFAR-10.
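A sketch of the per-layer penalty is given below; one learnable log-hyperparameter is attached to each linear layer, and the commented usage shows the 784×32×8×1 MNIST/FMNIST architecture. The helper name and the choice to penalize both weights and biases of each layer are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def per_layer_l2_penalty(model, log_lams):
    """Sum over layers l of (e^lambda_l / 2) * ||w_l||_2^2, one lambda_l per layer."""
    layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
    assert len(layers) == len(log_lams)
    penalty = 0.0
    for layer, log_lam in zip(layers, log_lams):
        # weights and biases of the layer; penalizing biases is a design choice
        sq_norm = sum(p.pow(2).sum() for p in layer.parameters())
        penalty = penalty + 0.5 * torch.exp(log_lam) * sq_norm
    return penalty

# Usage sketch for the MNIST/FMNIST architecture (784x32x8x1, Leaky ReLU):
# model = nn.Sequential(nn.Linear(784, 32), nn.LeakyReLU(),
#                       nn.Linear(32, 8), nn.LeakyReLU(), nn.Linear(8, 1))
# log_lams = torch.zeros(3, requires_grad=True)
# loss = data_loss + per_layer_l2_penalty(model, log_lams)
```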
As before, we denote with λ_RMD the case where the regularization hyperparameter is learned according to Eq. (<ref>), distinguishing now the cases: (1) when a single regularization hyperparameter is used for the whole DNN, and (2) when a different hyperparameter is used at each layer. We also performed attacks with different strength for the DNN assuming it is trained without regularization (λ=-∞) and with a large value for λ (for L_2 regularization: λ= log(100) for MNIST, and λ= log(500) for FMNIST and CIFAR-10; for L_1 regularization: λ= log(50) for MNIST, λ= log(10) for FMNIST, and λ= log(25) for CIFAR-10), constant for all the layers. Fig. <ref> shows the results for L_2 regularization. The results for L_1 regularization are coherent with the ones for L_2 and can be found in Fig. <ref>. In this case, we omitted the case where λ is set with 5-fold cross-validation on the clean dataset as the search space is large, which makes it computationally very expensive.
The results in Fig. <ref> are consistent with those obtained for the case of LR (Fig. <ref>). When there is no regularization, the algorithm is vulnerable to the poisoning attack and its test error increases significantly. For a large value of λ, the algorithm's performance remains quite stable, but the clean error is higher. For λ_RMD the test error increases only moderately, and the results when using a single hyperparameter or a different hyperparameter at each layer are very similar. From Fig. <ref> and Fig. <ref> we can see that when there is no attack, the test error for λ_RMD is smaller than in the other two cases. Although over-regularizing may be appealing to make the algorithm more robust to poisoning, the performance in the absence of attacks may be significantly worse. Learning λ evidences this trade-off. For a large fraction of poisoning points, the small discrepancy observed between λ_RMD and the large value of λ is due to the non-convexity of the bilevel optimization problem, resulting in learning (possibly) suboptimal values for λ_RMD. On the other hand, comparing the results for the DNNs (Fig. <ref> and Fig. <ref>) and for LR (Fig. <ref>), it is evident that the mitigating effect of regularization is more prominent in the case of DNNs. As the capacity of the DNN (compared to LR) is higher, the attackers can have more flexibility to manipulate the decision boundary. Hence, having regularization in place, in combination with the trusted validation set, is even more important in the case of the DNNs.
Fig. <ref> and Fig. <ref> show the value of λ when using a different regularization term at each layer, for L_2 and L_1 regularization, respectively. We observe that the λ learned for the second hidden and the output layers increases faster than the one for the first layer and that, for FMNIST and CIFAR-10, the value for the first hidden layer also starts to grow faster from 20% of poisoning onwards. This suggests that the later layers can be more vulnerable to the poisoning attack. The poisoning attack tries to produce more changes in those layers and, at the same time, the network tries to resist those changes by increasing the value of the corresponding regularization hyperparameters. On the other hand, when the attack is very strong, its impact appears more uniform across all layers in the DNN, based on the values of λ learned for each layer.
Finally, as in the case of LR, the value of the regularization hyperparameters is also related to the norm of the weights divided by the number of parameters for each layer in the DNN. These results are shown in Fig. <ref>.
§ CONCLUSIONS
Existing literature has been ambivalent on the role of regularization in mitigating poisoning attacks. This problem has been insufficiently studied as existing works assume that regularization hyperparameters are constant and chosen “a priori” regardless of the number of poisoning points or their effects. We have shown that the value of the hyperparameters depends on the amount of poisoning and that a constant value cannot be chosen a priori: when the value is too low, it provides insufficient robustness; when the value is too high, it damages performance. We have shown that when the value of the hyperparameters is learned as a function of the poisoning incurred, regularization can significantly mitigate the effect of indiscriminate poisoning attacks, whilst at the same time not damaging performance. This, however, requires the use of a small trusted validation set.
To study the mitigating effect of regularization and to choose the hyperparameters, we have introduced a novel formulation where the worst-case poisoning attack strategy is cast as a multiobjective bilevel optimization problem. This formulation allows us to learn the most appropriate values for the model's hyperparameters and to compute the poisoning points simultaneously. Solving this multiobjective bilevel optimization problem is challenging. However, we have shown how it can be solved with gradient-based techniques by extending previous RMD-based approaches.
With this formulation, we have analyzed the effect of indiscriminate poisoning attacks against LR and DNN classifiers when using both L_2 and L_1 regularization. Our results confirm that the use of regularization, combined with the presence of the small trusted set to learn the hyperparameters, significantly helps to reduce the error under poisoning attacks. When the regularization hyperparameter is learned appropriately, the algorithm is more robust and, at the same time, the performance of the model is not affected when there is no attack. The trusted validation set required is quite small and task dependent; a larger trusted set is not necessarily advantageous.
Although L_2 regularization typically provides more stability compared to L_1, our empirical results show that both types of regularization are useful to reduce the effect of poisoning attacks. Additionally, our results show that the use of regularization plays a more important role in more complex models, such as DNNs. Our empirical evaluation also shows that indiscriminate attacks have a more pronounced effect in the later layers of the network, as the value of the regularization hyperparameters learned for those layers increases significantly (with respect to those learned when there is no attack) compared to the ones learned for the first layers. However, for a large fraction of poisoning points, the effect of the attack is spread across all the different layers.
In our future work, we plan to investigate these aspects in targeted poisoning attacks and ways to combine and contrast the mitigating effect obtained from regularization with that of other defenses against poisoning attacks, e.g. data sanitization.
§ ACKNOWLEDGMENT
We gratefully acknowledge funding for this work from the Defence Science and Technology Laboratory (Dstl), under the project ERASE - Evaluating the Robustness of Machine Learning Algorithms in Adversarial Settings.
IEEEtran
§ EXPERIMENTAL SETTINGS FOR THE SYNTHETIC EXAMPLE
For the synthetic example in Fig. <ref>, we sample the attacker's data from two bivariate Gaussian distributions, 𝒩(μ_0, Σ_0) and 𝒩(μ_1, Σ_1) with parameters:
μ_0 = [ -3.0; 0.0 ], Σ_0 = [ 2.5 0.0; 0.0 1.5 ],
μ_1 = [ 3.0; 0.0 ], Σ_1 = [ 2.5 0.0; 0.0 1.5 ].
The attacker uses 32 points (16 per class) for training and 64 (32 per class) for validation, and one poisoning point cloned from the validation set (in the example of the paper, cloned from the set labeled as blue), whose label is flipped. This poisoning point is concatenated into the training set and its features are optimized with RMD. In order to poison the LR classifier, we use α=0.4 and T_𝒟_p=50; Φ(𝒟_p)∈[-9.5,9.5]^2; η=0.2, T=100; and when testing the attack, η_tr=0.2, batch size =32 (full batch), and number of epochs = 100. When we apply regularization, we fix λ=log(20)≈3.
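For reproducibility, the data-generation step can be sketched as follows (the means, covariances and set sizes are those given above; the seed and variable names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, mu1 = np.array([-3.0, 0.0]), np.array([3.0, 0.0])
cov = np.array([[2.5, 0.0], [0.0, 1.5]])

def sample(n_per_class):
    """Draw a balanced two-class sample from the two bivariate Gaussians."""
    X = np.vstack([rng.multivariate_normal(mu0, cov, n_per_class),
                   rng.multivariate_normal(mu1, cov, n_per_class)])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

X_tr, y_tr = sample(16)    # 32 training points (16 per class)
X_val, y_val = sample(32)  # 64 validation points (32 per class)
```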
To plot the colormap in Fig. <ref>(right),
the values of λ explored for each possible poisoning point are in the range [-8, 6]. Then, the optimal value of λ is chosen such that it minimizes the error of the model, trained on each combination of the poisoning point (concatenated into the training set) and λ in the grid, and evaluated on the validation set.
§ ADDITIONAL RESULTS
Here we show additional results that complement and are coherent with the ones discussed in Sect. <ref>.
In Fig. <ref> we show the value of λ learned and the norm of the model's parameters divided by the number of parameters, || w||^2_2/d, as a function of the fraction of poisoning points injected, for LR using L_2 and L_1 regularization on FMNIST and CIFAR-10 (see also Fig. <ref>). We observe that the regularization hyperparameter increases and then saturates as we increase the fraction of poisoning points. Comparing L_2 and L_1, we observe that both regularization techniques provide similar mitigation effects against the attack.
In Fig. <ref> we show the sensitivity analysis of the size of the validation set for LR (see also Fig. <ref>). When λ is learned using L_1 regularization, for MNIST and FMNIST the test error decreases when the validation set is smaller, whereas for CIFAR-10, the opposite occurs. This shows that having a larger validation set is not always advantageous. Our results show that, with this interplay between the learner and the attacker, the net benefit for the learner depends on the specific classification task, the size of the validation set and the attack strength. However, it is also important to note that, across all experiments, there is a clear benefit for using regularization to mitigate the impact of the attack in all cases and, especially, for strong attacks.
The results for Kuncheva's consistency index for LR using L_1 regularization on MNIST, FMNIST and CIFAR-10 are shown in Fig. <ref> (which can be compared with Fig. <ref>).
We observe that, in all cases, the consistency index decreases with the ratio of poisoning. This means that, to succeed, the attack naturally modifies the importance of the features of the training set, so that the poisoned model pays more attention to less relevant features. It is also clear that if the model is not regularized, the features selected are less consistent, and regularization helps to increase the feature stability under poisoning. For λ_RMD, it is generally bounded between the cases of no regularization and large value of λ, showing that the algorithm sacrifices some feature stability to decrease the test error. Compared to L_1, L_2 regularization provides greater feature stability when using a large regularization hyperparameter.
Fig. <ref> shows the test error for the optimal attack against the DNNs using L_1 regularization (see also Fig. <ref>). These results are consistent with those obtained for the case of L_2 regularization and LR. When there is no regularization, the algorithm is vulnerable to the poisoning attack and its test error increases significantly. For a large value of λ, the algorithm's performance remains quite stable, but the clean error is higher. For λ_RMD the test error increases only moderately, and the results when using a single hyperparameter or a different hyperparameter at each layer are very similar. From Fig. <ref> we can see that when there is no attack, the test error for λ_RMD is smaller than in the other two cases. Although over-regularizing may be appealing to make the algorithm more robust to poisoning, the performance in the absence of attacks may be significantly worse. Learning λ evidences this trade-off. On the other hand, it is evident that the mitigating effect of regularization is more prominent in the case of DNNs. As the capacity of the DNN (compared to LR) is higher, the attackers can have more flexibility to manipulate the decision boundary. Hence, having regularization in place, in combination with the trusted validation set, is even more important in the case of the DNNs.
Fig. <ref> shows the value of λ when using a different regularization term at each layer, for L_1 regularization (refer also to Fig. <ref>). We observe that the λ learned for the second hidden and the output layers increases faster than the one for the first layer and that, for FMNIST and CIFAR-10, the value for the first hidden layer also starts to grow faster from 20% of poisoning onwards. This suggests that the later layers can be more vulnerable to the attacks. These poisoning attacks try to produce more changes in those layers and, at the same time, the network tries to resist those changes by increasing the value of the corresponding regularization hyperparameters. On the other hand, when the attacks are very strong, their impact appears more uniform across all layers in the DNN, based on the values of λ learned for each layer.
In Fig. <ref> we can observe that, as in the case of LR, the value of the regularization hyperparameters of the DNNs is also related to the norm of the weights divided by the number of parameters for each layer in the DNN.
Finally, for the sake of completeness, Fig. <ref> shows the value of λ learned and the norm of the model's parameters divided by the number of parameters, || w||^2_2/d, as a function of the fraction of poisoning points injected, for the DNNs when using a single regularization term for L_2 and L_1 regularization. These results are coherent with the ones for LR (Fig. <ref> and Fig. <ref>). We observe that the regularization hyperparameter increases and then saturates as we increase the fraction of poisoning points. However, using a different regularization term at each layer gives more insight into which layers the attack focuses on.
|
http://arxiv.org/abs/2306.02103v1
|
20230603125315
|
Thomas-Fermi theory of out-of-plane charge screening in graphene
|
[
"Vitaly Moroz",
"Cyrill B. Muratov"
] |
math.AP
|
[
"math.AP",
"cond-mat.mes-hall",
"math-ph",
"math.MP"
] |
Thomas-Fermi theory of out-of-plane charge screening in graphene
Swansea University, Department of
Mathematics, Fabian Way, Swansea SA1 8EN, Wales, UK
[email protected]
Dipartimento di Matematica, Università di Pisa, Largo
B. Pontecorvo, 5, 56127 Pisa, Italy
Department of Mathematical Sciences, New Jersey Institute
of Technology, University Heights, Newark, NJ 07102, USA
[email protected]
This paper provides a variational treatment of the effect of
external charges on the free charges in an infinite free-standing
graphene sheet within the Thomas-Fermi theory. We establish
existence, uniqueness and regularity of the energy minimizers
corresponding to the free charge densities that screen the effect of
an external electrostatic potential at the neutrality point. For the
potential due to one or several off-layer point charges, we also
prove positivity and a precise universal asymptotic decay rate for
the screening charge density, as well as an exact charge
cancellation by the graphene sheet. We also treat a simpler case of
the non-zero background charge density and establish similar results
in that case.
§ INTRODUCTION
Graphene is a classical example of a two-dimensional material whose
electronic properties give rise to a number of unusual characteristics
that make it a prime target for both fundamental research and multiple
applications
<cit.>. A key
feature of the electrons in single layer graphene sheets is the
presence of the Dirac cone in their dispersion relation that makes the
elementary excitations (electrons and holes) of the ground state
behave as massless relativistic fermions
<cit.>. This presents challenges in the
theoretical treatment of those excitations, as their kinetic energy,
which is on the order of E_K ∼ħ v_F / r, where
v_F ≃ 1 × 10^8 cm/s is the Fermi velocity and r is the
radius of the wave packet containing a single charge, remains
comparable to the Coulombic interaction energy
E_C ∼ e^2 / (ϵ_d r) of two charges at distance r
independently of the scale r (here e is the elementary charge, in
the CGS units, ϵ_d ∼ 1 is the effective dielectric
constant, and it is noted that e^2 / (ħ v_F) ≃ 2.2). As a
result, many-body effects need to be taken into consideration in the
studies of electronic properties of graphene. In particular, these
effects are significant in determining the way the massless
ultrarelativistic fermions screen the electric field of supercritical
charged impurities <cit.>.
The problem of characterizing the charged impurity screening by the
graphene sheet has been studied, using a number of theoretical
approaches <cit.> (this list is not
intended to be exhaustive). Note that a similar question arises in the
studies of the graphene based devices in the proximity of a conducting
electrode, or when a scanning tunneling microscope tip approaches a
graphene sheet <cit.>. In particular, in this situation the
electric charge the layer is exposed to may exceed the elementary
charge e by many orders of magnitude. Under such conditions, a fully
nonlinear treatment of the screening problem is, therefore, necessary.
In conventional quantum systems, a good starting point for the
analysis of electric field screening is the Thomas-Fermi theory, as it
yields an asymptotically exact response of a system of interacting
electrons to a large external charge <cit.>. Such a theory for
massless relativistic fermions was developed by Di Vincenzo and Mele
in the context of charged impurity screening in graphite intercalated
compounds <cit.>. They conducted numerical studies of the
resulting equations for the screening charge density and noted a
highly non-local character of the response. More recently, Katsnelson
carried out a formal analysis of the asymptotic behavior of the
screening charge density away from a single impurity ion in a graphene
monolayer <cit.>. His results were further clarified and extended by Fogler, Novikov and Shklovskii, who also confirmed the
predictions about the decay of the screening charge density by
numerical simulations <cit.>. The nonlocal character of the
response and its dependence on the level of doping have been confirmed
by the direct experimental observations of the screening charge
density <cit.>. Note that these observations are
at variance with the prediction of a purely local dielectric response
at the Dirac point from the linear response theory for massless
relativistic fermions within the random phase approximation
<cit.>.
This paper is a mathematical counterpart of the studies in
<cit.> that provides a suitable
variational framework for the study of the charge screening problem
described by the Thomas-Fermi theory of graphene (for a closely
related Thomas-Fermi-von Weizsäcker model and some further
discussion, see <cit.>). The setting turns out to be rather
delicate, as the presence of a bare Coulombic potential from an
impurity leads to heavy tails in the potential term that are precisely
balanced with the Coulombic interaction term. Within our setting, we
prove existence, uniqueness, radial symmetry and monotonicity of the
minimizer of the graphene Thomas-Fermi energy for an off-layer
external point charge in a free-standing graphene sheet. More
generally, we provide existence, uniqueness, the Euler-Lagrange
equation that is understood in a suitable sense, and regularity of the
minimizer for a general class of external potentials arising as
Coulombic potentials of appropriate collections of external
charges. Back to a single off-layer charge in a free-standing graphene
sheet, we establish the precise asymptotic decay of the screening
charge density at infinity, which agrees with the one obtained by
Katsnelson using formal arguments.
The decay of the screening charge density turns out to be a borderline
power law decay modulated by a logarithmic factor that makes it barely
integrable. The latter presents a significant technical difficulty in
the handling of the appropriate barrier functions that control the
decay of the solution at infinity. In particular, we prove that the
decay indeed turns out to be universal, independently of the strength
of the external charge. Finally, we present the corresponding results
for the biased layer. The treatment of the latter is significantly
simpler due to the expected fast power law decay of the screening
charge density. As a by-product of our analysis, we also demonstrate
existence of sign-changing minimizers for the closely related
Thomas-Fermi-von Weizsäcker model studied in <cit.> in the
regime when the latter is well approximated by the Thomas-Fermi model.
Our paper is organized as follows. In section <ref>, we
introduce the Thomas-Fermi energy functional for a free-standing
graphene sheet and then discuss several issues associated with its
definition in the context of the associated variational problem for
charge screening that require a modified formulation compared to the
classical Thomas-Fermi theory. Within these modifications, we then
state the main results of our paper in Theorems <ref> and
<ref> and illustrate their conclusions with several numerical
examples. In section <ref>, we give the precise
variational setting for the modified Thomas-Fermi energy of the
free-standing graphene sheet and establish general existence and
regularity results for the minimizers. Then, in section <ref> we
focus on the case of the potential from a single off-layer external
point charge. In particular, in <ref> we reformulate the
Euler-Lagrange equation for the minimizers in terms of a convenient
auxiliary variable and derive several properties of the solutions from a comparison principle that we prove for this equation, and in section <ref> we establish further
implications of the comparison principle on the positivity of
solutions. This leads us, in section <ref>, to establish
existence of sign-changing solutions to the closely related
Thomas-Fermi-von Weizsäcker model considered by us in
<cit.>. The key computation of the paper is carried out in section
<ref>, where a logarithmic barrier is established, which is then
used in section <ref> to prove the asymptotic decay rate of
the solution at infinity for the external potential of a point
charge. Furthermore, in section <ref> we show the complete
charge screening and in section <ref> we establish the
universality of the decay. Finally, in section <ref> we
outline the analogous treatment of the case of a doped graphene sheet
characterized by the presence of a uniform background charge, where
the main results are contained in Theorems <ref> and
<ref>.
Notations
Throughout the paper, for f(t), g(t) ≥ 0 we use the asymptotic
notations as t → +∞:
* f(t)≲ g(t) if there exists C>0 independent of t
such that f(t) ≤ C g(t) for all t sufficiently large;
* f(t)∼ g(t) if f(t)≲ g(t) and g(t)≲ f(t);
* f(t)≃ g(t) if f(t) ∼ g(t) and
lim_t→ +∞f(t)/g(t)=1.
As usual, B_R(x):={y∈^N:|y-x|<R}, B_R:=B_R(0), and C,c,c_1
etc., denote generic positive constants. For an open set
Ω⊆^2, by C^α(Ω) we denote the space of
all locally Hölder continuous functions of order
α∈(0,1] on Ω, and C^k,α(Ω) denotes
higher order Hölder spaces for k=1,2,…. By
C^∞_c(Ω) we denote the space of all compactly supported infinitely differentiable functions with support in Ω, while
𝒟'(Ω) is the space of distributions on Ω,
i.e. the dual space of C^∞_c(Ω). For a function
f∈ L^1_(Ω), unless specified otherwise, the inequality
f≥ 0 in Ω is always understood in the distributional sense,
i.e., that ∫_^2f(x)φ(x) dx ≥ 0 for all
0≤φ∈ C^∞_c(Ω). We similarly define f ≤
0. When we want to emphasize a pointwise (in)equality, we
always write explicitly f(x).
Acknowledgements The work of CBM was supported, in
part, by NSF via grants DMS-1614948 and DMS-1908709.
§ MODEL AND MAIN RESULTS
Thomas-Fermi (TF) energy for massless relativistic fermions in a
free-standing graphene layer in the presence of the external
electrostatic potential V takes the following form, after a suitable
non-dimensionalization <cit.>:
^TF_0(ρ)=2/3∫_^2|ρ|^3/2^2
x-∫_^2ρ(x) V(x) ^2
x+1/4 π∬_^2×^2ρ(x)
ρ(y)/|x-y|^2 x^2 y.
Here ρ:^2→ is the charge density of charge carrying
fermionic quasiparticles (electrons and holes). The density ρ is
a sign–changing function with ρ>0 corresponding to electrons and
ρ<0 to holes. The first, Thomas–Fermi term, is an
approximation of the kinetic energy of the uniform gas of
noninteracting particles. The exponent 3/2 can be deduced from
scaling considerations.
The last, nonlocal Coulomb term
(ρ,ρ):=1/4 π∬_^2×^2ρ(x)
ρ(y)/|x-y|^2 x^2 y,
is the like-charged inter-particle repulsion energy which is inherited
from ^3. The middle term is the potential energy due to the
interaction with the external potential V:^2→. In the
case of a single external point charge of magnitude Z∈
located in ^3 at distance d≥ 0 away from the graphene layer
the external potential is
V_Z,d(x):=Z/2 π√(d^2+|x|^2),
but more general potentials V(x) could be considered, e.g. involving
multi-point charge configurations. Importantly, for an unscreened
system of uncompensated external charges one has V(x)∼ 1/|x|
as |x|→∞, since the quasiparticle–charge interaction is
according to Coulomb's law in ^3. For a more detailed
discussion of various terms in the energy and the
non-dimensionalization, see <cit.>.
Our principal goal is to prove the existence of global minimizers of
^TF_0 and establish their fundamental properties, such as
regularity and decay estimates. At first glance the
Thomas–Fermi energy ^TF_0 looks similar to its classical
three-dimensional (3D) atomic counterpart
<cit.><cit.><cit.>.
However, there are fundamental differences within the variational
framework for graphene modelling:
* Unlike in the classical TF-theory for atoms and molecules where
ρ≥ 0, the density ρ in graphene is a sign–changing
function. As a consequence, (|ρ|,|ρ|)≥(ρ,ρ)
which means that oscillating profiles could be energetically more
favorable.
* All three terms in ^TF_0 with V=V_Z,0 scale at
the same rate under the charge–preserving rescaling
ρ_λ(x)=λ^2 ρ(λ x). Hence
^TF_0 (ρ_λ)=cλ when d=0 for some c ∈ℝ (a short computation verifying this is given after the list).
Physically, this is a manifestation of the non-perturbative role of
the Coulomb interaction in graphene. Mathematically, this reveals
the critical tuning of the three different terms in the energy.
* The nonlocal term (ρ,ρ) is formally identical to the
usual Coulomb term in ^3. However, the integral kernel
|x-y|^-1 in ^2 is associated with the Green function of the
fractional Laplacian operator (-Δ)^1/2. As a consequence,
the Euler–Lagrange equation for ^TF_0 transforms into
a fractional semilinear partial differential equation (PDE)
involving (-Δ)^1/2, instead of the usual Laplace operator
-Δ of the classical 3D TF-theory.
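The scaling claim in the second item above can be checked directly; substituting y=λ x (and u=λ x, v=λ y in the Coulomb term), with V=V_Z,0, each term picks up exactly one factor of λ:

```latex
\begin{aligned}
\int_{\mathbb{R}^2}|\rho_\lambda|^{3/2}\,d^2x
  &= \lambda^{3}\int_{\mathbb{R}^2}|\rho(\lambda x)|^{3/2}\,d^2x
   = \lambda\int_{\mathbb{R}^2}|\rho(y)|^{3/2}\,d^2y,\\
\int_{\mathbb{R}^2}\rho_\lambda(x)\,\frac{Z}{2\pi|x|}\,d^2x
  &= \int_{\mathbb{R}^2}\rho(y)\,\frac{\lambda Z}{2\pi|y|}\,d^2y
   = \lambda\int_{\mathbb{R}^2}\rho(y)\,\frac{Z}{2\pi|y|}\,d^2y,\\
\frac{1}{4\pi}\iint_{\mathbb{R}^2\times\mathbb{R}^2}
  \frac{\rho_\lambda(x)\,\rho_\lambda(y)}{|x-y|}\,d^2x\,d^2y
  &= \lambda\,\frac{1}{4\pi}\iint_{\mathbb{R}^2\times\mathbb{R}^2}
  \frac{\rho(u)\,\rho(v)}{|u-v|}\,d^2u\,d^2v,
\end{aligned}
```

so that ^TF_0(ρ_λ)=λ ^TF_0(ρ) when d=0.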
Note that the total number of electrons and holes in the graphene
sheet is neither fixed nor bounded a priori. As a consequence, unlike
in the atomic and molecular 3D models, it is unclear if the minimizers
of ^TF_0 should have a finite total charge, i.e. if they
are L^1–functions. This implies that regular distributions should
be included as admissible densities. Indeed, even if the density
ρ is a sign–changing continuous function, it is not a priori
clear if ρ can be interpreted as a charge density in the sense of
potential theory (i.e., whether d μ = ρ dx can be associated
to a signed measure μ on ^2, making the Coulomb energy
(ρ,ρ) meaningful in the sense of the Lebesgue integration,
see <cit.>*Example 4.1 and further references therein. This
makes the analysis of the minimizers of ^TF_0
mathematically challenging.
We avoid these issues by identifying the Coulomb term (ρ,ρ)
with one-half of the square of the H^-1/2(^2) norm of
ρ. The energy we consider is then
_0(ρ) := 2/3∫_^2|ρ|^3/2^2 x -
⟨ρ, V ⟩ + 1/2ρ_^2,
where ⟨· , ·⟩ is a duality pairing
between the function V ∈ L^1_loc(^2) and the linear
functional generated by ρ, to be specified shortly. Sometimes
we also write _0^V to emphasize the dependence on V. It is easy
to see that the definition of _0 in (<ref>) agrees with that of
^TF_0 when ρ∈ C^∞_c(^2) and
⟨ρ, V ⟩ = ∫_^2 V ρ^2 x.
The natural domain of definition of _0 is the class
_̋0:= ∩ L^3/2(^2).
Clearly, _̋0 is a Banach space with the norm
·__̋0=·_L^3/2(^2)+·_. Its dual
space _̋0^' can be identified with the Banach space
+L^3(^2).[Recall that
L^p(^2)+L^q(^2)={f∈
L^1_(^2):f=f_1+f_2, f_1∈ L^p(^2), f_2∈ L^q(^2)}
is a Banach space with the norm
f_L^p(^2)+L^q(^2) :=
inf(f_1_L^p(^2)+f_2_L^q(^2)), where the infimum is
taken over all admissible pairs (f_1,f_2). The dual of
L^p(^2)+L^q(^2) is the Banach space
L^p'(^2)∩ L^q'(^2), equipped with the norm
f_L^p'(^2)∩
L^q'(^2):=f_L^p'(^2)+f_L^q'(^2).]
Therefore, one may define ⟨·, ·⟩ as the
duality pairing between _̋0' and _̋0. More precisely, for every
ρ∈_̋0 and every V = V_1 + V_2, where V_1 ∈ and
V_2 ∈ L^3(^2) we may define
⟨ρ, V ⟩ := ⟨ρ, V_1
⟩
+ ∫_^2ρ(x) V_2(x) ^2 x,
where ⟨·, ·⟩ in the right-hand side of
(<ref>) stands for the duality payring between and
. See Section <ref> for further details and precise
definitions.
Our first result establishes the existence of a unique minimizer for
_0.
For every V ∈ + L^3(^2) there exists a unique
minimizer ρ_V ∈_̋0 such that
_0(ρ_V) = inf_ρ∈_̋0_0(ρ). The minimizer
ρ_V satisfies the Euler–Lagrange equation
∫_^2sgn(ρ_V)|ρ_V|^1/2φ^2 x - ⟨φ, V ⟩ + ⟨ρ_V, φ⟩_=0, ∀φ∈_̋0.
Furthermore, if (-Δ)^1/2V≥ 0 then ρ_V≥ 0.
If, e.g., ρ_V∈ L^4/3(^2),
then (<ref>) implies that
sgn(ρ_V(x))|ρ_V(x)|^1/2-V(x)+1/2π∫_^2ρ_V(y)/|x-y|^2 y=0 for a.e. x ∈^2.
However, (<ref>) is not valid for a general V∈_̋0,
since the nonlocal term may not be well–defined as the Lebesgue
integral. Nevertheless, we show that for any V∈ the
Euler–Lagrange equation (<ref>) is equivalent to the
fractional semilinear PDE
(-Δ)^1/2 u + |u|u = (-Δ)^1/2 V in ,
and
u_V:=sgn(ρ_V)|ρ_V|^1/2∈
is the unique solution of (<ref>). We further show that
(<ref>) satisfies suitable weak maximum and comparison
principles. This allows us to employ barrier techniques to study the
decay of the solution u_V. With the aid of explicit log–barrier
functions constructed in Section <ref>, we establish the main
result of this work.
Let Z>0, d>0 and let V_Z,d be defined in (<ref>).
Then the minimizer ρ_V_Z,d∈_̋0 is Hölder continuous,
radially symmetric non-increasing and satisfies
0<ρ_V_Z,d(x)≤ V_Z,d(x) for all x∈^2
and
ρ_V_Z,d(x) ≃1/|x|^2log^2 |x| as
|x|→∞.
In particular, ρ_V_Z,d∈ L^1(^2) and
ρ_V_Z,d_L^1(^2)=Z.
Estimate (<ref>) remains valid for a more general class of
external potentials V with sufficiently fast decay at infinity,
see (<ref>). The significance of the log–decay becomes
clear if we note that p=2 plays a role of the Serrin's critical
exponent <cit.>*(1.7) for the equation
(-Δ)^1/2 u + |u|^p-1u = f in ,
with p>1 and (for simplicity) nonnegative f∈ C^∞_c(^2).
If p>2 the linear part in (<ref>) dominates and solutions
must decay as the Green function of (-Δ)^1/2,
i.e. |x|^-1. For p<2 the nonlinear part in (<ref>)
dominates and the solutions should have “nonlinear” decay rate
|x|^-1/(p-1). In the Serrin's critical regime p=2 the linear
and nonlinear parts balance each other, which leads to the
log–correction in the decay asymptotics, correctly captured by
Katsnelson <cit.>. Such log-correction is well-known
for the local Laplacian -Δ <cit.>*Theorem 3.1. We
are not aware of similar results in the fractional Laplacian case.
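The role of p=2 can also be seen from a heuristic power count (a sketch only, not a proof): if u(x)≈ A|x|^-a for large |x| and one treats (-Δ)^1/2 as lowering the decay by one power, as for the Riesz kernel, then

```latex
% heuristic decay rates for a power-law profile u ~ A|x|^{-a}
(-\Delta)^{1/2} u \sim |x|^{-(a+1)}, \qquad |u|^{p-1}u \sim |x|^{-ap},
```

and matching the exponents, ap=a+1, gives a=1/(p-1), which coincides with the Green-function rate a=1 precisely when p=2; this degeneracy is what produces the logarithmic correction in (<ref>).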
If d=0 then
V_Z,0(|x|)=Z / (2 π |x|) ∉(+L^3(^2)) and
_0^V_Z,0 is unbounded below, for any Z≠ 0. In fact, by
scaling, V_Z,d(x)=d^-1V_Z,1(x/d) and
ρ_V_Z,d(x)=d^-2ρ_V_Z,1(x/d). Then
_0^V_Z,d(ρ_V_Z,d) =
d^-1_0^V_Z,1(ρ_V_Z,1)→-∞,
as d→ 0. Note also that by scaling,
ρ_V_Z,d_L^1(^2)=ρ_V_Z,1_L^1(^2).
Observe that the potential V_Z,d is a rescaling of the critical
Sobolev minimizer in and
(-Δ)^1/2V_1,1=σ V_1,1^3 for an explicit σ>0
(see e.g. calculations in <cit.>*p.258 and (6.5)). Then by
scaling, u_V_Z,d(x)=d^-1u_V_1,1(x/d) solves
(-Δ)^1/2 u_V_Z,d + u_V_Z,d^2 = σ
Zd^-1V_1,1^3 in 𝒟'(^2).
Note also that (-Δ)^1/2 V_Z,0= Z δ_0 and in the case
d=0 equation (<ref>) formally becomes
(-Δ)^1/2 u + u^2 = Z δ_0 in
𝒟'(^2).
Such equation has no positive distributional solutions, see
<cit.>*Theorem 4.2.
§ VARIATIONAL SETTING AT THE NEUTRALITY POINT
§.§ Space H^1/2(^2)
Recall that the homogeneous Sobolev space H^1/2(^2)
can be defined as the completion of C^∞_c(^2) with respect
to the Gagliardo's norm
u_^1/2(^2)^2 :=
1/4π∬_^2×^2|u(x)-u(y)|^2/|x-y|^3^2
x ^2 y.
By the fractional Sobolev inequality <cit.>, <cit.>,
u_H^1/2(^2)^2≥√(π) u_L^4(^2)^2, ∀ u∈ C^∞_c(^2).
In particular, the space H^1/2(^2) is a well-defined
space of functions and
H^1/2(^2)⊂ L^4(^2).
The space H^1/2(^2) is also a Hilbert space, with
the scalar product associated to (<ref>) given by
⟨ u,v⟩_^1/2(^2) :=
1/4π∬_^2×^2(u(x)-u(y))(v(x)-v(y))/|x-y|^3^2
x ^2 y.
Recall (cf. <cit.>) that if u∈H^1/2(^2)
then u^+, u^-∈H^1/2(^2) and
u^±_H^1/2(^2)≤u_H^1/2(^2).
Moreover, ⟨ u^+,u^-⟩_^1/2(^2)≤ 0.
The dual space to H^1/2(^2) is denoted
H^-1/2(^2). According to the Riesz representation
theorem, for every F∈H^-1/2(^2) there exists a
uniquely defined potential U_F∈H^1/2(^2) such
that
⟨ U_F,φ⟩_^1/2(^2)=⟨
F,φ⟩ ∀φ∈H^1/2(^2),
where ⟨ F,·⟩:H^1/2(^2)→ denotes
the bounded linear functional generated by F,
⟨· ,·⟩_ is the inner product in
, and ⟨· ,·⟩_ will be similarly
defined as the inner product in .
Moreover,
U_F_^1/2(^2)=F_H^-1/2(^2),
so the duality (<ref>) is an isometry.
The potential U_F∈H^1/2(^2) satisfying
(<ref>) is interpreted as the weak solution of
the linear equation
(-Δ)^1/2U_F= F in ^2,
and we recall that for functions u∈ C^∞_c(^2), the fractional
Laplacian (-Δ)^1/2 can be defined as
(-Δ)^1/2u(x)= 1/4π∫_^22 u(x) - u(x+y)
- u(x-y)/|y|^3^2 y (x∈^2),
cf. <cit.>.
§.§ Regular distributions in and potentials
Recall that ρ∈∩ L^1_(^2) means that ρ is a
regular distribution in 𝒟'(^2), i.e.
⟨ρ, φ⟩ : = ∫_^2ρ(x)φ(x)
^2
x ∀φ∈ C^∞_c(^2),
and ⟨ρ, φ⟩ is bounded by a multiple of
φ_. Then ⟨ρ, ·⟩ is understood
as the unique continuous extension of (<ref>) to .
Caution however is needed as not every regular distribution
ρ∈∩ L^1_(^2) admits an integral representation
(<ref>) on all of . In other words,
ρ∈∩ L^1_(^2) does not necessarily imply that
ρ w∈ L^1(^2) for every w∈. Examples of this type go
back to H. Cartan (cf. <cit.>, <cit.>, or
<cit.>*Remark 5.1 for an example from ∩ C^∞(^2)
and further references). As a consequence, the Coulomb energy term in
^TF may not be defined in the sense of Lebesgue's integration
for all ρ∈_̋0 and should be interpreted in the
distributional sense, i.e., in the definition of
^TF_0 one should replace (ρ, ρ) with
ρ_^2.
Recall however that every nonnegative distribution is a measure <cit.>.
An alternative reinterpretation of (ρ,ρ) can be given in
terms of potentials. Given ρ∈∩ L^1_(^2), let U_ρ∈
be the uniquely defined potential of ρ, defined as in (<ref>)
by the Riesz's representation theorem.
If ρ∈ L^1(^2,(1+|x|)^-1^2 x) then the potential U_ρ
could be identified with the Riesz potential of the function
ρ, so that
U_ρ(x)=1/2π∫_^2ρ(y)/|x-y|^2 y
a.e. in ^2,
(see <cit.>).
Furthermore, according to the Hardy–Littlewood–Sobolev (HLS) inequality (cf. <cit.>),
if ρ∈ L^s(^2) with s∈(1,2) then U_ρ∈ L^t(^2) with 1/t=1/s-1/2, and
U_ρ_L^t(^2)≤ Cρ_L^s(^2).
Even if (<ref>) is valid, ρ U_ρ∉L^1(^2) in
general. However, if φ∈∩ L^4/3(^2) then
φ U_ρ∈ L^1(^2) by the HLS
inequality and
1/2π∬_^2 ×^2ρ(x)φ(y)/|x -
y|^2 x ^2 y=∫_^2U_ρ(x)φ(x)^2 x=
⟨
U_ρ,U_φ⟩_=⟨ρ,φ⟩_.
In particular,
(ρ,ρ)=∫_^2U_ρ(x)ρ(x)^2 x=
U_ρ^2_=ρ^2_,
which means that L^4/3(^2)⊂ and the Coulomb energy is
well–defined on L^4/3(^2) in the sense of Lebesgue's
integration.
§.§ Existence, uniqueness and regularity of the minimizers.
Consider the unconstrained minimization problem
E_0:=inf_ρ∈_̋0_0(ρ).
It is easy to prove the following.
For every V∈ + L^3(^2), the TF–energy _0 admits
a unique minimizer ρ_V ∈_̋0 such that _0(ρ_V)=E_0.
The minimizer ρ_V satisfies the Euler–Lagrange equation
∫_^2sgn(ρ_V)|ρ_V|^1/2φ^2 x - ⟨φ, V ⟩ + ⟨ρ_V, φ⟩_=0 ∀φ∈_̋0.
It is standard to conclude from V∈ that _0 is
bounded below on _̋0, i.e., that E_0>-∞.
Consider a minimizing sequence (ρ_n)⊂_̋0.
Clearly
sup_n ρ_n_L^3/2(^2)≤ C, sup_n ρ_n_≤ C.
Using weak-* compactness of the closed unit ball in , we may
extract a subsequence, still denoted by (ρ_n), such that
ρ_n⇀ ρ_V in L^3/2(^2),
ρ_n∗⇀ F in ,
for some ρ_V∈ L^3/2(^2) and F∈.
By the definition, (<ref>) and (<ref>) mean that
∫_^2ρ_n(x)φ(x) ^2 x→ ∫_^2ρ_V(x)φ(x) ^2 x ∀φ∈
L^3(^2),
⟨ρ_n,φ⟩=∫_^2ρ_n(x)φ(x) ^2
x→ ⟨ F,φ⟩ ∀φ∈.
Therefore, passing to the limit we obtain
∫_^2ρ_V(x)φ(x) ^2 x=⟨
F,φ⟩ ∀φ∈ L^3(^2)∩.
In particular, ρ_V∈ defines a regular distribution in 𝒟'(^2)
and we may identify F=ρ_V. This implies that
_0(ρ_V) ≤lim inf_n→∞_0(ρ_n)=E_0,
which follows from the weak lower semicontinuity of the
·_L^3/2(^2) and ·_ norms, and the
weak continuity of the linear functionals ⟨·, V ⟩
on _̋0.
The uniqueness of the minimizer ρ_V∈_̋0 is a consequence of the strict convexity of the energy _0, which is the sum of the strictly convex kinetic energy, linear external potential energy, and positive definite quadratic Coulomb energy.
The derivation of the Euler–Lagrange equation (<ref>) is standard, we omit the details.
As was already mentioned, if ρ_V∈_̋0∩ L^4/3(^2)
then (<ref>) can be interpreted pointwise as the integral
equation (<ref>). However, in general the
Euler–Lagrange equation (<ref>) for _0 should be
interpreted as
sgn(ρ_V)|ρ_V(x)|^1/2+ U_ρ_V=V in
𝒟'(^2),
where U_ρ_V∈ is the potential of ρ_V defined
via (<ref>). In particular, if
ρ_V≥ 0 then U_ρ_V≥ 0 (see <cit.>) which implies V≥ 0 and
0≤ρ_V≤ V^2 in 𝒟'(^2).
The mapping V↦ρ_V is a bijection between
_̋0^'=+L^3(^2) and _̋0. Indeed, the uniqueness of
the minimizer implies that ρ_V is injective. Further, it is
clear that for any ρ∈_̋0,
V:=U_ρ+sgn(ρ)|ρ(x)|^1/2∈+L^3(^2),
which means that the mapping ρ_V is also surjective. In
particular, this shows that non–regular at infinity distributions in
could occur amongst the minimizers. Simply choose a
regular distribution ρ∈_̋0 such that ρφ∉L^1(^2) for some φ∈ (see e.g. <cit.> for an explicit example) and generate the
corresponding potential V via (<ref>).
While for a generic V∈ + L^3(^2) the information
ρ_V∈_̋0 is optimal, under additional restrictions on the
potential V the regularity of the minimizer can be improved up to
the regularity of V.
Assume that V∈∩ C^α(^2) for some
α∈(0,1]. Then the minimizer ρ_V∈_̋0 additionally
satisfies ρ_V∈_̋0∩ C^α(^2), and
ρ_V(x) → 0 as |x| →∞. Furthermore, the potential U_ρ
could be identified with the Riesz potential of ρ as in (<ref>) and
U_ρ_V∈ C^1/3(^2).
According to (<ref>), the minimizer ρ_V∈ℋ_0 satisfies
(ρ_V)|ρ_V|^1/2=V-U_ρ_V in 𝒟'(^2).
Since
ρ_V∈ℋ_0⊂ L^3/2(^2),
by the HLS-inequality (<ref>) with s=3/2 we have
U_ρ_V∈ L^6(^2),
and in particular, the potential U_ρ
could be identified with the Riesz potential of ρ as in (<ref>).
Also, by the Sobolev inequality (<ref>),
V∈∩ C^α(^2)⊂ L^4(^2)∩
C^α(^2).
This implies
V^2∈ L^2(^2)∩ C^α(^2).
In particular, both V and V^2 are bounded and decay to zero as |x|→∞.
Note also that U_ρ_V^2∈ L^3(^2).
Hence,
|ρ_V|=(V-U_ρ_V)^2=V^2-2V U_ρ_V+U_ρ_V^2∈ L^3/2(^2)∩ L^3(^2).
Furthermore, by Hölder estimates on Riesz potentials, we conclude that U_ρ_V∈ C^1/3(^2), see <cit.>*Lemma 4.1 or <cit.>*Theorem 2. Then
|ρ_V|=(V-U_ρ_V)^2∈ C^β(^2),
where β=min{α,1/3} and ρ_V(x) → 0 as |x| →∞.
If α≤ 1/3 we are done. If α>1/3 then (<ref>) implies U_ρ_V∈ C^1,1/3(^2), see <cit.>*Proposition 2.8. Therefore, ρ_V has at least the same Hölder regularity as V.
Similarly, one can establish higher Hölder regularity of ρ_V assuming higher regularity of V.
For instance, using <cit.>*Proposition 2.8 we can conclude that if V∈ C^1,α(^2) then
ρ_V∈ C^1,β(^2), where β=min{α,1/3}. However, in general the Hölder regularity of ρ_V can not be improved beyond the Hölder regularity of V.
§ POSITIVITY AND DECAY
§.§ Half-Laplacian representation, positivity and comparison
Let ρ_V∈_̋0 be the minimizer of _0. Introduce the substitution
u_V:=(ρ_V)|ρ_V|^1/2.
Then ρ_V=|u_V|u_V and (<ref>) transforms into
∫_^2 u_V(x)φ(x)^2 x - ⟨φ, V
⟩ + ⟨
U_|u_V|u_V,φ⟩_=0 ∀φ∈_̋0.
Let V∈ and u_V be defined by (<ref>). Then
u_V∈ and is the unique solution of the problem
(-Δ)^1/2 u + |u|u = (-Δ)^1/2 V in .
Let ψ∈ C^∞_c(^2). Then
(-Δ)^1/2ψ∈ C^∞(^2)∩ L^1(^2)⊂ L^4/3∩ L^1(^2)⊂_̋0
<cit.>*Section 2.1. Test (<ref>) with
φ = (-Δ)^1/2ψ and take into
account that in view of (<ref>),
⟨|u_V|u_V,φ⟩_= ⟨ U_|u_V|u_V,(-Δ)^1/2ψ⟩_
= ∫_^2|u_V|u_V(x)ψ(x)^2 x ∀ψ∈ C^∞_c(^2).
Then (<ref>) yields
∫_^2 u_V(-Δ)^1/2ψ^2 x -
⟨ (-Δ)^1/2ψ, V ⟩ + ∫_^2|u_V|u_V(x)ψ(x)^2 x=0 ∀ψ∈ C^∞_c(^2),
or equivalently,
(-Δ)^1/2 u_V - (-Δ)^1/2 V + |u_V|u_V = 0 in 𝒟'(^2),
where (-Δ)^1/2 V∈, |u_V|u_V=ρ_V∈. Hence u_V∈, and (<ref>)
also holds weakly in by density.
The uniqueness for (<ref>) follows from the Comparison Principle of Lemma <ref> below.
Let V∈. Assume that (-Δ)^1/2V≥ 0 in ^2.
Then u_V≥ 0 in ^2. If, in addition V≠ 0 then u_V≠ 0.
Decompose u_V=u_V^+-u_V^- and recall that
u_V^+,u_V^-∈ and ⟨ u_V^+,u_V^-⟩_≤
0. Testing (<ref>) by u_V^-≥ 0 and taking into
account that u_V|u_V|u_V^-≤ 0, we obtain
0≤⟨ V,u_V^-⟩_=⟨
u_V,u_V^-⟩_+∫_^2u_V|u_V| u_V^-^2 x
≤-⟨ u_V^-,u_V^-⟩_≤ 0.
We conclude that u_V^-= 0.
Further, if V≠ 0 then u=0 is not a solution of (<ref>) and hence u_V≠ 0.
Let V∈. Assume that u,v∈H^1/2(^2)∩ L^3(^2)
are a super and a subsolution to (<ref>) in a smooth domain
Ω⊆^2, respectively, i.e.,
(-Δ)^1/2 u + u |u| ≥ (-Δ)^1/2 V in 𝒟'(Ω),
(-Δ)^1/2 v + v |v| ≤ (-Δ)^1/2 V in 𝒟'(Ω).
If ^2∖Ω≠∅, we also assume u≥ v in ^2∖Ω̅. Then u≥ v in
^2.
Subtracting one inequality from another, we obtain
(-Δ)^1/2 (v-u) + v|v| - u|u| ≤ 0 in𝒟'(Ω).
Let H^1/2_0(Ω) denote the completion of C^∞_c(Ω) with respect to the Gagliardo norm ·_^1/2(^2)^2, defined in (<ref>). With this definition, H^1/2_0(Ω)
is automatically a closed subspace of H^1/2_0(^2).
By density, (<ref>) is also valid in H^1/2_0(Ω), in the sense that
⟨ v-u,φ⟩_+ ∫_^2(v|v| - u|u|)φ^2
x≤ 0 ∀ 0≤φ∈ H^1/2_0(Ω).
Note that (v-u)^+∈. If
^2∖Ω≠∅ then u≥ v in
^2∖Ω̅ and hence (v-u)^+=0 in
^2∖Ω̅. This implies (v-u)^+∈ H^1/2_0(Ω),
see e.g. <cit.>*Theorem 10.1.1. Testing (<ref>)
by (v-u)^+, taking into account
⟨(v-u)^-,(v-u)^+⟩_≤ 0 and monotone increase of the
nonlinearity, we obtain
0≥⟨ v-u,(v-u)^+⟩_+∫_^2(v|v| - u|u|)(v-u)^+^2 x
≥⟨(v-u)^+,(v-u)^+⟩_=(v-u)^+_H^1/2(^2)^2.
We conclude that (v-u)^+= 0.
The Comparison Principle immediately implies that (<ref>) can
have at most one solution in . Hence the solution u_V
constructed from the minimizer ρ_V via (<ref>) is the
unique solution of (<ref>). A consequence of the uniqueness is
the following.
Assume that V∈ and (-Δ)^1/2 V≥ 0 in ^2. If (-Δ)^1/2 V∈ L^4/3(^2) is a radially symmetric non-increasing function then u_V is also radially symmetric and
non-increasing.
Note that u_V is the unique global minimizer of the convex energy
J_V(u)=1/2u_^2+1/3u_L^3(^2)^3-⟨ u,V⟩_
on ∩ L^3(^2).
Since (-Δ)^1/2 V∈ L^4/3(^2),
⟨ u_V, V⟩_=∫_^2u_V (-Δ)^1/2 V ^2 x,
where the latter integral is finite by the HLS inequality.
Then the symmetric–decreasing rearrangement u_V^* is also a minimizer of
J_V, by <cit.>*Theorem 3.4 and Lemma 7.17.
Hence the assertion follows from the uniqueness of the minimizer.
Another straightforward, but important consequence of the Comparison
Principle is the following upper bound on u_V.
Assume that V∈ and V≥ 0. Then
u_V≤ V in ^2.
We simply note that V is a supersolution to
(<ref>) in ^2, i.e.
(-Δ)^1/2 V + V^2 ≥ (-Δ)^1/2 V in 𝒟'(^2).
Hence, (<ref>) follows from the Comparison Principle in ^2.
The Comparison Principle can be used as an alternative tool to prove
the existence of the solution u_V of (<ref>), via construction
of appropriate sub and supersolutions. In the next section we
construct an explicit barrier which later will be used to obtain lower
and upper solution with matching sharp asymptotics at infinity. This
will lead to the sharp decay estimates on u_V and ρ_V.
§.§ Super-harmonicity of the potential is essential
We are going to show that the assumptions (-Δ)^1/2 V≥ 0 is in a certain sense necessary for the positivity of the minimizer ρ_V.
Let V∈∩ C^α(^2) for some α∈(0,1].
Assume that V≠ 0 and
lim_|x|→∞|x|V(x)=0.
Then ρ_V changes sign in ^2.
The assumption (<ref>) implicitly necessitates that
(-Δ)^1/2V can not be non-negative. Indeed, if
(-Δ)^1/2V≥ 0 then lim_|x|→∞|x|V(x)>0 (cf. (<ref>) below), which is incompatible with (<ref>).
According to (<ref>) and Lemma
<ref>, we know that ρ_V∈ℋ_0∩ C^α(^2),
U_ρ could be identified with the Riesz potential of the function
ρ as in (<ref>), U_ρ_V∈ C^1/3(^2),
and
sign(ρ_V)|ρ_V|^1/2(x)=V(x)-
U_ρ_V(x) for all x∈^2.
Assume that ρ_V≥ 0 in ^2.
Then for each x∈^2,
U_ρ(x)≥1/2π∫_B_2|x|(x)ρ(y)/|x-y|^2 y≥1/4π|x|∫_B_2|x|(x)ρ(y)^2 y.
In particular,
lim inf_|x|→∞|x|U_ρ_V(x)>0
and hence, in view of (<ref>),
lim sup_|x|→∞|x|sign(ρ_V)|ρ_V|^1/2(x)=
lim sup_|x|→∞|x|(V(x)-
U_ρ_V(x))<0,
a contradiction. A symmetric argument shows that ρ_V≤ 0 is also
impossible.
For example, we can consider the dipole potential
W_Z(x)=Z/2 π(1+|x|^2)^3/2.
Note that W_Z(x)=- d/dt V_Z,t(x)|_t=1. While W_Z>0, it is not difficult to see,
using the harmonic extension of W_Z, that
(-Δ)^1/2W_Z(|x|)=Z(2-|x|^2)/2
π(1+|x|^2)^5/2,
which is a sign–changing function.
Clearly, W_Z satisfies the assumptions of Proposition <ref>, so the minimizer ρ_W_Z changes sign for any Z>0.
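As a quick numerical illustration (not part of the argument above), the sign change of (-Δ)^1/2W_Z at |x|=√(2) can be checked through the radial Fourier (Hankel) representation: with the convention in which (-Δ)^1/2 acts as multiplication by |ξ|, the transform of W_Z is Z e^-|ξ|, so (-Δ)^1/2W_Z(r)=(Z/2π)∫_0^∞ k^2 e^-k J_0(kr) dk. The following Python sketch (standard numpy/scipy; the value Z=1 and the integration cutoff are arbitrary choices made here) compares this integral with the closed form above.

import numpy as np
from scipy.integrate import quad
from scipy.special import j0

Z = 1.0   # arbitrary test charge

def half_laplacian_WZ(r):
    # Hankel representation: (-Delta)^{1/2} W_Z(r) = (Z/2pi) * int_0^inf k^2 e^{-k} J_0(kr) dk;
    # the exponential damping makes a finite cutoff at k = 60 harmless
    val, _ = quad(lambda k: k**2 * np.exp(-k) * j0(k * r), 0.0, 60.0, limit=400)
    return Z * val / (2.0 * np.pi)

def closed_form(r):
    return Z * (2.0 - r**2) / (2.0 * np.pi * (1.0 + r**2)**2.5)

for r in (0.0, 1.0, np.sqrt(2.0), 2.0, 5.0):
    print(r, half_laplacian_WZ(r), closed_form(r))   # sign change at r = sqrt(2)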
§.§ Sign-changing minimizer in TFW model
A density functional theory of Thomas-Fermi-Dirac-von Weizsäcker (TFW) type to describe the response of a single layer of graphene to a charge V was developed in <cit.>. For > 0, and in the notations of the present paper, the TFW-energy studied in <cit.> has the form:
_0,(ρ) :=
|ρ|^-1/2ρ_H^1/2(^2)^2+
_0(ρ):_̋0→∪{+∞}.
The existence of a minimizer for _0, with V∈ was established in <cit.>.
We are going to show that if V≥ 0 satisfies the assumptions of Proposition <ref> then for sufficiently small >0 the TFW–energy _0, admits a sign–changing minimizer. This gives a partial answer to one of the questions left open in <cit.> (see discussions in <cit.>).
To show the existence of a sign–changing minimizer for _0,,
assume that V≥ 0 and the assumptions of Proposition <ref> hold. Then the minimizer ρ_V of _0 changes sign.
Let
E_0:=inf__̋0_0=_0(ρ_V).
Similarly to Proposition <ref>, we can also minimize convex energy _0 on the weakly closed set _̋0^+ of nonnegative functions in _̋0.
Let ρ_V^+∈_̋0^+ be the minimizer of _0 on _̋0^+ and set
E_0^+:=inf__̋0^+_0=_0(ρ_V^+).
It is clear that E_0^+<0 and hence ρ_V^+≠ 0 (just take trial functions 0≤φ∈𝒟'(^2) such that ⟨ V,φ⟩>0). By an adaptation of arguments in <cit.>, the minimizer ρ_V^+ satisfies the Thomas–Fermi equation
(ρ_V^+)^3/2=(V-U_ρ_V^+)^+ in 𝒟'(^2).
Observe that supp(ρ_V^+)≠^2. Indeed, assume that ρ_V^+>0 in ^2.
Then ρ_V^+>0 satisfies the Euler-Lagrange equation
(ρ_V^+)^3/2=V-U_ρ_V^+ in 𝒟'(^2),
which contradicts to the uniqueness, since (<ref>) has a sign–changing solution ρ_V by Proposition <ref>.
Crucially, by the strict convexity of _0 we can also conclude that
E_0<E_0^+.
Next, for > 0 consider the TFW-energy _0,. Set
E_0,:=inf__̋0_0,.
The existence of a minimizer for E_0, was established in <cit.>.
Without loss of generality, we may assume that ρ_V is regular
enough and |ρ_V|^-1/2ρ_V∈H^1/2(^2) (otherwise we may approximate ρ_V by smooth functions).
Then
E_0,≤|ρ_V|^-1/2ρ_V_H^1/2(^2)^2
+E_0→ E_0 as → 0.
Similarly,
E_0^+≤ E_0,^+:=inf__̋0^+_0,.
Taking into account the strict inequality (<ref>), for sufficiently small >0 we have
E_0<E_0,< E_0^+≤ E_0,^+.
In particular, E_0,< E_0,^+ and we conclude that a minimizer for E_0, must change sign. For example, a dipole, or any compactly supported nonnegative potential should give rise to a sign–changing
global minimizer in the TFW model.
§.§ Logarithmic barrier
Recall (cf. <cit.>*Theorem 1.1) that for a radial function
u∈ C^2(_+) such that
∫_0^∞|u(r)|/(1+r)^3 r r <∞,
the following representation of the fractional Laplacian
(-Δ)^1/2 in ^2 is valid:
(-Δ)^1/2u(r)=1/2 π
r∫_1^∞(u(r)-u(rτ)
+u(r)-u(r/τ)/τ)𝒦(τ) τ,
where
𝒦(τ) := 2πτ^-2 _2F_1(32,32,1,τ^-2) ,
see <cit.>*p. 246. Note that 𝒦(τ) > 0 and
𝒦(τ) ∼
(τ-1)^-2 as τ→ 1^+,
𝒦(τ) ∼τ^-2 as τ→ +∞,
so the kernel 𝒦(τ) is integrable as
τ→ +∞, but it is singular as τ→ 1^+.
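These asymptotics are easy to confirm numerically. The short Python sketch below is only an illustration; it assumes scipy's Gauss hypergeometric function hyp2f1 and checks that (τ-1)^2𝒦(τ) approaches a constant as τ→ 1^+, while τ^2𝒦(τ)/(2π) approaches 1 as τ→ +∞.

import numpy as np
from scipy.special import hyp2f1

def kernel(tau):
    # K(tau) = 2*pi * tau**(-2) * 2F1(3/2, 3/2; 1; tau**(-2))
    return 2.0 * np.pi * tau**-2 * hyp2f1(1.5, 1.5, 1.0, tau**-2)

for eps in (1e-1, 1e-2, 1e-3):      # singular behaviour K(tau) ~ (tau-1)**(-2) as tau -> 1+
    print(1 + eps, kernel(1 + eps) * eps**2)
for tau in (1e1, 1e2, 1e3):         # integrable tail K(tau) ~ 2*pi*tau**(-2) as tau -> infinity
    print(tau, kernel(tau) * tau**2 / (2.0 * np.pi))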
Denote
Φ_u(r,τ):=u(r)-u(rτ) +u(r)-u(r/τ)/τ.
Clearly, Φ_u(r,1)=0. A direct computation shows that
∂_τΦ_u(r,1)=0, ∂^2_τΦ_u(r,1)=-2r^2ℒ u(r),
where the differential expression
ℒ u(r):=u”(r)+2/ru'(r)
acts on u(r) as the radial Laplacian in 3D. In particular, the
integral in (<ref>) converges as τ→ 1^+.
We now define a barrier function U ∈ C^2(_+) such
that U(r) is monotone decreasing and
U(r) =
1/rlog(er) ∀ r>1.
Clearly, if u(x) := U(|x|) then u∈ H^1(^2). By
interpolation between L^2(^2) and H^1(^2)
(cf. <cit.>*Proposition 1.52) we also conclude that
u∈ H^1/2(^2).
There exists R>2 such that
(-Δ)^1/2U(r) ∼ -
1/r^2(log(r))^2 for all r>R.
Our strategy is to split the representation in (<ref>) into
three parts ∫_1^2+∫_2^r+∫_r^∞ and then either estimate
each part from above and below or compute the integrals
explicitly, see (<ref>) and (<ref>).
For r>2 we compute
ℒ U(r)=log(e^3 r)/(rlog(er))^3>0.
Next we claim that for all r>2 the following inequalities hold:
Φ_U(r,τ) < U(r) ∀ τ∈[r,+∞),
Φ_U(r,τ) ≤ 0 ∀ τ∈[1,r],
Φ_U(r,τ) ≥ -4 r^2ℒ
U(r)(τ-1)^2 ∀ τ∈[1,2].
We begin by noting that by monotonicity and positivity of U we
have
Φ_U(r,τ)
< U(r),
which yields (<ref>). To deduce (<ref>), observe that for
r>2 and 1≤τ≤ r we have
Φ_U(r,τ)=1/r{1/log(er)-
1/log(er/τ)+
1/τ(1/log(er)-1/log(erτ))}.
It is elementary to see that (<ref>) is equivalent to
log(erτ)/log(er/τ)≥1/τ,
the latter is true for any r>1 and τ∈[1,r] (since in this
range the left hand side is bigger than one).
To derive (<ref>), let A:=log(er) and observe that for r>2 and
τ∈[1,2] we have A>1 and
r {Φ_U(r,τ)+4ℒ U(r)r^2(τ-1)^2}=
=1/log(er)-1/log(er)-log(τ)+1/τ(1/log(er)-1/log(er)+log(τ))
+4log(e^3 r)/(log(er))^3(τ-1)^2
=1/A(1+1/τ)-
(1/A-log(τ)+1/τ(A+log(τ)))
+4(2+A)/A^3 (τ-1)^2
≥1/A(1+1/τ)-
(1/A-log(τ)+1/τ(A+log(τ)))
+4/A^2(log(τ))^2,
where we used the fact that log(τ)<τ-1 for τ≥ 1. It
is convenient to substitute τ=e^x, where x∈[0,log(2)].
Then, taking into account that A≥log(2e) > 2x we rewrite the
right-hand side of (<ref>) as
1/A-1/A-x+e^-x(1/A-1/A+x)
+4 x^2/A^2 = x A{ -1 A - x
+ e^-x A + x +
4x A}
≥x/A^2{-1-2x/A+
(1-x)(1-x/A)
+4 x}
≥3
x^2/A^2{1-1/A}≥ 0 for all x∈[0,log(2)].
Now, for r>2, we compute explicitly, using again the
substitution τ = e^x and a standard asymptotic expansion of the
integral:
∫_2^r r Φ_U(r,τ)τ^-2dτ=
∫_log(2)^z^-1(x z^2 e^-x/(z+1) (x
z+z+1)+z/(x-1)
z-1+z/z+1) e^-x dx
= -7+6log(2)/16 z^2 + O(z^3) as z
→ 0^+,
where we defined z := 1/log(r). Similarly, we have
| ∫_r^∞ r Φ_U(r,τ)τ^-2dτ| ≤∫_z^-1^∞z e^-x/z+1 dx + U(0)
e^z^-1∫_z^-1^∞ e^-2x dx ≤ (1 + 12 U(0))
e^-z^-1.
Therefore, using (<ref>), (<ref>) and (<ref>), for r>2 we estimate
(-Δ)^1/2U(r)≲
r^-1∫_2^rΦ_U(r,τ)τ^-2dτ
+r^-1U(r)∫_r^∞τ^-2dτ,
∼-1/r^2(log(r))^2+1/r^3log(r)∼ - 1/r^2(log(r))^2 as r
→∞.
To deduce a lower estimate, we use (<ref>), (<ref>) and
(<ref>) to obtain
(-Δ)^1/2U(r)≳-rℒU(r)+
r^-1(∫_2^r+∫_r^∞)Φ_U(r,τ)
τ^-2 dτ,
≳-1/r^2(log(r))^2-1/r^2(log(r))^2-
1/r^3∼ -1/r^2(log(r))^2 as r →∞,
which completes the proof.
§.§ Decay estimate
Let V∈∩ C^α(^2) for some α∈(0,1].
Assume that (-Δ)^1/2V≥ 0, V≠ 0, and for some R>0 and C>0,
(-Δ)^1/2V≤C/|x|^2(log|x|)^2 for |x|≥ R.
Then the unique solution u_V∈ H^1/2(^2)∩ C^α(^2) of (<ref>) satisfies
0<u_V(x)≤ V(x) for all x∈^2
and
u_V(x)∼1/|x|log|x| as |x|→∞.
In particular, u_V∈ L^2(^2).
We do not assume radial symmetry of V or u_V. The assumptions
(-Δ)^1/2V≥ 0 and V≠ 0 ensure the positivity of
u_V, while the upper bound (<ref>) controls the
logarithmic decay rate (<ref>). The bound
(<ref>) together with (-Δ)^1/2V≥ 0 implicitly
necessitates that V is positive in ^2,
(-Δ)^1/2V∈ L^1(^2) and
lim_|x|→∞2 π |x| V(x)=(-Δ)^1/2V_L^1(^2),
see Lemma <ref> below. Recall that
(-Δ)^1/2V_Z,d=(4 π^2 d / Z^2) V_Z,d^3, so
V_Z,d satisfies (<ref>).
Note that (-Δ)^1/2V≥ 0 implies that V≥
0 (this could be seen similarly to the argument in the proof of Proposition <ref> but without the nonlinear term). Then the upper bound in (<ref>) follows by
Corollary <ref>. Next recall that u_V∈ C^α(^2) by
Lemma <ref> and u_V≠ 0 by Proposition
<ref>. Therefore,
with c := u_V_L^∞(^2) we get
((-Δ)^1/2+c)u_V=(c-u_V)u_V+(-Δ)^1/2V≥ 0 in ^2.
This implies that u_V(x)>0 for all x∈^2, cf. <cit.>*Lemma 7.1.
To derive (<ref>), set U_λ:=λ U, where
U is the logarithmic barrier function defined in (<ref>).
Recall that U∈ H^1/2(^2)⊂. Using (<ref>) to
estimate (-Δ)^1/2U_λ, we conclude that there exist
positive constants c_1, c_2, C such that for some R'>R and all
sufficiently large λ>0,
(-Δ)^1/2U_λ+bU_λ^2-(-Δ)^1/2V≥
≥-c_1λ/|x|^2(log(|x|))^2+λ^2/|x|^2
(log(e|x|))^2-C/|x|^2(log|x|)^2≥ 0 for
|x|≥ R'.
Similarly, for some R'>R and all sufficiently small λ>0,
(-Δ)^1/2U_λ+bU_λ^2-(-Δ)^1/2V≤
-c_2λ/|x|^2(log(|x|))^2+λ^2/|x|^2(log(e|x|))^2≤ 0
for |x|≥ R'.
Therefore, for suitable values of λ we can use U_λ as a sub or supersolution in the Comparison Principle of Lemma <ref> with Ω=B_R^c.
To construct a lower barrier for the solution u_V, set λ_0:=min_B̅_Ru_V>0.
Then
u_V≥ U_λ_0 in B̅_R.
Taking into account (<ref>), we conclude by Lemma <ref> that
u_V≥ U_λ in ^2,
for a sufficiently small λ≤λ_0.
To construct an upper barrier for u_V, choose μ>0 such that
u_V≤ U_μ in B̅_R,
Using (<ref>), we conclude by Lemma <ref> that
u_V≤ U_λ in ^2,
for a sufficiently large λ≥μ.
§.§ Charge estimate
In the case of the standard Newtonian kernel |x|^-1 on ^3 it
is well–known that for a nonnegative f∈ L^1_rad(^3),
|x|^-1*f=f_L^1(^3)|x|^-1+o(|x|^-1) as
|x|→∞, cf. <cit.> for a discussion. The result becomes nontrivial when we
consider the convolution kernel |x|^-1 on ^2, or more
generally the Riesz kernel |x|^-(N-α) on ^N with
α∈(0,N). It is known that if α∈(1,N) and
f∈ L^1(^N) is positive radially symmetric then
|x|^-(N-α)*f=O(|x|^-(N-α)), see <cit.>*Theorem
5(i). The same remains valid if α∈(0,1] and f is in
addition monotone decreasing, see <cit.>*Lemma 2.2
(4). However, without assuming monotonicity of f,
|x|^-(N-α)*f with α∈(0,1] could have arbitrary fast
growth at infinity <cit.>*Theorem 5.
We are going to show that if f is monotone non-increasing and decays
faster than |x|^-2 then the sharp asymptotics of |x|^-1*f on
^2 is recovered. The proof is easily extended to Riesz kernels
with N≥ 2 and α∈(0,N).
Let 0≤ f∈ L^1(^2) be a function dominated by a radially
symmetric non-increasing function φ:_+→_+ that
satisfies
lim_|x|→∞φ(x)|x|^2=0.
Then
∫_^2f(y)/|x-y| ^2 y=
f_L^1(^2)/|x|+o(|x|^-1) as
|x|→∞.
Fix 0≠ x∈^2 and decompose ^2 as the union of
B={y: |y-x|<|x|/2}, A={y∉B: |y|≤ |x|},
C={y∉B: |y|>|x|}.
We want to estimate the quantity
|∫_A∪ C f (y) (1/|x - y| -
1/|x|) ^2 y |
≤∫_A∪ C f (y) |1/|x - y| -
1/|x|| ^2 y .
Since |x|/2≤|x-y|≤ 2|x| for all y∈ A, by the Mean Value Theorem we have
|1/|x - y| - 1/|x||
≤4|y|/|x|^2 (y∈ A).
Thus
|∫_A f (y) (1/|x - y| - 1/|x|) ^2 y |
≤4/|x|^2∫_A f(y)|y| ^2 y.
On the other hand, since |x-y|>|x|/2 for all y∈ C then
|1/|x|-1/|x - y||
≤1/|x| (y∈ C),
from which we compute that
|∫_Cf(y) (1/|x - y| - 1/|x|) ^2 y |
≤1/|x|∫_C f(y) ^2 y.
Then
|∫_^2f(y)/|x-y| ^2 y
-f_L^1(^2)/|x||≤
4/|x|^2∫_A f(y)|y| ^2
y+∫_Bf(y)/|x-y| ^2
y+1/|x|∫_B∪ C f(y) ^2 y
=:I_1+I_2+I_3.
Using (<ref>), for |x|≫ 2 we estimate
I_1=4/|x|^2∫_|y|≤ |x|f(y)|y| ^2 y ≤8
π/|x|^2∫_0^|x|φ(t)t^2 dt_o(|x|)
=o(|x|^-1) (|x|→∞).
Also using the monotonicity of f and
(<ref>),
for |x|≫ 2 we obtain
I_2=∫_|y-x|≤ |x|/2f(y)/|x-y| dy≤φ(|x|/2)∫_|z|≤ |x|/2dz/|z|= πφ(|x|/2)|x|=o(|x|^-1).
Finally, I_3=o(|x|^-1) as |x|→∞ since f∈ L^1(^2),
so the assertion follows.
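As a numerical illustration of the lemma (not needed for the proof), take the Gaussian f(y)=e^-|y|^2, for which f_L^1(^2)=π, reduce the angular integral via the classical formula ∫_0^2π θ/|x-y| = 4K(k)/(r+s) with modulus k=2√(rs)/(r+s), and check that |x| times the convolution tends to π. The Python sketch below does this; note that scipy's ellipk takes the parameter m=k^2, and the cutoff at s=8 is harmless since f is negligible there.

import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

f = lambda s: np.exp(-s**2)        # radial profile; ||f||_{L^1(R^2)} = pi

def newton_conv(r):
    # (|.|^{-1} * f)(x) for |x| = r: the angular integral equals 4 K(k)/(r+s)
    # with modulus k = 2 sqrt(r s)/(r+s); ellipk expects m = k**2
    integrand = lambda s: f(s) * s * 4.0 * ellipk(4.0 * r * s / (r + s)**2) / (r + s)
    pts = [r] if r < 8.0 else None  # ellipk blows up logarithmically at s = r
    return quad(integrand, 0.0, 8.0, points=pts, limit=400)[0]

for r in (5.0, 10.0, 20.0, 50.0):
    print(r, r * newton_conv(r), "->", np.pi)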
Assume that the assumptions of Proposition <ref> hold and
lim_|x|→∞ 2 π |x|V(x)=Z>0.
Then ρ_V_L^1(^2)=Z.
According to (<ref>), the minimizer ρ_V∈ℋ_0∩ C^α(^2) satisfies
ρ_V^1/2(x)=V(x)-U_ρ_V(x) for all x∈^2.
Taking into account (<ref>), by Lemma <ref> below we conclude that
lim_|x|→∞2 π |x|U_ρ_V(x)=ρ_V_L^1(^2).
Then the assertion follows since lim_|x|→∞|x|ρ_V^1/2(x)=0.
§.§ Universality of decay
We next prove that in the case V=V_Z,d the behavior of
ρ_V_Z,d for large |x| does not depend on the values of Z
and d. Such “universality of decay” is well–known in the standard
atomic Thomas–Fermi theory, going back to Sommerfeld
<cit.>, cf. <cit.> for a
discussion. In TF-theory for graphene a similar universality was
observed by Katsnelson <cit.> (see also
<cit.>).
Let Z>0, d>0 and let V=V_Z,d as defined in (<ref>).
Then
u_V(x)≃1/|x|log|x| as |x|→∞.
To prove the sharp asymptotic decay of the minimizer when
V = V_Z,d, we use the idea in the computation of Katsnelson
<cit.>, also giving the latter a precise
mathematical meaning. To this end, we first note that since
ρ_V ∈ L^1(^2) ∩ L^∞(^2), we have that (<ref>)
holds. In terms of u_V > 0 defined in (<ref>), this equation
reads
u_V(x) = V(x) - 1/2π∫_^2u_V^2(y)/|x - y|^2 y for all x ∈^2,
where we used the regularity of u_V and V. In turn, since
u_V(x) = u(|x|), applying Fubini's theorem we obtain after an
explicit integration:
u(r) = Z/(2π√(d^2 + r^2)) - 1/2π∫_0^∞∫_0^2π u^2(r') r' r' θ/√(r^2 + r'^2 - 2 r r' cosθ)
= Z/(2π√(d^2 + r^2)) - 2/π∫_0^∞ [r' u^2(r')/(r + r')] K( 2√(r r')/(r + r')) r',
where K(k) is the complete elliptic integral of the first kind <cit.>.
Proceeding as in <cit.>, we introduce a smooth bounded
function
F(t) := e^t u(e^t), t ∈,
which satisfies F(ln r) = r u(r). Then with the substitution
r = e^t, (<ref>) written in terms of F(t) becomes
F(t) = Z/(2π√(1 + d^2 e^-2t)) - 2/π∫_-∞^∞ [F^2(t')/(1 + e^(t'-t))] K(1/cosh((t'-t)/2)) t'.
We further introduce (with the opposite sign convention to that in <cit.>)
ϕ(t) := 2 K(1/cosh(t/2)) / (π (1 + e^-t)) - θ(t),
where θ(t) is the Heaviside step function, and note that
ϕ(t) is a positive, exponentially decaying function as
t →±∞, which is smooth, except for a logarithmic
singularity at t = 0. Then, since F(t) → 0 as t → +∞,
(<ref>) becomes
F(t) = Z(1 - √(1 + d^2 e^-2t)) / (2π√(1 + d^2 e^-2t)) + ∫_t^∞ F^2(t') t' -
∫_-∞^∞ϕ(t - t') F^2(t') t'.
Here we applied Lebesgue's dominated convergence theorem to erase the
last term in the limit as t → +∞.
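The stated properties of ϕ (positivity, exponential decay on both sides, and an integrable logarithmic singularity at t=0) can be verified directly. The short Python sketch below is only an illustration; it uses scipy's ellipk, which takes the parameter m=k^2 rather than the modulus k.

import numpy as np
from scipy.special import ellipk
from scipy.integrate import quad

def phi(t):
    # phi(t) = 2 K(1/cosh(t/2)) / (pi (1 + exp(-t))) - Heaviside(t);
    # K(k) is the complete elliptic integral with modulus k, ellipk takes m = k**2
    k = 1.0 / np.cosh(t / 2.0)
    return 2.0 * ellipk(k**2) / (np.pi * (1.0 + np.exp(-t))) - float(t > 0)

for t in (-10.0, -4.0, -1.0, 1.0, 4.0, 10.0):
    print(t, phi(t))                 # positive, exponentially small for large |t|

# the logarithmic singularity at t = 0 is integrable
print(quad(phi, -30.0, 30.0, points=[0.0], limit=300)[0])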
To conclude, we observe that since F(t) ∼ t^-1 we can estimate
the last term in (<ref>) to be O(t^-2) as t →
+∞. Similarly, the first term gives an exponentially small
contribution for t → +∞ and can, therefore, be absorbed into
the O(t^-2) term as well. Thus we have
F(t) = G(t) + O(t^-2), G(t) := ∫_t^∞ F^2(t') dt',
and it follows that G(t) satisfies for all t sufficiently large
dG(t)/dt = -(G(t) + O(t^-2))^2.
In particular, since F(t) ∼ t^-1, we can further estimate for
t ≫ 1:
dG(t)/dt = -G^2(t)(1 + O(t^-1))^2.
Integrating this expression from some sufficiently large t_0 then
gives
1/G(t) - 1/G(t_0) = t - t_0 + O(ln(t / t_0)),
t > t_0.
Finally, solving for G(t) and inserting it into (<ref>)
results in
F(t) = 1/(t + O(ln t)) as t → +∞,
which yields the claim after converting back into the original variables.
§ NONZERO BACKGROUND CHARGE
We now turn to the situation in which a net background charge
density ρ̅∈ is present, which is achieved
in graphene via back-gating. This leads to the modified TF-energy
<cit.>
_^TF(ρ)=2/3∫_^2(|ρ
(x)|^3/2-||^3/2) ^2 x
-sgn()||^1/2∫_^2(ρ
(x)-)^2 x
-∫_^2(ρ (x)-) V(x) ^2 x+1/4
π∬_^2×^2(ρ(x)-)(ρ(y)-)/|x-y|^2
x ^2 y,
where ρ(x)→ sufficiently fast as
|x|→∞. Since this energy is invariant with respect to
ρ→-ρ, →-, V→-V,
in the sequel we assume, without loss of generality, that ρ̅>0.
§.§ A representation of the energy functional
For a given charge density ρ(x) and >0, we
define
ϕ :=ρ - ρ̅.
Then, for ϕ∈ C^∞_c(^2), the energy
^TF_(ϕ) can be written as (with a slight abuse of
notation, in what follows we use the same letter to denote both the
energy as a function of ρ and that as a function of ϕ)
_^TF(ϕ) =
∫_^2Ψ_(ϕ(x)) ^2 x
- ∫_^2 V(x)ϕ(x)^2 x + 1/4π∬_^2 ×^2ϕ(x)ϕ(y)/|x - y|^2 x ^2 y,
where
Ψ_(ϕ):=(2/3)|+ϕ|^3/2-(2/3)^3/2-^1/2ϕ.
Clearly Ψ_:→ is a convex C^1–function of ϕ with
Ψ^'_(ϕ)=|+ϕ|^1/2(+ϕ)-^1/2,
and Ψ_∈ C^∞(∖{-}). The graphs of
Ψ_(ϕ) and Ψ^'_(ϕ) for ρ̅= 1 are
presented in Fig. <ref>.
Using elementary calculus one can see that
c|ϕ|^2/√(+|ϕ|)≤Ψ_(ϕ)≤C|ϕ|^2/√(+|ϕ|) (ϕ∈),
for some universal C > c > 0. This implies that for >0,
{ϕ∈ L^1_(^2) : Ψ_(ϕ)_L^1(^2)<+∞}=L^3/2(^2)+L^2(^2).
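The two-sided bound above is elementary, but a quick numerical check may be reassuring. The following Python sketch (an illustration with the arbitrary choice ρ̅=1) evaluates the ratio Ψ_(ϕ)√(ρ̅+|ϕ|)/|ϕ|^2 over a wide range of ϕ and confirms that it stays bounded away from 0 and from infinity.

import numpy as np

rho_bar = 1.0   # constant background density; the value 1 is an arbitrary choice

def Psi(phi):
    # Psi(phi) = (2/3)|rho_bar + phi|^{3/2} - (2/3) rho_bar^{3/2} - rho_bar^{1/2} phi
    return (2.0 / 3.0) * np.abs(rho_bar + phi)**1.5 \
        - (2.0 / 3.0) * rho_bar**1.5 - np.sqrt(rho_bar) * phi

phi = np.concatenate([-np.logspace(-3, 3, 500), np.logspace(-3, 3, 500)])
ratio = Psi(phi) * np.sqrt(rho_bar + np.abs(phi)) / phi**2
print(ratio.min(), ratio.max())   # both bounded away from 0 and from infinity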
Let >0. Then
Ψ_(·)_L^1(^2):L^3/2(^2) +L^2(^2)→
is a strictly convex and weakly lower semi-continuous functional,
i.e.
⟨ϕ_n,φ⟩→⟨ϕ,φ⟩ ∀φ∈ L^3(^2) ∩
L^2(^2) Ψ_(ϕ)_L^1(^2)≤lim inf_nΨ_(ϕ_n)_L^1(^2).
The strict convexity of Ψ_(·)_L^1(^2) follows from the strict convexity of the function Ψ_:→.
Let (ϕ_n)⊂ L^3/2(^2)+L^2(^2) be a sequence that
converges strongly to ϕ, i.e. there exist representations
ϕ_n=f_n+g_n and ϕ=f+g such that
f_n-f_L^3/2(^2)→ 0 and g_n-g_L^2(^2)→ 0.
Then up to a subsequence Ψ_(ϕ_n)→Ψ_(ϕ)
a.e. in ^2. By Fatou's lemma,
Ψ_(ϕ)_L^1(^2)≤lim inf_nΨ_(ϕ_n)_L^1(^2),
i.e., the sublevel sets of Ψ_(·)_L^1(^2) are
closed in the norm of L^3/2(^2)+L^2(^2). Using the
convexity of Ψ_(ϕ)_L^1(^2), by Mazur's theorem we conclude that all sublevel sets are also weakly
closed in L^3/2(^2)+L^2(^2), i.e. (<ref>)
holds.
§.§ Variational setup and the main result.
In view of Lemma <ref>, the natural domain of the total
TF–energy _^TF is
_̋ := ∩ (L^3/2(^2)+L^2 (^2)),
and the TF-energy is correctly defined on _̋ in the form
_(ϕ) :=
∫_^2Ψ_(ϕ(x)) ^2 x
- ⟨ϕ, V ⟩ + 1/2ϕ_^2,
where ⟨·, ·⟩ denotes the duality pairing
between _̋' and _̋. Having in mind the definition of
_̋ in (<ref>), we have
_̋^' = +(L^3 (^2) ∩ L^2 (^2)).
Our main result concerning minimizers of _ is the following.
Let >0 and V∈_̋^'. Then _ admits a unique
minimizer ϕ_∈_̋ such that
_(ϕ_) = inf__̋_. The minimizer
ϕ_ satisfies the Euler–Lagrange equation
∫_^2Ψ^'_(ϕ_(x))φ(x)^2 x -
⟨φ, V ⟩ + ⟨ϕ_,
φ⟩_=0 ∀φ∈_̋.
The proofs of the existence and uniqueness of the minimizer (employing Lemma <ref>), as well as
the derivation of the Euler–Lagrange equations (<ref>) are small modifications of
the arguments in the proof of Proposition <ref>, so we omit the
details. For the differentiability of the map Ψ_ see
<cit.>*Lemma 6.2.
If, for instance, ϕ_∈_̋∩ L^4/3(^2) then
(<ref>) can be interpreted pointwise as
Ψ^'_(ϕ_(x))+1/2π∫_^2ϕ_(y)/|x-y|^2
y=V(x) a.e. in ^2.
However in general, the Euler–Lagrange equation for _ should
be understood as
Ψ^'_(ϕ_)+ U_ϕ_=V in
𝒟^'(^2),
where
ϕ_∈ℋ_⊂ L^2(^2)+L^3/2(^2) and
U_ϕ_∈ is the potential of ϕ_ defined
via (<ref>).
In the rest of the section, under some additional assumptions on V
we will use the equivalent half–Laplacian representation of
(<ref>) to establish further regularity and decay properties of
the minimizer ϕ_ when >0. Our crucial observation is
that unlike in the case =0, for >0 the minimizer ϕ_
has the same fast polynomial decay as the Green function of
(-Δ)^1/2+1 in ^2, for all reasonably fast decaying
potentials V.
Let >0, V∈ and ϕ_ be the minimizer of
ℰ_ from Theorem <ref>.
(i) If (-Δ)^1/2V∈ L^∞(^2) then
ϕ_∈ H^1/2(^2)∩ C^1/2(^2).
(ii) If additionally, (-Δ)^1/2V≥ 0, V ≠ 0, and for some
C>0 we have
(-Δ)^1/2V≤C/(1+|x|^2)^3/2 in ^2,
then ϕ_>0 in ^2 and
ϕ_(x)∼1/|x|^3 as |x|→∞.
In particular, ϕ_∈ L^1(^2).
In the rest of this section we are going to sketch the proof of
Theorem <ref>. We only emphasise the differnce in the
asymptotic behaviour, other arguments that are similar to the case
=0 will be omitted.
§.§ Half–Laplacian representation, regularity and decay
Let > 0 and ϕ_∈_̋ be the minimizer of
_. Introduce the substitution
u_ := Ψ^'_(ϕ_).
Then (<ref>) transforms into
∫_^2 u_(x)φ(x)^2 x -
⟨φ, V ⟩ + ⟨
U_S_(u_),φ⟩_=0 ∀φ∈_̋,
where
S_(u):=|^1/2+u|(^1/2+u)- (u∈)
is the inverse function of Ψ^'_, so that
S_(Ψ^'_(ϕ))=ϕ, for all
ϕ∈. The graph of S_(u) is shown in
Fig. <ref>.
Let >0, V∈ and u_ be defined by
(<ref>). Then u_∈ and u_ is the unique
solution of the equation
(-Δ)^1/2 u + S_(u) = (-Δ)^1/2 V in .
Moreover,
-v_-≤ u_≤ v_+,
where v_±≥ 0 are solutions of
(-Δ)^1/2 v_±=((-Δ)^1/2V)^± in .
Similar to the proof of Propositions <ref> and <ref>.
The uniqueness of the solution and the bound (<ref>) follows from an extension of the comparison principle of Lemma <ref> to the case of a monotone increasing function S_(u).
Let >0 and V∈. Assume that
(-Δ)^1/2V∈ L^∞(^2), (-Δ)^1/2V≥ 0 and
V≠ 0. Then u_∈ H^1/2(^2)∩ C^1/2(^2),
u_>0 in ^2 and
u_(x)≳1/|x|^3 as
|x|→∞.
If, in addition, for some C>0,
(-Δ)^1/2V≤C/(1+|x|^2)^3/2 in ^2,
then
u_(x)∼1/|x|^3 as |x|→∞.
In particular, u_∈ L^1(^2).
Represent (<ref>) as
((-Δ)^1/2 + 2^1/2) u_ + s_(u_)
= (-Δ)^1/2 V in 𝒟'(^2),
where s_(t)=S_(t)-2^1/2 t and observe that
s_(t)=t^2 for |t|<^1/2 small, while
s_(t)∼|t|t for t large.
In particular, in view of (<ref>) we have u_≥ 0
and u_∈ L^∞(^2). Then for a sufficiently large c>0,
((-Δ)^1/2 + 2^1/2+c) u_=
c u_-s_(u_)+ (-Δ)^1/2 V≥ 0 in ^2.
This implies u_∈ H^1/2(^2)∩ C^1/2(^2), u_>0 in ^2 and additionally,
u_(x)≳1/|x|^3 as |x|→∞,
cf. <cit.>*Lemma 7.1 for a similar argument.
To derive the upper bound on u_, consider the dipole type family of barriers
W_Z,λ(|x|):=Z/2π(1+|λ x|^2)^3/2
and note that using (<ref>), scaling, s_(W_Z,λ)≥ 0 and (<ref>), we obtain
((-Δ)^1/2 + 2^1/2) W_Z,λ +
s_(W_Z,λ)- (-Δ)^1/2 V
≥Zλ(2-|λ x|^2)/2π(1+|λ
x|^2)^5/2+2 Z^1/2(1+|λ
x|^2)/2π(1+|λ
x|^2)^5/2-C/(1+|x|^2)^3/2≥ 0 in
^2 ,
provided that we choose λ=2^1/2 and Z≫ 1 sufficiently large.
Then u_≤ W_Z,2^1/2 in ^2 by an extension of the comparison principle of Lemma <ref> to the equation (<ref>).
The proof of Theorem <ref> now follows from Proposition
<ref> using the explicit representation
ϕ_=S_(u_)=2^1/2 u_ + u_^2 in (<ref>),
which is valid since u_>0.
Data availability statement. Data sharing not applicable
to this article as no datasets were generated or analysed during the
current study.
| http://arxiv.org/abs/2306.04575v1 | 20230606163201 | Quantum entanglement partly demystified | ["Diederik Aerts", "Massimiliano Sassoli de Bianchi"] | quant-ph | ["quant-ph"] |
Quantum entanglement partly demystified
Diederik Aerts and Massimiliano Sassoli de Bianchi
Center Leo Apostel for Interdisciplinary Studies,
Brussels Free University, 1050 Brussels, Belgium
E-Mails: <[email protected]>, <[email protected]>
=========================================================================================================================================================================================================================
We consider a simple string model to explain and partly demystify the phenomenon of quantum entanglement. The model in question has nothing to do with string theory: it uses macroscopic strings that can be acted upon by Alice and Bob in ways that violate, or fail to violate, the Bell-CHSH inequalities and the no-signaling conditions (also called marginal laws) in different ways. We present several variants of the model, to address different objections that may arise. This allows us to make fully visible what the quantum formalism already suggests, about the nature of the correlations associated with entangled states, which appear to be created in a contextual manner at each execution of a joint measurement. We also briefly present the hidden measurement interpretation, whose rationale is compatible with the mechanism suggested by our string model, then offer some final thoughts about the possibility that the quantum entanglement phenomenon might affect not only states, but also measurements, and that our physical reality would be predominantly non-spatial in nature.
Keywords: entanglement; Bell-CHSH inequalities; Tsirelson bound; no-signaling conditions; quantum measurements; extended Bloch representation; non-spatiality.
§ INTRODUCTION
Schrödinger first introduced the term entanglement in the 1930s, and he believed it was the characteristic trait that separated quantum mechanics from classical mechanics <cit.>. Thirty years later, John Bell's famous inequalities were able to test the presence of entanglement in bipartite systems <cit.>. Despite Bell's belief that his inequalities would not be violated, the predictions of quantum theory are no longer put into question today. Quantum entanglement has been shown to be preservable over very large spatial distances <cit.>, and the debate about the profound meaning of the phenomenon has never stopped.
In this article, we show that it is possible to partially demystify entanglement by explaining it in a way that is perfectly compatible with what the quantum formalism already suggests. We will do so by following the approach initiated by one of us in the 1980s, which started from the analysis of simple but sophisticatedly defined macroscopic systems, able to partially imitate the behavior of the microscopic ones, shedding light on the quantum measurement problem in general and on the correlations emerging from quantum entangled systems in particular <cit.>. These ideas were more recently incorporated in a rather general representation of quantum states and measurements, called the extended Bloch representation of quantum mechanics <cit.>, which allows to explain quantum measurement and quantum entanglement in terms of “hidden” (measurement) interactions, responsible for the reduction of the system's state. This is to say that although in this article we will be dealing with very simple toy model systems, the perspective our analysis opens up is broad and still under investigation.
Another important aspect of what we will explain is its didactical value. Indeed, even if the ideas behind the simple models that we will describe are not new (although, to our knowledge, what we have called “Variant 4” of the model has never been described to date), it is the way in which we will present them that is new and, we hope, capable of capturing the attention even of those who, until now, have not given sufficient importance to the clarification that these models allow, especially considering that they also find a more general representation within the quantum formalism and the already mentioned extended Bloch representation of quantum mechanics.
More precisely, starting from a very simple situation, which demonstrates the possibility for a macroscopic system to maximally violate the Bell-CHSH inequalities, hence revealing a possible mechanism behind the quantum correlations, we proceed by presenting a first possible objection. This will allow us to propose a first variant of the model, which will give rise to a subsequent objection, and so on, up to variant number 4. At that point, the objection will be that these simple systems, however clarifying, may have nothing to do with what happens at the microscopic level. And here we come to the last part of this article, where we briefly present the gist of the extended Bloch representation and the associated hidden measurement interpretation.
In the Conclusion section, we address a last objection, that of the absence of a connective structure detectable in space, able to connect the entangled entities and explain their behavior as if they were a single interconnected whole. Here our analysis comes into contact with the real mystery that the phenomenon of entanglement reveals, which is not possible to demystify and which leads us to contemplate a physical reality of a non-spatial nature, where the Euclidean or Minkowski spaces are simply seen as theaters capable of representing the relationships between the different macroscopic entities, and not as a background canvas for all of reality. Finally, we will also evoke the possibility that entanglement could additionally manifest at the level of the measurements, and not only of the states <cit.>.
§ BELL-CHSH INEQUALITIES
We consider a bipartite system, such that it is possible to identify two parts of it, interpretable as two possibly interconnected sub-systems, forming the whole system. These two sub-systems may be more or less easy to characterize, in the sense that it may not be always clear where one sub-system ends and the other begins. What is important, however, is that the ability to distinguish Alice's actions, on one sub-system, from Bob's actions, on the other sub-system, is not lost, Alice and Bob being here the two fictional characters traditionally used to describe these actions.
The paradigmatic example of a quantum bipartite system in an entangled state is that proposed long ago by David Bohm, of two spin-1/2 fermionic entities in a rotationally invariant singlet state <cit.>:
|s⟩=1√(2)(|+⟩⊗|-⟩ -|-⟩⊗|+⟩)
The universally used test for the presence of entanglement is that provided by the Clauser Horne Shimony Holt (CHSH) version of Bell’s inequalities <cit.>. They can be formulated by considering four joint measurements, which we will denote AB, AB', A'B and A'B', where A and A' are the two measurements Alice can freely select and execute on her sub-system, and B and B' are the two measurements that Bob can freely select and execute on his sub-system.
In the case of a two-spin composite system, the joint measurement AB corresponds to the situation where Alice measures one of the two spins, with a Stern-Gerlach oriented along the A-axis, whereas Bob measures the other spin, using a Stern-Gerlach oriented along the B-axis, and same for the other three joint measurements, which use Stern-Gerlach apparatuses oriented along the A' and B' axes, respectively. One then introduces the four linear combinations:
A_ CHSH≡ -E_AB+E_AB'+E_A'B+E_A'B'
B_ CHSH≡ E_AB-E_AB'+E_A'B+E_A'B'
C_ CHSH≡ E_AB+E_AB'-E_A'B+E_A'B'
D_ CHSH≡ E_AB+E_AB'+E_A'B-E_A'B'
where we have defined the correlation function:
E_AB=(P_AB^+++P_AB^–)-(P_AB^+-+P_AB^-+)
which accounts for the “++” and “–” correlated outcomes with a positive sign and for the “+-” and “-+” anticorrelated outcomes with a negative sign, where P_AB^ij is the probability that the joint measurement AB gives the outcome i for A and j for B, with i,j∈{+,-}. And of course the correlation functions E_A'B, E_AB' and E_A'B' are defined in a similar way. Bell-CHSH inequalities then correspond to the four inequalities:
-2≤ A_ CHSH,B_ CHSH,C_ CHSH,D_ CHSH≤ 2
We will not discuss here in detail the case of two spin-1/2 entities in the singlet state (<ref>), since its treatment can be found in any good quantum physics textbook. Let us just mention that if α=π/4 is the angle between the A and B axes, and one additionally considers an angle of 3π/4 between the A and B' axes, and an angle of π/4 between the B and A' axes, with angles of π/2 between A and A' and between B and B', one finds the values: A_ CHSH=C_ CHSH=D_ CHSH=0 and B_ CHSH=-2√(2). Since -2√(2)<-2, inequality (<ref>) for B_ CHSH is clearly violated, and corresponds to what is known as Cirel'son's bound, the maximal violation one can achieve in standard quantum mechanics <cit.>, for as long as all joint measurements are described as product measurements relative to the same tensorial representation <cit.>.
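For readers who wish to reproduce these numbers, the short Python sketch below builds the singlet state and the spin observables for one concrete choice of coplanar axes consistent with the angles just listed (A=0, A'=π/2, B=π/4, B'=3π/4; only the relative angles matter) and evaluates the four CHSH combinations. It is merely an illustration of the standard textbook computation.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(angle):
    # spin observable along a unit vector in the x-z plane, at the given angle from the z-axis
    return np.cos(angle) * sz + np.sin(angle) * sx

# singlet state (|+-> - |-+>)/sqrt(2) in the basis |++>, |+->, |-+>, |-->
s = np.array([0.0, 1.0, -1.0, 0.0], dtype=complex) / np.sqrt(2.0)

def E(th_a, th_b):
    # correlation <s| sigma.a (x) sigma.b |s>; for the singlet it equals -cos(th_a - th_b)
    return float(np.real(s.conj() @ np.kron(spin(th_a), spin(th_b)) @ s))

A, Ap, B, Bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
print("A_CHSH =", -E(A, B) + E(A, Bp) + E(Ap, B) + E(Ap, Bp))   # 0
print("B_CHSH =",  E(A, B) - E(A, Bp) + E(Ap, B) + E(Ap, Bp))   # -2*sqrt(2)
print("C_CHSH =",  E(A, B) + E(A, Bp) - E(Ap, B) + E(Ap, Bp))   # 0
print("D_CHSH =",  E(A, B) + E(A, Bp) + E(Ap, B) - E(Ap, Bp))   # 0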
§ A STRING MODEL
We consider a simple macroscopic string model that can violate (<ref>), inspired by the original “vessels of water model” <cit.> and by similar models usually described in terms of breakable elastic structures; see for instance <cit.> and the references cited therein. After explaining why the model is able to do so, and why it could be illustrative of what also happens in microphysical systems, we consider a first possible objection, which will lead us to analyze a second variant of the model, then a second objection will come, leading to a third version of the model, then a fourth.
§.§ Variant 1: white string
The system on which Alice and Bob perform their joint experiments is a white string of length L, made of a breakable material. It can be considered as a bipartite system as one can easily identify two parts of it, which are the two ends of the string on which Alice and Bob can act independently from one another.
Alice's experiments A and A' consist in measuring the length and color of her string fragment, respectively. More precisely, in experiment A she pulls hard on her end of the string, to then measure the length of the collected fragment, for example using a yardstick. If the length is greater than L/2, the outcome is noted “+,” and “-” otherwise. Experiment A', on the other hand, simply consists in observing the color of the string (without pulling it), and if she finds it is white, the outcome is noted “+,” and “-” otherwise. Bob's experiments B and B' are the same, but executed on his end of the string. See the schematic representation in Figure <ref>.
To calculate the correlation functions, one needs to determine the values of the joint probabilities. Here we assume that the string is uniform, hence it can break at any point with equal probability, when jointly pulled by Alice and Bob. This means that the probability that Alice collects a fragment of length greater than L/2, when jointly performing with Bob measurement AB, is simply 1/2, and of course the same is true for Bob. Also, we assume that Alice always has enough time to see the color of the string, before it is pulled by Bob, when they perform A'B, and same for Bob when they perform AB'. And since the string is white, the probability to observe a non-white color is equal to zero. Also, in joint measurement AB', Alice will pull the entire string, hence the observed length will be with certainty greater than L/2, and same for Bob in measurement A'B. Based on these observations, it is easy to convince oneself that we have the joint probabilities described in Table <ref>.
We see that joint measurement AB produces perfect anticorrelations, hence E_AB=-1, whereas the other three joint measurements, AB', A'B and A'B', produce perfect correlations, hence E_AB'=E_A'B=E_A'B'=1. Therefore, B_ CHSH=C_ CHSH=D_ CHSH=0, A_ CHSH=4, and we obtain an algebraically maximal violation of the CHSH inequalities (<ref>).
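A direct way to convince oneself of these values is to simulate the protocol. The Python sketch below (an illustration that simply encodes the outcome rules described above, with a uniformly distributed breaking point when both parties pull) reproduces E_AB=-1, E_AB'=E_A'B=E_A'B'=1 and A_CHSH=4.

import numpy as np

rng = np.random.default_rng(0)

def E_white_string(joint, n=200_000):
    """Monte Carlo estimate of the correlation E for one joint measurement
    on the white string (Variant 1). 'joint' is 'AB', "AB'", "A'B" or "A'B'"."""
    alice_pulls = joint in ("AB", "AB'")
    bob_pulls = joint in ("AB", "A'B")
    if alice_pulls and bob_pulls:
        # both pull: the string breaks at a uniformly distributed point,
        # and the two collected lengths are perfectly anticorrelated
        out_a = np.where(rng.random(n) > 0.5, 1, -1)
        out_b = -out_a
    else:
        # whoever pulls alone collects the whole string (longer than L/2 -> +),
        # and every color observation gives + since the string is white
        out_a = np.ones(n)
        out_b = np.ones(n)
    return float(np.mean(out_a * out_b))

E = {j: E_white_string(j) for j in ("AB", "AB'", "A'B", "A'B'")}
print(E)
print("A_CHSH =", -E["AB"] + E["AB'"] + E["A'B"] + E["A'B'"])   # = 4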
§.§ Explanation of the model and first objection
Some readers may be surprised to learn that a macroscopic system is capable of violating (<ref>), since there is still a widespread prejudice that Bell's inequalities have solely to do with the description of microscopic systems. Bell's inequalities, however, do not demarcate between microscopic and macroscopic systems, but between correlations of the first kind and correlations of the second kind <cit.>. More precisely, correlations that are contextually created by a joint measurement are called `of the second kind', whereas correlations that were already actual prior to the execution of a joint measurement, hence are not created by the latter, are called `of the first kind'.
When Alice and Bob perform the joint measurement AB, i.e., when they jointly pull their respective ends of the string, the latter will break in a point that cannot be predicted in advance. But by conservation of the matter with which the string is formed, the length of Alice's fragment will always be perfectly anticorrelated with that of Bob, as is clear that if L_A is the length obtained by Alice and L_B is the length obtained by Bob, we always necessarily have L_A+L_B=L, hence, if L_A>L/2, then L_B<L/2, and if L_B>L/2, then L_A<L/2. However, since the two lengths L_A and L_B do not pre-exist the joint measurement AB, we are in a situation where the latter creates the correlations, in the sense that it each time actualizes one among an infinite number of potential correlations. In other words, they are correlations of the second kind.
The model also contains correlations of the first kind, associated with the other three joint measurements, but not all the correlations need to be of the second kind to obtain a violation. It is sufficient that only some of them are. But if none of them are, then the violation of (<ref>) becomes impossible. To enable the reader to appreciate the difference between having or not having correlations of the second kind, we can consider the situation where a colleague of Alice and Bob, who likes to play pranks, has pre-cut the string at some unspecified point; see schematic representation in Figure <ref>. This means that the two lengths L_A and L_B are now actual before the execution of the joint measurements, hence, they are not anymore created by them, but only discovered.
The joint probabilities are now given in Table <ref>, and the difference from the previous situation is that when Alice executes measurement A, and Bob jointly executes measurement B', Alice will not anymore pull the entire string, but only a fragment of it, and same for Bob when he executes B and Alice executes A'.
We now have E_AB'=E_A'B=0, and E_A'B'=-E_AB=1, implying that B_ CHSH=C_ CHSH=0 and A_ CHSH=-D_ CHSH=2, hence Bell-CHSH inequalities (<ref>) are not anymore violated.
Objection 1 [from an imaginary quantum physicist]. This is an intriguing example, that makes you think.[This is roughly the reaction that Bell had, when in the seventies of the last century one of us presented for the first time, at a conference at the CERN, the equivalent `vessels of water model'.] At first, I thought a mistake must be lurking, but the calculations are really very simple and it is easy to check that there is none, and that all reasonings are legitimate. So, the model certainly reveals a mechanism that can be used to violate Bell-CHSH inequalities, but there is little chance that it is the same mechanism in force in quantum entangled micro-systems. Because you see, your model violates Bell-CHSH inequalities with a value of 4, which is the maximal algebraic violation, and we all know that quantum violations are limited by Cirel'son's bound to the value 2√(2) <cit.>. If only because of this, it seems plausible to me that the quantum situation brings in mechanisms of a very different nature.
§.§ Variant 2: black or white string
To address Objection 1, we consider a slightly different situation, so that the magnitude of the violation of Bell-CHSH inequalities can now have arbitrary values, also below Cirel'son's bound. The system on which Alice and Bob perform their joint experiments is always a string of length L, made of a breakable material, but this time the color of the string is unstable, in the sense that it randomly oscillates between black and white, with p_w (respectively, p_b) the probability that, when observed, it appears as white (respectively, black), with p_w+p_b=1. See the schematic representation in Figure <ref>. Alternatively, one can think that Alice and Bob's colleague who likes to play pranks, secretly changes the string before each execution of the joint measurements, placing a white string with probability p_w and a black string with probability p_b.
Different from the previous situation, the black color becomes a possible outcome, so we now have the joint probabilities described in Table <ref>.
The situation for joint measurements AB and A'B' is similar to that of Variant 1 of the model, i.e., they produce perfect anticorrelations and perfect correlations, respectively. On the other hand, joint measurements AB' and A'B do not produce anymore perfectly correlated outcomes. More precisely, we have E_AB=-E_A'B'=-1, and E_A'B=E_AB'=p_w-p_b=1-2p_b, so that: A_ CHSH=-(-1)+2(1-2p_b)+1=4p_w, and similar calculations yield: B_ CHSH=C_ CHSH=0, and D_ CHSH=-4p_b. We see that we can now have a violation of (<ref>) of arbitrary magnitude. In particular, if we choose p_w=√(2)/2, then A_ CHSH=2√(2), which corresponds to Cirel'son's bound. Also, if p_w=1/2, then A_ CHSH=2, and D_ CHSH=-2, so we have no violation in this case.
Objection 2. Interesting to observe that the toy model can violate Bell's inequalities with arbitrary values, so my previous criticism was unfounded. But I have bad news, because I meanwhile discovered that your beautiful model has a flaw: it violates the no-signaling conditions, also called marginal laws. As you certainly know, these conditions are not violated by the quantum formalism, and this means that the mechanism subtended by your model cannot be the same in force in quantum mechanics. More precisely, if for instance we consider the probability P_B(A=+) that Alice obtains outcome “+,” when performing experiment A, irrespective of the outcome of Bob, when the latter performs measurement B, we have: P_B(A=+)=P_AB^+++P_AB^+-=0+1/2=1/2, but if we calculate the probability P_B'(A=+) that Alice obtains outcome “+,” when performing experiment A, irrespective of the outcome of Bob, when the latter performs measurement B', we have: P_B'(A=+)=P_AB'^+++P_AB'^+-=1+0=1. Hence, P_B(A=+)=1/2≠ 1 =P_B'(A=+), which is a violation of the marginal laws. And one can of course exhibit many other of these violations in your model.
§.§ Variant 3: black or white string with length-color correlations
To address Objection 2, we consider a new variation of the model, such that the degree of violation of the marginal laws can now be varied, hence can also be obeyed. This variant is inspired by <cit.>; see also <cit.>. The system is always a string of length L, made of a breakable material whose color oscillates between black and white, with probabilities p_w and p_b, respectively. Measurements A' and B' are the same color-measurements as before, but measurements A and B are now different. When Alice executes A, she always pulls the string, to measure its length, but she also jointly measures its color, and if the outcomes are “long-white” or “short-black,” she notes them “+.” If they are “long-black” or “short-white,” she notes them “-.” Here “long” means longer than L/2, and “short” means shorter than L/2. In other words, Alice, when performing A, now measures the length-color correlation, and Bob does of course the same on his side when performing B. See the schematic representation in Figure <ref>.
In the calculation of joint probabilities, the difference with respect to Variant 2 of the model is not in the joint measurement A'B', as it is the same, nor in joint measurement AB, as is clear that only anticorrelated outcomes can be observed, since Alice and Bob cannot both collect a long fragment, or a short fragment. But there is a difference in the two joint measurements AB' and A'B, since now we cannot have anticorrelated outcomes, as is clear that the length is always long, hence we can only have the “long-white” and “long-black” combinations for the A measurement, when jointly performed with B' (and same for the B measurement, when jointly performed with A'). The (long-white,white) situation corresponds to the “++” outcome and the (long-black,black) situation to the “–” outcome. Also, the “+-” outcome, which corresponds to the “(long-white,black) or (short-black,black)” situation, and the “-+” outcome, which corresponds to the “(short-white,white) or (long-black,white)” situation, cannot happen, as the string is observed to be either black or white by both Alice and Bob, and never to be shorter than L/2. Hence, we have the joint probabilities described in Table <ref>.
The main difference with respect to the previous version of the model is that the joint measurements AB' and A'B now produce perfectly correlated outcomes. More precisely, we now have E_AB=-1 and E_A'B'=E_A'B=E_AB'=1, so that A_ CHSH=4, and B_ CHSH=C_ CHSH=D_ CHSH=0, independently of the value of p_w. Considering the marginal laws, and taking again the example of the two marginal probabilities P_B(A=+) and P_B'(A=+), we have: P_B(A=+)=P_AB^+++P_AB^+-=0+1/2=1/2, and P_B'(A=+)=P_AB'^+++P_AB'^+-=p_w+0=p_w. Hence, if we choose p_w=1/2, then P_B(A=+)=P_B'(A=+), and one can check that all the other marginal relations are also obeyed.
Objection 3. Congratulations, I'm impressed, you managed to create an experimental situation with a macroscopic entity where you have a violation of the Bell-CHSH inequalities without jointly violating the marginal laws. But it still doesn't convince me, because you see, the price you had to pay to succeed is to have again a fixed algebraically maximum violation of the Bell-CHSH inequalities, hence, I'm now back to my first objection. It seems to me that if you fix one aspect of your model, you get a problem somewhere else, thus confirming my sentiment that it does not capture the real mechanism able to explain a microscopic Bell-test quantum experiment.
§.§ Variant 4: two black or white strings with length-color correlations
To address Objection 3, we further elaborate on our model, to obtain a situation where the marginal laws are obeyed and the possibility of varying the magnitude of the violation of the Bell-CHSH inequalities is preserved.[To the authors' knowledge, a macroscopic model with such properties had never been proposed up to now.] The model no longer consists of a single string, but of two strings. This means that when Alice and Bob perform their measurements, which are the same as those defined in Variant 3 of the model, they have to randomly select one of the two strings, which will be then the one on which they will operate. This means that, different from Variant 3 of the model, there will be situations where Alice and Bob do not select the same string, and situations where they do so. See the schematic representation in Figure <ref>.
The calculation of the joint probabilities is now more involved, but presents no difficulties. Let us denote p_1 the probability with which Alice and Bob select string 1, and p_2 the probability with which they select string 2, with p_2=1-p_1. One could of course also consider the situation where these probabilities are different for Alice and Bob, but there is no reason to overcomplicate the model. An important difference with respect to the single string situation is that now Alice can obtain a white (respectively, black) outcome when Bob jointly obtains a black (respectively, white) outcome, for the string color, when they select a different string. Note that when Alice and Bob select a different string, the “short” result can never be actualized. With that in mind, let us now explicitly calculate all the joint probabilities (the reader who is not interested in the details can directly jump to the results in Table <ref>).
Let us start by considering P_AB^++. This corresponds to a situation where both Alice and Bob observe either outcome “long-white,” or outcome “short-black”. If they select the same string, the observed colors cannot be different, nor can they both collect a “short” or both a “long” string, which means that the “++” outcome has no contribution from the situations where Alice and Bob select the same string. So, the events contributing to the “++” outcome, when Alice and Bob select different strings (which are then necessarily “long”), are the following: “(Alice selects string 1 and is white and Bob selects string 2 and is white) or (Alice selects string 2 and is white and Bob selects string 1 and is white).” We thus have the joint probability:
P_AB^++=p_1p_wp_2p_w + p_2p_wp_1p_w=2p_1p_2p_w^2
By a similar reasoning, the “–” outcome corresponds to the events: “(Alice selects string 1 and is black and Bob selects string 2 and is black) or (Alice selects string 2 and is black and Bob selects string 1 and is black),” which gives the joint probability:
P_AB^–=p_1p_bp_2p_b + p_2p_bp_1p_b=2p_1p_2p_b^2
Let us now consider P_AB^+-. It corresponds to the following events: “(Alice selects string 1 and is white and Bob selects string 2 and is black) or (Alice selects string 2 and is white and Bob selects string 1 and is black) or (Alice selects string 1 and Bob selects string 1 and the string is white and Alice's fragment is longer) or (Alice selects string 2 and Bob selects string 2 and the string is white and Alice's fragment is longer) or (Alice selects string 1 and Bob selects string 1 and the string is black and Alice's fragment is shorter) or (Alice selects string 2 and Bob selects string 2 and the string is black and Alice's fragment is shorter).” This gives the joint probability:
P_AB^+- = p_1p_wp_2p_b + p_2p_wp_1p_b + p_1p_1p_w1/2 + p_2p_2p_w1/2 + p_1p_1p_b1/2 + p_2p_2p_b1/2
= 1/2+p_1p_2(2p_wp_b-1)
and the calculation for P_AB^-+ being specular to that of P_AB^+-, we also have
P_AB^-+=1/2+p_1p_2(2p_wp_b-1)
Let us now consider the joint measurement AB', and more precisely the joint probability P_AB'^++. The corresponding events are: “(Alice selects string 1 and is white and Bob selects string 2 and is white) or (Alice selects string 2 and is white and Bob selects string 1 and is white) or (Alice selects string 1 and is white and Bob selects string 1) or (Alice selects string 2 and is white and Bob selects string 2),” which gives the joint probability:
P_AB'^++=p_1p_wp_2p_w + p_2p_wp_1p_w + p_1p_1p_w + p_2p_2p_w=p_w(1-2p_1p_2p_b)
For the joint probability P_AB'^+-, we have the following contributing events: “(Alice selects string 1 and is white and Bob selects string 2 and is black) or (Alice selects string 2 and is white and Bob selects string 1 and is black).” This gives:
P_AB'^+-=p_1p_wp_2p_b + p_2p_wp_1p_b = 2p_1p_2p_wp_b
For the joint probability P_AB'^–, we have to proceed as for probability P_AB'^++, but interchanging the roles of p_w and p_b. This gives:
P_AB'^–=p_b(1-2p_1p_2p_w)
For the joint measurement A'B, we observe that for symmetry reasons we have the same probabilities as for measurement AB'. Finally, let us consider the joint measurement A'B'. For the calculation of P_A'B'^++, we have to consider the following events: “(Alice selects string 1 and is white and Bob selects string 1) or (Alice selects string 2 and is white and Bob selects string 2) or (Alice selects string 1 and is white and Bob selects string 2 and is white) or (Alice selects string 2 and is white and Bob selects string 1 and is white),” which gives:
P_A'B'^++=p_1p_wp_1 + p_2p_wp_2+p_1p_wp_2p_w +p_2p_wp_1p_w=p_w(1-2p_1p_2p_b)
So, we find that P_A'B'^++=P_AB'^++, and we leave it to the reader to check that we also have P_A'B'^+-=P_AB'^+-, P_A'B'^-+=P_AB'^-+, and P_A'B'^–=P_AB'^–. Table <ref> summarizes these results.
Let us calculate the four correlation functions and the quantities in (<ref>). We have:
E_AB=2p_1p_2(p_w^2+p_b^2)-1-2p_1p_2(2p_wp_b-1)=-1+4p_1p_2(1-2p_wp_b)
E_AB'=E_A'B=E_A'B'=1-8p_1p_2p_wp_b
Thus, for the four combinations (<ref>), we find:
A_ CHSH= 4[1-p_1p_2(1+4p_wp_b)]
B_ CHSH=C_ CHSH= D_ CHSH=4p_1p_2(1-4p_wp_b)
Finally, let us also consider the marginal probabilities P_B(A=+) and P_B'(A=+). We have:
P_B(A=+)=P_AB^+++P_AB^+-=1/2+p_1p_2(2p_w^2+2p_wp_b-1)
P_B'(A=+)=P_AB'^+++P_AB'^+-=p_w
Hence, the condition
P_B(A=+)-P_B'(A=+)=1/2-p_w+p_1p_2(2p_w^2+2p_wp_b-1)=0
is verified when p_w=p_b=1/2, and it is straightforward to check that all the other marginal laws are then also obeyed. Replacing these specific values in (<ref>), we find:
A_ CHSH= 4(1-2p_1p_2)=4(p_1^2+p_2^2)
B_ CHSH=C_ CHSH= D_ CHSH=0
In conclusion, when p_w=1/2, the marginal laws are obeyed, and depending on the value of p_1, A_ CHSH can vary from a situation of no violation, with A_ CHSH=2, when p_1=1/2, to a situation of an algebraically maximal violation, A_ CHSH=4, when p_1=0, or p_1=1. Also, when p_1=(1/2)(1±√(√(2)-1)), A_ CHSH=2√(2) is exactly Cirel'son's bound.
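Because the bookkeeping above is somewhat delicate, a Monte Carlo simulation of Variant 4 provides a useful cross-check. The Python sketch below (an illustration that encodes the verbal description of the protocol: independent colors for the two strings, independent string selections, and a uniform breaking point when both parties pull the same string) reproduces A_CHSH≈2√(2) for p_w=1/2 and p_1=(1/2)(1+√(√(2)-1)); setting p_1=1/2 instead gives A_CHSH≈2, i.e., no violation.

import numpy as np

rng = np.random.default_rng(1)

def E_variant4(joint, p1, pw, n=400_000):
    """Monte Carlo estimate of E for one joint measurement of Variant 4."""
    # independent colors of the two strings (+1 = white, -1 = black)
    colors = np.where(rng.random((n, 2)) < pw, 1, -1)
    # independent string selections of Alice and Bob (0 = string 1, 1 = string 2)
    sel_a = (rng.random(n) >= p1).astype(int)
    sel_b = (rng.random(n) >= p1).astype(int)
    col_a = colors[np.arange(n), sel_a]
    col_b = colors[np.arange(n), sel_b]

    alice_pulls = joint in ("AB", "AB'")
    bob_pulls = joint in ("AB", "A'B")
    long_a = np.ones(n)
    long_b = np.ones(n)
    if alice_pulls and bob_pulls:
        # only when both pull the *same* string does it break at a uniform point;
        # then one party gets the long piece (+1) and the other the short one (-1)
        same = sel_a == sel_b
        alice_long = rng.random(n) < 0.5
        long_a = np.where(same & ~alice_long, -1, 1)
        long_b = np.where(same & alice_long, -1, 1)

    # pulling outcome: + iff "long-white" or "short-black", i.e. length * color = +1
    out_a = long_a * col_a if alice_pulls else col_a
    out_b = long_b * col_b if bob_pulls else col_b
    return float(np.mean(out_a * out_b))

pw = 0.5                                          # marginal laws hold for this value
p1 = 0.5 * (1.0 + np.sqrt(np.sqrt(2.0) - 1.0))    # analytic Tsirelson point
E = {j: E_variant4(j, p1, pw) for j in ("AB", "AB'", "A'B", "A'B'")}
A_chsh = -E["AB"] + E["AB'"] + E["A'B"] + E["A'B'"]
print(E)
print("A_CHSH =", A_chsh, "  2*sqrt(2) =", 2.0 * np.sqrt(2.0))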
Objection 4. I'm again impressed, you succeeded in responding to all my criticisms. So, I am now forced to admit it, this toy model reveals a mechanism that is general enough to possibly describe quantum entanglement. But I still doubt this is `the' mechanism at work when dealing with micro-systems. Because if that were true, this should somehow emerge also from the quantum formalism, but I do not believe the latter predicts the existence of these mysterious non-local “hidden” potential interactions, responsible for the existence of correlations of the second kind.
§ MEASUREMENT INTERACTIONS
To address Objection 4, we now abandon the string model and enter the description of the measurement process in the quantum formalism, to emphasize that the latter naturally incorporates the possibility of interpreting quantum probabilities as resulting from “hidden” measurement interactions that are contextually actualized at each run of a measurement.
To see this, let us consider the simple situation of a measurement on a spin-1/2 entity, which only has two outcomes. Using Dirac notation, let ρ=|ψ⟩⟨ψ| be the initial spin state. When we represent it in the Bloch sphere, we can associate to it a 3-dimensional unit real vector r, such that ρ=(1/2)(𝕀 + σ· r), where σ is a vector formed by the three Pauli matrices, generators of the SU(2) group.
Let us assume that we are measuring a spin observable A=P_+-P_-=|+⟩⟨+|-|-⟩⟨-|. In the Bloch representation, there exist two opposite unit vectors, n_±, n_+=- n_-, such that P_±=(1/2)(𝕀 + σ· n_±). In other words, the measurement of the observable A can be associated with the one-dimensional diameter-region subtended by the two outcome unit vectors n_±, which we will call △_1.
It is well accepted that a measurement involves a decoherence process, consequence of the entanglement resulting from the interaction of the measured entity with the measuring apparatus, producing the disappearance of the non-diagonal elements of the operator state ρ, when expressed in the eigenbasis {|+⟩,|-⟩}. This means that the first phase of a measurement can be described as the transition of ρ towards the fully reduced operator state ρ^∥=r_+^∥ P_+ + r_-^∥ P_-, where the coefficients r_±^∥ = Trρ P_± = |⟨±|ψ⟩|^2 are the Born probabilities.
From the perspective of the Bloch sphere, the decoherence process can be described as a movement where the point representative of the state “dives” into the sphere, along a rectilinear path orthogonal to the diameter-region △_1, until it stops exactly on that region. In other words, we have a first phase in the measurement corresponding to the transition ρ→ρ^∥, where one can show that ρ^∥=(1/2)(𝕀 + σ· r^∥), with r^∥=r_+^∥ n_+ + r_-^∥ n_-, the vector r^∥ being parallel to △_1, which explains our use of the ∥-symbol.
It is at this point that the formalism reveals the possible existence of hidden (potential) interactions. Indeed, the vector r^∥ splits the region △_1 in a way that is exactly proportional to the outcome probabilities, in the sense that the sub-region A_+, going from n_- to r^∥, has length μ(A_+)=2r_+^∥, and the sub-region A_-, going from r^∥ to n_+, has length μ(A_-)=2r_-^∥. And since μ(△_1)=2, one observes that the ratios r_±^∥ = μ(A_±)/μ(△_1) are exactly the Born probabilities. This means that one can interpret △_1 as a potentiality region describing all the available interactions between the measuring and measured systems, with those in A_+ producing the collapse r^∥→ n_+, and those in A_- producing the collapse r^∥→ n_-.
There is an interesting way to visualize this process. One can imagine the potentiality region to be like an abstract elastic band that can break at some unpredictable point, which by collapsing pulls the point particle representative of the state towards one of the two outcome states, according to the projection postulate. The hidden interactions are then in a correspondence with all these potential breaking points of the elastic band; see Figure <ref>.
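To make the elastic-band picture concrete, here is a minimal Monte Carlo sketch (our own illustration, with hypothetical parameter names; it is not taken from the cited works): the breaking point is drawn uniformly on the band of length 2 stretched between n_- and n_+, the decohered state sits at distance 2r_+^∥ from n_-, and a break in A_+ yields the outcome '+'. The empirical frequencies converge to the Born probability r_+^∥ = cos^2(θ/2) for a pre-measurement spin state at angle θ from n_+:

import numpy as np

rng = np.random.default_rng(0)

def hidden_measurement(theta, n_runs=200_000):
    # Band parametrized by [0, 2], with n_- at 0 and n_+ at 2.
    # The on-band state splits it into A_+ = [0, 2*r_plus) and A_- = [2*r_plus, 2].
    r_plus = np.cos(theta / 2) ** 2               # Born probability of outcome '+'
    breaks = rng.uniform(0.0, 2.0, size=n_runs)   # uniformly selected breaking points
    freq_plus = np.mean(breaks < 2 * r_plus)      # break in A_+  ->  collapse towards n_+
    return freq_plus, r_plus

for theta in (0.0, np.pi / 3, np.pi / 2, 2 * np.pi / 3, np.pi):
    freq, born = hidden_measurement(theta)
    print(f"theta = {theta:.3f}   empirical P(+) = {freq:.4f}   Born = {born:.4f}")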
It is of course not the scope of this paper to delve into the mathematics of the hidden measurement interpretation of quantum mechanics, and we refer the interested readers to <cit.> for more details. What is important here to emphasize is that the above description is not limited to 2-dimensional systems and can be generalized to measurements having an arbitrary number N of outcomes, possibly also degenerate. Then, the potentiality region, describing the hidden interactions, becomes an (N-1)-dimensional simplex △_N, inscribed in the convex region of states belonging to a generalized (N^2-1)-dimensional Bloch sphere <cit.>. These simplexes can always be viewed as abstract hyper-membranes that the pre-measurement state r, following the decoherence process r→ r^∥, partitions into N convex sub-regions, and one can prove that the relative sizes of these regions correspond exactly to the quantum probabilities <cit.>.
Objection 5. I wasn't aware that the Bloch representation could be generalized to an arbitrary number of dimensions N of the state space, and that the eigenvectors that characterize an observable determine in it a simplex-shaped region, that the decohered state is capable of partitioning into as many sub-regions as there are outcomes of the measurement, and that the relative sizes of these sub-regions are exactly the probabilities predicted by Born's rule. This certainly makes it more credible that there could be an objective “weighted symmetry breaking mechanism,” based on hidden interactions, at the origin of the quantum collapses and associated probabilities, in the hypothesis of course that such collapses would be objective physical phenomena and not mere Bayesian updates of an experimenter's knowledge. But let me express a last objection. Coming back to your string model, it is obvious that quantum correlations, if they are truly created by the joint measurements, they must originate from a connective element that somehow unites the two sub-entities. In your model, this connective element is the very body of the string. But when considering two micro-entities, like two entangled electrons, or two entangled photons, we know that when they are separated by arbitrarily large distances there is nothing that can be detected in the space between them, playing the role of such hypothetical connective element. Ultimately, I believe this is the fundamental reason why your interesting string model is unable to capture the gist of what happens during a typical Bell-test experiment with micro-entities.
§ CONCLUDING REMARKS
This last objection brings us closer to the true mystery of quantum entanglement, and more generally of quantum superposition states. Again, we want to present some arguments that will help clarify why this mechanism of correlations of the second kind is able to properly illustrate what happens with the entanglement phenomenon. Here we will do so from a more general perspective of the foundations and philosophy of physics.
Let us first emphasize that the reason we consider simple mechanical models of breaking strings as examples relevant to the phenomenon of entanglement is not an attempt to give quantum mechanics a more classical interpretation. We believe that the unquestionable success of quantum mechanics as a predictive physical theory shows that the microscopic reality has a profoundly different nature from the everyday reality that surrounds us, thus from the classical mechanical reality. So, why these string models to illustrate the phenomenon of entanglement?
To answer this question, we must consider another one: “What are the properties of physical reality that are fundamentally different in quantum reality from classical reality?" This is what is really important to consider first, and in doing so, it is appropriate to be cautious in any of our attempts to interpret those micro-world phenomena that confront us with a “quantum singularity." With this in mind, it is
also important to remember that the phenomenon of quantum entanglement is primarily linked to the superposition principle, an entangled state being in general a superposition of product states. But the superposition principle is generally valid in quantum mechanics, even in the situation of a system formed by a single entity. This observation is relevant because there are strong reasons to believe that the absence of a connective element in space between two entangled entities participating in a Bell-test experiment, unlike the situation in our string models, can be explained by a general property of quantum entities that is also possessed by single quantum entities.
Indeed, since the early days of quantum mechanics, there were strong suspicions that the Schrödinger wave function should not be regarded as a wave present in space, but as an expression of the potentialities by which a quantum entity can be located in certain regions of space. In pondering this possibility, of particular importance were the experiments conducted in the 1970s by Helmut Rauch, with low-energy neutrons <cit.>, when one of us was a doctoral student in the quantum physics group at the University of Geneva, where the work of Rauch's experimental group had attracted much attention.[Interestingly, Anton Zeilinger, who was recently awarded the Nobel Prize for his experimental work on entanglement, co-authored Rauch's main experiment <cit.>, as he was one of his students at the time.]
In fact, these astonishing experiments made it particularly clear that it was not possible to think of the wave function as a wave spreading in space, in the sense that such hypothesis was insufficient to explain the non-local effects that these experiments revealed, which led to the introduction, by one of the authors, of the notion of non-spatiality, already from the early 1990s <cit.>. The other author of this article was also working in Geneva, a decade later, when interest in neutron interferometry experiments was still very much alive. This partly explains why their collaboration around the notion of non-spatiality could naturally emerge, also influencing other fields of their joint research <cit.>.
More exactly, it was possible to deduce from these experiments that a neutron can show its joint presence in two separate spatial regions, within an interferometer, with nothing being present in the space between them. Despite this separation, the neutron continues to behave as a whole entity, thus demonstrating that in the quantum layer of reality wholeness and interconnectedness do not depend on having actual spatial connections between the different elements that form an entity.
We believe that what happens with the interconnectedness of entangled entities must find an explanation that is similar to what happens to the neutrons in Rauch's experiments. It is also important to note that, in the meantime, similar experiments have been carried out that can also bring complex entities like molecules, consisting of more than 800 atoms, in such kind of non-local superposition state <cit.>, showing that they can remain whole entities throughout an experiment despite manifesting in different spatial regions, with nothing in between them. It is such behavior with respect to space, of a single entity, or of two entangled entities, that we call non-spatiality.
Compatibly with the notion of non-spatiality, we believe that the non-linear and indeterministic evolution of the wave function, usually referred to as collapse, or reduction, is to be considered as the more general process of change, while the linear and deterministic one, described by the Schrödinger equation, should be considered as the special case, applicable only when there is no interaction with a measuring apparatus. This is the way quantum physics is regarded today by most physicists who use it, but for some reason it is rarely the way it is regarded by those who reflect on its philosophical interpretation.
This means that the philosophical maneuver leading to the Many Worlds interpretation <cit.> would not be necessary, in the sense that the indeterministic evolution, which perhaps for the time being is described too simply on the basis of the collapse of the wave function, should not disappear and be absorbed in the Schrödinger evolution, but in a sense exactly the opposite <cit.>.
In the interpretive approach we have proposed with our collaborators, it is also important that the fundamental evolution in the quantum world remains indeterministic, to allow both physics and psychology, when more advanced, to integrate within a single explanatory framework the free choice of a person who decides whether and which experiment to carry out. In our approach, steps are already taken in that direction, since the indeterminism of a quantum measurement is described to occur due to the presence of fluctuations on the interactions between the measuring apparatus and the entity being measured.
Note that a form of this indeterminism also exists in classical mechanics. Indeed, if a classical entity is in a state of unstable equilibrium, it is the fluctuations present in the perturbations of this state that push the classical entity out of it, and these fluctuations are usually unpredictable, i.e., random in nature. We then speak of bifurcations. The difference with the quantum situation is that the set of unstable states in a classical theory always has measure zero, and in this sense the uncertainties associated with it do not contribute to the probabilities associated with outcomes of the measurements. Superposition states in quantum mechanics, on the contrary, always form the largest part of the set of all states, so that the indeterminism they carry does contribute to the probabilities of the outcomes of a measurement.
The presence of an irreducible uncertainty in quantum mechanics, but also in classical mechanics if we consider the unstable equilibrium states, does not mean that the entities being measured, the measuring apparatuses, the experimenters and their environments, cannot proceed as a whole according to a deterministic evolution. So, in principle, what is now called superdeterminism <cit.> is a hypothesis that can be put forward as possibly true. However, what the adherents of such an interpretive view seem not to be aware of is how important it is, for our most basic notion of reality viewed as an operational construction, that we possess a genuine free choice, in the sense of being able to select our experiences among a set of possibilities that are truly such.
This possibility, of having been able to make different choices in our past, is what really determines the content and richness of our present reality, and is at the very basis of operationalism, which in turn is at the core of the project of science. But it would take us too far to analyze this statement and its scope in detail here, so we invite the interested reader to <cit.>. Given the main topic of this article, it is also fitting to note that the hypothesis of the existence of free choice is crucial in terms of interpreting the meaning of a Bell-test experiment, when Bell inequalities are violated. Indeed, if experimenters are not assumed to be able to freely choose the orientations of the polarizers in measurements on polarization-entangled photon pairs, the very significance of the design of these experiments is lost <cit.>.
Coming back to the notion of non-spatiality, it is worth observing that a widespread “classical” prejudice persists today, consequence of the way in which we humans have experienced reality during our evolution on this planet, which is to believe that our physical world is somehow fully contained in space, and more generally in spacetime. As a consequence, as we observed already, if two entities are separated by a very large spatial distance, they are also believed to be experimentally separated, when we act on them simultaneously. This prejudice is really what led to the initial disbelief about the quantum entanglement phenomenon, but experimental data, via the violation of Bell-CHSH inequalities, told us a very different story, that we could no longer ignore. But we were then also left with the uncomfortable feeling that the entanglement phenomenon, however real, was impossible to explain.
On the other hand, if space and time are considered to be emergent, i.e., coming into being with the formation of macroscopic aggregates of matter, the unjustified assumption would be to consider that two entities are necessarily disconnected if we cannot detect anything measurable in the space between them, functioning as a connective element. Indeed, a connective element may well exist without being detectable as a specific spatial element. The extended Bloch formalism, which we briefly mentioned in Section <ref>, also suggests the existence of such non-spatial missing piece `in between' two entangled entities. The formalism actually does much more than this, as it also provides a less rudimentary description of the non-linear, quantum collapse, indeterministic processes of change, than the standard formalism allows.
To be more specific, take the case of two spin-1/2 entities. Since the Hilbert space is 4-dimensional, the generalized Bloch sphere is 15-dimensional, and a state ρ=|ψ⟩⟨ψ| can always be written as
ρ = (1/4)(𝕀 +√(6) r·Λ)
where Λ is a 15-dimensional vector formed by a determination of the generators of SU(4). We can take them to be <cit.>: Λ_1=(1/√(2))σ_1⊗𝕀, Λ_2=(1/√(2))σ_2⊗𝕀, Λ_3=(1/√(2))σ_3⊗𝕀, Λ_4=(1/√(2))𝕀⊗σ_1, Λ_5=(1/√(2))𝕀⊗σ_2, Λ_6=(1/√(2))𝕀⊗σ_3, Λ_7=(1/√(2))σ_1⊗σ_1, …, Λ_15=(1/√(2))σ_3⊗σ_3. By direct calculation, one can then show that the vector r associated with the state ρ, in the Blochean representation, can always be written as the direct sum:
r = (1/√(3)) r_ Alice⊕(1/√(3)) r_ Bob⊕ r_ conn,
where r_ Alice and r_ Bob are the 3-dimensional Bloch vectors describing the states of Alice's and Bob's sub-systems, respectively, and r_ conn is a 9-dimensional vector describing the “state of their connection.”
When the composite system is in a product state, then r_ conn is trivial, in the sense of being fully determined by the components of r_ Alice and r_ Bob. However, when the composite system is in an entangled state, r_ conn cannot anymore be deduced from the knowledge of the sub-systems' states r_ Alice and r_ Bob, as it now describes a genuine additional element of reality, in accordance with the principle that the whole can be greater than the sum of its parts. The important observation for our discussion is that r_ conn is higher dimensional compared to r_ Alice and r_ Bob, which suggests that the entanglement connection would belong to a more abstract layer of our reality, less prone to be detectable by our three-dimensional measuring instruments.
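The decomposition above is easy to check numerically. The short sketch below is our own illustration (the normalization r_i = (2/√(6)) Tr(ρΛ_i) follows from Tr(Λ_iΛ_j) = 2δ_ij for the generators listed above): for a product state the nine components of r_conn reduce to (1/√(3))(r_Alice)_i(r_Bob)_j, whereas for the singlet state r_Alice = r_Bob = 0 and r_conn alone carries the whole (unit-length) Bloch vector.

import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

# The 15 generators Lambda_1, ..., Lambda_15, normalized so that Tr(L_i L_j) = 2 delta_ij.
L = ([np.kron(s, I2) / np.sqrt(2) for s in paulis]                      # Alice block (1-3)
     + [np.kron(I2, s) / np.sqrt(2) for s in paulis]                    # Bob block (4-6)
     + [np.kron(a, b) / np.sqrt(2) for a in paulis for b in paulis])    # connection block (7-15)

def bloch_vector(rho):
    # rho = (1/4)(I + sqrt(6) r.Lambda)  =>  r_i = (2/sqrt(6)) Tr(rho Lambda_i)
    return np.array([(2 / np.sqrt(6)) * np.trace(rho @ Li).real for Li in L])

def qubit(n):
    # single-qubit state with Bloch vector n
    return 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)

# Product state: the connection part is fixed by the two local Bloch vectors.
a, b = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
r = bloch_vector(np.kron(qubit(a), qubit(b)))
r_alice, r_bob, r_conn = np.sqrt(3) * r[:3], np.sqrt(3) * r[3:6], r[6:]
print(np.allclose(r_conn, np.outer(r_alice, r_bob).ravel() / np.sqrt(3)))   # True

# Singlet state: local Bloch vectors vanish, yet r_conn is non-trivial (entries -1/sqrt(3)).
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
r = bloch_vector(np.outer(psi, psi.conj()))
print(np.round(r[:6], 6), np.round(r[6:], 3), np.round(np.linalg.norm(r), 6))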
Regarding the general notion of non-linear and indeterministic change discussed above, we believe that at least some of this change does not occur within space, parameterized by time, thus within spacetime. This means that the so-called `collapse models' of the measurement process, which are usually proposed with the aim of making quantum theory classical again, will not lead to satisfactory results, because they carry with them the bias that change must be a process that occurs within spacetime. In other words, more sophisticated models, also from a philosophical point of view, will be necessary to take into due account the ingredient of non-spatiality.
Note also that in the string models we have described, there is the simplifying assumption that all points of a string have equal probability to become a breaking point. In the extended Bloch formalism, this assumption of uniform probability over the different interactions is what allows to derive the Born probabilities. But one can relax such condition and also consider situations where some interactions are more probable than others, i.e., where the probability distributions are not anymore uniform. In this more general framework, the `uniform case' is situated somehow in between two limit cases, the `classical case', where almost all outcomes are predetermined once the measured entity is in a given state, and the `solipsistic case', where only the fluctuations are responsible for the (non-predeterminable) outcome, with no influence from the state <cit.>.
In that sense, one can say that if a quantum measurement is described by the standard collapse model in Hilbert space, then it lies in between a situation of pure discovery (of what exists prior to the measurement) and pure creation (of something that did not exist prior to the measurement). However, this feature of the pure quantum measurements cannot be captured by remaining within the Hilbert space formalism. It is necessary for this to consider a more fine grained approach, like the one of the extended Bloch formalism, where probability distributions are not restricted to be uniform. The Born probabilities can then be shown to appear as universal averages, i.e., as averages over all different possible probability distributions, describing all possible ways of selecting a measurement-interaction <cit.>.
This possibility, of viewing quantum measurements as universal measurements, i.e., as averages over different typologies of measurements, classical, solipsistic and in between, holds a possible explanation of the classicality of the macroscopic material world. Indeed, this would be due to the presence of an oversupply of classical-like measurements in the collection of those on which the average operates. Does it mean that non-classical, or even solipsistic, measurements would be totally absent in the macroscopic material reality? Surely not, as the examples of measurements with the strings we proposed in this article clearly demonstrate. Simply, these typologies of measurements are not usually considered necessary to gain relevant knowledge about macroscopic material entities, like strings, so in particular they are not used in Bell-test experimental protocols.
One fascinating aspect of the hypothesis that quantum measurements are to be equated with universal measurements, is that it allows free choice
to be described in a way that is compatible with quantum mechanics and the general measurement model we propose, which generalizes the quantum model. Indeed, to give an example, free choice dictated by criteria of rationality, or by a moral code, does not contemplate every possible “way of choosing," but only a tiny sub-class of such ways, and therefore cannot be associated with Born probabilities, but with something closer to a classical probabilistic model. On the other hand, persons who make their choices in ways that we would define as irrational, not being governed by specific principles, hence more open to actualize every possible “way of choosing," would be more closely described by a quantum model. Also, the fluctuations that give rise to the bifurcations present in classical unstable systems, when viewed as part of measurements, are to be associated with solipsistic-like processes, and corresponding statistics, in the classification scheme we have introduced.
Coming now to the question of marginal laws, we have seen that our model allows for their violation, something that is not possible in quantum mechanics, if one uses the same tensor product representation to describe all the joint measurements as product measurements. However, the violation is not an essential ingredient of it, in the sense that Bell-CHSH inequalities can be easily violated, with arbitrary magnitude, also when the marginal laws are obeyed. This possibility of violating the marginal laws in our model, and in other models built with similar logic <cit.>, opens up the possibility that what is predicted by the standard quantum formalism may not necessarily be always the rule. And in that respect, we can observe that in experiments the marginal laws are in fact violated, although these violations are usually attributed to experimental errors <cit.>.
If these violations truly happen, i.e., are not just experimental errors, it means that the entanglement phenomenon should not only be associated with the states, but also with the joint measurements. Indeed, when the marginal laws and Bell-CHSH inequalities are jointly violated, to model the data one needs to also consider non-product measurements (if one wants to have the same pre-measurement state for each joint measurement), which means that joint measurements should also be considered to be entangled.
Quoting from <cit.>: “Thinking of entanglement as being present also at the level of the measurements might seem like a very drastic perspective [...], particularly in those experimental situations where there is a clear spatial separation between the measurement apparatuses working in a coincident way. However, if the measured entity forms a whole, it is to be expected that also the measurements can become entangled, precisely through the very wholeness of the measured entity, because their action on the latter would occur simultaneously and not sequentially. In other words, the notions of locality and separability, usually intended as `spatial locality' and `spatial separability', need here be replaced by the more general notions of `sub-system locality' and `sub-system separability'. This because among the salient properties of physical and conceptual systems, there is precisely that of non-spatiality, and therefore `separation in space' is not anymore a sufficient criterion for characterizing a separation of two sub-systems and corresponding joint measurements.”
We think it is important to point out that it is not just entanglement that reveals to us that our physical reality is mostly non-spatial. Quantum superposition, quantum measurement, quantum complementarity and quantum indistinguishability are all phenomena that remain unintelligible if we hold on to our prejudice that microphysical entities are permanent residents of our spatiotemporal theater.
Without going into details here, let us mention an interpretation called the conceptuality interpretation, that we are currently further developing in our group <cit.>. In its framework, the nature of non-spatiality is addressed in a very natural way, by considering that physical micro-entities are to be interpreted as “entities of meaning," which therefore can be found in more or less abstract states, with respect to a given semantic context. Non-spatiality would then be an expression of conceptual abstractness, and this opens up a truly novel perspective on the physical world and how quantum mechanics should be interpreted, although, of course, such perspective remains mostly still a subject for further study for the time being.
Much more should be said to fully clarify the scope of the models we have presented and how they fit into the study of more general physical entities,
but this would require much more space than what is available here. To conclude, we have presented different variants of a simple string model that can explain entanglement as being the result of two ingredients: (1) potential measurement interactions between the measured system and the measuring apparatus; (2) an element of reality connecting the two sub-systems from which correlations of the second kind are produced.
In our string model, the existence of the potential measurement interactions presents no mysteries, as they clearly correspond to the different ways Alice and Bob can jointly pull the string, producing different breaking points, hence different correlations. The presence of the connective element is also non-mysterious, as the unbroken string can be interpreted as an entity formed by two string fragments that are not in a well-defined length-state, in the same way the two spin-1/2 entities in a singlet state cannot be associated with well-defined spatial directions. In particular, there is no “spooky action at a distance” between Alice and Bob, and no superluminal communication between them.
We have argued that our macroscopic model remains relevant also for the microscopic domain. Indeed, micro-entities can remain whole even when they appear fragmented from a spatial perspective. This is a general quantum property that holds not only for entangled entities, but also for individual entities. It is usually referred to as non-locality, but should be more properly understood as non-spatiality, or non-spatiotemporality, and is a fundamental feature of all microscopic quantum systems.
[Adenier & Khrennikov(2007)]AdenierKhrennikov2007 Adenier, G. & Khrennikov, A. (2007). Is the fair sampling assumption supported by EPR experiments? J. Phys. B: Atomic, Molecular and Optical Physics 40, 131–141.
[Adenier & Khrennikov(2017)]AdenierKhrennikov2016 Adenier, G. & Khrennikov, A. (2017). Test of the no-signaling principle in the Hensen loophole-free CHSH experiment. Fortschritte der Physik (Progress in Physics) 65, 1600096.
[Aerts(1982)]Aerts1982 Aerts, D. (1982). Example of a macroscopical situation that violates Bell inequalities. Lettere al Nuovo Cimento 34, 107–111.
[Aerts(1984)]a1984 Aerts, D. (1984). The missing elements of reality in the description of quantum mechanics of the EPR paradox situation. Helvetica Physica Acta 57, 421–428.
[Aerts(1986)]Aerts1986 Aerts, D. (1986). A possible explanation for the probabilities of quantum mechanics. Journal of Mathematical Physics 27, 202–210.
[Aerts(1990)]Aerts1990 Aerts, D. (1990). An attempt to imagine parts of the reality of the micro-world. In: J. Mizerski et al. (Eds.), Problems in Quantum Physics II; Gdansk '89. World Scientific Publishing Company, Singapore.
[Aerts(1991)]Aerts1991 Aerts, D. (1991). A mechanistic classical laboratory situation violating the Bell inequalities with 2√(2), exactly `in the same way' as its violations by the EPR experiments. Helvetica Physica Acta 64, 1–23.
[Aerts(1996)]Aerts1996 Aerts, D. (1996). Relativity theory: what is reality? Foundations of Physics 26, pp. 1627–1644.
[Aerts(1998)]aerts1998 Aerts, D. (1998). The entity and modern physics: the creation discovery view of reality. In E. Castellani (Ed.) Interpreting Bodies: Classical and Quantum Objects in Modern Physics, pp. 223–257. Princeton University Press: Princeton.
[Aerts(2005)]aerts2005 Aerts, S. (2005). A realistic device that simulates the non-local PR box without communication. arXiv:quant-ph/0504171.
[Aerts et al(1997a)]aertsetal1997a Aerts, D., Aerts, S., Coecke, B., D’Hooghe, B., Durt, T. & Valckenborgh, F. (1997a). A model with varying fluctuations in the measurement context. In M. Fererro and A. van der Merwe (Eds), New Developments on Fundamental Problems in Quantum Physics (pp. 7–9). Dordrecht: Springer.
[Aerts et al(1997b)]aertsetal1997b Aerts, D., Coecke, B., Durt, T. & Valckenborgh, F. (1997b). Quantum, Classical and Intermediate I & II. Tatra Mountains Mathematical Publications 10, pp. 225-240, pp. 241–266.
[Aerts et al(1993)]aertsetal1993 Aerts, D., Durt, T. & Van Bogaert (1993). Quantum probability, the classical limit and non locality. In K. V. Laurikainen and C. Motonen (Eds.), Proceedings of the International Sympodium on the Foundations of Modern Physics, 1992: The Copenhagen Interpretation and Wolfgang Pauli (pp. 35–56). Singapore: World Scientific.
[Aerts & Durt(1994)]aertsdurt1994 Aerts, D. & Durt, T. (1994). Quantum, classical and intermediate: An illustrative example. Foundations of Physics 24, pp. 1353–1369.
[Aerts & Sassoli de Bianchi(2014)]AertsSassolideBianchi2014 Aerts, D. & Sassoli de Bianchi M. (2014). The Extended Bloch Representation of Quantum Mechanics and the Hidden-Measurement Solution to the Measurement Problem. Annals of Physics 351, 975–1025.
[Aerts & Sassoli de Bianchi(2015)]AertsSassolideBianchi2015 Aerts, D. & Sassoli de Bianchi M. (2015). Many-Measurements or Many-Worlds? A Dialogue. Foundations of Science 20, pp. 399–427.
[Aerts & Sassoli de Bianchi(2016)]AertsSassoli2016 Aerts, D. and Sassoli de Bianchi, M. (2016). The Extended Bloch Representation of Quantum Mechanics. Explaining Superposition, Interference and Entanglement. J. Math. Phys. 57, 122110.
[Aerts & Sassoli de Bianchi(2019)]AertsSassolideBianchi2019 Aerts, D. & Sassoli de Bianchi M. (2019). When Bertlmann wears no socks: contextual common causes as an explanation for quantum correlations. arXiv:1912.07596 [quant-ph]. To be published in a forthcoming World Scientific `Probing the Meaning of Quantum Mechanics' volume.
[Aerts & Sassoli de Bianchi(2021)]AertsSassolideBianchi2021 Aerts, D. & Sassoli de Bianchi M. (2021). Single-entity violation of Bell-CHSH inequality and no-signaling conditions. Journal of Mathematical Physics 62, 092103.
[Aerts & Sassoli de Bianchi(2023)]aertssassolidebianchi2023 Aerts, D. & Sassoli de Bianchi, M. (2023). The nature of time and motion in relativistic operational reality. In preparation. To be submitted to: Theoria. An International Journal for Theory, History and Foundations of Science.
[Aerts et al.(2019)]aertsetal2019 Aerts, D., Aerts Arguëlles, J., Beltran, L., Geriente, S., Sassoli de Bianchi, M., Sozzo, S & Veloz, T. (2019). Quantum entanglement in physical and cognitive systems: a conceptual analysis and a general representation. Eur. Phys. J. Plus 134: 493.
[Aerts et al.(2020)]aertsetal2020 Aerts, D., Sassoli de Bianchi, M., Sozzo, S. & Veloz, T. (2020). On the Conceptuality interpretation of Quantum and Relativity Theories. Foundations of Science 25, pp. 5–54.
[Aspect et al.(1982a)]aspect1982a Aspect, A., Grangier, P. & Roger, G. (1982a). Experimental realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A new violation of Bell's Inequalities. Physical Review Letters 49, 91–94.
[Aspect et al.(1982b)]aspect1982b Aspect, A., Dalibard, J., & Roger, G. (1982b). Experimental test of Bell's Inequalities using time-varying analyzers. Physical Review Letters 49, 1804–1807.
[Arvind et al.(1997)]Arvind1997 Arvind, K. S. Mallesh & N. Mukunda (1997). A generalized Pancharatnam geometric phase formula for three-level quantum systems. J. Phys. A 30, 2417.
[Aspect(1983)]aspect1983 Aspect, A. (1983). Trois tests expérimentaux des inégalités de Bell par mesure de corrélation de polarization de photons. Orsay: Thèse d'Etat.
[Bell(1964)]Bell1964 Bell, J. (1964). On the Einstein Podolsky Rosen paradox. Physics 1, 195–200.
[Bengtsson & Życzkowski(2006)]Bengtsson2006 Bengtsson, I. & Życzkowski, K. (2006). Geometry of quantum states: An introduction to quantum entanglement, Cambridge University Press, Cambridge.
[Bengtsson et al.(2013)]Bengtsson2013 Bengtsson, I., Weis, S. & Życzkowski, K. (2013). Geometry of the Set of Mixed Quantum States: An Apophatic Approach, pp. 175–197. In: Geometric Methods in Physics, XXX Workshop 2011, Trends in Mathematics, Springer.
[Bednorz(2017)]Bednorz2017 Bednorz A. (2017). Analysis of assumptions of recent tests of local realism. Phys. Rev. A 95, 042118.
[Bohm(1951)]bohm1951 Bohm, D. (1951). Quantum Theory. Prentice-Hall, Inc. Englewood Cliffs.
[Brans(1988)]Brans1988 Brans, C.H. (1988). Bell's theorem does not eliminate fully causal hidden variables. Int. J. Theor. Phys. 27, pp. 219–226.
[Byrd & Khaneja(2003)]Byrd2003 Byrd, M. S. & Khaneja, N. (2003). Characterization of the positivity of the density matrix in terms of the coherence vector representation. Phys. Rev. A 68, 062322.
[Christensen et al.(2013)]christensen2013 Christensen, B. G., McCusker, K.T., Altepeter, J., Calkins, B., Gerrits, T., Lita, A., Miller, A., Shalm, L. K., Zhang, Y., Nam, S. W., Brunner, N., Lim, C. C. W., Gisin, N., & Kwiat, P. G. (2013). Detection-loophole-free test of quantum nonlocality, and applications. Physical Review Letters 111, 1304–1306.
[Cirel'son(1980)]cirelson1980 Cirel'son, B. S. (1980). Quantum generalizations of Bell's inequality. Lett. Math. Phys. 4, 93–100.
[Clauser et al.(1969)]Clauser1969 Clauser, J. F., Horne, M. A., Shimony, A. & Holt, R.A. (1969). Proposed experiment to test local
hidden-variable theories. Physical Review Letters 23, 880–884.
[De Raedt et al.(2012)]DeRaedt2012 De Raedt, H., Michielsen, K. & Jin, F. (2012). Einstein-Podolsky-Rosen-Bohm laboratory experiments: Data analysis and simulation. AIP Conf. Proc. 1424, 55–66.
[De Raedt et al.(2013)]DeRaedt2013 De Raedt H., Jin, F. & Michielsen, K. (2013). Data analysis of Einstein-Podolsky-Rosen-Bohm laboratory experiments. Proc. of SPIE 8832, The Nature of Light: What are Photons? V, 88321N.
[DeWitt & Graham(1973)]DeWitt1973 DeWitt, B. & Graham, N. (eds.) (1973). The Many-Worlds Interpretation of Quantum Mechanics, Princeton University Press, Princeton.
[Everett(1957)]Everett1957 Everett, H. (1957). Relative State Formulation of Quantum Mechanics. Reviews of Modern Physics 29, pp. 454–462.
[Gerlich et al.(2011)]gerlichetal2011 Gerlich, S., Eibenberger, S., Tomandl, M., Nimmrichter, S., Hornberger, K., Fagan, P.J., Tüxen, J., Mayor, M. and Arndt, M. (2011). Quantum interference of large organic molecules. Nature Communications 2, 263.
[Giustina et al.(2013)]giustina2013 Giustina, M., Mech, A., Ramelow, S., Wittmann, B., Kofler, J., Beyer, J., Lita, A., Calkins, B., Gerrits, T., Woo Nam, S., Ursin, R., & Zeilinger, A. (2013). Bell violation using entangled photons without the fair-sampling assumption. Nature 497, 227–230.
[Hensen et al.(2016)]hensen-etal2016 Hensen, B., Bernien, H., Dréau, A. E., Reiserer, A., Kalb, N., Blok, M. S., Ruitenberg, J., Vermeulen, R. F. L., Schouten, R. N., Abellán, C., Amaya, W., Pruneri, V., Mitchell, M. W., Markham, M., Twitchen, D. J., Elkouss, D., Wehner, S., Taminiau T. H., & Hanson R. (2016). Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres. Nature, 526, 682–686.
[Hossenfelder(2020)]Sabine2020 Hossenfelder, S. & Palmer, T. (2020). Rethinking Superdeterminism, Frontiers in Physics 8, doi: 10.3389/fphy.2020.00139.
[Kimura(2003)]Kimura2003 Kimura, G. (2003). The Bloch Vector for N-Level Systems. Phys. Lett. A 314, 339.
[Kimura & Kossakowski(2005)]Kimura2005 Kimura, G. & Kossakowski, A. (2005). The Bloch-vector space for N-level systems – the spherical-coordinate point of view. Open Sys. Information Dyn. 12, 207.
[Kupczynski(2017)]Kupczynski2017 Kupczynski, M. (2017). Is Einsteinian no-signalling violated in Bell tests? Open Phys. 15, 739–753.
[Rauch et al.(1974)]rauchetal1974 Rauch, H., Treimer, W. and Bonse, U. (1974). Test of a single crystal neutron interferometer, Physics Letters A 47, pp. 369–371.
[Rauch et al.(1975)]rauchetal1975 Rauch, H., Zeilinger, A., Badurek, G., Wilfing, A., Bauspiess, W. & Bonse, U. (1975). Physics Letters 54A, 425.
[Sassoli(2013)]Sassoli2013 Sassoli de Bianchi, M. (2013). Using simple elastic bands to explain quantum mechanics: a conceptual review of two of Aerts' machine-models. Central European Journal of Physics 11, pp. 147–161.
[Sassoli de Bianchi(2017)]sassolidebianchi2017 Sassoli de Bianchi, M. (2017). Theoretical and conceptual analysis of the celebrated 4π-symmetry neutron interferometry experiments. Foundations of Science 22, pp. 627–753.
[Sassoli de Bianchi(2021)]sassolidebianchi2021 Sassoli de Bianchi, M. (2021). A non spatial reality. Foundations of Science 26, pp. 143–170.
[Schrödinger(1935)]Schrodinger1935 Schrödinger, E. (1935). Discussion of Probability Relations between Separated Systems. Mathematical Proceedings of the Cambridge Philosophical Society 31, 555–563; doi:10.1017/S0305004100013554.
[Scheidl et al.(2010)]scheidletal2010 Scheidl, T., Ursin, R., Kofler, J., Ranelow, S., Ma, X., Herbst, T., Ratschbacher, L., Fedrissi, A., Langford, N. K., Jennewein, T. & Zeilinger, A. (2010). Violation of local realism with freedom of choice. Proceedings of the National Academy of Science 107, pp. 19708–19713.
[Tittel et al.(1998)]tittel1998 Tittel, W., Brendel, J., Zbinden, H. & Gisin N. (1998). Violation of Bell's inequalities by photons more than 10 km apart. Physical Review Letters 81, 3563–3566.
[Weihs et al.(1998)]weihs1998 Weihs, G., Jennewein, T., Simon, C., Weinfurter, H. & Zeilinger, A. (1998). Violation of Bell's inequality under strict Einstein locality condition. Physical Review Letters 81, 5039–5043.
|
http://arxiv.org/abs/2306.08410v1
|
20230614100923
|
Durfee rectangle identities as character identities for infinite Fibonacci configurations
|
[
"Timur Kenzhaev"
] |
math.CO
|
[
"math.CO",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2306.17788v1
|
20230630164237
|
Superheavy quasi-stable strings and walls bounded by strings in the light of NANOGrav 15 year data
|
[
"George Lazarides",
"Rinku Maji",
"Qaisar Shafi"
] |
hep-ph
|
[
"hep-ph",
"astro-ph.CO"
] |
|
http://arxiv.org/abs/2306.09123v1
|
20230615133110
|
Kobayashi-Ochiai's finiteness theorem for orbifold pairs of general type
|
[
"Finn Bartsch",
"Ariyan Javanpeykar"
] |
math.AG
|
[
"math.AG",
"math.CV"
] |
Kobayashi–Ochiai's finiteness theorem for orbifold pairs of general type
Finn Bartsch and Ariyan Javanpeykar
=========================================================================
Kobayashi–Ochiai proved that the set of dominant maps from a fixed variety to a fixed variety of general type is finite. We prove the natural extension of their finiteness theorem to Campana's orbifold pairs.
§ INTRODUCTION
In <cit.> Kobayashi and Ochiai proved a higher-dimensional generalization of the finiteness theorem of De Franchis for compact Riemann surfaces. Namely, for X and Y smooth projective varieties over ℂ with X of general type, the set of dominant rational maps Y ⇢ X is finite.
In this paper we prove a generalization of the classical finiteness theorem of Kobayashi–Ochiai for dominant rational maps in the setting of Campana's orbifold maps (Theorem <ref>). The notion of orbifold pairs (also referred to as C-pairs <cit.>) was introduced in <cit.> and has already been shown to be fruitful in, for example, the resolution of Viehweg's hyperbolicity conjecture <cit.>. Let k be an algebraically closed field.
A variety (over k) is an integral finite type separated scheme over k.
A ℚ-orbifold (over k) (X, Δ) is a variety X together with a ℚ-Weil divisor Δ on X such that all coefficients of Δ are in [0,1]. If Δ = ∑_i ν_i Δ_i is the decomposition of Δ into prime divisors, we say that m(Δ_i) := (1-ν_i)^-1 is the multiplicity of Δ_i in Δ. If all multiplicities of a ℚ-orbifold are in ℤ∪{∞}, we say that (X,Δ) is a ℤ-orbifold or simply an orbifold.
A ℚ-orbifold (X, Δ) is normal if the underlying variety X is normal.
Moreover, a ℚ-orbifold (X, Δ) is smooth (over k) if the underlying variety X is smooth and the support of the orbifold divisor Δ is a divisor with strict normal crossings. This means that every component of Δ is smooth and that étale locally around any point of X, the divisor Δ is given by an equation of the form x_1⋯ x_n=0 for some n ≤ dim X.
Let (X, Δ_X) be a normal ℚ-orbifold and (Y, Δ_Y) be a ℚ-orbifold such that Y is locally factorial. In this case, we define a morphism of ℚ-orbifolds f (X, Δ_X) → (Y, Δ_Y) to be a morphism of varieties f X → Y satisfying f(X) ⊈Δ_Y such that, for every prime divisor E ⊆Δ_Y and every prime divisor D ⊆ f^*E, we have t m(D) ≥ m(E), where t ∈ℚ denotes the coefficient of D in f^*E; the local factoriality of Y ensures that E is a Cartier divisor, so that f^∗ E is well-defined. Note that, equivalently, we can require that t - 1 + ν_D ≥ t ν_E, where ν_D and ν_E are the coefficients of D in Δ_X and of E in Δ_Y, respectively.
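As a worked illustration of the multiplicity condition (our own example, stated only for orientation): let m ∈ℤ_≥ 2, equip the source with the trivial orbifold structure, and consider f: ℙ^1 → (ℙ^1, (1-1/m)·{0}) given by z ↦ z^n. The only prime divisor in the orbifold divisor of the target is E = {0}, with m(E) = m, and f^*E = n·{0}, so the unique prime divisor D ⊆ f^*E is D = {0}⊂ℙ^1, with coefficient t = n and multiplicity m(D) = 1. The condition t m(D) ≥ m(E) thus reads n ≥ m: the power map is an orbifold morphism onto (ℙ^1, (1-1/m)·{0}) precisely when n ≥ m (compare the example z ↦ z^2 onto (ℙ^1, (1/2)·∞) discussed further below).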
If X is a normal variety, we identify X with the orbifold (X, 0). If X and Y are varieties such that X is normal and Y is locally factorial, every morphism of varieties X → Y is an orbifold morphism (X,0) → (Y,0). A ℚ-orbifold (X, Δ) is proper (resp. projective) if the underlying variety X is proper (resp. projective) over k.
A smooth proper ℚ-orbifold (X,Δ) is of general type if K_X+Δ is a big ℚ-divisor, where K_X denotes the canonical divisor of X. If Δ=0, we recover the usual notion of a smooth proper variety of general type. If the multiplicities of Δ are all infinite, then (X,Δ) is of general type if and only if the smooth quasi-projective variety X∖Δ is of log-general type. Finally, if X is a smooth proper variety of nonnegative Kodaira dimension, D is a strict normal crossings divisor, and m ≥ 2, then the orbifold (X, (1-1/m)D) is of general type if and only if X ∖ D is of log-general type.
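To illustrate this definition in the one-dimensional case, here is a standard example (not taken from this paper, and included only as a sanity check): for a smooth proper curve X of genus g and Δ = ∑_i (1-1/m_i) p_i supported on distinct closed points p_i, bigness of the ℚ-divisor K_X+Δ amounts to positivity of its degree, so that (X,Δ) is of general type if and only if deg(K_X+Δ) = 2g-2+∑_i (1-1/m_i) > 0. For instance, (ℙ^1, (1/2)(p_1+⋯+p_5)) is of general type, since -2+5·(1/2) = 1/2 > 0, whereas no orbifold structure on ℙ^1 supported on at most three points of multiplicity 2 is of general type.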
If (Y,Δ) is a smooth proper orbifold pair of general type and X is a normal variety, then the set of separably dominant orbifold morphisms X→ (Y,Δ) is finite.
We prove a more general result in which we consider rational maps X ⇢ (Y,Δ); see Theorem <ref> for a precise statement.
Theorem <ref> (or, actually, its more precise version Theorem <ref>) generalizes Kobayashi–Ochiai's finiteness theorem for dominant rational maps to a projective variety of general type in characteristic zero (take Δ to be trivial and k of characteristic zero) <cit.>. It also implies Tsushima's extension of Kobayashi–Ochiai's theorem to varieties of log-general type (take the multiplicities of Δ to be infinity and k of characteristic zero) <cit.>. Moreover, we also obtain the finiteness theorem of Martin-Deschamps and Menegaux for separably dominant morphisms to a proper variety of general type (take Δ to be trivial and k of arbitrary characteristic) <cit.>, as well as Iwanari–Moriwaki's extension of Tsushima's result to characteristic p <cit.>. Finally, we obtain a new proof of Campana's extension of De Franchis's theorem for one-dimensional smooth proper orbifold pairs of general type; see <cit.>.
The theorem of Kobayashi–Ochiai can be made effective in the sense that one can give effective upper bounds for the number of dominant maps from a fixed variety to a fixed variety of general type; see <cit.>. It seems reasonable to expect that one can obtain similar effective statements in the orbifold setting.
One part of the Green–Griffiths–Lang conjecture predicts that every complex projective hyperbolic variety is of general type. In particular, the theorem of Kobayashi–Ochiai suggests that a similar finiteness statement for dominant maps should hold for projective hyperbolic varieties. Such a finiteness result for projective hyperbolic varieties was in fact already conjectured by Lang in the early seventies (see <cit.>) and proven by Noguchi <cit.> (see also <cit.>) in the early nineties.
We stress that, conjecturally, a complex projective variety is of general type if and only if it is “pseudo-hyperbolic”, i.e., there is a proper closed subset Δ⊊ X such that every entire curve ℂ→ X^an lands in Δ. The analogous finiteness statement for dominant maps to a pseudo-hyperbolic projective variety is currently not known. In particular, its extension to Campana's orbifold pairs is not known either.
As a straightforward application of Theorem <ref>, we prove the finiteness of the set of surjective endomorphisms of an orbifold pair of general type; see Corollary <ref> for a precise statement.
We were first led to investigate the orbifold extension of the theorem of Kobayashi–Ochiai in our joint work with Rousseau on rational points over number fields. We refer the reader to <cit.> for arithmetic applications of our orbifold extension of Kobayashi–Ochiai's finiteness theorem. Our proof of Theorem <ref> follows the general strategy of Kobayashi–Ochiai (and Tsushima). However, these proofs crucially rely on properties of differential forms on (log-)general type varieties (see, for example, <cit.> for a key step in Tsushima's proof relying on properties of tensor powers of the sheaf of differential forms). The main difficulty in proving Theorem <ref> is that differentials for an orbifold pair (X,Δ) are not well-behaved. One may even say that there is no meaningful way to define a sheaf Ω^1_(X,Δ) of orbifold differentials on X. On the other hand, in his seminal work on orbifold pairs <cit.>, Campana suggests instead to use locally free sheaves which mimic sheaves of symmetric differentials. These sheaves are abusively denoted by S^n Ω^p_(X,Δ) despite the lack of existence of Ω^p_(X,Δ); see Section <ref> for a precise definition. The aforementioned key step in Tsushima's proof is then replaced by an argument involving symmetric differentials on (X,Δ); see the proof of Proposition <ref> for details.
§.§ Conventions
We work over an algebraically closed field k.
A variety is an integral separated scheme of finite type over k.
If X and Y are varieties, we write X × Y for X ×_ k Y.
A point of a variety is a schematic point and need not be closed.
If ℒ is a line bundle, D is a ℚ-divisor, and n is a natural number such that nD is a ℤ-divisor, we abuse notation and write (ℒ(D))^⊗ n instead of ℒ^⊗ n(nD).
The second-named author thanks Erwan Rousseau for many helpful discussions on Campana's theory of orbifolds.
§ ORBIFOLD NEAR-MAPS
In this paper, we work with the more general notion of an orbifold near-map (Definition <ref>).
An open subscheme U ⊆ X of a variety X is big if its complement is of codimension at least two.
A rational map X ⇢ Y of varieties is a near-map if there is a big open U ⊆ X such that the restriction U → Y is a morphism.
Note that a rational map X ⇢ Y is a near-map if and only if it is defined at all codimension one points of X. For example, for every normal variety X and any proper variety Y, every rational map X ⇢ Y is a near-map.
If X and Y are varieties with X locally factorial, and f: X ⇢ Y is a near-map, we can pull back a line bundle ℒ on Y to a line bundle ℒ̃ on X. Indeed, while the pullback bundle f^*ℒ is a priori only defined on a big open of X, as X is locally factorial, it extends uniquely to a line bundle on all of X by <cit.>. Since locally factorial schemes are normal, global sections of ℒ also pull back to global sections of ℒ̃.
Let (X, Δ_X) be a normal orbifold and (Y, Δ_Y) be an orbifold such that Y is locally factorial. Then an orbifold near-map
f: (X, Δ_X) ⇢ (Y, Δ_Y)
is a near-map f: X ⇢ Y satisfying f(X) ⊈Δ_Y such that, for every prime divisor E ⊆Δ_Y and every prime divisor D ⊆ f^*E, we have t m(D) ≥ m(E), where t ∈ℚ denotes the coefficient of D in f^*E; this pullback is well-defined as E is Cartier. As before, this is equivalent to requiring t - 1 + ν_D ≥ t ν_E, where ν_D and ν_E are the coefficients of D in Δ_X and of E in Δ_Y, respectively.
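For completeness, here is the short computation behind the stated equivalence (our own expansion of the remark, written for finite multiplicities, i.e., ν_D, ν_E < 1): since m(D) = (1-ν_D)^-1 and m(E) = (1-ν_E)^-1 with both denominators positive,
t m(D) ≥ m(E) ⟺ t/(1-ν_D) ≥ 1/(1-ν_E) ⟺ t(1-ν_E) ≥ 1-ν_D ⟺ t - 1 + ν_D ≥ t ν_E.
In the boundary case ν_E = 1 (that is, m(E) = ∞), both conditions reduce, for t > 0, to ν_D = 1, that is, m(D) = ∞.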
Caution is advised: the composition of orbifold morphisms need not be an orbifold map. Indeed, although the condition on the multiplicities of divisor pullbacks is stable under composition, the image of the composition might be completely contained in the orbifold divisor of the target. For example, consider the morphism ℙ^1 → (ℙ^1, (1/2)·∞) given by z ↦ z^2 and the inclusion of the point ∞ into ℙ^1. While both morphisms are orbifold, their composition is the inclusion of the point ∞ into (ℙ^1, (1/2)·∞) which is not an orbifold morphism. There is however one situation in which we can compose orbifold morphisms. Namely, if the composition of two orbifold morphisms is dominant, then the composition is again an orbifold morphism. We will need a similar statement for orbifold near-maps (see Corollary <ref>).
Now, the following lemma gives a criterion for when the composition of orbifold near-maps is still orbifold. Note that this is not immediate from the definitions, as the composition of two near-maps, even when it exists as a rational map, need not be defined in codimension 1, so that it is not immediately clear why it satisfies the orbifold condition.
Let (Y, Δ_Y) be an orbifold such that Y is locally factorial. Let X and Z be locally factorial varieties. Let f: X ⇢ (Y, Δ_Y) be an orbifold near-map and let g: Z ⇢ X be a near-map. If the composition f ∘ g exists and extends to a near-map h: Z ⇢ Y of varieties whose image is not contained in Δ_Y, then h is an orbifold near-map h: Z ⇢ (Y, Δ_Y).
Let D ⊆Δ_Y be an irreducible component and let m ∈ℤ_≥ 2∪{∞} be its multiplicity.
Let ℒ be the line bundle on Y with a global section s ∈ℒ(Y) cutting out D. We pull back ℒ along f and h, and by Remark <ref>, we obtain line bundles f^*ℒ and h^*ℒ on X and Z, respectively. Pulling back the section s as well, we obtain global sections f^*s and h^*s of f^*ℒ and h^*ℒ, respectively.
Since h is generically the composition of f and g, we know that h^*s and g^*f^*s determine the same element of the function field of Z.
As the localization maps 𝒪_Z,z→ k(Z) are injective, it follows that (h^*s)_z = g^♯_z((f^*s)_g(z)) for every z ∈ Z.
If m=∞, to verify the orbifold condition for h over D, we have to show that h^∗ s is nowhere vanishing on Z. Suppose h^∗ s vanishes at a codimension 1 point η∈ Z. Then f^∗ s vanishes at g(η) in X. This contradicts the orbifold condition for f: X ⇢ (Y,Δ_Y) over D (as m=∞).
If m ∈ℤ_≥ 2, let η∈ Z be a point of codimension 1 at which h^*s vanishes, i.e., η is the generic point of an irreducible component of h^∗ D.
To verify the orbifold condition for h, we have to show that h^*s vanishes to order at least m there. In other words, we have to show that (h^*s)_η∈𝔪^m_Z, η(h^*ℒ)_η.
We know that (h^*s)_η∈𝔪_Z, η(h^*ℒ)_η, so it follows that (f^*s)_g(η)∈𝔪_X, g(η)(f^*ℒ)_g(η). Since the vanishing locus of a nonzero section of a line bundle is pure of codimension 1, there exists a point ξ∈ X of codimension 1 specializing to g(η) such that f^*s vanishes at ξ. The assumption that f: X ⇢ (Y, Δ_Y) is orbifold now tells us that f^*s vanishes to order at least m at ξ, so that (f^*s)_ξ∈𝔪_X,ξ^m(f^*ℒ)_ξ. As X is locally factorial, the ring 𝒪_X, g(η) is a UFD, so that we have 𝒪_X,g(η)∩𝔪^m_X,ξ⊆𝔪^m_X, g(η). Indeed, for R a local UFD with maximal ideal 𝔪, n ≥ 1 an integer and 𝔭⊂ R a prime ideal of height one, the ideal 𝔭^n R_𝔭∩ R equals 𝔭^n, and is thus contained in 𝔪^n.
The above implies that (f^*s)_g(η)∈𝔪^m_X, g(η)(f^*ℒ)_ g(η). As g^♯_η is a local homomorphism, it follows that (h^*s)_η = g^♯_η((f^*s)_g(η)) ∈𝔪^m_Z,η(h^*ℒ)_η, as required.
We will use Lemma <ref> in the following form.
Let (Y, Δ_Y) be a proper orbifold such that Y is locally factorial. Let X be a locally factorial variety. Let f: X ⇢ (Y, Δ_Y) be a dominant orbifold near-map and let Z ⊂ X be a locally factorial closed subvariety. If the restriction f|_Z exists as a rational map and is still dominant, then it defines an orbifold near-map f|_Z: Z ⇢ (Y, Δ_Y).
In Section <ref> and in the proof of Theorem <ref> it will be convenient to use the following notion of products for orbifold pairs.
If (X, Δ_X) and (Y, Δ_Y) are two orbifolds, then we define the product orbifold by
(X, Δ_X) × (Y, Δ_Y) := (X × Y, Δ_X × Y + X ×Δ_Y).
If X and Y are locally factorial, the product of orbifolds defined above satisfies the universal property of a product. More specifically, for any orbifold (T, Δ_T) and for any two orbifold morphisms
ϕ_X (T, Δ_T) → (X, Δ_X), ϕ_Y (T, Δ_T) → (Y, Δ_Y),
there is a unique orbifold morphism ϕ (T, Δ_T) → (X, Δ_X) × (Y, Δ_Y) such that ϕ_X = π_X ∘ ϕ and ϕ_Y = π_Y ∘ ϕ. Indeed, it is clear that there is a morphism ϕ (T, Δ_T) → X × Y, and we just have to check that this is indeed an orbifold morphism after equipping X × Y with its orbifold structure. First note that the set of closed points t ∈ T satisfying ϕ_X(t) ∉Δ_X is a non-empty, hence dense, open subset of T. Of course, the same holds for the condition ϕ_Y(t) ∉suppΔ_Y, so there is a closed point t ∈ T satisfying both conditions. Thus, the image of ϕ is not contained in (Δ_X × Y + X ×Δ_Y). Now let E ⊆ (Δ_X × Y + X ×Δ_Y) be a prime divisor. Without loss of generality, we may assume that E = E_X × Y for some prime divisor E_X ⊆ X. Now let D ⊆ϕ_X^* E_X be a prime divisor, and let r be its coefficient in ϕ_X^* E_X. Since we know that ϕ_X is an orbifold morphism, we have r m(D) ≥ m(E_X). Now note that ϕ^* E = ϕ_X^* E_X, so that r is also the coefficient of D in ϕ^* E. Furthermore, we have m(E_X) = m(E). Thus, r m(D) ≥ m(E), and ϕ is an orbifold morphism, as desired.
§ FAMILIES OF MAPS
In this section we consider families of maps, and prove that certain conditions on the maps are either open or closed. More precisely, we show in Lemma <ref> that a morphism landing in a closed subscheme is a closed condition, in Lemma <ref> that a rational map being dominant is an open condition, and in Lemma <ref> that the pullback of a fixed differential form having no poles outside some fixed divisor is a closed condition.
Let S be a scheme. Let X→ S be a flat morphism whose geometric fibres are reduced. Let Y and T be S-schemes, and let F:X×_S T→ Y be an S-morphism. Then, for every closed subscheme Z⊂ Y, the set of t in T such that F_t:X×_S {t}→ Y factors over Z is closed in T.
Let T_1⊂ T be the set of t in T such that F_t factors over Z. We consider T_1 = ⊔_t∈ T_1κ(t) as an S-scheme. To show that T_1 is closed in T, it suffices to show that T̅_1 = T_1. We endow T̅_1⊂ T with the reduced closed subscheme structure. The natural morphism T_1→T̅_1 is dominant. Since X→ S is flat, the base change X×_S T_1→ X×_S T̅_1 is (also) dominant. Since X×_S T_1→ Y factors set-theoretically through Z and X×_S T_1→ X×_S T̅_1 is dominant, we see that the restriction of F to X×_S T̅_1 (also) factors set-theoretically through Z. However, for any t∈T̅_1, the scheme X×_S κ(t) is geometrically reduced over κ(t). In particular, F_t factors (scheme-theoretically) through Z, as required.
A rational map X Y of varieties over k is separably dominant if it is dominant and k(Y) ⊂ k(X) is separable.
Let f X → Y be a morphism of smooth varieties. Then the following are equivalent:
* The morphism f is separably dominant.
* There is a closed point x ∈ X such that df_x T_x X →T_f(x) Y is surjective.
* There is a closed point x ∈ X such that f is smooth at x.
Assume <ref> holds. Let Z ⊆ X be the locus of points where the rank of df_x is less than dim Y. Since K(Y)⊂ K(X) is a separable field extension, the arguments used to prove <cit.> and <cit.> show that the dimension of f(Z) is less than dim Y. Thus, since f is dominant, this implies that Z ≠ X. In particular, there is a closed point x ∈ X such that the rank of df_x is equal to dim Y. Since Y is smooth, this shows that <ref> holds.
Assume <ref> holds. Then <cit.> shows that <ref> holds.
Now assume that <ref> holds. There is an open neighborhood U ⊆ X of x such that f|_U is smooth. Smooth maps are flat, and flat maps of varieties are open. Hence f(U) is a nonempty open of Y. Thus, f is dominant. In particular, f maps the generic point of X to the generic point of Y. The smoothness of f|_U also implies that f is smooth at the generic point of X. Since generically smooth morphisms are separable, we see that <ref> holds. This concludes the proof.
Let X and Y be varieties of the same dimension. Assume that X is smooth. Let f X → Y be a morphism. Then the following are equivalent:
* The morphism f is separably dominant.
* There is a closed point x ∈ X such that df_x T_x X →T_f(x) Y is an isomorphism.
Assume <ref> holds. Let U ⊆ Y be the locus of smooth points of Y. Then U is a dense open of Y, and the restriction of f f^-1(U) → U is still separably dominant. Thus, we may assume that Y is smooth. In particular, by Lemma <ref>, there is a closed point x in X such that df_x _x X →_f(x) Y is surjective. Since X and Y are smooth of the same dimension, it follows that df_x is an isomorphism.
Assume <ref> holds. Then, the point f(x) is a regular point of Y. We can thus replace Y by an open neighborhood of f(x) and replace X by the preimage of that. The assumption on the dimensions continues to hold, so we may assume that Y is smooth. Consequently, by Lemma <ref>, the morphism f is separably dominant.
If X, Y and T are varieties, we say that a rational map f X × T Y is a relative rational map (over T) if the maximal open subset U ⊆ X × T on which f is defined has nonempty intersection with every closed fiber X_t := X ×{t}. In other words, it is a family of rational maps f_t X Y parametrized by the variety T.
Let X and Y be varieties of the same dimension and let T be any variety. Let F X × T Y be a relative rational map. Then the locus of t ∈ T such that F_t X Y is separably dominant is open in T.
Replacing X by an alteration if necessary, we may assume that X is smooth <cit.>. Let U ⊆ X × T be the maximal open subset on which F is defined. The map F then induces a morphism of T-schemes G U → Y × T.
We claim that if (x,t) ∈ U is any closed point, the differential dG_(x,t) is an isomorphism if and only if the differential of the rational map F_t X Y is an isomorphism at x ∈ X. To see this, note that the tangent spaces of X × T and Y × T are the products of the tangent spaces of the factors. Furthermore, the component of dG_(x,t) which maps T_x X →T_t T is the zero map, and the component which maps T_t T →T_t T is just the identity. Lastly, the component of dG_(x,t) mapping T_x X →T_F_t(x) Y is just dF_t. As dG_(x,t) is therefore block triangular, this implies the claim.
Let t ∈ T be a closed point. By Lemma <ref>, F_t is separably dominant if and only if there is a closed point x ∈ X such that the differential dF_t,x is an isomorphism. By the previous claim, this happens if and only if there is a closed point x ∈ X such that dG_(x,t) is an isomorphism.
Let V ⊆ U be the set of all points at which the differential of G is an isomorphism. The set V is open in U, hence open in X × T. The map X × T → T is flat, hence open. Thus, the projection of V to T is an open subset of T. By the previous paragraph, this is exactly the set of t ∈ T for which F_t is separably dominant, so we are done.
The locus of (separably) dominant maps is not necessarily closed in T (this seems to have been overlooked in <cit.>).
Consider, for example, the map ℙ^1 ×ℙ^1 ℙ^1 given by (x,t)↦ xt. Its indeterminacy locus consists of the two points (0,∞) and (∞,0), so that this is indeed a relative rational map. If we fix any value t ∈ℙ^1 ∖{0,∞}, the resulting rational map ℙ^1 ℙ^1 is an isomorphism, and thus separably dominant. For t ∈{0, ∞}, the resulting map is constant. Thus, the locus where the map is separably dominant is ℙ^1 ∖{0,∞}⊆ℙ^1; this is open but not closed.
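To spell out the chart computation behind this example: for fixed t the map reads f_t(x) = tx on the standard affine chart, so d(f_t)_x is multiplication by t. For t ∉{0,∞} this is an isomorphism at every point of the chart, so f_t is separably dominant by Lemma <ref>, while for t ∈{0,∞} the differential vanishes identically and f_t is constant.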
The following statement is a purely algebraic result which we will use in the proof of Lemma <ref> below. Recall that if M is an R-module, and f ∈ R is an element, the element f is called M-regular if the morphism M → M, m ↦ fm is injective. In the special case that M = S is an R-algebra, this is equivalent to f being a nonzerodivisor in S. If S is additionally assumed to be reduced and noetherian, this in turn is equivalent to asking that (f) ⊆ S has codimension ≥ 1 (where the empty set is considered to have codimension ∞).
Let R be a noetherian ring, let S be a noetherian R-algebra, let f ∈ S. Let M be a finitely generated S-module. Assume that M is flat over R and assume that for every maximal ideal 𝔪⊆ S, the element f is M/(𝔪∩ R)M-regular. Then f is M-regular and M/fM is flat over R.
See <cit.>.
The following lemma and proof are essentially due to Tsushima <cit.>.
Before starting the proof, we briefly discuss extending relative rational maps. Let G X × T Y be a relative rational map with X, T normal varieties, and Y a proper variety. Then, by properness of Y and normality of X × T, the rational map G X × T Y extends to a morphism U → Y on a maximal open set U ⊆ X × T with complement of codimension at least 2. However, for any given closed point t ∈ T, it might still happen that U_t := U ∩ X_t ⊆ X has a complement of codimension 1. While the restriction of G to U_t will extend to a rational map X Y defined in codimension 1, this extension will in general not be compatible with G X × T Y.
Let X and Y be smooth projective varieties of the same dimension n, and let T be a variety. Let G X × T Y be a relative rational map over T. Let D_X be an effective divisor on X and let D_Y be a divisor on Y. Let m ≥ 0. Let ω∈Γ(Y, ω_Y^⊗ m(D_Y)). Assume that, for every closed point t ∈ T, the rational map G_t X Y is separably dominant. Then, the set T_ω of t ∈ T such that G_t^*ω lies in Γ(X, ω_X^⊗ m(D_X)) is closed in T.
The case ω = 0 is clear, so suppose ω≠ 0. By replacing T with an alteration if necessary, we may assume that T is smooth.
Consider the pullback form G^*ω, and note that it defines a rational section of the vector bundle (Ω^n_X × T)^⊗ m. For any closed point t ∈ T, we pullback G^*ω along the inclusion ι_t X := X×{t}⊆ X × T to get the form ι_t^*G^*ω = G_t^*ω.
Let E and F be the divisors of zeroes and poles of G^*ω, respectively. For t in T, we define E_t := E ∩ X_t and F_t := F ∩ X_t. Since G_t is separably dominant and ω≠ 0, we have that, for every t in T, E_t and F_t are (possibly trivial) effective divisors in X_t.
Note that whenever G_t is defined in codimension one, we have that E_t (resp. F_t) is the divisor of zeroes (resp. poles) of G_t^*ω. On the other hand, if G_t is not defined at all points of codimension 1, it may happen that E_t and F_t are strictly bigger than the divisor of zeroes and poles, respectively.
We now prove the result by induction on dim(T). The case dim(T)=0 is clear. Consider the set S of all t in T such that dim(E_t ∩ F_t) = n-1. By semicontinuity of fiber dimension, S is closed in T (since dim(E_t ∩ F_t) > n-1 cannot occur). The condition t ∈ S implies that G_t cannot be defined in codimension 1. Otherwise the form G_t^*ω would have to have both a pole and a zero along the codimension 1 subset E_t ∩ F_t, which is absurd. Since G is defined at all points of codimension 1, we see dim(S)<dim(T). By the inductive hypothesis, it follows that S_ω=S ∩ T_ω is closed.
We now show that F→ T is flat using Lemma <ref>. First, note that it is locally cut out by the vanishing of a single equation given by the denominator of G^*ω. Furthermore, the morphism X × T → T is flat and, for every closed point t in T, the scheme-theoretic fiber F_t of F → T is a divisor of X_t.
Thus, we conclude that F → T is flat by Lemma <ref>.
In particular, there is a morphism T →Hilb(X) representing the family (F_t)_t ∈ T, where Hilb(X) is the Hilbert scheme of X over k. Now, as there are only finitely many effective divisors with the property of being ≤ D_X, the set of such divisors forms a (finite) closed subscheme of Hilb(X). It follows that F_t ≤ D_X is a closed condition on t.
For t in T, the condition F_t≤ D_X implies that G_t^* ω∈Γ(X,ω_X^⊗ m(D_X)). Moreover, outside the set S (defined above), the condition G_t^*ω∈Γ(X,ω_X^⊗ m(D_X)) is equivalent to F_t ≤ D_X.
Thus, a point t ∈ T lies in T_ω if and only if we have t ∈ S ∩ T_ω or F_t ≤ D_X. As S∩ T_ω is closed in T and the set of t in T with F_t≤ D_X is closed in T, this concludes the proof.
§ SYMMETRIC DIFFERENTIALS ON ORBIFOLDS
In this section we collect some statements regarding the sheaf of symmetric differentials on an orbifold. We start by recalling their definition, first given by Campana in <cit.>.
Let (X, Δ) be a smooth orbifold. Let n, p ≥ 0 be natural numbers. The sheaf of symmetric differentials, written S^n Ω^p_(X,Δ), is the locally free subsheaf of ^n Ω^p_X(logΔ) which is étale-locally generated by the following elements:
x^k/m⊗_i=1^n dx_J_i/x_J_i
Here, the following notation was used:
* x_1,...,x_dim(X) are a set of local coordinates which exhibit Δ in normal crossing form.
* The J_i are subsets of {1,...,dim(X)} of size p.
* dx_J_i := ⋀_j ∈ J_i dx_j and x_J_i := ∏_j ∈ J_i x_j
* k is a tuple of dim(X) integers, where the j-th entry counts the number of occurrences of j in the J_i.
* m is a tuple of dim(X) integers, where the j-th entry is the multiplicity of the coordinate x_j in Δ.
* x^k/m := ∏_j=1^dim(X) x_j^k_j/m_j
For smooth proper varieties X without any orbifold structure, the sheaves S^n Ω^p_(X,0) defined this way coincide with the usual symmetric powers of the module of differentials ^n Ω^p_X. More generally, if (X, Δ) is an orbifold where all multiplicities in Δ are equal to 1 or ∞, the sheaves S^n Ω^p_(X,Δ) defined above coincide with the symmetric powers of the module of log differentials ^n Ω^p_X(logΔ). However, in general, the sheaves S^n Ω^p_(X,Δ) are not the symmetric powers of any coherent sheaf (so that calling them symmetric differentials is a significant abuse of language). The main use of S^n Ω^p for us comes from the fact that these sheaves behave nicely when they are pulled back by orbifold morphisms.
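As a concrete illustration of Definition <ref> in dimension one (with the multiplicity chosen so that no fractional exponents occur): take X = 𝔸^1 with coordinate x, Δ = (1-1/2)·{0}, n=2 and p=1. The only choice of index sets is J_1 = J_2 = {1}, so k = (2), and the corresponding local generator is
x^2/2 (dx/x) ⊗ (dx/x) = (dx ⊗ dx)/x.
Thus S^2 Ω^1_(𝔸^1, Δ) consists of the symmetric differentials with at most a first-order pole at the origin, which coincides with ω_𝔸^1(Δ)^⊗ 2; compare Lemma <ref> below.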
If f (X, Δ_X) → (Y, Δ_Y) is a morphism of smooth orbifolds and n, p ≥ 0, then pullback of differential forms induces a morphism f^*S^n Ω^p_(Y, Δ_Y)→ S^nΩ^p_(X,Δ_X).
Campana shows this when k=ℂ using computations in the analytic topology, see <cit.>. His arguments adapt to positive characteristic, as we show now.
We have a morphism of sheaves
f^* ^n Ω^p_Y(logΔ_Y) →^n Ω^p_X( (Δ_X + f^* Δ_Y)).
As f^*S^n Ω^p_(Y, Δ_Y) (resp. S^nΩ^p_(X,Δ_X)) is a subsheaf of the source (resp. the target) of this morphism, we may argue étale-locally around a fixed point η. As the sheaves involved are locally free, we may and do assume that η∈ X is of codimension 1 (except for in the trivial situation in which X is zero-dimensional).
Locally around η, the sheaf f^* S^n Ω^p_(Y,Δ_Y) is generated by the pullbacks of the local generators of S^n Ω^p_(Y,Δ_Y) around f(η) (see Definition <ref>). Let ω∈ S^n Ω^p_(Y, Δ_Y) be such a generator. Let Y→ Y be a connected étale neighborhood of f(η) such that
* Y is an étale open of 𝔸^d with d = Y,
* Δ_Y is in normal crossing form (i.e., Δ_Y is given by the pullback of x_1·…· x_ℓ =0 in 𝔸^d for some ℓ≥ 0),
* f(η) specializes to the origin of 𝔸^d, and
* ω has the form described in Definition <ref>.
There is a connected étale neighborhood X→ X of η such that f|_X factors over Y and such that Δ_X is in normal crossings form. Since the induced morphism f (X, X×_X Δ_X) → (Y, Y×_Y Δ_Y) is (still) orbifold, we may replace (X, Δ_X) by (X, X×_X Δ_X) and (Y, Δ_Y) by (Y, Y×_Y Δ_Y).
As Y is an étale open of 𝔸^d, we obtain a map X →𝔸^d given by d maps f_1,...,f_d X →𝔸^1. For i=1,…,d, we let m_i denote the multiplicity of the (pullback of the) prime divisor {y_i=0} in Δ_Y. Since f(X) is not contained in Δ_Y, we know that whenever m_i > 1, the function f_i is not identically zero. Viewing f_i as an element of the DVR 𝒪_X, η, we decompose it as f_i = t^ν_i g_i with t a uniformizer and g_i(η) ≠ 0. Since f is an orbifold morphism, for any i with ν_i ≠ 0, we have ν_i ≥m_i/m_η, where m_η is the multiplicity of the divisor η in Δ_X.
Let J ⊆{1,...,d} be a p-element subset and consider the rational p-form dy_J/y_J on Y. If none of the functions f_i with i ∈ J vanish along η, the pullback f^*(dy_J/y_J) has no pole at η. If such an i ∈ J exists, then f^*(dy_J/y_J) has a pole of order at most 1. Since we can always write f^*(dy_J/y_J) = (dt/t) ∧ u + v, where u is a (p-1)-form with no pole at η and v a p-form with no pole at η, the pullback of ω is given by
f^*ω = ∏_i=1^d (f_i)^k_i/m_i⊗_α=1^n ( (dt/t∧ u_α) + v_α).
Here, as before, u_α and v_α are forms with no pole at η. We can write the tensor product of sums as a sum of tensor products. When doing this, the order at η of each summand occurring in such a rewriting is at least
(∑_i=1^d k_i/m_iν_i) - k_t,
where k_t counts the number of ((dt/t) ∧ u_α)-factors occurring in that summand (as opposed to v_α-factors). Note that k_t ≤∑ k_i, where the sum runs over those i for which ν_i ≠ 0. Using our estimate ν_i ≥m_i/m_η from before, we obtain that the order at η of each summand is at least
(∑_i=1, ν_i ≠ 0^d k_i/m_η) - k_t ≥ -k_t + k_t/m_η,
so the pole at η is at most of the order we allow for elements of S^n Ω^p_(X, Δ_X). Hence f^*ω∈ S^n Ω^p_(X, Δ_X), which concludes the proof.
If X is a smooth proper variety of dimension n, the sheaf S^1 Ω^n_(X,0) is just the dualizing sheaf ω_X. Thus, one might guess that for a smooth orbifold (X, Δ), the sheaf S^1 Ω^n_(X,Δ) should correspond to a line bundle related to the ℚ-divisor K_X + Δ. Of course, naively formulated like this, this guess does not really make sense, since K_X+Δ is not a ℤ-divisor and hence does not correspond to any line bundle. However, as we show now, the intuition can be saved. (Recall our convention that, for ℒ a line bundle, D a ℚ-divisor, and n a natural number such that nD is a ℤ-divisor, we write (ℒ(D))^⊗ n instead of ℒ^⊗ n(nD).)
Let (X, Δ) be a smooth proper orbifold of dimension n and let N be a natural number such that NΔ is a ℤ-divisor. Then S^N Ω^n_(X, Δ)≅ω_X(Δ)^⊗ N.
For the sheaf of log-differentials, we have Ω^n_X(logΔ) = ω_X(Δ) (see <cit.>). It follows that ^N Ω^n_X(logΔ) = ^N ω_X(Δ). Since symmetric powers of line bundles agree with tensor powers, it follows that ^N ω_X(Δ) = ω_X(Δ)^⊗ N. Thus, S^N Ω^n_(X, Δ) is by construction a locally free subsheaf of ω_X(Δ)^⊗ N. More precisely, we see that locally around a point p ∈ X, it is the subsheaf generated by the single element
x_1^N/m_1...x_n^N/m_n⊗_l=1^N dx_1 ∧ dx_2 ... ∧ dx_n/x_1x_2...x_n
where x_1,...,x_n are a set of normal crossing coordinates for Δ, and m_i denotes the multiplicity of the coordinate x_i in the orbifold divisor. The subsheaf generated by this element is equal to ω_X(Δ)^⊗ N in some neighborhood of p. The claim follows since p was arbitrary.
Let f (X, Δ_X) (Y, Δ_Y) be a near-map of smooth proper orbifolds with n:= Y and let N be a natural number such that N Δ_Y is a ℤ-divisor. Then there is an induced pullback morphism f^* ω_Y(Δ_Y)^⊗ N→ S^N Ω_(X,Δ_X)^n of locally free sheaves on X.
By Lemma <ref>, we have ω_Y(Δ_Y)^⊗ N= S^N Ω^n_(Y, Δ_Y).
By Lemma <ref>, we get a morphism f^* S^N Ω^n_(Y, Δ_Y)→ S^N Ω^n_(U, Δ_X ∩ U) of sheaves on U, where U ⊆ X denotes the domain of definition of f. By Remark <ref>, the line bundle f^* S^N Ω^n_(Y, Δ_Y) on U extends to a line bundle on X. Furthermore, as the morphism of locally free sheaves extends as well by Hartogs, this concludes the proof.
If X and T are smooth varieties, and π_X X × T → X and π_T X × T → T denote the canonical projections, we get a direct sum composition for the Kähler differentials:
Ω^1_X × T≅π_X^* Ω^1_X ⊕π_T^* Ω^1_T
Passing to exterior powers, and noting that taking exterior powers commutes with taking pullbacks, we retain such a direct sum decomposition, although it gets slightly more involved:
Ω^m_X × T≅⊕_i=0^m π_X^* Ω^i_X ⊗π_T^* Ω^m-i_T
Lastly, if A and B are modules over any commutative ring, we have the following direct sum decomposition for the symmetric powers:
^n(A ⊕ B) ≅⊕_i=0^n (^i A ⊗^n-i B)
By combining the two previous lines, we obtain that ^n π_X^* Ω^m_X is a direct summand of ^n Ω^m_X × T. Hence we get an idempotent endomorphism of ^n Ω^m_X × T which projects an element into that summand. Furthermore, if t ∈ T is any closed point and ι_t X = X ×{t}⊆ X × T is the inclusion, then the pullback map ι_t^* ^n Ω^m_X × T→^n Ω^m_X factors over that projection. We now prove the analogous result for orbifolds.
Let (X, Δ_X) and (T,Δ_T) be smooth orbifolds. Let π_X and π_T denote the canonical projection of X × T onto X and T, respectively. Then for all natural numbers N and m, the sheaf π_X^* S^N Ω^m_(X,Δ_X) is a direct summand of S^N Ω^m_(X, Δ_X) × (T,Δ_T). Furthermore, if t ∈ T is a closed point and ι_t X = X ×{t}⊆ X × T is the inclusion, then the pullback map ι_t^* S^N Ω^m_(X,Δ_X) × (T,Δ_T)→ S^N Ω^m_(X,Δ_X) factors over the projection to ι_t^* π_X^* S^N Ω^m_(X,Δ_X).
We first deal with the case that Δ_X and Δ_T are ℤ-divisors, i.e. that all multiplicities are either 1 or ∞. In this case, we have S^N Ω^m_(X,Δ_X) = ^N Ω^m_X(logΔ_X). The latter is a genuine symmetric power of an exterior power of Ω^1_X(logΔ_X). Notice that the decomposition
Ω^1_X × T(log (Δ_X × T + X ×Δ_T)) = π_X^* Ω^1_X(logΔ_X) ⊕π_T^* Ω^1_T(logΔ_T)
is still valid. Thus, the discussion of the previous paragraph applies, proving the result in this case.
In general, Δ_X and Δ_T are not ℤ-divisors. By definition, the sheaf S^N Ω^n_(X,Δ_X) is a subsheaf of
^N Ω^m_X(logΔ_X),
and, similarly, the sheaf S^N Ω^m_(X,Δ_X)× (T,Δ_T) is a subsheaf of
^N Ω^m_X× T(logΔ_X× T + X×Δ_T).
By the previous paragraph we know that the morphism
π_X^* ^N Ω^m_X(logΔ_X) →^N Ω^m_X× T(logΔ_X × T + X ×Δ_T)
is injective and has a retract. Since the projection map π_X is orbifold, it follows from Lemma <ref> that the above injection sends the subsheaf π_X^*S^NΩ^m_(X,Δ_X) to the subsheaf S^NΩ^m_(X,Δ_X)× (T,Δ_T). To prove the claim, it thus suffices to show that the retraction also respects these subsheaves. This can be checked locally, and it suffices to consider the generators. This can be done very explicitly.
Indeed, let (x,t) ∈ X × T be any closed point, let dx_1,...,dx_n be local coordinates for X around x which exhibit Δ_X in normal crossings form, and let dt_1,...,dt_r be local coordinates for T around t which exhibit Δ_T in normal crossings form. Then dx_1,...,dx_n, dt_1,..., dt_r are local coordinates for X × T exhibiting its orbifold divisor in normal crossings form. Let ω be a local generator of S^N Ω^m_(X, Δ_X) × (T,Δ_T) around (x,t). If ω contains any factors containing a dt_i, the pullback ι_t^* ω will be identically zero. Thus, it remains to consider the case where the only differentials appearing in ω are products of dx_i terms. Pulling back such a generator of S^NΩ^m_(X,Δ_X)× (T,Δ_T) along ι_t yields a (formally identical) generator of S^N Ω^m_(X,Δ_X). This proves the desired claim.
Finally, to prove that the pullback along ι_t factors over this direct summand, note that π_X ∘ι_t = 𝕀_X is the identity, so that in fact ι_t^* π_X^* S^N Ω^m_(X,Δ_X) = S^N Ω^m_(X,Δ_X).
§ KOBAYASHI–OCHIAI'S THEOREM FOR ORBIFOLD PAIRS
In this section, we prove the finiteness theorem for dominant maps into a smooth orbifold of general type (Y, Δ_Y). The first step of the proof is to show that given a dominant morphism f (X, Δ_X) → (Y, Δ_Y), we can recover f from its induced map on global sections of the canonical bundles ω_Y(Δ_Y)^⊗ N(Y) →ω_X(Δ_X)^⊗ N(X) for sufficiently large N, where N only depends on (X, Δ_X) and (Y, Δ_Y) but not on f. This allows us to shift the focus from studying dominant morphisms to studying linear maps ω_Y(Δ_Y)^⊗ N(Y) →ω_X(Δ_X)^⊗ N(X) satisfying certain conditions.
To state the next lemma, we introduce some terminology. We call a line bundle very big if the rational map to projective space induced by its global sections is birational onto its image. Note that every big line bundle has a tensor power which is very big. Of course, a very ample line bundle is very big. Also, if V is a vector space, the projective space ℙ(V) parametrizes subspaces of codimension 1.
Let X and Y be projective varieties. Assume that X is locally factorial. Let ℒ_X and ℒ_Y be line bundles on X and Y respectively. Assume that ℒ_X is very big and that ℒ_Y is very ample. Consider the following set:
S = { (f, ϕ) | f X Y dominant and ϕ f^*ℒ_Y →ℒ_X injective}
If (f,ϕ) and (g,ψ) have the same image under the composed map of sets
S →Hom(ℒ_Y(Y), ℒ_X(X))∖{0}→{rational maps from ℙ(ℒ_X(X)) to ℙ(ℒ_Y(Y)) },
then f=g.
Before starting the proof, we note that the set S is well-defined by Remark <ref>.
Let (f, ϕ) and (g, ψ) be elements of S which induce the same rational map
γℙ(ℒ_X(X)) ℙ(ℒ_Y(Y)).
By our assumptions on the line bundles, the space ℙ(ℒ_X(X)) contains a birational copy X̅ of X and ℙ(ℒ_Y(Y)) contains Y. Consider the square whose upper horizontal arrow is the rational map X Y (which can be taken to be either f or g), whose lower horizontal arrow is γℙ(ℒ_X(X)) ℙ(ℒ_Y(Y)), whose left vertical arrow is the rational map X ℙ(ℒ_X(X)) defined by the global sections of ℒ_X (with image X̅), and whose right vertical arrow is the closed immersion Y →ℙ(ℒ_Y(Y)) defined by the global sections of ℒ_Y. This square commutes whenever the compositions are defined.
Note that the indeterminacy locus of γ is a linear subspace, and that X̅ is not contained in any proper linear subspace of ℙ(ℒ_X(X)). Hence γ is defined on some open of X̅ and the commutativity of the square above implies that γ sends X̅ to Y. So we get a rational map X̅ Y. The composition X X̅ Y is equal to both f and g whenever it is defined, showing that f=g on a dense open subset. As Y is separated, this implies that f=g everywhere.
We now have all the prerequisite results needed for our proof of the announced theorem. We follow the general proof strategy of <cit.>.
Let (X, Δ_X) and (Y, Δ_Y) be smooth proper orbifolds with (Y, Δ_Y) of general type. If dim X= dim Y, then there are only finitely many separably dominant near-maps (X, Δ_X) (Y, Δ_Y).
If there are no separably dominant near-maps from (X,Δ_X) to (Y,Δ_Y), then we are done. Otherwise, let f (X, Δ_X) (Y, Δ_Y) be a separably dominant near-map. By Corollary <ref>, for N ∈ℕ sufficiently divisible, we get an induced morphism of line bundles
f^* (ω_Y(Δ_Y))^⊗ N→ω_X(Δ_X)^⊗ N.
Since f is separably dominant, the morphism of line bundles f^* (ω_Y(Δ_Y))^⊗ N→ω_X(Δ_X)^⊗ N is non-zero, hence injective. This implies that ω_X(Δ_X)^⊗ N is a big line bundle, so that the orbifold (X, Δ_X) is also of general type. Increasing N if necessary, we can thus assume that all of the following hold:
* ω_X(Δ_X)^⊗ N and ω_Y(Δ_Y)^⊗ N are well-defined line bundles.
* The line bundle ω_X(Δ_X)^⊗ N is very big.
* There is an effective divisor C ⊆ Y such that (ω_Y(Δ_Y)^⊗ N)(-C) is very ample.
We fix the integer N and the effective divisor C ⊆ Y from the last bullet point. We define V_X := Γ(X, ω_X(Δ_X)^⊗ N) and V_Y := Γ(Y, ω_Y(Δ_Y)^⊗ N(-C)). Note that we obtain a closed immersion ι_Y Y →ℙ(V_Y) and a rational map ι_X X ℙ(V_X) which is birational onto its scheme-theoretic image X̅. For every dominant near-map f, we get an induced vector space morphism f^* V_Y → V_X. By Lemma <ref>, we can recover f from f^* and even from ℙ(f^*). Thus, we are led to studying linear maps V_Y → V_X.
Let H := Hom(V_Y, V_X)^∨, with ^∨ denoting the dual. Composition of functions is a canonical bilinear map V_X^∨×Hom(V_Y, V_X) → V_Y^∨ and after identifying Hom(V_Y,V_X) with its double dual, we get a bilinear morphism V_X^∨× H^∨→ V_Y^∨. It induces a relative (over ℙ(H)) rational map F ℙ(V_X) ×ℙ(H) ℙ(V_Y). For every closed point h ∈ℙ(H), we denote by F_h the rational map ℙ(V_X) = ℙ(V_X) ×{h}ℙ(V_Y).
To prove the proposition, we are first going to construct a “small” locally closed subset H_3 of ℙ(H) such that the set of separably dominant near-maps (still) injects into H_3 via f↦ℙ(f^∗).
Let H_1 ⊆ℙ(H) be the subset for which F_h maps X̅⊆ℙ(V_X) to Y ⊆ℙ(V_Y). To see that this is a meaningful condition, note that the indeterminacy locus of F_h is a linear subspace and that X̅⊆ℙ(V_X) is contained in no proper linear subspace. Let η_X̅ be the generic point of the scheme X̅. Since H_1 is the set of h in ℙ(H) such that the morphism
{η_X̅}×ℙ(H)→ℙ(V_Y)
factors over Y, it follows from Lemma <ref> that it is closed in ℙ(H).
Note that we obtain a relative rational map X̅ × H_1 Y.
Let H_2⊂ H_1 be the subset of elements of H_1 for which the induced rational map X̅ Y is separably dominant. By Lemma <ref>, the set H_2 is open in H_1.
Let H_3⊂ H_2 be the subset of rational maps g X̅ Y such that, for every global section ω of ω_Y(Δ_Y)^⊗ N, the pullback (g∘ι_X)^*ω is a global section of ω_X(Δ_X)^⊗ N. By applying Lemma <ref> to every single ω and taking the intersection over all closed sets obtained this way, we see that H_3 is closed in H_2. Hence H_3 is locally closed in ℙ(H), and we give it the reduced scheme structure.
If f (X, Δ_X) (Y, Δ_Y) is a separably dominant orbifold near-map, then the induced map ℙ(f^*) lies in H_3. As we mentioned before, by Lemma <ref>, different separably dominant orbifold near-maps induce different elements of H_3. Therefore, to prove the proposition, it suffices to show that H_3 is finite. To do so, let H_4 be an irreducible component of H_3, so that H_4 is a quasi-projective variety. Since H_3 is quasi-projective, it has only finitely many irreducible components. Therefore, to conclude the proof, it suffices to show that H_4 is finite.
Let H̅_4 be the closure of H_4 in ℙ(H), and note that H̅_4 is a projective variety. Since H_1 is closed in ℙ(H), we see that H̅_4 is contained in H_1. In particular, we can interpret every closed point of H̅_4 as a (possibly non-dominant) rational map X̅ Y. Let H̃_4 be a smooth projective variety and let H̃_4→H̅_4 be an alteration such that the preimage of H̅_4∖ H_4 in H̃_4 is a strict normal crossings divisor D_H (this exists by <cit.>).
We let G X ×H̃_4 Y be the relative rational map induced by the above map X̅ × H_1 Y.
By Lemma <ref>, the sheaf S^N Ω^n_(X, Δ_X) × (H̃_4, D_H) has π_X^* S^N Ω^n_(X,Δ_X) as a direct summand. We denote by
π S^N Ω^n_(X, Δ_X) × (H̃_4, D_H)→π_X^* S^N Ω^n_(X,Δ_X)
the projection. We now define the morphism ΨH̃_4→Hom(V_Y,V_X) = H^∨ by Ψ(h) = [ ω↦ι_h^*π(G^*ω) ], where ι_h X→ X×H̃_4 is the inclusion map x↦ (x,h). We now show that Ψ is a well-defined morphism of varieties.
To do so, fix a closed point h ∈H̃_4 and ω∈ V_Y. Then ω∈Γ(Y, ω_Y(Δ_Y)^⊗ N).
If h does not lie over H_4, then the form G^*ω might have a pole along X ×{h}, so that the pullback of G^∗ω to X is not well-defined. But we do have some control over the poles of G^*ω. Indeed, it can only have poles along Δ_X×H̃_4 with orders bounded by the coefficients of N Δ_X or poles along X × D_H. The latter poles are logarithmic by Corollary <ref>. This means that G^*ω is a global section of S^N Ω^n_(X, Δ_X) × (H̃_4, D_H). In particular, π(G^*ω) is a global section of π_X^* S^N Ω^n_(X,Δ_X). Since global sections of π_X^* S^N Ω^n_(X,Δ_X) only have poles along subsets of Δ_X×H̃_4, the element ι_h^*π(G^*ω) is always well-defined, i.e., Ψ:H̃_4→ H^∨ is well-defined.
The restriction of Ψ to elements of H̃_4 lying over H_4 is simpler to describe. Indeed, if h lies over H_4, then we can restrict G^∗ω to X ×{h}. By definition of H_3, after identifying X ×{h} with X, this will give us an element of V_X=Γ(X, ω_X(Δ_X)^⊗ N). By the second part of Lemma <ref>, this element coincides with ι_h^*π(G^*ω), i.e., with Ψ(h)(ω).
Let h_1 and h_2 be elements of H̃_4 lying over H_4 such that Ψ(h_1) = Ψ(h_2). Since Ψ(h_1):V_Y→ V_X is the injective map ω↦ (G∘ι_h_1)^∗ω and Ψ(h_2):V_Y→ V_X is the injective map ω↦ (G∘ι_h_2)^∗ω, it follows from Lemma <ref> that the dominant near-maps G∘ι_h_1 and G∘ι_h_2 are equal. This obviously implies that h_1 and h_2 lie over the same element of H_4 (via the alteration H̃_4→H̅_4).
On the other hand, since H̃_4 is a projective variety and H^∨ is affine, the morphism Ψ is constant. Since Ψ separates elements lying over distinct points of H_4 (see previous paragraph), we conclude that H_4 is a singleton. This concludes the proof.
As we show now, we may drop the properness and smoothness assumptions on (X,Δ_X).
Let (X, Δ_X) and (Y, Δ_Y) be orbifolds with (Y, Δ_Y) a smooth proper orbifold of general type. If dim X = dim Y, then there are only finitely many separably dominant near-maps (X, Δ_X) (Y, Δ_Y).
First, let X' be the locus of smooth points of X ∖Δ_X. Then, the set of separably dominant near-maps (X,Δ_X) (Y,Δ_Y) injects into the set of separably dominant near-maps X' (Y,Δ_Y). Now, let X̅ be an snc compactification of X' and let D=X̅∖ X'. Then, the set of separably dominant near-maps X' (Y,Δ_Y) equals the set of separably dominant near-maps (X̅,D) (Y,Δ_Y). The latter is finite by Proposition <ref>.
Note that Theorem <ref> follows from the following stronger finiteness statement.
Let (X, Δ_X) and (Y, Δ_Y) be orbifolds with (Y, Δ_Y) a smooth proper orbifold of general type. Then there are only finitely many separably dominant near-maps (X, Δ_X) (Y, Δ_Y).
We may assume that the base field k is uncountable. As before, we may replace (X, Δ_X) by the smooth locus of X ∖Δ_X, so we may assume that X is smooth and Δ_X is empty.
We argue by contradiction. Assume that (f_i X (Y, Δ_Y))_i ∈ℕ is an infinite sequence of pairwise distinct separably dominant near-maps. Let n := dim(Y). We will construct an n-dimensional subvariety Z ⊂ X which still admits infinitely many pairwise distinct separably dominant near-maps to (Y, Δ_Y), in contradiction to Corollary <ref>.
Consider the n-fold direct sum (T X)^⊕ n of the tangent bundle of X, viewed as a variety over X. For every i ∈ℕ, consider the following subset of (T X)^⊕ n:
U_i := { (x,v_1,...,v_n) ∈ (T X)^⊕ n | f_i is defined at x and {(df_i)_x(v_j)}_j=1,...,n is a basis of T_f_i(x) Y }
Clearly, U_i is an open subset of (T X)^⊕ n. Moreover, by the implication (a) (b) of Lemma <ref>, the open subset U_i is nonempty.
Now, for each pair of natural numbers i,j, consider the following subset of (T X)^⊕ n:
V_ij := { (x,v_1,...,v_n) ∈ (T X)^⊕ n | f_i and f_j are defined at x and f_i(x) ≠ f_j(x) }
Since we assumed the near-maps f_i to be pairwise distinct, every V_ij is a nonempty open.
Since k is uncountable, there exists a point (x,v_1,...,v_n) which lies in every U_i and every V_ij. Let Z ⊂ X be an n-dimensional smooth closed subvariety of X such that x ∈ Z and T_x Z = span(v_1,...,v_n). Then, the maps f_i|_Z Z Y are pairwise distinct. Moreover, they are separably dominant by the implication (b) (a) of Lemma <ref>. Also, by Corollary <ref>, each f_i|_Z Z (Y,Δ_Y) is an orbifold near-map. Since this contradicts Corollary <ref>, this concludes the proof.
Theorem <ref> also holds if we allow (X, Δ_X) and (Y, Δ_Y) to be ℚ-orbifolds (but still requiring (Y, Δ_Y) to be smooth, proper, and of general type). Indeed, as before, we immediately reduce to the case that X is a smooth variety. Writing Δ_Y = ∑ (1-1/m_i) D_i, we can define Δ_Y = ∑ (1-1/m_i) D. Then every morphism X → (Y, Δ_Y) is also a morphism X → (Y, Δ_Y); hence we are reduced to the case of ℤ-orbifolds.
We conclude with the following application to endomorphisms of orbifolds of general type which generalizes finiteness results of Matsumura and Iitaka (see, for example, <cit.>) to the setting of orbifold pairs.
If (X,Δ) is a smooth projective orbifold pair of general type, then the following statements hold.
* Every separably dominant near-map (X,Δ) (X,Δ) is birational.
* Every separably dominant morphism (X,Δ)→ (X,Δ) is an automorphism.
* The group of birational near-maps (X,Δ) (X,Δ) is finite.
To prove the first statement, let f (X, Δ) (X, Δ) be a separably dominant near-map. Let f^n be the n-fold composition of f. Since every f^n is a separably dominant near-map, by Proposition <ref>, there are distinct positive integers m,n such that f^m = f^n. Hence the degree of f must be 1, so it is birational.
To prove the second statement, we first note that any surjective endomorphism f:X→ X is finite. Indeed, the induced morphism f^∗(X)⊗ℚ→(X)⊗ℚ is injective, and thus an isomorphism of finite-dimensional ℚ-vector spaces. Now, we argue by contradiction to show that f is finite. If f is not finite, then there is an integral curve C on X with f(C) a point. Let L be an ample divisor. Since f^∗ is an isomorphism, there is a divisor D such that f^∗ D ≅ L. By the projection formula, we have
(C,L) = (C,f^∗ D) = (f_∗ C, D) =0.
This contradicts the ampleness of L. Thus, the morphism f is finite.
Now, to prove the second statement, note that any dominant morphism X→ X is surjective, hence finite. Thus, by (i), every separably dominant morphism (X,Δ_X)→ (X,Δ_X) is birational and finite. It follows from Zariski's Main Theorem and the normality of X that f is an automorphism.
The third statement obviously follows from Proposition <ref>. This concludes the proof.
|
http://arxiv.org/abs/2306.06163v1
|
20230609180001
|
Entanglement of Purification in Random Tensor Networks
|
[
"Chris Akers",
"Thomas Faulkner",
"Simon Lin",
"Pratik Rath"
] |
hep-th
|
[
"hep-th",
"cond-mat.stat-mech",
"gr-qc",
"quant-ph"
] |
Chris Akers ([email protected]), Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Thomas Faulkner ([email protected]) and Simon Lin ([email protected]), Department of Physics, University of Illinois, 1110 W. Green St., Urbana, IL 61801-3080, USA
Pratik Rath ([email protected]), Department of Physics, University of California, Santa Barbara, CA 93106, USA
The entanglement of purification E_P(A B) is a powerful correlation measure, but it is notoriously difficult to compute because it involves an optimization over all possible purifications.
In this paper, we prove a new inequality: E_P(A B)≥1/2S_R^(2)(A B), where S_R^(n)(A B) is the Renyi reflected entropy.
Using this, we compute E_P(A B) for a large class of random tensor networks at large bond dimension and show that it is equal to the entanglement wedge cross section EW(A B), proving a previous conjecture motivated from AdS/CFT.
Entanglement of Purification in Random Tensor Networks
Chris Akers, Thomas Faulkner, Simon Lin, and Pratik Rath
July 31, 2023
§ INTRODUCTION
Given a bipartite density matrix ρ_AB, the entanglement of purification E_P(A B) is defined as <cit.>
E_P(A B) = min_|ψ⟩_ABA'B' S(AA'),
where S(R)=-tr(ρ_R logρ_R) is the von Neumann entropy.
The minimization runs over all possible purifications of ρ_AB, i.e., |ψ⟩_ABA'B' such that tr_A'B'(|ψ⟩⟨ψ|)=ρ_AB, and the |ψ⟩ that achieves the minimum is called the optimal purification.
E_P(A B) is a useful measure of correlations in a bipartite mixed state and is proven to be monotonic under local operations <cit.>.
However, it is generally intractable to compute because of the optimization over all possible purifications [Exceptions to this include pure states like Bell pairs and classically correlated states like GHZ states, see Ref. <cit.> for details.].
In the context of AdS/CFT [See Ref. <cit.> for a review of the quantum information perspective on AdS/CFT.], it has been conjectured that for A, B subregions of the CFT, there is a simple geometric, AdS dual to E_P(A:B).
The entanglement wedge of subregion AB of the CFT is the bulk region between AB and the minimal surface γ_AB (also called the Ryu-Takayanagi (RT) surface <cit.>).
This is, in appropriate settings, the bulk region reconstructable from the corresponding boundary subregion <cit.>.
Based on this, Refs. <cit.> conjectured that E_P(A B) is given by
E_P(A B)=EW(A B)=Area(Γ_A:B)/4G_N,
where Γ_A:B is the entanglement wedge cross section, the minimal surface dividing the entanglement wedge into portions containing A and B respectively, as depicted in fig:EW.
G_N is Newton's constant and in this paper, we will set ħ=c=1 by choosing natural units.
Proving this AdS/CFT conjecture appears quite challenging.
However, there exists a toy model of AdS, called random tensor networks (RTNs), which have proven useful in discovering new insights into AdS/CFT entanglement properties <cit.>, especially because of their connection to fixed-area states <cit.>.
The goal of this note is to present progress on proving the conjecture (<ref>) in RTNs.
We compute E_P by using a known upper bound and deriving a new lower bound (Theorem <ref>), which we are able to argue matches the upper bound in certain RTNs.
This argument relies on results obtained previously for the reflected entropy, S_R(A B), in RTNs <cit.>.
The reflected entropy is defined as <cit.>
S_R(A B)=S(AA^*)_|√(ρ_AB)⟩,
where the state |√(ρ_AB)⟩ is the canonical purification, which lives in the Hilbert space Endℋ_AB of operators acting on ℋ_AB.
Endℋ_AB is isomorphic to the doubled Hilbert space ℋ_AB⊗ℋ_A^*B^*.
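For small systems, both the canonical purification and S_R^(n)(A B) can be evaluated numerically by brute force. The following minimal numpy sketch (ours; the function name, tolerance and the Bell-pair sanity check are illustrative) builds |√(ρ_AB)⟩ as the matrix square root of ρ_AB, reduces it to AA^*, and returns the entropy of the reduced state:

```python
import numpy as np

def reflected_entropy(rho_AB, dA, dB, n=1):
    """Renyi-n reflected entropy S_R^(n)(A:B) of a density matrix rho_AB on H_A (x) H_B.

    The canonical purification |sqrt(rho_AB)> is the matrix square root of rho_AB,
    viewed as a vector in the doubled Hilbert space H_AB (x) H_{A*B*}.
    """
    # Hermitian square root via the eigendecomposition of rho_AB.
    w, v = np.linalg.eigh(rho_AB)
    M = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T
    M = M.reshape(dA, dB, dA, dB)                     # indices (a, b, a*, b*)
    # Reduce the canonical purification to A A* by contracting the B and B* indices.
    rho_AAs = np.einsum('abcd,ebfd->acef', M, M.conj()).reshape(dA * dA, dA * dA)
    lam = np.linalg.eigvalsh(rho_AAs)
    lam = lam[lam > 1e-12]
    if n == 1:
        return float(-np.sum(lam * np.log(lam)))      # von Neumann reflected entropy
    return float(np.log(np.sum(lam ** n)) / (1.0 - n))

# Sanity check: for a pure rho_AB one has S_R = 2 S(A); a Bell pair gives 2 log 2.
bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
print(reflected_entropy(np.outer(bell, bell), dA=2, dB=2))   # approx 1.3863
```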
The bounds are as follows.
It is conjectured that the reflected entropy in AdS/CFT satisfies
S_R(A:B) = 2 EW(A:B),
and this has been proven rigorously for a large class of RTNs <cit.>, as we will discuss.
Moreover, as argued in <cit.>, RTNs in general satisfy
E_P(A:B) ≤ EW(A:B).
This places the upper bound E_P ≤ S_R / 2.
The rest of this paper proves the lower bound and discusses when it matches this upper bound.
§ REFLECTED ENTROPY FROM MODULAR OPERATOR
The Renyi reflected entropy is
S_R^(n)(A B)=S_n(AA^*)_|√(ρ_AB)⟩,
where S_n(R)=1/(1-n) log tr(ρ_R^n) is the nth Renyi entropy.
The lower bound in Theorem <ref> will require the following lemma that rewrites the Renyi reflected entropy using the formalism of modular operators appearing in Tomita-Takesaki theory [See Ref. <cit.> for a review.].
Consider a finite dimensional system with Hilbert space ℋ_AB⊗ℋ_C, where subsystem C is completely general. Given a state |ψ⟩[|ψ⟩ does not need to be cyclic and separating.] and subsystem AB, the modular operator is defined as
Δ_AB,ψ = ρ_AB⊗ρ_C^-1,
where the inverse is defined to act only on the non-zero subspace of ρ_C and Δ_AB,ψ is defined to annihilate the orthogonal subspace.
For integer n ≥ 2,
S_R^(n)(A B) = 1/1-nlog⟨ψ^⊗ n|Σ_AΔ_AB^⊗ n,ψ^⊗ n^1/2Σ_A^†|ψ^⊗ n⟩ ,
where Σ_A(A^*) are twist operators that cyclically permute the n copies of |√(ρ_AB)⟩ on subregion A(A^*), |ψ⟩ is an arbitrary purification of ρ_AB, and Δ_AB^⊗ n,ψ^⊗ n=Δ_AB,ψ^⊗ n.
Start with eq:SRn and rewrite it as <cit.>
S_R^(n)(A B) =1/(1-n) log tr ρ_AA^*^n
tr ρ_AA^*^n = ⟨√(ρ_AB)^⊗ n|Σ_AΣ_A^*|√(ρ_AB)^⊗ n⟩.
As described in Ref. <cit.>, operators act on Endℋ_AB by left and right actions, i.e.,
O_AB|M_AB⟩ = |O_AB M_AB⟩
O_A^*B^*|M_AB⟩ = |M_AB O_AB^†⟩,
and the inner product is defined by
⟨M|N⟩ = tr(M^†N).
Using this, one finds that eq:SRntwist is given by
tr ρ_AA^*^n = tr_(AB)^⊗ n[√(ρ_AB)^⊗ nΣ_A√(ρ_AB)^⊗ nΣ_A^†].
To express eq:SRtrace in terms of modular operators, we consider an arbitrary purification of ρ_AB denoted |ψ⟩, giving
tr ρ_AA^*^n = tr_(AB)^⊗ n[√(ρ_AB)^⊗ nΣ_A√(ρ_AB)^⊗ nΣ_A^†]
=⟨ψ^⊗ n|Σ_AΔ_AB^⊗ n,ψ^⊗ n^1/2Σ_A^†Δ_AB^⊗ n,ψ^⊗ n^-1/2|ψ^⊗ n⟩
=⟨ψ^⊗ n|Σ_AΔ_AB^⊗ n,ψ^⊗ n^1/2Σ_A^†|ψ^⊗ n⟩,
where we have used the fact that the ρ_C dependence cancels out in the second line.
For the last line, we have used Δ_AB,ψ^-1/2|ψ⟩=|ψ⟩ which is easy to see by working in the Schmidt basis.
§ LOWER BOUND
For integer n ≥ 2,
E_P(A:B) ≥ S^(n)_R(A:B) / 2.
In Ref. <cit.>, it was proven that for integer n ≥ 2, the Renyi reflected entropy is monotonic under partial trace, i.e., S_R^(n)(A BC)≥ S_R^(n)(A B).
This immediately implies Theorem <ref> by the following argument.
Let |ψ⟩_ABA'B' be the optimal purification.
Then
2S(AA')≥ 2S_n(AA')=S_R^(n)(AA' BB')≥ S_R^(n)(A B),
where we have used the fact that S^(n)_R(C:D)=2S_n(C) for a pure state on CD.
That said, we choose to present the proof below because it is self-contained and far simpler than the proof of monotonicity in Ref. <cit.>.
We first define the Renyi generalization of E_P(A B) as
E_P^(n)(A B) = min_|ψ⟩_ABA'B' S_n(AA').
Applying the monotonicity of Renyi entropy, i.e., ∂_n S_n ≤ 0, for n>1 we have
E_P(A B)≥ E_P^(n)(A B).
Now consider an arbitrary purification |ψ⟩_ABA'B'. For integer n≥ 2, the Renyi entropy for subregion AA' can be computed using twist operators in a fashion similar to Eqs. (<ref>,<ref>), i.e.,
S_n(AA') =1/(1-n) log tr ρ_AA'^n
tr ρ_AA'^n =⟨ψ^⊗ n|Σ_AΣ_A'|ψ^⊗ n⟩.
Define the operators Π_AB,ψ (Π_A'B',ψ) to be projectors onto the non-zero subspaces of the reduced density matrices on AB (A'B'). Then, using Π_AB,ψ|ψ⟩=Π_A'B',ψ|ψ⟩=|ψ⟩, we can insert Π_AB,ψ (Π_A'B',ψ) from the right (left) in eq:twist for each of the n copies of |ψ⟩. Note that Π_ABΠ_A'B'=Δ_AB,ψ^1/4Δ_AB,ψ^-1/4 as the inverse density matrices in the modular operators annihilate the orthogonal subspaces. We can use this fact to insert a pair of modular operators into eq:twist to get
tr(ρ_AA'^n) =⟨ψ^⊗ n|Σ_A(Δ_AB,ψ^1/4Δ_AB,ψ^-1/4)^⊗ nΣ_A'|ψ^⊗ n⟩
≤(⟨ψ^⊗ n|Σ_AΔ_AB^⊗ n,ψ^⊗ n^1/2Σ_A^†|ψ^⊗ n⟩⟨ψ^⊗ n|Σ_A'Δ_AB^⊗ n,ψ^⊗ n^-1/2Σ_A'^†|ψ^⊗ n⟩)^1/2,
where we have applied the Cauchy-Schwarz inequality between the modular operators.
Using Δ_AB,ψ^-1=Δ_A'B',ψ and eq:deltaSR, the two terms in the last line of eq:ineq can be related to Renyi reflected entropies on A:B and A':B' respectively. Thus, we have
(2/(1-n)) log tr(ρ_AA'^n) ≥ S_R^(n)(A B) +S_R^(n)(A' B').
Finally using the fact that S_R^(n)(A' B')≥ 0, applying eq:ineq1 to the optimal purification arising in the calculation of E_P^(n)(A B) and using eq:renyiEP, we have our desired inequality.
We will use the inequality at n=2 since it is the strongest.
It is important to note that this inequality was derived using twist operators which only exist at integer n.
In the context of computing entanglement entropy, one usually analytically continues the answer obtained at integer n to non-integer values using Carlson's theorem.
However, it is not necessarily possible to analytically continue an inequality.
For example, the monotonicity of Renyi reflected entropy under partial trace, i.e., S_R^(n)(A BC)≥ S_R^(n)(A B), was proved to be true at integer n <cit.>, whereas counterexamples were found for non-integer n in Ref. <cit.>.
§ RANDOM TENSOR NETWORKS
We can now use these bounds to compute E_P in many random tensor network states.
These states are defined as (up to normalization) <cit.>
|ψ⟩=∏_<xy>∈ E⟨xy|∏_x∈ V|V_x⟩,
where we are considering an arbitrary graph defined by vertices V and edges E.
The states |V_x⟩ are Haar random and the states |xy⟩ are maximally entangled.
This defines a state on the vertices living at the boundary of the graph.
We will consider RTNs in the simplifying limit where all bond dimensions χ_xy are large such that logχ_xy∝log D and D→∞[log D∼1/4G_N in AdS/CFT in units where l_AdS=1.].
For RTN states, the Renyi reflected entropy is computed by finding the optimal configuration of permutations that minimizes a certain free energy (see Ref. <cit.> for details).
It was proved in Ref. <cit.> that the optimal configuration involves four permutation elements {e,g_A,g_B,X} and takes the general form shown in fig:triway.
In detail, we have
lim_D→∞S_R^(n)(A B)/log D = 2 𝒜_n(A B C) - n/(n-1) 𝒜(AB C),
where 𝒜_n(A B C) is the triway cut with tensions t_A:B=1 and t_A:C=t_B:C=n/(2(n-1)) (see fig:triway). 𝒜(AB C) is the minimal cut separating AB from C.
While the triway cut problem provides a natural analytic continuation in n and Refs. <cit.> have provided evidence that this in fact is the correct prescription, it is not necessary to assume this for the purpose of this paper.
For now we note that at n=2, all the tensions are equal and normalized to 1.
On the other hand, in the limit n→ 1, the RHS of eq:triway approaches 2EW(A:B).
Now, the key point is that there exist networks where the triway cut configuration is identical for n→1 and n=2.
This corresponds to networks where the X region in fig:triway vanishes at n=2.
We will demonstrate such examples in sec:eg.
For now, assuming such a network and using eq:ineq_main, we have
E_P(A B)≥1/2S_R^(2)(A B) = EW(A B).
To prove the opposite inequality, we repeat the arguments made in Refs. <cit.>.
There is an approximate isometry relating the RTN state |ψ⟩_ABC to the state |ψ⟩_ABC' defined on the same graph truncated to the entanglement wedge of AB, with C'=γ_AB.
The RT formula can still be applied and, optimizing over the choice of decomposition C'=A'∪ B', we have S(AA')=EW(A B).
Since we have found one such purification, we have
E_P(A B)≤ EW(A B)
Note that each of the above inequalities is in the D→∞ limit.
Combining these two inequalities, we have E_P(A B)=EW(A B) up to terms vanishing in the D→∞ limit.
It is then also clear that the geometric purification in Refs. <cit.> is the optimal purification to leading order in D.
§ EXAMPLES
In this section, we provide simple examples of RTNs to demonstrate regions of parameter space where we have proved E_P(A B)=EW(A B).
While in the continuum limit one generically expects a non-trivial X region as shown in fig:triway, for any discrete network we expect a codimension-0 region of parameter space where the X region vanishes.
§.§ 1TN
The first example we consider is that of a Haar random tripartite state, represented by a graph with a single vertex and three legs with bond dimensions d_A/B/C respectively (see fig:1TN).
In this case, the reflected entropy was computed in detail in Ref. <cit.>.
We present the phase diagram in fig:1TN.
The phase boundaries at n=2 are represented as a function of x_A=log d_A/log d_C and x_B=log d_B/log d_C.
Apart from the shaded region marking the X domain, we have proved E_P(A B)=EW(A B) everywhere else.
It is also straightforward to read off the optimal purification since we already argued it is given by the geometric purification suggested in Ref. <cit.>.
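For concreteness, the 1TN setup is easy to simulate directly: a Haar-random tripartite state is a normalized complex Gaussian tensor, and the lower bound 1/2 S_R^(2)(A B) of Theorem <ref> can be evaluated by brute force. The sketch below is ours; the bond dimensions and the seed are illustrative, and at such small dimensions finite-size corrections to the large-D phase diagram are significant.

```python
import numpy as np

def renyi2_reflected_entropy(rho_AB, dA, dB):
    # S_R^(2)(A:B) = -log tr(rho_{AA*}^2), with rho_{AA*} the reduction of |sqrt(rho_AB)>.
    w, v = np.linalg.eigh(rho_AB)
    M = ((v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T).reshape(dA, dB, dA, dB)
    rho = np.einsum('abcd,ebfd->acef', M, M.conj()).reshape(dA * dA, dA * dA)
    return float(-np.log(np.real(np.trace(rho @ rho))))

rng = np.random.default_rng(0)
dA, dB, dC = 8, 8, 32                      # illustrative bond dimensions
T = rng.normal(size=(dA, dB, dC)) + 1j * rng.normal(size=(dA, dB, dC))
T /= np.linalg.norm(T)                     # normalized Gaussian tensor = Haar-random state
rho_AB = np.einsum('abc,dec->abde', T, T.conj()).reshape(dA * dB, dA * dB)

# Theorem 1: E_P(A:B) >= S_R^(2)(A:B) / 2.
print(0.5 * renyi2_reflected_entropy(rho_AB, dA, dB))
```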
One may consider a simple deformation of the above model, by changing the maximally entangled legs of the RTN to non-maximally entangled legs.
Such states have also been useful to model holographic states <cit.>.
In fact, the simplest situation where we add non-maximal entanglement to the C leg results in a state identical to the PSSY model of black hole evaporation <cit.>.
We can thus use the results of Ref. <cit.> which computed the reflected entropy in this model.
The phase diagram turns out to be similar to fig:1TN except the shaded region turns out to be larger.
Thus, non-maximal links do not help in improving the applicability of our result.
We provide some more details on this in Appendix <ref>.
§.§ 2TN
The next simplest network to consider is one where we have two vertices connected by an internal bond labelled W as shown in fig:2TN.
For simplicity, the external C bonds are chosen to have identical bond dimension.
In general, we have the phase diagram shown in fig:2TN.
Again, we see a large codimension-0 region of parameter space where our proof applies.
In fact, motivated by holography, Ref. <cit.> considered a limit where x_W = log d_W/log d_C→ 0.
In this limit, the shaded domains containing the element X vanish at arbitrary n.
Thus, our proof always applies in this limit.
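The 2TN state can be simulated in the same way (a sketch of ours, repeating the brute-force S_R^(2) evaluation from the 1TN example; the dimensions are illustrative and chosen with d_W small compared to d_C, mimicking the x_W → 0 limit):

```python
import numpy as np

def renyi2_reflected_entropy(rho_AB, dA, dB):
    w, v = np.linalg.eigh(rho_AB)
    M = ((v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T).reshape(dA, dB, dA, dB)
    rho = np.einsum('abcd,ebfd->acef', M, M.conj()).reshape(dA * dA, dA * dA)
    return float(-np.log(np.real(np.trace(rho @ rho))))

rng = np.random.default_rng(1)
dA, dB, dC, dW = 6, 6, 8, 2                # small x_W = log d_W / log d_C
T1 = rng.normal(size=(dA, dC, dW)) + 1j * rng.normal(size=(dA, dC, dW))
T2 = rng.normal(size=(dB, dC, dW)) + 1j * rng.normal(size=(dB, dC, dW))
psi = np.einsum('acw,bdw->abcd', T1, T2)   # contract the internal W bond
psi /= np.linalg.norm(psi)
rho_AB = np.einsum('abcd,efcd->abef', psi, psi.conj()).reshape(dA * dB, dA * dB)

print(0.5 * renyi2_reflected_entropy(rho_AB, dA, dB))   # lower bound on E_P(A:B)
```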
§ DISCUSSION
In this note we have proven E_P=EW for a large class of RTNs.
Our result relied on the inequality E_P≥1/2S_R^(2) proven as Theorem <ref>.
Proving the stronger inequality E_P≥1/2S_R would prove E_P=EW more generally, but this cannot be achieved with our proof technique.
It would be interesting to check this numerically using the techniques of Ref. <cit.>.
An inequality of the form of eq:ineq can in fact be proved for heavy local operators in AdS/CFT by using the geodesic approximation and the techniques of computing mirror correlation functions <cit.> (see fig:geom).
In AdS_3/CFT_2, twist operators are local and can be analytically continued to n≈1.
Applying the inequality, we would then find S(AA')≥1/2S_R(A:B)+1/2S_R(A':B') in any geometric purification.
It would be interesting if this argument can be generalized to non-geometric states, so that we can minimize the LHS and find the strengthened inequality.
PR is supported in part by a grant from the Simons Foundation, and by funds from UCSB.
CA is supported by the Simons foundation as a member of the It from Qubit collaboration, the NSF grant no. PHY-2011905, and the John Templeton Foundation via the Black Hole Initiative. This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0360.
§ NON-MAXIMALLY ENTANGLED RTNS
In a standard RTN, the edges are projected onto maximally entangled states.
These RTN states can be deformed to nearby states by simply changing the entanglement spectrum on the edges.
One may then ask whether we can prove E_P=EW for a larger class of states by considering such a deformation, and attempting to enlarge the parameter space where the inequality in Theorem <ref> is saturated.
It turns out the answer is no, and we give an example in this section to highlight the basic issue.
Consider the 1TN model of sub:1TN with a non-maximally entangled leg for subregion C.
This state, for a specific choice of spectrum, is identical to that of the PSSY model, an evaporating black hole in JT gravity coupled to end-of-the-world branes with flavour indices entangled with a radiation system <cit.>.
Here, we will not restrict to the PSSY spectrum, and find more generally how this deformation affects the phase diagram of reflected entropy.
For generality, consider the state |ρ_AB^m/2⟩, a one parameter generalization of the canonical purification.
Ref. <cit.> computed the entanglement spectrum of ρ_AA^* for this state.
It consists of two features: a single pole of weight p_d(m) and a mound of min(d_A^2-1,d_B^2-1) eigenvalues with weight p_c(m).
The weights are given by
p_d(m) = (tr ρ_AB^m/2)^2/(d_A d_B tr ρ_AB^m)
p_c(m) = 1-p_d(m).
Now, we would like to compare the phase diagram of this model with the standard 1TN with maximally entangled legs.
First note that the transition between e and X in fig:1TN is dictated by the location of the entanglement wedge phase transition, which we hold fixed to compare the two models.
Then the remaining question is where the transition from X to g_A/g_B happens.
Consider the region of the phase diagram where d_A>d_B.
The transition happens in the connected sector.
Thus, we have p_c(m)≈ 1 and the spectrum of ρ_AB is well approximated by the spectrum on the C leg.
Using this, we find that the location of the transition for S_R^(2) is given by
p_d(m) =1/d_B.
Using eq:pd, we then have
(2-m)S_m/2-(1-m)S_m=log d_A,
where S_n is the nth Renyi entropy of the non-maximal spectrum on the C leg.
Then it is clear that at m=1, the location of the phase transition is
x_A = S_1/2/S_1≥ 1. The standard 1TN has a flat spectrum, i.e., S_n=S_1 for all n, and the transition is at x_A=1.
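Indeed, as a quick consistency check under the approximation above: a flat spectrum with d_C equal eigenvalues 1/d_C gives tr ρ_AB^(m/2) = d_C^(1-m/2) and tr ρ_AB^m = d_C^(1-m), so eq:pd yields p_d(m) = d_C/(d_A d_B) independently of m; the condition p_d(m)=1/d_B then reads d_A=d_C, i.e., x_A=1, recovering the maximally entangled result.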
Thus, the shaded region where we cannot prove E_P=EW is larger after deforming the RTN to add non-maximally entangled legs.
As a side note, we would like to mention what happens for m≥ 2 where one can use the usual RTN calculation of domain walls with tensions modified by the entanglement spectrum, thus introducing an m dependence <cit.>.
For m≥ 2, we have
x_A = S_m/2/S_1 - (m-1)(S_m/2 - S_m)/S_1 ≤ 1, since S_m≤ S_m/2≤ S_1.
Thus, the X region shrinks for m≥ 2 after deforming the spectrum on the legs.
However, as demonstrated above for m=1, the naive analytic continuation of the result at m≥ 2 fails.
|
http://arxiv.org/abs/2306.05315v1
|
20230608160731
|
Large-scale adaptive multiple testing for sequential data controlling false discovery and nondiscovery rates
|
[
"Rahul Roy",
"Shyamal K. De",
"Subir Kumar Bhandari"
] |
stat.ME
|
[
"stat.ME",
"math.ST",
"stat.TH"
] |
RNN-Based GNSS Positioning using Satellite Measurement Features and Pseudorange Residuals
Ibrahim Sbeity12,
Christophe Villien1,
Benoît Denis1, and E. Veronica Belmega32
1
CEA-Leti, Université Grenoble Alpes,
F-38000 Grenoble, France
2 ETIS UMR 8051, CY Cergy Paris Université, ENSEA, CNRS, F-95000, Cergy, France
3 Univ. Gustave Eiffel, CNRS, LIGM, F-77454, Marne-la-Vallée, France
Emails: [email protected], [email protected]
July 31, 2023
In modern scientific experiments, we frequently encounter data that have large dimensions, and in some experiments, such high-dimensional data arrive sequentially or in stages rather than the full data being available all at once. We develop multiple testing procedures with simultaneous control of false discovery and nondiscovery rates when m-variate data vectors X_1, X_2, … are observed sequentially or in groups and each coordinate of these vectors leads to a hypothesis test. Existing multiple testing methods for sequential data use fixed stopping boundaries that do not depend on sample size and, hence, are quite conservative when the number of hypotheses m is large. We propose sequential tests based on adaptive stopping boundaries that ensure shrinkage of the continue-sampling region as the sample size increases. Under minimal assumptions on the data sequence, we first develop a test, i.e., a stopping and a decision rule, based on an oracle test statistic such that both the false discovery rate (FDR) and the false nondiscovery rate (FNR) are nearly equal to some prefixed levels with strong control at these levels. Under a two-group mixture model assumption, we propose a data-driven stopping and decision rule based on the local false discovery rate statistic that mimics the oracle rule and guarantees simultaneous control of FDR and FNR asymptotically as m tends to infinity. Both the oracle and the data-driven stopping times are shown to be finite (i.e., proper) with probability 1 for all finite m and to converge to a finite constant as m grows to infinity. Further, we compare the data-driven test with the existing “gap” rule proposed in <cit.> and show that the ratio of the expected sample sizes of our method and the “gap” rule tends to zero as m goes to infinity. Extensive analysis of simulated datasets as well as some real datasets illustrates the superiority of the proposed tests over some existing methods.
Key words and phrases:
Average sample number, Compound decision rule, False discovery and nondiscovery proportions, Local false discovery rate, Multiple comparison, Sequential sampling, Stopping rule.
§ INTRODUCTION
Technological advancements in the past few decades have challenged statisticians with the task of drawing inferences for high-dimensional or ultrahigh-dimensional data which led to the development of a myriad of innovative statistical methodologies suited for those settings. Among many high-dimensional statistical problems, large-scale multiple testing got much attention due to its vast applications in many areas including genetics, medical imaging, and astrophysics, to name a few. For instance,“high throughput” devices such as microarrays produce gene expression levels and a statistician needs to compare sick and healthy subjects for thousands or even millions of genes simultaneously (<cit.>). Multiple testing of a large number of hypotheses also arises in many spatial settings such as disease mapping (<cit.>), astronomical surveys (<cit.>), and public health surveillance (<cit.>), among others.
A widely popular error metric for large-scale multiple testing is the false discovery rate (FDR), proposed in the seminal paper by <cit.>, which is the expected value of the false discovery proportion (FDP), defined as the ratio of the number of false rejections to the total number of rejections.
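For reference, the classical fixed-sample Benjamini–Hochberg step-up procedure that targets this error metric can be sketched as follows; this is a standard implementation for a single batch of p-values (function and variable names are ours), not the sequential procedure developed in this paper.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classical fixed-sample BH step-up procedure controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                       # p-values sorted in increasing order
    critical = q * np.arange(1, m + 1) / m      # BH critical values q*k/m
    below = p[order] <= critical
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                    # reject the hypotheses with the k smallest p-values
    return reject

# Example: four p-values tested at level q = 0.05; the two smallest are rejected.
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30], q=0.05))
```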
A substantially large number of statistical methods have been developed to control the FDR or some variations of it, such as the marginal FDR (mFDR, <cit.>), the positive FDR (pFDR, <cit.>) and the false discovery exceedance (FDX, <cit.>), and some of these methodologies are developed in the context of large-scale multiple testing where m grows to infinity.
The above multiple testing procedures are applicable when full data is available at the time of testing and these are known as fixed-sample-size tests. However, there are a number of applications when, instead of full data being available at the time of analysis, data arrive sequentially one at a time or in groups or stages. A statistician may need to draw inferences each time new data arrives based on the data available till that time point. For such sequentially observed data, sometimes referred to as streaming data, one may need to test multiple hypotheses simultaneously. A very natural application of sequential multiple testing is in clinical trials with multiple endpoints where patients are collected sequentially or in groups. At each interim stage of such trials, a decision is made whether to accept or reject or to collect more samples.
Sequential multiple testing has evolved into its current form in the last decade. Three approaches to sequential testing have been adopted by statisticians. In the first approach, for each new time point, the observations corresponding to the m different data-streams are considered to be a vector of size m, i.e., as if these m observations are measurements of m different properties of a single unit. We continue sampling until decisions for all the hypotheses can be made simultaneously. This approach is favorable when the data arrive as units or vectors of size m and therefore the cost of observing the individual coordinates corresponding to different data-streams for each unit is insignificant relative to that of engaging a new unit. Authors who followed this approach include <cit.>.
In the second approach, each data-stream is regarded as the sequential output of a separate experiment, so different data-streams may be stopped at different time points. This approach is helpful when the cost of running the different experiments is higher than that of involving a new unit. Related works include <cit.>.
The third approach is known as online multiple testing. Here, instead of the sequential arrival of new observations, new hypotheses are taken into consideration and the objective is to control some measure of false positives like FDR (<cit.>) and FWER (<cit.>).
In this article, we follow the first approach; i.e., the number of hypotheses (m) is fixed and new observations appear as vectors of size m. In the literature on the first and second approaches, the following pairs of error metrics have been considered for control: FWER-I and FWER-II (<cit.>); FDR and FNR (<cit.>); and the generalized FWER pair (<cit.>). These existing methods perform well when the number of hypotheses is small, but in a large-scale setting the control becomes conservative; i.e., as m grows larger, the attained error metrics become much smaller than the nominal levels, which implies a large stopping sample size. In the non-sequential version of multiple testing, the articles <cit.> provide a line of optimal methods that work well in large-scale scenarios. In this article, we follow their approach to obtain a sequential multiple testing method appropriate for a large number of hypotheses. The main contributions of this paper are the following. I. We provide an oracle rule for a general setup with adaptive boundaries that is proper and achieves FDR and FNR control under minimal conditions. II. Under the two-group mixture model, we develop an adaptive data-driven rule that achieves asymptotic control of FDR and FNR as the number of hypotheses m grows to infinity. III. The stopping times of the oracle and the data-driven rules under the two-group mixture model converge weakly to a finite natural number as m tends to infinity. IV. Finally, we show that the asymptotic relative efficiency of the `GAP' rule compared to our oracle rule under the two-group mixture model tends to zero. The article is organized as follows: Section 2 describes the basic setup and the required assumptions. Section 3 proposes the oracle rule for the general setup and states Theorem <ref>. Section 4 considers the more specialized two-group mixture model setup, formulates the corresponding oracle rule, proposes a data-driven rule, and states Theorems <ref> and <ref>. Section 5 establishes the asymptotic dominance of the stopping time of the asymptotically optimal rule of <cit.> over that of our oracle rule. Section 6 verifies the stated results numerically through simulation studies. Section 7 applies our method to two real datasets. Section 8 concludes the article with related discussions. Proofs of the theorems and lemmas are included in appendices <ref>-<ref>.
§ MODEL DESCRIPTION
Suppose, with respect to some measurable space (Ω,σ,μ), we are required to test m pairs of hypotheses simultaneously. A vector of binary (0,1) random variables θ=(θ_1,θ_2,⋯,θ_m) determines the true states of the hypotheses such that, if θ_i is 0 (1), then the i-th null (alternative) hypothesis is true. A sequence of random m-vectors 𝐒_𝐧= (S_n^1,S_n^2,⋯,S_n^m) is defined on a sequence of monotone increasing sigma fields {σ_n| n ∈ℕ} such that σ(∪_n=1^∞σ_n)⊆σ. S_n^i is called the i-th test statistic at `time' n and is used to test the i-th pair of hypotheses.
A sequential test for testing the m hypotheses is defined as the pair (T,δ), where T is a {σ_n}-stopping time (i.e., the event {T=n} belongs to σ_n for every n∈ℕ) and δ=(δ^1,δ^2,⋯,δ^m)∈{0,1}^m is a decision vector defined on the σ-field σ_T. We say that we reject the i-th null hypothesis (or simply reject) if δ^i=1 and accept otherwise. The dependence of δ on T is suppressed for simplicity of notation. Any sequential test (T,δ) defined in this way makes V=∑_i=1^m(1-θ^i)δ^i false rejections among R=∑_i=1^m δ^i rejections and W=∑_i=1^mθ^i(1-δ^i) false acceptances among m-R=∑_i=1^m(1-δ^i) acceptances when the true states of the hypotheses are given by θ. We follow a Bayesian path by considering θ to be random with some distributional assumption on it. In light of the above discussion, the FDR and FNR of the sequential multiple test (T,δ) are defined as follows:
FDR = E( ∑_i=1^m(1-θ^i)δ^i/(∑_i=1^m δ^i)∨1)
FNR = E (∑_i=1^m(1-δ^i)θ^i/(∑_i=1^m (1-δ^i))∨1)
For simplicity, we have omitted the dependence of the test (T,δ) in the definition of FDR and FNR.
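For a given truth vector θ and decision vector δ, the realized false discovery and nondiscovery proportions inside these expectations can be computed directly; a minimal Python sketch is given below (the function name is ours), FDR and FNR being the expectations of these quantities over repeated experiments.

import numpy as np

def fdp_fnp(theta, delta):
    """Realized false discovery and nondiscovery proportions.
    theta: 0/1 array of true states (1 = alternative); delta: 0/1 decisions (1 = reject)."""
    theta, delta = np.asarray(theta), np.asarray(delta)
    false_rej = ((1 - theta) * delta).sum()        # true nulls that were rejected
    false_acc = (theta * (1 - delta)).sum()        # signals that were accepted
    fdp = false_rej / max(delta.sum(), 1)
    fnp = false_acc / max((1 - delta).sum(), 1)
    return fdp, fnp

# Example: one false rejection and one missed signal among six hypotheses.
print(fdp_fnp([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]))   # (0.333..., 0.333...)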
Fix α,β∈ (0,1). Our objective is to find a test (T,δ) which controls FDR and FNR at levels α and β respectively. Keeping this objective in mind we define a class of tests:
Δ(α,β)= {(T,δ): FDR≤α & FNR≤β}
A desirable criterion for any sequential simultaneous test (T,δ) is to belong to the class Δ(α,β). In the literature, among the existing sequential multiple testing methods, <cit.> and <cit.> provide sequential multiple testing rules (T,δ)∈Δ(α,β). The premise in <cit.> is different from ours, as different coordinates are allowed to have different stopping times. The asymptotically optimal decision rule in <cit.> is more suitable for FWER type I and II control and therefore results in conservative control of FDR and FNR. For large-scale multiple testing problems, such rules are not ideal. Along the lines of the studies by Sun et al. on multiple testing using local fdrs, <cit.> defined the Local Index of Significance (LIS) for coordinate i as:
t_n^*i=ℙ(θ^i=0|𝐒_𝐧) ; i=1(1)m
Suppose that, given θ_i=j, the joint density of 𝐒_𝐧 is f^*_n(𝐒_𝐧|θ_i=j), j=0,1. Then,
t_n^*i=ℙ(θ_i=0)f^*_n(𝐒_𝐧|θ_i=0)/ℙ(θ_i=0)f^*_n(𝐒_𝐧|θ_i=0)+ℙ(θ_i=1)f^*_n(𝐒_𝐧|θ_i=1)
Although LIS statistics are generally defined for continuous random variables, this definition can also be used for discrete test statistics; in that case, f^*_n(𝐒_𝐧|θ_i=j) is a discrete distribution.
Jointly t_n^*1,t_n^*2,⋯,t_n^*m will be used to test the hypotheses
H_0i: θ_i = 0 vs. H_1i: θ_i = 1
A large number of tests can be generalized in this form. For example, suppose, we want to compare the means of some variable of two homogeneous groups where the observations from new units of each group are obtained sequentially. A sequential two-sample t-test is appropriate in such a situation. <cit.> provides a brief review on this topic. Here, if the size of the observations from the j-th group is n_j, j∈{1,2}, we can consider the sequential t statistics with total sample size n (= n_1 + n_2) to be S_n and thus our setup applies here.
In the multiple testing literature, authors have mostly used p values as the test statistics. In the sequential case too, we can compute and update the p values at each sample size n. Similarly, we can compute the LIS statistics provided the null and alternative distributions are known. If the observations are continuous, we can also compute the simple transformation S_ni= Φ^-1(p_ni), where p_ni is the p value corresponding to the i-th coordinate at sample size n and Φ(·) is the standard normal CDF. Such transformations are very useful for testing purposes. For continuous random variables, <cit.> proposed the use of z-scores, and <cit.> showed that, when the two-sided alternatives are asymmetric, the z-score approach has some advantages over p-value-based methods. In our analysis, we rely on the z-score-based approach whenever possible.
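For instance, for a sequential t statistic the p values and the z scores Φ^-1(F_0n(S_n^i)) can be updated at every stage; the following short Python sketch is purely illustrative (the statistics and degrees of freedom are placeholders).

import numpy as np
from scipy import stats

S_n = np.array([0.4, -1.2, 2.5, 0.1, -3.0])   # hypothetical t statistics at stage n
df = 28                                        # e.g. a two-sample t with n1 + n2 - 2 = 28

F0 = stats.t.cdf(S_n, df)                      # null CDF evaluated at the statistics
z = stats.norm.ppf(F0)                         # z-scores: standard normal under the null
p_two_sided = 2 * np.minimum(F0, 1 - F0)       # two-sided p-values, if preferred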
§ ORACLE RULE USING LIS STATISTICS
In this section we propose the oracle adaptive rule for general set up. This rule is based on the perspective of a wise oracle who knows the exact joint distribution of the data streams at each time point. Fix α, β∈(0,1). The rule is described in Algorithm 1.
Discussion 1:
If r_τ+a_τ = m, we have only one choice of s, given by s=r_τ= m - a_τ, i.e., we have a unique decision. Otherwise, s can take any one of the values in {m-a_τ,⋯,r_τ}. For a proper choice of p_1 we can define s to be
s = ⌊ (m - a_τ+1 - r_τ ) p_1 ⌋
where ⌊·⌋ is the greatest integer function. Now, if the probability of a hypothesis being null is π_0, the expected number of null hypotheses is mπ_0. So, if the probability of a null hypothesis being rejected is α_1 and the probability of an alternative hypothesis being falsely accepted is β_1, then the expected total number of rejections is m(π_0α_1+(1-π_0)(1-β_1)) and the expected total number of acceptances is m(π_0(1-α_1)+(1-π_0)β_1). Equating the mFDR, i.e., the ratio of the expected number of false rejections to the expected number of rejections, with α, and the mFNR, i.e., the ratio of the expected number of false acceptances to the expected number of acceptances, with β, gives
α_1 = (1-π_0)/ π_0 - β/(1-β)/(1-α)/ α - β/(1-β)
β_1 = π_0/(1-π_0) - α/(1-α)/(1-β)/ β - α/(1-α)
The idea is that the undecided hypotheses would have the highest LIS values among those rejected (if they are rejected) or the lowest LIS values among those accepted (if they are accepted). Hence they are the most likely to become false-positive or false-negative decisions. Therefore we divide the undecided region into two parts proportional to the expected numbers of false rejections and false acceptances, i.e., we choose p_1 = π_0 α_1 / ((1-π_0) β_1), where the values of α_1 and β_1 are given above.
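As a quick numerical check of the closed forms above (with π_0 = 0.8 and the levels α = 0.05, β = 0.1 used later in the simulations), the Python sketch below computes α_1, β_1 and p_1 and verifies that plugging them back into the mFDR and mFNR expressions recovers α and β.

pi0, alpha, beta = 0.8, 0.05, 0.10
A, B = alpha / (1 - alpha), beta / (1 - beta)
rho = (1 - pi0) / pi0

alpha1 = (rho - B) / (1 / A - B)                      # P(reject | null)
beta1 = (1 / rho - A) / (1 / B - A)                   # P(accept | alternative)
mfdr = pi0 * alpha1 / (pi0 * alpha1 + (1 - pi0) * (1 - beta1))
mfnr = (1 - pi0) * beta1 / (pi0 * (1 - alpha1) + (1 - pi0) * beta1)
p1 = pi0 * alpha1 / ((1 - pi0) * beta1)               # expected false rejections / false acceptances

print(round(alpha1, 5), round(beta1, 5), round(mfdr, 3), round(mfnr, 3), round(p1, 4))
# 0.00735 0.44118 0.05 0.1 0.0667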
Discussion 2:
From algorithm 1, we can define for stage n the adaptive lower cutoff, denoted by t_n^*l as,
t_n^*l = t_n^*(r_n+1) if r_n < m, and t_n^*l = 1 if r_n = m.
Any hypothesis i with LIS value (t_n^*i) less than t_n^*l is considered to be a potential alternative. Similarly we can define the adaptive upper cutoff for stage n denoted by t_n^*u as,
t_n^*u = t_n^*(m-a_n) if a_n < m, and t_n^*u = 0 if a_n = m.
Any hypothesis i with LIS value (t_n^*i) greater than t_n^*u is considered to be a potential null.
Suppose all the components of 𝐒_𝐧 are continuous. For some n∈ℕ, r_n+a_n≥ m if and only if t_n^*l>t_n^*u almost surely. Further, if there are some discrete variables present in the mix then, t_n^*l>t_n^*u implies r_n+a_n≥ m.
Proof of Lemma <ref> is in appendix <ref>. From Lemma <ref>, we see that the first time the criterion t_n^*l>t_n^*u is met is equal to τ almost surely if all the components of 𝐒_𝐧 are continuous, whereas, in the presence of discrete random variables, the corresponding time is almost surely greater than or equal to τ. Due to the analytic advantage of this new criterion, we redefine our stopping rule as:
τ = inf{n ∈ℕ: t_n^*l>t_n^*u}
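Algorithm 1 is referenced above but appears as a float that is not reproduced in this text. For concreteness, the following Python sketch reconstructs one stage of the rule from the cutoff definitions in Discussion 2 and the running-mean inequalities used later in the proofs; the function name and the exact computation of r_n and a_n are our reconstruction rather than a verbatim transcription of Algorithm 1.

import numpy as np

def adaptive_step(lis, alpha, beta):
    """One stage of the adaptive rule applied to a vector of LIS (or lfdr) values.
    Reconstructed sketch: r_n is the largest r whose r smallest LIS values have mean <= alpha,
    a_n the largest a for which the mean of (1 - LIS) over the a largest LIS values is <= beta;
    sampling stops once r_n + a_n >= m, equivalently (for continuous statistics) once the
    adaptive lower cutoff exceeds the adaptive upper cutoff."""
    t = np.sort(np.asarray(lis, dtype=float))
    m = t.size
    low_means = np.cumsum(t) / np.arange(1, m + 1)             # means of the r smallest values
    high_means = np.cumsum(1 - t[::-1]) / np.arange(1, m + 1)  # means of 1 minus the a largest values
    r_n = int(np.sum(low_means <= alpha))                      # running means are non-decreasing
    a_n = int(np.sum(high_means <= beta))
    lower_cut = t[r_n] if r_n < m else 1.0                     # t_n^{*(r_n+1)}, or 1 if r_n = m
    upper_cut = t[m - a_n - 1] if a_n < m else 0.0             # t_n^{*(m-a_n)}, or 0 if a_n = m
    return r_n, a_n, lower_cut, upper_cut, r_n + a_n >= m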
To ensure the termination of the procedure at a finite time, we make the following assumption:
Under H_0i, t_n^*ip→1 and under H_1i, t_n^*ip→0 ∀ i=1(1)m.
This assumption makes sure that as we observe more data, evidence towards the truth increases weakly.
For some n∈ℕ, [n] denotes the set {1,2,⋯,n}
The following theorem proves that the oracle rule for general setup is proper and controls both FDR and FNR.
Fix α,β∈ (0,1). Suppose Assumption <ref> holds and m∈ℕ. Let (τ,δ^*) be the oracle rule characterized by algorithm 1.
then
ℙ(τ<∞)=1
And further,
(τ,δ^*) ∈Δ(α,β)
Proof of Theorem <ref> is included in appendix <ref>.
In practice, it is hard to compute the LIS values under a general setup. Assumption <ref> refers to a two-group mixture model in which the LIS for each coordinate depends on the observations in that coordinate only. Under such a premise, the LIS is shown to equal the local FDR (lfdr) almost surely. <cit.> explored the idea of lfdr and its application to multiple testing using a two-component mixture model. In the next section, we assume a similar model and develop oracle and data-driven rules using the lfdr as the test statistic.
§ ORACLE AND DATA-DRIVEN RULES UNDER TWO GROUP MIXTURE MODEL
§.§ Oracle Rule
In this section, we consider the following assumptions on the statistics 𝐒_n:
θ_i iid∼ Bernoulli(1-π_0)
S_n^i|θ_i ∼ f_n independently
where, f_n(.)=θ_i f_1n(.)+ (1-θ_i)f_0n(.). The lfdr for i at time n is defined as
t_n^i=ℙ(θ_i=0|Z_n^i)
<cit.> showed that Assumption <ref> implies t_n^i=π_0f_0n(S_n^i)/f_n(S_n^i).
Now we introduce the following lemma :
If Assumption <ref> is true, then
ℙ(t_n^*i= t_n^i)=1 ∀ i=1(1)m
This equality occurs as a result of Assumption <ref>. The proof is in appendix <ref>. So, using the lfdr statistics an adaptive oracle rule similar to that of the oracle rule for the general setup may be formulated. Fix α,β∈ (0,1). The rule is obtained by replacing t_n^*i with t_n^i in algorithm 1.
Discussion:
Like before we can define for stage n the adaptive lower cutoff, denoted by t_n^l as,
t_n^l = t_n^(r_n+1) if r_n < m, and t_n^l = 1 if r_n = m.
Any hypothesis i with LFDR (t_n^i) less than t_n^l is considered to be a potential alternative. Similarly we can define the adaptive upper cutoff for stage n denoted by t_n^u as,
t_n^u = t_n^(m-a_n) if a_n < m, and t_n^u = 0 if a_n = m.
Any hypothesis i with LFDR (t_n^i) greater than t_n^u is considered to be a potential null.
We can define the stopping time to be
T = inf{n∈ℕ : t_n^l > t_n^u}
This new stopping time, as before, is almost surely equal to the original stopping time in the continuous case, and greater than or equal to the original stopping time in the discrete case.
The following theorem shows that the oracle rule under the independent mixture model is proper and controls FDR and FNR at desired levels.
Fix α,β∈ (0,1). Suppose Assumptions <ref> and <ref> hold and m∈ℕ. Let (T,δ) be the oracle rule for the independent mixture model obtained by replacing t_n^*i with t_n^i in Algorithm 1. Then,
ℙ_θ(T<∞)=1
Further,
(T,δ) ∈Δ(α,β)
Proof of Theorem <ref> is included in appendix <ref>.
§.§ Data-driven Rule:
From our earlier discussion, it is established that t_n^i= π_0f_0n(S_n^i)/f_n(S_n^i), but in practice π_0, f_0n(·) and f_n(·) are not known. Therefore, we cannot use the oracle rule directly. When 𝐒_𝐧 is a vector of independent continuous random variables, estimates of all of these quantities are readily available in the literature. For example, when the sample size is n, if the null distribution of each statistic is F_0n, we can construct the z scores 𝐙_𝐧=(Z_n^1,Z_n^2,⋯,Z_n^m) as
Z_n^i = Φ^-1 (F_0n(S_n^i))
Then the null distribution of these z scores is standard normal. An estimate of the null proportion π_0 can be obtained following <cit.>, and the denominator can be obtained using a kernel density estimator. If the null distribution is unknown, it can be estimated empirically from the data. Once all the unknown quantities are estimated, we obtain the estimated local fdr values t̂_n^i, i∈{1,2,⋯,m}, and we can replace t_n^*i with t̂_n^i in Algorithm 1 to get the data-driven rule for the independent mixture model setup for fixed α,β∈(0,1).
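As a concrete illustration of this plug-in construction, the sketch below estimates the local fdr from z-scores with a standard normal null. The null-proportion estimator is a simple Storey-type plug-in and the mixture density is a Gaussian kernel density estimate with SciPy's default bandwidth; both are stand-ins chosen for brevity rather than the specific estimators cited above.

import numpy as np
from scipy import stats

def estimate_lfdr(z, lam=0.5):
    """Plug-in estimate pi0_hat * phi(z) / f_hat(z) of the local fdr for z-scores
    whose null distribution is standard normal; lam is the tuning parameter of the
    Storey-type null-proportion estimate (an illustrative stand-in)."""
    z = np.asarray(z, dtype=float)
    p = 2 * stats.norm.sf(np.abs(z))                     # two-sided p-values
    pi0_hat = min(1.0, np.mean(p > lam) / (1 - lam))     # Storey-type estimate of pi0
    f_hat = stats.gaussian_kde(z)(z)                     # kernel estimate of the mixture density
    lfdr = pi0_hat * stats.norm.pdf(z) / np.maximum(f_hat, 1e-12)
    return np.clip(lfdr, 0.0, 1.0)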
Discussion:
Like earlier, we can define for stage n the adaptive lower cutoff, denoted by t̂_n^l as,
t̂_n^l = t̂_n^(r̂_n+1) if r̂_n < m, and t̂_n^l = 1 if r̂_n = m.
Any hypothesis i with estimated LFDR (t̂_n^i) less than t̂_n^l is considered to be a potential alternative. Similarly we can define the adaptive upper cutoff for stage n denoted by t̂_n^u as,
t̂_n^u = t̂_n^(m-â_n) if â_n < m, and t̂_n^u = 0 if â_n = m.
Any hypothesis i with estimated LFDR (t̂_n^i) greater than t̂_n^u is considered to be a potential null.
Suppose each coordinate of 𝐒_n is a continuous random variable. For some n∈ℕ, r̂_n+â_n≥ m if and only if t̂_n^l>t̂_n^u almost surely.
Proof of Lemma <ref> is similar to that of Lemma <ref> and therefore omitted. Due to Lemma <ref>, t̂_n^u < t̂_n^l occurs for the first time at n=T_d. So we can restate the data-driven rule by changing the stopping criteria as:
“Stop sampling as soon as t̂_n^l >t̂_n^u "
For the data-driven rule to work properly, we require the estimated local fdrs t̂_n^i to be close to the actual local fdr values t_n^i. We can ensure asymptotic accuracy of the method, i.e., that the error rates are asymptotically controlled at the prefixed levels (Theorem <ref>), if the following consistency property of the local fdr estimates holds for each coordinate.
As m↑∞, for all j ∈{1,2,⋯,m },
t̂_n^jp→ t_n^j.
The following theorem proves that the above procedure controls both FDR and FNR asymptotically.
Fix α,β∈ (0,1). Define the class of sequential multiple tests that controls FDR and FNR respectively at levels α and β asymptotically as:
Δ'(α,β) = {(T,𝐝) | lim_m↑∞ FDR ≤α and lim_m↑∞ FNR ≤β}
Now suppose assumptions <ref>, <ref>, <ref> holds and m∈ℕ. Let (T_d,δ̂) be the data-driven rule obtained by replacing t_n^*i with t̂_n^i in algorithm 1. Then,
(T_d,δ̂) ∈Δ'(α,β)
The proof is included in appendix <ref>.
As an intermediate step in proving Theorem <ref>, we stated and proved Lemma <ref>. This lemma and Corollary <ref> are important results of this paper. Lemma <ref> says that if the statistics 𝐒_n are continuous and Assumptions <ref>, <ref> and <ref> are satisfied, then the stopping time T_d of the data-driven rule converges weakly to a finite non-stochastic positive integer n_0 as the number of hypotheses m grows to infinity. Corollary <ref> yields the same result for the oracle stopping time. This is important because, in the existing literature, there is no sequential multiple testing method whose stopping time remains bounded as the number of hypotheses grows indefinitely. As we shall see in the next section, the stopping time of the only competitor of our method, <cit.>, converges to infinity. This result allows our method to handle large-scale multiple testing problems at a very low sampling cost; the phenomenon is illustrated in figure <ref>.
§ COMPARISON WITH EXISTING COMPETITOR
<cit.> proved that if the number of alternatives K is known, then among all tests that reject exactly K hypotheses, the Gap rule defined in <cit.> is asymptotically optimal for simultaneously controlling FDR and FNR as α and β tend to 0, provided for each coordinate a simple null is tested against a simple alternative and the observations in each coordinate are iid. They also study the asymptotics of m as a function of α∧β and provide some restrictions under which the optimality holds. However, the asymptotic behavior of the expected stopping time as m↑∞ has not been studied. In this section we show that, for the independence setup described in Assumption <ref>, the stopping time of the Gap rule goes to ∞ in a weak sense.
Suppose, assumptions <ref>, <ref> hold. Let m_1 be the number of alternatives and T_g^* be the stopping time corresponding to the asymptotically optimal Gap rule for controlling FDR and FNR respectively at level α and β as mentioned in Theorem 3.1 in <cit.>. Then for any L∈ℕ,
lim_m↑∞ℙ(T_g^*<L) = 0
Proof of lemma <ref> is included in appendix <ref>.
If assumptions <ref>, <ref> holds and if T and T_g^* be the stopping times of the oracle rule and asymptotically optimal Gap rule then,
E(T)=o(E(T_g^*))
Proof of corollary <ref> follows from corollary <ref> and Lemma <ref>
§ SIMULATION STUDY
In this section, our goal is to numerically validate the theoretical claims made earlier. The foremost of these is that the performance of the data-driven rule is asymptotically equivalent to that of the oracle rule. We also showed theoretically that the expected stopping time of the Gap rule proposed by <cit.> is much higher than that of the oracle rule; numerical validation of this is provided here. Also, the adaptive Gap rule, whose decision cutoff is obtained using Monte Carlo simulations, cannot be compared theoretically, so the comparison is made through numerical examples. Finally, we compare the performance of our sequential multiple testing rule with the Benjamini-Hochberg rule. Each numerical example is considered in two cases: a sparse case with π_1 = 0.2 and a dense case with π_1 = 0.8.
We assume that, θ_0 = (θ_01,θ_02,⋯,θ_0m)∈{0,1}^m is a realised value on the unobservable random binary indices θ according to (<ref>). Given θ_0, the data {X_n^i| n ∈ℕ , i ∈ [m]} is generated from the mixture model
X_n^i iid∼ (1-θ_0i) f_0(x) + θ_0i f_1(x)
For the oracle rule, the computation of the local fdr is straightforward since the parameters and the distributions are assumed to be known. For the data-driven rule, the null proportion 1-π_1 is estimated using the method described in <cit.> for the continuous case. Here, we compute the z scores since that method requires the null distribution to be Gaussian. For the continuous case, the mixture density is estimated by kernel density estimation, and the null distribution is taken to be standard normal. The estimation method of <cit.> requires a particular choice of the tuning parameter λ, which was fixed at 0.1. The only case in which a discrete distribution was assumed was to test whether the observations follow a Binomial(7,0.1) (f_0) or a Binomial(7,0.3) (f_1) distribution; in that case, the null proportion 1-π_1 was estimated using the method described by <cit.>. With that established, the other parts of the local fdr expression are known since the pairs of hypotheses in question are both simple. Once the local fdr values are obtained at each sample size, we follow the steps described in Algorithm 1. The corresponding code is straightforward to implement and will be made publicly available by the authors. Throughout this section, we have fixed α at 5% and β at 10%.
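For concreteness, a compact sketch of the oracle rule on the Gaussian mixture of Example 1 (μ_0 = 0, μ_1 = 0.25) is given below: at stage n the standardized running mean is N(0,1) under the null and N(μ_1√n,1) under the alternative, so the oracle lfdr has a closed form, and the stage-wise thresholds are computed as in the earlier adaptive-step sketch. The seed and variable names are ours, and the final rejection set corresponds to the admissible choice s = r_n.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m, pi1, mu1, alpha, beta = 2000, 0.2, 0.25, 0.05, 0.10
theta = rng.binomial(1, pi1, size=m)                 # hidden states

sums, n, stop = np.zeros(m), 0, False
while not stop:
    n += 1
    sums += rng.normal(mu1 * theta, 1.0)             # one new m-vector per stage
    z = sums / np.sqrt(n)                            # N(0,1) null, N(mu1*sqrt(n),1) alternative
    lfdr = (1 - pi1) * stats.norm.pdf(z) / (
        (1 - pi1) * stats.norm.pdf(z) + pi1 * stats.norm.pdf(z - mu1 * np.sqrt(n)))
    t = np.sort(lfdr)
    r_n = int(np.sum(np.cumsum(t) / np.arange(1, m + 1) <= alpha))
    a_n = int(np.sum(np.cumsum(1 - t[::-1]) / np.arange(1, m + 1) <= beta))
    stop = r_n + a_n >= m

reject = lfdr <= t[r_n - 1] if r_n > 0 else np.zeros(m, bool)
print("stopping time:", n, " rejections:", int(reject.sum()))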
Numerical study 1: In this example, we compare the performance of the oracle rule (LfdrI_0) with the data-driven rule (LfdrI_D). As a data generating process, we consider the example 1 with μ_0 = 0 and μ_1= 0.25. The average sample numbers (ASN) were computed based on 50 Monte Carlo runs. Figure <ref> shows an ASN versus m plot. The plots for both sparse and dense cases support our theoretical claim.
Numerical study 2: The performance of LfdrI_0 and LfdrI_D is measured against that of the GAP rules described in <cit.>. To implement the GAP rule, we need to know the number of rejections. To make it comparable with our method, we generate the data using (<ref>), note the number of rejections in each case, and use it to implement the GAP rule. We also note that the GAP rule is only implementable when the hypotheses to be tested are simple, and we carefully choose such cases. The authors of <cit.> provide one asymptotically optimal rule (GAPao), for which a theoretical value of the cutoff is given. For practical purposes, another adaptive rule is also mentioned, in which the boundary is obtained by simulation. Whenever this rule (GAPsb) has been implemented, we have used 50 Monte Carlo runs and computed the average false discovery proportion (f̂dr) and the average false nondiscovery proportion (f̂nr). We have repeated this process for different cutoff values, starting from 0.1 in steps of 0.1, until we obtain a cutoff ĉ for which f̂dr≤α and f̂nr≤β. Then, for ĉ, we again compute the ASN, f̂dr and f̂nr with 200 Monte Carlo runs.
We consider example 1 with μ_0 = 0 and μ_1=0.25. Then for LfdrI_0, LfdrI_D, GAPao and GAPsb, we generate plots of ASN, f̂dr and f̂nr (each computed for 200 Monte Carlo runs) against m in figure <ref>.
In table <ref>, we list the ASN, f̂dr and f̂nr of LfdrI_0, LfdrI_D, GAPao and GAPsb for the testing problems discussed below. In each case, we used 200 Monte Carlo runs to compute the various quantities. We also report the savings in ASN from using LfdrI_D compared to GAPao and GAPsb. For the GAPao rule, f̂dr and f̂nr have been omitted because, in each case, both have an average value of 0.
The examples considered in table <ref> are listed below.
Example 1: Here, X_n^i|(θ_i=θ_0i) iid∼ (1-θ_0i) N(μ_0,1) + θ_0i N(μ_1,1). For table <ref>, we consider μ_0 =0 and μ_1 = 0.25.
Example 2: Here, X_n^i|(θ_i=θ_0i) iid∼ (1-θ_0i) exp (λ_0)+ θ_0i exp (λ_1). For table <ref>, we consider λ_0 =1 and λ_1 = 1.2.
Example 3: Here, X_n^i|(θ_i=θ_0i) iid∼ (1-θ_0i) N(0,σ_0) + θ_0i N(0,σ_1). For table <ref>, we consider σ_0 =1 and σ_1 = 1.2.
Example 4: Here, X_n^i|(θ_i=θ_0i) iid∼ (1-θ_0i) Binomial(7,p_0) + θ_0i Binomial(7,p_1). For table <ref>, we consider p_0 =0.1 and p_1 = 0.3.
For each of these examples, we observe that the performance of LfdrI_0 and LfdrI_D is equivalent. The most interesting observation is that, in both cases, f̂dr and f̂nr are almost equal to the desired levels α and β respectively. This proximity is more prominent and stable for LfdrI_0 than for LfdrI_D, which supports the established theory that FDR and FNR control is exact for the oracle rule and asymptotic for the data-driven rule. We can therefore hope to prove our method to be optimal in some sense in future work. In each case, GAPao results in zero f̂dr and f̂nr with zero standard error and is therefore omitted from the table. Savings with respect to LfdrI_D are reported for GAPao and GAPsb. These savings are much higher for GAPao due to its conservative nature; they are somewhat smaller, but still significant, for GAPsb. Table <ref> therefore shows clear dominance of LfdrI_D over the GAP rules. The difference in savings between the sparse and dense cases results from the asymmetry in the values of α and β chosen here. Similar results are obtained for large values of m in Example 1, as shown in figures <ref> and <ref>.
Numerical study 3: This study is designed so that we can compare the performance of the sequential multiple testing method LfdrI_D with some existing fixed-sample rules, among which the rule (BH) devised by <cit.> is the most popular. Also, being an FDR-controlling rule, it is a natural competitor for us. For comparison purposes, we consider different hypothesis testing problems. In each case, we first run LfdrI_D for 200 Monte Carlo runs with α and β fixed at the levels mentioned earlier and note the ASN, f̂dr and f̂nr. Then we run BH with the same α but different sample sizes n and note f̂nr until we obtain an n̂ that yields an f̂nr very close to that obtained from LfdrI_D. We report n̂, f̂dr and f̂nr, as well as the savings in the ASN of LfdrI_D compared with n̂ of BH. We use the BH function in the library "Mutoss" in the Bioconductor repository for this purpose. The results are reported in table <ref>. The following examples have been considered.
Example 5: At each stage n∈ℕ, for each coordinate i∈[m], two observations are made: X_n^i invariably comes from the null distribution (control) and Y_n^i comes from a mixture density (case); i.e., this is a sequential case-control or two-sample study. For the single-hypothesis framework, such problems have a wide literature; one can refer to <cit.> for some insights. Define n_X (n_Y) to be the number of new observations of the control (case) variable made at each stage. Here we assume n_X=n_Y=1. The alternative density f_1(x) of the mixture distribution is considered to be bimodal, i.e.,
X_n^i iid∼ N(μ_0,σ_0)
Y_n^i | (θ_i=θ_0i) iid∼ (1-θ_0i) N(μ_0,σ_1) + θ_0i (η_i N(μ_1,σ_1) + (1-η_i) N(μ_2,σ_1))
with η_i iid∼ Bernoulli(p_1). Here, we considered μ_0 = 0, μ_1=0.25, μ_2 = -0.5, σ_0=σ_1=1 and p_1=0.75. We performed a two-sample Welch test for each coordinate. The Welch statistic becomes our test statistic, and we perform LfdrI_D based on these statistics. We use the p values from these tests to perform BH.
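The per-coordinate Welch statistics and p values used here can be computed directly with SciPy; the sketch below is illustrative, with X and Y standing for the control and case observations accumulated up to the current stage (the data are placeholders).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, n_x, n_y = 100, 25, 25                          # illustrative sizes only
X = rng.normal(0.0, 1.0, size=(n_x, m))            # control observations so far
Y = rng.normal(0.0, 1.0, size=(n_y, m))            # case observations so far

res = stats.ttest_ind(Y, X, axis=0, equal_var=False)   # Welch test per coordinate
S_n, p_n = res.statistic, res.pvalue
z_n = np.sign(S_n) * stats.norm.ppf(1 - p_n / 2)   # signed z-scores (one common convention)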
Example 6: We assume
X_n^i|(θ_i=θ_0i) iid∼ (1-θ_0i)N(μ_0,σ^2)+θ_0i N(μ_1,σ^2)
Here, we considered μ_0 = 0, μ_1 = 0.25, σ=1. We performed Student's t test for this example; LfdrI_D and BH were performed as before based on the test statistics.
Example 7: We assume
X_n^i|(θ_i=θ_0i) iid∼ (1-θ_0i)Cauchy(μ_0,1)+θ_0iCauchy(μ_1,1)
Here, we considered μ_0 = 0, μ_1 = 0.25. As the test statistic, we calculated the log-likelihood ratio. The exact distribution of this statistic is not known; therefore, at each stage, we simulated 100000 null values of the statistic, estimated the null CDF from these simulated values, obtained the z scores, and proceeded as before. For computing the p values, we used the same simulated null log-likelihood ratios and the fact that a high likelihood ratio provides evidence against the null hypothesis, i.e., a one-sided test is appropriate.
For each of these examples, we observe that the savings from using LfdrI_D relative to BH are large. The discrepancy in savings between the sparse and dense cases is due to the difference in the null proportion 1-π_1, which plays a major role in FDR control for the BH method.
§ DATA APPLICATION
In this section, we apply our method to real datasets and compare it against the Benjamini-Hochberg method and the local fdr based method proposed by <cit.>. We use two datasets for this purpose. The first is the Prostate Data (<cit.>) obtained from https://efron.ckirby.su.domains/LSI/datasets-and-programs/https://efron.ckirby.su.domains/LSI/datasets-and-programs/. The dataset contains gene expression data on 6033 genes from 102 prostate cells. Among them, 52 were from tumor cells and the rest from nontumor cells. This therefore constitutes a case-control study, and the goal is to identify genes whose expressions are associated with prostate tumors.
Suppose X_ij represents the j-th gene expression associated with the i-th prostate tumor cell and Y_kj represents the j-th gene expression associated with the k-th prostate cell without tumor. Here, j = 1(1)6033, i = 1(1)52 and k = 1(1)50. We perform a two-sample t test for each gene. For the i-th gene we compute the statistic
S_n^i = X̅_n1^i - Y̅_n2^i/σ̂_n1,n2^i√(1/n1+1/n2)
where n1 is the number of observations from the prostate tumor cells, n2 is the number from the prostate cells without tumor, and n=n1+n2. X̅_n1^i, Y̅_n2^i and σ̂_n1,n2^i are, respectively, the mean of the i-th gene expression over the n1 tumor cells, that over the n2 non-tumor cells, and the pooled standard deviation (with denominator n-2) for the full data with n1 and n2 samples for the i-th gene. This is the two-sample t statistic corresponding to samples of sizes n1 and n2. Under the assumption that
X_ijiid∼ N(μ_1,σ^2)
and
Y_ijiid∼ N(μ_2,σ^2)
for all i=1(1)6033, the statistics S_n^i independently follow a t distribution with n-2 degrees of freedom.
After standardizing and quantile normalizing the dataset, we compute for the full data, i.e., for n1=52 and n2=50, the statistic S_102^i corresponding to the i-th gene, i=1(1)6033. Our objective is to test
H_0i: μ_1 = μ_2 versus H_1i: μ_1 ≠μ_2
Therefore, a two-sided test is appropriate here, and we compute the p values accordingly, i.e.,
p_n^i = 2 min{F_100(S_n^i),(1-F_100(S_n^i)) }
Here, F_n corresponds to the cdf of a t distribution with n degrees of freedom.
Similarly, we calculate the z scores as
z_n^i = Φ^-1(F_100(S_n^i))
The following picture shows the histogram of the p values and z scores for the full data.
The histograms show that the z scores follow a normal distribution and the p values follow a uniform distribution (except perhaps for a larger frequency near 0), which justifies our assumption of normality for the data. The graphs indicate that the proportion of alternatives is relatively very small (i.e., this data is sparse).
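The gene-level statistics, p values and z scores described above can be computed in a vectorized way; the following Python sketch mirrors the formulas for S_n^i, p_n^i and z_n^i, with X and Y standing for the (already standardized and quantile-normalized) tumor and non-tumor expression matrices.

import numpy as np
from scipy import stats

def gene_statistics(X, Y):
    """Pooled two-sample t statistics, two-sided p-values and z-scores per gene (column).
    X: tumor samples (n1 x m), Y: non-tumor samples (n2 x m)."""
    n1, n2 = X.shape[0], Y.shape[0]
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * X.var(axis=0, ddof=1) + (n2 - 1) * Y.var(axis=0, ddof=1)) / df
    S = (X.mean(axis=0) - Y.mean(axis=0)) / np.sqrt(pooled_var * (1 / n1 + 1 / n2))
    F0 = stats.t.cdf(S, df)
    return S, 2 * np.minimum(F0, 1 - F0), stats.norm.ppf(F0)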
First, we apply the Benjamini Hochberg rule (<cit.>) for the full data. For this, we set α=0.05. The number of positive genes we thus obtained is 21. We follow this by the AdaptZ method described by <cit.> with the same value of α. Here we obtained a total of 29 genes responsible for prostate cancer.
Finally, we apply our sequential multiple testing method to the dataset. We start with a pilot sample of 50 cells, among which 25 were from the tumor cells (case) and 25 from the normal cells (control). We then computed the z scores corresponding to the two-sample t-test statistics (which under the null follow independent t distributions with 48 degrees of freedom) for each gene expression value from the 25 cases and 25 controls. These z scores are used to compute the local fdr estimates for each of the 6033 genes. We then follow the second step of the LfdrI_D algorithm to obtain the number of potential nulls (â_50) and potential alternatives (r̂_50) at levels α=0.05 and β=0.1. We note that here r̂_50 = 14 and â_50=5982, i.e., r̂_50+â_50 = 5996, which is less than 6033, so we proceed to the next stage. At each subsequent step, we obtain a new sample from either the case or the control group with equal probability. With this updated sample of size 51, we repeat the same process. We continue until, for some n, r̂_n+â_n≥ 6033. This n is the stopping time.
For β=0.1, the stopping time is 69 and we discover 12 positive genes. Here, the saving in sample size is high (almost 32.4%), but the number of discoveries is much smaller than for both the Benjamini-Hochberg and AdaptZ methods. If we consider β=0.07, however, the stopping time is 87 (with a sample-size saving of about 15%), but our method discovers 23 positive genes, which is higher than the number of discoveries made by the BH method.
The second dataset we consider is the dataset collected and used by <cit.>. The dataset consists of gene expression levels on 7129 genes from the bone marrow tissues of 72 acute leukemia patients. Among them, 47 were suffering from acute lymphoblastic leukemia (ALL) and the remaining 25 were acute myeloid leukemia (AML) patients. Our goal is to discover genes that discriminate between these two types of leukemia.
For this, we consider a Gaussian mixture distribution, as before, for each gene expression value of each patient. Therefore, a two-sample t-test is applicable here as well. The following figure shows the histograms of the z scores and p values for the full dataset.
The histogram of the p values has a peak near 0 and otherwise appears uniformly distributed. The histogram of the z scores resembles a normal distribution with slightly heavy tails. This tells us that the dataset is moderately sparse.
The BH method with α=0.05 discovers 1280 positive genes. The AdaptZ method at the same level of α identifies 1300 genes that discriminate between these two types of leukemia.
The LfdrI_D method with α=0.05 and β=0.1 does not stop within the given dataset (i.e., the stopping criterion is not met before the full sample is used), and it also identifies 1300 positive genes.
From the above data applications, we can therefore conclude that the data-driven method either saves sample size or performs as well as the optimal AdaptZ rule for the nonsequential case.
§ PROOF OF THEOREM 1
Let 𝒜={ i ∈ [m] : θ^i=1 } be the set of signals. Note that, if at stage n, t_n^*i≤α ∀ i ∈𝒜 and t_n^*i≥(1-β) ∀ i ∈𝒜^c, then we must have τ≤ n, i.e.,
ℙ(τ≤ n) ≥ ℙ({∩_i ∈𝒜{ t_n^*i≤α}}∩{∩_i ∈𝒜^c{ t_n^*i≥(1-β) }} )
≥ ∑_i ∈𝒜ℙ^i_1(t_n^*i≤α) + ∑_i ∈𝒜^cℙ^i_0 (t_n^*i≥(1-β))-m+1
≥ 1- ∑_i ∈𝒜ℙ^i_1(t_n^*i>α) - ∑_i ∈𝒜^cℙ^i_0 (t_n^*i<(1-β))
The second line is due to Boole's inequality. Here, ℙ^i_j(A)=ℙ(A|θ_i = j) ∀ i = 1(1)m and j=0,1. The proof is complete if we take the limit n↑∞ on both sides of the inequality and use Assumption <ref>.
To prove the second part, first note that, for any time point n, both 1/q∑_l=1^q t_n^*(l) and 1/q∑_l=1^q(1-t_n^*(m-l+1)) are increasing in q∈[m] almost surely. Now we introduce the following two lemmas:
If l ∈ [r_τ] almost surely, the test (τ,𝐃') with 𝐃'=(D'^1,D'^2,⋯, D'^m) given by
D'^i = 1 (t_τ^*i≤ t_τ^*(l))
controls FDR at level α for all θ∈{0,1}^m.
and
If l ∈ [a_τ] almost surely, the test (τ,𝐝') with 𝐝'=(d'^1,d'^2,⋯, d'^m) given by
d'^i = 1 (t_τ^*i < t_τ^*(m-l+1))
controls FNR at level β for all θ∈{0,1}^m.
Proofs of Lemma <ref> and <ref> are provided in appendix <ref>.
Now, for any s = (m-a_τ)(1)r_τ and the rejection region ℜ^* given by Algorithm 1, FDR≤α by Lemma <ref>, and for the acceptance region [m]∖ℜ^*, FNR≤β by Lemma <ref>.
So, (τ,δ^*)∈Δ(α,β) using Lemma <ref> and <ref>.
§ PROOF OF THEOREM 2
First we state the following lemma:
Provided assumption <ref> is true, for any random finite stopping time T, defined on {σ_n},
ℙ_θ(t_T^*i≠ t_T^i)=0
Proof of Lemma <ref>. is in appendix <ref>. Lemma <ref>. ensures that, although T depends on all the m data-streams, due to assumption <ref>, for making inference about the i-th hypothesis, observations from the i-th data-stream are sufficient. The proof of theorem <ref> follows from theorem <ref> and Lemma <ref>..
§ PROOF OF THEOREM 3
As discussed earlier, the hypothesis corresponding to data-stream i with t̂_n^i<t̂_n^l are considered potential alternatives and the hypothesis corresponding to data-stream i with t̂_n^i>t̂_n^u are considered potential nulls. If there are some data-streams i with t̂_n^l ≤t̂_n^i ≤t̂_n^u, we observe a new sampling unit. It is evident that if for some n, t̂_n^l>t̂_n^u, each data-stream is either a potential null or a potential alternative (or both!). So we can stop our sampling procedure. Lemma <ref> says that for continuous test statistics, the opposite is also true; i.e., at stopping time T_d, we must have t̂_T_d^l>t̂_T_d^u with probability 1.
From the discussion above, we get an alternative definition of the stopping time T_d; namely, we stop at the earliest time n when the adaptive rejection boundary t_n^l is greater than the adaptive acceptance boundary t_n^u. i.e., T_d=inf{n∈ℕ:t̂_n^l>t̂_n^u}. This definition widens the scope to study the asymptotic behavior of the stopping time as the number of data-streams m diverges to ∞. But first, we need to establish the asymptotic properties of t̂_n^l and t̂_n^u. Define,
Q̂_n(t) = ∑_j=1^m1(t̂^j_n≤ t)t̂^j_n/∑_j=1^m1(t̂^j_n≤ t) for t ∈ [t̂_n^(1),1], and Q̂_n(t) = 0 for t∈[0,t̂_n^(1))
The following lemma describes some properties of the function Q̂_n(t):
* Q̂_n(t) is constant in the interval [t̂_n^(r),t̂_n^(r+1)) for r=1(1)m-1 and in the intervals [0,t̂_n^(1)) and [t̂_n^(m),1]
* Q̂_n(t) is right-continuous in (0,1).
* Q̂_n(t) is non-decreasing in [0,1].
Proof of Lemma <ref> is in appendix <ref>. Now note that, if r̂_n=m, by definition of r̂_n, Q̂_n(1)≤α. So, sup{t∈ [0,1]: Q̂_n(t)≤α}=1 and if r̂_n<m, sup{t∈ [0,1]: Q̂_n(t)≤α}=t̂_n^(r̂_n+1). So,
sup{t∈ [0,1]: Q̂_n(t)≤α}=t̂_n^l
The following lemma ensures a weak non-stochastic limit to t̂_n^l.
Define 𝒬_n(t)=π_0 ℙ^1_0(t_n^1≤ t)/ℙ_θ(t_n^1≤ t)∨1 for t∈[0,1].Let,
𝒯_n^l=sup{t ∈ [0,1]: 𝒬_n(t)≤α}. Then,
t̂_n^l p→𝒯_n^l
Proof of Lemma <ref> can be found in the proof of Lemma A.5 in <cit.>. We define
Q̂'̂_n(t) = ∑_j=1^m1(t̂^j_n≥ t)(1-t̂^j_n)/∑_j=1^m1(t̂^j_n≥ t) for t ∈ [0,t̂_n^(m)], and Q̂'̂_n(t) = 0 for t∈(t̂_n^(m),1]
The function Q̂'̂_n(t) has the following properties:
* Q̂'̂_n(t) is constant in the interval (t̂_n^(m-r),t̂_n^(m-r+1)], for r=1(1)m-1 and in the intervals [0,t̂_n^(1)] and (t̂_n^(m),1]
* Q̂'̂_n(t) is left-continuous in (0,1).
* Q̂'̂_n(t) is non-increasing in [0,1].
The proof is similar to the proof of Lemma <ref> and so is omitted. It is easy to see that.
t̂_n^u=inf{t∈ [0,1]: Q̂'̂_n(t)≤β}
Finally, we find the limiting value of t̂_n^u from the following lemma.
Define 𝒬_n'(t)=π_1ℙ^1_1(t_n^1≥ t)/ℙ_θ(t_n^1≥ t)∨1 for t∈[0,1]. Let 𝒯_n^u=inf{t ∈ [0,1]: 𝒬_n'(t)≤β}. Then,
t̂_n^u p→𝒯_n^u
Define, ŝ_n=t̂_n^l-t̂_n^u and 𝒮_n=𝒯_n^l-𝒯_n^u. Then,
ŝ_np→𝒮_n as m↑∞
The result follows directly from Lemmas <ref> and <ref>.
The next lemma ensures a finite stopping time for the data-driven rule even for an indefinitely large number of hypotheses.
Suppose assumptions <ref>, <ref> and <ref> hold true. Then,
lim_m↑∞ℙ (T_d<∞)=1
and,
lim_m↑∞ℙ (T_d≠ n_0)=0
where,
n_0=inf{n∈ℕ: 𝒯_n^l>𝒯_n^u}
Suppose assumptions <ref> and <ref> hold true. Then,
lim_m↑∞ℙ (T<∞)=1
and,
lim_m↑∞ℙ (T≠ n_0)=0
where,
n_0=inf{n∈ℕ: 𝒯_n^l>𝒯_n^u}
Therefore, as the number of hypotheses increases, the oracle stopping time T converges weakly towards a finite natural number n_0. Proof of corollary <ref> is a consequence of the proof of Lemma <ref>.
The proof of Theorem <ref> is completed by the following lemma.
Suppose assumptions <ref>, <ref> and <ref> hold. Let s∈[m-â_T_d,r̂_T_d]. Define 𝐃̂'̂=(D̂'̂^1,D̂'̂^2,⋯,D̂'̂^m) where
D̂'̂^i=1(t̂_T_d^i≤t̂_T_d^(s))
Then, for such a test (T_d,𝐃̂'̂)∈Δ'(α,β)
§ PROOF OF LEMMAS
First, note that, for continuous test statistics 𝐒_n, ℙ(t_n^*i=1)=ℙ(t_n^*i=0)=0 ∀ i = 1(1)m.
if part:
We assume that, t_n^*l>t_n^*u. For the trivial cases, i.e., when t_n^*u=0 for example, by definition, a_n=m. Therefore, r_n+a_n≥ m almost surely. The same follows when t_n^*l=1.
So, if the non-trivial case is true, i.e., 0<t_n^*u<t_n^*l<1,
by definitions of t_n^*l and t_n^*u, t_n^*l=t_n^*(r_n+1) and t_n^*u=t_n^*(m-a_n). And finally, r_n+a_n≥ m almost surely. The second part of the lemma is therefore proved.
only if part:
Let r_n+a_n≥ m. For r_n=m, we know, t_n^*l=1 almost surely, which in turn proves that t_n^*l>t_n^*u almost surely. Similarly if a_n=m, the same result follows.
For the non trivial case, i.e., when r_n<m and a_n< m, we have t_n^*l=t_n^*(r_n+1) and t_n^*u=t_n^*(m-a_n). Finally we have r_n+a_n≥ m i.e., r_n+1 > m -a_n, i.e., t_n^*l>t_n^*u almost surely.
t_n^*i = ℙ(θ_i=0|Z_n)
= π_0f^*_n(Z_n|θ_i=0)/π_0f^*_n(Z_n|θ_i=0)+π_1f^*_n(Z_n|θ_i=1) (due to (2.4))
= π_0f_0(Z_n^i)∏_j≠ i f_n(Z_n^j)/π_0f_0(Z_n^i)∏_j≠ i f_n(Z_n^j)+π_1f_1n(Z_n^i)∏_j≠ i f_n(Z_n^j)
= π_0f_0(Z_n^i)/f_n(Z_n^i) = t_n^i
Assumption 2 states that
X_n^j|θ_j iid∼θ_j f_1 + (1-θ_j) f_0, n ∈ℕ
with
θ_j ∼Bernoulli(π_1)
for each j∈ 1(1)m.
Then the total number of alternatives is K=∑_j=1^m θ_j, i.e., K ∼Bin(m,π_1).
Let T_g^* be the stopping time for GAP rule for number of coordinates m. i.e.,
T_g^* = inf{n∈ℕ | Λ_n^(K)-Λ_n^(K+1)≥logK(m-K)/α∧β}
Where, Λ_n^j is the log-likelihood ratio corresponding to the j-th coordinate , j ∈ 1(1)m and Λ_n^(1)≥Λ_n^(2)≥⋯≥Λ_n^(m) is correspondingly the ordered representation of the log-likelihood ratios. For any finite positive integer L,
ℙ(T_g^*≤ L) ≤ℙ(∪_n∈[L] A_n(m))
≤∑_n∈[L]ℙ( A_n(m))
where
A_n(m)={ω∈Ω | log(S_n^(K(ω))(ω))-log(S_n^(K(ω)+1)(ω))≥log(K(ω)(m-K(ω))/α∧β)}.
Now let Λ_n^j, j=1(1)m, be iid with cdf H_n(·), and define
B_ϵ,c(m)={ω∈Ω : |Λ_n^(Z_m)(ω)-H_n^-1(π_1)|<ϵ for Z_m ∈ℕ, Z_m = mπ_1 + a_m, |a_m| < c√(m)log(m)}
Due to Lemma 6. of <cit.>, ∃ M_1 ( depending on c & ϵ) ∈ℕ such that ℙ(∩_m≥ M_1 B_ϵ,c(m))=1.
Therefore, it is evident that, for all c,ϵ>0 ℙ( B_ϵ,c(m))→1 as m↑∞.
For c>0, let
D_c(m) = {ω∈Ω : |K(ω)-mπ_1| < c√(m)log(m)}
Due to Bernstein's inequality, it can be shown that, for fixed c>0, ℙ(D_c(m))→ 1 as m ↑∞.
So we have, for fixed c,ϵ>0,
ℙ(D_c(m)∩ B_ϵ,c(m)) ≥ℙ(D_c(m))+ ℙ( B_ϵ,c(m)) -1
Which converges to 1 as m ↑∞.
Now, for any ω∈ D_c(m)∩ B_ϵ,c(m), |Λ_n^(K(ω))(ω)-H_n^-1(π_1)|<ϵ & |Λ_n^(K(ω)+1)(ω)-H_n^-1(π_1)|<ϵ i.e. |Λ_n^(K(ω))(ω)-Λ_n^(K(ω)+1)(ω)|<2ϵ. But, log(K(ω)(m-K(ω))/(α∧β))>2log(m)+ζ for some sufficiently small ζ.
Therefore,
ℙ(A_n(m))= ℙ(A_n(m) ∩ (D_c(m)∩ B_ϵ,c(m))) +
ℙ(A_n(m) ∩ (D_c(m)∩ B_ϵ,c(m))^c)
Where the first term converges to 0 due to the discussion in the previous paragraph and the second term converges to 0 because of <ref>.
Finally, we observe that ℙ(T_g^*≤ L) → 0 as m ↑∞ for any L ∈ℕ, which proves the lemma.
By (2.1), FDR due to the sequential test (τ,𝐃') is:
FDR= E( ∑_i=1^m(1-θ^i)D'^i/(∑_i=1^m D'^i)∨1)
= E_Z_τ (E_θ |Z_τ ( ∑_i=1^m(1-θ^i)D'^i/(∑_i=1^m D'^i)∨1|Z_τ))
= E_Z_τ ( ∑_i=1^m(1-E(θ^i|Z_τ))D'^i/(∑_i=1^m D'^i)∨1)
= E_Z_τ ( ∑_i=1^m t_τ^*iD'^i/(∑_i=1^m D'^i)∨1)
= E_Z_τ ( 1/l∨ 1∑_i=1^l t_τ^*(i))
≤ E_Z_τ ( 1/r_τ∨ 1∑_i=1^𝔯_τ t_τ^*(i))
≤ α
The first inequality occurs since l∈ [r_τ] and the sequence of cumulative average of ordered(increasing) values is increasing in number of terms involved. The second inequality comes from (3.5).
By (2.2), FNR due to the sequential test (τ,𝐝') is:
FNR= E( ∑_i=1^m(1-d'^i)θ^i/(∑_i=1^m (1-d'^i))∨1)
= E_Z_τ (E_θ |Z_τ ( ∑_i=1^m(1-d'^i)θ^i/(∑_i=1^m (1-d'^i))∨1|Z_τ))
= E_Z_τ ( ∑_i=1^m(1-d'^i)E_θ(θ^i|Z_τ)/(∑_i=1^m (1-d'^i))∨1)
= E_Z_τ ( ∑_i=1^m (1-t_τ^*i)(1-d'^i)/(∑_i=1^m (1-d'^i))∨1)
= E_Z_τ ( 1/l∨ 1∑_i=1^l (1-t_τ^*(m-i+1)))
≤ E_Z_τ ( 1/a_τ∨ 1∑_i=1^𝔞_τ (1-t_τ^*(m-i+1)))
≤ β
As in the previous proof, since l ∈ [a_τ] , and due to the fact that the sequence of cumulative average of ordered (increasing) values is increasing in number of terms involved, the first inequality occurs. The second inequality comes from (3.6).
Define, for fixed n∈ℕ, 𝒜_n={ω:t_n^*i(ω)≠ t_n^i(ω)}.
Due to Lemma <ref>,
ℙ(𝒜_n)=0
Define, ℬ={ω:t^*i_T(ω)(ω)≠ t^i_T(ω)(ω)}.
Let ω_0∈ℬ. Then T(ω_0)=n_0∈ℕ. Therefore, ω_0∈𝒜_n_0.
i.e., ℬ⊆∪_n∈ℕ𝒜_n.
Finally,
ℙ(ℬ)≤ ℙ(∪_n∈ℕ𝒜_n)
≤ ∑_n∈ℕℙ( 𝒜_n)
= 0
This completes the proof.
* Note that, Q̂_n(t̂_n^(r))=∑_j=1^m1(t̂^j_n≤t̂_n^(r))t̂^j_n/∑_j=1^m1(t̂^j_n≤t̂_n^(r)) .
Now, by definition of t̂_n^(r), ∑_j=1^m1(t̂^j_n≤t̂_n^(r))t̂^j_n = ∑_j=1^r t̂_n^(j) and ∑_j=1^m1(t̂^j_n≤t̂_n^(r))=r.
So, Q̂_n(t̂_n^(r))=1/r∑_j=1^r t̂_n^(j).
Now, for t ∈ (t̂_n^(r),t̂_n^(r+1)), there is no local FDR value. Hence for such t, ∑_j=1^m1(t̂^j_n≤ t)t̂^j_n = ∑_j=1^r t̂_n^(j) and ∑_j=1^m1(t̂^j_n≤ t)=r. So, for any t ∈ [t_n^(r),t_n^(r+1)), Q̂_n(t)=1/r∑_j=1^r t̂_n^(j).
Similarly, for t ∈ [t̂_n^(m),1],Q̂_n(t)=1/m∑_j=1^m t̂_n^(j).
By definition, for t ∈ [0,t̂_n^(1)),Q̂_n(t)=0.
Hence, Lemma <ref>. 1 is proved.
* Lemma <ref>. 1 implies that value of Q̂_n(t) is constant in the intervals (0,t̂_n^(1)); (t̂_n^(r),t̂_n^(r+1)) for r=1(1)m-1 and (t̂_n^(m),1). And therefore is continuous in those intervals. Our goal is to show that, Q̂_n(t) is right continuous at the points: {t̂_n^(r) for r=1(1)m }. Now, from the discussion in the previous part, we can conclude that, for h < t̂_n^(r+1)-t̂_n^(r), Q̂_n(t̂_n^(r)+h) = Q̂_n(t̂_n^(r)) for r=1(1)m-1.
Therefore, lim_h↓ 0Q̂_n(t̂_n^(r)+h)=Q̂_n(t̂_n^(r)) and hence, Q̂_n(t) is right continuous in the points {t̂_n^(r) for r=1(1)m-1 }.
The same idea applies for t̂_n^(m) for Q̂_n(t̂_n^(m)) for h < 1-t̂_n^(m) and thus Q̂_n(t) is right continuous at the point t̂_n^(m).
Hence, Lemma <ref>. 2 is proved.
* From Lemma <ref>. 1 we can deduce that, Q̂_n(t) is constant in [0,1] except for jumps at the points {t̂_n^(r) for r=1(1)m }. To prove that Q̂_n(t) is non decreasing, we are done if we can prove that jumps at the points mentioned above are positive.
Now, jump at point t̂_n^(r) for r=1(1)m-1 is:
Q̂_n(t̂_n^(r+1))-Q̂_n(t̂_n^(r))= 1/r+1∑_j=1^r+1t̂_n^(j)-1/r∑_j=1^r t̂_n^(j)
= 1/r(r+1)∑_j=1^r (t̂_n^(r+1)-t̂_n^(j))>0
Hence, Lemma <ref>. 3 is proved.
Due to Assumption 1., as n↑∞, ℙ_1^1(t_n^1≥ t)→ 0 ∀ t ∈ (0,1] and ℙ_0^1(t_n^1≤ t')→ 0 ∀ t' ∈ [0,1). Therefore, ∀ t ∈ (0,1), 𝒬_n(t)→ 0 and 𝒬_n'(t)→ 0 as n↑∞. And ℙ_1^1(t_n^1≥0 )=ℙ_0^1(t_n^1≤1)=1. So, by definition, lim_n↑∞𝒯_n^l→ 1 and lim_n↑∞𝒯_n^u→ 0.
Now, if for some n∈ℕ, ŝ_n>0, we must have T_d≤ n almost surely. i.e.,
lim_m↑∞ℙ(T_d≤ n) ≥ lim_m↑∞ℙ(ŝ_n>0)
≥ ℙ(𝒮_n>ϵ) for ϵ∈(0,1)
The last inequality holds due to Corollary 1. Letting n↑∞, we get
lim_m↑∞ℙ(T_d<∞) ≥ lim_n↑∞ℙ(𝒮_n>ϵ)
= 1
The exchange of limits is allowed by the bounded convergence theorem since probability values are bounded in [0,1]. The last equality holds since 𝒮_n=𝒯_n^l-𝒯_n^u→ 1 as n↑∞, which eventually exceeds any ϵ∈(0,1). Hence the first part is proved.
To prove the second part, note that, by definition of n_0, 𝒮_n≤0, ∀ n ∈ [n_0-1] and 𝒮_n_0>0. Since, lim_n↑∞𝒮_n→1, we must have, n_0<∞.
Now, by Corollary 1, for fixed n∈ℕ, ŝ_np→𝒮_n as m↑∞. So, for each n∈[n_0] and for any ϵ>0 and δ>0, ∃ M_n such that ℙ(|ŝ_n-𝒮_n|<ϵ)≥(1-δ/n_0) ∀ m≥ M_n. Fix ϵ<min{-𝒮_1,-𝒮_2,⋯,-𝒮_n_0-1,𝒮_n_0}. For such ϵ and for m>M_0=max{M_1,M_2,⋯,M_n_0},
ℙ(ŝ_n<0)≥ℙ(ŝ_n<𝒮_n+ϵ)≥ℙ(|ŝ_n-𝒮_n|<ϵ)≥(1-δ/n_0)
∀ n ∈[n_0-1] and
ℙ(ŝ_n_0>0)≥ℙ(ŝ_n_0>𝒮_n_0-ϵ)≥ℙ(|ŝ_n_0-𝒮_n_0|<ϵ)≥(1-δ/n_0).
Now, for δ>0, ∃ M_0 ∈ℕ such that ∀ m>M_0,
ℙ(T_d=n_0)= ℙ(∩_n∈[n_0-1]{ŝ_n<0}∩{ŝ_n_0>0})
≥ ∑_n∈[n_0-1]ℙ(ŝ_n<0)+ℙ(ŝ_n_0>0)-n_0+1
≥ n_0(1-δ/n_0)-n_0+1
= 1-δ
i.e., lim_m↑∞ℙ(T_d=n_0) =1, which proves the lemma.
For the test (T_d,D̂'̂),
FDR= E_θ( ∑_i=1^m(1-θ^i)D̂'̂^i/(∑_i=1^m D̂'̂^i)∨1)
= E_Z_T_d (E_θ |Z_T_d ( ∑_i=1^m(1-θ^i)D̂'̂^i/(∑_i=1^m D̂'̂^i)∨1|Z_T_d))
= E_Z_T_d ( ∑_i=1^m(1-E(θ^i|Z_T_d))D̂'̂^i/(∑_i=1^m D̂'̂^i)∨1)
= E_Z_T_d ( ∑_i=1^m t_T_d^*iD̂'̂^i_T_d/(∑_i=1^m D̂'̂^i_T_d)∨1)
In the final line we add T_d as a suffix of D̂'̂^i to emphasize that D̂'̂ depends on the stopping time T_d (this dependence was omitted earlier for simplicity).
Now, T_d=n_0 implies
∑_i=1^m t_T_d^*iD̂'̂^i_T_d/(∑_i=1^m D̂'̂^i_T_d)∨1=∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1
From Lemma <ref>. we get,
∑_i=1^m t_T_d^*iD̂'̂^i_T_d/(∑_i=1^m D̂'̂^i_T_d)∨1-∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1p→0
as m↑∞. Since the quantity is bounded in [-1,1], we get
lim_m↑∞( FDR - E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1))=0
So,
lim_m↑∞( FDR-E(Q̂_n_0(t̂_n_0^(s)))=lim_m↑∞( E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1-Q̂_n_0(t̂_n_0^(s)))
Now, D̂'̂^i_n_0 = 1(t̂_n_0^i≤t̂_n_0^(s)) with, s∈ [m-â_n_0,r̂_n_0]. And due to Assumption 2, ℙ(t_n_0^*i≠ t_n_0^i)=0.
E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1-Q̂_n_0(t̂_n_0^(s))) = E(∑_i=1^m (t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s))/(∑_i=1^m 1(t̂_n_0^i≤t̂_n_0^(s)))∨1)
= E(1/m∑_i=1^m (t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s))/1/m(∑_i=1^m 1(t̂_n_0^i≤t̂_n_0^(s)))∨1)
Now,
var(1/m∑_i=1^m (t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s))) =1/m^2∑_i=1^mvar((t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s)))+
1/m^2∑_i≠ j^m cov((t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s)),(t_n_0^j-t̂_n_0^j)1(t̂_n_0^j≤t̂_n_0^(s)))
For fixed i,j∈[m] (i≠ j) let,
ρ_ij= cov((t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s)),(t_n_0^j-t̂_n_0^j)1(t̂_n_0^j≤t̂_n_0^(s)))
≤ var((t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s))) (= ρ_ii)
≤ E(((t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s)))^2)
≤ E(t_n_0^i-t̂_n_0^i)^2 → 0
The last convergence follows from the dominated convergence theorem since |t_n_0^i-t̂_n_0^i| ∈ [-1,1]. As a result
var(1/m∑_i=1^m (t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s)))=1/m^2[∑_i=1^m ρ_ii + ∑_i≠ jρ_ij] → 0
and therefore due to weak law of large number,
1/m∑_i=1^m (t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s)) p→ E((t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s))) → 0
Since,
|E((t_n_0^i-t̂_n_0^i)1(t̂_n_0^i≤t̂_n_0^(s))) | ≤ E(|t_n_0^i-t̂_n_0^i|)→ 0.
Now, if s=m-â_n_0, 1(t̂_n_0^i≤t̂_n_0^(m-â_n_0))=1(t̂_n_0^i≤t̂_n_0^u) almost surely due to 4.30.
var(1/m(∑_i=1^m1(t̂_n_0^i≤t̂_n_0^u)))=1/m^2∑_i=1^m var( 1(t̂_n_0^i≤t̂_n_0^u)
+1/m^2∑_i≠ jcov( 1(t̂_n_0^i≤t̂_n_0^u),1(t̂_n_0^j≤t̂_n_0^u))
First we consider the convergence of the covariance term. Say,
ρ̂_ij= cov( 1(t̂_n_0^i≤t̂_n_0^u),1(t̂_n_0^j≤t̂_n_0^u))
= ℙ({t̂_n_0^i-t̂_n_0^u≤0}∩{t̂_n_0^j-t̂_n_0^u≤0})-(ℙ(t̂_n_0^i-t̂_n_0^u≤0))^2
Assumption 3 implies t̂_n_0^i-t_n_0^ip→0 ∀ i ∈[m] and Lemma <ref> implies t̂_n_0^up→𝒯_n_0^u as m↑∞. So,
(t̂_n_0^i-t_n_0^i,t̂_n_0^j-t_n_0^j,t̂_n_0^u-𝒯_n_0^u)p→(0,0,0)
jointly.
Therefore it is easy to see that,
ℙ({t̂_n_0^i-t̂_n_0^u<0}∩{t̂_n_0^j-t̂_n_0^u<0})
→
ℙ({t_n_0^i-𝒯_n_0^u<0}∩{t_n_0^j-𝒯_n_0^u<0})
as m↑∞
Due to Assumption 2,
ℙ({t_n_0^i-𝒯_n_0^u<0}∩{t_n_0^j-𝒯_n_0^u<0})=
(ℙ(t_n_0^i-𝒯_n_0^u<0))^2 ∀ i,j ∈ [m]
and
ℙ(t̂_n_0^i-t̂_n_0^u<0) →ℙ(t_n_0^i-𝒯_n_0^u<0)
So, the covariance term in <ref> converges to 0 for all i,j ∈[m]. i.e.,
lim_m↑∞ρ̂_ij=0
And we see that the variance,
var(1(t̂_n_0^i≤t̂_n_0^u)) ≤ E(1(t̂_n_0^i≤t̂_n_0^u)) ≤ 1
So,
var(1/m(∑_i=1^m 1(t̂_n_0^i≤t̂_n_0^u))) ≤1/m^2∑_i=1^m 1 + 1/m^2∑_i≠ jρ̂_ij→1/m +0 → 0
Therefore, due to the weak law of large numbers,
1/m(∑_i=1^m 1(t̂_n_0^i≤t̂_n_0^u))p→ E(1(t̂_n_0^i≤t̂_n_0^u))→ℙ_θ(t_n_0^i<𝒯_n_0^u)
Now, if s>m-â_n_0,
1/m∑_i=1^m 1(t̂_n_0^i≤t̂_n_0^(s)) ≥1/m(∑_i=1^m 1(t̂_n_0^i<t̂_n_0^u)) p→ℙ(t_n_0^i<𝒯_n_0^u)
So, for any s∈[m-â_n_0,r̂_n_0]
1/m∑_i=1^m (t_n_0^i-t̂_n_0^i) 1(t̂_n_0^i≤t̂_n_0^(s)) p→ 0
1/m∑_i=1^m 1(t̂_n_0^i≤t̂_n_0^(s)) p→ℙ(t_n_0^i<𝒯_n_0^u)>0
i.e., due to <ref>, for such s,
∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1-Q̂_n_0(t̂_n_0^(s)) p→ 0
And since,
|∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1-Q̂_n_0(t̂_n_0^(s)) | ≤ 1
We conclude,
lim_m↑∞ E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1-Q̂_n_0(t̂_n_0^(s))) = 0
Finally, since ℙ(T_d≠ n_0)→ 0, we have â_T_dp→â_n_0 and r̂_T_dp→r̂_n_0. Therefore, for any s∈ [m-â_T_d,r̂_T_d], ℙ(s∈ [m-â_n_0,r̂_n_0])→ 1 and for such s,
lim_m↑∞ (FDR-α) = lim_m↑∞[ FDR- E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1)+
E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1-Q̂_n_0(t̂_n_0^(s)))+E(Q̂_n_0(t̂_n_0^(s))-α)]
≤ lim_m↑∞[ FDR- E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1)] +
lim_m↑∞ E(∑_i=1^m t_n_0^*iD̂'̂^i_n_0/(∑_i=1^m D̂'̂^i_n_0)∨1-Q̂_n_0(t̂_n_0^(s)))
= 0
Last inequality follows since, for s≤r̂_n_0, Q̂_n_0(t̂_n_0^(s)) ≤Q̂_n_0(t̂_n_0^(r̂_n_0))≤α. The limits tend to 0 due to <ref> and <ref>.
We therefore observe that, for such a test (T_d,D̂'̂), defined in Lemma <ref>.,
lim_m↑∞ FDR ≤α
We follow similar steps for proving asymptotic control of FNR. First note that,
FNR= E_θ( ∑_i=1^mθ^i(1-D̂'̂^i)/(∑_i=1^m (1-D̂'̂^i))∨1)
= E_Z_T_d( ∑_i=1^m (1-t_T_d^*i)(1-D̂'̂^i_T_d)/(∑_i=1^m (1-D̂'̂^i_T_d))∨1)
Due to Lemma <ref>., assumption <ref>. and due to the fact that
|∑_i=1^m (1-t_T_d^i)(1-D̂'̂^i_T_d)/(∑_i=1^m (1-D̂'̂^i_T_d))∨1| ≤ 1,
we get
lim_m↑∞[FNR - E_Z_T_d( ∑_i=1^m (1-t_n_0^i)(1-D̂'̂^i_n_0)/(∑_i=1^m (1-D̂'̂^i_n_0))∨1)]=0
Following our previous proof we can show that for s∈[m-â_n_0,r̂_n_0],
1/m∑_i=1^m1(t̂_n_0^i≥t̂_n_0^(s+1))(t̂_n_0^i-t_n_0^i) p→ 0
and
1/m∑_i=1^m1(t̂_n_0^i≥t̂_n_0^(s+1))
≥ 1/m∑_i=1^m1(t̂_n_0^i≥t̂_n_0^l) p→ℙ(t_n_0^i≥𝒯_n_0^l)>0
by weak law of large numbers. Therefore,
∑_i=1^m (1-t_n_0^i)(1-D̂'̂^i_n_0)/(∑_i=1^m (1-D̂'̂^i_n_0))∨1-Q̂_n_0'(t̂_n_0^(s))
= 1/m∑_i=1^m1(t̂_n_0^i≥t̂_n_0^(s+1))(t̂_n_0^i-t_n_0^i)/1/m∑_i=1^m1(t̂_n_0^i≥t̂_n_0^(s+1))∨ 1p→ 0
and since
| 1/m∑_i=1^m1(t̂_n_0^i≥t̂_n_0^(s+1))(t̂_n_0^i-t_n_0^i)/1/m∑_i=1^m1(t̂_n_0^i≥t̂_n_0^(s+1))∨ 1|≤ 1
we conclude
lim_m↑∞E(∑_i=1^m (1-t_n_0^i)(1-D̂'̂^i_n_0)/(∑_i=1^m (1-D̂'̂^i_n_0))∨1-Q̂_n_0'(t̂_n_0^(s)))=0
Finally,
lim_m↑∞ (FNR-β) = lim_m↑∞[FNR - E_Z_T_d( ∑_i=1^m (1-t_n_0^i)(1-D̂'̂^i_n_0)/(∑_i=1^m (1-D̂'̂^i_n_0))∨1)+
E(∑_i=1^m (1-t_n_0^i)(1-D̂'̂^i_n_0)/(∑_i=1^m (1-D̂'̂^i_n_0))∨1-Q̂_n_0'(t̂_n_0^(s)))+
E(Q̂_n_0'(t̂_n_0^(s+1))-β)]
≤ lim_m↑∞[ FNR - E_Z_T_d( ∑_i=1^m (1-t_n_0^i)(1-D̂'̂^i_n_0)/(∑_i=1^m (1-D̂'̂^i_n_0))∨1)] +
lim_m↑∞ E(∑_i=1^m (1-t_n_0^i)(1-D̂'̂^i_n_0)/(∑_i=1^m (1-D̂'̂^i_n_0))∨1-Q̂_n_0'(t̂_n_0^(s)))
= 0
The last inequality follows since, for s≥ m-â_n_0,
Q̂_n_0'(t̂_n_0^(s+1)) ≤Q̂_n_0'(t̂_n_0^(m-â_n_0+1)) ≤β.
The limits tend to 0 due to <ref> and <ref>.
We therefore observe that, for such a test (T_d,D̂'̂), defined in Lemma <ref>.,
lim_m↑∞ FNR ≤β
So (T_d,D̂'̂)∈Δ'(α,β). i.e., Lemma <ref>. is proved.
[Aharoni and Rosset, 2014]AR14
Aharoni, E. and Rosset, S. (2014).
Generalized α-investing: definitions, optimality results and
application to public databases.
Journal of the Royal Statistical Society: Series B: Statistical
Methodology, pages 771–794.
[Bahadur, 1966]Bahadur66
Bahadur, R. R. (1966).
A note on quantiles in large samples.
The Annals of Mathematical Statistics, 37(3):577–580.
[Bartroff, 2018]Bar18
Bartroff, J. (2018).
Multiple hypothesis tests controlling generalized error rates for
sequential data.
Statistica Sinica, 28:363–98.
[Bartroff and Song, 2014]BS14
Bartroff, J. and Song, J. (2014).
Sequential tests of multiple hypotheses controlling type i and ii
familywise error rates.
Journal of Statistical Planning and Inference, 153:100–114.
[Bartroff and Song, 2020]BS20
Bartroff, J. and Song, J. (2020).
Sequential tests of multiple hypotheses controlling false discovery
and nondiscovery rates.
Sequential Analysis, 39(1):65–91.
[Benjamini and Hochberg, 1995]BH95
Benjamini, Y. and Hochberg, Y. (1995).
Controlling the false discovery rate: A practical and powerful
approach to multiple testing.
Journal of the Royal Statistical Society. Series B
(Methodological), 57(1):289–300.
[Cai and Jin, 2010]CJ10
Cai, T. T. and Jin, J. (2010).
Optimal rates of convergence for estimating the null density and
proportion of nonnull effects in large-scale multiple testing.
The Annals of Statistics, 38(1):100 – 145.
[Cai and Sun, 2009]CS09
Cai, T. T. and Sun, W. (2009).
Simultaneous testing of grouped hypotheses: Finding needles in
multiple haystacks.
Journal of the American Statistical Association,
104(488):1467–1481.
[Cai and Sun, 2011]cai2011compound
Cai, T. T. and Sun, W. (2011).
A compound decision-theoretic approach to large-scale multiple
testing.
In High-dimensional Data Analysis, pages 75–116. World
Scientific.
[Caldas de Castro and Singer, 2006]caldas2006controlling
Caldas de Castro, M. and Singer, B. H. (2006).
Controlling the false discovery rate: a new application to account
for multiple and dependent tests in local statistics of spatial association.
Geographical Analysis, 38(2):180–208.
[Cao et al., 2022]cao2022optimal
Cao, H., Chen, J., and Zhang, X. (2022).
Optimal false discovery rate control for large scale multiple testing
with auxiliary information.
The Annals of Statistics, 50(2):807–857.
[Crockett and Gebski, 1984]CG84
Crockett, N. and Gebski, V. (1984).
A simplified two sample sequential t-test.
Journal of Statistical Computation and Simulation,
20(3):217–234.
[De and Baron, 2012a]DB12a
De, S. K. and Baron, M. (2012a).
Sequential bonferroni methods for multiple hypothesis testing with
strong control of family-wise error rates i and ii.
Sequential Analysis, 31(2):238–262.
[De and Baron, 2012b]DB12b
De, S. K. and Baron, M. (2012b).
Step-up and step-down methods for testing multiple hypotheses in
sequential experiments.
Journal of Statistical Planning and Inference,
142(7):2059–2070.
[De and Baron, 2015]DB15
De, S. K. and Baron, M. (2015).
Sequential tests controlling generalized familywise error rates.
Statistical Methodology, 23:88–102.
[Dudoit et al., 2008]dudoit2008multiple
Dudoit, S., Van Der Laan, M. J., and van der Laan, M. J. (2008).
Multiple testing procedures with applications to genomics.
Springer.
[Efron, 2004a]E04
Efron, B. (2004a).
Large-scale simultaneous hypothesis testing.
Journal of the American Statistical Association,
99(465):96–104.
[Efron, 2004b]EfB04
Efron, B. (2004b).
Large-scale simultaneous hypothesis testing.
Journal of the American Statistical Association,
99(465):96–104.
[Foster and Stine, 2008]FS08
Foster, D. P. and Stine, R. A. (2008).
α-investing: a procedure for sequential control of expected
false discoveries.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 70(2):429–444.
[Gang et al., 2023]GSW21
Gang, B., Sun, W., and Wang, W. (2023).
Structure–adaptive sequential testing for online false discovery
rate control.
Journal of the American Statistical Association,
118(541):732–745.
[Genovese and Wasserman, 2002]GL02
Genovese, C. and Wasserman, L. (2002).
Operating Characteristics and Extensions of the False Discovery Rate
Procedure.
Journal of the Royal Statistical Society. Series B (Statistical
Methodology), 64(3):499 – 517.
[Golub et al., 1999]Gea99
Golub, T. R., Slonim, D. K., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov,
J. P., Coller, H., Loh, M. L., Downing, J. R., Caligiuri, M. A., et al.
(1999).
Molecular classification of cancer: class discovery and class
prediction by gene expression monitoring.
science, 286(5439):531–537.
[Green and Richardson, 2002]green2002hidden
Green, P. J. and Richardson, S. (2002).
Hidden markov models and disease mapping.
Journal of the American statistical association,
97(460):1055–1070.
[He and Bartroff, 2021]HB21
He, X. and Bartroff, J. (2021).
Asymptotically optimal sequential fdr and pfdr control with (or
without) prior information on the number of signals.
Journal of Statistical Planning and Inference, 210:87–99.
[Javanmard and Montanari, 2018]JM18
Javanmard, A. and Montanari, A. (2018).
Online rules for control of false discovery rate and false discovery
exceedance.
The Annals of statistics, 46(2):526–554.
[Miller et al., 2001]miller2001controlling
Miller, C. J., Genovese, C., Nichol, R. C., Wasserman, L., Connolly, A.,
Reichart, D., Hopkins, A., Schneider, J., and Moore, A. (2001).
Controlling the false-discovery rate in astrophysical data analysis.
The Astronomical Journal, 122(6):3492.
[Perone Pacifico et al., 2004]perone2004false
Perone Pacifico, M., Genovese, C., Verdinelli, I., and Wasserman, L. (2004).
False discovery control for random fields.
Journal of the American Statistical Association,
99(468):1002–1014.
[Ramdas et al., 2017]RYWJ17
Ramdas, A., Yang, F., Wainwright, M., and Jordan, M. (2017).
Online control of the false discovery rate with decaying memory.
[Ramdas et al., 2018]RZWJ18
Ramdas, A., Zrnic, T., Wainwright, M., and Jordan, M. (2018).
Saffron: an adaptive algorithm for online control of the false
discovery rate.
In International conference on machine learning, pages
4286–4294. PMLR.
[Singh et al., 2002]Singhea02
Singh, D., Febbo, P. G., Ross, K., Jackson, D. G., Manola, J., Ladd, C.,
Tamayo, P., Renshaw, A. A., D'Amico, A. V., Richie, J. P., et al. (2002).
Gene expression correlates of clinical prostate cancer behavior.
Cancer cell, 1(2):203–209.
[Song and Fellouris, 2017]SF17
Song, Y. and Fellouris, G. (2017).
Asymptotically optimal, sequential, multiple testing procedures with
prior information on the number of signals.
Electronic Journal of Statistics, 11(1):338 – 363.
[Song and Fellouris, 2019]SF19
Song, Y. and Fellouris, G. (2019).
Sequential multiple testing with generalized error control: An
asymptotic optimality theory.
The Annals of Statistics, 47(3):1776 – 1803.
[Storey, 2002]Storey02
Storey, J. D. (2002).
A direct approach to false discovery rates.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 64(3):479–498.
[Storey et al., 2004]STS04
Storey, J. D., Taylor, J. E., and Siegmund, D. (2004).
Strong control, conservative point estimation and simultaneous
conservative consistency of false discovery rates: a unified approach.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 66(1):187–205.
[Sun and Cai, 2007]SC07
Sun, W. and Cai, T. T. (2007).
Oracle and adaptive compound decision rules for false discovery rate
control.
Journal of the American Statistical Association,
102(479):901–912.
[Sun and Cai, 2009]SC09
Sun, W. and Cai, T. T. (2009).
Large‐scale multiple testing under dependence.
Journal of the Royal Statistical Society Series B,
71(2):393–424.
[Sun et al., 2015]Sea15
Sun, W., Reich, B. J., Tony Cai, T., Guindani, M., and Schwartzman, A. (2015).
False discovery control in large-scale spatial multiple testing.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 77(1):59–83.
[Tian and Ramdas, 2021]TR21
Tian, J. and Ramdas, A. (2021).
Online control of the familywise error rate.
Statistical Methods in Medical Research, 30(4):976–993.
|
http://arxiv.org/abs/2306.10235v1
|
20230617023326
|
Exact continuum theory of anti-Klein tunneling in bilayer graphene
|
[
"P. A. Maksym",
"H. Aoki"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
^1Department of Physics, University of Tokyo, Hongo, Tokyo
113-0033, Japan
^2School of Physics and Astronomy, University of Leicester,
Leicester LE1 7RH, UK
^3Electronics and Photonics Research Institute,
National Institute of Advanced Industrial Science and
Technology (AIST), Tsukuba 305-8568, Japan
Exact conditions for anti-Klein transmission zeros are found analytically
with a 4-component continuum approach which includes trigonal warping.
Anti-Klein tunneling occurs at oblique incidence on steps and barriers
with soft and hard walls as well as in the known case of normal incidence on
a hard step. The necessary energy and angle of incidence depend on the
crystallographic orientation of the step or barrier. At normal incidence
on an armchair step in unbiased bilayer graphene, anti-Klein tunneling
occurs because both the continuum and the tight binding Hamiltonians are
invariant under layer and site interchange. At oblique incidence,
anti-Klein tunneling is valley-dependent even in the absence of trigonal
warping. An experimental arrangement that functions both as a detector of
anti-Klein tunneling and a valley polarizer is suggested. There are
cases where anti-Klein tunneling occurs in the 4-component theory but
not in the 2-component approximation.
Exact continuum theory of anti-Klein tunneling in bilayer graphene
P. A. Maksym^1,2
H. Aoki^1,3
July 31, 2023
==================================================================
§ INTRODUCTION
Anti-Klein (AK) tunneling is the absence of tunneling at a potential step
in bilayer graphene (BLG). It was discovered theoretically
<cit.> by using the 2-component approximation <cit.>
to the full 4-component continuum Hamiltonian and has been attributed to
the pseudospin of the 2-component states <cit.>. However the 4-component continuum Hamiltonian
cannot be expressed exactly in terms of a pseudospin vector. So can AK
tunneling occur in the 4-component continuum theory? We show that it can.
We also show that AK tunneling is valley asymmetric and may occur
at arbitrary potentials with both soft and hard walls. And we show further
that it occurs in a tight binding theory.
Absence of tunneling means that the transmission coefficient of a step or
barrier is exactly zero. This happens at a p-n or n-p junction,
that is when the electron energy is in the conduction band on one side of a
potential interface and in the valence band on the other side. Within this
energy range, the transmission coefficient may vanish over an extended
range of energies <cit.> or at a single critical energy
<cit.>. Which case occurs depends on the structure and geometry of
the interface. We use 'AK tunneling' to mean zero transmission in these
cases and others we report here.
In the first work on AK tunneling <cit.>, it was found that
zero transmission occurs at normal incidence on a potential step in
unbiased BLG. The transmission vanishes everywhere between the conduction
and valence band edges and the zero is exact within the 2-component
approximation without trigonal warping (TW). It occurs because pseudospin
conservation requires that the propagating plane wave incident on a step
matches onto an evanescent plane wave on the other side of the step.
Subsequently AK tunneling was found at normal incidence on a potential
step in biased BLG <cit.>, again in the 2-component approximation
without TW. In this case the transmission vanishes at one critical energy
where the incident state matches onto an evanescent state on the other side
of the step. At this energy, the pseudospin conservation condition is that
the expectation values of the pseudospin of the incident and evanescent
states are identical.
The existence of exact transmission zeros in the 2-component approximation
is a puzzle because the full 4-component Hamiltonian cannot be expressed in
terms of a pseudospin vector. To solve this puzzle, we find the condition
for AK tunneling in the 4-component continuum approach, including TW,
analytically. It turns out that AK tunneling at a potential step occurs
when a particular pair of evanescent wave polarization vectors on the left
and right sides of the step are orthogonal.
The orthogonality relation is a general condition for exact transmission
zeros. If it is evaluated with 4-component vectors, it gives the condition
for AK tunneling in the 4-component approach. If it is evaluated with
vectors found from the 2-component approximation it gives the condition for
AK tunneling in the 2-component approximation. Thus exact transmission
zeros can occur in both approaches but normally at different
<cit.> incidence conditions. This seems to solve the puzzle.
But what is the origin of the pseudospin conditions? In brief,
symmetry. When the step edge is parallel to an armchair direction (or
arbitrary direction with no TW), the 4-component continuum and tight
binding Hamiltonians for normal incidence are invariant under simultaneous interchange of layers
and sites. We call this swap symmetry and show that the swap quantum number
of the corresponding 4-component states is ± 1, like the pseudospin.
The orthogonality relation and the swap symmetry lead to all the pseudospin
conditions found in the 2-component approximation. Thus we arrive at a
consistent and exact picture of AK tunneling that is valid in both the
4-component theory and the 2-component approximation.
However our objective is to go beyond this point and investigate the
physics of AK tunneling systematically. In the case of an arbitrary
potential, our orthogonality condition for AK tunneling at a hard
potential step generalizes to vanishing of the corresponding transfer
matrix element. We use the orthogonality and transfer matrix conditions to
search for AK tunneling systematically. We find it not only in the well
known case of normal incidence on hard steps but also at oblique incidence
on steps and barriers with soft and hard walls. Further, because of TW, the
conditions for AK tunneling depend strongly on the crystallographic
orientation of the step or barrier. The occurrence of AK tunneling at
soft-walled potentials is particularly significant because these systems
are experimentally realizable.
Another very interesting feature of AK tunneling is that it is valley
asymmetric unless the transmission coefficient within each valley is
symmetric in the transverse momentum. The reason is that the polarization
vectors are valley-dependent and hence the critical transverse momentum
needed to satisfy the orthogonality relation is also valley-dependent. At
this critical momentum the valley asymmetry is large because the
transmission coefficient vanishes in only one of the valleys.
This effect may be used to make a valley polarizer.
The 4-component continuum theory is appropriate for our investigations
because experimentally realizable potentials vary slowly compared to the
length scale of the lattice. In addition, the continuum theory has the
advantage that it is easy to take account of the crystallographic
orientation of the step or barrier. The continuum and tight binding
Hamiltonians have identical swap symmetry so AK tunneling occurs in both
approaches. Valley mixing occurs only in the tight binding theory but is
quite weak. We have verified this in the case of normal
incidence on a hard armchair step in unbiased BLG. AK tunneling occurs as
in the continuum theory and the effect of valley mixing on the reflected
current is between 10^-3 and 10^-5 of the total current. For an
experimentally realistic soft step, the effect should be even smaller.
We derive the conditions for AK tunneling in Section
<ref>. We then present numerical results to show AK
tunneling occurs at arbitrary incidence on potential steps and barriers
(Section <ref>). In the same section we show that swap
symmetry results in AK tunneling at normal incidence on a step in unbiased
BLG and, in addition, detail the effects of bias, TW and crystallographic
orientation. The valley dependence of AK tunneling is explained in
Section <ref> and in Section <ref> we
suggest experimental arrangements for observing AK tunneling and for
generating valley polarized currents. The relation between the 4-component
theory and the 2-component approximation is explained in Section
<ref> and our conclusions are summarized in Section
<ref>. Appendix <ref> details transmission
coefficient relations that are used in Sections <ref> and
<ref>. Mathematical details of the relation between the
4-component theory and 2-component approximation are given in Appendices
<ref> and <ref>. The tight binding theory
of a hard armchair step is explained in Appendix <ref>.
§ THEORY
§.§ Hamiltonian and plane wave states
We consider a step or barrier with edge normal at an angle θ to the
crystallographic x axis. To find the transmission coefficient we use
co-ordinates x',y' that are rotated by θ with respect to the
crystallographic co-ordinates, x,y (Fig. <ref>).
The 4-component states are of form
(ϕ_A1, ϕ_B1, ϕ_A2, ϕ_B2)^T where the subscripts
denote sites within the BLG unit cell. The K-valley continuum
Hamiltonian, expressed in terms of x',y', is
H_K =
( [ V_1 v_0π_K^† -v_4π_K^† v_3π_K e^3iθ; v_0π_K V_1 + Δ' t -v_4π_K^†; -v_4π_K t V_2 + Δ' v_0π_K^†; v_3π_K^† e^-3iθ -v_4π_K v_0π_K V_2; ]),
where the unitary transformation
diag(e^-iθ, 1, 1, e^iθ) has been used to reduce
the θ dependence to factors of the form exp(± 3iθ)
<cit.>. Here π_K = p_x' + ip_y',
p_x' and p_y' are momentum components and v_0, v_3 and v_4 are
velocities. t is the interlayer coupling and Δ' is a small energy
shift of the interlayer coupled sites <cit.>. The step edge is
taken to be at x'=0. The potentials V_i in layer i become uniform far
away from the step or barrier edges. In K', π_K is
replaced by π_K'≡ -p_x' + ip_y' and θ by -θ.
Plane wave states occur in the regions of uniform potential. In each valley
these states satisfy
H 𝐞_αexp(i_α·𝐫) =
E𝐞_αexp(i_α·𝐫),
where H is the appropriate valley Hamiltonian, E is the energy,
𝐞_α is a polarization vector, α is a mode index,
𝐫 = (x',y') and
_α = (k_α, k_y') is the
𝐤-vector and k_α is its x' component. The plane waves
may be propagating or evanescent.
To find k_α and the polarization vectors as a function of E and
k_y' we re-write Eq. (<ref>) as an eigenvalue equation for
p_α≡ħ k_α. This gives
v_x'^-1(W + p_y' v_y')𝐞_α
= -p_α𝐞_α,
where p_y' = ħ k_y',
W =
( [ V_1 - E 0 0 0; 0 V_1 + Δ' - E t 0; 0 t V_2 + Δ' - E 0; 0 0 0 V_2 - E; ])
and the velocity operators in the K-valley are
v_x' =
( [ 0 v_0 -v_4 v_3 e^3iθ; v_0 0 0 -v_4; -v_4 0 0 v_0; v_3 e^-3iθ -v_4 v_0 0; ])
and
v_y' =
( [ 0 -iv_0 iv_4 iv_3 e^3iθ; iv_0 0 0 iv_4; -iv_4 0 0 -iv_0; -iv_3 e^-3iθ -iv_4 iv_0 0; ]).
In the K' valley θ is replaced by -θ and the sign of the
velocity parameters changes in v_x'.
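For concreteness, this mode problem can be solved numerically as an ordinary eigenvalue problem for k_α. The short Python sketch below is our own illustration, not the code used for the calculations reported here; it takes the parameter values quoted later in the text, works in meV and nm (so that ħv_i = (√3/2)aγ_i), returns the biorthogonal left vectors discussed next, and leaves the sorting of the modes into propagating and evanescent ones to the caller.

import numpy as np

# Parameters from the text (meV, nm); hbar*v_i = (sqrt(3)/2) a gamma_i in meV nm.
a = 0.246
hv0, hv3, hv4 = [0.5 * np.sqrt(3) * a * g for g in (3160.0, 380.0, 140.0)]
t, Dp = 381.0, 22.0

def mode_problem(E, ky, theta, V1, V2, valley=+1):
    """Plane-wave modes at energy E (meV) and transverse momentum ky (1/nm).

    The plane-wave condition is rewritten as an eigenvalue problem for k_x: the
    eigenvalues of -v_x'^{-1}(W + hbar k_y v_y') are the k_alpha, the columns of e
    are the right polarization vectors and the rows of f are the biorthogonal left
    vectors.  valley=+1 is K; in K' theta -> -theta and the velocity signs in v_x' flip.
    """
    th = theta if valley > 0 else -theta
    s = 1.0 if valley > 0 else -1.0
    ph = np.exp(3j * th)
    W = np.array([[V1 - E, 0, 0, 0],
                  [0, V1 + Dp - E, t, 0],
                  [0, t, V2 + Dp - E, 0],
                  [0, 0, 0, V2 - E]], dtype=complex)
    Vx = s * np.array([[0, hv0, -hv4, hv3 * ph],
                       [hv0, 0, 0, -hv4],
                       [-hv4, 0, 0, hv0],
                       [hv3 / ph, -hv4, hv0, 0]], dtype=complex)
    Vy = np.array([[0, -1j * hv0, 1j * hv4, 1j * hv3 * ph],
                   [1j * hv0, 0, 0, 1j * hv4],
                   [-1j * hv4, 0, 0, -1j * hv0],
                   [-1j * hv3 / ph, -1j * hv4, 1j * hv0, 0]], dtype=complex)
    kx, e = np.linalg.eig(-np.linalg.inv(Vx) @ (W + ky * Vy))
    f = np.linalg.inv(e)   # rows are f_alpha^dagger with f_alpha^dagger . e_beta = delta
    return kx, e, f, Vx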
The matrix on the left hand side of Eq. (<ref>) is a general complex
matrix hence its left eigenvectors, 𝐟^†_α, and right
eigenvectors, 𝐞_α, form a biorthogonal set, that is
𝐟^†_α·𝐞_β = δ_αβ,
where the 𝐞 vectors are normalized so that
𝐞^†_α·𝐞_α = 1.
The biorthogonality relation, Eq. (<ref>), is valid for any general
complex matrix but in the special case of the matrix in Eq. (<ref>),
there is also a relation between the 𝐞 vectors and the
𝐟^† vectors. By taking the Hermitean conjugate of
Eq. (<ref>) it can be shown that
𝐟^†(k_α) =
N_k_α𝐞^†(k^*_α) v_x',
where N_k_α
is a normalization constant and the k_α are either real or form
complex conjugate pairs. Then it follows from Eq. (<ref>) that
𝐞^†(k_α) v_x'𝐞(k_β) ∝δ_k^*_α k_β.
That is, the 𝐞 vectors are orthogonal with respect to the x'
component of the velocity and hence the x' component of the current.
The physical consequence of this orthogonality is that in a superposition
of plane wave states there is no interference between the currents carried
by the propagating states and if a tunneling current is present it is
spatially uniform. Orthogonality relations similar to Eq. (<ref>)
have been found in a 𝐤·𝐩 theory of semiconductor
superlattices <cit.> and a tight binding theory of potential
barriers in graphene <cit.>. In an earlier paper <cit.>,
we used Eq. (<ref>) to
simulate scattering in BLG numerically but without
presenting the proof given here.
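These relations are easy to verify numerically. The following check (illustrative parameters of our own choosing, reusing the mode_problem sketch above) confirms that the velocity matrix elements vanish between modes whose k-vectors are not complex conjugates of each other.

# Numerical check of the current-orthogonality relation, reusing mode_problem above.
# Illustrative parameters: E = 40 meV, k_y' = 0.05 /nm, theta = 0, unbiased, K valley.
kx, e, f, Vx = mode_problem(E=40.0, ky=0.05, theta=0.0, V1=0.0, V2=0.0)
J = e.conj().T @ Vx @ e                      # matrix of e^dagger(k_alpha) v_x' e(k_beta)
for ia in range(4):
    for ib in range(4):
        if not np.isclose(np.conj(kx[ia]), kx[ib]):
            assert abs(J[ia, ib]) < 1e-8 * abs(J).max()   # zero unless k_alpha^* = k_beta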
§.§ AK tunneling at hard steps
The transmission and reflection coefficients can be found easily by using
biorthogonality. We explain this first for the case when AK tunneling may
occur, i.e. when there are two propagating modes and two evanescent modes
on both sides of the step.
A plane wave is taken to be incident from the left of the step. The wave
functions ψ_l and ψ_r on the left and right sides of the step are
ψ_l = [𝐞_1le^ik_1l x' +
r_2𝐞_2le^ik_2l x' + r_4𝐞_4le^ik_4l x']
e^ik_y' y',
ψ_r = [t_1 𝐞_1re^ik_1r x' +
t_3𝐞_3re^ik_3r x']e^ik_y' y',
where the t_i are transmitted amplitudes and r_i are reflected
amplitudes. Mode 1 is right propagating, mode 2 is left propagating, mode
3 is right decaying, mode 4 is left decaying and the subscripts l and r
denote the left and right sides of the step. The wave function must be
continuous at the step edge. Hence
𝐞_1l + r_2𝐞_2l + r_4𝐞_4l =
t_1 𝐞_1r + t_3𝐞_3r.
Equations for t_1 and t_3 are obtained by applying the biorthogonality
condition to Eq. (<ref>). Thus
𝐟^†_1l·𝐞_1r t_1 +
𝐟^†_1l·𝐞_3r t_3 = 1
𝐟^†_3l·𝐞_1r t_1 +
𝐟^†_3l·𝐞_3r t_3 = 0.
The coefficient matrix in these equations must be non-singular and this
excludes the possibility that 𝐟^†_3l·𝐞_1r = 0
when 𝐟^†_3l·𝐞_3r = 0. Hence when
𝐟^†_3l·𝐞_3r = 0,
the transmission coefficient, t_1, vanishes. Eq. (<ref>) is
the orthogonality condition mentioned in the introduction and
is the exact condition for AK tunneling at a hard potential step.
It may be satisfied because of swap symmetry or for critical
values of the incidence parameters (Section <ref>).
The reflection coefficients may also be obtained from Eq. (<ref>)
and are given by
r_2 = 𝐟^†_2l·𝐞_1r t_1 +
𝐟^†_2l·𝐞_3r t_3
r_4 = 𝐟^†_4l·𝐞_1r t_1 +
𝐟^†_4l·𝐞_3r t_3.
In deriving Eqs. (<ref>), (<ref>), (<ref>) and (<ref>),
we have focused on the case of two propagating modes and two evanescent modes; however, the polarization vectors are biorthogonal in all cases and the number of modes does not change.
Hence Eqs. (<ref>), (<ref>), (<ref>), (<ref>) are
always valid; the only case dependence is in the meaning of the mode indices.
Thus biorthogonality provides an easy way of finding the transmission
and reflection coefficients but as far as we know this has not been
reported before.
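In practice the amplitudes follow from one 2x2 linear solve and two projections once the modes have been classified. The helper below is a minimal sketch of our own, assuming the polarization vectors have already been obtained and sorted (for instance with the mode_problem routine above); the mode labels follow the text, and the tolerance in the final test is only a numerical stand-in for the exact condition.

import numpy as np

def hard_step_amplitudes(f1l, f2l, f3l, f4l, e1r, e3r):
    """Transmitted (t1, t3) and reflected (r2, r4) amplitudes at a hard step.

    The f vectors are the left (dagger) polarization vectors on the entrance side,
    e.g. rows of the f array returned by mode_problem, so plain dot products apply.
    Mode 1: right propagating, 2: left propagating, 3: right decaying, 4: left decaying.
    """
    A = np.array([[f1l @ e1r, f1l @ e3r],
                  [f3l @ e1r, f3l @ e3r]])
    t1, t3 = np.linalg.solve(A, np.array([1.0, 0.0]))
    r2 = (f2l @ e1r) * t1 + (f2l @ e3r) * t3
    r4 = (f4l @ e1r) * t1 + (f4l @ e3r) * t3
    return t1, t3, r2, r4

def ak_zero_hard_step(f3l, e3r, tol=1e-12):
    """Numerical test of the hard-step AK condition, f_3l^dagger . e_3r = 0."""
    return abs(f3l @ e3r) < tol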
§.§ AK tunneling at soft steps and arbitrary potential barriers
The condition for AK tunneling at a hard step,
Eq. (<ref>), can be generalized to soft steps and arbitrary
potential barriers by using a transfer matrix <cit.> to find the
transmission coefficients. The transfer matrix M relates the amplitudes
of the waves on the left and right sides of the system,
D(x_l)𝐚_l = M D(x_r)𝐚_r, where 𝐚_l =
(𝐫^T, 𝐢^T)^T, 𝐚_r = (𝐱^T,
𝐭^T)^T. Here 𝐢 is a vector of incident wave
amplitudes, 𝐫 is a vector of reflected wave amplitudes,
𝐭 is a vector of transmitted wave amplitudes, 𝐱 is a
vector of the amplitudes of waves incident from the right and D(x') is a
diagonal matrix of phase factors, exp(ik_i x').
The transmission coefficients satisfy equations analogous to Eqs. (<ref>)
and (<ref>),
M_11 t_1 e^i k_1r x'_r + M_13 t_3 e^i k_3r x'_r =
e^i k_1l x'_l
M_31 t_1 e^i k_1r x'_r + M_33 t_3 e^i k_3r x'_r = 0.
When
M_33 = 0 ,
the transmission coefficient, t_1, vanishes. Eq. (<ref>) is the
transfer matrix condition mentioned in the introduction and is the exact
condition for AK tunneling at an arbitrary potential step or
barrier. Eq. (<ref>) shows that AK tunneling may occur
but numerical calculations of M_33 are needed to check whether it does
occur. This is a difficult computational problem as large numerical errors
accumulate because of the growing exponential contributions to the transfer
matrix. This can be avoided by computing the transmission coefficient and
locating its zeros instead of searching for the zeros of M_33.
However M_33 can be computed accurately in the exceptional case of a thin
barrier which consists of a spatially uniform potential with hard edges.
In this case the transfer matrix elements are
M_αβ = 𝐟^†_α l·[
∑_j 𝐞_jcexp(-ik_jc w) 𝐟^†_jc] ·𝐞_β r,
where w is the barrier width and the subscript c denotes polarization
vectors in the center of the barrier. The mathematical form of
Eq. (<ref>) is a consequence of biorthogonality. This form is valid
for arbitrary barrier widths but can be used to compute the transfer matrix
elements accurately only when the width is small.
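For the thin hard-walled barrier, the transfer matrix can be assembled directly from the polarization vectors. The sketch below is ours, following the conventions of the mode_problem routine above and assuming the modes are stored in the text's order 1-4; it returns the full 4x4 matrix, whose (3,3) element is the quantity whose zeros signal AK tunneling.

import numpy as np

def thin_barrier_transfer_matrix(f_l, e_c, f_c, k_c, e_r, w):
    """Transfer matrix of a uniform barrier of width w (nm) with hard edges.

    f_l: rows are the entrance-side f_alpha^dagger; e_c, f_c, k_c: right vectors
    (columns), left vectors (rows) and k_x values inside the barrier; e_r: columns
    are the exit-side right vectors.  With the modes in the text's order, M[2, 2]
    is the element M_33 whose zeros give AK tunneling.
    """
    P = e_c @ np.diag(np.exp(-1j * k_c * w)) @ f_c   # sum_j e_jc exp(-i k_jc w) f_jc^dagger
    return f_l @ P @ e_r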
§ EXAMPLES OF AK TUNNELING
In this section we give examples of AK tunneling in the 4-component,
continuum theory. Steps in unbiased BLG are discussed in Section
<ref>, steps in biased BLG in <ref> and
barriers in <ref>. We also explain why AK tunneling at
normal incidence in unbiased BLG results from the swap symmetry of the
Hamiltonian (<ref>).
Transmission coefficients in BLG have novel features that result from
strong TW. When the constant energy contours are warped, the gradient of
E(𝐤) is no longer parallel to 𝐤 so the current
carried by a Bloch state is also not parallel to 𝐤. Further,
when there are points of inflection on the contour, several Bloch states
with distinct 𝐤-vectors may contribute to the total current in a
particular direction <cit.>. Thus multiple incident states may
occur and even when there is only one incident state it may couple to two
distinct propagating states on the exit side of a step. A similar situation
may occur without TW in biased BLG because of its Mexican hat band
structure.
When there is one incident state, the transmission coefficient is
T = 1/j_x1l(|t_1|^2 j_x1r + |t_3|^2 j_x3r),
where j_x1l is the current carried by the incident state and j_x1r
and j_x3r are transmitted state currents. When there is only one
propagating state on the exit side, j_x3r vanishes because mode 3 is
then evanescent but when there are two propagating states j_x3r is not
zero. Thus Eq. (<ref>) gives the transmission coefficient in both
cases. In this work, we have found the case of two propagating transmitted
states only in Fig. <ref> (left) and only in a very small range of
incidence angles (see figure caption). We have not found the case of
several incident states although this case can occur <cit.> and is
relevant to experiment. It is discussed further in Section
<ref>.
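In code, the same expression can be evaluated directly from the mode currents; a minimal sketch of our own, reusing the velocity matrices returned by the mode_problem routine above:

import numpy as np

def transmission_coefficient(t1, t3, e1l, e1r, e3r, Vx_l, Vx_r):
    """T = (|t1|^2 j_x1r + |t3|^2 j_x3r) / j_x1l with j_x = Re(e^dagger v_x' e).

    When mode 3 on the exit side is evanescent its diagonal current vanishes, so the
    same formula covers both one and two propagating transmitted states.
    """
    jx = lambda e, Vx: float(np.real(np.conj(e) @ Vx @ e))
    return (abs(t1) ** 2 * jx(e1r, Vx_r) + abs(t3) ** 2 * jx(e3r, Vx_r)) / jx(e1l, Vx_l)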
There are 4 step configurations in each valley, because carriers may be
incident from the left or right and may encounter an up step or a down step
(Fig. <ref>). This gives 8 possible transmission
coefficients when multiple propagating states do not occur and more
otherwise. However these transmission coefficients are related by symmetry
and all of them have similar features. We detail only one of these configurations in the K valley. The relations between the
transmission coefficients are explained in Appendix <ref>.
T is a function of E and one variable related to the angle of
incidence. This variable can be either k_y', or the polar angle of the
incident state 𝐤-vector, ϕ_k (k-incidence angle) or the
polar angle of the incident current, ϕ_c (current incidence
angle). These angles are different in the presence of TW because the
current is not parallel to 𝐤. We plot T as a function of E,
ϕ_k or k_y'. However ϕ_c is relevant to experiments in the
ballistic transport regime so in the figure captions we give the values of
ϕ_c and ϕ_k at which AK zeros occur.
To find the AK condition we normally use bisection to locate the zeros of
𝐟^†_3l·𝐞_3r or M_33. This method
brackets the roots of a function so we can be sure that a root exists
between the brackets that it returns. We stop bisecting when these brackets
differ by a number close to 64-bit precision. In the case of thin barriers,
w ≲ 150 nm, with hard walls, we use Eq. (<ref>) to find M_33.
For thicker barriers or systems with soft walls, we use an S-matrix
method <cit.> to search for minima of T. The minimum
value found in all cases is <10^-9.
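A bare-bones version of such a bisection search is sketched below (our illustration, not the production code). The function g could be, for example, ϕ_k ↦ 𝐟^†_3l·𝐞_3r along a zigzag step, where the overlap is real; the two-parameter search needed when the overlap is complex is not shown.

def bisect_zero(g, lo, hi, tol=1e-15, max_iter=200):
    """Bracketing bisection for a real-valued function g with g(lo) g(hi) < 0."""
    glo, ghi = g(lo), g(hi)
    if glo * ghi > 0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        gmid = g(mid)
        # stop when the bracket width is close to 64-bit resolution
        if gmid == 0.0 or (hi - lo) <= tol * max(1.0, abs(mid)):
            return mid
        if glo * gmid < 0:
            hi = mid
        else:
            lo, glo = mid, gmid
    return 0.5 * (lo + hi)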
Throughout this work we use '∼' and '=' to distinguish incidence
parameters that are found numerically from incidence parameters that are
input to our codes. '∼' followed by a number with 4 significant digits
indicates a parameter found numerically while '=' followed by a number
gives an exact input value.
The Hamiltonian parameters in meV <cit.> are: γ_0
= 3160, γ_3 = 380, γ_4 = 140, t = 381, Δ' = 22.
The velocity parameters in Eq. (<ref>) are related to the γ
parameters by v_i=aγ_i√(3)/2ħ, where a=0.246 nm is the
lattice constant.
The potentials are given in the figure captions. The subscript l denotes
potentials on the left side of a step and the left and right sides of a
barrier, r denotes the right side of a step and c denotes the center of
a barrier.
§.§ Potential steps in unbiased BLG
In unbiased BLG we have found AK tunneling only when the step edge is
parallel to an armchair direction or a zigzag direction. In the armchair
case, AK tunneling occurs only when the incident current is normal to the
step edge (Section <ref>) and results from the swap
invariance of the 4-component Hamiltonian (Section <ref>).
This is the only case where AK tunneling occurs in an extended energy
range. In the zigzag case, it may occur at normal or oblique current
incidence but only at critical values of the energy or angle of incidence
(Section <ref>).
§.§.§ Armchair edge
Fig. <ref> (left) shows transmission coefficients for normal
incidence on a potential step in unbiased BLG. The armchair directions
correspond to θ = n π / 3, where n is an integer; the θ =
0^∘ case is shown in the figure. AK tunneling occurs in the energy
range where the incident state on the left side of the step is in the
conduction band and the transmitted state on the right side is in the
valence band. This range starts about 1-2 meV above the bottom of the
potential step and ends about 1-2 meV below the top. These energy offsets
occur because the conduction and valence bands overlap in a small energy
range when TW is present <cit.>. Except for the
offsets, AK tunneling at θ = 0^∘ is similar to that found
earlier for a hard step in the 2-component approximation without TW
<cit.>. However in the presence of TW the occurrence of AK
tunneling depends strongly on the step orientation.
This is illustrated by the case of θ = 15^∘. This value of θ
is midway between the θ = 0^∘ armchair direction and the θ
= 30^∘ zigzag direction. Fig. <ref> shows that in this case
zero transmission does not occur but it is still possible to observe a
large decrease in T in the energy range between the conduction band edge
on the left and the valence band edge on the right.
Fig. <ref> (left) also shows that AK tunneling occurs
at both hard and soft steps. The soft step potential is (V_0/2)(1 +
tanh(x'/w)), where V_0 is the step height and w is the step width.
The conditions needed for AK tunneling at this soft step are
exactly the same as those needed for a hard step. When θ
= 15^∘, the large decrease in T also occurs.
§.§.§ Swap symmetry of Hamiltonian
The AK tunneling at normal incidence occurs because when k_y = 0, the
4-component Hamiltonian for unbiased BLG is swap invariant and so is
the coefficient matrix in Eq. (<ref>).
The swap operation is performed by the operator
S =
( [ 0 σ_x; σ_x 0; ]),
where σ_x is a Pauli matrix and the zeros denote 2× 2
matrices whose elements are all zero. The eigenvalues of S are s = ±
1 and both are doubly degenerate. By expressing the Hamiltonian in the
basis formed by the eigenvectors of S it can be shown that 𝐞_3
and 𝐞_4 are in the s=-1 subspace when E is in the valence
band and in the s= +1 subspace when E is in the conduction band.
Further, Eq. (<ref>) shows that the same is true for
𝐟_3^†.
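A quick numerical illustration of this swap invariance (our own script; the k_x value is arbitrary and the ħv_i values, in meV nm, follow from the γ parameters given above):

import numpy as np

hv0, hv3, hv4, t, Dp = 673.2, 80.96, 29.83, 381.0, 22.0
S = np.fliplr(np.eye(4))   # swap operator in the (A1, B1, A2, B2) basis: layer and site interchange

def H_normal(kx, V1, V2):
    """K-valley Hamiltonian at k_y' = 0 and theta = 0 (armchair edge); kx in 1/nm."""
    return np.array([[V1, hv0 * kx, -hv4 * kx, hv3 * kx],
                     [hv0 * kx, V1 + Dp, t, -hv4 * kx],
                     [-hv4 * kx, t, V2 + Dp, hv0 * kx],
                     [hv3 * kx, -hv4 * kx, hv0 * kx, V2]])

print(np.allclose(S @ S, np.eye(4)))                                            # S^2 = I
print(np.allclose(S @ H_normal(0.3, 0.0, 0.0) @ S, H_normal(0.3, 0.0, 0.0)))    # unbiased: True
print(np.allclose(S @ H_normal(0.3, 0.0, 50.0) @ S, H_normal(0.3, 0.0, 50.0)))  # biased: False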
The AK tunneling at a hard, armchair step in unbiased BLG is a consequence
of the fact that the swap eigenvalues, s_3l of 𝐟_3l and
s_3r of 𝐞_3r are of opposite sign.
Because S^2 = I, the 4× 4 unit matrix,
𝐟^†_3l·𝐞_3r = 𝐟^†_3l S^2
·𝐞_3r = s_3ls_3r𝐟^†_3l·𝐞_3r= -𝐟^†_3l·𝐞_3r.
Hence 𝐟^†_3l·𝐞_3r
vanishes. Thus AK tunneling at a hard
step occurs throughout the energy range where the incident state is in the
conduction band and the transmitted state is in the valence band or vice
versa.
The AK tunneling at a soft, armchair step in unbiased BLG is also a
consequence of the swap symmetry. M_33 gives the amplitude of the
𝐞_3 contribution to the state, ψ_l, on the left of a step
when the state on the right is 𝐞_3rexp(ik_3r x'). That is
M_33 = 𝐟^†_3l·ψ_l. But the state on the right is
in the s= -1 subspace and remains in this subspace for all x' as the
two subspaces are decoupled because of the swap symmetry. Thus ψ_l is
in the s= -1 subspace and M_33 vanishes because
𝐟^†_3l is in the s= +1 subspace. Hence the occurrence
of AK tunneling is independent of the shape of the step potential, as can
be seen in Fig. <ref> (left).
Another important consequence of swap symmetry is that complete evanescent
to propagating mode conversion occurs at armchair potential steps in
unbiased BLG. The propagating states in the conduction band have opposite
swap symmetry to those in the valence band and the same is true for the
evanescent states. The same analysis that led to M_33 = 0 then shows
that the propagating-propagating and evanescent-evanescent elements of the
transfer matrix vanish, that is
M_11 = M_12 = M_21 = M_22 = M_33 = M_34 = M_43 = M_44 = 0.
Hence any propagating state on one side of a step must couple to an
evanescent state on the other side.
§.§.§ Zigzag edge
The zigzag edges correspond to θ = nπ/6 where n is an odd
integer. Fig. <ref> (right) shows that AK tunneling occurs at
normal current incidence on a θ = 30^∘ zigzag step at a critical
energy E_crit∼ 109.6 meV. And Fig. <ref> (left) shows
that AK tunneling occurs at oblique incidence on the same step over a wide
range of energies. In both figures the AK transmission zeros are very sharp
but T is 1% within a few meV or a few degrees of the
zeros. Thus each AK zero is surrounded by an observable transmission
minimum. The cut-offs in T(ϕ_k) near |ϕ_k| = 30^∘ at E =
E_crit are caused by total external reflection <cit.>.
The AK tunneling at oblique incidence results from TW.
Without TW, AK tunneling in unbiased BLG occurs only
at normal incidence because the unnormalized 𝐞_3 vectors in this
case are
𝐞_3 = (c(λ - k_y), 1, b, b c(λ + k_y))^T,
where k_3 = i λ, c = iħ (v_0 - b v_4)/(E-V) and
b = +1 in the conduction band and -1 in the valence band. By evaluating
𝐟^†_3l·𝐞_3r with these vectors it can be shown
that AK tunneling only occurs at normal incidence as found in earlier work
<cit.> in the 2-component approximation without the Δ
and γ_4 terms. However in the presence of TW, the
𝐞_3 vectors no longer have the simple form given in
Eq. (<ref>) and AK tunneling occurs at oblique incidence as shown
in Fig. <ref> (left).
The AK tunneling at normal current incidence shown in Fig. <ref>
(right) occurs at a critical condition when one of the AK transmission
zeros occurs exactly at a ϕ_k value that makes ϕ_c zero. As shown
in Fig. <ref> (left), the AK zeros move to smaller
|ϕ_k| when the energy increases. When E=E_crit, an AK zero occurs at
ϕ_k ∼ -15.66^∘, the ϕ_k value that makes the incident
current normal to the step edge. This results in the AK zero shown in
Fig. <ref> (right) which also occurs at E=E_crit.
§.§ Potential steps in biased BLG
In biased BLG, AK tunneling does not occur over an extended energy
range because the bias potential breaks the swap symmetry. Nevertheless
AK tunneling does occur at critical energies or angles of incidence where
𝐟^†_3l·𝐞_3r vanishes. These energies and
angles depend on the step edge orientation and the bias field
configuration, that is whether the bias fields on opposite sides of
the step are parallel or anti-parallel.
For all step edge orientations other than zigzag, AK tunneling occurs at a
critical pair of θ and ϕ_k values or a critical pair of
θ and E values. A pair of values is needed because
𝐟^†_3l·𝐞_3r is complex unless the step edge
orientation is zigzag. This means two parameters must be varied to ensure
that the real and imaginary parts of
𝐟^†_3l·𝐞_3r are both zero. We find these
zeros by fixing E and varying θ and ϕ_k.
Fig. <ref> (right) shows an example of an AK transmission zero
which occurs at a hard step close to the 60^∘ armchair direction. The
form of T(ϕ_k) is similar to the form found in unbiased BLG
(Fig. <ref> (left)) and T(ϕ_k) is again small within a few
degrees of the exact zero. The figure also shows an AK transmission zero
at a soft step close to the 60^∘ armchair direction. The positions of
the zeros in biased BLG depend on the step width because the swap symmetry
is broken. However in the example shown in Fig. <ref> (right),
θ and ϕ_k only change by a few degrees when
the step wall is changed from hard to soft.
The bias fields in the case of Fig. <ref> (right) are in the
anti-parallel configuration. Similar AK transmission zeros occur in the
parallel field configuration. However their position is more sensitive to
the bias field magnitude: when the magnitude increases from zero they move
away from the armchair direction rapidly.
AK tunneling also occurs in biased BLG when the step edge is parallel to a
zigzag direction. These directions are special because
𝐟^†_3l·𝐞_3r is real. Then zeros can be found
by varying one parameter; we vary either E or ϕ_k. The resulting
form of T is very similar to that found in unbiased BLG: typically there
are two zeros in T(ϕ_k) and there is a critical energy where a
transmission zero occurs at normal current incidence.
The occurrence of these zeros depends on the bias field configuration. In
the anti-parallel case they occur at normal and oblique incidence with and
without TW up to at least ≃± 21 meV bias. In the case of parallel
fields and oblique incidence they also occur up to at least ≃± 21
meV bias when there is no TW. But if TW is present the bias magnitude must be
≲ 14 meV. In the case of parallel fields and normal incidence,
we have not found any AK zeros without TW and when TW is present the bias
magnitude must be ≲ 7 meV.
§.§ Potential barriers
AK tunneling occurs at potential barriers as well as steps. We show this
first for a barrier with hard walls. To find the necessary barrier width
and potential we set E=56 meV, θ∼ 56.54^∘ and
ϕ_k∼-1.174^∘ as in Fig. <ref> and vary the barrier
width and V_1c to find zeros of M_33. The barrier width that makes
M_33 zero also depends on V_2c; a width of ≃ 9 nm is obtained
with V_2c∼ 103.3 meV. The potential and barrier width found in this
way are used to compute the transmission coefficients in both parts of
Fig. <ref>. AK tunneling occurs at normal 𝐤
incidence when θ = 30^∘ and oblique 𝐤 incidence
when θ∼ 56.54^∘.
Fig. <ref> also shows that AK tunneling occurs at soft
potential barriers. The wall width is chosen to be slightly less than
an order of magnitude smaller than the barrier width. Nevertheless,
the position of the AK zero at oblique incidence is very
sensitive to the soft wall width.
The smallness of the barrier width is quite remarkable. The width is only
≃ 9 nm yet tunneling through the barrier is blocked completely. AK
tunneling also occurs at wider barriers. When the edge is parallel to the
30^∘ zigzag direction, we have found it at barriers up to about
150 nm wide in biased BLG and about 25 nm wide in unbiased BLG,
see Fig. <ref> (d).
The possibility of AK tunneling at finite width barriers has been
mentioned in ref. <cit.> on the basis of calculations in the
2-component approximation with TW for a barrier in unbiased BLG with the
edge parallel to an armchair direction. We have not found AK tunneling in
this case, both in the 4-component theory and the 2-component
approximation. The most likely cause of this discrepancy is that evanescent
waves are not taken into account in ref. <cit.>.
§ VALLEY DEPENDENCE OF AK TUNNELING
The condition for AK tunneling can be valley-dependent because in BLG the
transmission coefficient can be valley-dependent. Because of time reversal,
the transmission coefficients in the two valleys satisfy
T_K(k_y') = T_K'(-k_y'),
see Appendix <ref>. In principle, Eq. (<ref>) allows valley-dependent
transmission to occur. However, if T(k_y') = T(-k_y') within each
valley, Eq. (<ref>) gives T_K(k_y') = T_K'(k_y'). Hence
valley-dependent transmission can occur only when the symmetry of
T(k_y') is broken within each valley.
In BLG there are two symmetry breaking mechanisms.
The first is TW. This breaks the symmetry because the
constant energy contours are not symmetric in k_y' unless the step
edge is parallel to an armchair direction.
The second mechanism is asymmetry of the potential, that is V_i(x') ≠ V_i(-x'). This allows valley asymmetric transmission even in the
absence of TW.
The transmission coefficient T_K(k_y') for the potentials
V_i(x') is related by symmetry to the transmission coefficient
T̂_K'(k_y') for the spatially inverted potentials V_i(-x'),
see Appendix <ref>.
In the presence of TW,
T_K(k_y', θ) = T̂_K'(k_y', θ±π/3)
but without TW
T_K(k_y') = T̂_K'(k_y').
If the potentials are symmetric, T = T̂ in each valley hence
T(k_y') is valley symmetric. Otherwise T(k_y') is in general valley
asymmetric. This counter-intuitive relation between the symmetry of T in
the transverse direction and the symmetry of V in the longitudinal
direction results from the fact that π_x' = p_x' in the K-valley
and -p_x' in the K' valley.
Fig. <ref> illustrates valley-dependent transmission in BLG. We plot
T as a function of v_0 p_y' = v_0 ħ k_y' to show the valley
symmetry or asymmetry explicitly. The transmission coefficients without TW
are computed by setting v_3 = 0 and retaining all the other terms in the
Hamiltonian. Part (a) shows T(k_y') for a potential step without
TW. The transmission is valley-dependent in accordance with
Eq. (<ref>) and T_K(k_y') = T_K'(-k_y') in accordance with
Eq. (<ref>). Part (b) shows T(k_y') for a potential barrier
without TW. The barrier potential is symmetric in x' so the transmission
is symmetric in k_y'. Part (c) shows T(k_y') for no TW and the same
potential barrier as for part (b) plus an additional potential that makes
the barrier asymmetric. The
transmission is valley-dependent in accordance with Eq. (<ref>).
(In each layer the symmetry breaking potential consists of
a constant shift applied in the x' range 110 ≤ x' ≤ 150 nm, where
the origin is at the entrance edge of the barrier and the barrier width is
150 nm. The shifts are -80 meV in layer 1 and -40 meV in layer 2.)
Part (d) shows T(k_y') for the same symmetric potential barrier as for
part (b) but with TW. The transmission is valley-dependent and the
transmission coefficients satisfy Eq. (<ref>).
An important consequence of Eqs. (<ref>) and (<ref>) is
that AK tunneling is valley-dependent and this can be seen in
Fig. <ref>. If there is an AK zero at position k_y' in a
particular valley, one also occurs at -k_y' in the other valley. This
can result in a very large difference in the transmission coefficients in
the two valleys. For example, in part (d) near v_0p_y' = ± 80 meV,
the transmission coefficients in the two valleys differ by over 4 orders of
magnitude. It should be possible to use this effect to realize a valley
polarizer, see Section <ref>.
The large valley dependence of the transmission does not occur in monolayer
graphene (MLG) at typical carrier energies. First, because TW is weak in
MLG unless the carrier energy is high <cit.>. Secondly, because the
equivalent of the swap symmetry in MLG is site interchange, an operation
performed by σ_x. In each valley the MLG Hamiltonian satisfies
σ_x H(k_y') σ_x = H(-k_y'). This has the consequence that
T(k_y') = T(-k_y') in each valley. Hence the potential asymmetry
mechanism is not available in MLG.
§ EXPERIMENTAL CONSEQUENCES
The ideal arrangement for experimental investigation of the effects we
have reported is a potential barrier in the ballistic transport regime
<cit.>. The barrier geometry has
the advantage that electrodes can be placed on the exit side to collect the
outgoing current while operation in the ballistic transport regime allows
the incidence conditions to be controlled. We envisage an arrangement
similar to the one suggested in our earlier work <cit.> where a
collimated beam of electrons <cit.> is incident on a potential
barrier formed by a top gate and a bottom gate is used to set the
Fermi level.
To obtain a clear signal, the incidence conditions should be set so that AK
tunneling occurs in both valleys. Eq. (<ref>) shows that this
requires k_y' = 0 as in Fig. <ref>. It should be possible to
satisfy this condition experimentally by fixing the collimator position and
varying the gate voltages. Although the AK zeros are very sharp, we have
found that T remains small, roughly 1% to 0.01%, over a
measurable range of incidence parameters centered on the exact zero. This
drop in T is the experimental signature of AK tunneling. However when TW
is strong, several incident 𝐤 states may carry current at the
same ϕ_c <cit.>. The ϕ_c ranges where this happens are
of small width, only ≃ 0.4^∘, and should be avoided to obtain a
clear signal of AK tunneling.
The experimental arrangement we have suggested becomes a valley polarizer
when k_y' ≠ 0. Then
if the collimator is aligned so that carriers are incident at the critical
angle for AK tunneling, transmission takes place only in one valley, while
carriers in the other valley are reflected away from the barrier. This
mechanism is similar to valley polarization by total external reflection
<cit.> but can generate valley polarization even without
TW.
§ RELATION BETWEEN 4-COMPONENT AND 2-COMPONENT THEORIES
In this section we show that the exact condition for AK tunneling in the
2-component approximation is simply the orthogonality condition,
Eq. (<ref>), with the exact 4-component polarization vectors
replaced with approximate ones (Section <ref>). We then
show that in the case of normal incidence this condition is equivalent to
the pseudospin conditions given by earlier authors <cit.> (Section <ref>). Finally, we
compare transmission coefficients computed numerically with the 4-component
theory and the 2-component approximation (Section
<ref>).
TW and other corrections were not taken into account in the first work on
AK tunneling in the 2-component approximation <cit.>. In this section we set v_3, v_4 and Δ' in
Eq. (<ref>) to zero so that our 2-component Hamiltonian is the same as
in refs. <cit.> and <cit.>.
§.§.§ Condition for AK tunneling in the 2-component approximation
The 2-component approximation to the 4-component theory is obtained by
eliminating the dimer components, ϕ_B1 and ϕ_A2,
approximately <cit.>. To first order in 1/t, the 2-component
state formed from the non-dimer components,
(ϕ̃_A1, ϕ̃_B2)^T, is found from the
effective Hamiltonian
H̃_K = -v_0^2/t( [ 0 (π_K^†)^2; (π_K)^2 0; ]) +
( [ V_1 0; 0 V_2; ]),
where tilde denotes the 2-component approximation. To the same order of
approximation, the dimer components satisfy
ϕ̃_B1 = -v_0/tπ_K^†ϕ̃_B2
ϕ̃_A2 = -v_0/tπ_K ϕ̃_A1.
The transmission and reflection coefficients may be found by imposing
appropriate boundary conditions at the step edge. As H̃_K contains
second order derivatives, these conditions are continuity of each component
and its derivative <cit.>. However this method of finding the
transmission and reflection coefficients obscures the relation between the
4-component theory and the 2-component approximation. We therefore
reformulate the 2-component approach so the boundary conditions become the
continuity of each component of an approximate 4-component state.
To do this we use the approximate dimer components given by
Eqs. (<ref>) and (<ref>).
As the only y'-dependence is a factor of exp(ik_y'y'),
Eqs. (<ref>) and (<ref>) imply that the x' derivatives of
ϕ̃_B2 and ϕ̃_A1 are continuous provided that
ϕ̃_B1 and ϕ̃_A2 are continuous. This allows the
derivative boundary condition to be replaced by a continuity condition on
the approximate 4-component state
(ϕ̃_A1, ϕ̃_B1,
ϕ̃_A2, ϕ̃_B2)^T. Next we show that the
corresponding approximate polarization vectors satisfy a biorthogonality
relation similar to Eq. (<ref>).
Eqs. (<ref>), (<ref>) and (<ref>) lead to an
eigenvalue equation for the approximate polarization vectors,
𝐞̃_α,
ṽ_x'K^-1(W̃ + p_y'ṽ_y'K)𝐞̃_α
= -p̃_α𝐞̃_α,
where
W̃ =
( [ V_1 - E 0 0 0; 0 0 t 0; 0 t 0 0; 0 0 0 V_2 - E; ])
and ṽ_x'K and ṽ_y'K respectively are
v_x'K and v_y'K with v_3 and v_4 set to zero. The matrix on the
left hand side of Eq. (<ref>) is again a general complex matrix
hence the approximate polarization vectors form a biorthogonal set. This
means biorthogonality can be used as described in Section <ref>
to find the transmission coefficients in the 2-component approximation.
Thus the exact condition for AK tunneling in the 2-component approximation is
𝐟̃^†_3l·𝐞̃_3r = 0,
where 𝐟̃^†_3l is an approximate left polarization
vector.
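Numerically, this condition can be evaluated with exactly the same machinery as in the 4-component case. A sketch of our own, mirroring the mode_problem routine above with the tilde matrices and v_3 = v_4 = Δ' = 0 (parameter values as before; mode sorting again left to the caller):

import numpy as np

def mode_problem_2comp(E, ky, V1, V2, hv0=673.2, t=381.0):
    """Approximate plane-wave modes in the 2-component scheme, K valley, component
    order (A1, B1, A2, B2); the AK test is then the vanishing of f3l . e3r as above."""
    W = np.array([[V1 - E, 0, 0, 0],
                  [0, 0, t, 0],
                  [0, t, 0, 0],
                  [0, 0, 0, V2 - E]], dtype=complex)
    Vx = np.array([[0, hv0, 0, 0],
                   [hv0, 0, 0, 0],
                   [0, 0, 0, hv0],
                   [0, 0, hv0, 0]], dtype=complex)
    Vy = np.array([[0, -1j * hv0, 0, 0],
                   [1j * hv0, 0, 0, 0],
                   [0, 0, 0, -1j * hv0],
                   [0, 0, 1j * hv0, 0]], dtype=complex)
    kx, e = np.linalg.eig(-np.linalg.inv(Vx) @ (W + ky * Vy))
    f = np.linalg.inv(e)
    return kx, e, f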
§.§.§ Pseudospin conditions for AK tunneling at normal
incidence in the 2-component approximation
In the case of normal incidence, Eq. (<ref>) leads to the
pseudospin conditions found by earlier authors <cit.>. We outline the proof of this here and give mathematical
details in the appendices.
At normal incidence in unbiased BLG, the approximate polarization vectors
are eigenvectors of the swap operator because ṽ_x'K and
W̃ in Eq. (<ref>) are swap symmetric. This means that
the condition for AK tunneling in the 2-component approximation is the
same as shown in Section <ref> for the 4-component theory.
This condition is equivalent to the pseudospin conservation condition
because the pseudospin eigenvalue of a 2-component polarization vector is
identical to the swap eigenvalue of the corresponding approximate
4-component vector (Appendix <ref>).
In the case of biased BLG, the AK condition is that the expectation values
of the pseudospin on opposite sides of a step are the same. This condition
can be obtained by rotating the polarization vectors and using
Eq. (<ref>) to find the necessary rotation angle (Appendix
<ref>).
§.§.§ Numerical examples
In this section we present numerically computed transmission coefficients
for biased BLG and show that the critical energy and angle of incidence for
AK tunneling in the 2-component approximation may differ significantly
from those found in the 4-component theory.
Fig. <ref> shows transmission coefficients for electrons at
normal incidence. The critical energy for AK tunneling differs by about a
factor of 2 when there is a large bias mismatch. Then the 2- and 4-
component transmission coefficients near the critical energies differ by
one to two orders of magnitude (left side of figure).
Fig. <ref> shows transmission coefficients for electrons at
oblique incidence. In this case AK tunneling in the 2-component
approximation has not been reported before but occurs in accordance with
Eq. (<ref>). But although AK zeros occur at both 16 meV
(Fig. <ref>, left) and 56 meV (Fig. <ref>, right)
in the 4-component theory, there is no zero at 16 meV in the
2-component approximation. In general, the 2-component approximation
appears to be poor at large transverse momentum.
Figs. <ref> and <ref> suggest that the reliability of
the 2-component approximation depends on the energy, angle of incidence and
interlayer bias. Because of this it is preferable to use the
4-component theory for numerical calculations. This requires no extra
computational cost or programming effort as the number of boundary
conditions is the same in both cases.
§ SUMMARY AND CONCLUSION
We have found exact conditions (Eqs. (<ref>) and
(<ref>)) for AK tunneling in the 4-component continuum theory of
BLG. These conditions have 3 important consequences.
First, AK tunneling is ubiquitous but depends on the crystallographic
orientation of the step or barrier. In unbiased BLG at normal incidence on
a hard or soft armchair step it occurs because of the swap symmetry of the
4-component Hamiltonian. When swap symmetry is not present it occurs in
biased and unbiased BLG, not only at normal incidence but also at oblique
incidence on hard and soft steps and barriers with TW in all cases.
Secondly, AK tunneling at oblique incidence is
valley asymmetric provided that the transmission coefficient within each
valley is asymmetric in the transverse momentum. This asymmetry occurs
naturally because of TW but even without TW, asymmetry can be induced by
making the potential asymmetric in the longitudinal direction.
Thirdly, the exact condition for AK tunneling at normal and oblique
incidence in the 2-component approximation, Eq. (<ref>), is
just Eq. (<ref>) with the 4-component polarization vectors
replaced by approximate ones. At normal incidence
Eq. (<ref>) and swap symmetry lead to the pseudospin
conditions for AK tunneling in the 2-component approximation. However,
there are cases where AK tunneling occurs in the 4-component theory but
not in the 2-component approximation.
The theoretical methods we have developed are applicable to analysis of
transmission and reflection in the tight binding approach, at least in the
case of normal incidence on a hard armchair step in unbiased BLG. We show
in Appendix <ref> that in this case AK tunneling occurs as in
the continuum approach and that the transmission zero results from swap
symmetry and the orthogonality condition. Further investigation of
AK tunneling in the tight binding approach would require the
development of numerical methods to find all the k_x values for a step of arbitrary
orientation and compute T for a soft step.
Our findings are experimentally testable because we have shown that AK
tunneling occurs at experimentally realizable soft-walled potentials and
the transmission coefficient remains small over a measurable range centered
on the exact transmission zeros. It should be possible to observe AK
tunneling by using a graphene electron collimator <cit.> coupled
to a potential barrier and working in the ballistic transport regime
<cit.>. When this arrangement is
operated at zero transverse momentum it can detect AK tunneling and if it
is operated at non-zero transverse momentum, it functions as a valley
polarizer. The valley polarization is large and can be optimized by
adjusting the potential.
In summary, our work suggests that AK tunneling in BLG occurs under a wide
range of conditions, is experimentally detectable and can be used to make a
valley polarizer.
PAM thanks Prof. S. Tsuneyuki for hospitality at the Department of Physics,
University of Tokyo. The computations were done on the ALICE high performance
computing facility at the University of Leicester. HA is grateful for
support from the Core Research for Evolutional Science and Technology
“Topology" project from the Japan Science and Technology Agency (Grant No.
JPMJCR18T4) and JSPS KAKENHI Grant No. JP17H06138.
§ SYMMETRY RELATIONS
§.§ General relations for steps and barriers
We have previously detailed some relations between transmission
coefficients for potential barriers <cit.>. The only difference
between a barrier and a step is that the potential is the same on the
entrance and exit sides of the barrier, while for step it is different. All
of the relations we have already given can be generalized to the case of a
step. Here we state the relations that apply in the case when there is one
incident state and one propagating transmitted state.
All of the relations can be derived from the asymptotic S-matrix or the
Hamiltonian. The asymptotic S-matrix relates the amplitudes of the
incoming and outgoing waves in the asymptotic regime where the evanescent
wave amplitudes are negligible:
( [ r; t; ]) =
( [ S_a S_b; S_c S_d; ])
( [ i_0; x_0; ]),
where i_0 is the amplitude of the
incident wave, r is the amplitude of the reflected wave, t is the
amplitude of the transmitted wave and x_0 is the amplitude of a wave
incident from the right.
The relations <cit.> between the S-matrix elements and
between the transmission coefficients are
|S_b| = |S_c|,
T_K(k_y', θ) = T_K'(-k_y', θ),
T_K(k_y', θ) = T̂_K'(k_y', θ±π/3),
T_K(k_y', θ) = T_K'(-k_y', ±π/3 - θ),
where T̂ is the transmission coefficient for a barrier with the
spatially inverted potentials, V_i(-x'). Eq. (<ref>) is a
consequence of the unitarity of the S-matrix (or generalized unitarity
<cit.> when the polarization vectors are not normalized to unit
flux). Eq. (<ref>) results from time reversal and
Eqs. (<ref>) and (<ref>) occur because there are
transformations that relate the Hamiltonians at different values of θ.
An additional relation occurs in the case of unbiased BLG because the swap
operator then transforms the Hamiltonian as
SH(k_y', θ)S = H(-k_y', -θ). This leads to the relation
T(k_y', θ) = T(-k_y', -θ),
which holds in each valley.
§.§ Relations between transmission coefficients for the 4 step
configurations
In Section <ref> we stated that the transmission
coefficients for the 4 step configurations in Fig. <ref> are
related. We detail these relations first for the case when there is one
incident state and one propagating transmitted state. This is the case for
all the transmission coefficients presented in the main text, except when
-18.41^∘ ≲ ϕ_k ≲ -16.21^∘ in Fig. <ref> (left). We
explain the changes that apply in this small range at the end of this
sub-section.
In the case of one incident state and one propagating transmitted state in
the presence of TW, all the transmission coefficients can be found from 2
independent functions of k_y' and this reduces to 1 when the step edge
is parallel to an armchair direction or, when there is no bias,
a zigzag direction. Without TW only one function of k_y' is needed.
Within each valley this is a consequence of Eq. (<ref>).
The physical meaning of S_b and S_c is that S_c
is the transmitted amplitude of a wave incident from the left and S_b
is the transmitted amplitude of a wave incident from the right. Then it
follows from Eq. (<ref>) that T_ru = T_lu and T_rd =
T_ld, where the subscripts are defined in Fig. <ref>.
Once the transmission coefficients in one valley are known, those
in the other valley can be found from Eq. (<ref>). Thus only two
independent functions are needed to find all the transmission coefficients.
These functions can be taken to be T_lu and T_ld.
When the step edge is parallel to an armchair direction, only one
function is needed. In this case Eqs. (<ref>) and
(<ref>) give T_luK(k_y', 0) = T_ldK(-k_y', π / 3)
while Eqs. (<ref>) and
(<ref>) give T_ldK(k_y', 0) = T_ldK(k_y', π / 3).
Hence T_ldK(k_y', 0) = T_luK(-k_y', 0) and similarly
T_ldK(k_y', π/3) = T_luK(-k_y', π/3). Thus only one function
is needed and can be taken to be T_lu.
When the step edge is parallel to a zigzag direction, similar reasoning
leads to T_ldK(k_y', π/6) = T_luK(-k_y', π/2) and
T_ldK(k_y', π/2) = T_luK(-k_y', π/6). Hence in general,
T_lu and T_ld at the same value of θ remain
distinct. However in the special case of unbiased BLG, Eq. (<ref>)
together with the 2π/3 periodicity that results from trigonal warping,
give T(k_y', π/6) = T(-k_y', π/2). Then it follows that
T_ldK(k_y', π/6) = T_luK(k_y', π/6) and
T_ldK(k_y', π/2) = T_luK(k_y', π/2). Hence only one function
is needed and can be taken to be T_lu.
When there is no TW, the transmission coefficients are independent of
θ because the constant energy contours are circular. Then
reasoning similar to that used in the armchair case leads to
T_ldK(k_y') = T_luK(-k_y'). Again only one function
is needed and can be taken to be T_lu.
In the exceptional angular range in Fig. <ref> (left), one
incident state couples to two propagating transmitted states. When the step
is reversed this changes to two incident states each of which
couples to one propagating transmitted state. We have investigated this
case numerically for unbiased BLG as in Fig. <ref> (left) at the
incidence conditions and potentials given in the figure caption. We find
that the sum of the transmission coefficients can be obtained from one
independent function and this function can be taken to be T_lu as given
by Eq. (<ref>) for the case when there are two propagating
transmitted states. We also find that the sum satisfies Eq. (<ref>).
When the sum is known in one valley, this equation gives the sum in the
other one.
§.§ Relations used in Section <ref>
Eq. (<ref>) is a consequence of Eq. (<ref>) and the fact
that T is independent of θ when there is no TW. Alternatively,
Eq. (<ref>) can be obtained from the K Hamiltonian,
Eq. (<ref>). When there is no TW, inverting the x' co-ordinate,
i.e. putting x'→ -x', transforms H_K into the K'
Hamiltonian, Ĥ_K', in which the potentials V_i(x') are replaced
by V_i(-x'). This leads to Eq. (<ref>).
§ PSEUDOSPIN CONDITIONS FOR UNBIASED BLG
The pseudospin conservation condition can be stated in two ways. In the
first report of AK tunneling in BLG, <cit.> the authors say
that the propagating states on the left side of a step match onto an
evanescent state on the right so both states have the same pseudospin. In
later reports, <cit.> the authors say equivalently
that the propagating states on opposite sides of the step are of opposite
pseudospin. These conditions result from swap symmetry and we show this by
using the approximate polarization vectors.
It is convenient to work in a representation where the
component order is non-dimer followed by dimer, i.e. the approximate
4-component states are of form (ϕ̃_A1, ϕ̃_B2,
ϕ̃_A2, ϕ̃_B1)^T. The approximate polarization
vectors for the evanescent (e) and propagating (p) states are
𝐞̃_e = N_e
(1, ã_e, -iv_0ħλ̃/t, -iã_e
v_0ħλ̃/t)^T,
𝐞̃_p = N_p
(1, ã_p, -v_0ħk̃_x/t, -ã_p v_0ħk̃_x/t)^T,
where iλ̃ and k̃_x are approximations to the
x-component of 𝐤 and N_i are normalization constants.
ã_i = ±sgn(E-V_1)√((E-V_1)/(E-V_2)) where the sign is
+ for the evanescent state and - for the propagating state.
The swap operator in the same representation is
S = \begin{pmatrix} σ_x & 0 \\ 0 & σ_x \end{pmatrix}.
In unbiased BLG, ã_e = ± 1, ã_p = ∓ 1, where the
upper signs apply in the conduction band and the lower signs apply in the
valence band. Hence conduction band propagating states are swap
antisymmetric (s=-1) and so are valence band evanescent
states. The 2-component vectors formed from these vectors by neglecting
the dimer components are eigenvectors of the pseudospin, σ_x, with
eigenvalue s_x=s. Thus the pseudospins on both sides of the step are
identical when the state on the right is purely evanescent. The pseudospin
condition on the propagating states can be obtained in a similar way.
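A short numerical check of this statement is given below; the momentum factors standing in for v_0ħλ̃/t and v_0ħk̃_x/t are placeholders, since only the (1, ã, c, ãc) structure of the approximate vectors matters for the symmetry.

import numpy as np

# Approximate 4-component vectors in the (non-dimer, dimer) ordering (A1, B2, A2, B1).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.block([[sigma_x, np.zeros((2, 2))],
              [np.zeros((2, 2)), sigma_x]])

def approx_vector(a, c):
    # a = a_tilde, c = x-momentum factor (e.g. -i v0 hbar lambda / t)
    return np.array([1, a, c, a * c], dtype=complex)

for a, c, label in [(+1, -0.3j, "evanescent, conduction band"),
                    (-1, -0.25, "propagating, conduction band")]:
    v = approx_vector(a, c)
    s = (S @ v)[0] / v[0]                    # swap eigenvalue
    sx = (sigma_x @ v[:2])[0] / v[0]         # pseudospin of the non-dimer 2-vector
    assert np.allclose(S @ v, s * v) and np.allclose(sigma_x @ v[:2], sx * v[:2])
    print(label, "s =", s.real, "sigma_x eigenvalue =", sx.real)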
§ PSEUDOSPIN CONDITIONS FOR BIASED BLG
§.§ Rotation of polarization vectors
In biased BLG, the pseudospin condition for AK tunneling at a potential
step is that the incident state on the left side and the evanescent state
on the right side have the same pseudospin expectation value. Or
equivalently, that the expectation values of the pseudospin of the
right propagating states on either side of the step are of equal magnitude
and opposite sign <cit.>.
AK tunneling at normal incidence <cit.> occurs when the potentials
and energy satisfy
(E-V_1l)/(E-V_2l) = (E-V_1r)/(E-V_2r),
sgn(E-V_1l) = -sgn(E-V_1r).
The pseudospin expectation value conditions result from evaluating the
pseudospin expectation values for the 2-component states that occur when
Eq. (<ref>) is satisfied.
To show these conditions and Eq. (<ref>) result from swap
symmetry, we rotate the approximate 4-component polarization vectors for an
evanescent state so they become eigenstates of the swap operator. This
rotation can always be performed but we show that AK tunneling occurs only
for a critical pair of rotation angles. These angles give
Eqs. (<ref>) and the pseudospin expectation condition.
The necessary rotation matrix is
R = \begin{pmatrix} Q(ω) & 0 \\ 0 & Q(ω) \end{pmatrix},
where
Q = \begin{pmatrix} cos(ω) & -sin(ω) \\ sin(ω) & cos(ω) \end{pmatrix},
ω = ±π/4 - tan^-1ã_e.
Here the sign is that of the desired S eigenvalue and the rotation angle
ω is chosen so that ã_e becomes ± 1. Thus the rotated
vector becomes an eigenvector of S.
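The following sketch checks this rotation numerically; the value of ã_e and the momentum factor c are placeholders, and for either choice of sign R(ω) turns the approximate evanescent vector into an eigenvector of S.

import numpy as np

# Rotation check: with omega = +/-pi/4 - arctan(a_e), R(omega) turns the
# approximate evanescent vector (1, a_e, c, a_e*c) into an eigenvector of S.
def Q(w):
    return np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])

def R(w):
    return np.block([[Q(w), np.zeros((2, 2))], [np.zeros((2, 2)), Q(w)]])

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.block([[sigma_x, np.zeros((2, 2))], [np.zeros((2, 2)), sigma_x]])

a_e, c = 0.7, -0.4j                          # placeholder a_tilde_e and momentum factor
e_tilde = np.array([1, a_e, c, a_e * c])

for sign in (+1, -1):                        # desired swap eigenvalue
    w = sign * np.pi / 4 - np.arctan(a_e)
    v = R(w) @ e_tilde
    assert np.allclose(S @ v, sign * v)
    print("omega =", w, "-> S eigenvalue", sign)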
To identify the critical rotation angles it is convenient to work with only
the 𝐞 vectors. We use
Eq. (<ref>) to write Eq. (<ref>) as
𝐞̃^†_4lṽ_x'𝐞̃_3r = 0,
where the velocity operator in the (non-dimer, dimer) representation is
ṽ_x' = v_0 \begin{pmatrix} 0 & σ_x \\ σ_x & 0 \end{pmatrix}.
We choose the rotation angles ω_l and ω_r so that the S
eigenvalues on the left and right sides of the step are of opposite
sign. Then we insert these rotations into Eq. (<ref>). This gives
𝐞̃^†_4lṽ_x'𝐞̃_3r = 𝐞̃^†_4l R^T(ω_l) R(ω_l) ṽ_x'
R^T(ω_r) R(ω_r)𝐞̃_3r
= 𝐞̃^†_4l R^T(ω_l) R(ω_l +ω_r)
ṽ_x' R(ω_r)𝐞̃_3r,
where we have used ṽ_x' R^T = R ṽ_x'.
Next, we show that the right hand side of Eq. (<ref>) vanishes
when ω_l+ω_r = 0. We obtain
𝐞̃^†_4l R^T(ω_l) R(ω_l +ω_r)
ṽ_x' R(ω_r)𝐞̃_3r
= 𝐞̃^†_4l R^T(ω_l) R(ω_l +ω_r)
Sṽ_x'S R(ω_r)𝐞̃_3r,
= 𝐞̃^†_4l R^T(ω_l) S R^T(ω_l +ω_r)
ṽ_x'S R(ω_r)𝐞̃_3r,
= -𝐞̃^†_4l R^T(ω_l)R^T(ω_l +ω_r)
ṽ_x'R(ω_r)𝐞̃_3r,
where we have used RS = SR^T and the fact that the S eigenvalues on
opposite sides of the step are of opposite sign.
R(ω_l +ω_r) = R^T(ω_l +ω_r) = I when
ω_l+ω_r = 0 and then it follows from Eq. (<ref>)
that the right hand side of Eq. (<ref>) vanishes.
Eq. (<ref>) shows that ω_l+ω_r = 0 when ã_el
= -ã_er and this condition leads to Eq. (<ref>) and
the associated condition on the sign of E-V_1. Further, when
ã_el = -ã_er, the expectation values of the swap
operator on the left and right sides of the step satisfy
𝐞̃_1l^† S𝐞̃_1l =
𝐞̃_3r^† S𝐞̃_3r and these
expectation values are identical to the pseudospin expectation values.
The reason for the equality of the swap and pseudospin expectation values
is that non-dimer and dimer sub-vectors of the approximate 4-component
polarization vectors are proportional to each other.
Although we have used a rotation that makes the evanescent states
eigenstates of S, it is impossible to find a rotation that makes
all the plane wave states eigenstates of S. The reason is that
transformation of the coefficient matrix in Eq. (<ref>) results
in a matrix (Section <ref>) that has one invariant
subspace of dimension 2 so only 2 of the 4 rotated states can be
eigenstates of S. A rotation similar to Q is used in ref. <cit.>
but appears to be applied only to the propagating states. The
transformation of the evanescent states, which requires a different
rotation angle, is not discussed and neither is the invariant subspace.
§.§ Transformation of coefficient matrix
The transformation of the coefficient matrix in Eq. (<ref>) and
the resulting invariant subspace are illustrated in this section with the
example of s=+1 evanescent states in the conduction band. Similar
subspaces occur in all other cases. We also show that it is impossible to
find a rotation that makes all the plane wave states eigenstates of S.
In biased BLG, the swap operator commutes with neither the Hamiltonian nor
the coefficient matrix. This means the swap operator and coefficient matrix
cannot share a complete set of eigenvectors. However, non-commuting
operators may share a subset of eigenvectors. This occurs in the present
case and results in the invariant subspace.
We perform 2 steps to demonstrate the existence of the invariant subspace
and show that it is 2-dimensional. First, we transform the coefficient
matrix with the rotation operator, R, in Eq. (<ref>). Then we
express the transformed matrix in the basis formed by the eigenvectors of
the swap operator.
In the (non-dimer, dimer) representation the coefficient matrix in
Eq. (<ref>) becomes
C = (1/v_0) \begin{pmatrix} 0 & 0 & t & 0 \\ 0 & 0 & 0 & t \\ 0 & -ε_2 & 0 & 0 \\ -ε_1 & 0 & 0 & 0 \end{pmatrix},
where ε_i = E - V_i. This matrix is not swap symmetric because
V_1 ≠ V_2 in biased BLG. The lack of swap symmetry persists after the
matrix has been transformed.
The matrix of eigenvectors of the swap operator in the (non-dimer, dimer)
representation is
(1/√(2)) \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & -1 \end{pmatrix},
where the order is two s=1 vectors followed by two s=-1 vectors.
The transformed matrix, expressed in the swap eigenvector basis, is
C' = (1/v_0) \begin{pmatrix} 0 & t & 0 & 0 \\ -α & 0 & β & 0 \\ 0 & 0 & 0 & t \\ γ & 0 & α & 0 \end{pmatrix},
where
2 α = (ε_1 + ε_2) cos 2ω,
2 β = (ε_1 + ε_2) sin 2ω -
(ε_1 - ε_2),
2 γ = (ε_1 + ε_2) sin 2ω +
(ε_1 - ε_2).
Eqs. (<ref>) and (<ref>) are valid for arbitrary ω.
We now show that the transformed matrix has an invariant subspace when
ω is chosen so that the evanescent wave polarization vectors are
rotated so they become eigenvectors of the swap operator. In the case of
the s=+1 subspace in the conduction band, Eqs. (<ref>) and
(<ref>) give
α = √(ε_1 ε_2),
β = ε_2 - ε_1,
γ = 0.
As γ = 0, the lower left 2× 2 sub-matrix of the transformed
matrix vanishes, hence the space spanned by the s=+1 vectors forms an
invariant subspace of dimension 2, as stated in Section <ref>.
The eigenvectors that span this invariant subspace are of form
(u_1, u_2, 0, 0)^T and satisfy
\begin{pmatrix} 0 & t \\ -α & 0 \end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} =
-iv_0 ħλ̃ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}.
The eigenvalues are ± i√(t √(ε_1 ε_2)) and
give the known values of λ̃ in the 2-component
approximation. The remaining 2 eigenvectors of C' are propagating states
with a mixture of s=+1 symmetry and s=-1 symmetry. Replacing
ã_e with ã_p = -ã_e in Eq. (<ref>) gives a
transformation that puts the propagating states in the invariant subspace
and makes the evanescent states a mixture of symmetry types. Hence it is
impossible to find one value of ω that makes all
the states eigenstates of S, as stated in Section <ref>.
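These statements can be checked numerically. In the sketch below, ε_1, ε_2 and t are placeholder values with E - V_i > 0 (conduction band), and the transformation is taken as U^T R C R^T U with U the swap eigenvector matrix given above; this choice reproduces the form of C' quoted above, with a vanishing lower-left 2x2 block and eigenvalues ± i√(t√(ε_1 ε_2)).

import numpy as np

eps1, eps2, t, v0 = 1.0, 4.0, 3.0, 1.0       # placeholder values, conduction band

# Coefficient matrix in the (non-dimer, dimer) representation.
C = (1.0 / v0) * np.array([[0.0, 0.0, t, 0.0],
                           [0.0, 0.0, 0.0, t],
                           [0.0, -eps2, 0.0, 0.0],
                           [-eps1, 0.0, 0.0, 0.0]])

# Rotation that makes the evanescent vectors s = +1 swap eigenvectors.
a_e = np.sqrt(eps1 / eps2)
w = np.pi / 4 - np.arctan(a_e)
Q = np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])
R = np.block([[Q, np.zeros((2, 2))], [np.zeros((2, 2)), Q]])

# Swap eigenvector basis: two s = +1 columns followed by two s = -1 columns.
U = np.array([[1, 0, 1, 0],
              [1, 0, -1, 0],
              [0, 1, 0, 1],
              [0, 1, 0, -1]]) / np.sqrt(2)

Cp = U.T @ R @ C @ R.T @ U
alpha = np.sqrt(eps1 * eps2)
assert np.allclose(Cp[2:, :2], 0.0)                      # gamma = 0: invariant subspace
assert np.allclose(Cp[:2, :2], [[0.0, t], [-alpha, 0.0]])
print(np.linalg.eigvals(Cp[:2, :2]))                     # +/- i sqrt(t sqrt(eps1*eps2))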
§ TIGHT BINDING THEORY OF AK TUNNELING
We use tight binding theory to find the transmission and reflection
coefficients for Bloch waves at normal incidence on an armchair step in
unbiased BLG. AK tunneling occurs in this situation because of swap
symmetry and the transmission and reflection coefficients are almost
identical to those found with the continuum theory.
Fig. <ref> shows the step geometry. We use a rectangular unit cell
that has twice the area of the primitive cell. The atoms are arranged in
columns separated by a distance a/2, where a is the lattice
constant. There are 2 columns per cell and we take the cell origins to be
on the even-numbered columns. Each column contains 4 inequivalent
sites. The step edge is midway between columns 0 and 1. The midway position
ensures that the potential does not change abruptly at any atomic site.
The tight binding Bloch waves are a superposition of basis Bloch waves:
ϕ_𝐤 = 1/√(N)∑_s v_s ∑_𝐑
e^i 𝐤· (𝐑 + 𝐝_s)
u(𝐫 - (𝐑 + 𝐝_s)),
where N is the number of cells. The cell origins are at positions
𝐑, the position of site s in the unit cell is
𝐝_s and u is an atomic orbital. The sum over 𝐑 is a basis Bloch wave and the
numbers v_s are expansion coefficients. These coefficients are the
elements of a polarization vector, 𝐯.
Normal incidence on an armchair step corresponds to incidence in the
crystallographic x direction (Fig. <ref>). Hence the equation for
𝐯 can be obtained by putting k_y = 0 in
the 𝐤-space Hamiltonian in ref. <cit.>. This gives
{[A + (V' - E) I ] + λ_x A + λ_x^-1
A}𝐯 = 0,
where
A = \begin{pmatrix} 0 & -γ_0 & γ_4 & -γ_3 \\ -γ_0 & 0 & γ_1 & γ_4 \\ γ_4 & γ_1 & 0 & -γ_0 \\ -γ_3 & γ_4 & -γ_0 & 0 \end{pmatrix},
is the matrix of tight binding parameters and I is the 4× 4 unit
matrix. V' = diag(V, V + Δ', V + Δ', V), where V is
the potential. The Hamiltonian for k_y = 0 is swap symmetric because A
and V' are swap symmetric.
At fixed energy, Eq. (<ref>) represents a quadratic eigenvalue problem
(QEP) for λ_x ≡exp(ik_x a / 2). This is not the only way of
finding λ_x as one can instead <cit.> write Eq. (<ref>)
as a linear eigenvalue problem for (λ_x + λ_x^-1). However
the QEP formulation is better for our purposes because it leads directly to
an orthogonality condition analogous to Eq. (<ref>).
The QEP defined by Eq. (<ref>) is palindromic <cit.> and this
property guarantees that the plane waves occur in ± k_x pairs. QEPs can
normally be solved numerically with a linearization method; however, a
special linearization is needed to preserve the ± k_x pairing. We
use the linearization recommended in ref. <cit.> and write our QEP
as
[ \begin{pmatrix} A & A \\ A_0 - A & A \end{pmatrix}
+ λ_x \begin{pmatrix} A & A_0 - A \\ A & A \end{pmatrix} ]
\begin{pmatrix} λ_x 𝐯 \\ 𝐯 \end{pmatrix} = 0,
where A_0 = A + (V' - E) I.
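A small numerical sketch of this linearization is given below. The hopping parameters are representative BLG values in eV, not necessarily those used here; V = 0 (unbiased) and E is arbitrary. The generalized eigenvalue solver returns the 8 values of λ_x, which occur in reciprocal pairs as expected from the palindromic structure.

import numpy as np
from scipy.linalg import eig

# Representative BLG hopping parameters in eV (placeholders); V = 0, E arbitrary.
g0, g1, g3, g4, Dp = 3.16, 0.381, 0.38, 0.14, 0.022
V, E = 0.0, 0.05

A = np.array([[0.0, -g0, g4, -g3],
              [-g0, 0.0, g1, g4],
              [g4, g1, 0.0, -g0],
              [-g3, g4, -g0, 0.0]])
Vp = np.diag([V, V + Dp, V + Dp, V])
A0 = A + (Vp - E * np.eye(4))

M1 = np.block([[A, A], [A0 - A, A]])         # constant block of the linearization
M2 = np.block([[A, A0 - A], [A, A]])         # block multiplying lambda_x
lam, Z = eig(M1, -M2)                        # (M1 + lam*M2) z = 0, z = (lam*v, v)

# Palindromic structure: eigenvalues occur in reciprocal pairs lam <-> 1/lam,
# i.e. the +/- k_x pairing of the waves.
finite = lam[np.isfinite(lam) & (np.abs(lam) > 1e-12)]
pair_err = max(np.min(np.abs(finite - 1.0 / l)) / abs(1.0 / l) for l in finite)
print("reciprocal-pairing error:", pair_err)  # expected at rounding level
assert pair_err < 1e-6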
The solution of the non-symmetric eigenvalue problem (<ref>) gives 8
right polarization vectors, 𝐞, of form 𝐞^T = (λ_x
𝐯^T, 𝐯^T). Because of the ± k_x pairing, 4 of these
vectors are associated with the K valley and 4 with the K' valley. The
physical meaning of the 𝐞 vectors is that the first 4 components are Bloch
wave amplitudes on column 1 and the last 4 are the amplitudes on column 0.
As the eigenvalue problem is nonsymmetric, the solution also gives a set of
left polarization vectors, 𝐟^†. The 𝐞 and
𝐟^† vectors form a biorthogonal set as described in
Section <ref>.
The wave functions on the left and right sides of the step are
ψ_l = ϕ_k_1lτ_i +
∑_τ r_2τϕ_k_2lτ + r_4τϕ_k_4lτ,
ψ_r = ∑_τ t_1τϕ_k_1rτ + t_3τϕ_k_3rτ,
where the notation is similar to that in Eqs. (<ref>) and
(<ref>). However Bloch waves replace the plane waves and ψ_l
and ψ_r are formed from Bloch waves from both valleys to account for
the possibility of valley mixing. τ is the valley index and τ_i
is the valley of incidence. The system wave function is
ψ = ψ_l when x < a/4 and ψ = ψ_r when x > a/4.
Equations for the transmission and reflection coefficients are obtained
from the condition <cit.> that ψ is an eigenstate of the
tight binding Hamiltonian, H_TB, that is (H_TB - E)|ψ⟩ = 0.
This condition is satisfied when
⟨ u(𝐑_s)|(H_TB - E)|ψ⟩ = 0,
for each of the 8 atomic sites, 𝐑_s, adjacent to the step
edge. No other sites need to be considered as the in-plane coupling is
restricted to nearest neighbors. Eqs. (<ref>) give 8 linear
equations for the 4 unknown transmission coefficients and the 4 unknown
reflection coefficients.
Eqs. (<ref>) are linear in the amplitudes of ψ_l and ψ_r
at the site 𝐑_s. The site amplitude of a Bloch wave at site s
in column n is v_s exp(i k_x na / 2), as can be seen from
Eq. (<ref>). After some tedious manipulations involving these site
amplitudes, it can be shown that Eqs. (<ref>) are equivalent to the
simpler condition that the site amplitudes in ψ_l and ψ_r are equal on
column 0 and equal on column 1 <cit.>. This condition can be written
as the vector equation
𝐞_1lτ_i +
∑_τ r_2τ𝐞_2lτ + r_4τ𝐞_4lτ =
∑_τ t_1τ𝐞_1rτ + t_3τ𝐞_3rτ,
where the vectors 𝐞 are the 8-component polarization vectors
found by solving Eq. (<ref>). Eq. (<ref>) is the tight
binding analog of Eq. (<ref>). We solve it with the
biorthogonality method we used to solve Eq. (<ref>).
By following the same steps that led to Eq. (<ref>), we find that
t_1τ vanishes in both valleys when
𝐟^†_3lK·𝐞_3rK =
𝐟^†_3lK'·𝐞_3rK' =
𝐟^†_3lK·𝐞_3rK' =
𝐟^†_3lK'·𝐞_3rK =
0.
These scalar products vanish because of swap symmetry as in the continuum
approach. The swap eigenvalues of the Bloch states are identical in both
valleys because the matrix A in Eq. (<ref>) is
𝐤-independent. Hence the swap classification of the propagating
and evanescent Bloch waves is the same as the plane wave swap
classification found in Section <ref>. The 8-component
𝐟^† and 𝐞 vectors have the same swap
eigenvalues as the Bloch waves because the matrices in Eq. (<ref>)
are invariant under the 8-component swap operator
S_8 = \begin{pmatrix} S & 0 \\ 0 & S \end{pmatrix}.
S_8^2 = I_8, the 8 × 8 unit matrix. Hence for any pair of
𝐟^† and 𝐞 vectors with opposite swap
eigenvalues, 𝐟^†·𝐞 =
𝐟^† S_8^2 ·𝐞 =
-𝐟^†·𝐞. Therefore all the scalar products in
Eq. (<ref>) vanish in the energy range where the incident
state is in the conduction band and the transmitted state is in the
valence band or vice versa. Thus AK tunneling occurs in the same energy
range as found in the continuum approach (Section <ref>).
Fig. <ref> (left) shows the excellent agreement between
transmission coefficients computed with the continuum and tight binding
approaches. The difference between the transmission coefficients is at most
≃ 6× 10^-4 at E ≃ 135 meV. Fig. <ref>
(right) shows that the valley mixing is very small. The valley-flip
transmission and reflection coefficients are typically between 3 and 5
orders of magnitude smaller than the valley-preserving coefficients.
Similar small valley mixing was reported in earlier work on barrier
transmission away from the anti-Klein condition <cit.>.
99
Katsnelson06 M. I. Katsnelson, K. S. Novoselov and A. K. Geim,
Nat. Phys. 2, 620 (2006).
McCann13 E. McCann and M. Koshino, Rep. Prog. Phys. 76,
056503 (2013).
Park11 S. Park and H.-S. Sim, Phys. Rev. B 84, 235432 (2011).
Gu11 N. Gu, M. Rudner and L. Levitov, Phys. Rev. Lett. 107, 156603 (2011).
Park12 C. Park, Solid State Commun. 152, 2018 (2012).
footnote1 One exception is the case of normal incidence on an
armchair step in unbiased BLG.
Maksym21 P. A. Maksym and H. Aoki, Phys. Rev. B 104,
155401 (2021).
Smith86 D. L. Smith and C. Mailhiot, Phys. Rev. B 33, 8345 (1986).
Chen16 Feng-Wu Chen, Mei-Yin Chou, Yiing-Rei Chen and Yu-Shu Wu,
Phys. Rev. B 94, 075407 (2016).
Partoens06 B. Partoens and F. M. Peeters, Phys. Rev. B
74, 075404 (2006).
McCann07 E. McCann, D. S. L. Abergel and V. I. Fal'ko, Eur. Phys. J. Special Topics 148, 91 (2007).
Pereira09 J. M. Pereira Jr, F. M. Peeters, R. N. Costa Filho and G. A. Farias, J. Phys.: Condens. Matter 21, 045301 (2009).
Varlet14 A. Varlet, M. H. Liu, V. Krueckl, D. Bischoff, P. Simonet,
K. Watanabe, T. Taniguchi, K. Richter, K. Ensslin and T. Ihn,
Phys. Rev. Lett. 113, 116601 (2014).
Cobaleda14 C. Cobaleda, S. Pezzini, E. Diez and V. Bellani, Phys. Rev. B 89, 121404(R) (2014).
Nam17 Y. Nam, D. Ki, D. Soler-Delgado and A. F. Morpurgo, Nat. Phys. 13, 1207 (2017).
Oka19 T. Oka, S. Tajima, R. Ebisuoka, T. Hirahara, K. Watanabe, T. Taniguchi and R. Yagi, Phys. Rev. B 99, 035440 (2019).
Barnard17 A. W. Barnard, A. Hughes, A. L. Sharpe, K. Watanabe,
T. Taniguchi and D. Goldhaber-Gordon, Nat. Commun. 8, 15418
(2017).
Maksym21a See Fig. 4 of ref. <cit.>.
Mackey06 D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann,
SIAM J. Matrix Anal. Appl. 24, 165 (2006).
Osbourn79 G. C. Osbourn and D. L. Smith, Phys. Rev. B
19, 2124 (1979).
sitenote Eqs. (<ref>) can be rearranged into a homogeneous
system of linear equations for the differences of the site amplitudes in
ψ_l and ψ_r. These equations have only the trivial solution
unless γ_0 = γ_4, which is not the case.
Accelerating 128-bit Floating-Point Matrix Multiplication on FPGAs
Fumiya Kono14, Naohito Nakasato2, Maho Nakata3
1 Shizuoka Institute of Science and Technology, Fukuroi, Shizuoka, JAPAN
2 The University of Aizu, Aizuwakamatsu, Fukushima, JAPAN
3 Cluster for Pioneering Research, RIKEN, Wako, Saitama, JAPAN
4 [email protected]
July 31, 2023
=====================================================================================================================================================================================================================================================================================
General Matrix Multiplication (GEMM) is a fundamental operation widely used in scientific computations. Its performance and accuracy significantly impact the performance and accuracy of applications that depend on it. One such application is semidefinite programming (SDP), and it often requires binary128 or higher precision arithmetic to solve problems involving SDP stably. However, only some processors support binary128 arithmetic, which makes SDP solvers generally slow. In this study, we focused on accelerating GEMM with binary128 arithmetic on field-programmable gate arrays (FPGAs) to enable the flexible design of accelerators for the desired computations. Our binary128 GEMM designs on a recent high-performance FPGA achieved approximately 90GFlops, 147x faster than the computation executed on a recent CPU with 20 threads for large matrices. Using our binary128 GEMM design on the FPGA, we successfully accelerated two numerical applications: LU decomposition and SDP problems, for the first time.
Matrix Multiplication, binary128, Systolic Arrays, Intel FPGA SDK for OpenCL, Performance Benchmarking, LU Decomposition, Semidefinite Programming
§ INTRODUCTION
General Matrix Multiplication (GEMM) is a crucial computation in various scientific and engineering algorithms. Its precision plays a significant role in determining the accuracy of the target applications. Different applications have different precision requirements for the number of bits used to represent floating-point (FP) numbers. As defined by the IEEE 754 standard <cit.>, FP formats and arithmetic are available in various precisions, including binary16 (also known as half-precision), binary32 (single-precision), binary64 (double-precision), and binary128 (quadruple-precision). The suffix in each format indicates the number of FP bits supported by the respective format, with higher numbers indicating higher precision.
In machine learning (ML) using artificial neural networks, it has been shown that binary16 is sufficient for storing the weights of these networks. This has led to the development of hardware architectures that support highly parallel computation using binary16 arithmetic. One example is the TensorCore on recent NVIDIA graphics processing units (GPUs), which is designed for lower-precision matrix multiplication and performs multiplication and accumulation in binary16 and binary32 arithmetic, respectively. Other ML accelerators, such as Google's TPUv3 <cit.>, also support bfloat16, an extended half-precision FP format.
On the other hand, operations with higher precision, such as binary128, are also required by specific applications. One example is Semidefinite Programming (SDP), a natural extension of linear programming that aims to minimize linear functions subject to certain constraints. In SDP, it is common to solve given problems using Primal-Dual Interior-Point Methods (PDIPM) <cit.>. However, with these methods, SDP is numerically unstable near the optimal solution because the variable matrices become singular <cit.>. Therefore, Nakata <cit.> proposed using higher-precision numbers to solve optimization problems with SDP while maintaining the desired numerical accuracy.
However, since only a few processors, such as the IBM z13 processor <cit.>, support binary128 in hardware, applications relying on binary128 arithmetic typically run 100 to 1000x slower than those relying only on binary64. Therefore, accelerating binary128 arithmetic is crucial for accelerating SDP.
In this research, we implemented GEMM in binary128 arithmetic on Field Programmable Gate Arrays (FPGAs). The advantage of targeting FPGAs is their flexibility in optimizing accelerators for target computations. Additionally, while GPUs are designed with many parallel processors and fast memories, FPGAs are simply an array of logic gates that allow us to reconfigure designs and how they work during computation. This characteristic of FPGA enables us to create a suitable design for specific calculations while minimizing the use of hardware resources. As a result, energy consumption during computation on FPGAs is typically much lower than on GPUs.
Nagasu <cit.> compared the energy consumption of FPGA and GPU computations for the same tsunami modeling application and demonstrated the effectiveness of FPGAs. They showed that their implementation on the Arria10 FPGA consumed approximately 5x less energy than the initial implementation on an AMD Radeon GPU.
Implementing logic designs on FPGAs is typically more challenging than parallel programming on GPUs because logic designs must be written in Hardware Description Language (HDL). To alleviate this difficulty, we adopt Intel's OpenCL-based high-level synthesis (HLS) techniques for designs in this research.
To design high-performance GEMM operations on FPGAs, it is essential to utilize pipeline parallelism and create a systolic array <cit.>. Matteis <cit.> developed a numerical library for Intel FPGAs inspired by the open-source implementation of the Basic Linear Algebra Subprograms (BLAS); it also provides a systolic array version of its GEMM implementation.
In this research, we extended this library to support various FP precisions.
The OpenCL standard does not support binary128 or higher-precision arithmetic; it only defines arithmetic operations in binary32 and binary64 <cit.>. While a recent version of the OpenCL SDK for Intel FPGAs supports additional FP precisions, its main targets are binary16 and bfloat16 for machine learning.
In this research, we adopted customized FP units developed by Nakasato <cit.> that support various FP formats, including binary128. This paper focuses on developing and evaluating binary128 FP addition and multiplication units and on the acceleration of binary128 GEMM operations.
The main contributions of this research are as follows:
* We implemented fast GEMM designs in the binary128 format on FPGAs.
* We developed an application interface compatible with the standard BLAS library.
* We evaluated the performance of designs with practical applications.
While this research builds upon the preceding work, we successfully integrated our designs into MPLAPACK <cit.>, an extension of all BLAS and LAPACK (Linear Algebra PACKage) routines that supports multi-precision FP operations, including binary128. Therefore, the designs can be used immediately in numerical applications that utilize MPLAPACK as a backend.
Our binary128 GEMM design implemented on the Terasic DE10a-Net Agilex FPGA achieved 90.9 GFlops by utilizing the maximum hardware resources. Furthermore, its integration into the practical applications of blocked LU decomposition and SDP yielded speed-ups of up to 5.3x and 2x, respectively, compared with computation on a recent Intel i9-10900 CPU parallelized with 20 OpenMP threads.
This paper first presents a brief specification of the designs. Then, to inspect their fundamental characteristics, we first evaluate their performance on the Terasic DE5a-Net Arria10 FPGA. Based on this analysis, we focus on more practical benchmarking using the Nallatech (BittWare) 520N Stratix10 FPGA, which is installed in a supercomputer system in operation, and the Agilex FPGA, the latest high-end Intel FPGA. Finally, we discuss applications of the design by integrating it into blocked LU decomposition and SDP problems.
§ RELATED WORKS
The study of GEMM in high-precision arithmetic is a popular topic in multiple-precision research, but previous studies have mainly focused on CPU or GPU implementations.
Nakasato <cit.> accelerated the GEMM routines for binary32, binary64, and 128-bit double-double (DD) <cit.> precision on the AMD Cypress GPU.
Also, Nakata <cit.> presented a fast GEMM implementation in DD precision on NVIDIA GPUs. In the paper, they have applied their GEMM implementation in DD precision to the algorithm in SDP.
Kouya <cit.> implemented LU decomposition supporting multi-precision floating-point numbers such as DD, triple-double (TD), and quad-double (QD). With AVX vectorization, the implementation successfully accelerated the LU decomposition for Intel and AMD CPUs.
Joldes <cit.> developed CAMPARY, a multi-precision arithmetic library for NVIDIA GPUs based on the CUDA programming model, which supports DD, TD, and QD precision.
Isupov and Knyazkov have been working on MPRES-BLAS for NVIDIA GPUs <cit.>, which is an interval evaluation for the fractional representation of numbers in the Residue Number System (RNS) <cit.> to represent arbitrary precision numbers. MPRES-BLAS was the fastest among CAMPARY and CUMP <cit.> GEMM implementation for 424-bit precision.
Mukunoki <cit.> also proposed a fast GEMM implementation in binary128 or lower precision based on the Ozaki scheme <cit.>, an accurate GEMM algorithm that represents FP numbers as non-overlapping sums of FP numbers.
They evaluated their method on CPUs and discussed prospects for extending it to GPUs.
However, research on GEMM in high-precision arithmetic on FPGAs remains scarce. Licht <cit.> targeted Xilinx FPGAs to implement GEMM using systolic array designs. Afterward, they extended their GEMM to support FP precisions of up to 1024 bits <cit.> based on the Multiple Precision Floating-Point Reliable (MPFR) <cit.> library.
Although their motivation lies in the acceleration of an SDP solver,
a practical evaluation of their designs has not yet been reported.
§ MATRIX MULTIPLICATION FOR FPGA
§.§ Implementation
The GEMM routine in BLAS performs matrix multiplication for matrices A and B as follows:
C = α A B + β C,
where α and β are scalar parameters.
Listing <ref> presents the API in C language to the GEMM routine for multi-precision FP numbers called Rgemm provided by MPLAPACK <cit.>. Note that _Float128 is the standard data type in C language for binary128, as defined in ISO/IEC TS 18661-3:2015 <cit.>. MPLAPACK utilizes _Float128 through the GNU C++ compiler via GNU extensions.
The first two arguments specify the transpose operation of matrices A and B. The three arguments lda, ldb, and ldc represent the leading dimensions of matrices A, B, and C, respectively.
In the practical implementation of the GEMM routine, calculating the matrix multiplication AB is a critical part of its computation. Assume that we have two matrices A and B with sizes m × k and k × n, respectively. Then, an element of the resulting matrix C' = AB is computed by the summation as follows:
C'_ij = ∑_p=0^k-1 A_ip× B_pj,
where i,j, and p are indices ranging 0 ≤ i < m, 0 ≤ j < n, and 0 ≤ p < k, respectively. The calculation of the whole matrix C' involves a 3-level nested loop.
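For reference, the 3-level nested loop reads as follows in a plain Python sketch (binary64 here, purely behavioral; the FPGA design performs the same accumulation in binary128 inside OpenCL kernels):

import numpy as np

# Reference triple loop for C' = A B.
def gemm_ref(A, B):
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):               # the accumulation mapped onto one PE
                acc += A[i, p] * B[p, j]
            C[i, j] = acc
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(gemm_ref(A, B), A @ B)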
Fig. <ref> illustrates the systolic array design derived from <cit.>. This design is characterized by a 2-D array of processing elements (PEs) arranged as P_C × P_R.
Each PE calculates Eq. (<ref>) for assigned sub-matrices of A and B.
The size of the sub-matrices A and B and the value of P_C × P_R determine how the input matrices are partitioned.
In the computation flow, the input matrices A and B are read from main memory via the Read module and sent to the PEs through the Feed module. A is sent by column, and B is sent by row, assuming that both matrices are not transposed. They are first received by the PEs with IDs (P_R-1, 0) or (0, P_C-1) and forwarded to the adjacent PEs in the systolic array on each clock cycle. Each PE accumulates the partial results for the same element of C' and sends them to the Drain module; they are eventually collected by the Store module and written back to main memory.
More specifically, the design is produced by a generator of OpenCL kernels for the systolic array. The generated systolic array consists of four OpenCL kernels: two kernels that combine the Read and Feed modules for A and B, one Store kernel for C, and a main kernel for the array of PEs and the Drain module.
The main kernel explicitly calls a function for one PE in a loop. By fully unrolling the loop, the main kernel defines the systolic array. Because the computation task of a PE is just a multiply-accumulate operation, we can replace that operation in the original design with a unit for any desired FP format. This enables us to create a systolic array design corresponding to the designated precision.
In addition to replacing the arithmetic operation, we modify and extend the other three kernels for the Read, Feed, and Store modules to support a wider memory bus for binary128 arithmetic. We also extend the original kernels to optimize load and store operations from DRAM. The Read and Feed kernels are equipped with a memory buffer in front of the Feed module. In the original design, the memory buffer is called a memory tile and is explicitly instantiated as a 1-D array. The memory tile acts as a cache memory that stores a sub-matrix of A and reuses it many times.
Exploiting the memory tile reduces the pressure on the DRAM memory bandwidth and improves the performance of the designs, as shown in a later section.
The number of PEs in the present systolic array is P_R × P_C, so we instantiate P_R × P_C binary128 arithmetic units. The additional computations in the definition of the GEMM, as shown in Eq. (<ref>),
require two scalar-matrix multiplications and one matrix addition, which are very costly in a GEMM design on an FPGA. In the present systolic array, we would need additional P_C multiply units for α A, a load unit for C, P_C multiply units for β C, and P_C add units for the summation of α A and β C. Except for the multiply units for α A, which can be merged with the Feed module, the other units are only activated in the final stage of the GEMM operation at the Store module. Therefore, in this research, we only calculate Eq. (<ref>) on an FPGA, while the host CPU handles the transpose operations and the other additional operations involving α and β. Supporting those additional operations, we develop an API that is compatible with the standard Rgemm provided by MPLAPACK.
This enables us to use the designs immediately in numerical applications with minimal changes.
§.§ Performance Models
Here, we summarize the performance models for the design. In this section, f represents the clock frequency of the logic circuit in MHz.
§.§.§ Performance of GEMM
The peak performance of the designs depends on the layout of systolic arrays, as shown in Fig. <ref>. When we use P_R × P_C PEs, the peak performance F_ peak (GFlops) is given by Eq.(<ref>).
F_ peak = 2 × P_R × P_C × f × 10^6 / 10^9
The measured performance F_ perf of the designs in GFlops is calculated by Eq. (<ref>), where T_ exec is the execution time in seconds.
F_ perf = 2mnk / (T_ exec× 10^9)
In Eq. (<ref>), m, n and k denote the matrix size parameters. For the multiplication of n × n square matrices, the number of FP operations is 2n^3.
§.§.§ Memory Bandwidth Requirement
The performance of the designs is also affected by memory bandwidth of an FPGA board. P_R × P_C systolic array takes P_R + P_C inputs conveyed by two vertical and horizontal Feed pipelines at every cycle. Thus, the required memory bandwidth B_ req (GB/s) is given by Eq. (<ref>).
B_ req = (P_R + P_C) × f × 10^6 × N_ Byte / 10^9
N_ Byte represents the word size established as 16 bytes in the present work. If the systolic array consists of 8 × 8 PEs, B_ req
equals 256f × 10^-3 GB/s. For example, the requirement B_ req becomes 51.2GB/s for the design where the clock frequency f is 200MHz. To fully utilize all PEs in the designs, B_ req must be smaller than the memory bandwidth of a target FPGA board.
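Plugging the example figures from the text into these models gives the numbers quoted above; the helper functions below are illustrative only, with f in MHz and N_Byte = 16 for binary128.

def f_peak_gflops(PR, PC, f_mhz):
    return 2 * PR * PC * f_mhz * 1e6 / 1e9

def b_req_gbs(PR, PC, f_mhz, n_byte=16):
    return (PR + PC) * f_mhz * 1e6 * n_byte / 1e9

print(f_peak_gflops(8, 8, 200))              # 25.6 GFlops
print(b_req_gbs(8, 8, 200))                  # 51.2 GB/s, as quoted above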
§ PERFORMANCE EVALUATION
This section presents the performance evaluation of designs on three FPGA systems.
§.§ Benchmarking Conditions
§.§.§ Target FPGA Systems
Table <ref> shows the specifications of the FPGAs used in this benchmarking: Terasic DE5a-Net Arria10, Nallatech (BittWare) 520N Stratix10, and Terasic DE10a-Net Agilex. The Stratix10 FPGA is a computation node of Cygnus, a supercomputer system in operation at the University of Tsukuba in Japan since 2019. We use the Intel FPGA SDK for OpenCL to design and implement the designs. Each FPGA is hosted by a different host system, as specified in the bottom rows of Table <ref>.
§.§.§ Evaluation Method
We first evaluate the designs for square matrices by scaling n.
We also evaluate the performance of multiplying non-square matrices with sizes m × k and k × n as a more realistic and practical evaluation.
To calculate the performance in GFlops, Eqs. (<ref>) and (<ref>) are used. The computation time T_ exec in Eq. (<ref>) is the average of three trials in each benchmark. As a comparison target, we use a baseline of the Rgemm routine executed on the host system of the Agilex board (i9-10900 CPU) with 20 OpenMP threads.
In addition, we compare the numerical accuracy with the Rgemm routine provided by MPLAPACK on a CPU. As shown in Eq. (<ref>), we calculate the average L1 norm of the difference between two n × n matrices, E_ L1, throughout the evaluation.
E_ L1 = ∑_i=0^n-1∑_j=0^n-1| C^F_ij - C^R_ij| / n^2,
In Eq. (<ref>), C^F and C^R denote the result matrices by our implementation for FPGAs and Rgemm, respectively.
E_ L1 allows us to determine how accurately the designs match the results of the reference implementation.
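A direct transcription of this metric reads as follows (illustrative Python in binary64; the evaluation itself compares binary128 results, and the perturbation below merely stands in for rounding differences):

import numpy as np

# Average L1 error between an FPGA result C_F and the CPU reference C_R.
def e_l1(C_F, C_R):
    n = C_F.shape[0]
    return np.sum(np.abs(C_F - C_R)) / n**2

C_R = np.random.rand(64, 64)
C_F = C_R + 1e-12 * np.random.rand(64, 64)   # toy perturbation standing in for rounding
print(e_l1(C_F, C_R))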
To highlight the main characteristics of computational performance, we begin by evaluating the designs on the Arria10 FPGA in this section. The following section covers the performance evaluation of the designs on newer FPGAs, including Stratix10 and Agilex.
§.§ Benchmarking Results on Arria10
§.§.§ Evaluation for Square Matrices
We present benchmarking results for the designs. The systolic array consists of PEs arranged in a square with P_R=P_C=2, 4, and 8. Table <ref> shows the logic synthesis results on the Arria10 FPGA system.
Our binary128 GEMM design requires more DSP blocks for larger PE arrays.
Therefore, the number of available DSP blocks is the primary constraint for the design. The row labeled Fmax shows the clock frequency of each design, and the corresponding peak performance F_ peak, based on Eq. (<ref>), is shown in the last row.
Fig. <ref> shows the performance of each design on Arria10. The matrix size n ranges from 64 to 4096. The measured performance F_ perf of the designs with 2× 2, 4× 4, and 8× 8 PEs reaches a maximum of 1.88, 7.1, and 15.0 GFlops, respectively. Since each PE can work independently for data streaming and operations on the systolic array, the performance is proportional to the number of PEs in the design.
However, with a small n, the computation load for each PE is not sufficiently high to reach the maximum performance of the designs. It reaches the peak at a specific n, such as n=2048 for 8× 8 PEs, and the performance scaling becomes flat at larger n.
We then evaluate the numerical error between the computation results of the designs and the Rgemm routine based on Eq. (<ref>). E_ L1 for n<512 is distributed between 10^-31 and 10^-30. As we set n to 4096, E_ L1 increases to 2.0× 10^-28. The layout of PEs does not make a significant difference in E_ L1.
Regarding the comparison between F_ perf and F_ peak, the ratio for the designs of 2× 2, 4× 4, and 8× 8 PEs is 99.5%, 97.3%, and 58.2%, respectively.
Recall that the memory bandwidth requirement B_ req is given by Eq. (<ref>). Substituting the Fmax of each design in Fig. <ref> for f in Eq. (<ref>), we find B_ req to be 15.1 GB/s, 29.2 GB/s, and 51.5 GB/s for 2× 2, 4× 4, and 8× 8, respectively.
Our Arria10 system has two DDR3 memories that provide 34.2GB/s of the total bandwidth.
It is sufficient for the designs of 2× 2 and 4× 4 PEs. As a result, their F_ perf is close to the peak. However, the design of 8× 8 PEs requires 51.5 GB/s, which is 1.5x larger than the available bandwidth.
Therefore, the design of 8× 8 PEs is limited by memory transfer from DRAM.
As a result, we see that the ratio between F_ perf and F_ peak is much lower than that of other designs of fewer PEs.
§.§.§ Effects of Memory Buffer for The Systolic Array
To enhance performance, we instantiate more PEs in the design. However, the memory bandwidth of the FPGA board poses a limitation. Therefore, the generated systolic array has a module called the memory tile in front of the Feed module. It is a local memory buffer working as a cache memory for each PE to mitigate the memory bandwidth requirements given in Eq. (<ref>). As the systolic array incorporates a larger number of PEs, increasing the size of M_ Tile is necessary to provide a larger buffer in the designs.
The results presented in Sec. <ref> were all obtained by the designs with M_ Tile=32. We then conduct additional benchmarking to further investigate the potential performance improvement by adopting a larger value of M_ Tile.
Fig. <ref> illustrates the performance of the GEMM by using the designs of 4× 4 and 8× 8 PEs where M_ Tile ranges from 24 to 256.
The figure shows the performance of each design for four matrices where (k,n)= (4096,512), (4096,2048), (2048,2048), (4096,4096) assuming m=k. Computations using the design of 4× 4 PEs are not affected by the change of M_ Tile since their B_ req (30.25GB/s) is within the board memory bandwidth (34.2GB/s).
On the other hand, we see that using a larger M_ Tile≥ 64 improves the performance of the 8× 8 PEs. In those cases, the performance increases by 1.5 to 2x compared to the design with M_ Tile=32 and reaches its peak at M_ Tile=128. In contrast, the smaller M_ Tile≤ 24 causes even lower performance. For the square matrix with n=4096, we achieved 21.6GFlops at M_ Tile=128, 84% of F_ peak in Table <ref>. We also see that this M_ Tile scaling is effective in multiplying tall-skinny matrices where n is relatively much smaller than k. The larger M_ Tile reduces a bottleneck of the current implementation to some extent.
§.§.§ Evaluation for Non-square matrix
In the computation of square matrices, we found that the performance of the designs was nearly ideal, except for the memory bandwidth constraint caused by the large PE layout. We then evaluate the performance for non-square matrices. Fig. <ref> shows the results obtained by multiplications of m × k and k × n matrices where m and k are fixed at m=k=4096 and only n is varied between 32 and 4096. In this evaluation, we set M_ Tile=128 in all designs.
In the case of multiplication with rectangular matrices, the current systolic array design is ineffective due to load imbalance among PEs.
However, when the layout of PEs is small, such as 2× 2 PEs, the performance does not drop even for multiplication with 4096 × 128 compared to 4096 × 4096.
However, the multiplication on the design of 8× 8 PEs clearly shows a performance degradation for any n. In particular, for the computations of tall-skinny matrices where n is much smaller than k, the design of 8× 8 PEs performs far from its maximum capacity. The performance is as low as that of 2× 2 PEs. When we similarly fix m and n to m=n=4096 and scale k between 32 to 4096, the computation of each design shows the same result as in Fig. <ref>.
§.§ Benchmarking Results on Stratix10 and Agilex
We then evaluate the designs on Stratix10 and Agilex FPGAs under the same benchmarking conditions.
Based on the previous evaluation of Arria10, the designs targeted in this section are 8× 8 PEs with M_ Tile=128. Additionally, we implemented a design of 8× 16 PEs with M_ Tile=256 and 512 to utilize the abundant hardware resources on Stratix10 and Agilex.
However, their resources are still insufficient to implement 16× 16 PEs due to the limited number of available logic cells.
Table <ref> summarizes the logic synthesis results of our designs implemented on each FPGA.
As we increase the size of the memory buffer on each PE by scaling M_ Tile,
the utilization of memory bits and RAM blocks on the FPGAs accordingly increases.
However, this does not cause problems on the Stratix10 and Agilex FPGA systems when we set M_ Tile=512 for 8× 16 PEs. As a result, Fmax and F_ peak for designs on Stratix10 and Agilex are much higher than those on Arria10.
Fig. <ref> shows the performance of the designs on the two FPGAs. On the FPGA systems of Stratix10 and Agilex,
we could execute GEMM with sizes up to n=24576 thanks to their large board memory. For comparison, we plot the performance of the host CPU (i9-10900) of the Agilex FPGA system.
We first focus on the results for Stratix10. The design of 8× 8 PEs with M_ Tile=128 almost reached its peak performance at n=4096; for larger n, the performance levels off at 32.8 GFlops, 99% of the peak. The design of 8× 16 PEs with M_ Tile=256 similarly reached a peak of 45.0 GFlops at around n=12000. However, compared to the design of 8× 8 PEs, its performance improvement is modest because the Fmax of the 8× 16 design dropped significantly, leading to a lower F_ peak.
As we examine the performance of the designs on Agilex, the optimization of PE layout and M_ Tile
successfully contributed to performance improvement.
While the design of 8× 8 PEs with M_ Tile=128 certainly performs effectively, that of 8× 16 PEs with M_ Tile=512 is much better. The computation by the 8× 16 PEs achieved 90.9GFlops, 91% of the peak, for the largest matrix size of n=24576
in contrast to one by the 8× 8 PEs yielding 50.4GFlops at n=18000, about 96% of its peak.
The importance of the size of M_ Tile can be easily understood by comparing it
with a reference plot for the design of 8× 16 PEs with M_ Tile=128 on Agilex.
If we set M_ Tile=128, the performance of the design is at most 77GFlops, which is only 77% of the peak. In particular, a trench in the plot at n=16384 results in a significant performance drop to 54.1GFlops around that point.
One reason may be that those specific large matrices accidentally cause accesses that stride over different memory banks on four independent DIMMs on the Agilex FPGA board. However, the memory buffer exploited by the larger M_ Tile (e.g. 512) helps to alleviate problems related to unexpected memory access patterns and facilitates steady performance improvement.
Finally, the design achieves very high performance compared to the Rgemm routine executed on the CPU with 20 threads, whose performance settles at 650 MFlops for n>1024. Therefore, we have a significant advantage in processing large matrices: the design of 8× 16 PEs with M_ Tile=512 on Agilex is 145x faster than the computation on a recent CPU with the maximum number of threads.
In addition, we show the performance of the designs for non-square matrices on the Stratix10 and Agilex FPGAs. Fig. <ref> shows the benchmarking result when m and n are fixed to m=n=16384 and k is scaled between 32 and 16384. As in the benchmarking on Arria10, the performance drop for ratios of n:k<2:1 is not significant. However, for tall-skinny matrices where k is particularly small, such as k≤ 128, even the performance on Agilex is just a few GFlops. As a result, the advantage of the designs over computation on CPUs is lost.
§ APPLICATION OF BINARY128 MATRIX MULTIPLICATION
Once we have the GEMM designs based on the systolic array architecture, we can accelerate practical applications that require binary128 GEMM operations. Here we describe two applications of our implementation together with their performance evaluation. In this section, ℝ^n× n denotes n× n real matrices.
§.§ Blocked LU Decomposition
§.§.§ Problem Specification of LU Decomposition
The LU Decomposition is a fundamental operation in numerical analysis that factorizes the given square matrix A as a multiplication of lower and upper triangular matrices like A = LU where L and U are lower and upper triangular matrices, respectively.
Based on BLAS routines, the LU decomposition in binary64 precision is implemented as a routine called dgetrf in LAPACK.
The dgetrf routine adopts a blocked LU decomposition algorithm that has been thoroughly investigated and implemented on virtually every supercomputer over the last four decades. A variant of it underlies the well-known parallel benchmark program LINPACK.
The blocked LU decomposition algorithm effectively solves dense linear equations on accelerator architectures like GPU since its computation is mainly processed as GEMM operations.
Let us consider the LU decomposition for a matrix A ∈ℝ^n× n with the block size b,
as shown in Fig. <ref>. Then, we obtain L and U on A by repeating the following procedure recursively.
* Divide A into 4 sub-matrices: A_11∈ℝ^b× b, A_12∈ℝ^b× (n-b), A_21∈ℝ^(n-b)× b, and A_22∈ℝ^(n-b)× (n-b).
* Perform decomposition A_11 = L_11U_11.
* Solve U_12 that satisfies L_11U_12 = A_12.
* Solve L_21 that satisfies L_21U_11 = A_21.
* Update A_22 by A_22 = A_22 - L_21U_12.
* If n-b>0 still holds, go back to step 1 after substituting A with A_22.
In step 5, we have matrix multiplication L_21U_12. When b = 1, the blocked LU decomposition is reduced to a non-blocked routine called dgetrf2 in LAPACK. When b is large enough, the computation of dgetrf is dominated by GEMM operations in step 5. Accordingly, it can be accelerated by GEMM routines on GPUs or FPGAs.
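A compact Python sketch of steps 1-6 is shown below. It omits the partial pivoting performed by Rgetrf and uses binary64 with a diagonally dominant test matrix purely for illustration; step 5 is the GEMM update that is offloaded to the FPGA.

import numpy as np

def lu_blocked(A, b=64):
    A = A.copy()
    n = A.shape[0]
    for s in range(0, n, b):
        e = min(s + b, n)
        # Step 2: unblocked LU of the diagonal block A11.
        for j in range(s, e):
            A[j + 1:e, j] /= A[j, j]
            A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
        if e < n:
            L11 = np.tril(A[s:e, s:e], -1) + np.eye(e - s)
            U11 = np.triu(A[s:e, s:e])
            # Steps 3-4: triangular solves for U12 and L21.
            A[s:e, e:] = np.linalg.solve(L11, A[s:e, e:])
            A[e:, s:e] = np.linalg.solve(U11.T, A[e:, s:e].T).T
            # Step 5: GEMM update A22 -= L21 U12, the part offloaded to the FPGA.
            A[e:, e:] -= A[e:, s:e] @ A[s:e, e:]
    return A                                 # L and U packed in one matrix

n = 256
M = np.random.rand(n, n) + n * np.eye(n)     # diagonally dominant, safe without pivoting
LU = lu_blocked(M, b=32)
L = np.tril(LU, -1) + np.eye(n)
U = np.triu(LU)
print(np.max(np.abs(L @ U - M)))             # small residual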
In MPLAPACK <cit.>, all BLAS and LAPACK routines are extended to support multi-precision FP operations, including binary128. We modify an extended version of dgetrf in MPLAPACK called Rgetrf, which calls the Rgemm routine. In this paper, we replace calls to Rgemm
with operations executed on FPGAs.
The number of FP operations in the LU decomposition algorithm is 2n^3/3 - n^2/2 + 5n/6 <cit.>. Here, we regard it as 2n^3/3. Therefore, F'_ perf as shown in Eq. (<ref>) gives the computation performance for the following evaluation.
F'_ perf = 2n^3 / (3 × T_ exec× 10^9)
§.§.§ Evaluation of GEMM for LU Decomposition
We assume input n × n matrices whose elements are given by random numbers in the range [0.0, 1.0). Such matrices can be factorized by the LU decomposition. We decompose the square matrices by applying the designs in the algorithm.
Based on the evaluation in the previous section, we measure the performance of the blocked LU decomposition with the design of 8 × 16 PEs on the Agilex FPGA. We scale the matrix size n and apply different block sizes b to find the optimal b. As a comparison, we present a result for the design of 8 × 16 PEs on Stratix10 with b = 128. We also give another comparison with a result obtained using only the host CPU (Intel Core i9-10900). In that computation, the Rgetrf routine in MPLAPACK performs the LU decomposition with 20 threads of OpenMP parallelization.
Fig. <ref> summarizes our results of the LU decomposition. For Agilex FPGA,
we present the performance in each case of b=108, 128, 144.
The black line shows the performance scaling obtained by the computation on the CPU.
We observe that b = 108 yields the best performance on the Agilex FPGA as represented by 2.5GFlops at n=20000.
However, with a large matrix of n = 24576, a higher b yields the peak. We can see in the figure that the highest performance is 2.6GFlops obtained with b=144 for the matrix of n=24576. On the other hand, the performance deteriorates when we apply even larger values of b such as b=192 and 256, yielding 2.3GFlops and 2.1GFlops, respectively. Similarly, the design on the Stratix10 FPGA is superior to the CPU computation for n>3000. Although it is slower than the computation on the Agilex FPGA, it finally reaches 2.2GFlops at n=20000, which is 4.7x faster than that of the CPU.
Since the performance on FPGAs improves slowly by scaling n until computation data saturate every PE, the performance on the CPU for small n is superior to that of FPGAs. When the matrix size n = 512, the smallest size in this evaluation, the performance on the CPU is 278MFlops which is 2 to 3x faster than that of FPGAs.
We see that the intersection of the performance scaling between the CPU and FPGAs is around n = 1536. The performance of the CPU execution does not improve for n > 2000, which is 458MFlops at n=24576. In contrast, the performance of the LU decomposition by using designs on Agilex FPGA is at a maximum of 5.3x faster than that of the CPU.
We compare the decomposed matrices L and U calculated by the designs on FPGAs with the reference result calculated by the CPU using Eq. (<ref>). In the case of n≤ 1536, where the CPU computation is still faster than the FPGAs, we find E_ L1∼ 10^-31.
On the other hand, for the matrix of n=24576 we find E_ L1∼ 10^-28. This is as expected, considering the previous evaluation of the design.
Finally, we compare our results with those of previous work by Kouya <cit.>,
who presented optimizations of LU decomposition using DD arithmetic.
Specifically, they have applied memory blocking and vectorization using AVX2 instructions
and evaluated the performance on an Intel Core i9-10900X CPU. According to their benchmarking for n = 1024, the performance of a conventional blocked LU decomposition code with b=64 was 132MFlops. Similarly, the performance of a vectorized LU decomposition code with b=32 was 363MFlops. In contrast, our result with the design of 8× 16 PEs achieved 324.5MFlops for n=1024 and b=108 on an Agilex FPGA. Even the fastest design on the high-end FPGA is not significantly beneficial for small matrices.
As a result, from a performance perspective for small matrices, our designs are inferior to the vectorized LU decomposition code on a CPU.
However, we emphasize that our designs on recent FPGAs are much more effective for large n.
With the current best performance of our LU decomposition being 2.5GFlops, our FPGA designs are superior for large matrices. It is also worth noting that our work and the work by Kouya <cit.> use different FP formats.
DD arithmetic is well suited for recent high-end CPUs equipped with vector arithmetic units such as AVX2 and AVX512 instructions on the x86-64 ISA, Neon, and SVE instructions on the ARM ISA.
§.§ Semidefinite Programming (SDP)
SDP is an optimization problem that minimizes or maximizes a given linear function subject to the constraint that certain symmetric matrices are positive semidefinite. It has a vast range of possible applications in engineering <cit.>, finance <cit.>, quantum chemistry <cit.>, and physics <cit.>, which have been investigated for a long time.
SDPA <cit.> is a numerical implementation and software package for SDP written in C++ <cit.>.
The algorithm used in the SDPA is the PDIPM, one of the iterative methods for SDP.
Previous research <cit.> has extended the SDPA to support FP operations in various precisions, resulting in SDPA-GMP, SDPA-DD, and SDPA-QD <cit.>. The GMP version uses arbitrary-precision arithmetic.
Thus, a user must specify the precision beforehand. These extended versions of the SDPA use part of MPLAPACK <cit.> as a back-end, mainly by calling the Rgemm routine.
To determine which parameters are used in the GEMM routines called from the SDPA, we run the 92 problems provided by SDPLIB <cit.> using SDPA-binary128 with MPLAPACK. As we are currently focusing on accelerating GEMM routines in our work, we modified the code to record the 13 arguments of the Rgemm routine specified in Listing <ref> during the execution of all problems.
Analysis of the collected data reveals that the SDPA frequently calls the Rgemm routine with non-square matrices,
and none of the leading dimensions of the matrices in the Rgemm routine equal m, n, or k. Of the over 800 combinations of arguments recorded in the collected data, we find only 50 combinations where the condition n = m = k = lda = ldb = ldc holds. As shown in Sec. <ref>, the performance of designs on FPGAs for non-square matrices is inferior to that for square matrices.
Based on our analysis, we evaluate the performance of the SDPA calling the Rgemm operation accelerated by an FPGA only when either of two conditions is satisfied: (1) m equals n, or (2) m × n × k is larger than a predefined parameter N_ min = 10^6.
We test different N_ min and find that N_ min = 10^6 to 10^7 is optimal for the SDPA.
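The resulting dispatch rule can be written as a small helper; the code below is hypothetical and not part of the SDPA or MPLAPACK sources.

N_MIN = 10**6

def use_fpga(m, n, k, n_min=N_MIN):
    # Offload Rgemm to the FPGA for square shapes or large m*n*k products.
    return m == n or m * n * k > n_min

print(use_fpga(2000, 2000, 2000))            # True  -> FPGA-accelerated Rgemm
print(use_fpga(100, 50, 80))                 # False -> CPU Rgemm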
We only present the performance benchmarking of the SDPA on Agilex FPGA
for selected problems from SDPLIB shown in Table <ref>.
We present the elapsed time per iteration of the SDPA-binary128
on the three systems: CPU-A (Intel Xeon Gold 5122 4 cores @ 3.60GHz),
CPU-B (Intel i9-10900 CPU 10 cores @ 2.80GHz), and CPU-B using the design of 8 × 16 PEs on Agilex.
The performance with the FPGA is 2 to 4x and roughly 1.5x faster than that of CPU-A and CPU-B, respectively.
Note that the performance of SDPA-binary128 on CPUs is proportional to the number of cores on a given CPU.
We verify that each solution computed by the design improves upon the solution obtained via double-precision calculations.
As illustrated in Table <ref>, we present the relative gaps, primal/dual feasible errors, and the numbers of iterations for the problems theta2, theta3, theta4, theta6, and control11 from SDPLIB,
as computed on CPU-B using binary128, FPGA (Agilex) using our design,
the DD precision version <cit.>, and the double precision version <cit.>.
As smaller errors indicate better results, the solutions obtained via our design exhibit an improvement over those obtained via double-precision calculations and are of comparable or slightly superior quality to those obtained via DD arithmetic.
Our binary128 Rgemm accelerated by FPGAs effectively accelerates the PDIPM for SDP problems.
§.§ Discussions on Application Performance
The blocked LU decomposition algorithm Rgetrf outlined in Sec. <ref>
employs the Rgemm operation to update A_22 with the product L_21U_12,
where both matrix operands are non-square and skinny:
L_21 and U_12 are matrices of dimensions k × b and b × k, respectively.
During the loop from step 2 to step 6, k is reduced as k = n - pb,
where p represents the iteration number starting from p = 1.
In the initial phase of the algorithm, k is large enough that our designs on the Agilex FPGA
effectively accelerate the performance of Rgetrf.
However, as k becomes much smaller than n at a later phase of the algorithm,
the acceleration by the Agilex FPGA becomes ineffective.
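As a reference for this discussion, the following NumPy sketch implements a standard right-looking blocked LU factorization without pivoting; it is not the Rgetrf implementation evaluated here, but it shows how the trailing update is a GEMM whose outer dimension k = n - pb shrinks with every panel step.

import numpy as np

def blocked_lu(a, b=64):
    """Blocked LU factorization without pivoting; returns the packed LU factors."""
    a = a.copy()
    n = a.shape[0]
    for s in range(0, n, b):
        e = min(s + b, n)
        # Unblocked LU of the diagonal block.
        for j in range(s, e):
            a[j + 1:e, j] /= a[j, j]
            a[j + 1:e, j + 1:e] -= np.outer(a[j + 1:e, j], a[j, j + 1:e])
        if e < n:
            L11 = np.tril(a[s:e, s:e], -1) + np.eye(e - s)
            U11 = np.triu(a[s:e, s:e])
            a[s:e, e:] = np.linalg.solve(L11, a[s:e, e:])        # U12 (b x k)
            a[e:, s:e] = np.linalg.solve(U11.T, a[e:, s:e].T).T  # L21 (k x b)
            a[e:, e:] -= a[e:, s:e] @ a[s:e, e:]                 # the skinny GEMM update
    return a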
The blocking size b also impacts the performance of the GEMM on FPGAs.
For instance, if b is too small, the performance of Rgemm on FPGAs
is significantly reduced, as depicted in Figs. <ref> and <ref>.
On the other hand, the PDIPM frequently calls the Rgemm operation
for small non-square matrices with a wide range of combinations of matrix sizes n, k, and m.
The largest matrix size in all problems presented in Table <ref> is only n = k = m = 2000.
With a matrix size of n = k = m = 2000, the performance of Rgemm on FPGAs is half the peak performance.
In most cases, the algorithm calls the Rgemm operation for much smaller matrices, which are therefore not executed on the FPGA.
In a previous evaluation of a fast GEMM in DD arithmetic on GPUs by Nakata <cit.>,
it was shown that the performance of the PDIPM in DD arithmetic accelerated by a GPU
is more than 10x faster than that on a CPU with four cores.
According to their results, the size of matrices does not significantly affect the performance
of Rgemm on GPU. Therefore, they have always utilized the GPU, except for very small matrices.
Despite the superior performance of our accelerated Rgemm implementation on the Agilex FPGA,
which is more than 100x faster than the reference Rgemm on a 10-core CPU,
the two applications evaluated in this section are not substantially accelerated by the FPGA.
Therefore, to make our designs on FPGAs more practical for real-world applications,
we will need to extensively modify the systolic array design generated by the library to address the performance degradation for small and non-square matrices.
A potential solution is to develop an extended version of Rgemm that incorporates another level of blocking in the host code. Specifically, we could develop a new Rgemm API based on a batched GEMM algorithm <cit.>.
It would allow us to instantiate multiple systolic arrays on an FPGA to handle the batched GEMM algorithm.
A hardware implementation of a batched GEMM algorithm, focusing on FP numbers of 64 bits and below, was reported by Ledoux <cit.>. Their systolic array design leverages a stalling-free output scheme for the output matrix C to maximize the overlap of host data transfers with GEMM computations.
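A minimal sketch of what such a batched interface could look like is given below; the name rgemm_batched and its signature are hypothetical and only illustrate the idea of grouping many small GEMMs into one call that could be spread over several systolic-array instances.

def rgemm_batched(alphas, a_list, b_list, betas, c_list):
    # Hypothetical batched interface: C_i <- alpha_i * A_i @ B_i + beta_i * C_i.
    # On an FPGA, each entry could be streamed to a separate systolic-array instance.
    return [al * (a @ b) + be * c
            for al, a, b, be, c in zip(alphas, a_list, b_list, betas, c_list)]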
§ CONCLUSION
In this paper, we presented our implementation, its evaluation on different Intel FPGAs, and
its integration into numerical applications such as blocked LU decomposition and SDP.
Our GEMM designs on FPGAs are based on the 2-D systolic array generated by the library.
Furthermore, by optimizing memory buffer size, which stores reused data in fast on-chip memory,
we successfully implemented 8× 16 PEs to accelerate the GEMM in
binary128 arithmetic on FPGAs.
The benchmarking in this paper showed that our implementation is particularly advantageous when computing large matrices of size n>10^4.
For example, in our evaluation of the implementation on the Agilex FPGA, the performance was 90.9GFlops, 91% of the estimated peak performance of the design. This resulted in a 147x speed-up compared to the Rgemm routine provided by MPLAPACK on an i9-10900 CPU with 20 threads.
Further benchmarking of various matrix multiplications showed that our designs are highly effective at accelerating
GEMM operations for square and almost-square matrices. In other words, LU decomposition can be solved faster using our implementation than with existing CPU routines. However, our design was not effective at handling tall-skinny matrices, which are commonly encountered when solving semidefinite programming problems.
Our current systolic array designs for GEMM operations are based on the OpenCL kernels
generated by the latest version of the library <cit.>.
The library is designed to be flexible and accommodates various kernel configurations
for different BLAS routines, such as General Matrix-Vector Multiplication (GEMV) and
Triangular Solve with Multiple Right-Hand Sides (TRSM).
However, in this study, we extracted only the systolic array kernels of GEMM for our work.
Extending our work to other BLAS routines would be an interesting area for future research.
There is still room for optimization to improve the performance of our GEMM design
when we use it to calculate tall-skinny matrix multiplications.
Further optimizations are necessary to achieve the desired performance, especially for SDP problems.
In future work, we will compare such optimized GEMM designs with other high-precision GEMM
implementations on accelerators. Another area of future work will be to
explore other FP formats in our GEMM designs by replacing
the current binary128 units with units in different arithmetic.
§ ACKNOWLEDGMENT
A part of this paper is based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This work was partly supported by MEXT as ”Feasibility studies for the next-generation computing infrastructure” and KAKENHI Grant Number JP23K11133.
This research in part used computational resources of Cygnus provided by Multidisciplinary Cooperative Research Program in Center for Computational Sciences, University of Tsukuba.
We thank Prof. Ishikawa, High Energy Accelerator Research Organization, and Prof. Daisaka, Hitotsubashi University, Japan, for their help evaluating our designs on Stratix10.
IEEEtran
|
http://arxiv.org/abs/2306.17665v1
|
20230630135219
|
Fuzzy Dark Matter in Relativistic Stars
|
[
"Zeinab Rezaei"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
Fuzzy dark matter (FDM), a practical alternative to cold dark matter, can exist in compact stars.
Here, applying the FDM equation of state (EoS) constrained by CMB and large-scale structure data,
we calculate the structure of relativistic stars in the presence of FDM.
For this aim, EoSs for the visible matter in neutron stars, quark stars, and hybrid stars constrained by observational data are employed.
A piecewise polytropic EoS constrained by the observational data of GW170817 and the data of six low-mass X-ray binaries with
thermonuclear burst or the symmetry energy of the nuclear interaction describes the neutron star matter.
For quark star matter, we apply the EoSs within the Bayesian statistical approach using the mass and
radius measurements of PSR J0030+0451 from NICER. Employing the two-fluid formalism, we study the structure of FDM admixed relativistic stars.
(cosmology:) dark matter, stars: interiors, cosmology: observations.
§ INTRODUCTION
Fuzzy dark matter (FDM), composed of ultralight bosonic particles with m ∼ 10^-22 eV, has been proposed to solve
different problems such as the disagreement between cold dark matter (DM) predictions and small-scale observations,
the missing-satellite problem, and the core-cusp problem in dwarf galaxies <cit.>.
FDM, as a Bose-Einstein condensate with quantum effects at scales of the order of kpc (the de Broglie wavelength of
the particles), experiences quantum pressure as well as gravitational attraction.
Due to the balance between the quantum pressure and gravity, a soliton core forms near the center of the FDM halo,
and the core structure can reveal the FDM particle properties <cit.>.
The behavior of FDM at large scales is not different from the cold DM, while the quantum nature of
FDM influences the structure formation at small scales <cit.> and delays galaxy formation via
macroscopic quantum pressure <cit.>. The wavelike nature
of FDM results in the formation of granular structures in the FDM halo <cit.>.
Several studies have been considered to constrain the mass of FDM particles.
Galaxy luminosity function at high redshifts <cit.>,
Lyman alpha forests <cit.>,
CMB power spectrum <cit.>, radius-dependent velocity dispersion <cit.>, abundance of Milky Way subhalos <cit.>,
tidal streams from globular clusters <cit.>, galactic ultra-faint
dwarf galaxies <cit.>, observed displacements of star clusters and
active galactic nuclei from the centers of their host galaxies <cit.>,
and the observations of high-redshift lensed galaxies from CLASH survey <cit.>
are some examples.
Ultralight axion DM is one of the candidates for FDM <cit.>.
The forms of these axions have been predicted in string theory <cit.>.
In some investigations, the detection of axion DM has been considered <cit.>.
FDM can influence the astrophysical objects in different scales.
The formation of the first galaxies in an FDM cosmology has been studied, and the primordial stars can form along dense DM filaments <cit.>.
The structure of self-gravitating systems containing axions (axion stars) has been investigated, and
the collision of axion stars with neutron stars (NSs) can release the energy of the axions <cit.>.
There may be a large number of axion stars in galaxies and their collisions with each other and with other astrophysical
objects such as ordinary stars and NSs are possible <cit.>.
The attractive self-interactions of DM axions result in nongravitational growth of density fluctuations and the formation of bound
objects can influence the axion density perturbations on length scales <cit.>.
Cold DM axions may be converted into photons in the NS magnetosphere <cit.>.
Axion DM can be detected via the narrow radio lines radiated by the NSs <cit.>.
Pulsar timing array experiment has been suggested to detect the FDM signals <cit.>.
FDM affects the dynamics of binary systems <cit.>.
Variations of the orbital parameters of binary systems induced by the perturbations of FDM have been studied <cit.>.
Recently, DM in different compact objects such as NSs and quark stars (QSs) has been one of the
interesting subjects in astrophysics.
NSs can constrain the asymmetric DM <cit.>.
Low-mass NSs can be formed from the accretion-induced collapse of DM admixed white dwarfs <cit.>.
Spectroscopy measurements of NSs have been employed to detect DM <cit.>.
NSs admixed with DM and the constraints on DM properties from the observation of GW170817 <cit.>
have been explored.
DM particles can be captured by NSs and this leads to the NSs thermalization <cit.>.
DM interactions with muons <cit.>, DM Admixed NSs with
the DM-nucleons interactions via Higgs portal <cit.>, and self-interacting bosonic DM <cit.>
have been considered.
DM affects the nuclear matter parameters and the equations of state (EoSs) of nucleonic-matter <cit.> and
the curvatures of the NS <cit.>.
By modeling a massive NS with DM particles, the secondary component of GW190814
has been constrained <cit.>.
The possibility of the fact that GW190814 is a bosonic DM admixed compact star has been studied <cit.>.
Mass radius relation and second Love number of stars containing ordinary matter and non-self annihilating fermionic DM
have been calculated <cit.>.
The transmutation of NSs admixed with DM and gravitational collapse in the star centers result in
the formation of black holes with masses M≈ 1 M_⊙ <cit.>.
Dynamical evolution of DM admixed NSs with fermionic DM has been investigated <cit.>.
Self-annihilating neutralino WIMP DM may accrete onto NSs, and compact objects with long-lived lumps of
strange quark matter may form <cit.>.
The regions of stability for compact stars containing massless quark matter and fermionic DM have been calculated <cit.>.
The observation of strange QSs could set constraints on
the scattering cross sections of light quarks and non-interacting scalar DM <cit.>.
The structure of strange stars admixed with self-interacting bosonic DM has been considered <cit.>.
The observations of strange stars in GW170817 confirmed that these stars have a mirror DM core <cit.>.
According to the above discussions, one can easily conclude that the FDM can have important effects on relativistic stars.
In this paper, we study the structure of NSs, QSs, and hybrid stars in the presence of FDM.
§ FUZZY DARK MATTER CONSTRAINED BY THE OBSERVATIONAL DATA
In this study, we employ a constrained FDM model with a quartic self-interaction <cit.>.
For this aim, a scalar field ϕ with the Lagrangian
ℒ = (1/2) g^μν ∂_μϕ ∂_νϕ - V(ϕ),
is considered. The potential has the form
V(ϕ)=1/2m^2ϕ^2+1/4λϕ^4,
in which m denotes the mass term and λ shows the strength of quartic self-interactions. Assuming a homogeneous and isotropic universe with a flat Robertson-Walker metric, anharmonic corrections to the mass term lead to the EoS for the scalar field with pressure P and density ρ,
w=P/ρ,
with
w = (3λρ/(8m^4)) / (1 + 9λρ/(8m^4)).
Applying CMB <cit.> and large-scale structure (LSS) <cit.> data, the parameters of this
model have been constrained <cit.>. The constraint for the mass is m≥10^-24 eV and
for allowed masses, the constraint on λ is as follows,
log_10λ<-91.86+4log_10(m/10^-22 eV).
Here, to describe FDM, we apply the values m=10^-24 eV for the mass and λ=10^-100 for the
self-interactions of FDM. In Figure <ref>, we present the EoS of FDM
constrained by the observational data.
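As a simple numerical illustration of this EoS, the sketch below evaluates w(ρ) in natural units (ħ = c = 1), with m in eV, λ dimensionless, and ρ in eV^4; the function names are ours, and any unit conversion to astrophysical quantities is left implicit.

def fdm_w(rho, m=1e-24, lam=1e-100):
    # w = (3*lam*rho/(8*m^4)) / (1 + 9*lam*rho/(8*m^4))
    x = lam * rho / (8.0 * m**4)
    return 3.0 * x / (1.0 + 9.0 * x)

def fdm_pressure(rho, m=1e-24, lam=1e-100):
    return fdm_w(rho, m, lam) * rho   # P = w * rho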
§ TWO-FLUID FORMALISM FOR FUZZY DARK MATTER ADMIXED STARS
Starting with the two-fluid formalism <cit.>, we consider a static and spherically
symmetric spacetime described by the line element,
dτ^2=e^2ν(r)dt^2-e^2λ(r)dr^2-r^2(dθ^2+sin^2θ dϕ^2),
and the energy momentum tensor of a perfect fluid,
T^μν=-p g^μν+(p+ε)u^μu^ν.
In the expression for T^μν, p and ε are the total
pressure and total energy density, respectively, which receive
contributions from both the visible (V) and dark (D) sectors,
p(r) = p_V(r) + p_D(r),
ε(r) =ε_V(r) + ε_D(r).
In Eq. (<ref>), p_V is the pressure of the visible matter in compact stars, described by its EoS, while p_D
is the FDM pressure given by the EoS in Eq. (<ref>).
Considering the above profiles, the Einstein field equations result in <cit.>
e^-2λ(r)=1-2M(r)/r,
dν/dr = [M(r) + 4π r^3 p(r)] / ( r [r - 2M(r)] ),
dp_V/dr=-[p_V(r)+ε_V(r)] dν/dr,
dp_D/dr=-[p_D(r)+ε_D(r)] dν/dr.
Here, M(r)=∫_0^r dr 4 π r^2 ε(r) denotes the total mass inside a sphere with radius r
and we specify the visible matter sphere and DM sphere with
the conditions p_V(R_V)=0 and p_D(R_D)=0, respectively. In this work, we assume that the densities of visible matter and dark matter are the same in the center of the star.
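A minimal Euler-step sketch of this two-fluid integration (in geometric units G = c = 1) is given below; eos_v and eos_d are placeholder callables returning the energy density of each fluid as a function of its pressure, and the step size and stopping criteria are purely illustrative.

import numpy as np

def solve_two_fluid_tov(p_v_c, p_d_c, eos_v, eos_d, dr=1e-4, r_max=50.0):
    r, m = dr, 0.0
    p_v, p_d = p_v_c, p_d_c        # central pressures of the two fluids
    r_v = r_d = None               # visible radius R_V and dark radius R_D
    while r < r_max and (p_v > 0.0 or p_d > 0.0):
        eps = (eos_v(p_v) if p_v > 0.0 else 0.0) + (eos_d(p_d) if p_d > 0.0 else 0.0)
        p = max(p_v, 0.0) + max(p_d, 0.0)
        dnu = (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        if p_v > 0.0:
            p_v -= (p_v + eos_v(p_v)) * dnu * dr
            if p_v <= 0.0:
                r_v = r            # visible surface: p_V(R_V) = 0
        if p_d > 0.0:
            p_d -= (p_d + eos_d(p_d)) * dnu * dr
            if p_d <= 0.0:
                r_d = r            # dark surface: p_D(R_D) = 0
        m += 4.0 * np.pi * r**2 * eps * dr   # total enclosed mass
        r += dr
    return m, r_v, r_d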
For stars in binaries, the tidal forces induce tidal deformations in the stars <cit.>. The traceless quadrupole moment tensor of the star Q_ij is related to the tidal field tensor E_ij by
Q_ij=-2/3k_2 R_V^5 E_ij=-λ E_ij,
in which λ = 2/3k_2 R_V^5 denotes the tidal
deformability. Besides, the tidal Love number k_2 is as follows <cit.>,
k_2 = (8β^5/5) (1-2β)^2 [2 - y_R + 2β(y_R - 1)]
× { 2β [6 - 3y_R + 3β(5y_R - 8)]
+ 4β^3 [13 - 11y_R + β(3y_R - 2) + 2β^2(1 + y_R)]
+ 3(1-2β)^2 [2 - y_R + 2β(y_R - 1)] ln(1-2β) }^-1.
and β = M/R denotes the compactness of the star. Furthermore, solving the following differential
equation leads to the value of y_R=y(r=R_V),
rdy(r)/dr+y^2(r)+y(r)F(r)+r^2Q(r)=0.
The functions F(r) and Q(r) are given by,
F(r)=[1-4π r^2(ε(r)-p(r))](1-2M(r)/r)^-1,
and
r^2 Q(r) = 4π r^2 [ 5ε(r) + 9p(r) + (ε(r) + p(r))/(∂p(r)/∂ε(r)) ] (1 - 2M(r)/r)^-1
- 6 (1 - 2M(r)/r)^-1
- (4M^2(r)/r^2) (1 + 4π r^3 p(r)/M(r))^2 (1 - 2M(r)/r)^-2.
We solve Eq. (<ref>) along Eqs. (<ref>)-(<ref>) with the initial condition y(0) = 2.
In addition, the dimensionless tidal deformability is defined by
Λ = (2/3) k_2 R_V^5/M^5.
In the case of quark stars, which are self-bound, the discontinuity of the energy density at the stellar surface should be considered. In the present study, we apply the boundary treatment on the stellar surface
to join the interior solution with the exterior one as in Refs. <cit.>,
y_R^ext = y_R^int - 4π R_V^3 ε_s / M,
in which ε_s is the energy density at the surface of star.
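For reference, the expressions above can be evaluated directly as in the sketch below (the function names are ours), given the compactness β = M/R_V and the value y_R obtained from the integration of the y(r) equation (including, for self-bound stars, the surface correction just given).

import numpy as np

def love_number_k2(beta, y_r):
    num = (8.0 / 5.0) * beta**5 * (1.0 - 2.0 * beta)**2 * (2.0 - y_r + 2.0 * beta * (y_r - 1.0))
    den = (2.0 * beta * (6.0 - 3.0 * y_r + 3.0 * beta * (5.0 * y_r - 8.0))
           + 4.0 * beta**3 * (13.0 - 11.0 * y_r + beta * (3.0 * y_r - 2.0)
                              + 2.0 * beta**2 * (1.0 + y_r))
           + 3.0 * (1.0 - 2.0 * beta)**2 * (2.0 - y_r + 2.0 * beta * (y_r - 1.0))
             * np.log(1.0 - 2.0 * beta))
    return num / den

def dimensionless_lambda(beta, y_r):
    # Lambda = (2/3) k2 (R_V/M)^5 = 2 k2 / (3 beta^5)
    return (2.0 / 3.0) * love_number_k2(beta, y_r) / beta**5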
§ FUZZY DARK MATTER ADMIXED NEUTRON STAR
In order to quantify the visible matter in NSs, we utilize the EoS
of dense NS matter in the form of a piecewise polytropic expansion
which is constrained by the observational data of GW170817 and the data of six low-mass X-ray binaries (LMXB) with thermonuclear burst or the symmetry energy of the nuclear interaction <cit.>.
The EoS with the expression P=Kρ^Γ is parameterized with four pressure parameters
{p̂_̂1̂,p̂_̂2̂,p̂_̂3̂,p̂_̂4̂} at the corresponding densities of {1, 1.85, 3.7, 7.4}ρ_sat
in which the saturation density has the value ρ_sat=2.7×10^14 gcm^-3 <cit.>.
The joint analysis confirms that the constraint on p̂_̂1̂ mainly is the result of nuclear constraints,
the constraint on p̂_̂2̂ is predominantly determined by the gravitational wave data and the LMXB sources with thermonuclear bursts, the constraint on p̂_̂3̂ heavily comes from the LMXB source data
and the current bounds of M_TOV, and the range of p̂_̂4̂ is
narrowed down by LMXB sources with thermonuclear burst.
The piecewise polytropic EoS of NS matter and the mass-radius relations for both NSs and NSs admixed with FDM are given in Figure <ref>.
For the FDM admixed neutron star (FDMANS), we have considered the total mass versus the visible radius, i.e. the radius of the sphere
containing NS matter.
FDM leads to stars with lower masses. The radius of FDMANSs is smaller than the
radius of NSs with the same mass. Therefore, FDM results in more compact stars.
For most FDMANSs, the larger stars are more massive, in contrast with NSs.
The interaction between FDM and NS matter leads to self-bound FDMANSs, a behavior different from that of normal NSs, which are gravitationally bound.
We have also shown the constraints on the mass radius relation obtained from the pulsars and the gravitational wave data with different colour bars.
The observations of PSR J0952-0607 <cit.>, PSR J2215+5135 <cit.>, PSR J0740+6620 <cit.>, and PSR J0030+0451 <cit.>, together with the merger events GW170817 <cit.> and GW190814 <cit.>, give these constraints.
Both NSs and FDMANSs satisfy the constraints from the recent observational data.
Compared with the presented observations, the maximum mass of FDMANSs is lower than ∼ 2.0 M_⊙;
FDM thus leads to stars with a maximum mass below all the observational constraints shown in this figure.
Figure <ref> shows the behavior of the visible and dark sectors in FDMANSs.
In very low-mass stars, the mass of the two sectors is not sensitive to the size of the spheres.
However, for other FDMANSs, the mass of the visible and dark spheres grows with increasing radius.
The results show that for the dark sphere this behavior does not hold for all stars:
in large dark spheres, the mass decreases as the radius grows.
Figure <ref> also shows that in smaller FDMANSs, the mass of the dark sphere is higher than
that of the NS matter sphere, whereas in larger FDMANSs, the mass of the visible sector
is dominant. This opposite behavior of the visible and dark sectors in FDMANSs is due to the different EoSs of the two sectors.
In Figure <ref>, we present the tidal Love number k_2, the value y_R, and the dimensionless tidal deformability Λ for NSs and FDMANSs. Except for the low-mass stars, the tidal Love number decreases due to the presence of the FDM in the stars. Besides, the star mass corresponding to the maximum value of the tidal Love number is lower when the FDM is considered in the stars. However, the value y_R is higher in FDMANSs compared to NSs in most cases. Our calculations confirm that the dimensionless tidal deformability decreases with the star mass for both NSs and FDMANSs. The FDM leads to a considerable reduction of the dimensionless tidal deformability. This decrease is more significant in low-mass stars. Moreover, we show the upper limits on the dimensionless tidal deformability, Λ_1.4=190^+390_-120 for GW170817 <cit.>
and Λ_1.4=616^+273_-158 for GW190814 <cit.> obtained by LIGO and Virgo Collaborations.
In NSs, the dimensionless tidal deformability lies within the range 70≤Λ_1.4≤580 related to GW170817, while the parameter Λ for NSs is lower than Λ_1.4=616^+273_-158 related to GW190814. Considering the FDMANSs, both upper limits from GW170817 and GW190814 are larger than the dimensionless tidal deformability.
§ FUZZY DARK MATTER ADMIXED QUARK STAR
In this work, we apply three EoSs of QSs within the Bayesian statistical approach using the mass and
radius measurements of PSR J0030+0451 from NICER <cit.>. These self-bound strange quark matter EoSs are based on the bag models in which the finite quark mass and superfluidity are also considered.
Our system describing the strange quark matter is a mixture of the massless u, d quarks and electrons, as well as s quarks of finite mass m_s <cit.>.
In the first model, i.e. normal quark matter, the grand canonical potential per unit volume in the bag model
is expressed by,
Ω_Normal = ∑_i=u,d,s,e Ω_i^0 + ( 3(1-a_4)/(4π^2) ) μ^4 + B_eff.
Here, Ω_i^0 denotes the grand canonical potential for particle type i as the ideal Fermi gas <cit.> and
μ=(μ_u+μ_d+μ_s)/3 presents the average quark chemical potential.
In addition, B_eff determines the contributions from the quantum chromodynamics (QCD) vacuum,
and a_4 shows the perturbative QCD contribution from one-gluon exchange for gluon interaction.
Besides, the number density of each part of strange quark matter is related to the chemical potential μ_i(i=u,d,s,e) by,
n_i=-∂Ω/∂μ_i.
The conditions for the quark matter at the equilibrium state are given by the weak interactions,
μ_d=μ_u+μ_e,
μ_d=μ_s.
The condition of charge neutrality is also considered,
2/3n_u= 1/3[n_d+n_s]+n_e.
For normal quark matter, the pressure of quark matter at each value of μ is calculated by,
P_Normal=- Ω_Normal,
and the energy density of quark matter is as follows,
ε_Normal= Ω_Normal+∑_i=u,d,s,eμ_i n_i.
In the two-parameter model Normal(B_eff; a_4), the strange quark mass is fixed as
m_s = 100 MeV and the two parameters (B_eff; a_4) are determined from the joint MSP J0740+6620 and PSR J0030+0451
analysis <cit.>.
The second model describing the superfluid quark matter is Color-Flavor Locked (CFL) in which an additional term related to the
pairing energy is added to the grand canonical potential <cit.>,
Ω_CFL = Ω_Normal + (3m_s^4 - 48Δ^2 μ^2)/(16π^2).
In the three-parameter model CFL(B_eff; a_4; Δ), as Normal model, the strange quark mass is m_s = 100 MeV
and the three parameters (B_eff; a_4; Δ) are constrained by the observational data <cit.>.
The third model is four-parameter model CFLm(B_eff; a_4; Δ; m_s) in which the strange quark mass m_s of the CFL superfluid quark matter is also constrained by the observational data <cit.>.
Figure <ref> presents the three models for the EoS of strange star matter considered in this work.
In the CFL and CFLm models, the EoS is stiffer than in the Normal model.
The CFLm model also leads to an EoS that is stiffer than that of the CFL model. The mass-radius relation for QSs and FDM admixed QSs (FDMAQSs) is given in Figure <ref>.
In each model for quark matter, the maximum mass of FDMAQSs reaches a value lower than the one
for QSs. FDM affects the star such that FDMAQSs are smaller than QSs with the
same mass. Therefore, the FDM leads to more compact stars, as in the case of NSs. This result is in agreement with the one obtained in <cit.>.
QSs fulfill both the maximum mass and the mass radius constraints from the presented observational data. In addition, the maximum mass of FDMAQSs is lower than the value related to the maximum mass constraints.
Figure <ref> shows the mass-radius relation for the visible and dark sectors in FDMAQSs.
In the three models, both the visible and dark sectors show a self-bound behavior, like the QSs and FDMAQSs.
For spheres with smaller sizes, the mass of the sphere is not sensitive to
the size. For most FDMAQSs, the contributions of the two sectors to the mass of the stars are the same, while in massive stars,
the mass of the visible sphere is higher than that of the dark one.
The tidal Love number k_2, the value y_R, and the dimensionless tidal deformability Λ for QSs and FDMAQSs are given in Figure <ref>.
In most FDMAQSs, the tidal Love number takes higher values compared to QSs with the same mass, whereas in massive FDMAQSs, FDM leads to a reduction of the tidal Love number. Generally, FDMAQSs can experience larger values of the tidal Love number.
Figure <ref> also indicates that for both QSs and FDMAQSs, y_R increases as the mass grows, and y_R for FDMAQSs is larger than for QSs. Our calculations confirm that FDM in QSs results in a considerable decrease of the dimensionless tidal deformability, similar to the behavior in NSs. Besides, for both QSs and FDMAQSs, the dimensionless tidal deformability is lower than the upper limits from GW170817 and GW190814.
§ FUZZY DARK MATTER ADMIXED HYBRID STAR
For this study, we suppose that the hybrid star is composed of
a quark phase and a hadronic phase within a model like the one considered in <cit.>. In our model, these two parts are split by a sharp phase-transition surface without a mixed phase and the density at the
phase-splitting surface can be discontinuous <cit.>.
For the quark phase, we apply three EoSs, i.e. Normal, CFL, and CFLm models.
Furthermore, to describe the hadronic phase, the EoS of dense NS matter based on the observational data, which was considered in Section 4, is applied. The density jump at the surface of the quark-hadronic phase transition
is taken as a free parameter. By defining the parameter,
η≡ϵ_q/ϵ_h-1,
in which ϵ_q shows the density at the top of the quark phase and ϵ_h denotes
the density at the bottom of the hadronic phase, we quantify the density jump.
According to p_q = p_h at the quark-hadronic phase transition interface, ϵ_q or ϵ_h and the phase transition pressure are determined.
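A possible numerical sketch of this matching is shown below: for a chosen η, the transition pressure is the root of ε_q(p) = (1+η) ε_h(p), and the hybrid EoS switches branches there. The function names and the bracketing interval are illustrative, and eps_h_of_p and eps_q_of_p stand for the hadronic and quark EoSs (energy density as a function of pressure).

from scipy.optimize import brentq

def transition_pressure(eps_h_of_p, eps_q_of_p, eta, p_lo, p_hi):
    # Solve eps_q(p_t) = (1 + eta) * eps_h(p_t); [p_lo, p_hi] must bracket the root.
    f = lambda p: eps_q_of_p(p) - (1.0 + eta) * eps_h_of_p(p)
    return brentq(f, p_lo, p_hi)

def hybrid_eps(p, p_t, eps_h_of_p, eps_q_of_p):
    # Hadronic branch below p_t, quark branch above; the energy density jumps at p_t.
    return eps_h_of_p(p) if p < p_t else eps_q_of_p(p)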
In Figure <ref>, we present the mass-radius relation for the hybrid star (HS) and the FDM admixed hybrid star (FDMAHS) in the two cases η=0 and η=0.8. FDM affects the mass of hybrid stars
in such a way that the maximum mass decreases. Similar to the other compact objects, the FDMAHSs are smaller in size compared to hybrid stars; the FDM results in more
compact stars. Similar to QSs, HSs also satisfy both the maximum mass and the mass-radius constraints. Moreover, our results verify that FDMAHSs fulfill the maximum mass constraint.
Figure <ref> gives the mass-radius relations of the visible and dark sectors in FDMAHSs.
The mass of the sphere containing visible matter increases as its size grows.
In all models, the spheres are self-bound, with different contributions of visible and dark matter in low-mass and massive stars.
In the discontinuous model, the range of radii is larger than in the continuous model. The low values of the masses of the visible and dark sectors show the contribution of these parts to the total mass of FDMAHSs. Besides, Figure <ref> verifies that FDM results in the contraction of the FDMAHSs and a smaller stellar radius.
For HSs and FDMAHSs, we show the tidal Love number k_2, the value y_R, and the dimensionless tidal deformability Λ in Figure <ref>.
Except in low-mass FDMAHSs, the tidal Love number of FDMAHSs is smaller than the one in HSs. Besides, considering HSs, the tidal Love number is larger in
the discontinuous model. However, in FDMAHSs, the tidal Love number is almost the same in the continuous and discontinuous models.
In addition, considering FDMAHSs, the value y_R is smaller than the one in HSs. The discontinuous model gives lower values for y_R. Our calculations confirm that FDM considerably reduces the dimensionless tidal deformability of FDMAHSs, as in FDMANSs and FDMAQSs. The dimensionless tidal deformability is higher in the discontinuous model compared to the continuous one. This enhancement is more significant in the low-mass stars. Our calculations verify that for both HSs and FDMAHSs, the dimensionless tidal deformability is lower than the upper limits from GW170817 and GW190814.
§ SUMMARY AND CONCLUSIONS
In the relativistic two-fluid formalism, we have explored the effects of fuzzy dark matter (FDM) on
compact stars. The equations of state used for the FDM as well as for the visible matter in the stars
are based on observational data.
Our results verify that in FDM admixed neutron stars, FDM leads to neutron stars with lower masses.
Moreover, FDM makes more compact neutron stars.
In FDM admixed neutron stars, the mass of the visible and dark spheres grows as the radius increases.
Besides, the mass of the visible and dark spheres depends on the size of the stars.
FDM admixed quark stars are smaller than quark stars without FDM with the
same mass, and therefore they are more compact, as in neutron stars.
FDM admixed hybrid stars are also more compact in comparison with hybrid stars with no FDM.
Furthermore, FDM in compact stars leads to a significant change in the dimensionless tidal deformability of stars.
§ ACKNOWLEDGEMENTS
The author wishes to thank the Shiraz University Research Council.
§ DATA AVAILABILITY
All data are given either in this paper or in the references.
mn2e
17
[Abbott et al. (2017)]Abbott7 Abbott B. P., et al., 2017, PhRvL, 119, 161101. doi:10.1103/PhysRevLett.119.161101
[Abbott et al. (2018)]Abbott8 Abbott B. P., et al., 2018, PhRvL, 121, 161101. doi:10.1103/PhysRevLett.121.161101
[Abbott et al. (2020)]Abbott Abbott R., et al., 2020, ApJL, 896, L44. doi:10.3847/2041-8213/ab960f
[Abel et al. (2017)]Abel Abel C., et al., 2017, PhRvX, 7, 041034. doi:10.1103/PhysRevX.7.041034
[Acevedo et al. (2020)]Acevedo Acevedo J. F., Bramante J., Leane R. K., Raj N., 2020, JCAP, 03, 038. doi:10.1088/1475-7516/2020/03/038
[Ade et al. (2016)]Ade Ade P. A. R., et al., 2016, A&A, 594, A13. doi:10.1051/0004-6361/201525830
[Anzuini et al. (2021)]Anzuini Anzuini F., et al., 2021, JCAP, 11, 056. doi:10.1088/1475-7516/2021/11/056
[Armaleo et al. (2020)]Armaleo Armaleo J. M., Nacir D. L., Urban F. R., 2020, JCAP, 01, 053. doi:10.1088/1475-7516/2020/01/053
[Armengaud et al. (2017)]ArmengaudArmengaud E., Palanque-Delabrouille N., Yeche C., Marsh D. J. E., Baur J., 2017, MNRAS, 471, 4606. doi:10.1093/mnras/stx1870
[Arvanitaki et al. (2020)]Arvanitaki Arvanitaki A., Dimopoulos S., Galanis M., Lehner L., Thompson J. O., Tilburg K. V., 2020, PhRvD, 101, 083014. doi:10.1103/PhysRevD.101.083014
[Barranco et al. (2013)]Barranco Barranco J., Monteverde A. C., Delepine D., 2013, PhRvD, 87, 103011. doi:10.1103/PhysRevD.87.103011
[Battye et al. (2021)]BattyeBattye R. A., Garbrecht B., McDonald J., Srinivasan S., 2021, JHEP, 2021, 105. doi:10.1007/JHEP09(2021)105
[Bell et al. (2019)]Bell1 Bell N. F., Busoni G., Robles S., 2019, JCAP, 06, 054. doi:10.1088/1475-7516/2019/06/054
[Bell et al. (2020)]Bell2 Bell N. F., Busoni G., Robles S., Virgato M., 2020, JCAP, 09, 028. doi:10.1088/1475-7516/2020/09/028
[Bell et al. (2021a)]Bell3 Bell N. F., Busoni G., Robles S., Virgato M., 2021, JCAP, 03, 086. doi:10.1088/1475-7516/2021/03/086
[Bell et al. (2021b)]Bell4 Bell N. F., et al., 2021, PhRvL, 127, 111803. doi:10.1103/PhysRevLett.127.111803
[Bhat & Paul(2020)]BhatBhat S. A., Paul A., 2020, EPJC, 80, 544. doi:10.1140/epjc/s10052-020-8072-x
[Blas et al. (2020)]Blas Blas D., Nacir D. L., Sibiryakov S., 2020, PhRvD, 101, 063016. doi:10.1103/PhysRevD.101.063016
[Burkert (2020)]Burkert Burkert A., 2020, ApJ, 904, 161. doi:10.3847/1538-4357/abb242
[Camargo et al. (2019)]Camargo Camargo D. A., Queiroz F. S., Sturani R., 2019, JCAP, 09, 051. doi:10.1088/1475-7516/2019/09/051
[Cembranos et al. (2018)]CembranosCembranos J. A. R., Maroto A. L., Nunez Jareno S. J., Villarrubia-Rojo H., 2018, JHEP, 2018, 73. doi:10.1007/JHEP08(2018)073
[Chowdhury et al. (2021)]Chowdhury Chowdhury D. D., et al., 2021, ApJ, 916, 27. doi:10.3847/1538-4357/ac043f
[Church et al. (2019)]Church Church B. V., Mocz P., Ostriker J. P., 2019, MNRAS, 485, 2861. doi:10.1093/mnras/stz534
[Ciarcelluti & Sandin(2011)]Ciarcelluti Ciarcelluti P., Sandin F., 2011, Phys. Lett. B, 695, 19. doi:10.1016/j.physletb.2010.11.021
[Cicoli et al. (2022)]CicoliCicoli M., et al., 2022, JHEP, 2022, 107. doi:10.1007/JHEP05(2022)107
[Cromartie et al. (2020)]Cromartie Cromartie H. T., et al., 2020, NatAs, 4, 72. doi:10.1038/s41550-019-0880-2
[Dalal et al. (2021)]Dalal Dalal N., Bovy J., Hui L., Li X., 2021, JCAP, 03, 076. doi:10.1088/1475-7516/2021/03/076
[Damour & Nagar(2009)]Damour Damour T., Nagar A., 2009, PhRvD, 80, 084035. doi:10.1103/PhysRevD.80.084035
[Das et al. (2020)]Das1 Das H. C., et al., 2020, MNRAS, 495, 4893. doi:10.1093/mnras/staa1435
[Das et al. (2021)]Das2 Das H. C., Kumar A., Kumar B., Biswal S. K., Patra S. K., 2021, JCAP, 01, 007. doi:10.1088/1475-7516/2021/01/007
[Das et al. (2021)]Das3 Das H. C., Kumar A., Patra S. K., 2021, PhRvD, 104, 063028. doi:10.1103/PhysRevD.104.063028
[Dave & Digal(2022)]Dave Dave S. S., Digal S., 2022, PhRvD, 105, 024039. doi:10.1103/PhysRevD.105.024039
[Dengler et al. (2022)]Dengler Dengler Y., Schaffner-Bielich J., Tolos L., 2022, PhRvD, 105, 043013. doi:10.1103/PhysRevD.105.043013
[Eby et al. (2017)]EbyEby J., et al., 2017, JHEP, 2017, 99. doi:10.1007/JHEP04(2017)099
[Farhi & Jaffe(1984)]FarhiFarhi E., Jaffe R. L., 1984, PhRvD, 30, 2379. doi:10.1103/PhysRevD.30.2379
[Fonseca et al. (2021)]Fonseca Fonseca, E., Cromartie, H. T., Pennucci, T. T., et al., 2021, ApJL, 915, L12. doi:10.3847/2041-8213/ac03b8
[Foster et al. (2020)]Foster Foster J. W., et al., 2020, PhRvL, 125, 171301. doi:10.1103/PhysRevLett.125.171301
[Garani et al. (2019)]Garani1 Garani R., Genolini Y., Hambye T., 2019, JCAP, 05, 035. doi:10.1088/1475-7516/2019/05/035
[Garani & Heeck(2019)]Garani3 Garani R., Heeck J., 2019, PhRvD, 100, 035039. doi:10.1103/PhysRevD.100.035039
[Garani et al. (2021)]Garani2 Garani R., Gupta A., Raj N., 2021, PhRvD, 103, 043019. doi:10.1103/PhysRevD.103.043019
[Garani et al. (2022)]Garani4 Garani R., Levkov D., Tinyakov P., 2022, PhRvD, 105, 063019. doi:10.1103/PhysRevD.105.063019
[Gleason et al. (2022)]Gleason Gleason T., Brown B., Kain B., 2022, PhRvD, 105, 023010. doi:10.1103/PhysRevD.105.023010
[Guth et al. (2015)]Guth Guth A. H., Hertzberg M. P., Prescod-Weinstein C., 2015, PhRvD, 92, 103513. doi:10.1103/PhysRevD.92.103513
[Haensel et al. (1986)]HaenselHaensel P., Zdunik J. L., Schaefer R., 1986, A&A, 160, 121.
[Hayashi et al. (2021)]Hayashi Hayashi K., Ferreira E. G. M., Chan H. Y. J., 2021, ApJL, 912, L3. doi:10.3847/2041-8213/abf501
[Hinderer et al. (2010)]Hinderer0 Hinderer T., Lackey B. D., Lang R. N., Read J. S., 2010, PhRvD, 81, 123016. doi:10.1103/PhysRevD.81.123016
[Hinderer (2008)]Hinderer8 Hinderer T., 2008, ApJ, 677, 1216. doi:10.1086/533487
[Hlozek et al. (2018)]HlozekHlozek R., Marsh D. J. E., Grin D., 2018, MNRAS, 476, 3063. doi:10.1093/mnras/sty271
[Hook et al. (2018)]Hook Hook A., Kahn Y., Safdi B. R., Sun Z., 2018, PhRvL, 121, 241102. doi:10.1103/PhysRevLett.121.241102
[Hu et al. (2000)]HuHu W., Barkana R., Gruzinov A., 2000, PhRvL, 85, 1158. doi:10.1103/PhysRevLett.85.1158
[Huang et al. (2018)]Huang Huang F. P., Kadota K., Sekiguchi T., Tashiro H., 2018, PhRvD, 97, 123001. doi:10.1103/PhysRevD.97.123001
[Hui et al. (2017)]HuiHui L., Ostriker J. P., Tremaine S., Witten E., 2017, PhRvD, 95, 043541. doi:10.1103/PhysRevD.95.043541
[Irsic et al. (2017)]IrsicIrsic V., Viel M., Haehnelt M. G., Bolton J. S., Becker G. D., 2017, PhRvL, 119, 031302. doi:10.1103/PhysRevLett.119.031302
[Ivanytskyi et al. (2020)]Ivanytskyi Ivanytskyi O., Sagun V., Lopes I., 2020, PhRvD, 102, 063028. doi:10.1103/PhysRevD.102.063028
[Jiang et al. (2019)]Jiang Jiang J.-L., et al., 2019, ApJ, 885, 39. doi:10.3847/1538-4357/ab44b2
[Kato & Soda(2020)]Kato Kato R., Soda J., 2020, JCAP, 09, 036. doi:10.1088/1475-7516/2020/09/036
[Kawai et al. (2022)]Kawai Kawai H., Oguri M., Amruth A., Broadhurst T., Lim J., 2022, ApJ, 925, 61. doi:10.3847/1538-4357/ac39a2
[Keung et al. (2020)]KeungKeung W.-Y., Marfatia D., Tseng P.-Y., 2020, JHEP, 2020, 181. doi:10.1007/JHEP07(2020)181
[Khlopov et al. (1985)]KhlopovKhlopov M. Iu., Malomed B. A., Zeldovich Ia. B., 1985, MNRAS, 215, 575. doi:10.1093/mnras/215.4.575
[Khmelnitsky & Rubakov(2014)]Khmelnitsky Khmelnitsky A., Rubakov V., 2014, JCAP, 02, 019. doi:10.1088/1475-7516/2014/02/019
[Kulkarni & Ostriker(2022)]Kulkarni Kulkarni M., Ostriker J. P., 2022, MNRAS, 510, 1425. doi:10.1093/mnras/stab3520
[Kumar et al. (2022)]Kumar Kumar A., Das H. C., Patra S. K., 2022, MNRAS, 513, 1820. doi:10.1093/mnras/stac1013
[Lattimer & Prakash(2007)]Lattimer Lattimer J. M., Prakash M., 2007, Phys. Rept., 442, 109. doi:10.1016/j.physrep.2007.02.003
[Lee et al. (2021)]Lee Lee B. K. K., Chu M.-C., Lin L.-M., 2021, ApJ, 922, 242. doi:10.3847/1538-4357/ac2735
[Leung et al. (2019)]Leung Leung S.-C., et al., 2019, ApJ, 884, 9. doi:10.3847/1538-4357/ab3b5e
[Li et al. (2021)]Li Li A., Miao Z.-Q., Jiang J.-L., Tang S.-P., Xu R.-X., 2021, MNRAS, 506, 5916. doi:10.1093/mnras/stab2029
[Linares et al. (2018)]Linares Linares M., Shahbaz T., Casares J., 2018, ApJ, 859, 54. doi:10.3847/1538-4357/aabde6
[Lopes & Panotopoulos(2018)]Lopes Lopes I., Panotopoulos G., 2018, PhRvD, 97, 024030. doi:10.1103/PhysRevD.97.024030
[Maity & Queiroz(2021)]Maity Maity T. N., Queiroz F. S., 2021, PhRvD, 104, 083019. doi:10.1103/PhysRevD.104.083019
[Martino et al. (2017)]Martino Martino I. D., et al., 2017, PhRvL, 119, 221103. doi:10.1103/PhysRevLett.119.221103
[Menci et al. (2017)]MenciMenci N., et al., 2017, ApJ, 836, 61. doi:10.3847/1538-4357/836/1/61
[Miller et al. (2019)]Miller Miller M. C., et al., 2019, ApJL, 887, L24. doi:10.3847/2041-8213/ab50c5
[Miller et al. (2021)]Miller21 Miller, M. C., Lamb, F. K., Dittmann, A. J., et al., 2021, ApJL, 918, L28. doi:10.3847/2041-8213/ac089b
[Mocz et al. (2019)]Mocz Mocz P., et al., 2019, PhRvL, 123, 141301. doi:10.1103/PhysRevLett.123.141301
[Mukhopadhyay & Schaffner-Bielich(2016)]Mukhopadhyay Mukhopadhyay P., Schaffner-Bielich J., 2016, PhRvD, 93, 083009. doi:10.1103/PhysRevD.93.083009
[Nacir & Urban(2018)]Nacir Nacir D. L., Urban F. R., 2018, JCAP, 10, 044. doi:10.1088/1475-7516/2018/10/044
[Nadler et al. (2019)]Nadler1Nadler E. O., Gluscevic V., Boddy K. K., Wechsler R. H., 2019, ApJL, 878, L32. doi:10.3847/2041-8213/ab1eb2
[Nadler et al. (2021)]Nadler2Nadler E. O., et al., 2021, PhRvL, 126, 091101. doi:10.1103/PhysRevLett.126.091101
[Niemeyer (2020)]NiemeyerNiemeyer J. C., 2020, PrPNP, 113, 103787. doi:10.1016/j.ppnp.2020.103787
[Nomura et al. (2020)]NomuraNomura K., Ito A., Soda J., 2020, EPJC, 80, 419. doi:10.1140/epjc/s10052-020-7990-y
[Nori et al. (2019)]NoriNori M., Murgia R., Irsic V., Baldi M., Viel M., 2019, MNRAS, 482, 3227. doi:10.1093/mnras/sty2888
[Ozel & Psaltis(2009)]OzelOzel F., Psaltis D., 2009, PhRvD, 80, 103003. doi:10.1103/PhysRevD.80.103003
[Panotopoulos & Lopes(2017a)]Panotopoulos1 Panotopoulos G., Lopes I., 2017, PhRvD, 96, 023002. doi:10.1103/PhysRevD.96.023002
[Panotopoulos & Lopes(2017b)]Panotopoulos2 Panotopoulos G., Lopes I., 2017, PhRvD, 96, 083013. doi:10.1103/PhysRevD.96.083013
[Parkinson et al. (2012)]Parkinson Parkinson D., et al., 2012, PhRvD, 86, 103518. doi:10.1103/PhysRevD.86.103518
[Pereira et al. (2021)]Pereira1 Pereira J. P., et al., 2021, ApJ, 910, 145. doi:10.3847/1538-4357/abe633
[Pereira et al. (2020)]Pereira2 Pereira J. P., Bejger M., Andersson N., Gittins F., 2020, ApJ, 895, 28. doi:10.3847/1538-4357/ab8aca
[Perez-Garcia et al. (2010)]PerezGarcia Perez-Garcia M. A., Silk J., Stone J. R., 2010, PhRvL, 105, 141101. doi:10.1103/PhysRevLett.105.141101
[Porayko & Postnov(2014)]Porayko4 Porayko N. K., Postnov K. A., 2014, PhRvD, 90, 062008. doi:10.1103/PhysRevD.90.062008
[Porayko et al. (2018)]Porayko8 Porayko N. K., et al., 2018, PhRvD, 98, 102002. doi:10.1103/PhysRevD.98.102002
[Postnikov et al. (2010)]Postnikov Postnikov S., Prakash M., Lattimer J. M., 2010, PhRvD, 82, 024016. doi:10.1103/PhysRevD.82.024016
[Quddus et al. (2020)]Quddus Quddus A., et al., 2020, J. Phys. G: Nucl. Part. Phys., 47, 095202. doi:10.1088/1361-6471/ab9d36
[Rafiei Karkevandi et al. (2022)]Rafiei Rafiei Karkevandi D., Shakeri S., Sagun V., Ivanytskyi O., 2022, PhRvD, 105, 023001. doi:10.1103/PhysRevD.105.023001
[Riley et al. (2021)]Riley Riley, T. E., Watts, A. L., Bogdanov, S., et al., 2021, ApJL, 918, L27. doi:10.3847/2041-8213/ac0a81
[Romani et al. (2022)]Romani Romani, R. W., Kandel, D., Filippenko, A. V., Brink, T. G., Zheng, W., 2022, ApJL, 934, L17. doi:10.3847/2041-8213/ac8007
[Rogers & Peiris(2021)]RogersRogers K. K., Peiris H. V., 2021, PhRvL, 126, 071302. doi:10.1103/PhysRevLett.126.071302
[Safdi et al. (2019)]Safdi Safdi B. R., Sun Z., Chen A. Y., 2019, PhRvD, 99, 123021. doi:10.1103/PhysRevD.99.123021
[Sandin & Ciarcelluti(2009)]Sandin Sandin F., Ciarcelluti P., 2009, Astropart. Phys., 32, 278. doi:10.1016/j.astropartphys.2009.09.005
[Schive et al. (2016)]SchiveSchive H.-Y., Chiueh T., Broadhurst T., Huang K.-W., 2016, ApJ, 818, 89. doi:10.3847/0004-637X/818/1/89
[Svrcek & Witten(2006)]SvrcekSvrcek P., Witten E., 2006, JHEP, 06, 051. doi:10.1088/1126-6708/2006/06/051
[Widrow & Kaiser(1993)]WidrowWidrow L.M., Kaiser N., 1993, ApJL, 416, L71. doi:10.1086/187073
[Yang et al. (2021)]Yang Yang S.-H., Pi C.-M., Zheng X.-P., 2021, PhRvD, 104, 083016. doi:10.1103/PhysRevD.104.083016
[Zheng & Chen(2016)]Zheng Zheng H., Chen L.-W., 2016, ApJ, 831, 127. doi:10.3847/0004-637X/831/2/127
[Zhou et al. (2018)]Zhou Zhou E.-P., Zhou X., Li A., 2018, PhRvD, 97, 083015. doi:10.1103/PhysRevD.97.083015
|
http://arxiv.org/abs/2306.05904v1
|
20230609135837
|
Lattice calculation of the $D_{s}$ meson radiative form factors over the full kinematical range
|
[
"R. Frezzotti",
"G. Gagliardi",
"V. Lubicz",
"G. Martinelli",
"F. Mazzetti",
"C. T. Sachrajda",
"F. Sanfilippo",
"S. Simula",
"N. Tantalo"
] |
hep-lat
|
[
"hep-lat",
"hep-ph"
] |
|
http://arxiv.org/abs/2306.04009v1
|
20230606204518
|
Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks
|
[
"Kanishka Misra",
"Cicero Nogueira dos Santos",
"Siamak Shakeri"
] |
cs.CL
|
[
"cs.CL",
"cs.AI"
] |
Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose together two or more facts to perform multi-hop reasoning in question-answering tasks.
In this work, we propose techniques that improve upon this limitation by relying on random walks over structured knowledge graphs.
Specifically, we use soft prompts to guide LMs to chain together their encoded knowledge by learning to map multi-hop questions to random walk paths that lead to the answer.
Applying our methods on two T5 LMs shows substantial improvements over standard tuning approaches in answering questions that require 2-hop reasoning.
Work done during an internship at Google Research.
§ INTRODUCTION
Performing multi-hop reasoning to answer questions such as Where was David Beckham’s daughter born? requires two fundamental capacities:
C1: possessing pre-requisite knowledge (David Beckham’s daughter is Harper Beckham, Harper Beckham was born in Los Angeles), and C2: ability to compose internalized knowledge.
Contemporary pre-trained language models (LMs) such as BERT <cit.> and T5 <cit.> have been shown to be adept at encoding factual knowledge <cit.>, an ability that can be further boosted by explicitly integrating them with knowledge about entities and relations <cit.>.
At the same time, these LMs often struggle to compose the knowledge they encode <cit.>, and therefore do not satisfy C2.
To overcome this limitation, previous works have proposed methods that decompose multi-hop questions into single hop sub-questions that models can more easily answer <cit.>. However, such methods require training entirely separate models, or make use of human-annotations <cit.>. Furthermore, they focus on tasks where models explicitly receive additional text containing relevant facts, which makes it unclear if they can truly compose the knowledge that they have internalized.
In this work, we aim to improve the standalone, self-contained ability of LMs to perform multi-hop reasoning. We posit that random walks—paths between entity nodes sampled from structured knowledge graphs—can provide a useful training signal for LMs to compose entity knowledge. To test this, we perform a case-study on two T5 models <cit.>.
Specifically, we first integrate within the LMs the single-hop knowledge that is required to answer multi-hop questions (effectively guaranteeing C1 is met).
We show that this alone is not enough to demonstrate substantial improvements on questions requiring 2-hop reasoning.
We then adapt the knowledge integrated T5 models by training soft prompts <cit.> on random walks over the structured knowledge that they have encoded, and devise two methods that trigger this ability in the LMs given a multi-hop question as input.
The first method, (), uses two specialized soft prompts: one to parse entities and relations from the question, and another to generate a path to the answer, resembling the outputs of a random walk. The second method, , trains a single prompt on a mixture that combines the QA task with the random walk training, so as to allow the model to implicitly learn 's task. Both these soft prompt methods use the same underlying LM (kept frozen), and guide it to compose its internalized entity knowledge.
Our experiments suggest that integrating random walks in the models using our proposed techniques can substantially improve their ability to answer entity-centric 2-hop questions <cit.> at larger model sizes.
Briefly, on our methods show improvements over previously proposed prompt-tuning approaches <cit.> as well as full model fine-tuning, with and demonstrating gains of ∼16 and ∼9.6 points in exact match scores over fine-tuning the entire model, respectively.
In the case of , our methods demonstrate improvements over standard prompt-tuning methods, but fall short of the performance achieved using fine-tuning, suggesting that larger models—with up to 11B parameters—are more conducive to leveraging the training signal provided by random walks via soft prompts.
§ METHOD
§.§ Models
We apply our methods on two T5.1.1 models
<cit.>: Large (770M parameters) and XXL (11B parameters), using checkpoints that have been adapted with the Prefix LM objective for 100K steps <cit.>.
§.§ Knowledge Integration
We first ensure that the LMs we use have the prerequisite single-hop knowledge (C1) required to answer multi-hop questions. This is necessary, as preliminary experiments suggested that the T5 models we used did not satisfy this primary criterion for multi-hop reasoning (see <Ref>).
Specifically, we follow <cit.> and fine-tune our LMs on knowledge graph (KG) triples containing the relevant knowledge that is to be composed to answer questions.
That is, given a triple (e_1, r, e_2), where e_1 and e_2 are entities, and r is the relation, we fine-tune our T5 models to take as input the string “”, and produce “” as output, using the Prefix LM objective <cit.>.
To avoid catastrophic forgetting <cit.> and retain the LMs' language understanding abilities, we mix our knowledge integration training instances with that of the models' pre-training corpus—i.e., C4 <cit.>—in a 50:50 mixture.
We denote the resulting models as KNowledge-Integrated T5 ().
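As a rough illustration of this knowledge-integration format, the sketch below turns KG triples into text-to-text training pairs; the exact verbalization used in the paper is not reproduced here, so the strings below are only an assumed format.

def triple_to_example(e1, r, e2):
    source = f"{e1} {r}"   # assumed input verbalization, e.g. "David Beckham child"
    target = e2            # e.g. "Harper Beckham"
    return source, target

triples = [("David Beckham", "child", "Harper Beckham"),
           ("Harper Beckham", "place of birth", "Los Angeles")]
examples = [triple_to_example(*t) for t in triples]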
§.§ Composing knowledge using soft prompts
Random Walk training
Our method is centered around guiding the LMs to chain together their encoded knowledge by training them on random walks over a relevant KG. We formulate random walks here as a sequence of entity-relation-entity triples that are connected linearly via shared entities. Figure <ref> shows an example with a random walk of length 3 ().
To perform our random walk training, we rely on soft prompts <cit.>, a sequence of learnable token-vectors that are prepended to the input of the LM.
Importantly, we only update these vectors during training, thereby keeping intact the utility and encoded knowledge of the main LM, while also being parameter efficient.
Our training procedure is as follows: we first perform uniform random walks of length n over the KG used in <ref>, resulting in a set whose elements are sequences of entities interleaved by the relations that connect them: (e_1, r_1, e_2, …, r_n-1, e_n).
During training, receives as input an incomplete path, with only the initial entity and the intermediate relations (e_1, r_1, r_2, …, r_n-1), and is tasked to generate the full path: (e_1, r_1, e_2, r_2 …, r_n-1, e_n).
We denote the trained prompts that trigger this ability in as .
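The following sketch illustrates how such training pairs could be built: it samples fixed-length walks over a toy KG and emits the (incomplete path, full path) pair; the separator and the textual path format are assumptions for illustration, not the paper's exact serialization.

import random
from collections import defaultdict

def sample_walk(kg, start, n=3):
    # kg maps an entity to a list of (relation, entity) outgoing edges.
    path, e = [start], start
    for _ in range(n - 1):
        if not kg[e]:
            return None
        rel, e = random.choice(kg[e])
        path += [rel, e]
    return path

def to_training_pair(path):
    entities, relations = path[0::2], path[1::2]
    source = " ; ".join([entities[0]] + relations)   # (e1, r1, ..., r_{n-1})
    target = " ; ".join(path)                        # full walk including entities
    return source, target

kg = defaultdict(list)
kg["David Beckham"].append(("child", "Harper Beckham"))
kg["Harper Beckham"].append(("place of birth", "Los Angeles"))
print(to_training_pair(sample_walk(kg, "David Beckham", n=3)))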
§.§ Performing QA using
We propose two new techniques that utilize to map natural language questions to appropriate paths in the knowledge graph:
() We take advantage of the modularity of soft prompts, and distribute the responsibility of parsing the relational structure from questions and random walk querying using separate specialized prompts, keeping the underlying model the same. We train “parsing” prompts that parse questions to incomplete random walk queries, resembling the inputs to the described above.
For instance, the question “Where was David Beckham's daughter born?” is parsed to “”.
We then swap the parsing prompts with the hopping prompts, using the outputs from the parsing step as inputs and then run inference to get a path from the entity in the question to the answer: “”, as shown in Figure <ref>.
We posit that parsing of the appropriate relational structure from the question should be easy and self-contained, since it only involves using the surface form of the question as opposed to invoking any external knowledge, which is delegated to .
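A compact sketch of this two-stage inference is given below, using a hypothetical generate(soft_prompt, text) wrapper around the frozen LM; all names and the path separator are illustrative only.

def answer_two_hop(question, generate, parse_prompt, hop_prompt):
    # Stage 1: parse the question into an incomplete random-walk query,
    # e.g. "David Beckham ; child ; place of birth".
    query = generate(parse_prompt, question)
    # Stage 2: swap in the hopping prompt and expand the query into a full
    # path; the final entity on the path is returned as the answer.
    path = generate(hop_prompt, query)
    return path.split(";")[-1].strip()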
We propose to jointly train a single set of prompts on a mixture of the QA task and the task (50:50), thereby halving the number of forward passes from the previous method.
Our primary motivation here is to provide diverse training signals that get models to map questions to the structured knowledge that explicitly connects the entity in the question to the answer entity.
Like , directly produces random walk paths as output, as shown in Figure <ref>.
§ EXPERIMENTAL SETUP
§.§ Data
Multi-hop QA Dataset
While traditional multi-hop QA datasets provide additional paragraphs <cit.> for models to reason over, we operate under the more challenging closed-book QA setting <cit.>, where such contexts are omitted. Specifically, we use the “compositional” and “inference” subsets of the 2WikiMultiHopQA dataset <cit.>, which contains 2-hop English questions focusing on entities and 29 relations, sourced from WikiData <cit.>. We select this dataset as it uniquely provides the precise structured knowledge that is required to answer each question, in the form of entity-relation-entity triples.[Works such as <cit.> propose unsupervised mappings of questions in more popular datasets such as NaturalQuestions <cit.> to paths in knowledge graphs, but our initial investigations of these paths found them to be extensively noisy.]
Since the test splits for these specific subsets are private, we use the validation split as the test set, and use 10% of the training set for validation. In total we have train, validation, and test questions.
1-hop QA Dataset To characterize if the models we test have the pre-requisite 1-hop knowledge, we additionally construct 1-hop questions from 2WikiMultiHopQA by applying manually defined templates over the entity triples provided for each 2-hop question (see Appendix <ref>).
For instance, the triple is converted to "Who is the director of Inception?". We end up with 83,643 train, 5,022 validation, and 6,440 test QA instances.
We term this constructed dataset as .
Knowledge Integration Data
We build the KG for our methods using the set of ground-truth triples provided in the 2WikiMultiHopQA dataset ( entities and 29 relations, amounting to 95K triples).
Random Walk Training Corpus For each entity in the above KG, we sample up to random walks of length 3, each corresponding to an instance of 2 hops between entities.
We repeat this step 5 times with different seeds, discard duplicate paths, and end up with a total of unique paths as a result.
Importantly, we hold out the paths that include the triples in the QA task's validation and test sets in order to avoid leakage, ending up with / / paths as our train/validation/test sets, respectively.
This way, our experiments test for the kinds of generalization where models should successfully place entities in novel structures (complete paths in the KG), whose primitive knowledge (1-hop triples) is encoded in the model, but the composition is not. This can be viewed as a partial version of the lexical and structural generalization tests in stricter, more prominent compositional generalization benchmarks <cit.>.
§.§ Baselines and Comparisons
We compare our proposed approaches to standard fine-tuning and prompt-tuning <cit.>, which we use to directly produce the answer, without any intermediate entities or relations.
Additionally, we also adapt <cit.>, a prompt-tuning method where we initialize prompts with those that were pre-trained on related tasks.
In our adaptation, we initialize prompts using the values of the , and -transfer them to guide models to generate the full output, similar to and .
Since we operate in the closed book QA setting <cit.>, our methods cannot be directly compared to previous approaches on the dataset we considered, all of which receive paragraph contexts during training.
Only two other methods have considered the present dataset in its closed-book format <cit.>. However, both of them use smaller subsets of the validation set as their testing set, and test on different pre-trained models, making it impractical to directly compare our results to their reported values.
§ EXPERIMENTS AND FINDINGS[Training details for all experiments can be found in Appendix <ref>.]
We report and summarize our results as follows:
Integration of 1-hop knowledge only results in marginal improvements on 2-hop questions
We begin by first establishing the extent to which T5 models encode and compose 1-hop knowledge required to answer 2-hop questions, and whether additional knowledge integration (via ) can improve both these abilities.
From Tables <ref> and <ref>, we observe that the models struggle to answer both 1-hop as well as 2-hop questions, suggesting that they critically lack the precise 1-hop entity knowledge required to demonstrate success on the 2-hop questions.
The LMs overcome this limitation by showing substantial gains on over their T5 counterparts: they show improvements of ∼16.5 and ∼34.8 points in exact match (EM) scores at large and xxl sizes in the fine-tuning setting, respectively (<Ref>).
However, this is insufficient to show improvements on 2-hop questions—where maximum gain over is only 2.2 points, achieved by prompt-tuning (see <Ref>).
This suggests that even after being endowed with the prerequisite 1-hop knowledge, both LMs are unable to successfully answer more complicated questions, echoing the results of <cit.>.
Note that both models almost perfectly memorize the KG in our knowledge-integration experiments (achieving ∼96% EM in under 10K training steps; see <Ref>), so their limitations on 2-hop questions are likely not due to lack of entity knowledge and perhaps instead due to the inability to compose or chain together memorized facts.
Generalizing to novel random walks may require the prompt-tuning of larger LMs
We now turn to analyzing the performance of models in generating random walks, a critical component of all our proposed QA methods. How well do prompt-tuned LMs generalize to KG paths that are composed of facts they have memorized but that were unseen during training?
Recall that this step involved leveraging soft prompts (called ) to guide the LMs to chain together their memorized entity knowledge and generate paths akin to performing a random walk. That is, it is the that must provide the necessary condition in the encoder to facilitate successful output-generation, and not the entire LM. Also recall that we explicitly held out the paths involving triples in the validation and test sets of the main QA task to prevent complete memorization (due to leakage into the training set).
This way we are able to measure the extent to which models learned to construct KG paths in a generalized manner.
To this end, we compute the EM and F1 scores over the full generated spans of entities, interleaved by the relations that connect them.
Note that EM is substantially stricter than F1, since F1 rewards partial token overlap between the target and the generated output.
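For reference, the sketch below spells out the two scores as described here (strict exact match over the full serialized path versus token-overlap F1); the actual evaluation relies on the standard metric implementations in the T5 code-base, so details such as string normalization may differ.

from collections import Counter

def exact_match(prediction: str, target: str) -> float:
    """1.0 only if the generated path string matches the target exactly."""
    return float(prediction.strip() == target.strip())

def token_f1(prediction: str, target: str) -> float:
    """Token-level F1: rewards partial overlap between prediction and target tokens."""
    pred_tokens, gold_tokens = prediction.split(), target.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)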
<Ref> shows these scores for and on the validation set of our random walk task, tuned using the .
We see from <Ref> that there is a substantial gap between (∼23 EM) and (∼58 EM), suggesting that the large model finds it difficult to generalize to random walk paths involving entities and relations outside of the training set.
We conclude from this observation that the gap between and in generalizing to held-out KG paths is likely to carry over to 2-hop QA. That is, we expect our prompting methods with as the base model to struggle on our test set questions, since their ground-truth paths were not encountered during training, and we expect the opposite to be the case for .
Additionally, the EM score achieved by the XXL-sized model is well below perfect values, highlighting important avenues for future work to improve upon these gaps.
Training on random walks substantially improves 2-hop capabilities, but mostly in larger LMs
We used three methods that leveraged the training signal provided by random walks to compose the 1-hop knowledge as memorized by : (ours), (ours), and <cit.>.
Due to lack of space, examples of the outputs from each of these methods, along with analysis of intermediate steps (e.g., parsing) are shown in Appendix <ref>.
We observe from <Ref> that for the xxl-sized model, all three methods lead to substantial improvements in performance on 2-hop questions over standard tuning approaches on T5 and .
Notably for , the random walk-integrated methods improve even over fine-tuning, which is often expected to be better at transfer learning than parameter-efficient methods.
Among the three, our method shows the best improvements (∼16 point gain over fine-tuning ) at answering 2-hop questions.
This showcases the promise of learning separate specialized prompts that operate over the same underlying model to first parse natural language into incomplete structured knowledge, and then expand it to answer the question, while also eliciting intermediate steps <cit.>, similar to recent in-context prompting methods <cit.>.
While the method (a ∼9.6-point gain over fine-tuning) falls short of , it still improves over (a ∼6.6-point gain over fine-tuning), suggesting that, at larger model sizes, joint training of related tasks may improve over sequential training (as employed by ) for multi-hop reasoning.
In the case of and , while the proposed methods show improvements over standard prompt-tuning, with demonstrating a gain of 3.33 points over prompt-tuning , they fall short of the performance achieved by fine-tuning.
However, their non-trivial improvements over regular prompt-tuning suggest the general benefits of the training signal provided by random walks, which end up being most impressive for models that are an order of magnitude larger. Overall, these results corroborate our hypothesis from the random walk tests about 's potential inability to generate partially novel random walks given either natural-language multi-hop questions () or their parses ().
§ CONCLUSION
We show that composition of memorized world knowledge can be triggered in LMs with up to 11B parameters () to a desirable extent by leveraging training signal from random walks over structured knowledge using approaches based on prompt-tuning <cit.>. Doing so leads to substantial improvements in the LMs' ability to answer 2-hop questions, even beyond standard, full model fine-tuning.
§ LIMITATIONS
Despite showing non-trivial improvements in the multi-hop capabilities of T5 models, our work has multiple limitations.
Restricted to 2 hops First, we chose 2WikiMultiHopQA <cit.> as our primary dataset, since it uniquely maps each question to a chain of triples that contain the precise, noiseless single-hop knowledge required to answer the question. However, this comes at the cost of our analyses being restricted to 2 hops (though see arguments by <cit.>, who suggest that 3- and 4-hop questions are too convoluted to understand even for native speakers).
Nonetheless, our random walk training method is general by definition and can be extended to multiple hops, though its effectiveness on QA tasks requiring more than 2 hops of reasoning remains to be measured.
Knowledge Graph size
Our focus in this paper was to allow models to chain together their internalized knowledge in order to answer complex 2-hop questions. However, this critically requires them to possess the world knowledge needed to answer the questions, for which we had them memorize the KG constructed from the structured triples provided in the dataset.
This trade-off between focusing on knowledge composition vs. fully encoding world knowledge restricted our KG to be small in size (only entities and 29 relations), which could be impractical in most real-world applications.
In future work, we will experiment with larger sized KGs <cit.>, by adding a substantially larger amount of additional triples to the existing KG, and measure their impact on multi-hop reasoning.
Lack of diverse QA tasks Finally, we were unable to consider popular datasets with CBQA versions such as TriviaQA <cit.>, NaturalQuestions <cit.>, etc., due to their lack of links from questions to structured knowledge.
Future work can apply entity and relational linking techniques <cit.> in order to augment such QA datasets with (possibly) noisy links to structured knowledge, which will allow us to paint a more holistic picture of our methods. Additionally, this would also overcome the above limitation (of KG size), as it would substantially increase the amounts of entities and relations to be encoded within models.
Implications for Larger Models
Although we show clear improvements in triggering 2-hop reasoning in the largest T5 LM (), with 11B parameters, contemporary work has shown that multi-step reasoning capacities naturally emerge in LMs that are two or three orders of magnitude larger <cit.>. However, these LMs benefit from examples in-context (especially since tuning them is non-trivial and expensive), and therefore it is unclear whether our methods can improve such models’ capacities even further. We have not tested such LMs in our work, due to resource limitations.
§ ACKNOWLEDGMENTS
We thank Noah Constant, Chung-Ching Chang, Brian Lester, and Ben Withbroe from Google Research for their helpful comments and advice. We would also like to thank our three anonymous reviewers for their useful feedback.
§ TRAINING AND EXPERIMENT DETAILS
Hyperparameters We use the default hyperparameters and optimizers used to train the T5 1.1 checkpoints <cit.>, as well as those used in the Prompt-Tuning and papers <cit.>. We set the prompt length to 100 for all prompt-tuning experiments and initialized the prompts with the top 100 tokens in the T5 models' vocabulary, following <cit.>. We fine-tune and prompt-tune our models for a maximum of 100K and 200K steps, respectively. We stop training on convergence and use the checkpoint with the best validation performance for evaluation.
Tables <ref>, <ref>, and <ref> show hyperparameter values for each type of experiment. All results are from single runs.
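For concreteness, the snippet below sketches one common way to realize this prompt initialization, by copying the embeddings of frequent vocabulary tokens into the soft-prompt matrix; the array shapes and token ids are hypothetical, and our actual experiments use the open-sourced prompt-tuning code-base.

import numpy as np

def init_soft_prompt(vocab_embeddings, common_token_ids, prompt_length=100):
    """Build a (prompt_length, embedding_dim) soft-prompt matrix from the embeddings
    of common vocabulary tokens (a stand-in for the top-100-token heuristic)."""
    ids = list(common_token_ids)[:prompt_length]
    return np.asarray(vocab_embeddings)[ids].copy()

# Toy usage with a hypothetical embedding table (T5-style vocab size, made-up token ids).
table = np.random.randn(32128, 768).astype(np.float32)
prompt = init_soft_prompt(table, common_token_ids=range(100), prompt_length=100)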
Hardware and Compute Prompt-tuning and fine-tuning experiments for large models were run on 16 TPUv3 chips, while those for xxl models were run on 64 TPUv3 chips. One exception is knowledge integration (which also involved continual pre-training on C4, larger batch size, and longer sequences), for which we used 256 TPUv3 chips for xxl, and 64 TPUv3 chips for large.
Code For metric calculation and checkpoints, we use the T5 and T5x code-base, open-sourced on github.[<https://github.com/google-research/text-to-text-transfer-transformer/tree/main/t5>][<https://github.com/google-research/t5x>] For prompt-tuning experiments, we adapt the original code-base <cit.>, which is also open-sourced.[<https://github.com/google-research/prompt-tuning>]
Data The 2WikiMultiHopQA dataset <cit.> has been released with Apache 2.0 license.[<https://github.com/Alab-NII/2wikimultihop>]
§ ADDITIONAL ANALYSES
§.§ Knowledge Integration
Integrating single-hop entity knowledge is an important part of our methods. How well are the models able to actually encode this knowledge? <Ref> shows the dynamics of memorization across both models, measured as the exact match scores in generating e_2 given e_1 and r. From <Ref>, we see that the xxl and large models can memorize 96% of the KG within 5,000 and 10,000 steps, respectively. With a batch size of 512, this translates to traversing the dataset 27 and 54 times, respectively, for xxl and large. An important caveat here is that the models are also being tuned on C4 <cit.> in order to retain their general language-understanding capabilities.
That is, they could be expected to memorize the KG faster in the absence of training on the C4 corpus, but this would constitute a trade-off, leading to overfitted models with a substantial loss of their original utility on other NLP tasks.
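A minimal sketch of how such knowledge-integration examples can be serialized for a text-to-text model is given below; the separator and field order are assumptions for illustration, not the exact format of our training mixture.

def triple_to_example(e1, r, e2):
    """Serialize one KG triple as a seq2seq example: given (e1, r), the model must generate e2."""
    return {"inputs": f"{e1} ; {r}", "targets": e2}

def kg_to_examples(triples):
    """Convert every (head, relation, tail) triple in the KG into a training example."""
    return [triple_to_example(*t) for t in triples]

# Hypothetical triple; the real ones come from the 2WikiMultiHopQA KG.
print(triple_to_example("Inception", "director", "Christopher Nolan"))
# {'inputs': 'Inception ; director', 'targets': 'Christopher Nolan'}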
§.§ Parsing Step in
The parsing step is essential for our approach to succeed. Here we perform additional analyses on how well models can successfully extract the relational structure that is required to answer the 2-hop questions in 2WikiMultiHopQA.
Recall that the objective of the parsing step is to produce as output a sequence indicating an incomplete random walk, containing only the initial entity (seed node), followed by the relations (edges) that lead to the final entity. For instance, if the question is “Where was the director of Inception (film) born?” the output of the parsing step should be:
Here, is the entity, e_1, while and are the relations, r_1 and r_2, respectively.
We analyze the extent to which models successfully extract these three elements for the test set questions by measuring three quantities: (1) Relation EM, the exact match score computed between the ground-truth span of relation pairs (here “”) and the span extracted from the model outputs; (2) Entity EM, which is analogous to Relation EM but considers only the initial entity; and (3) Full EM, the exact match score between the full output and the target. <Ref> shows these values from prompt-tuning the two models.
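The sketch below illustrates how these three scores can be derived from a parse serialized as "entity ; relation_1 ; relation_2"; the separator and field order are assumptions made for illustration only.

def parse_fields(output):
    """Split a serialized parse "e1 ; r1 ; r2" into the seed entity and the relation pair."""
    fields = [f.strip() for f in output.split(";")]
    return fields[0], tuple(fields[1:])

def parsing_scores(prediction, target):
    """Exact-match scores for the seed entity, the relation pair, and the full parse."""
    pred_e, pred_r = parse_fields(prediction)
    gold_e, gold_r = parse_fields(target)
    return {
        "entity_em": float(pred_e == gold_e),
        "relation_em": float(pred_r == gold_r),
        "full_em": float(prediction.strip() == target.strip()),
    }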
From <Ref>, we see that prompt-tuning both models allows them to achieve almost perfect EM values in extracting the relation pairs from the questions. However, we notice that models are not able to maintain this performance in copying over the entity, which lowers their overall EM scores on this task.
We performed a manual analysis of 50 randomly sampled outputs—with incorrect entity predictions—and found most errors to be due to omission of tokens involving middle names, or additional information about the entity such as the “” in the above example (other examples include the entity's title, such as “”, or , , etc.)
§.§ Example Outputs
Tables <ref>, <ref>, <ref>, and <ref> show examples of outputs from the different approaches used in this work (examples shown for the xxl-sized models). Below we discuss each of these cases in detail:
* In <Ref>, all approaches that leverage the training signal from random walks succeed, while tuning methods that do not fail. Additionally, all three random walk-integrated methods agree on their parsed relational structure as well as the intermediate entity.
* In <Ref>, only the two proposed methods ( and ) succeed, while all other methods fail. Note that correctly predicts the intermediate entity (), but is unable to predict the final entity ().
* <Ref> shows an example where all approaches fail. However, this question is ambiguous, as “aunt” can mean either the father's sister or the mother's sister – our random walk-integrated methods correctly predict these relational structures but are unable to resolve the intermediate and final entities.
* <Ref> shows an example where all approaches are scored as incorrect but are in fact correct. Here we argue that the ground-truth answer, “United Kingdom”, is in an incorrect form, since the question asks for the nationality of a person. Our random walk-integrated methods successfully predict the relational structure and intermediate entities. Moreover, all approaches predict or , which are more acceptable forms of nationality for persons from the United Kingdom. This problem could be mitigated by adding aliases for the entities in the ground-truth answer space, similar to TriviaQA <cit.>.
§ TEMPLATES FOR CONSTRUCTING
Here we describe our process of constructing : a collection of English question-answer pairs that only require single-hop knowledge using the 2WikiMultiHopQA <cit.> dataset.
The 2WikiMultiHopQA dataset provides unique sequences of single-hop triples that collectively answer each 2-hop question. These amount to a total of 95,103 unique triples spanning unique entities and 29 relations. We manually define a diverse set of templates for each relation, as shown in Table <ref>. For many relations, we provide multiple paraphrases of the question template; e.g., the relation translates to “Who is the director of X?” or “Who directed the film X?” In such cases, we randomly sample a template from the entire set, weighting each equally. In total, we end up with 83,643 train, 5,022 validation, and 6,440 test QA pairs.
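A minimal sketch of this template-sampling procedure is shown below, assuming templates are stored per relation with an "X" placeholder for the head entity; the placeholder convention and function name are illustrative.

import random

def make_single_hop_qa(triples, templates, seed=0):
    """Build 1-hop QA pairs by filling a randomly chosen template for each triple's relation.

    triples:   iterable of (head_entity, relation, tail_entity)
    templates: dict mapping each relation to a list of question templates containing "X"
    """
    rng = random.Random(seed)
    qa_pairs = []
    for head, relation, tail in triples:
        template = rng.choice(templates[relation])   # each paraphrase is weighted equally
        qa_pairs.append({"question": template.replace("X", head), "answer": tail})
    return qa_pairs

# Toy usage with two paraphrases of the "director" relation.
templates = {"director": ["Who is the director of X?", "Who directed the film X?"]}
print(make_single_hop_qa([("Inception", "director", "Christopher Nolan")], templates))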